Planet Russell


Krebs on Security: Porn Spam Botnet Has Evil Twitter Twin

Last month KrebsOnSecurity published research into a large distributed network of apparently compromised systems being used to relay huge blasts of junk email promoting “online dating” programs — affiliate-driven schemes traditionally overrun with automated accounts posing as women. New research suggests that another bot-promoting botnet of more than 80,000 automated female Twitter accounts has been pimping the same dating scheme and ginning up millions of clicks from Twitter users in the process.

One of the 80,000+ Twitter bots ZeroFOX found that were enticing male Twitter users into viewing their profile pages.

Not long after I published Inside a Porn-Pimping Spam Botnet, I heard from researchers at ZeroFOX, a security firm that helps companies block attacks coming through social media.

Zack Allen, manager of threat operations at ZeroFOX, said he had a look at some of the spammy, adult-themed domains being promoted by the botnet in my research and found they were all being promoted through a botnet of bogus Twitter accounts.

Those phony Twitter accounts all featured images of attractive or scantily-clad women, and all were being promoted via suggestive tweets, Allen said.

Anyone who replied was ultimately referred to subscription-based online dating sites run by Deniro Marketing, a company based in California. This was the same company that was found to be the beneficiary of spam from the porn botnet I’d written about in June. Deniro did not respond to requests for comment.

“We’ve been tracking this thing since February 2017, and we concluded that the social botnet controllers are probably not part of Deniro Marketing, but most likely are affiliates,” Allen said.

ZeroFOX found more than 86,262 Twitter accounts were responsible for more than 8.6 million posts on Twitter promoting porn-based sites, many of them promoting domains in a swath of Internet address space owned by Deniro Marketing (ASN19984).

Allen said 97.4% of the bot display names followed the pattern “Firstname Surname”, with the first letter of each name capitalized and the two names separated by a single whitespace character, and that the names corresponded to common female names.

An analysis of the Twitter bot names used in the scheme. Graphic: ZeroFOX.

The accounts advertise adult content by routinely injecting links from their Twitter profiles alongside a popular hashtag, or by @-mentioning a popular user or influencer on Twitter. Those profile links are shortened with Google’s goo.gl link shortening service, which then redirects to a free hosting domain in the dot-tk (.tk) domain space (.tk is the country code for Tokelau — a group of atolls in the South Pacific).

From there the system is smart enough to redirect users back to Twitter if they appear to be part of any automated attempt to crawl the links (e.g. by using site download and mirroring tools like cURL), the researchers found. They said this was likely a precaution on the part of the spammers to avoid detection by automated scanners looking for bot activity on Twitter. Requests from visitors who look like real users responding to tweets are redirected to the porn spam sites.

Because the links promoted by those spammy Twitter accounts all abused short link services from Twitter and Google, the researchers were able to see that this entire botnet has generated more than 30 million unique clicks from February to June 2017.

[SIDE NOTE: Anyone seeking more context about what’s being promoted here can check out the Web site datinggold[dot]com [Caution: Not-Safe-for-Work], which suggests it’s an affiliate program that rewards marketers who drive new signups to its array of “online dating” offerings — mostly “cheating,” “hookup” and “affair-themed” sites like “AdsforSex,” “Affair Hookups,” and “LocalCheaters.” Note that this program is only interested in male signups.]

The datinggold affiliate site which pays spammers to bring male signups to “online dating” services.

Allen said the Twitter botnet relies heavily on accounts that have been “aged” for a period of time as another method to evade anti-spam techniques used by Twitter, which may treat tweets from new accounts with more prejudice than those from established accounts. ZeroFOX said about 20 percent of the Twitter accounts identified as part of the botnet were aged at least one year before sending their first tweet, and that the botnet overall demonstrates that these affiliate programs have remained lucrative by evolving to harness social media.

“The final redirect sites encourage the user to sign up for subscription pornography, webcam sites, or fake dating,” ZeroFOX wrote in a report being issued this week. “These types of sites, although legal, are known to be scams.”

Perhaps the most well-known example of the subscription-based dating/cheating service that turned out to be mostly phony was AshleyMadison. After AshleyMadison’s user databases were plundered and published online, the company admitted that its service used at least 70,000 female chatbots that were programmed to message new users and try to entice them into replying — which required a paid account.

“Many of the sites’ policies claim that the site owners operate most of the profiles,” ZeroFOX charged. “They also have overbearing policies that can use personal information of their customers to send to other affiliate programs, yielding more spam to the victim. Much like the infamous ‘partnerka’ networks from the Russian Business Network, money is paid out via clicks and signups on affiliate programs” [links added].

Although the Twitter botnet discovered by ZeroFOX has since been dismantled, it’s not hard to see how this same approach could be very effective at spreading malware. Keep your wits about you while using or cruising social media sites, and be wary of any posts or profiles that match the descriptions and behavior of the bot accounts described here.

For more on this research, see ZeroFOX’s blog post Inside a Massive Siren Social Network Spam Botnet.


Planet Debian: Joey Hess: Functional Reactive Propellor

I wrote this code, and it made me super happy!

data Variety = Installer | Target
    deriving (Eq)

seed :: UserInput -> Versioned Variety Host
seed userinput ver = host "foo"
    & ver (   (== Installer) --> hostname "installer"
          <|> (== Target)    --> hostname (inputHostname userinput)
          )
    & osDebian Unstable X86_64
    & Apt.stdSourcesList
    & Apt.installed ["linux-image-amd64"]
    & Grub.installed PC
    & XFCE.installed
    & ver (   (== Installer) --> desktopUser defaultUser
          <|> (== Target)    --> desktopUser (inputUsername userinput)
          )
    & ver (   (== Installer) --> autostartInstaller )

This is doing so much in so little space and with so little fuss! It's completely defining two different versions of a Host. One version is the Installer, which in turn installs the Target. The code above provides all the information that propellor needs to convert a copy of the Installer into the Target, which it can do very efficiently. For example, it knows that the default user account should be deleted, and a new user account created based on the user's input of their name.

The germ of this idea comes from a short presentation I made about propellor in Portland several years ago. I was describing RevertableProperty, and Joachim Breitner pointed out that to use it, the user essentially has to keep track of the evolution of their Host in their head. It would be better for propellor to know what past versions looked like, so it can know when a RevertableProperty needs to be reverted.

I didn't see a way to address the objection for years. I was hung up on the problem that propellor's properties can't be compared for equality, because functions can't be compared for equality (generally). And on the problem that it would be hard for propellor to pull old versions of a Host out of git. But then I ran into the situation where I needed these two closely related hosts to be defined in a single file, and it all fell into place.

The basic idea is that propellor first reverts all the revertable properties for other versions. Then it ensures the property for the current version.

Another use for it would be if you wanted to be able to roll back changes to a Host. For example:

foos :: Versioned Int Host
foos ver = host "foo"
    & hostname "foo.example.com"
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver ( (>= 3)   --> Apt.unattendedUpgrades )

foo :: Host
foo = foos `version` (4 :: Int)

Versioned properties can also be defined:

foobar :: Versioned Int (RevertableProperty DebianLike DebianLike)
foobar ver =
    ver (   (== 1) --> (Apt.installed "foo" <!> Apt.removed "foo")
        <|> (== 2) --> (Apt.installed "bar" <!> Apt.removed "bar")
        )

Notice that I've embedded a small DSL for versioning into the propellor config file syntax. While implementing versioning took all day, that part was super easy; Haskell config files win again!

API documentation for this feature

PS: Not really FRP, probably. But time-varying in a FRP-like way.


Development of this was sponsored by Jake Vosloo on Patreon.

Planet Debian: Dirk Eddelbuettel: Rcpp 0.12.12: Rounding some corners

The twelfth update in the 0.12.* series of Rcpp landed on CRAN this morning, following two days of testing at CRAN preceded by five full reverse-depends checks we did (and which are always logged in this GitHub repo). The Debian package has been built and uploaded; Windows and macOS binaries should follow at CRAN as usual. This 0.12.12 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, the 0.12.10 release in March, and the 0.12.11 release in May, making it the sixteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1097 packages (and hence 71 more since the last release in May) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a fairly large number of fairly small and focused pull requests, most of which either correct some corner cases or improve other aspects. JJ tirelessly improved the package registration added in the previous release and following R 3.4.0. Kirill tidied up a number of small issues allowing us to run compilation in even more verbose modes---usually a good thing. Jeroen, Elias Pipping and Yo Gong all contributed as well, and we thank everybody for their contributions.

All changes are listed below in some detail.

Changes in Rcpp version 0.12.12 (2017-07-13)

  • Changes in Rcpp API:

    • The tinyformat.h header now ends in a newline (#701).

    • Fixed rare protection error that occurred when fetching stack traces during the construction of an Rcpp exception (Kirill Müller in #706).

    • Compilation is now also possible on Haiku-OS (Yo Gong in #708 addressing #707).

    • Dimension attributes are explicitly cast to int (Kirill Müller in #715).

    • Unused arguments are no longer declared (Kirill Müller in #716).

    • Visibility of exported functions is now supported via the R macro attribute_visible (Jeroen Ooms in #720).

    • The no_init() constructor accepts R_xlen_t (Kirill Müller in #730).

    • Loop unrolling used R_xlen_t (Kirill Müller in #731).

    • Two unused-variables warnings are now avoided (Jeff Pollock in #732).

  • Changes in Rcpp Attributes:

    • Execute tools::package_native_routine_registration_skeleton within package rather than current working directory (JJ in #697).

    • The R portion no longer uses dir.exists so as to not require R 3.2.0 or newer (Elias Pipping in #698).

    • Fix native registration for exports with name attribute (JJ in #703 addressing #702).

    • Automatically register init functions for Rcpp Modules (JJ in #705 addressing #704).

    • Add Shield around parameters in Rcpp::interfaces (JJ in #713 addressing #712).

    • Replace dot (".") with underscore ("_") in package names when generating native routine registrations (JJ in #722 addressing #721).

    • Generate C++ native routines with underscore ("_") prefix to avoid exporting when standard exportPattern is used in NAMESPACE (JJ in #725 addressing #723).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Junichi Uekawa: revisiting libjson-spirit.

revisiting libjson-spirit. I tried compiling a program that uses libjson-spirit and noticed that it still is broken. New programs compiled against the header do not link with the provided static library. Trying to rebuild it fixes that, but it uses compat version 8, and that needs to be fixed (trivially). hmm... actually the code doesn't build anymore and there are multiple new upstream versions. ... and then I noticed that it was a stale copy already removed from the Debian repository. What's a good C++ JSON implementation these days?

Planet Debian: Vasudev Kamath: debcargo: Replacing subprocess crate with git2 crate

In my previous post I talked about using the subprocess crate to extract the beginning and ending years from a git repository for generating the debian/copyright file. In this post I'm going to talk about how I replaced subprocess with the native git2 crate and achieved the same result in a much cleaner and safer way.

git2 is a native Rust crate which provides access to Git repository internals. git2 does not involve any unsafe invocation as it is built against libgit2-sys, which uses Rust FFI to bind directly to the underlying libgit2 library. Below is the new copyright_fromgit function using the git2 crate.

// Assumed imports for this snippet (not shown in the original post):
//   use std::cmp::Ordering;
//   use chrono::{DateTime, Datelike, NaiveDateTime, Utc};
//   use git2::Repository;
//   use tempdir::TempDir;
// `Result` here is debcargo's own error type alias.
fn copyright_fromgit(repo_url: &str) -> Result<String> {
   let tempdir = TempDir::new_in(".", "debcargo")?;
   let repo = Repository::clone(repo_url, tempdir.path())?;

   let mut revwalker = repo.revwalk()?;
   revwalker.push_head()?;

   // Get the latest and the first commit ids. This is a bit ugly.
   let latest_id = revwalker.next().unwrap()?;
   let first_id = revwalker.last().unwrap()?; // the revwalker ends here; it is consumed by last()

   let first_commit = repo.find_commit(first_id)?;
   let latest_commit = repo.find_commit(latest_id)?;

   let first_year =
       DateTime::<Utc>::from_utc(
               NaiveDateTime::from_timestamp(first_commit.time().seconds(), 0),
               Utc).year();

   let latest_year =
       DateTime::<Utc>::from_utc(
             NaiveDateTime::from_timestamp(latest_commit.time().seconds(), 0),
             Utc).year();

   let notice = match first_year.cmp(&latest_year) {
       Ordering::Equal => format!("{}", first_year),
       _ => format!("{}-{},", first_year, latest_year),
   };

   Ok(notice)
}

So here is what I'm doing:
  1. Use git2::Repository::clone to clone the given URL. We are thus avoiding an exec of the git clone command.
  2. Get a revision walker object. git2::Revwalk implements the Iterator trait and allows walking through the git history. This is what we are using to avoid an exec of the git log command.
  3. revwalker.push_head() is important because we want to tell the revwalker where to start walking the history from. In this case we are asking it to walk the history from the repository HEAD. Without this line the next line will not work. (Learned it the hard way :-) ).
  4. Then we extract a git2::Oid, which is roughly the equivalent of a commit hash and can be used to look up a particular commit. We take the latest commit hash using the Revwalk::next call and the first commit using Revwalk::last; note the order: Revwalk::last consumes the revwalker, so doing it the other way around would make the borrow checker unhappy :-). This replaces the exec of the head -n1 command.
  5. Look up the git2::Commit objects using git2::Repository::find_commit.
  6. Then convert the git2::Time to chrono::DateTime and take out the years.

After this change I found an obvious error which went unnoticed in the previous version: the case where there is no repository key in Cargo.toml. When there was no repo URL, the git clone exec did not error out and our shell commands happily extracted the years from the debcargo repository itself! Since I was testing the code from inside the debcargo repository it never failed, but when I executed it from a non-git folder git threw an error, though from git log and not git clone. With git2 the mistake was spotted right away, because git2 threw me an error saying I had given it an empty URL.

When it comes to performance I see that debcargo is faster compared to the previous version. This makes sense because previously it was doing 5 fork and exec system calls, and now those are avoided.


Cryptogram: Book Review: Twitter and Tear Gas, by Zeynep Tufekci

There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual. Derided as "slacktivism" or "clicktivism," the ease of action without commitment can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.

Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.

Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet -- and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book's central topic.

"The Power and Fragility of Networked Protest" is the book's subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it's a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his "I Have a Dream" speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.

That's the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous -- to sometimes disastrous results. Says Tufekci: "The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next." That makes them less able to respond to government counters, change their tactics­ -- a phenomenon Tufekci calls "tactical freeze" -- make movement-wide decisions, and survive over the long haul.

Tufekci isn't arguing that modern protests are necessarily less effective, but that they're different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.

To that end, she develops a taxonomy for talking about social movements. Protests are an example of a "signal" that corresponds to one of several underlying "capacities." There's narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There's disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there's electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various "affordances" of modern Internet technologies, particularly social media, the same signal -- a protest of a given size -- reflects different underlying capacities.

This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to 2014 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world's attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.

If this all sounds dry and academic, it's not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East -- particularly Egypt and her native Turkey -- Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and -- despite being published by the intimidating Yale University Press -- her book is more mass-market than academic. What rigor is there is presented in a way that carries readers along rather than distracting.

The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or -- in some ways worse -- leave it to unfettered for-profit corporations. When you're reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You'll come away with new insights.

Tufekci twice quotes historian Melvin Kranzberg from 1985: "Technology is neither good nor bad; nor is it neutral." This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.

I have long characterized our technological future as a battle between the quick and the strong. The quick -- dissidents, hackers, criminals, marginalized groups -- are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It's still an open question who will gain the upper hand in the long term, but Tufekci's book helps us understand the dynamics at work.

This essay originally appeared on Vice Motherboard.

The book on Amazon.com.

Cryptogram: Friday Squid Blogging: Eyeball Collector Wants a Giant-Squid Eyeball

They're rare:

The one Dubielzig really wants is an eye from a giant squid, which has the biggest eye of any living animal -- it's the size of a dinner plate.

"But there are no intact specimens of giant squid eyes, only rotten specimens that have been beached," he says.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cryptogram: Forged Documents and Microsoft Fonts

A set of documents in Pakistan were detected as forgeries because their fonts were not in circulation at the time the documents were dated.

Worse Than Failure: Error'd: Unfortunate Timing

"Apparently, I viewed the page during one of those special 31 seconds of the year," wrote Richard W.

 

"Well, it looks like I'll be paying full price for this repair," wrote Ryan W.

 

Marco writes, "So...that's like September 2nd?"

 

"Office 365????? I guess so, but ONLY if you're sure, Microsoft..." writes Leonid T.

 

Brandon writes, "Being someone who lives in 'GMT +1', I have to wonder, is it the 6th, 11th, or 12th of July or the 7th of June, November, or December we're talking about here?"

 

"Translated, it reads, 'Bayrou and Modem in angry mode', assuming of course it really is Mr. Bayrou in the pic," wrote Matt.

 


Planet Debian: Chris Lamb: Installation birthday

Fancy receiving congratulations on the anniversary of when you installed your system?

Installing the installation-birthday package on your Debian machines will celebrate each birthday of your system by automatically sending a message to the local system administrator.

The installation date is based on when the system itself was installed, not on when the installation-birthday package was installed.

installation-birthday is available in Debian 9 ("stretch") via the stretch-backports repository, as well as in the testing and unstable distributions:

$ apt install installation-birthday

Enjoy, and patches welcome. :)

Don Marti: Smart futures contracts on software issues talk, and bullshit walks?

Previously: Benkler’s Tripod, transactions from a future software market, more transactions from a future software market

Owning "equity" in an outcome

John Robb: Revisiting Open Source Ventures:

Given this, it appears that an open source venture (a company that can scale to millions of worker/owners creating a new economic ecosystem) that builds massive human curated databases and decentralizes the processing load of training these AIs could become extremely competitive.

But what if the economic ecosystem could exist without the venture? Instead of trying to build a virtual company with millions of workers/owners, build a market economy with millions of participants in tens of thousands of projects and tasks? All of this stuff scales technically much better than it scales organizationally—you could still be part of a large organization or movement while only participating directly on a small set of issues at any one time. Instead of holding equity in a large organization with all its political risk, you could hold a portfolio of positions in areas where you have enough knowledge to be comfortable.

Robb's opportunity is in training AIs, not in writing code. The "oracle" for resolving AI-training or dataset-building contracts would have to be different, but the futures market could be the same.

The cheating project problem

Why would you invest in a futures contract on bug outcomes when the project maintainer controls the bug tracker?

And what about employees who are incentivized from both sides: paid to fix a bug but able to buy futures contracts (anonymously) that will let them make more on the market by leaving it open?

In order for the market to function, the total reputation of the project and contributors must be high enough that outside participants believe that developers are more motivated to maintain that reputation than to "take a dive" on a bug.

That implies that there is some kind of relationship between the total "reputation capital" of a project and the maximum market value of all the futures contracts on it.

Open source metrics

To put that another way, there must be some relationship between the market value of futures contracts on a project and the maximum reputation value of the project. So that could be a proxy for a difficult-to-measure concept such as "open source health."

Open source journalism

Hey, tickers to put into stories! Sparklines! All the charts and stuff that finance and sports reporters can build stories around!

Planet Debian: Norbert Preining: TeX Live contrib repository (re)new(ed)

It is my pleasure to announce the renewal/rework/restart of the TeX Live contrib repository service. The repository is collecting packages that cannot enter TeX Live directly (mostly due to license reasons), but are free to distribute. The basic idea is to provide a repository mimicking Debian’s nonfree branch.

The packages on this site are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that can not be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib.

Currently there are 52 packages in the repository, falling roughly into the following categories:

  • nosell fonts: fonts and macros with nosell licenses, e.g., garamond, garamondx, etc. These fonts are mostly those that are also available via getnonfreefonts.
  • nosell packages: packages with nosell licenses, e.g., acmtrans.
  • nonfree support packages: those packages that require non-free tools or fonts, e.g., acrotex, lucida-otf, verdana, etc.

The full list of packages can be seen here.

The ultimate goal is to provide a companion to the core TeX Live (tlnet) distribution in much the same way as Debian‘s non-free tree is a companion to the normal distribution. The goal is not to replace TeX Live: packages that could go into TeX Live itself should stay (or be added) there. TLContrib is simply trying to fill in a gap in the current distribution system.

Quick Start

If you are running the current version of TeX Live, which is 2017 at the moment, the following code will suffice:

  tlmgr repository add http://contrib.texlive.info/current tlcontrib
  tlmgr pinning add tlcontrib '*'

In future there might be releases for certain years.

Verification

The packages are signed with my GnuPG RSA key: 0x6CACA448860CDC13. tlmgr will automatically verify authenticity if you add my key:

  curl -fsSL https://www.preining.info/rsa.asc | tlmgr key add -

After that tlmgr will tell you about failed authentication of this repository.

History

Taco Hoekwater started TLContrib in 2010, but it hasn’t seen much activity in recent years. Taco agreed to hand it over to me, and I am currently maintaining the repository. Big thanks to Taco for his long support and cooperation.

In contrast to the original tlcontrib page, we don’t offer an automatic upload of packages or user registration. If you want to add packages here, see below.

Adding packages

If you want to see a package included here that cannot enter TeX Live proper, the following ways are possible (from most appreciated to least appreciated):

  • clone the tlcontrib git repo (see below), add the package, and publish the branch where I can pull from it. Then send me an email with the URL and explanation about free distributability/license;
  • send me the package in TDS format, with explanation about free distributability/license;
  • send me the package as distributed by the author (or on CTAN), with explanation about free distributability/license;
  • send me a link to the package, with an explanation about free distributability/license.

Git repository

The packages are kept in a git repository, and the tlmgr repository is rebuilt from it after changes. The location is https://git.texlive.info/tlcontrib.

Enjoy.


Planet Debian: Jose M. Calhariz: Crossgrading a more typical server in Debian9

First you need to install a 64-bit kernel and boot with it. See my previous post on how to do it.

Second you need to do a bootstrap of crossgrading:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 # the install is run twice: the second pass picks up packages that failed the first time due to ordering
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

Third, do a crossgrade of the libraries:

 dpkg --list > original.dpkg
 apt-get --fix-broken --allow-remove-essential install
 for pack32 in $(grep :i386 original.dpkg | awk '{print $2}' ) ; do 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get install --yes --allow-remove-essential ${pack32%:i386} ; 
   fi ; 
 done

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi

Krebs on Security: Thieves Used Infrared to Pull Data from ATM ‘Insert Skimmers’

A greater number of ATM skimming incidents now involve so-called “insert skimmers,” wafer-thin fraud devices made to fit snugly and invisibly inside a cash machine’s card acceptance slot. New evidence suggests that at least some of these insert skimmers — which record card data and store it on a tiny embedded flash drive  — are equipped with technology allowing them to transmit stolen card data wirelessly via infrared, the same communications technology that powers a TV remote control.

Last month the Oklahoma City metropolitan area experienced a rash of ATM attacks involving insert skimmers. The local KFOR news channel on June 28, 2017 ran a story stating that at least four banks in the area were hit with insert skimmers.

The story quoted a local police detective saying “the skimmer contains an antenna which transmits your card information to a tiny camera hidden somewhere outside the ATM.”

Financial industry sources tell KrebsOnSecurity that preliminary analysis of the insert skimmers used in the attacks suggests they were configured to transmit stolen card data wirelessly to the hidden camera using infrared, a short-range communications technology most commonly found in television remote controls.

Here’s a look at one of the insert skimmers that Oklahoma authorities recently seized from a compromised ATM:

An insert skimmer retrieved from a compromised cash machine in Oklahoma City. Image: KrebsOnSecurity.com.

In such an attack, the hidden camera has a dual function: To record time-stamped videos of ATM users entering their PINs; and to receive card data recorded and transmitted by the insert skimmer. In this scenario, the fraudster could leave the insert skimmer embedded in the ATM’s card acceptance slot, and merely swap out the hidden camera whenever its internal battery is expected to be depleted.

Of course, the insert skimmer also operates on an embedded battery, but according to my sources the skimmer in question was designed to turn on only when someone uses the cash machine, thereby preserving the battery.

Thieves involved in skimming attacks have hidden spy cameras in some pretty ingenious places, such as a brochure rack to the side of the cash machine or a safety mirror affixed above the cash machine (some ATMs legitimately place these mirrors so that customers will be alerted if someone is standing behind them at the machine).

More often than not, however, hidden cameras are placed behind tiny pinholes cut into false fascias that thieves install directly above or beside the PIN pad. Unfortunately, I don’t have a picture of a hidden camera used in the recent Oklahoma City insert skimming attacks.

Here’s a closer look at the insert skimmer found in Oklahoma:

Image: KrebsOnSecurity.com.

A source at a financial institution in Oklahoma shared the following images of the individuals who are suspected of installing these insert skimming devices.

Individuals suspected of installing insert skimmers in a rash of skimming attacks last month in Oklahoma City. Image: KrebsOnSecurity.com.

As this skimming attack illustrates, most skimmers rely on a hidden camera to record the victim’s PIN, so it’s a good idea to cover the PIN pad with your hand, purse or wallet while you enter it.

Yes, there are skimming devices that rely on non-video methods to obtain the PIN (such as PIN pad overlays), but these devices are comparatively rare and quite a bit more expensive for fraudsters to build and/or buy.

So cover the PIN pad. It also protects you against some ne’er-do-well behind you at the ATM “shoulder surfing” you to learn your PIN (which would likely be followed by a whack on the head).

It’s an elegant and simple solution to a growing problem. But you’d be amazed at how many people fail to take this basic, hassle-free precaution.

If you’re as fascinated as I am with all these skimming devices, check out my series All About Skimmers.

Cryptogram: Tomato-Plant Security

I have a soft spot for interesting biological security measures, especially by plants. I've used them as examples in several of my books. Here's a new one: when tomato plants are attacked by caterpillars, they release a chemical that turns the caterpillars on each other:

It's common for caterpillars to eat each other when they're stressed out by the lack of food. (We've all been there.) But why would they start eating each other when the plant food is right in front of them? Answer: because of devious behavior control by plants.

When plants are attacked (read: eaten) they make themselves more toxic by activating a chemical called methyl jasmonate. Scientists sprayed tomato plants with methyl jasmonate to kick off these responses, then unleashed caterpillars on them.

Compared to an untreated plant, a high-dose plant had five times as much plant left behind because the caterpillars were turning on each other instead. The caterpillars on a treated tomato plant ate twice as many other caterpillars than the ones on a control plant.

Planet Linux Australia: Anthony Towns: Bitcoin: ASICBoost – Plausible or not?

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole formerly of KNCMiner which went bankrupt due to being unable to compete with Bitmain in 2016
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of SpoonDoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it is disabled by default, a switch to hide it obviously already exists, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into — and Bitmain confirms it has done so on testnet. Even Guy Corem’s claim that they only amount to $2,000,000 in savings per year rather than $100,000,000 seems like a reason to expect it to be in use, rather than so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmarks results they got when testing on testnet, and why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims, finds some plausible examples of ASICBoost testing on testnet; however this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev mailing list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them give a different result:

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14 M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWH to 20c per kWH.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there are obviously pretty important.
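
To make the third approach (the one I prefer) concrete, here is a minimal Python sketch: hold electricity spending constant, let difficulty retarget, and compare revenue share with and without ASICBoost. The hashrate and price figures are the April ones used above; the function name and structure are mine, purely for illustration.

BLOCKS_PER_YEAR = 365 * 24 * 6   # one block every ten minutes on average
REWARD = 12.5                    # block subsidy in BTC, ignoring fees

def revenue_loss_without_asicboost(share, advantage, total_hash, price):
    """Annual revenue lost by a miner with `share` of `total_hash` if it stops
    using ASICBoost (advantage as a fraction) and difficulty retargets."""
    boosted = share * total_hash            # hashrate while using ASICBoost
    plain = boosted / (1 + advantage)       # same power budget, no ASICBoost
    others = total_hash - boosted           # everyone else is unchanged
    new_share = plain / (plain + others)    # share after the retarget
    return (share - new_share) * BLOCKS_PER_YEAR * REWARD * price

APRIL_HASH = 3700.0   # PH/s, approximate total network hashrate in April
APRIL_PRICE = 1100.0  # USD/BTC in April

print(revenue_loss_without_asicboost(0.132, 0.30, APRIL_HASH, APRIL_PRICE))  # ~$20M
print(revenue_loss_without_asicboost(0.500, 0.30, APRIL_HASH, APRIL_PRICE))  # ~$47M (Greg's assumptions)
print(revenue_loss_without_asicboost(0.132, 0.15, APRIL_HASH, APRIL_PRICE))  # ~$11M (Guy's assumptions)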

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.
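
The electricity-cost method is just as easy to sanity-check. Here is a minimal Python sketch under the same assumptions (hashrate in PH/s, 0.1 J/GH efficiency, $0.03 per kWh); again the function name is mine and the inputs are simply the figures quoted above.

HOURS_PER_YEAR = 24 * 365

def power_cost_savings(hash_ph, advantage, joules_per_gh=0.1, usd_per_kwh=0.03):
    """Annual electricity savings from an ASICBoost advantage, for a hashrate in PH/s."""
    watts = hash_ph * 1e6 * joules_per_gh          # 1 PH/s = 1e6 GH/s
    kwh_per_year = watts / 1000 * HOURS_PER_YEAR   # convert W to kWh over a year
    return kwh_per_year * usd_per_kwh * advantage

print(power_cost_savings(500, 0.15))    # ~$2.0M  (April, Guy's assumptions)
print(power_cost_savings(837, 0.15))    # ~$3.3M  (today, hashrate up proportionally)
print(power_cost_savings(1850, 0.30))   # ~$15M   (April, 50% hash at 30% advantage)
print(power_cost_savings(3100, 0.30))   # ~$25M   (today, 50% hash at 30% advantage)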

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2) — so by finding a variety of Chunk1s — call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D — that are each compatible with a common Chunk2, the work done in step (3) can be shared between them. In that case, you do steps (1,2) once for each different Chunk1 (four times in total), then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work, while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1’s you set yourself up to find, and the relative weight of the Expand versus Compress steps, this comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1’s you have. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
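
For reference, the savings fraction quoted above can be written out explicitly (this is just the same reasoning in LaTeX form). Per attempt, the normal cost is two Expands plus two Compresses (steps (1,2) amortise to nothing); with n Chunk1 candidates sharing a Chunk2, step (3) is shared, so the cost drops to (1 + 1/n) Expands plus two Compresses:

\[
\text{savings} \;=\; \frac{(2e + 2c) - \left((1 + \tfrac{1}{n})e + 2c\right)}{2e + 2c}
\;=\; \frac{n-1}{n}\cdot\frac{e}{2(e+c)}
\;=\; \frac{n-1}{n}\cdot\frac{1}{2(1 + c/e)}
\]

With e = c this simplifies to 25%*(n-1)/n, i.e. 18.75% for n=4, matching the figure above.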

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that AntMiner’s support ASICBoost in hardware already, the only additional cost is finding collisions which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).

 

It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of the everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify) which gives a result of 6gpR/Er = (g+h)^2.
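
Spelling that optimisation out (same symbols as above: hashrate h in TH/s, external hashrate g, price p, reward R, efficiency r in J/GH, electricity cost E in $/kWh), in LaTeX form:

\[
\text{profit per block} \;=\; \frac{p R h}{h + g} \;-\; \frac{h r E}{6}
\]
\[
\frac{\partial}{\partial h}\left(\frac{p R h}{h+g} - \frac{h r E}{6}\right)
\;=\; \frac{p R g}{(h+g)^2} - \frac{r E}{6} \;=\; 0
\quad\Longrightarrow\quad
(h+g)^2 \;=\; \frac{6 g p R}{E r}
\]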

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t, and divide both sides by t, we get 6gpR/(Ert) = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits; or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR), which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day-to-day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
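
As a quick sanity check on those percentages, here is a tiny sketch (Python, purely illustrative) plugging the same numbers into h/t = 1-(Ert)/(6pR):

def optimal_share(E, r=0.1, t=6e6, p=2400.0, R=12.5):
    # Profit-maximising share of hashrate for the lowest-cost producer:
    # h/t = 1 - Ert/(6pR), with E in $/kWh, r in J/GH, t in TH/s,
    # p in $/BTC and R in BTC per block.
    return 1 - (E * r * t) / (6 * p * R)

print(optimal_share(0.03))  # 0.9, i.e. ~90% at Guy Corem's $0.03/kWh estimate
print(optimal_share(0.08))  # ~0.73, i.e. ~73% at a $0.08/kWh hosting cost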

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to quickly expand. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, it would mean giving up the option of mining with them while they are state of the art and then selling them on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete.)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5 requires 1-(Ert)/(6pR) < 0.5, so t > 3pR/(Er). Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hashrate of 11.25M TH/s, almost double the current hashrate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if a fall in the inflation subsidy is not compensated by a corresponding increase in fee income, eg), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/(Er) = 4687.5, near enough to 5000).
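
The decentralisation threshold t > 3pR/(Er) can be sketched the same way (again illustrative Python, using the figures above):

def min_decentralised_hashrate(p, R=12.5, E=0.08, r=0.1):
    # Total network hashrate (TH/s) above which a hashrate majority is no
    # longer the profit-maximising choice for the cheapest producer.
    return 3 * p * R / (E * r)

print(min_decentralised_hashrate(2400))  # 11,250,000 TH/s, i.e. 11.25M TH/s
print(min_decentralised_hashrate(1))     # ~4687.5 TH/s per dollar of price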

Note that this target also sets a limit on the range at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50%-70% of each batch of ASICs they build to their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At a $1200 retail price and $500 manufacturing cost (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh.)
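
For what it’s worth, the 58% and 67% figures fall straight out of the cashflow constraint described above (a quick, purely illustrative Python sketch):

def self_use_fraction(retail, unit_cost):
    # If the units sold at `retail` must cover the manufacturing cost of the
    # whole batch, then (1 - f) * retail = unit_cost, so f = 1 - unit_cost/retail.
    return 1 - unit_cost / retail

print(self_use_fraction(1200, 500))  # ~0.58 with Jimmy Song's $500 cost estimate
print(self_use_fraction(1200, 400))  # ~0.67 at a $400 manufacturing cost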

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hashrate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation are in turn invalid, and the final result is not applicable.

 

Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower, though currently they’re the pool with the most hashpower at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC, which add another 7.6%. The other “Emergent Consensus”-supporting pools (Bitcoin.com, BTC.top, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist; however, Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.top. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and Bitcoin.com beyond supplier and customer. However, John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely to me that Bitcoin.com, BTC.top, ViaBTC and Canoe have strong ties to Bitmain and that Bitmain likely has a high level of influence — whether due to contracts, business relationships or simply loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; or perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool, which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme and gives more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, obscured in order to avoid the economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but that Bitmain could relatively easily supply as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.

 

So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so, since ASICBoost is worth doing and Bitmain is able to use it on mainnet, Bitmain are using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum.
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

Worse Than FailureCodeSOD: Changing Requirements

Requirements change all the time. A lot of the ideology and holy wars that happen in the Git processes camps arise from different ideas about how source control should be used to represent these changes. Which commit changed which line of code, and to what end? But what if your source control history is messy, unclear, or… you’re just not using source control?

For example, let’s say you’re our Anonymous submitter, and find the following block of code. Once upon a time, this block of code enforced some mildly complicated rules about what dates were valid to pick for a dashboard display.

Can you tell which line of code was a reaction to a radically changed requirement?

function validateDashboardDateRanges (dashboard){
    return true; 
    if(dashboard.currentDateRange && dashboard.currentDateRange.absoluteEndDate && dashboard.currentDateRange.absoluteStartDate) {
        if(!isDateRangeCorrectLength(dashboard.currentDateRange.absoluteStartDate, dashboard.currentDateRange.absoluteEndDate)){
            return false;
        }
    }
    if(dashboard.containers){
        for(var c = 0; c < dashboard.containers.length; c++){
            var container = dashboard.containers[c];
            for(var i = 0; i < container.widgets.length; i++){
                if(container.widgets[i].settings){
                    if(container.widgets[i].settings.customDateRange){
                        if(container.widgets[i].settings.dateRange.relativeDate){
                            if (container.widgets[i].settings.dateRange.relativeDate === '-1y'){
                                return false;
                            }
                        } else if (!isDateRangeCorrectLength(container.widgets[i].settings.dateRange.absoluteStartDate, container.widgets[i].settings.dateRange.absoluteEndDate)){
                            return false;
                        }
                    }
                }
            }
        }
    }
    return true;
}

Planet DebianLars Wirzenius: Adding Yakking to Planet Debian

In a case of blatant self-promotion, I am going to add the Yakking RSS feed to the Planet Debian aggregation. (But really because I think some of the readership of Planet Debian may be interested in the content.)

Yakking is a group blog by a few friends aimed at new free software contributors. From the front page description:

Welcome to Yakking.

This is a blog for topics relevant to someone new to free software development. We assume you are already familiar with computers, and are curious about participating in the production of free software. You don't need to be a programmer: software development requires a wide variety of skills, and you can be a valued core contributor to a project without being a programmer.

If anyone objects, please let me know.

Planet DebianVincent Fourmond: Screen that hurts my eyes, take 2

Six months ago, I wrote a lengthy post about my new computer hurting my eyes. I haven't made any progress with that, but I've accidentally upgraded my work computer from Kernel 4.4 to 4.8 and the nvidia drivers from 340.96-4 to 375.66-2. Well, my work computer now hurts my eyes too; I've switched back to the previous kernel and drivers, and I hope it'll be back to normal.

Any ideas of something specific that changed, either between 4.4 and 4.8 (kernel startup code, default framebuffer modes, etc?), or between the 340.96 and the 375.66 drivers? In any case, I'll try that specific combination of kernels/drivers at home to see if I can get it to a usable state.

Don MartiBlind code reviews experiment

In case you missed it, here's a study that made the rounds earlier this year: Gender differences and bias in open source: Pull request acceptance of women versus men:

This paper presents the largest study to date on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. Surprisingly, our results show that women's contributions tend to be accepted more often than men's. However, women's acceptance rates are higher only when they are not identifiable as women.

A followup, from Alice Marshall, breaks out the differences between acceptance of "insider" and "outsider" contributions.

For outsiders, women coders who use gender-neutral profiles get their changes accepted 2.8% more of the time than men with gender-neutral profiles, but when their gender is obvious, they get their changes accepted 0.8% less of the time.

We decided to borrow the blind auditions concept from symphony orchestras for the open source experiments program.

The experiment, launching this month, will help reviewers who want to try breaking habits of unconscious bias (whether by gender or insider/outsider status) by concealing the name and email address of a code author during a review on Bugzilla. You'll be able to un-hide the information before submitting a review, if you want, in order to add a personal touch, such as welcoming a new contributor.

Built with the WebExtension development work of Tomislav Jovanovic ("zombie" on IRC), and the Bugzilla bugmastering of Emma Humphries. For more info, see the Bugzilla bug discussion.

Data collection

The extension will "cc" one of two special accounts on a bug, to indicate if the review was done partly or fully blind. This lets us measure its impact without having to make back-end changes to Bugzilla.

(Yes, WebExtensions let you experiment with changing a user's experience of a site without changing production web applications or content sites. Bonus link: FilterBubbler.)

Coming soon

A first release is on a.m.o., here: Blind Reviews BMO Experiment, if you want an early look. We'll send out notifications to relevant places when the "last" bugs are fixed and it's ready for daily developer use.

Planet Linux AustraliaColin Charles: CFP for Percona Live Europe Dublin 2017 closes July 17 2017!

I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? The call for presentations closes on July 17 2017, and the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.

Planet DebianLucy Wayland: Basic Chilli Template

Amounts are to taste:
[Stage one]
Chopped red onion
Chopped garlic
Chopped fresh ginger
Chopped big red chillies (mild)
Chopped birds eye chillies (red or green, quite hot)
Chopped scotch bonnets (hot)
[Fry onion in some olive oil. When getting translucent, add rest of ingredients. May need to add some more oil. When the garlic is browning, move on to stage two.]
[Stage two]
Some tins of chopped tomato
Some tomato puree
Some basil
Some thyme
Bay leaf optional
Some sliced mushroom
Some chopped capsicum pepper
Some kidney beans
Other beans optional (butter beans are nice)
Lentils optional (Pro tip: if adding lentils, especially red lentils, I recommend adding some garam masala as well. Lifts the flavour.)
Veggie mince optional
Pearled barley very optional
Stock (some reclaimed from swilling water around the tomato tins)
Water to keep topping up with if it gets too sticky or dry
Dash of red wine optional
Worcester sauce optional
Any other flavouring you feel like optional (I quite often add random herbs or spices)
[[Secret ingredient: a spoonful of Marmite]]
[Cook everything up together, but wait until there is enough fluid before you add the dry/sticky ingredients in.]
[Serve with carb of choice. I'm currently fond of using Ryvita as a dipper instead of tortilla chips.]
[Also serve with a “cooler” such as natural yogurt, soured cream or something else.]

You want more than one type of chilli in there to broaden the flavour. I use all three, plus occasionally others as well. If you are feeling masochistic you can go hotter than scotch bonnets, but although you may get something of the same heat, I think you lose something in the flavour.

BTW – if you get the chance, of all the tortilla chips, I think blue corn ones are the best. Only seem to find them in health food shops.

There you go. It’s a “Zen” recipe, which is why I couldn’t give you a link. You just do it until it looks right, feels right, tastes right. And with practice you get it better and better.


,

Planet DebianJose M. Calhariz: Crossgrading a minimal install of Debian 9

While testing the previous instructions for a full crossgrade I ran into trouble. Here are the results of my tests doing a full crossgrade of a minimal installation of Debian inside a VM.

First you need to install a 64-bit kernel and boot with it. See my previous post on how to do it.

Second you need to do a bootstrap of crossgrading:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures
 apt-get --fix-broken --allow-remove-essential install

Third do a full crossgrade:

 apt-get install --allow-remove-essential $(dpkg --list | grep :i386 | awk '{print $2}' | sed -e s/:i386// )

This procedure seems to be a little fragile, but it worked most of the time for me.

Planet DebianJose M. Calhariz: Crossgrading the kernel in Debian 9

I have a very old installation of 32-bit Debian running on new hardware. Until now, running a 64-bit kernel was enough to use more than 4GiB of RAM efficiently. The only problems I found were the proprietary drivers from AMD/ATI and NVIDIA, which did not like this mixed environment, and some issues with openafs, easily solved with the help of the openafs package maintainers. Crossgrading Qemu/KVM to 64 bits did not pose a problem, so I have been running 64-bit VMs for some time.

But now the nouveau driver does not work with my new display adapter, and I need to run tools from OpsCode that are not available as 32-bit builds. So it is time to do a crossgrade. Having found some problems, I cannot recommend it to inexperienced people. It is time to investigate the issues and file bug reports with Debian where appropriate.

If you run a 32-bit Debian installation you can easily install a 64-bit kernel. The procedure is simple and well tested.

dpkg --add-architecture amd64
apt-get update
apt-get install linux-image-amd64:amd64

And reboot to test the new kernel.

You can expect more articles about crossgrading here.

LongNowThe Artangel Longplayer Letters: Alan Moore writes to Stewart Lee

Alan Moore (left) chose comedian Stewart Lee as the recipient of his Longplayer letter.


In January 02017, Iain Sinclair, a writer and filmmaker whose recent work focuses on the psychogeography of London, wrote a letter to writer Alan Moore as part of the Artangel Longplayer Letters series. The series is a relay-style correspondence, with the recipient of the letter responding with a letter to a different recipient of his choosing. Iain Sinclair wrote to Alan Moore. Now, Alan Moore has written a letter to standup comedian Stewart Lee.

The first series of correspondence in the Longplayer Letters, lasting from 02013-02015 and including correspondence with Long Now board members Stewart Brand, Esther Dyson, and Brian Eno, ended when a letter from Manuel Arriga to Giles Fraser went unanswered. You can find the previous correspondences here.


From: Alan Moore, Phippsville, Northampton

To: Stewart Lee, London

2 February 2017

Dear Stewart,

I’ll hopefully have spoken to you before you receive this and filled you in on what our bleeding game is. Iain Sinclair kicked off by writing a letter to me, and the idea is that I should kind-of-reply to Iain’s letter in the form of a letter to a person of my choosing. The main criteria seems to be that this person be anybody other than Iain, so you’re probably beginning to see how perfectly you fit the bill. Also, since your comedy often consists of repeating the same phrase, potentially dozens of times in the space of a couple of minutes, I thought you’d bring a contrasting, if not jarring, point of view to the whole Longplayer process.

As you’ve no doubt realised, this is actually a chain letter. In 2016, dozens of the world’s most beloved celebrities, the pro-European British public and the population of the USA all broke the chain, as did my first choice as a recipient of this letter, Kim Jong-nam. I’m just saying.

In his letter, Iain raised the point of what a problematic concept ‘long-term’ is for those of us at this far end of our character-arc; little more than apprentice corpses, really. Mind you – with the current resident of the White House – I suppose this is currently a problem whatever age we are. In terms of existential unease, eleven is the new eighty.

Iain also talks about “having tried, for too many years, to muddy the waters with untrustworthy fictions and ‘alternative truths’, books that detoured into other books, I am now colonised by longplaying images and private obsessions, vinyl ghost in an unmapped digital multiverse”, quoting Sebald with “And now, I am living the wrong life.”

I have to admit, that resonated with me. I’ve been thinking lately about the relationship between art and the artist, and I keep coming back to that Escher image of two hands, each holding a pencil, each sketching and creating the other (EscherSketch?). Yes, on a straightforward material level we are creating our art – our writing, our music, our comedy – but at the same time, in that a creator is modified by any significant work that they bring into being, the art is also altering and creating us. And when we embark upon our projects, it’s generally on little more than an inspired whim and with no idea at all about the person that we’ll be at the end of the process. Inevitably, we fictionalise ourselves. In terms of our personal psychology, we clearly don’t have what you’d call a plan, do we? Thus we actually have little say in the person that we end up as. Nobody could be this deliberately.

Adding to the problem for some of us is that, as artists, we tend to cultivate multiplying identities. The person that I am when I’m writing an introduction to William Hope Hodgson’s The House on the Borderland is different to my everyday persona as someone who is continually worshipping a snake and being angry about Batman. My persona when writing introductions is wearing an Edwardian smoking jacket and puffing smugly on a Meerschaum. I know that my recent infatuation with David Foster Wallace stemmed from an awareness that the persona he adopted for many of his essays and the various fictionalised versions of David Foster Wallace that appear in his novels and short stories were different entities to him-in-himself. I wonder how you, and also how the Comedian Stewart Lee, feels about this? I suppose at the end of the day this applies to everybody, doesn’t it? I mean you don’t have to be an artist to present yourself differently according to who you’re presenting yourself to, and in what role. We don’t talk to our parents the way that we do to our sexual partners, and we don’t talk to our sexual partners how we do to our houseplants. With good reason. The upshot of this is that all human identity is probably consciously or unconsciously constructed, and that for this reason its default position is shifting and fluid. What I’m saying is we may be normal.

Iain Sinclair quotes the verdict on George Michael’s death, “Unexplained but not suspicious”, as a fair assessment of all human lives, before going on to mention Jeremy Prynne’s offended response to a request for an example of his thought, “Like a lump of basalt.” The idea being that any thought is really part of a field of awareness; part of a cerebral weather condition that can’t be hacked out of its context without being “as horrifying as that morning radio interlude when listeners channel-hop and make their cups of tea: Thought for the Day.” I know what he means, but of course couldn’t help thinking about your New Year’s Day curatorship of the Today program, where you impishly got me to contribute to the religiously-inclined Thought for the Day section, broadcast at an unearthly hour of the morning, with an unfathomable diatribe about my sock-puppet snake deity, Glycon. Of course, I’m not saying that this is the specific edition of the show that Iain tuned into and found horrifying, but we have no indication that it wasn’t. Thinking about it, assuming that Thought for the Day is archived, then thanks to your clever inclusion of the world’s sole Glycon worshipper in amongst a fairly uniform rota of rabbis, imams and vicars, future social historians are going to have a drastically skewed and disproportionate view regarding early 21st-century spiritual beliefs. Actually, that’s a halfway decent call back to the idea of long-term thinking.

After circling around like an unusually keen-eyed intellectual buzzard for a couple of pages, Iain alights on the ‘time as solid’ premise of Jerusalem, which he likens to “a form of discontinued public housing in which pastpresentfuture coexist” (and yes, I am going to continue quoting his letter in an effort to bring some quality prose to my stretch of this serial epistle). He talks about that central idea of a muttering community of the living, the dead and the unborn, all existing in their different reaches of an eternal Now, referencing Yeats with “the living can assist the imagination of the dead” and remarking how much these words have defined his literary project since he first started writing about London. In light of what I was saying about identity earlier, I was reading in New Scientist that what we think of as ‘our’ consciousness is actually partly infiltrated by and composed of the consciousnesses that surround us. This of course includes the consciousness of a deceased person that we may be considering, as well as that of any imaginary person we may be projecting on the present or the future. I would be a slightly different creator and a slightly different person, for example, had I never entered into a consideration of the work and the consciousness of Flann O’Brien, or Mervyn Peake, or Angela Carter, or Kathy Acker, or William Burroughs, or a thousand other creators. Looked at like that, it’s as if we take on elements of other people’s consciousness, living or dead or imaginary, almost as a way of building up our psychological immune system. This makes us all fluctuating composites, feeding into and out of each other, and perhaps suggests a long-term possibility that goes beyond our mortal lifespan or status as individuals. If this were the case, if identity were a fluid commodity and we all flowed in and out of each other, then you’d have to see someone like John Clare as an instance where the levees had been overwhelmed and he was pretty much drowning in everybody.

Mentioning Clare’s asylum-mate Lucia Joyce and my rather brave stab at approximating her dad’s language, Iain introduced a notion of William Burroughs’ manufacture that I hadn’t come across before, that of the ‘word vine’: write a word, and the next word will suggest itself, and so on. I have to say, that is how the writing process seems to work with me. Perhaps people assume that writers have an idea and then they write it down fully formed, but that isn’t how it works in practice, or at least not for me. I don’t know if it’s the same for you, but for me most of the writing is generated by the act of writing itself. An average idea, if properly examined, may turn out to have strong neural threads of association or speculation that link it to an absolutely brilliant idea. I’m sure you must have found this with comedy routines; that a minor commonplace absurdity will open up logic gates on a string of increasingly striking or funny ideas, like finding a nugget that leads to a small but profitable gold seam. I don’t think creative people have ideas like a hen lays eggs; more that they arise by a scarcely-definable action from largely involuntary mental processes.

Still talking about Burroughs, Iain then moved on to a discussion of dreams via Burroughs’ My Education – “Couldn’t find my room as usual in the Land of the Dead. Followed by bounty hunters.” On Jerusalem’s pet subject of Eternalism, Iain took a position that sees our real eternity as residing in dreams, “in residues of sleep, in the community of sleepers, between worlds, beyond mortality.” While I’m not sure about that, it does admittedly feel right, perhaps because of the classical world’s equivalence between the world of dreams and the world of the dead, dreams being the only place you reliably met dead people.

I’ve become much more interested in dreams since Steve Moore’s death in 2014. Realising how much I missed reading through the last couple of weeks of Steve’s methodically recorded dreams on visits up to Shooters Hill, I’ve even started a much sparser and more impoverished dream-record of my own. I just really love the flavour of dreams, whether mine or other people’s. Iain reports a dream where me and him were ascending the stairs of the Princelet Street synagogue, the one featured in Rodinsky’s Room, in what seemed to him almost like an out-take from Chris Petit’s The Cardinal and the Corpse. After a ritual exchange of velvet cricket caps between us, the vista outside the synagogue window began to strobe like the climactic vision in Hodgson’s The House on the Borderland. Martin Stone – Pink Fairies, Mighty Baby – was laying out a pattern of white lines on a black case, and explained that he’d recently found a first edition of Hodgson’s book in an abandoned villa, lavishly inscribed and previously owned by Aleister Crowley. Isn’t that fantastic? Knowing that I was there in the dream gives me a sort of pseudo-memory of actually having been present at this unlikely event.

While I’m not sure about the connection between dreams and Eternalism, Iain is probably right. I very recently stumbled across – in a fascinating collection of outré individuals from David Bramwell entitled The Odditorium – the peculiar scientific theories of J.W. Dunne. Dunne proposed a kind of solid time similar to the Einsteinian state of affairs posited in Jerusalem, which I found mildly pleasing just as a supporting argument from another source, but I was taken aback by Dunne’s reasoning, in which he suggests that the accreted ‘substance’ of our dreams is somehow crucial to the phenomenon. This is so like the idea in Jerusalem of the timeless higher dimension of Mansoul being made from accumulated dream-stuff and chimes so well with Iain’s comment about eternity being located “in residues of sleep” that I should probably chew these notions over a bit more before coming to any conclusions.

Dreams certainly seem to be in the air at the moment, with dreams being the theme of our next-but-one Arts Lab magazine to be released, and Iain winding up his letter by referring to Steve Moore’s dream-centred rehearsals for eternity up there on Shooters Hill. For the anniversary of Steve’s death, I’ve decided to pay a long-postponed visit to his house, or rather to the structure that’s replaced it since the place was sold and rebuilt. I’ll hopefully get a chance to visit the Shrewsbury Lane burial mound where we scattered Steve’s ashes by the light of the supermoon following tropical storm Bertha in August 2014. Around a month or so ago I noticed an article in The Guardian about the classification of various places as World Heritage sites by English History, and was ridiculously pleased to find that the Shrewsbury Mound, the last surviving Bronze Age burial mound of several on Shooters Hill with the others having all been bulldozed in the early 1930s, was to be included. Steve’s instructions that his final resting place should be his favourite local landmark seems to me to be a way of fusing with the landscape and its history, its dreamtime if you like, which is perhaps as close to a genuine long-term strategy as its possible for a human being to get.

Anyway, it’s late – the moon tonight is a beautiful first-quarter crescent – and I should probably wrap this up. I’d like to leave you with the ‘Brexit Poem’ that I jotted down in an idle moment a month or two ago:

“I wrote this verse the moment that I heard/ the good news that we’d got our language back/ whence I, in a misjudged racial attack,/ kicked out French, German and Italian words/ and then I ”

With massive love and respect, as ever –-

Alan


Alan Moore was born in Northampton in 1953 and is a writer, performer, recording artist, activist and magician.
His comic-book work includes Lost Girls with Melinda Gebbie, From Hell with Eddie Campbell and The League of Extraordinary Gentlemen with Kevin O’Neill. He has worked with director Mitch Jenkins on the Showpieces cycle of short films and on forthcoming feature film The Show, while his novels include Voice of the Fire (1996) and his current epic Jerusalem (2016). Only about half as frightening as he looks, he lives in Northampton with his wife and collaborator Melinda Gebbie.

Stewart Lee was born in 1968 in Shropshire but grew up in Solihull. He started out on the stand-up comedy circuit in London in 1989, and started to work out whatever it was he was trying to do after the turn of the century. He has written and performed four series of his own stand-up show on BBC2, had shows on at the Edinburgh fringe for 28 of the last 30 years, and has the last six of his full length stand-up shows out on DVD/download/whatever. He is the writer or co-writer of five theatre pieces and two art installations, a number of radio series, three books about comedy and a bad novel. He lives in Stoke Newington, North London with his wife, also a comedian, and two children. He was enrolled in the literary society The Friends of Arthur Machen by Alan Moore, and is a regular, if disguised, presence on London’s volunteer-fronted arts radio station Resonance 104.4 fm. His favourite comic book characters are Deathlok The Demolisher, Howard The Duck, Conan The Barbarian, Concrete and The Thing. His favourite bands/musicians are The Fall, Giant Sand, Dave Graney, John Coltrane, Miles Davis, Derek Bailey, Evan Parker, Bob Dylan, The Byrds and Shirley Collins. His favourite filmmakers are Sergio Leone, Sergio Corbucci, Andrew Kotting, Hal Hartley, and Akira Kurosawa. His favourite writers are Arthur Machen, William Blake, Iain Sinclair, Alan Moore, Stan Lee, Ray Bradbury, DH Lawrence, Thomas Hardy, Philip Larkin, Richard Brautigan, Geoff Dyer, Neil M Gunn, Francis Brett Young, Eric Linklater and Robert E Howard.

Debian Administration Implementing two factor authentication with perl

Two factor authentication is a system which is designed to improve security, by requiring a second factor, in addition to a username/password pair, to access a particular resource. Here we'll look briefly at how you add two factor support to your applications with Perl.
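
The article itself uses Perl; purely as a language-agnostic illustration of what the second factor usually is, here is a minimal TOTP (RFC 6238) sketch in Python. The 30-second time step and 6-digit output are the common defaults rather than anything specific to the article, and the secret below is made up for the example:

import base64, hashlib, hmac, struct, time

def totp(secret_base32, digits=6, step=30):
    # HMAC-SHA1 over the number of `step`-second intervals since the Unix
    # epoch, followed by dynamic truncation (RFC 4226 / RFC 6238).
    key = base64.b32decode(secret_base32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // step)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # hypothetical shared secret from enrolment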

Planet DebianSven Hoexter: environment variable names with dots and busybox 1.26.0 ash

In case you're for example using Alpine Linux 3.6 based docker images, and you've been passing through environment variable names with dots, you might miss them now in your actual environment. It seems that with busybox 1.26.0 the busybox ash got a lot stricter regarding validation of environment variable names and now you can no longer pass through variable names with dots in them. They just won't be there. If you've been running ash interactively you could not add them in the past, but until now you could do something like this in your Dockerfile

ENV foo.bar=baz

and later on access a variable "foo.bar".

bash still allows those invalid variable names and is way more tolerant. So to be nice to your devs, and still bump your docker image version, you can add bash and ensure you're starting your application with /bin/bash instead of /bin/sh inside of your container.


Cory DoctorowTalking Walkaway with Reason Magazine


Of all the press-stops I did on my tour for my novel Walkaway, I was most excited about my discussion with Katherine Mangu-Ward, editor-in-chief of Reason Magazine, where I knew I would have a challenging and meaty conversation with someone who was fully conversant with the political, technological and social questions the book raised.

I was not disappointed.


The interview, which was published online today, is a substantial and challenging one that gets at the core of what I’d hoped to do with the book. I hope you’ll give it a read.

But I think Uber is normal and dystopian for a lot of people, too. All the dysfunctions of Uber’s reputation economics, where it’s one-sided—I can tank your business by giving you an unfair review. You have this weird, mannered kabuki in some Ubers where people are super obsequious to try and get you to five-star them. And all of that other stuff that’s actually characteristic of Down and Out in the Magic Kingdom. I probably did predict Uber pretty well with what would happen if there are these reputation economies, which is that you would quickly have a have and a have-not. And the haves would be able to, in a very one-sided way, allocate reputation to have-nots or take it away from them, without redress, without rule of law, without the ability to do any of the things we want currency to do. So it’s not a store of value, it’s not a unit of exchange, it’s not a measure of account. Instead this is just a pure system for allowing the powerful to exercise power over the powerless.

Isn’t the positive spin on that: Well, yeah, but the way we used to do that allocation was by punching each other in the face?


Well, that’s one of the ways we used to. I was really informed by a book by David Graeber called Debt: The First 5,000 Years, where he points out that the anthropological story that we all just used to punch each other in the face all the time doesn’t really match the evidence. That there’s certainly some places where they punched each other in the face and there’s other places where they just kind of got along. Including lots of places where they got along through having long arguments or guilting each other.

I don’t know. Kabuki for stars on the Uber app still seems better than the long arguments or the guilt.

That’s because you don’t drive Uber for a living and you’ve never had to worry that tomorrow you won’t be able to.


Cory Doctorow’s ‘Fully Automated Luxury Communist Civilization’
[Katherine Mangu-Ward/Reason]


(Photo: Julian Dufort)

Planet DebianReproducible builds folks: Reproducible Builds: week 115 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 2 and Saturday July 8 2017:

Reproducible work in other projects

Ed Maste pointed to a thread on the LLVM developer mailing list about container iteration being the main source of non-determinism in LLVM, together with discussion on how to solve this. Ignoring build path issues, container iteration order was also the main issue with rustc, which was fixed by using a fixed-order hash map for certain compiler structures. (It was unclear from the thread whether LLVM's builds are truly path-independent or rather that they haven't done comparisons between builds run under different paths.)
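
As a toy illustration of the class of bug being discussed (not the LLVM or rustc code itself), iterating an unordered container and emitting the result produces output that can change between runs, while sorting (or using a fixed-order container) makes it deterministic:

# With Python's default hash randomisation, set iteration order for strings
# can differ between interpreter runs, so this "artifact" is not reproducible.
symbols = {"_start", "main", "helper", "cleanup"}
print("unsorted:", ", ".join(symbols))

# Sorting restores a deterministic, reproducible order.
print("sorted:  ", ", ".join(sorted(symbols)))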

Bugs filed

Patches submitted upstream:

Reviews of unreproducible packages

52 package reviews have been added, 62 have been updated and 20 have been removed in this week, adding to our knowledge about identified issues.

No issue types were updated or added this week.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (143)
  • Andreas Beckmann (1)
  • Dmitry Shachnev (1)
  • Lucas Nussbaum (3)
  • Niko Tyni (3)
  • Scott Kitterman (1)
  • Sean Whitton (1)

diffoscope development

Development continued in git with contributions from:

  • Ximin Luo:
    • Add a PartialString class to help with lazily-loaded output formats such as html-dir.
    • html and html-dir output:
      • add a size-hint to the diff headers and lazy-load buttons
      • add new limit flags and deprecate old ones
    • html-dir output
      • split index pages up if they get too big
      • put css/icon data in separate files to avoid duplication
    • main: warn if loading a diff but also giving diff-calculation flags
    • Test fixes for Python 3.6 and CI environments without imagemagick (#865625).
    • Fix a performance regression (#865660) involving the Wagner-Fischer algorithm for calculating levenshtein distance.

With these changes, we are able to generate a dynamically loaded HTML diff for GCC-6 that can be displayed in a normal web browser. For more details see this mailing list post.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianYakking: Time - What are clocks?

System calls

You can't talk about time without clocks. A standard definition of a clock is "an instrument to measure time".

Strictly speaking the analogy isn't perfect, since these clocks aren't just different ways of measuring time, they measure some different notions of time, but this is a somewhat philosophical point.

The interface for this in Linux is a set of system calls that take a clockid_t parameter, though not all system calls are valid for every clock.

This is not an exhaustive list of system calls, since there are also older system calls for compatibility with POSIX or other UNIX standards, which offer a subset of the functionality of the ones above.

System calls beginning with clock_ are about getting the time or waiting; those beginning with timer_ are for setting periodic or one-shot timers; and the timerfd_ calls are for timers that can be put in event loops.

For the sake of not introducing too many concepts at once, we're going to start with the simplest clock and work our way up.

CLOCK_MONOTONIC_RAW

The first clock we care about is CLOCK_MONOTONIC_RAW.

This is the simplest clock.

It is initialised to an arbitrary value on boot and counts up at a rate of one second per second to the best of its ability.

This clock is of limited use on its own, since the only clock-related system calls that work with it are clock_gettime(2) and clock_getres(2). It can be used to determine the order of events, if the time of each event was recorded, and the relative time difference between when they happened, since we know that the clock increments one second per second.

Example

In the program below, we time how long it takes to read 4096 bytes from /dev/zero, and print the result.

#include <stdio.h> /* fopen, fread, printf */
#include <time.h> /* clock_gettime, CLOCK_MONOTONIC_RAW, struct timespec */

int main(void) {
    FILE *devzero;
    int ret;
    struct timespec start, end;
    char buf[4096];

    devzero = fopen("/dev/zero", "r");
    if (!devzero) {
        return 1;
    }

    /* get start time */
    ret = clock_gettime(CLOCK_MONOTONIC_RAW, &start);
    if (ret < 0) {
        return 2;
    }

    if (fread(buf, sizeof *buf, sizeof buf, devzero) != sizeof buf) {
        return 3;
    }

    /* get end time */
    ret = clock_gettime(CLOCK_MONOTONIC_RAW, &end);
    if (ret < 0) {
        return 4;
    }

    end.tv_nsec -= start.tv_nsec;
    if (end.tv_nsec < 0) {
        end.tv_sec--;
        end.tv_nsec += 1000000000l;
    }
    end.tv_sec -= start.tv_sec;

    printf("Reading %zu bytes took %.0f.%09ld seconds\n",
           sizeof buf, (double)end.tv_sec, (long)end.tv_nsec);

    return 0;
}

You're possibly curious about the naming of the clock called MONOTONIC_RAW.

In the next article we will talk about CLOCK_MONOTONIC, which may help you understand why it's named the way it is.

I suggested the uses of this clock are for the sequencing of events and for calculating the relative period between them. If you can think of another use please send us a comment.

If you like to get hands-on, you may want to try reimplementing the above program in your preferred programming language, or extending it to time arbitrary events.
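
Taking up that suggestion, here is a rough reimplementation in Python (3.3 or later on Linux, where time.clock_gettime and time.CLOCK_MONOTONIC_RAW are available); it follows the same shape as the C program above rather than adding anything new:

import time

def main():
    with open("/dev/zero", "rb") as devzero:
        # get start time
        start = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)
        buf = devzero.read(4096)
        # get end time
        end = time.clock_gettime(time.CLOCK_MONOTONIC_RAW)

    if len(buf) != 4096:
        raise SystemExit(1)

    print("Reading %d bytes took %.9f seconds" % (len(buf), end - start))

if __name__ == "__main__":
    main()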

CryptogramMore on the NSA's Use of Traffic Shaping

"Traffic shaping" -- the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveiled -- is an NSA technique that has gotten much less attention than it deserves. It's a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There's a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term "traffic shaping" to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen's international communications from inside Yemen itself, the agency might try to "shape" the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic -- emails and chat messages sent between Americans, say -- to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn't authenticated.

Worse Than FailureThe Defensive Contract

Working for a contractor within the defense industry can be an interesting experience. Sometimes you find yourself trying to debug an application from a stack trace which was handwritten and faxed out of a secured facility with all the relevant information redacted by overzealous security contractors who believe that you need a Secret clearance just to know that it was a System.NullReferenceException. After weeks of frustration when you are unable to solve anything from a sheet of thick black Sharpie stripes, they may bring you there for on-site debugging.

Security cameras perching on a seaside rock, like they were seagulls

Beforehand, they will lock up your cell phone, cut out the WiFi antennas from your development laptop, and background check you so thoroughly that they’ll demand explanations for the sins of your great-great-great-great grandfather’s neighbor’s cousin’s second wife’s stillborn son before letting you in the door. Once inside, they will set up temporary curtains around your environment to block off any Secret-rated workstation screens to keep you from peeking and accidentally learning what the Top Secret thread pitch is for the lug nuts of the latest black-project recon jet. Then they will set up an array of very annoying red flashing lights and constant alarm whistles to declare to all the regular staff that they need to watch their mouths because an uncleared individual is present.

Then you’ll spend several long days trying to fix code. But you’ll have no Internet connection, no documentation other than whatever three-ring binders full of possibly-relevant StackOverflow questions you had the foresight to prepare, and the critical component which reliably triggers the fault has been unplugged because it occasionally sends Secret-rated UDP packets, and, alas, you’re still uncleared.

When you finish the work, if you’re lucky they’ll even let you keep your laptop. Minus the hard drive, of course. That gets pulled, secure-erased ten times over, and used for target practice at the local Marine battalion’s next SpendEx.

Despite all the inherent difficulties though, defense work can be very financially-rewarding. If you play your cards right, your company may find itself milking a 30-year-long fighter jet development project for all it’s worth with no questions asked. That’s good for salaries, good for employee morale, and very good for job security.

That’s not what happened to Nikko, of course. No. His company didn’t play its cards right at all. In fact, they didn’t even have cards. They were the player who walked up to the poker table after the River and went all-in despite not even being dealt into the game. “Hey,” the company’s leaders said to themselves, “Yeah we’ll lose some money, but at least we get to play with the big boys. That’s worth a lot, and someday we’ll be the lead contractor for the software on the next big Fire Control Radar!”

So Nikko found himself working on a project his company was the subcontractor (a.k.a. the lowest bidder) for. But in their excited rush to take on the work, nobody read the contract before signing it as-is. The customer’s requirements for this component were vague, contradictory, at times absurd, and of course the contract offered no protection for Nikko’s company.

In fact, months later when Nikko–not yet aware of the mess he was in–met with engineers from the lead contractor–whom we’ll call Acme–for guidance on the project, one of them plainly told him in an informal context “Yeah, it’s a terrible component. We just wanted to get out from under it. It’s a shame you guys bid on it…”

The project began with a small team: a project manager, Nikko as the experienced lead, and two junior engineers. Acme did not make things easy on them. They were expected to write all code at Acme’s facilities, on a network with no Internet access. They asked to bring their own laptops in to develop on, but the information security guys refused and instead offered them one 15-year-old Pentium 4 that the three engineers were expected to share. Of course, using such an ancient system meant that a clean compile took 20 minutes, and the hidden background process that the security guys used to audit file access constantly brought disk I/O to a halt.

But development started anyway, despite all the red flags. They were required to use an API from another subcontractor. However, that subcontractor would only give them obfuscated JAR files with no documentation. Fortunately it was a fairly simple API and the team had some success decompiling it and figuring out how it worked.

But their next hurdle was even worse. All the JAR did was communicate with a REST interface from a server. But due to the way the Acme security guys had things set up, there was no test server on the development network. It wasn’t allowed. Period.

The actual server lived in an integration lab located several miles away, but coding was not allowed there. Access to it was tightly-controlled and scheduled. Nikko found himself writing code, checking it in, and scheduling a time slot at the lab (which often took days) to try out his changes.

The integration lab was secured. He could not bring anything in and Acme information security specialists had to sign off on strict paperwork every time he wanted to transfer the latest build there. Debuggers were forbidden due to the fears of giving an uncleared individual access to the system’s memory, and Nikko had to hand-copy any error logs using pen and paper to bring any error messages out of the facility and back to the development lab.

Three months into the project, Nikko was alone. The project manager threw some kind of temper tantrum and either quit or was fired. One of the junior engineers gave birth and quit the company during maternity leave. And the other junior engineer accepted an offer from another company and also left.

Nikko, feeling burned out and unable to sleep one night, then remembered his father’s story of punchcard programming in computing’s early days. Back then, your program was a stack of punchcards, with each card performing a single machine instruction. You had to schedule a 15-minute timeslot with the computer to run your program through it. And sometimes the operator accidentally dropped your box of punchcards on the way to the machine and made no effort to ensure they were executed in the correct order, ruining the job.

The day after that revelation, Nikko met with his bosses. He was upset, and flatly told them that the project could not succeed, they were following 1970’s punchcard programming methodologies in the year 2016, and that he would have no part in it anymore.

He then took a job at a different defense contractor, and promptly found himself working again as a subcontractor on an Acme component. He decided to stick with it for a while; since his new company actually read contracts before signing, maybe it would be better this time? Still, in the back of his mind he started to wonder if he had died and Acme was his purgatory.


Planet DebianFrancois Marier: Toggling Between Pulseaudio Outputs when Docking a Laptop

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output

To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:

2 sink(s) available.
...
  * index: 1
    name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
    driver: <module-alsa-card.c>
...
    ports:
        analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
            properties:

        analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
            properties:
                device.icon_name = "audio-speakers"

From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker).
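As a side note, the sink names can also be pulled out programmatically rather than by eyeballing the output (a small sketch that assumes GNU grep with PCRE support; the exact pacmd output format can vary between Pulseaudio versions):

# extract just the sink names from the pacmd output
pacmd list-sinks | grep -oP '(?<=name: <)[^>]+(?=>)'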

To switch between the headphones and the speakers, I can therefore run the following commands:

pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events

Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking.

After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug.
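To reproduce that step, acpi_listen can simply be left running in a terminal while docking or plugging in the headphones:

# prints each ACPI event as it arrives; stop with Ctrl-C
acpi_listen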

Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:

event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"

to automatically switch to the headphones when I dock my laptop.

Finding out whether or not the laptop is docked

While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked.

In the end, I settled on detecting the presence of USB devices.

I ran lsusb twice (once docked and once undocked) and then compared the output:

lsusb  > docked 
lsusb  > undocked 
colordiff -u docked undocked 

This gave me a number of differences since I have a bunch of peripherals attached to the dock:

--- docked  2017-07-07 19:10:51.875405241 -0700
+++ undocked    2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp. 
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo 
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo 
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo 
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:

#!/bin/bash

if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi
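One possible way to tie the pieces together (the install path, the script name and the unplug event are assumptions, not taken from the original setup) is to install the toggle script and have acpid invoke it on both jack events, letting the script decide based on the dock state:

# install the toggle script somewhere in root's PATH (hypothetical name and path)
sudo install -m 755 toggle-audio.sh /usr/local/bin/toggle-audio

# /etc/acpi/events/thinkpad-dock-headphones could then delegate to the script:
#   event=jack/headphone HEADPHONE plug
#   action=su francois -c "/usr/local/bin/toggle-audio"
# and a matching file for the unplug event would switch back to the speakers:
#   event=jack/headphone HEADPHONE unplug
#   action=su francois -c "/usr/local/bin/toggle-audio"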

,

Krebs on SecurityAdobe, Microsoft Push Critical Security Fixes

It’s Patch Tuesday, again. That is, if you run Microsoft Windows or Adobe products. Microsoft issued a dozen patch bundles to fix at least 54 security flaws in Windows and associated software. Separately, Adobe’s got a new version of its Flash Player available that addresses at least three vulnerabilities.

The updates from Microsoft concern many of the usual program groups that seem to need monthly security fixes, including Windows, Internet Explorer, Edge, Office, .NET Framework and Exchange.

According to security firm Qualys, the Windows update that is most urgent for enterprises tackles a critical bug in the Windows Search Service that could be exploited remotely via the SMB file-sharing service built into both Windows workstations and servers.

Qualys says the issue affects Windows Server 2016, 2012, 2008 R2, 2008 as well as desktop systems like Windows 10, 7 and 8.1.

“While this vulnerability can leverage SMB as an attack vector, this is not a vulnerability in SMB itself, and is not related to the recent SMB vulnerabilities leveraged by EternalBlue, WannaCry, and Petya,” Qualys notes, referring to the recent rash of ransomware attacks which leveraged similar vulnerabilities.

Other critical fixes of note in this month’s release from Microsoft include at least three vulnerabilities in Microsoft’s built-in browser — Edge or Internet Explorer depending on your version of Windows. There are at least three serious flaws in these browsers that were publicly detailed prior to today’s release, suggesting that malicious hackers may have had some advance notice on figuring out how to exploit these weaknesses.

As it is accustomed to doing on Microsoft’s Patch Tuesday, Adobe released a new version of its Flash Player browser plugin that addresses a trio of flaws in that program.

The latest update brings Flash to v. 26.0.0.137 for Windows, Mac and Linux users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart (users may need to manually check for updates and/or restart the browser to get the latest Flash version). A green arrow in the upper right corner of my Chrome installation today gave me the prompt I needed to update my version to the latest.

Chrome users may need to restart the browser to install or automatically download the latest version. When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then.

As always, if you experience any issues downloading or installing any of these updates, please leave a note about it in the comments below.

Planet DebianJoey Hess: bonus project

Little bonus project after the solar upgrade was replacing the battery box's rotted roof, down to the cinderblock walls.

Except for a piece of plywood, used all scrap lumber for this project, and also scavenged a great set of hinges from a discarded cabinet. I hope the paint on all sides and an inch of shingle overhang will be enough to protect the plywood.

Bonus bonus project to use up paint. (Argh, now I want to increase the size of the overflowing grape arbor. Once you start on this kind of stuff..)

After finishing all that, it was time to think about this while enjoying this.

(Followed by taking delivery of a dumptruck full of gravel -- 23 tons -- which it turns out was enough for only half of my driveway..)

TEDSneak preview lineup unveiled for Africa’s next TED Conference

On August 27, an extraordinary group of people will gather in Arusha, Tanzania, for TEDGlobal 2017, a four-day TED Conference for “those with a genuine interest in the betterment of the continent,” says curator Emeka Okafor.

As Okafor puts it: “Africa has an opportunity to reframe the future of work, cultural production, entrepreneurship, agribusiness. We are witnessing the emergence of new educational and civic models. But there is, on the flip side, a set of looming challenges that include the youth bulge and under-/unemployment, a food crisis, a risky dependency on commodities, slow industrializations, fledgling and fragile political systems. There is a need for a greater sense of urgency.”

He hopes the speakers at TEDGlobal will catalyze discussion around “the need to recognize and amplify solutions from within Africa and the global diaspora.”

Who are these TED speakers? A group of people with “fresh, unique perspectives in their initiatives, pronouncements and work,” Okafor says. “Doers as well as thinkers — and contrarians in some cases.” The curation team, which includes TED head curator Chris Anderson, went looking for speakers who take “a hands-on approach to solution implementation, with global-level thinking.”

Here’s the first sneak preview — a shortlist of speakers who, taken together, give a sense of the breadth and topics to expect, from tech to the arts to committed activism and leadership. Look for the long list of 35–40 speakers in upcoming weeks.

The TEDGlobal 2017 conference happens August 27–30, 2017, in Arusha, Tanzania. Apply to attend >>

Kamau Gachigi, Maker

“In five to ten years, Kenya will truly have a national innovation system, i.e. a system that by its design audits its population for talented makers and engineers and ensures that their skills become a boon to the economy and society.” — Kamau Gachigi on Engineering for Change

Dr. Kamau Gachigi is the executive director of Gearbox, Kenya’s first open makerspace for rapid prototyping, based in Nairobi. Before establishing Gearbox, Gachigi headed the University of Nairobi’s Science and Technology Park, where he founded a Fab Lab full of manufacturing and prototyping tools in 2009, then built another one at the Riruta Satellite in an impoverished neighborhood in the city. At Gearbox, he empowers Kenya’s next generation of creators to build their visions. @kamaufablab

Mohammed Dewji, Business leader

“My vision is to facilitate the development of a poverty-free Tanzania. A future where the opportunities for Tanzanians are limitless.” — Mohammed Dewji

Mohammed Dewji is a Tanzanian businessman, entrepreneur, philanthropist, and former politician. He serves as the President and CEO of MeTL Group, a Tanzanian conglomerate operating in 11 African countries. The Group operates in areas as diverse as trading, agriculture, manufacturing, energy and petroleum, financial services, mobile telephony, infrastructure and real estate, transport, logistics and distribution. He served as Member of Parliament for Singida-Urban from 2005 until his retirement in 2015. Dewji is also the Founder and Trustee of the Mo Dewji Foundation, focused on health, education and community development across Tanzania. @moodewji

Meron Estefanos, Refugee activist

“Q: What’s a project you would like to move forward at TEDGlobal?
A: Bringing change to Eritrea.” —Meron Estefanos

Meron Estefanos is an Eritrean human rights activist, and the host and presenter of Radio Erena’s weekly program “Voices of Eritrean Refugees,” aired from Paris. Estefanos is executive director of the Eritrean Initiative on Refugee Rights (EIRR), advocating for the rights of Eritrean refugees, victims of trafficking, and victims of torture. Ms Estefanos has been key in identifying victims throughout the world who have been blackmailed to pay ransom for kidnapped family members, and was a key witness in the first trial in Europe to target such blackmailers. She is co-author of Human Trafficking in the Sinai: Refugees between Life and Death and The Human Trafficking Cycle: Sinai and Beyond, and was featured in the film Sound of Torture. She was nominated for the 2014 Raoul Wallenberg Award for her work on human rights and victims of trafficking. @meronina

Touria El Glaoui, Art fair founder

“I’m looking forward to discussing the roles we play as leaders and tributaries in redressing disparities within arts ecosystems. The art fair is one model which has had a direct effect on the ways in which audiences engage with art, and its global outlook has contributed to a highly mobile and dynamic means of interaction.” — Touria El Glaoui

Touria El Glaoui is the founding director of the 1:54 Contemporary African Art Fair, which takes place in London and New York every year and, in 2018, launches in Marrakech. The fair highlights work from artists and galleries across Africa and the diaspora, bringing visibility in global art markets to vital upcoming visions. El Glaoui began her career in the banking industry before founding 1:54 in 2013. Parallel to her career, Touria has organised and co-curated exhibitions of her father’s work, the Moroccan artist Hassan El Glaoui, in London and Morocco. @154artfair

Gus Casely-Hayford, Historian

“Technological, demographic, economic and environmental change are recasting the world profoundly and rapidly. The sentiment that we are traveling through unprecedented times has left many feeling deeply unsettled, but there may well be lessons to learn from history — particularly African history — lessons that show how brilliant leadership and strategic intervention have galvanised and united peoples around inspirational ideas.” — Gus Casely-Hayford

Dr. Gus Casely-Hayford is a curator and cultural historian who writes, lectures and broadcasts widely on African culture. He has presented two series of The Lost Kingdoms of Africa for the BBC and has lectured widely on African art and culture, advising national and international bodies on heritage and culture. He is currently developing a National Portrait Gallery exhibition that will tell the story of abolition of slavery through 18th- and 19th-century portraits — an opportunity to bring many of the most important paintings of black figures together in Britain for the first time.

Oshiorenoya Agabi, Computational neuroscientist

“Koniku eventually aims to build a device that is capable of thinking in the biological sense, like a human being. We think we can do this in the next two to five years.” — Oshiorenoya Agabi on IndieBio.co

With his startup Koniku, Oshiorenoya Agabi is working to integrate biological neurons and silicon computer chips, to build computers that can think like humans can. Faster, cleverer computer chips are key to solving the next big batch of computing problems, like particle detection or sophisticated climate modeling — and to get there, we need to move beyond the limitations of silicon, Agabi believes. Born and raised in Lagos, Nigeria, Agabi is now based in the SF Bay Area, where he and his lab mates are working on the puzzle of connecting silicon to biological systems.

Natsai Audrey Chieza, Design researcher

Photo: Natsai Audrey Chieza

Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. She is founder and creative director of Faber Futures, a creative R&D studio that conceptualises, prototypes and evaluates the resilience of biomaterials emerging through the convergence of bio-fabrication, digital fabrication and traditional craft processes. As Resident Designer at the Department of Biochemical Engineering, University College London, she established a design-led microbiology protocol that replaces synthetic pigments with natural dyes excreted by bacteria — producing silk scarves dyed brilliant blues, reds and pinks. The process demands a rethink of the entire system of fashion and textile production — and is also a way to examine issues like resource scarcity, provenance and cultural specificity. @natsaiaudrey

Stay tuned for more amazing speakers, including leaders, creators, and more than a few truth-tellers … learn more >>


Planet DebianAndreas Bombe: PDP-8/e Replicated — Overview

This is an overview of the hardware and internals of the PDP-8/e replica I’m building.

The front panel board

If you know the original or remember the picture from the first post it is clear that this is a functional replica not aiming to be as pretty as those of the other projects I mentioned. I have reordered the switches into two rows to make the board more compact (which also means cheaper) without sacrificing usability.

There are the two rows of display lights plus the one run light the 8/e provides. The upper row is the address, made up of 12 bits of memory address and 3 bits of extended memory address or field. Below it is the 12-bit indicator, which can show one data set out of six as selected by the user.

All the switches of the original are implemented as more compact buttons [1]. While momentary switches are easily substituted by buttons, all buttons implementing two-position switches toggle on/off with each press, and they have an LED above that shows the current state. The six-position rotary switch is implemented as a button cycling through all indicator displays, together with six LEDs which show the active selection.

Markings show the meaning of the indicator and switches as on the original, grouped in threes as the predominant numbering system for the PDPs was octal. The upper line shows the meaning for the state indicator, the middle for the status indicator and bit numbers for the rest. Note that on the PDP-8 and opposite to modern conventions, the most significant bit was numbered 0.

I designed it as a pure front panel board without any PDP-8 simulation parts. The buttons and associated lights are controllable via SPI lines with a 3.3 V supply. The address and indicator lights have a separate common anode configuration with all cathodes individually available on a pin header without any resistors in the path, leaving voltage and current regulation up to the simulation board. This board is actually a few years old from a similar project where I emulated the PDP-8 in software on a microcontroller and the flexible design allowed me to reuse it unchanged.

The main board

This is where the magic happens. You can see three big ICs on the board: on the left is the STM32F405 microcontroller (with ARM Cortex-M4 core), the bigger one in the middle is the Altera [2] MAX 10 FPGA, and finally to the right is the SRAM, which is large enough to hold all the main memory of the 32 KW maximum expansion of the PDP-8/e. The two smaller chips to the right of that are just buffers that drive the front panel address LEDs; the small chip at the top left is an RS-232 level shifter.

The idea behind this is that the PDP-8 and peripherals that are simple to implement directly, such as GPIO or a serial port, are fully on the FPGA. Other peripherals such as paper tape, magnetic tape and disks, which are after all not connected to real PDP-8 drives but backed by disk images on a microSD, are implemented on the microcontroller, interfacing with stub devices in the FPGA. Compared to implementing everything in the FPGA, the STM32F4 has the advantage of useful built-in peripherals such as two host/device-capable USB ports. Its 5 V tolerant I/O pins are also very useful and simply not available in any FPGA.

I have to admit that this board was a bit of a rush job in order to have something at all to show at the Vintage Computer Festival Europe 18.0. Given that it was my first time designing a board with a large microcontroller and the first time with an FPGA, it wasn’t exactly my fastest progressing project either and I got basic functionality (front panel allows toggling in small programs and running them) working just in time. For various reasons the project hasn’t progressed much since, so the following is still just plans, but plans for which the hardware was designed.

Since the aim is to have a cycle-accurate PDP-8/e implementation, digital I/O was always planned. Rather than defining my own header, I have included Arduino R3 compatible headers (for 3.3 V compatible boards only), which have become popular even outside the Arduino world for this purpose. The digital Arduino pins are connected directly to the FPGA and will be directly controllable by PDP-8 software. The downside of choosing the Arduino headers is that the original PDP-8 digital I/O interface is not a perfect match, since it naturally has 12 lines whereas the Arduino has 15. The analog inputs are not connected to the FPGA, as the documentation of the MAX10’s ADC in the EQFP package is not very encouraging. They are connected to the STM32 instead [3].

Another interface connected directly to the FPGA and that would be directly under PDP-8 control is a standard 9 pin RS-232 interface. RX, TX, CTS and RTS are connected and level-shifted between 3.3 V and RS-232 levels by a MAX3232.

Besides the PDP-8, I also plan to implement a full video terminal on the board. The idea is that with a power supply, keyboard and monitor, this board would form a complete system without the need to connect another computer to act as a terminal. To that end, there is a VGA port attached to the FPGA with simple resistor-network DACs for 9-bit color (RGB with 3 bits each). This is another spot where I left myself room to expand; for a VT220, for example, you really only need one color in two brightness levels. Keyboards will be connected either via the PS/2 connector on the right or the USB-A host port at the top left.

Last of the interface ports is the USB micro-AB port on the left, which for now I am using only for power supply. I mainly plan to use it to provide alternative or additional serial ports to the PDP-8 or to export the video terminal serial port for testing purposes. Other possible uses are access to the image files on the microSD and firmware updates.

This has gotten rather long again, so I’m stopping here and leave some implementation notes for another post.


  [1] They are also much cheaper. Given the large number of switches, the savings are substantial. Additionally, the buttons are nicer to operate than long rows of tiny switches.
  [2] Or rather Intel now. At least Altera’s web site, documentation and software have already been thoroughly rebranded, but the chips I got were produced prior to that.
  [3] That’s not to say that the analog conversions on the STM32 are necessarily better than those of the MAX10 when you can’t follow their guidelines; I have no comparisons. Certainly following the guidelines would have been prohibitive given how many pins’ usage they restrict.

Rondam RamblingsThere's yer smoking gun

I predicted the existence of a Russia-gate smoking gun back in March, but I didn't expect it to actually turn up so soon.  And I certainly didn't expect it to turn up by having Donald Trump Jr. whip it out, shoot himself in the foot with it (twice!), and then loudly shout, "I told you there's nothing to see here, move along!" Here is the most damning part of the email chain released by Trump Jr.

Google AdsenseClarification around Pop-Unders

At Google, we value users, advertisers and publishers equally. We have policies in place that define where Google ads should appear and how they must be implemented. These policies help ensure a positive user experience, as well as maintain a healthy ads ecosystem that benefits both publishers and advertisers.

To get your attention, some ads pop up in front of your current browser window, obscuring the content you want to see. Pop-under ads can be annoying as well, as they will "pop under" your window, so that you don't see them until you minimize your browser. We do not believe these ads provide a good user experience, and therefore are not suitable for Google ads.

That is why we recently clarified our policies around pop-ups and pop-unders to help remove any ambiguity. To simplify our policies, we are no longer permitting the placement of Google ads on pages that are loaded as a pop-up or pop-under. Additionally, we do not permit Google ads on any site that contains or triggers pop-unders, regardless of whether Google ads are shown in the pop-unders.

We continually review and evaluate our policies to address emerging trends, and in this case we determined that a policy change was necessary.

As with all policies, publishers are ultimately responsible for ensuring that traffic to their site is compliant with Google policies. To assist publishers, we’ve provided guidance on best practices for buying traffic.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly with one new bronze sponsor, and another silver sponsor is in the process of joining.

The security tracker currently lists 49 packages with a known CVE, and the dla-needed.txt file lists 54. The number of open issues is close to last month’s.

Thanks to our sponsors

New sponsors are in bold.


CryptogramHacking Spotify

Some of the ways artists are hacking the music-streaming service Spotify.

Planet Linux AustraliaJames Morris: Linux Security Summit 2017 Schedule Published

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There’ll also be the usual Linux kernel security subsystem updates, and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we’ll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

Worse Than FailureCodeSOD: Questioning Existence

Michael got a customer call about a PHP system his company had put together four years ago. He pulled up the code, which thankfully was actually up to date in source control, and tried to get a grasp of what the system did.

There, he discovered a… unique way to define functions in PHP:

if(!function_exists("GetFileAsString")){
        function GetFileAsString($FileName){
                $Contents = "";
                if(file_exists($FileName)){
                        $Contents = file_get_contents($FileName);
                }
                return $Contents;
        }
}
if(!function_exists("RemoveEmptyArrayValuesAndDuplicates")){
        function RemoveEmptyArrayValuesAndDuplicates($ArrayName){
                foreach( $ArrayName as $Key => $Value ) {
                        if( empty( $ArrayName[ $Key ] ) )
                   unset( $ArrayName[ $Key ] );
                }
                $ArrayName = array_unique($ArrayName);
                $ReIndexedArray = array();
                $ReIndexedArray = array_values($ArrayName);
                return $ReIndexedArray;
        }
}

Before we talk about the function_exists nonsense, let’s comment on the functions themselves: they’re both unnecessary. file_get_contents returns a false-y value that gets converted to an empty string if you ever treat it as a string, which is exactly the same thing that GetFileAsString does. The replacement for RemoveEmptyArrayValuesAndDuplicates could also be much simpler: array_values(array_unique(array_filter($rawArray)));. That’s still complex enough that it could merit its own function, but without the loop, it’s far easier to understand.

Neither of these functions needs to exist, which is why, perhaps, they’re conditional. I can only guess about how these came to be, but here’s my guess:

Once upon a time, in the Dark Times, many developers were working on the project. They worked with no best-practices, no project organization, no communication, and no thought. Each of them gradually built up a library of include files, that carried with it their own… unique solution to common problems. It became spaghetti, and then eventually a forest, and eventually, in the twisted morass of code, Sallybob’s GetFileAsString conflicted with Joebob’s GetFileAsString. The project teetered on the edge of failure… so someone tried to “fix” it, by decreeing every utility function needed this kind of guard.

I’ve seen PHP projects go down this path, though never quite to this conclusion.


,

Planet DebianSteve Kemp: bind9 considered harmful

Recently there was another bind9 security update released by the Debian Security Team. I thought that was odd, so I've scanned my mailbox:

  • 11 January 2017
    • DSA-3758 - bind9
  • 26 February 2017
    • DSA-3795-1 - bind9
  • 14 May 2017
    • DSA-3854-1 - bind9
  • 8 July 2017
    • DSA-3904-1 - bind9

So in the year to date there have been 7 months, in 3 of them nothing happened, but in 4 of them we had bind9 updates. If these trends continue we'll have another 2.5 updates before the end of the year.

I don't run a nameserver. The only reason I have bind-packages on my system is for the dig utility.

Rewriting a compatible version of dig in Perl should be trivial, thanks to the Net::DNS::Resolver module.

These are about the only commands I ever run:

dig -t a    steve.fi +short
dig -t aaaa steve.fi +short
dig -t a    steve.fi @8.8.8.8

I should do that then. Yes.
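For what it’s worth, here is a rough sketch of what the first of those queries might look like using Net::DNS from a shell one-liner rather than a standalone script (this assumes the libnet-dns-perl package is installed; adding a nameservers argument to the constructor covers the @8.8.8.8 variant):

# print the A records for steve.fi, roughly equivalent to 'dig -t a steve.fi +short'
perl -MNet::DNS -E '
    my $res   = Net::DNS::Resolver->new;   # or: Net::DNS::Resolver->new(nameservers => ["8.8.8.8"])
    my $reply = $res->query("steve.fi", "A") or die $res->errorstring, "\n";
    say $_->address for grep { $_->type eq "A" } $reply->answer;
'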

Planet DebianMarkus Koschany: My Free Software Activities in June 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java + Android

Debian LTS

This was my sixteenth month as a paid contributor and I have been paid to work 16 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I triaged mp3splt and putty and marked CVE-2017-5666 and CVE-2017-6542 as no-dsa because the impact was very low.
  • DLA-975-1. I uploaded the security update for wordpress which I prepared last month fixing 6 CVE.
  • DLA-986-1. Issued a security update for zookeeper fixing 1 CVE.
  • DLA-989-1. Issued a security update for jython fixing 1 CVE.
  • DLA-996-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-1002-1. Issued a security update for smb4k fixing 1 CVE.
  • DLA-1013-1. Issued a security update for graphite2 fixing 8 CVE.
  • DLA-1020-1. Issued a security update for jetty fixing 1 CVE.
  • DLA-1021-1. Issued a security update for jetty8 fixing 1 CVE.

Misc

  • I updated wbar, fixed #829981 and uploaded mediathekview and osmo to unstable. For the Buster release cycle I decided to package the fork of xarchiver‘s master branch, which receives regular updates and bug fixes. Besides it now being a GTK-3 application, a lot of older bugs could be fixed.

Thanks for reading and see you next time.

Sociological ImagesHow LSD opened minds and changed America

In the 1950s and ’60s, a set of social psychological experiments seemed to show that human beings were easily manipulated by low and moderate amounts of peer pressure, even to the point of violence. It was a stunning research program designed in response to the horrors of the Holocaust, which required the active participation of so many people, and the findings seemed to suggest that what happened there was part of human nature.

What we know now, though, is that this research was undertaken at an unusually conformist time. Mothers were teaching their children to be obedient, loyal, and to have good manners. Conformity was a virtue and people generally sought to blend in with their peers. It wouldn’t last.

At the same time as the conformity experiments were happening, something that would contribute to changing how Americans thought about conformity was being cooked up: the psychedelic drug, LSD.

Lysergic acid diethylamide was first synthesized in 1938 in the routine process of discovering new drugs for medical conditions. The first person to discover its psychedelic properties — its tendency to alter how we see and think — was the scientist who invented it, Albert Hofmann. He ingested it accidentally, only to discover that it induces a “dreamlike state” in which he “perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors.”

By the 1950s, LSD was being administered to unwitting Americans in a secret, experimental mind control program conducted by the United States Central Intelligence Agency, one that would last 14 years and occur in over 80 locations. Eventually the fact of the secret program would leak out to the public, and so would LSD.

It was the 1960s and America was going through a countercultural revolution. The Civil Rights movement was challenging persistent racial inequality, the women’s and gay liberation movements were staking claims on equality for women and sexual minorities, the sexual revolution said no to social rules surrounding sexuality and, in the second decade of an intractable war with Vietnam, Americans were losing patience with the government. Obedience had gone out of style.

LSD was the perfect drug for the era. For its proponents, there was something about the experience of being on the drug that made the whole concept of conformity seem absurd. A new breed of thinker, the “psychedelic philosopher,” argued that LSD opened one’s mind and immediately revealed the world as it was, not the world as human beings invented it. It revealed, in other words, the social constructedness of culture.

In this sense, wrote the science studies scholar Ido Hartogsohn, LSD was truly “countercultural,” not only “in the sense of being peripheral or opposed to mainstream culture [but in] rejecting the whole concept of culture.” Culture, the philosophers claimed, shut down our imagination and psychedelics were the cure. “Our normal word-conditioned consciousness,” wrote one proponent, “creates a universe of sharp distinctions, black and white, this and that, me and you and it.” But on acid, he explained, all of these rules fell away. We didn’t have to be trapped in a conformist bubble. We could be free.

The cultural influence of the psychedelic experience, in the context of radical social movements, is hard to overstate. It shaped the era’s music, art, and fashion. It gave us tie-dye, The Grateful Dead, and stuff like this:


via GIPHY

The idea that we shouldn’t be held down by cultural constrictions — that we should be able to live life as an individual as we choose — changed America.

By the 1980s, mothers were no longer teaching their children to be obedient, loyal, and to have good manners. Instead, they taught them independence and the importance of finding one’s own way. For decades now, children have been raised with slogans of individuality: “do what makes you happy,” “it doesn’t matter what other people think,” “believe in yourself,” “follow your dreams,” or the more up-to-date “you do you.”

Today, companies choose slogans that celebrate the individual, encouraging us to stand out from the crowd. In 2014, for example, Burger King abandoned its 40-year-old slogan, “Have it your way,” for a plainly individualistic one: “Be your way.” Across the consumer landscape, company slogans promise that buying their products will mark the consumer as special or unique. “Stay extraordinary,” says Coke; “Think different,” says Apple. Brands encourage people to buy their products in order to be themselves: Ray-Ban says “Never hide”; Express says “Express yourself,” and Reebok says “Let U.B.U.”

In surveys, Americans increasingly defend individuality. Millennials are twice as likely as Baby Boomers to agree with statements like “there is no right way to live.” They are half as likely to think that it’s important to teach children to obey, instead arguing that the most important thing a child can do is “think for him or herself.” Millennials are also more likely than any other living generation to consider themselves political independents and be unaffiliated with an organized religion, even if they believe in God. We say we value uniqueness and are critical of those who demand obedience to others’ visions or social norms.

Paradoxically, it’s now conformist to be an individualist and deviant to be conformist. So much so that a subculture emerged to promote blending in: “normcore,” which makes opting into conformity a virtue. As one commentator described it, “Normcore finds liberation in being nothing special…”

Obviously LSD didn’t do this all by itself, but it was certainly in the right place at the right time. And as a symbol of the radical transition that began in the 1960s, there’s hardly one better.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianJonathan McDowell: Going to DebConf 17

Going to DebConf17

Completely forgot to mention this earlier in the year, but delighted to say that in just under 4 weeks I’ll be attending DebConf 17 in Montréal. Looking forward to seeing a bunch of fine folk there!

Outbound:

2017-08-04 11:40 DUB -> 13:40 KEF WW853
2017-08-04 15:25 KEF -> 17:00 YUL WW251

Inbound:

2017-08-12 19:50 YUL -> 05:00 KEF WW252
2017-08-13 06:20 KEF -> 09:50 DUB WW852

(Image created using GIMP, fonts-dkg-handwriting and the DebConf17 Artwork.)

CryptogramThe Future of Forgeries

This article argues that AI technologies will make image, audio, and video forgeries much easier in the future.

Combined, the trajectory of cheap, high-quality media forgeries is worrying. At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today's CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem. The growth in this technology will transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.

I am not worried about fooling the "untrained ear," and more worried about fooling forensic analysis. But there's an arms race here. Recording technologies will get more sophisticated, too, making their outputs harder to forge. Still, I agree that the advantage will go to the forgers and not the forgery detectors.

Worse Than FailureRubbed Off

Early magnetic storage was simple in its construction. The earliest floppy disks and hard drives used an iron (III) oxide surface coating a plastic film or disk. Later media would use cobalt-based surfaces, allowing finer data resolution than iron oxide, but the basic construction wouldn’t change much.

Samuel H. never had to think much about this until he met Micah.

an 8 inch floppy disk

The Noisiest Algorithm

In the fall of 1980, Samuel was a freshman at State U. The housing department had assigned him Micah as his roommate, assuming that since both were Computer Science majors, they would get along.

On their first night together, Samuel asked why Micah kept throwing his books off the shelf onto the floor. “Oh, I just keep shuffling the books around until they’re in the right order,” Micah said.

“Have you tried, I don’t know, taking out one book at a time, starting from the left?” Samuel asked. “Or sorting the books in pairs, then sorting pairs of pairs, and so on?” He had read about sorting algorithms over the summer.

Micah shrugged, continuing to throw books on the floor.

Divided Priorities

In one of their shared classes, Samuel and Micah were assigned as partners on a project. The assignment: write a program in Altair BASIC that analyzes rainfall measurements from the university weather station, then displays a graph and some simple statistics, including the dates with the highest and lowest values.

All students had access to Altair 8800s in the lab. They were given one 8" floppy disk with the rainfall data, and a second for additional code. Samuel wanted to handle the data read/write code and leave the display to Micah, but Micah insisted on doing the data-handling code himself. “I’ve learned a lot,” he said. Samuel remembered the sounds of books crashing on the floor and flinched. Still, he thought the display code would be easier, so he let Micah at it.

Samuel finished his half of the code early. Micah, though, was obsessed with Star Trek, a popular student-coded space tactics game, and waited until the day before to start work. “Okay, tonight, I promise,” he said, as Samuel left him in the computer lab at an Altair. As he left, he heard Micah close the drive, and the read-head start clacking across the disk.

Corrupted

The next morning, Samuel found Micah still in the lab at his Altair. He was in tears. “The data’s gone,” he said. “I don’t know what I did. I started getting read errors around 1AM. I think the data file got corrupted somehow.”

Samuel gasped when Micah handed him the floppy. Through the read window in the cover, he could see transparent stripes in the disk. The magnetic write surface had been worn away, leaving the clear plastic backing.

Micah explained. He had written his code to read the data file, find the lowest value, write it to an entirely new file, then mark the value in the original file as read. Then it would read the original file again, write another new file, and so on.

Samuel calculated that, with Micah’s “algorithm,” the original file would be read and written to n–1 times, given n entries.

Old Habits Die Hard

Samuel went to the professor and pleaded for a new partner, showing him the floppy with the transparent medium inside. Samuel was given full credit for his half of the program. Micah would have to write his entire program from scratch with a new copy of the data.

Samuel left Micah for another roommate that spring. He didn’t see much of him, as the latter had left Computer Science to major in Philosophy. He didn’t hear about Micah again until a spring day in 1989, after he had finished his PhD.

A grad student, who worked at the computer help desk, told Samuel about a strange man at the computer lab one night. He was despondent, trying to find someone who could help him recover his thesis from a 3 1/2" floppy disk. The student offered to help, and when the man handed him the disk, he pulled the metal tab aside to check for dust.

Most of the oxide coating had been worn away, leaving thin, transparent stripes.


Planet DebianKees Cook: security things in Linux v4.12

Previously: v4.11.

Here’s a quick summary of some of the interesting security things in last week’s v4.12 release of the Linux kernel:

x86 read-only and fixed-location GDT
With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the “sgdt” instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation
After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture code was functionally very similar to each other, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (that got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC
Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.
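For illustration (not from the original post), these are the knobs in question on kernels and architectures that support them; the valid range and defaults are architecture-specific, so check Documentation/sysctl/vm.txt before raising them:

# show how many bits of randomness are applied to mmap base addresses
sysctl vm.mmap_rnd_bits vm.mmap_rnd_compat_bits

# crank it up (example value; the maximum depends on the architecture)
sudo sysctl -w vm.mmap_rnd_bits=32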

LSM structures read-only
James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.)

KASLR enabled by default on x86
With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.
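One quick way to check whether a running kernel was built with these options (this assumes a distro that ships its kernel config under /boot; some expose it as /proc/config.gz instead):

grep -E 'CONFIG_RANDOMIZE_(BASE|MEMORY)=' /boot/config-"$(uname -r)"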

Expand stack canary to 64 bits on 64-bit systems
The stack canary value used by CONFIG_CC_STACKPROTECTOR is most powerful on x86 since it is different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this, so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for the stack protector.

Expanded stack/heap gap
Hugh Dickens, with input from many other folks, improved the kernel’s mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler to produce “stack probes” when doing large stack expansions. Any Variable Length Arrays on the stack or alloca() usage needs to have machine code generated to touch each page of memory within those areas to let the kernel know that the stack is expanding, but with single-page granularity.

That’s it for now; please let me know if I missed anything. The v4.13 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 1

Today marks the start of a new term in Queensland, although most states and territories have at least another week of holidays, if not more. It’s always hard to get back into the swing of things in the 3rd term, with winter cold and the usual round of flus and sniffles. OpenSTEM’s 3rd term units branch into new areas to provide some fresh material and a new direction for the new semester. This term younger students are studying the lives of children in the past from a narrative context, whilst older students are delving into aspects of Australian history.

Foundation/Prep/Kindy to Year 3

The main resource for our youngest students for Unit F.3 is Children in the Past – a collection of stories of children from a range of different historical situations. This resource contains 6 stories of children from Aboriginal Australia more than 1,000 years ago, Ancient Egypt, Ancient Rome, Ancient China, Aztec Mexico and Zulu Southern Africa several hundred years ago. Teachers can choose one or two stories from this resource to study in depth with the students this term. The range of stories allows teachers to tailor the material to their class and ensure that there is no need to repeat the same stories in consecutive years. Students will compare the lives of children in the stories with their own lives – focusing on different aspects in different weeks of the term. In this first week teachers will read the stories to the class and help them find the places described on the OpenSTEM “Our World” map and/or a globe.

Students in integrated Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), will also be examining stories from the Children in the Past resource. Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) will also be comparing their own lives with those of children in the past; however, they will use a collection of stories called Living in the Past, which covers the same areas and time periods as Children in the Past, but provides more in-depth information about a broader range of subject areas and includes the story of the young Tom Petrie, growing up in Brisbane in the 1840s. Students in Year 1 will be considering family structures and the differences and similarities between their own families and the families described in the stories. Students in Year 2 are starting to understand the differences which technology makes to peoples’ lives, especially the technology behind different modes of transport. Students in Year 3 retain a focus on local history. In fact, the Understanding Our World® units for Year 3, term 3 are tailored to match the capital city of the state or territory in which the student lives. Currently units are available for Brisbane and Perth, other capital cities are in preparation. Additional resources are available describing the foundation and growth of Brisbane and Perth, with other cities to follow. Teachers may also prefer to focus on the local community in a smaller town and substitute their own resources for those of the capital city.

Years 3 to 6

Opening of the first Australian Parliament

Older students are focusing on Australian history this term – Year 3 students (Unit 3.7) will be considering the history of their capital city (or local community) within the broader context of Australian history. Students in Year 4 (Unit 4.3) will be examining Australia in the period up to and including the first half of the 19th century. Students in Year 5 (Unit 5.3) examine the colonial period in Australian history; whilst students in Year 6 (Unit 6.3) are investigating Federation and Australia in the 20th century. In this first week of term, students in Years 3 to 6 will be compiling a timeline of Australian history and filling in important events which they already know about or have learnt about in previous units. Students will revisit this timeline in later weeks to add additional information. The main resources for this week are The History of Australia, a broad overview of Australian history from the Ice Age to the 20th century; and the History of Australian Democracy, an overview of the development of the democratic process in Australia.

Governor's House Sydney 1791The rest of the 3rd term will be spent compiling a scientific report on an investigation into an aspect of Australian history. Students in Year 3 will choose a research topic from a list of themes concerning the history of their capital city. Students in Year 4 will choose from themes on Australia before 1788, the First Fleet, experiences of convicts and settlers, including children, as well as the impact of different animals brought to Australia during the colonial period. Students in Year 5 will choose from themes on the Australian colonies and people including explorers, convicts and settlers, massacres and resistance, colonial animals and industries such as sugar in Queensland. Students in Year 6 will choose from themes on Federation, including personalities such as Henry Parkes and Edmund Barton, Sport, Women’s Suffrage, Children, the Boer War and Aboriginal experiences. This research topic will be undertaken as a guided investigation throughout the term.

Planet Linux AustraliaAnthony Towns: Bitcoin: ASICBoost and segwit2x – Background

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100”, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (BIP103, suggested on the mailing list but never formally proposed). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist and would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution down the road (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group does not, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, eg)

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies, including many miners, to both activate segregated witness within the next month and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project, named because it takes segregated witness (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1”, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands-off approach to dictating where Bitcoin will go in future — so, to my mind, a more sensible thing to try would be to work with them to advance the bitcoin ecosystem in whatever direction you want, rather than to try to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core devs’ quality standards; for instance there has never been any code for BIP100, and of the various hard-forking codebases that have arisen from advocates of the hard fork approach — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version, reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation, reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost, means you don’t gain any fee income, reducing your revenue and hence your profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees, increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high-end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons put only half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. On this line of thinking, the emphasis on the lack of communication from core devs or the desire for a hard fork block size increase isn’t the actual goal, so the lack of effort being put into resolving those complaints over the past 18 months by the people making them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…

,

Planet DebianNiels Thykier: Approaching the exclusive “sub-minute” build time club

For the first time in at least two years (and probably even longer), debhelper with the 10.6.2 upload broke the one-minute barrier for build time (by a mere 2 seconds – look for “Build needed 00:00:58, […]”).  Sadly, the result is not deterministic, and the 10.6.3 upload needed 1m + 5s to complete on the buildds.

This is not the result of any optimizations I have done in debhelper itself.  Instead, it is the result of “questionable use of developer time” for the sake of meeting an arbitrary milestone. Basically, I made it possible to parallelize more of the debhelper build (10.6.1) and finally made it possible to run the tests in parallel (10.6.2).

In 10.6.2, I also made most of the tests run against all relevant compat levels.  Previously, they would only run against one compat level (either the current one or a hard-coded older version).

Testing more than one compat level turned out to be fairly simple given a proper test library (I wrote a “Test::DH” module for the occasion).  Below is an example: the new test case that I wrote for Debian bug #866570.

$ cat t/dh_install/03-866570-dont-install-from-host.t
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

use File::Basename qw(dirname);
use lib dirname(dirname(__FILE__));
use Test::DH;
use File::Path qw(remove_tree make_path);
use Debian::Debhelper::Dh_Lib qw(!dirname);

plan(tests => 1);

each_compat_subtest {
  my ($compat) = @_;
  # #866570 - leading slashes must *not* pull things from the root FS.
  make_path('bin');
  create_empty_file('bin/grep-i-licious');
  ok(run_dh_tool('dh_install', '/bin/grep*'));
  ok(-e "debian/debhelper/bin/grep-i-licious", "#866570 [${compat}]");
  ok(!-e "debian/debhelper/bin/grep", "#866570 [${compat}]");
  remove_tree('debian/debhelper', 'debian/tmp');
};

I have cheated a bit on the implementation; while the test runs in a temporary directory, the directory is reused between compat levels (accordingly, there is a “clean up” step at the end of the test).

If you want debhelper to maintain this exclusive (and somewhat arbitrary) property (deterministically), you are more than welcome to help me improve the Makefile. 🙂  I am not sure I can squeeze any more out of it with my (lack of) GNU make skills.


Filed under: Debhelper, Debian

Harald WelteTen years after first shipping Openmoko Neo1973

Exactly 10 years ago, on July 9th, 2007 we started to sell+ship the first Openmoko Neo1973. To be more precise, the webshop actually opened a few hours early, depending on your time zone. Sean announced the availability in this mailing list post

I don't really have to add much to my ten years [of starting to work on] Openmoko anniversary blog post a year ago, but I still thought it worthwhile to point out the tenth anniversary.

Those were exciting times, and there was a lot of pioneering spirit: building a Linux-based smartphone with a 100% FOSS software stack on the application processor, including all drivers, userland and applications - at a time before Android was known or announced. As history shows, we'd been working in parallel with Apple on the iPhone, and Google on Android. Of course there was little chance that a small Taiwanese company could compete with the endless resources of the big industry giants, and the many Neo1973 delays meant we had missed the window of opportunity to be first on the market.

It's sad that Openmoko (and similar projects) have not survived even as special-interest projects for FOSS enthusiasts. Today, virtually all smartphone options are encumbered with far more proprietary blobs than we could ever have imagined back then.

In any case, the tenth anniversary of trying to change the amount of Free Software in the smartphone world is worth some celebration. I'm reaching out to old friends and colleagues, and I guess we'll have somewhat of a celebration party both in Germany and in Taiwan (where I'll be for my holidays from mid-September to mid-October).

Sam VargheseFrench farce spoils great Test series in New Zealand

Referees or umpires can often put paid to an excellent game of any sport by making stupid decisions. When this happens — and it does so increasingly these days — the reaction of the sporting body concerned is to try and paper over the whole thing.

Additionally, teams and their coaches/managers are told not to criticise referees or umpires and to respect them. Hence a lot tends to be covered up.

But the fact is that referees and umpires are employees who are being paid well, especially when the sports they are officiating are high-profile. Do they not need to be competent in what they do?

And you can’t get much higher profile than a deciding rugby test between the New Zealand All Blacks and the British and Irish Lions. A French referee, Romain Poite, screwed up the entire game in the last few minutes through a wrong decision.

Poite awarded a penalty to the All Blacks when a restart found one of the Lions players offside. He then changed his mind and awarded a scrum to the All Blacks instead, using the mafia-like phrase “we will make a deal about this” before announcing the changed decision.

When he noticed the infringement initially, Poite should have held off blowing his whistle and allowed New Zealand the advantage as one of their players had gained possession of the ball and was making inroads into Lions territory. But he did not.

He blew, almost as a reflex action, and stuck his arm up to signal a penalty to New Zealand. It was in a position which was relatively easy to convert and would have given New Zealand almost certain victory as the teams were level 15-all at that time. There were just two minutes left to play when this incident happened.

The New Zealand coach Steve Hansen tried to paper over things at his post-match press conference by saying that his team should have sewn up things much earlier — they squandered a couple of easy chances and also failed to kick a penalty and convert one of their two tries — and could not blame Poite for their defeat.

This kind of talk is diplomacy of the worst kind. It encourages incompetent referees.

One can cast one’s mind back to 2007 and the quarter-finals of the World Cup rugby tournament when England’s Wayne Barnes failed to spot a forward pass and awarded France a try which gave them a 20-18 lead over New Zealand; ultimately the French won the game by this same score.

Barnes was never pulled into line and to this day he seems unable to spot a forward pass. He continues to referee international games and must have quite powerful sponsors to continue.

Hansen did make one valid point though: that there should be consistency in decisions. And that did not happen either over the three tests. It is funny that referees use the same rulebook and interpret things differently depending on whether they are from the southern hemisphere or northern hemisphere.

Is there no chief of referees to thrash out a common ruling for the officials? It makes rugby look very amateurish and spoils the game for the viewer.

Associations that run various sports are often heard complaining that people do not come to watch games. Put a couple more people like Poite to officiate and you will soon have empty stadiums.

Planet DebianSteinar H. Gunderson: Nageru 1.6.1 released

I've released version 1.6.1 of Nageru, my live video mixer.

Now that Solskogen is coming up, there's been a lot of activity on the Nageru front, but hopefully everything is actually coming together now. Testing has been good, but we'll see whether it stands up to the battle-hardening of the real world or not. Hopefully I won't be needing any last-minute patches. :-)

Besides the previously promised Prometheus metrics (1.6.1 ships with a rather extensive set, as well as an example Grafana dashboard) and frame queue management improvements, a surprising late addition was that of a new transcoder called Kaeru (following the naming style of Nageru itself, from the Japanese verb kaeru (換える), which means roughly to replace or exchange—iKnow! claims it can also mean “convert”, but I haven't seen support for this anywhere else).

Normally, when I do streams, I just let Nageru do its thing and send out a single 720p60 stream (occasionally 1080p), usually around 5 Mbit/sec; less than that doesn't really give good enough quality for the high-movement scenarios I'm after. But Solskogen is different in that there's a pretty diverse audience when it comes to networking conditions; even though I have a few mirrors spread around the world (and some JavaScript to automatically pick the fastest one; DNS round-robin is really quite useless here!), not all viewers can sustain such a bitrate. Thus, there's also a 480p variant with around 1 Mbit/sec or so, and it needs to come from somewhere.

Traditionally, I've been using VLC for this, but streaming is really a niche thing for VLC. I've been told it will be an increased focus for 4.0 now that 3.0 is getting out the door, but over the last few years, there's been a constant trickle of little issues that have been breaking my transcoding pipeline. My solution was simply to never update VLC, but now that I've upgraded to stretch, this didn't really work anymore, and I'd been toying with the idea of making a standalone transcoder for a while. (You'd ask “why not the ffmpeg(1) command-line client?”, but it's a bit too centered around files rather than streams; I use it for converting to HLS for iOS devices, but it has a nasty habit of letting I/O block real work, and its HTTP server really isn't meant for production use. I could survive the latter if it supported Metacube and I could feed it into Cubemap, but it doesn't.)

It turned out Nageru had already grown most of the pieces I needed; it had video decoding through FFmpeg, x264 encoding with speed control (so that it automatically picks the best preset the machine can sustain at any given time) and muxing, audio encoding, proper threading everywhere, and a usable HTTP server that could output Metacube. All that was required was to add audio decoding to the FFmpeg input, and then replace the GPU-based mixer and GUI with a very simple driver that just connects the decoders to the encoders. (This means it runs fine on a headless server with no GPU, but it also means you'll get FFmpeg's scaling, which isn't as pretty or fast as Nageru's. I think it's an okay tradeoff.) All in all, this was only about 250 lines of delta, which pales compared to the ~28000 lines of delta that are between 1.3.1 (used for last Solskogen) and 1.6.1. It only supports a rather limited set of Prometheus metrics, and it has some limitations, but it seems to be stable and deliver pretty good quality. I've denoted it experimental for now, but overall, I'm quite happy with how it turned out, and I'll be using it for Solskogen.

Nageru 1.6.1 is on its way into Debian, but it depends on a new version of Movit which needs to go through the NEW queue (a soname bump), so it might be a few days. In the meantime, I'll be busy preparing for Solskogen. :-)

,

Krebs on SecuritySelf-Service Food Kiosk Vendor Avanti Hacked

Avanti Markets, a company whose self-service payment kiosks sit beside shelves of snacks and drinks in thousands of corporate breakrooms across America, has suffered a breach of its internal networks in which hackers were able to push malicious software out to those payment devices, the company has acknowledged. The breach may have jeopardized customer credit card accounts as well as biometric data, Avanti warned.

According to Tukwila, Wash.-based Avanti’s marketing literature, some 1.6 million customers use the company’s break room self-checkout devices — which allow customers to pay for drinks, snacks and other food items with a credit card, fingerprint scan or cash.

An Avanti Markets kiosk. Image: Avanti


Sometime in the last few hours, Avanti published a “notice of data breach” on its Web site.

“On July 4, 2017, we discovered a sophisticated malware attack which affected kiosks at some Avanti Markets. Based on our investigation thus far, and although we have not yet confirmed the root cause of the intrusion, it appears the attackers utilized the malware to gain unauthorized access to customer personal information from some kiosks. Because not all of our kiosks are configured or used the same way, personal information on some kiosks may have been adversely affected, while other kiosks may not have been affected.”

Avanti said it appears the malware was designed to gather certain payment card information including the cardholder’s first and last name, credit/debit card number and expiration date.

Breaches at point-of-sale vendors have become almost regular occurrences over the past few years, but this breach is especially notable as it may also have jeopardized customer biometric data. That’s because the newer Avanti kiosk systems allow users to pay using a scan of their fingerprint.

“In addition, users of the Market Card option may have had their names and email addresses compromised, as well as their biometric information if they used the kiosk’s biometric verification functionality,” the company warned.

On Thursday, KrebsOnSecurity learned from a source at a law firm that the food vending machine in its employee lunchroom was no longer able to accept credit cards. The source said his firm’s information technology personnel told him the credit card functionality had been temporarily disabled because of a breach at Avanti.

Another source told this author that Avanti’s corporate network had been breached, and that Avanti had made the decision to turn off all self-checkouts for now — although the source said customers could still use cash at the machines.

“I was told that about half of the self-checkouts do not have P2Pe,” the source said, on condition of anonymity. P2Pe is short for “point-to-point encryption,” a technology that encrypts sensitive data such as credit card information at every point in the card transaction. In theory, P2Pe should be able to protect card data even if there is malicious software resident on the device or network in question.
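As a rough illustration of what P2Pe buys you (a minimal sketch in Python using the third-party cryptography package, not Avanti's or any vendor's actual implementation; the names reader_key, swipe and card_track_data are made up for the example): the card data is encrypted inside the reader before the kiosk software ever sees it, so memory-scraping malware on the kiosk only ever observes ciphertext.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In real P2Pe deployments the key material lives only in the reader's
# tamper-resistant hardware and the processor's HSM, never on the kiosk itself.
reader_key = AESGCM.generate_key(bit_length=128)

def swipe(card_track_data):
    # Done inside the card reader: encrypt immediately at the point of capture.
    nonce = os.urandom(12)
    return nonce + AESGCM(reader_key).encrypt(nonce, card_track_data, None)

# The kiosk application only ever handles this opaque blob, so malware such as
# PoSeidon scraping the kiosk's memory finds ciphertext rather than track data.
blob = swipe(b'4111111111111111|12/20|JANE DOE')
print(blob.hex())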

Avanti said in its notice that it had shut down payment processing at some locations, and that the company was working with its operators to purge infected systems of any malware from the attack and to take steps to “substantially minimize the risk of a data compromise in the future.”

THE MALWARE

On Friday evening, security firm RiskAnalytics published a blog post detailing a customer experience remarkably similar to the one referenced by the anonymous law firm source above.

RiskAnalytics’s Noah Dunker wrote that the company’s technology on July 4 flagged suspicious behavior by a break room vending kiosk. Further inspection of the device and communications traffic emanating from it revealed it was infected with a family of point-of-sale malware known as PoSeidon (a.k.a. “FindPOS”) that siphons credit card data from point-of-sale devices.

“In our analysis of the incident, it seems most likely that the larger vendor was compromised, and some or all of the kiosks maintained by local vendors were impacted,” Dunker wrote. “We’ve been able to identify at least two smaller vendors with local operations that have been impacted in two different cities though we are not naming any impacted vendors yet, as we’ve been unable to contact them directly.”

KrebsOnSecurity reached out to RiskAnalytics to see if the vendor of the snack machine used by the victim organization he wrote about also was Avanti. Dunker confirmed that the kiosk vendor that was the subject of his post was indeed Avanti.

Dunker noted that much like point-of-sale devices at many restaurant chains, these snack machines usually are installed and managed by third-party technology companies, adding another layer of complexity to the challenge of securing these devices from hackers.

Dunker said RiskAnalytics first noticed something wasn’t right with its client’s break room snack machine after it began sending data out of the client’s network using an SSL encryption certificate that has long been associated with cybercrime activity — including ransomware activity dating back to 2015.

“This is a textbook example of an ‘Internet of Things’ (IoT) threat: A network-connected device, controlled and maintained by a third party, which cannot be easily patched, audited, or controlled by your own IT staff,” Dunker wrote.

ANALYSIS

Credit card machines and point-of-sale devices are favorite targets of malicious hackers, mainly because the data stolen from those systems is very easy to monetize. However, the point-of-sale industry has a fairly atrocious record of building insecure products and trying to tack on security only after the products have already gone to market. Given this history, it’s remarkable that some of these same vendors are now encouraging customers to entrust them with biometric data.

Credit cards can be re-issued, but biometric identifiers are for life. Companies that choose to embed biometric capabilities in their products should be held to a far higher security standard than the one used to protect card data.

For starters, any device that requests, stores or transmits biometric data should at a minimum ensure that the data remains strongly encrypted both at rest and in transit. Judging by Avanti’s warning that some customer biometric data may have been compromised in this breach, it seems this may not have been the case for at least a subset of their products.

I would like to see some industry acknowledgement of this before we start to see more stand-alone payment applications entice users to supply biometric data, but I share Dunker’s fear that we may soon see biometric components added to a whole host of Internet-connected (IoT) devices that simply were not designed with security in mind.

Also, breaches like this illustrate why it’s critically important for organizations to segment their internal networks, and to keep payment systems completely isolated from the rest of the network. However, neither of the victim organizations referenced above appears to have taken this important precaution.

To illustrate this concept a bit further, it may well be that the criminal masterminds behind this attack could have made far more money had they used the remote access they apparently had to these Avanti devices to push ransomware out to Microsoft Windows computers residing on the same internal network as the payment kiosks.

Planet Linux AustraliaLev Lafayette: One Million Jobs for Spartan

Whilst it is a loose metric, our little cluster, "Spartan", at the University of Melbourne ran its 1 millionth job today, almost exactly a year after launch.

The researcher in question is doing their PhD in biochemistry. The project is a childhood asthma study:

"The nasopharynx is a source of microbes associated with acute respiratory illness. Respiratory infection and/ or the asymptomatic colonisation with certain microbes during childhood predispose individuals to the development of asthma.

Using data generated from 16S rRNA sequencing and metagenomic sequencing of nasopharynx samples, we aim to identify which specific microbes and interactions are important in the development of asthma."

Moments like this are why I do HPC.

Congratulations to the rest of the team and to the user community.


Planet DebianUrvika Gola: Outreachy Progress on Lumicall

Lumicall 1.13.0 is released! 😀

Through Lumicall, you can make encrypted calls and send messages using open standards. It uses the SIP protocol to inter-operate with other apps and corporate telephone systems.

During the Outreachy internship period, I worked on the following issues:

I researched creating a white-label version of Lumicall. A few ideas on how the white-label build could be used:

  1. Existing SIP providers can use a white-label version of Lumicall to expand their business and launch a SIP client. This would provide a one-stop shop for them!!
  2. New SIP clients/developers can use the Lumicall white-label version to get the underlying machinery for making encrypted phone calls using the SIP protocol, which will help them focus on the additional functionality they would like to include.

Documentation for implementing white labelling – Link 1 and Link 2

 

Since Lumicall is mainly used to make encrypted calls, there was a need to designate quiet times during which the phone will not make an audible ringing tone. If the user has multiple SIP accounts, they can enable the silent-mode functionality on just one of them, for example the Work account.
Documentation for adding silent mode feature  – Link 1 and Link 2

 

  • Adding 9 Patch Image 

Using Lumicall, users can send SIP messages to each other. Just to improve the UI a little, I added a 9-patch image to the message screen. A 9-patch image is created using 9-patch tools and is saved as imagename.9.png. The image will resize itself according to the text length and font size.

Documentation for 9 patch image – Link


You can try the new version of Lumicall here! You can also learn more about Lumicall in a blog post by Daniel Pocock.
Looking forward to your valuable feedback!! 😀


Planet DebianDaniel Silverstone: Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you’d expect, if you also wanted to grant read access to Jeff then you’d need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules, if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO] and similarly for any auto_group_BAR to [group exact BAR].
  3. You can now upgrade Gitano safely.

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/${user}/). When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where that middle word could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean and in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the what how with match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, the kind of match can be prefixed with a ! to invert it, and for natural-looking rules not is an alias for !is.

This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

,

CryptogramFriday Squid Blogging: Why It's Hard to Track the Squid Population

Counting squid is not easy.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramAn Assassin's Teapot

This teapot has two chambers. Liquid is released from one or the other depending on whether an air hole is covered. I want one.

Krebs on SecurityB&B Theatres Hit in 2-Year Credit Card Breach

B&B Theatres, a company that owns and operates the 7th-largest theater chain in America, says it is investigating a breach of its credit card systems. The acknowledgment comes just days after KrebsOnSecurity reached out to the company for comment on reports from financial industry sources who said they suspected the cinema chain has been leaking customer credit card data to cyber thieves for the past two years.

Headquartered in Gladstone, Missouri, B&B Theatres operates approximately 400 screens across 50 locations in several states, including Arkansas, Arizona, Florida, Kansas, Missouri, Mississippi, Nebraska, Oklahoma and Texas.

In a written statement forwarded by B&B spokesman Paul Farnsworth, the company said B&B Theatres was made aware of a potential breach by a local banking partner in one of its communities.

“Upon being notified we immediately engaged Trustwave, a third party security firm recommended to B&B by partners at major credit card brands, to work with our internal I.T. resources to contain the breach and mitigate any further potential penetration,” the statement reads. “While some malware was identified on B&B systems that dated back to 2015, the investigation completed by Trustwave did not conclude that customer data was at risk on all B&B systems for the entirety of the breach.”

The statement continued:

“Trustwave’s investigation has since shown the breach to be contained to the satisfaction of our processing partners as well as the major credit card brands. B&B Theatres values the security of our customer’s data and will continue to implement the latest available technologies to keep our networks & systems secure into the future.”

In June, sources at two separate U.S.-based financial institutions reached out to KrebsOnSecurity about alerts they’d received privately from the credit card associations regarding lists of card numbers that were thought to have been compromised in a recent breach.

The credit card companies generally do not tell financial institutions in these alerts which merchants got breached, leaving banks and credit unions to work backwards from those lists of compromised cards to a so-called “common point-of-purchase” (CPP).

In addition to lists of potentially compromised card numbers, the card associations usually include a “window of exposure” — their best estimate of how long the breach lasted. Two financial industry sources said initial reports from the credit card companies said the window of exposure at B&B Theatres was between Sept. 1, 2015 and April 7, 2017.

However, a more recent update to this advisory shared by my sources shows that the window of exposure is currently estimated between April 2015 and April 2017, meaning cyber thieves have likely been siphoning credit and debit card data from B&B Theatres customers for nearly two years undisturbed.

Malicious hackers can steal credit card data from organizations that accept cards by hacking into point-of-sale systems remotely and seeding those systems with malicious software that can copy account data stored on a card’s magnetic stripe. Thieves can then use that data to clone the cards and use the counterfeit cards to buy high-priced merchandise from electronics stores and big box retailers.

The statement from B&B Theatres made no mention of whether their credit card systems were set up to handle transactions from more secure chip-based credit and debit cards, which are far more difficult and expensive for thieves to counterfeit.

Under credit card association rules that went into effect in 2015, merchants that do not have the ability to process transactions from chip-based cards assume full liability for all fraudulent charges on purchases involving chip-enabled cards that were instead merely swiped through a regular mag-stripe reader at the point of purchase.

If there is a silver lining in this breach of a major silver screens operator, perhaps it is this: One source in the financial industry told this author that the breach at B&B persisted for so long that a decent percentage of the cards listed in the alerts his employer received from the credit card companies had been listed as compromised in other major breaches and so had already been canceled and re-issued.

Interested in learning the back story of why the United States is embarrassingly the last of the G20 nations to move to more secure chip-based cards? Ever wondered why so many retailers have chip-enabled readers at the checkout counter but are still asking you to swipe? Curious about how cyber thieves have adapted to this bonanza of credit card booty? If you answered “yes” to any of the above questions, you may find these stories useful and/or interesting.

Why So Many Card Breaches?

The Great EMV Fake-Out: No Chip for You!

Visa Delays Chip Deadline for Pumps to 2020

Chip & PIN vs. Chip and Signature

CryptogramDNI Wants Research into Secure Multiparty Computation

The Intelligence Advanced Research Projects Activity (IARPA) is soliciting proposals for research projects in secure multiparty computation:

Specifically of interest is computing on data belonging to different -- potentially mutually distrusting -- parties, which are unwilling or unable (e.g., due to laws and regulations) to share this data with each other or with the underlying compute platform. Such computations may include oblivious verification mechanisms to prove the correctness and security of computation without revealing underlying data, sensitive computations, or both.

My guess is that this is to perform analysis using data obtained from different surveillance authorities.
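For readers unfamiliar with the term, here is a minimal sketch in Python of the simplest multiparty-computation building block, additive secret sharing (a toy example of my own, not anything IARPA is soliciting): each party splits its private value into random shares that individually reveal nothing, yet the shares can still be combined to compute the correct total.

import secrets

P = 2 ** 61 - 1   # all arithmetic is done modulo a public prime

def share(value, n_parties):
    # Split a private value into n random shares that sum to it mod P.
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

private_inputs = [42, 1000, 7]                      # one secret per (mutually distrusting) party
all_shares = [share(v, 3) for v in private_inputs]

# Each party only ever sees one share of every input; individually those shares
# look uniformly random, yet combining the per-party partial sums yields the true
# total without anyone revealing their own number.
partial_sums = [sum(column) % P for column in zip(*all_shares)]
print(sum(partial_sums) % P)                        # 1049 == 42 + 1000 + 7

Real protocols add much more (multiplication, oblivious verification of correctness, resistance to cheating parties), but the sharing trick above is the core idea behind computing on data that nobody is willing to hand over in the clear.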

CryptogramFriday Squid Blogging: Food Supplier Passes Squid Off as Octopus

According to a lawsuit (main article behind paywall), "a Miami-based food vendor and its supplier have been misrepresenting their squid as octopus in an effort to boost profits."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Unmapped Potential

"As an Australian, I demand that they replace one of the two Belgiums with something to represent the quarter of the Earth they missed!" writes John A.

 

Andrew wrote, "{47}, {48}, and I would be {49} if {50} and {51}."

 

"Apparently, DoorDash is listening to their drivers about low wages and they 'fixed the glitch'," write Mark H.

 

"Advertising in Chicago's Ogilvie transportation center in need of recovery," writes Dave T.

 

"On the one hand, I'm interested in how Thunderbird plans to quaduple my drive space, but I'm kind of scared to click on the button," wrote Josh B.

 

"Good thing the folks at Microsoft put the little message in parenthesis," writes Bobbie, "else, you know, I might expect a touch bar to magically appear on my laptop."

 


Planet Linux AustraliaDanielle Madeley: Using the Nitrokey HSM with GPG in macOS

Getting yourself set up in macOS to sign keys using a Nitrokey HSM with gpg is non-trivial. Allegedly (at least some) Nitrokeys are supported by scdaemon (GnuPG’s stand-in abstraction for cryptographic tokens) but it seems that the version of scdaemon in brew doesn’t have support.

However there is gnupg-pkcs11-scd which is a replacement for scdaemon which uses PKCS #11. Unfortunately it’s a bit of a hassle to set up.

There’s a bunch of things you’ll want to install from brew: opensc, gnupg, gnupg-pkcs11-scd, pinentry-mac, openssl and engine_pkcs11.

brew install opensc gnupg gnupg-pkcs11-scd pinentry-mac \
    openssl engine-pkcs11

gnupg-pkcs11-scd won’t create keys, so if you’ve not made one already, you need to generate yourself a keypair. Which you can do with pkcs11-tool:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --keypairgen --key-type rsa:2048 \
    --id 10 --label 'Danielle Madeley'

The --id can be any hexadecimal id you want. It’s up to you to avoid collisions.

Then you’ll need to generate and sign a self-signed X.509 certificate for this keypair (you’ll need both the PEM form and the DER form):

/usr/local/opt/openssl/bin/openssl << EOF
engine -t dynamic \
    -pre SO_PATH:/usr/local/lib/engines/engine_pkcs11.so \
    -pre ID:pkcs11 \
    -pre LIST_ADD:1 \
    -pre LOAD \
    -pre MODULE_PATH:/usr/local/lib/opensc-pkcs11.so
req -engine pkcs11 -new -key 0:10 -keyform engine \
    -out cert.pem -text -x509 -days 3640 -subj '/CN=Danielle Madeley/'
x509 -in cert.pem -out cert.der -outform der
EOF

The flag -key 0:10 identifies the token and key id (see above when you created the key) you’re using. If you want to refer to a different token or key id, you can change these.

And import it back into your HSM:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --write-object cert.der --type cert \
    --id 10 --label 'Danielle Madeley'

You can then configure gnupg-agent to use gnupg-pkcs11-scd. Edit the file ~/.gnupg/gpg-agent.conf:

scdaemon-program /usr/local/bin/gnupg-pkcs11-scd
pinentry-program /usr/local/bin/pinentry-mac

And the file ~/.gnupg/gnupg-pkcs11-scd.conf:

providers nitrokey
provider-nitrokey-library /usr/local/lib/opensc-pkcs11.so

gnupg-pkcs11-scd is pretty nifty in that it will throw up a (pin entry) dialog if your token is not available, and is capable of supporting multiple tokens and providers.

Reload gpg-agent:

gpg-agent --server gpg-connect-agent << EOF
RELOADAGENT
EOF

Check your new agent is working:

gpg --card-status

Get your key handle (grip), which is the 40-character hex string after the phrase KEY-FRIEDNLY (sic):

gpg-agent --server gpg-connect-agent << EOF
SCD LEARN
EOF

Import this key into gpg as an ‘Existing key’, giving the key grip above:

gpg --expert --full-generate-key

You can now use this key as normal, create sub-keys, etc:

gpg -K
/Users/danni/.gnupg/pubring.kbx
-------------------------------
sec> rsa2048 2017-07-07 [SCE]
 1172FC7B4B5755750C65F9A544B80C280F80807C
 Card serial no. = 4B43 53233131
uid [ultimate] Danielle Madeley <danielle@madeley.id.au>

echo -n "Hello World" | gpg --armor --clearsign --textmode

Side note: the curses-based pinentry doesn’t deal with piping content into stdin, which is why you want pinentry-mac.

Terminal console showing a gpg signing command. Over the top is a dialog box prompting the user to insert her Nitrokey token

You can also import your certificate into gpgsm:

gpgsm --import < ca-certificate
gpgsm --learn-card

And that’s it, now you can sign your git tags with your super-secret private key, or whatever it is you do. Remember that you can’t exfiltrate the secret keys from your HSM in the clear, so if you need a backup you can create a DKEK backup (see the SmartcardHSM docs), or make sure you’ve generated that revocation certificate, or just decide disaster recovery is for dweebs.

Don MartiTwo approaches to adfraud, and some good news

Adfraud is a big problem, and we keep seeing two basic approaches to it.

Flight to quality: Run ads only on trustworthy sites. Brands are now playing the fraud game with the "reputation coprocessors" of the audience's brains on the brand's side. (Flight to quality doesn't mean just advertise on the same major media sites as everyone else—it can scale downward with, for example, the Project Wonderful model that lets you choose sites that are "brand safe" for you.)

Increased surveillance: Try to fight adfraud by continuing to play the game of trying to get big-money impressions from the cheapest possible site, but throw more tracking at the problem. Biggest example of this is to move ad money to locked-down mobile platforms and away from the web.

The problem with the second approach is that the audience is no longer on the brand's side. Trying to beat adfraud with technological measures is just challenging hackers to a series of hacking contests. And brands keep losing those. Recent news: The Judy Malware: Possibly the largest malware campaign found on Google Play.

Anyway, I'm interested in and optimistic about the results of the recent Mozilla/Caribou Digital report. It turns out that USA-style adtech is harder to do in countries where users are (1) less accurately tracked and (2) equipped with blockers to avoid bandwidth-sucking third-party ads. That's likely to mean better prospects for ad-supported news and cultural works, not worse. This report points out the good news that the so-called adtech tax is lower in developing countries—so what kind of ad-supported businesses will be enabled by lower "taxes" and "reinvention, not reinsertion" of more magazine-like advertising?

Of course, working in those markets is going to be hard for big US or European ad agencies that are now used to solving problems by throwing creepy tracking at them. But the low rate of adtech taxation sounds like an opportunity for creative local agencies and brands. Maybe the report should have been called something like "The Global South is Shitty-Adtech-Proof, so Brands Built Online There Are Going to Come Eat Your Lunch."

,

Google AdsenseIntroducing AdSense Native ads

Today we’re introducing the new AdSense Native ads -- a suite of ad formats designed to match the look and feel of your site, providing a great user experience for your visitors. AdSense Native ads come in three categories: In-feed, In-article, and Matched content*. They can all be used at once or individually and are designed for:

  • A great user experience: they fit naturally on your site and use high quality advertiser elements, such as high resolution images, longer titles and descriptions, to provide a more attractive experience for your visitors. 
  • A great look and feel across different screen sizes: the ads are built to look great on mobile, desktop, and tablet. 
  • Ease of use: easy-to-use editing tools help you make the ads look great on your site.

Native In-feed opens up new revenue opportunity in your feeds
Available to all publishers, In-feed ads slot neatly inside your feeds, e.g. a list of articles or products on your site. These ads are highly customizable to match the look and feel of your feed content and offer new places to show ads.

Native In-article offers a better advertising experience
Available to all publishers, In-article ads are optimized by Google to help you put great-looking ads between the paragraphs of your pages. In-article ads use high-quality advertising elements and offer a great reading experience to your visitors.

Matched Content* drives more users to your content 
Available to publishers that meet the eligibility criteria, Matched content is a content recommendation tool that helps you promote your content to visitors and potentially increase revenue, page views, and time spent on site. Publishers that are eligible for the “Allow ads” feature can also show relevant ads within their Matched content units, creating an additional revenue opportunity in this placement.

Getting started with AdSense Native ads 
AdSense Native ads can be placed together, or separately, to customize your website’s ad experience. Use In-feed ads inside your feed (e.g. a list of articles, or products), In-article ads between the paragraphs of your pages, and Matched content ads directly below your articles.  When deciding your native strategy, keep the content best practices in mind.

To get started with AdSense Native ads:

  • Sign in to your AdSense account
  • In the left navigation panel, click My ads
  • Click +New ad unit
  • Select your ad category: In-article, In-feed or Matched content

We'd love to hear what you think about these ad formats in the comments section below this post.

Posted by: Violetta Kalathaki - AdSense Product Manager

*This format has already been available to eligible publishers, and is now part of AdSense Native ads.

LongNowThe Hermit Who Inadvertently Shaped Climate-Change Science

Billy Barr was just trying to get away from it all when he went to live at the base of Gothic Mountain in the Colorado wilderness in 1973. He wound up creating an invaluable historical record of climate change. His motivation for meticulously logging the changing temperatures, snow levels, weather, and wildlife sightings? Simple boredom.

Morgan Heim / Day’s Edge Production

Now, the Rocky Mountain Biological Laboratory is using his 44 years of data to understand how climate change affects Gothic Mountain’s ecology, and scaling those learnings to high alpine environments around the world.

Read The Atlantic’s profile on Billy Barr in full (LINK). 

Cory DoctorowHow to write pulp fiction that celebrates humanity’s essential goodness


My latest Locus column is “Be the First One to Not Do Something that No One Else Has Ever Not Thought of Doing Before,” and it’s about science fiction’s addiction to certain harmful fallacies, like the idea that you can sideline the actual capabilities and constraints of computers in order to advance the plot of a thriller.


That’s the idea I turned on its head with my 2008 novel Little Brother, a book in which what computers can and can’t do were used to determine the plot, not the other way around.

With my latest novel Walkaway, I went after another convenient fiction: the idea that in disasters, people are at their very worst, neighbor turning on neighbor. In Walkaway, disaster is the moment at which the refrigerator hum of petty grievances stops and leaves behind a ringing moment of silent clarity, in which your shared destiny with other people trumps all other questions.


In this scenario — which hews much more closely to the truth of humanity under conditions of crisis — the fight isn’t between good guys and bad guys: it’s between people of goodwill who still can’t agree on what to do, and between people who are trying to help and people who are convinced that any attempt to help is a pretense covering up a bid to conquer and subjugate.


In its own way, man-vs-man-vs-nature is every bit as much a fantasy as the technothriller’s impos­sible, plot-expedient computers. As Rebecca Solnit documented in her must-read history book A Paradise Built in Hell, disaster is not typically attended by a breakdown in the social order that lays bare the true bestial nature of your fellow human. Instead, these are moments in which people rise brilliantly to the occasion, digging their neighbors out of the rubble, rushing to give blood, opening their homes to strangers.

Of course, it’s easy to juice a story by ignoring this fact, converting disaster to catastrophe by pitting neighbor against neighbor, and this approach has the added bonus of pandering to the reader’s lurking (or overt) racism and classism – just make the baddies poor and/or brown.

But what if a story made the fact of humanity’s essential goodness the center of its conflict? What if, after a disaster, everyone wanted to help, but no one could agree on how to do so? What if the argument was not between enemies, but friends – what if the fundamental, irreconcilable differences were with people you loved, not people you hated?

As anyone who’s ever had a difficult Christmas dinner with family can attest, fighting with people you love is a lot harder than fighting with people you hate.

Be the First One to Not Do Something that No One Else Has Ever Not Thought of Doing Before [Cory Doctorow/Locus]

Planet DebianUrvika Gola: Speaking at Open Source Bridge’17

Recently, my co-speaker Pranav Jain and I got a chance to speak at the Open Source Bridge conference, which was held in Portland, Oregon!

Pranav talked about GSoC and I talked about Outreachy; together we talked about the Free RTC project Lumicall.
The OSB conference was much more than just a ‘conference’. More than the content of the talks, it had meaning. I am referring to the amazing keynote session by Nicole Sanchez on tech reform. She explained wonderfully the need of the hour: diversity inclusion is not just ‘inclusion’. The focus should be on what comes after the inclusion: growth.

We also met several Debian developers and a Debian mentor for Outreachy (hoping to meet my own mentors someday!!).

Thanks to OSB, I got to meet Outreachy coordinator Sarah Sharp! It was wonderful meeting an Outreachy person! 😀 We talked and exchanged ideas about the programme, and she took beautiful pictures of us delivering the talk.

Urvika Gola at Open Source Bridge. Picture courtesy: Sarah Sharp

The talk ended with an unexpected and very precious handwritten note from Audrey Eschright.


Thank you Debian for giving us a chance to speak at Open Source Bridge and to meet wonderful people in Open Source. ❤


Planet DebianJoachim Breitner: The Micro Two Body Problem

Inspired by the recent PhD comic “Academic Travel” and the not-so-recent xkcd comic “Movie Narrative Charts”, I created the following graphic, which visualizes the travels of an academic couple over the course of 10 months (place names anonymized).

Two bodies traveling the world


Planet DebianHolger Levsen: 20170706-fcmc.tv

a media experiment: fcmc.tv / G20 not welcome

Our view currently every day and night:

No one is illegal!

No football for fascists!

The FC/MC is a collective initiative to change the perception of the G20 in Hamburg - the summit itself and the protests surrounding it. FC/MC is a media experiment, located in the stadium of the amazing St.Pauli football club. We will operate until this Sunday, providing live coverage (text, photos, audio, video), back stories and much much more. Another world is possible!

Disclaimer: I'm not involved in content generation, I'm just doing computer stuff as usual, but that said, I really like the work of those who are! :-)

Worse Than FailureAnnouncements: Build Totally Non-WTF Products at Inedo

As our friends at HIRED will attest, finding a good workplace is tough, for both the employee and the employer. Fortunately, when it comes to looking for developer talent, Inedo has a bit of an advantage: in addition to being a DevOps products company, we publish The Daily WTF.

Not too long ago, I shared a Support Analyst role here and ended up hiring fellow TDWTFer Ben Lubar to join the Inedo team. He's often on the front lines, supporting our customer base; but he's also done some interesting dev projects as well (including a SourceGear Vault to Git migration tool).

Today, we're looking for another developer to work from our Cleveland office. Our code is all in .NET, but we have a lot of integrations; so if you can write C# fairly comfortably but know Docker really well, then that's a great fit. The reason is that, as a software product company that builds tools for other developers, you'll do more than just write C# - in fact, a big part of the job will be resisting the urge to write mountains of code that don't actually solve a real problem. More often than not, a bit of support, a tutorial, an extension/plug-in, and better documentation go a heck of a lot further than new core product code.

We do have a couple of job postings for the position (one on Inedo.com, the other on Indeed), and you're welcome to read those to get a feel for the actual bullet points. But if you're reading this and are interested in learning more, you can use the VIP line and bypass the normal process: just shoot me an email directly at apapadimoulis at inedo dot com with "[TDWTF/Inedo] .NET Developer" as the subject and your resume attached.

Oh, we're also looking for a Community Manager, to help with both The Daily WTF and Inedo communities. So if you know anyone who might be interested in that, send them my way!


CryptogramNow It's Easier than Ever to Steal Someone's Keys

The website key.me will make a duplicate key from a digital photo.

If a friend or coworker leaves their keys unattended for a few seconds, you know what to do.

Planet Linux AustraliaRusty Russell: Broadband Speeds, 2 Years Later

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit/s
  • Average download speed in November 2014 was 22.8Mbit/s
  • Average download speed in November 2016 was 36.2Mbit/s
  • Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit/s
  • Average upload speed in November 2016 was 4.3Mbit/s

So in the last two years we’ve seen a 26% increase in download speed and a 22% increase in upload speed, bringing us down from 36/37% to 33% per annum over the 8 years. The divergence of download and upload improvements is concerning (I previously assumed they were the same, but we have to design for the lesser of the two for a peer-to-peer system).
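If you want to check the arithmetic, these figures appear to be ordinary compound annual growth rates computed from the OFCOM numbers above; a minimal sketch in Python:

# Compound annual growth rate between two measurements taken `years` apart.
def cagr(old, new, years):
    return (new / old) ** (1.0 / years) - 1

# OFCOM fixed-broadband averages quoted above, in Mbit/s.
print("download 2014-2016: {:.0%}".format(cagr(22.8, 36.2, 2)))   # ~26%
print("upload   2014-2016: {:.0%}".format(cagr(2.9, 4.3, 2)))     # ~22%
print("download 2008-2016: {:.0%}".format(cagr(3.6, 36.2, 8)))    # ~33%
print("upload   2008-2016: {:.0%}".format(cagr(0.43, 4.3, 8)))    # ~33%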

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 report and Q1-2017 report.

  • Annual increase in global average speed, Q1 2015 – Q1 2016: 23%
  • Annual increase in global average speed, Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.
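(For the curious, the 19% figure is what you get by compounding the two annual increases and annualising; a quick sketch:)

# Turn consecutive annual growth figures into a single per-annum estimate.
def annualised(rates):
    growth = 1.0
    for r in rates:
        growth *= 1.0 + r
    return growth ** (1.0 / len(rates)) - 1

print("{:.0%}".format(annualised([0.23, 0.15])))  # ~19% per annum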

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: over the last 9 years the Akamai numbers suggest the US has increased by 19% per annum and the UK by almost 21%. The gloss seems to be coming off the UK fixed-broadband numbers, but they still show a 22% upload increase over the last two years. Even Australia and the Philippines have managed almost 21%.

Worse Than FailureOpen Sources


Here's how open-source is supposed to work: A group releases a product, with the source code freely available. Someone finds a problem. They solve the problem, issue a pull request, and the creators merge that into the product, making it better for everyone.

Here's another way open-source is supposed to work: A group releases a product, with the source code freely available. Someone finds a problem, but they can't fix it themselves, so they issue a bug report. Someone else fixes the problem, issues a pull request, and the creators merge that into the product, making it better for everyone.

Here's one way open-source works: Someone creates a product. It gets popular—and I mean really, really popular, practically overnight. The creator didn't ask for this. They have no idea what to do with success. They try their best to keep up, but they can't keep on top of everything all the time. They haven't even set up a build pipeline yet. They're flying by the seat of their pants. One day, unwisely choosing to program with a fever, they commit broken code and push it up to GitHub, triggering an automatic release. Fifteen thousand downstream dependencies find their build broken and show up to scold the creator for not running tests before releasing.

Here's another way open-source works: A group creates a product. It gets popular. Some time later, there are 600 open issues and over 50 pending pull requests. The creator hasn't commented in a year, but people keep vainly trying to improve the product.

Here's another way open-source works: A group creates a product. They decide to avoid the above PR disaster by using some off-site bug tracker. Someone files a bug. Then another bug. Then 5 or 10 more. The creator goes on a rampage, insisting that everyone is using it wrong, and deletes all the bug reports, banning the users who submitted them. The product continues to gain success, and more and more people file bugs, only to find the bug reports summarily closed. Sometimes people get lucky and their reports will be closed, then re-opened when another dev decides to fix the problem.

Here's another way open-source works: Group A creates a product. Group B creates a product, and uses the first product as their support forum. That forum gets hacked. Group B files a bug to Group A, rightly worried about the security of the software they use. After all, if the forum has a remote code exploit, maybe they should move to something newer, maybe written in Ruby instead of PHP. One of the developers from Group A, today's submitter, offers to investigate.

Many forums allow admins to edit the themes for the site directly; for example, NodeBB provides admins a textbox in which they can paste CSS to tweak the theme to their liking. This forum figures that since the admins are already on the wrong side of the airtight hatchway, they can inject bits of PHP code directly into the forum header. Code like, say, saving a poison payload to disk that creates an endpoint that accepts arbitrary file uploads so they can root the box.

But how did the hacker get access to that admin panel? Surely that's a security flaw, right? Turns out they'd found a weak link in the security chain and applied just enough force to break their way in. If you work in security, I'm sure you won't be surprised to hear the flaw: one of the admins had a weak password, one right out of a classic dictionary list.

Here's another way open-source works: 15% of the NPM ecosystem was controlled by people with weak passwords. Someone hacks them all, tells NPM how they did it, and gets a mass forced-reset on everyone's passwords. Everyone is more secure.

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

Planet DebianThadeu Lima de Souza Cascardo: News on Debian on apexqtmo

I had been using my Samsung Galaxy S Relay 4G for almost three years when I decided to get a new phone. I would use this new phone for daily tasks and take the chance to get a new model for hacking in the future. My apexqtmo would still be my companion and would now be more available for real hacking.

And so it also happened that its power button got stuck. It was not the first time, but now it would happen every so often, and would require me to disassemble it. So I managed to remove the plastic button and leave it with a hole so I could press the button with a screwdriver or a paperclip. That was the excuse I needed to get it running Debian only.

Though it's now always plugged into my laptop, I got the chance to hack on it in my scarce free time. As I managed to get a kernel I built myself running on it, I started fixing things like enabling devtmpfs. I didn't insist much on running systemd, though, and kept with System V. The Xorg issues were either on the server or the client, depending on which client I ran.

I decided to give a chance to running the Android userspace on a chroot, but gave up after some work to get some firmware loaded.

I managed to get the ALSA controls right after saving them inside a chroot on my CyanogenMod system. Then, restoring them on Debian allowed me to play songs. Unfortunately, it seems I broke the audio jack when disassembling it. Otherwise, it would have been a great portable audio player. I even wrote a small program that would allow me to control mpd by swiping on the touchscreen.

Then, as the Debian release approached, I decided to investigate the framebuffer issue closely. I ended up finding out that it was really a bug in the driver, and after fixing it, the X server and client crashes were gone. It was beautiful to get some desktop environment running with the right colors, get a calculator started and really use the phone as a mobile device.

There are two lessons or findings here for me. The first one is that the current environments are really lacking. Even something like GPE can't work. The buttons are tiny, and scrollbars are still the only way of scrolling, some of the time. No automatic virtual keyboards. So, there needs to be some investment in the existing environments, and maybe even the development of new environments for these kinds of devices. This was something I expected somehow, but it's still disappointing to know that we had so many of those developed in the past and now gone. I really miss Maemo. Running something like Qtopia would mean grabbing a very old unmaintained piece of software not available in Debian. There is still matchbox, but it's as subpar as the others I tested.

The second lesson is that building a userspace to run on old kernels will still hit the problem of broken drivers. In my particular case, unless I wrote code to use Ion instead of the framebuffer, I would have had that problem. Or it would require me to add code to xorg-xserver that is not appropriate. Or fix the kernel drivers in the available kernel source code. But this does not scale much better than doing the right thing and adding upstream support for these devices.

So, I decided it was time I started working on upstream support for my device. I have it in progress and may send some upstream patches soon. I have USB and MMC/SDcard working fine. DRM is still a challenge, but thanks to Rob Clark, it's something I expect to get working soon, and after that, I would certainly celebrate. Maybe even consider starting the work on other devices a little sooner.

Trying to review my post on GNU on smartphones, here is where I would put some of the status of my device and some extra notes.

On Halium

I am really glad people started this project. This was one of the things I criticized: that though Ubuntu Phone and FirefoxOS built on Android userspace, they were not easily portable to many devices out there.

But as I am looking for a more pure GNU experience, let's call it that, Halium does not help much in that direction. But I'd like to see it flourish and allow people to use more OSes on more devices.

Unfortunately, it suffers from similar problems as the strategy I was trying to go with. If you have a device with a very old kernel, you won't be able to run some of the latest userspace, even with Android userspace help. So, lots of devices would be left unsupported, unless we start working on some upstream support.

On RYF Hardware

My device is one of the worst out there. It's a modem that has a peripheral CPU. Much has already been said about Qualcomm chips being some of the least freedom-friendly. Ironically, they have some of the best upstream support, as far as I found out while doing this upstreaming work. Guess we'll have to wait for opencores, openrisc and risc-v to catch up here.

Diversity

Though I have been experimenting with Debian, the upstream work would sure benefit lots of other OSes out there, mainly GNU+Linux based ones, but also other non-GNU Linux based ones. Not so much for other kernels.

On other options

After the demise of Ubuntu Phone, I am glad to see UBports catching up. I hope the project is sustainable and produces more releases for more devices.

Rooting

This needs documentation. Most of the procedures rely on booting a recovery system, which means we are already past the root requirement. We simply boot our own system, then. However, for some debugging strategies, getting root on the OEM system is useful. So, try to get root on your system, but beware of malware out there.

Booting

Most of these devices will have their bootloaders in there. They may be unlocked, allowing unsigned kernels to be booted. Replacing these bootloaders is still going to be a challenge for a future phase. Adding a second bootloader, though, one that is freedom-respecting and gives the user more control over the boot step, is possible once you have good upstream support. One could either use kexec for that, or try to use the same device tree for U-Boot, and use the knowledge of the device drivers for Linux when writing drivers for U-Boot, GRUB or Libreboot.

Installation

If you have root on your OEM system, this is something that could be worked on. Otherwise, there is magic-device-tool, whose approach is one that could be used.

Kernels

While I am working on adding Linux upstream support for my device, it would be wonderful to see more kernels supporting those gadgets. Hopefully, some of the device driver writing and reverse engineering could help with that, though I am not too optimistic. But there is hope.

Basic kernel drivers

Adding the basic support, like USB and MMC, after clocks, gpios, regulators and what not, is the first step on a long road. But it would allow using the device as a board computer, under better control of the user. Hopefully, lots of electronic garbage out there would have some use as control gadgets. Instead of buying a new board, just grab your old phone and put it to some nice use.

Sensors, input devices, LEDs

These are usually easy too. Some sensors may depend on your modem or some userspace code that is not that easily reverse engineered. But others would just require some device tree work, or some small input driver.

Graphics

Here, things may get complicated. Even basic video output is something I have some trouble with. Thanks to some other people's work, I have hope at least for my device. And using the vendor's linux source code, some framebuffer should be possible, even some DRM driver. But OpenGL or other 3D acceleration support requires much more work than that, and, at this moment, it's not something I am counting on. I am thankful for the work lots of people have been doing on this area, nonetheless.

Wireless

Be it Wifi or Bluetooth, things get ugly here. The vendor driver might be available. Rewriting it would take a long time. Even then, it would most likely require some non-free firmware loading. Using USB OTG here might be an option.

Modem/GSM

The work of the Replicant folks on that is what gives me some hope that it might be possible to get this working. Something I would leave to after I have a good interface experience in my hands.

GPS

Problem is similar to the Modem/GSM one, as some code lives in userspace, sometimes talking to the modem is a requirement to get GPS access, etc.

Shells

This is where I would like to see new projects, even if they work on current software to get them more friendly to these form factors. I consider doing some work there, though that's not really my area of expertise.

Next steps

For me, my next steps are getting what I have working upstream, keep working on DRM support, packaging GPE, then experimenting with some compositor code. In the middle of that, trying to get some other devices started.

But documenting some of my work is something I realized I need to do more often, and this post is some try on that.

,

LongNowThe AI Cargo Cult: The Myth of a Superhuman Artificial Intelligence

In a widely-shared essay first published in Backchannel, Kevin Kelly, a Long Now co-founder and Founding Editor of Wired Magazine, argues that the inevitable rise of superhuman artificial intelligence—long predicted by leaders in science and technology—is a myth based on misperceptions without evidence.

Kevin is now Editor at Large at Wired and has spoken in the Seminar and Interval speaking series several times, including on his most recent book, The Inevitable—sharing his ideas on the future of technology and how our culture responds to it. Kevin was part of the team that developed the idea of the Manual for Civilization, making a series of selections for it, and has also put forth a similar idea called the Library of Utility.

Read Kevin Kelly’s essay in full (LINK).

Planet DebianShirish Agarwal: Debian 9 release party at Co-hive

Dear all, This would be a biggish one so please have a chai/coffee or something stronger as it would take a while.

I will start with an attempt at some alcohol humor. Some people know that I have had a series of convulsive epileptic seizures; I had shared bits about it in another post as well. Recovery is going through allopathic medicines as well as physiotherapy, which I go to every alternate day.

One of the exercises that I do in physiotherapy sessions is walking cross-legged on a line. While doing it today, it occurred to me that this is the same test a police inspector would give if they caught you drinking or suspected you of drunk driving. While some in the police force now also have breath-analyzer machines to determine alcohol content in the breath and body (and there are ways to deceive them), the above exercise is still an integral part of the examination. Now, a few of my friends who do drink have made an expertise of walking on a line, while I, due to this neurological disorder, still have issues walking on a line. So while I don't see a drinking party in the near future (6 months at least), if I ever do get caught with a friend who is drunk (by association I would also be a suspect) by a policeman who doesn't have a breath-analyzer machine, I could be in a lot of trouble. In addition, if I tell him I have a neurological disorder I am bound to land up in a cell, as he will think I'm trying to make a fool of him. If you are able to picture the situation, I'm sure you will get a couple of laughs.

Now coming to the release party, I was a bit apprehensive. It had been quite a while since I had faced an audience, and just coming out of illness I didn't know how well or ill-prepared I would be for the session. I had given up exercising two days before the event as I wanted to have a loose body, loose limbs all over. I also took a mild sedative (1mg) the day before just so I would have a good night's sleep and be able to focus all my energies on the big day. (I don't recommend sedatives unless the doctor prescribes them; I did have a doctor's prescription, so I was able to have a nice sleep.) I didn't do any Debian study as I hoped my somewhat long experience with both Ubuntu and Debian would help me.

On the d-day, I had asked Dhanesh (the organizer of the event) to accompany me from home to the venue and back, as I was unsure of the journey; it is around 9-10 km from my place, and while I had been to the venue a couple of years back, I had only a mild remembrance of the place.

Anyways, Dhanesh complied with my request and together we reached the venue before the appointed 1500 hrs. As it was a Sunday, I was unsure how many people would turn up, as people usually like to cozy up on a Sunday.

Around 1530 hrs everybody showed up

The whole group

It included a couple of co-organizers, with most people being newbies, so while I had thought of showing how to contribute via reporting bugs or putting up patches, I had to set that aside and explain how things work in the free software and open-source world. We didn’t get into the debate of free vs open-source or free/open-source/open-core as that would have been counter-productive and probably confusing for newbies.

We did however share the debian tree structure


I was stumped by /var and /proc. I hadn't taken my lappy as it is expensive (a Lenovo ThinkPad I love very dearly) and I was unsure if I would be able to take care of it (weight-wise). Dhanesh had told me that he had Debian on his lappy + zsh, both of which are my favourites.

While back at home I realized /var has been relegated to having apache/server logs and stuff like that, I do recall (vaguely) a thread on debian-devel about removing /var although that discussion went nowhere.

One of the issues we hit early on was that nobody had enough space on their hdd to install Debian comfortably. It took a while to get an external hdd and push some of the content from somebody's lappy to the external drive to make space for the installation.

I did share the /, /home and optional swap layout, while Dhanesh helped by sharing about having a separate /boot partition as well, which I had forgotten. I can't even begin to remember the number of times having a separate /boot partition has helped me on all of my systems.

That done, we did try to install/show Debian 9 without network but were hit with #866629, so we weren't able to complete the installation. We had got the latest 9.0.1, as I had seen Steve's message about issues with the live images, but even then we were hit with the above bug. As shared in the bug history, it might be a good idea to have the last couple of RCs (Release Candidate releases) as pre-release parties so people have a chance to report bugs and get them fixed. It was also nice to see Praveen raising the seriousness of the bug shared above.

The next day I also filed #866971 as I had mistaken the release to be a regular release and not the live instance. I have pushed my rationale and hope the right thing happens.

As the installation takes a bit of time, we used it to talk about Google's Summer of Code and the absence of Debian from GSoC this year. I should have directed them to an itfoss article I wrote some time ago, and also shared that Debian is looking at having a similar apprenticeship within Debian itself. There were questions about why Debian would want to take on the administrative overhead; my response was that it probably has to do with Debian wanting more control over the process. While Debian has had some great luck getting the number of seats it asks for in GSoC, the ball is always in Google's court. Having that uncertainty removed would be beneficial to Debian both in the short term and the long term. One interesting stat that was shared with me was that something like 89 students from India had been selected for GSoC this year, even with the lower stipend, and that is a far cry from the 10-15 students who are able to complete GSoC every year. Let's see what happens this year.

One interesting fact/gossip I shared with them is that Google uses a modified/forked Debian internally which it will probably never release.

There were quite a few queries about GSoC, which led into how contributions are made and how git has become the master of all VCS (Version Control Systems). I do have pet bugs about git, the biggest one being that in places/countries without much bandwidth, git fails many a time while cloning. I *think* the bug has been reported enough times, but I haven't seen any improvements yet. There is a need for something like wget and wget -c, so that git just works even under the most trying circumstances (bandwidth-wise).

We also shared what a repo is and Dhanesh helpfully did a git shortlog to show people how commits are commented. It was from his official work so he couldn’t show anything apart from the shortlog.

I also shared how non-technical people can help with regard to documentation and artwork, but didn't get into any concrete examples, although wiki.debian.org would have been a good start. I don't know it for a fact, but it seems that moinmoin (the current wiki solution used by Debian) has a sectional-edit feature, which I used some time back. If moinmoin has got this feature then it is on par with mediawiki in that respect, although I do know that mediawiki has a lot more features going for it.

Dhanesh did manage to install a Debian 8.0.7 (while 8.8.0 was the last release) which might have been better. The debian-installer (d-i) looks the same even though I know there are improvements with respect to UEFI and many updated components.

There are and were many bugs which I wanted to share but didn't know if it was the right forum or not, e.g. #597176, which probably needs improvements in other libraries, along with the JPEG 2000 support bug #604859, all of which I'm subscribed to.

We also had a talk about code documentation, code readability and Python (as a lot in Debian is based on Python). I had actually wanted to show metrics.debian.net, but had seen it was down the day before; I checked again and it is still down now, hence I reported it, and the report will hopefully show up on the Debian BTS some time soonish. The idea was to share that Debian has/uses other programming languages as well, and is only limited by people interested in a specific language being willing to package and maintain packages in that language.

As Dhanesh was part of Firefox OS and a Campus Ambassador we did discuss what went wrong in Firefox OS deployment and Firefox as a whole, specifically between the community and Mozilla the Corporation.

There was a lot that I wasn't able to share as I had become tired, otherwise I would have shared that the ncurses debian-installer interface, although a bit ugly to look at, is still good, as it has speech for visually differently abled people as well as people with poor sight.

There is and was lots to share about Debian, but I was conscious that I might over-burden the audience, and my stamina was also stretched.

We also talked about automated builds but didn't show either travis.debian.net or jenkins.debian.net

We also had a small discussion about Artificial Intelligence, Robotics and how the outlook for people coming in Computer Science looks today.

One thing I forgot to share: we did do the cake-cutting together

Dhanesh-Shirish cutting cake together

Dhanesh is on the left while I’m on the right.

We did have a cute 3-4 year old boy as our guest as well, but we didn't get good pictures of him.

Lastly, the Debian cake itself

Debian 9 cake

We also talked about the kernel and subsystem maintainers and how Linus is the overlord.

I look forward to comments. I am hoping Dhanesh might recollect some points that I might have missed/forgotten.

Update 07/07/17 – All programming-language stats can be seen here


Filed under: Miscellaneous Tagged: #alcohol humor, #co-hive, #crystal-gazing, #Debian bugs, #Debian cake, #Debian-release-party-9-pune, #live-installer, #moinmoin, #planet-debian, debian-installer, Pune

CryptogramDubai Deploying Autonomous Robotic Police Cars

It's hard to tell how much of this story is real and how much is aspirational, but it really is only a matter of time:

About the size of a child's electric toy car, the driverless vehicles will patrol different areas of the city to boost security and hunt for unusual activity, all the while scanning crowds for potential persons of interest to police and known criminals.

Cory DoctorowMy presentation from ConveyUX

Last March, I traveled to Seattle to present at the ConveyUX conference, with a keynote called “Dark Patterns and Bad Business Models”, the video for which has now been posted: “The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

Cory DoctorowInterview with Wired UK’s Upvote podcast

Back in May, I stopped by Wired UK while on my British tour for my novel Walkaway to talk about the novel, surveillance, elections, and, of course, DRM. (MP3)

Planet DebianPatrick Matthäi: Bug in Stretch Linux vmxnet3 driver

Hey,

Am I the only one experiencing #864642 (vmxnet3: Reports suspect GRO implementation on vSphere hosts / one VM crashes)? Unluckily I still have not got any answer on my report, which would let me help track this issue down.

Help is welcome :)

Rondam RamblingsA brief history of political discourse in the United States

1776 When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel

Sociological ImagesPolitical baseball: Happiness, unhappiness, and whose team is winning

Are conservatives happier than liberals? Arthur Brooks thinks so. Brooks is president of the American Enterprise Institute, a conservative think tank. And he’s happy, or at least he comes across as happy in his monthly op-eds for the Times. In those op-eds, he sometimes claims that conservatives are generally happier. When you’re talking about the relation between political views and happiness, though — as I do — you ought to consider who is in power. Otherwise, it’s like asking whether Yankee fans are happier than Red Sox fans without checking the AL East standings.

Now that the 2016 GSS data is out, we can compare the happiness of liberals and conservatives during the Bush years and the Obama years. The GSS happiness item asks, “Taken all together, how would you say things are these days –  would you say that you are very happy, pretty happy, or not too happy?”

On the question of who is “very happy,” it looks as though Brooks is onto something. More of the people on the right are very happy in both periods. But also note that in the Obama years about 12% of those very happy folks (five out of 40) stopped being very happy. But something else was happening during the Obama years. It wasn’t just that some of the very happy conservatives weren’t quite so happy. The opposition to Obama was not about happiness. Neither was the Trump campaign with its unrelenting negativism. What it showed was that a lot of people on the right were angry. None of that sunny Reaganesque “Morning in America” for them. That feeling is reflected in the numbers who told the GSS that they were “not too happy.”

Among extreme conservatives, the percent who were not happy doubled during the Obama years. The increase in unhappiness was about 60% for those who identified themselves as “conservative” (neither slight nor extreme). In the last eight years, the more conservative a person’s views, the greater the likelihood of being not too happy. The pattern is reversed for liberals during the Bush years: unhappiness rises as you move further left.

The graphs also show that for those in the middle of the spectrum – about 60% of the people – politics makes no discernible change in happiness. Their proportion of “not too happy” remained stable regardless of who was in the White House. Those middle categories do give support to Brooks’s idea that conservatives are generally somewhat happier. But as you move further out on the political spectrum, the link between political views and happiness depends much more on which side is winning. Just as at Fenway or the Stadium, the fans who are cheering – or booing – the loudest are the ones whose happiness will be most affected by their team’s won-lost record.
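For anyone who wants to rerun this sort of comparison, the GSS microdata is freely downloadable and the tabulation is only a few lines; a rough sketch with pandas, assuming a CSV extract with the usual YEAR, POLVIEWS and HAPPY variables (the file name and codings here are assumptions; check the GSS codebook):

import pandas as pd

# Assumed GSS extract with YEAR, POLVIEWS (1 = extremely liberal ... 7 = extremely
# conservative) and HAPPY (1 = very happy, 2 = pretty happy, 3 = not too happy).
gss = pd.read_csv("gss_extract.csv")

def pct_not_too_happy(frame):
    return round((frame["HAPPY"] == 3).mean() * 100, 1)

eras = {"Bush years": range(2001, 2009), "Obama years": range(2009, 2017)}
for label, years in eras.items():
    era = gss[gss["YEAR"].isin(years)]
    print(label)
    print(era.groupby("POLVIEWS").apply(pct_not_too_happy))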

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

Sociological ImagesAlgorithms replace your biases with someone else’s

Originally posted at Scatterplot.

There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. As Cathy O’Neil nicely shows in her Weapons of Math Destruction (and associated columns), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and criminal justice. The algorithms O’Neil studies are characterized by their opacity, their scale, and their capacity to damage.

Much of the public debate has focused on a class of algorithms employed in criminal justice, especially in sentencing and parole decisions. As scholars like Bernard Harcourt and Jonathan Simon have noted, criminal justice has been a testing ground for algorithmic decisionmaking since the early 20th century. But most of these early efforts had limited reach (low scale), and they were often published in scholarly venues (low opacity). Modern algorithms are proprietary, and are increasingly employed to decide the sentences or parole decisions for entire states.

“Code of Silence,” Rebecca Wexler’s new piece in Washington Monthly, explores one such influential algorithm: COMPAS (also the subject of an extensive, if contested, ProPublica report). Like O’Neil, Wexler focuses on the problem of opacity. The COMPAS algorithm is owned by a for-profit company, Northpointe, and the details of the algorithm are protected by trade secret law. The problems here are both obvious and massive, as Wexler documents.

Beyond the issue of secrecy, though, one issue struck me in reading Wexler’s account. One of the main justifications for a tool like COMPAS is that it reduces subjectivity in decisionmaking. The problems here are real: we know that decisionmakers at every point in the criminal justice system treat white and black individuals differently, from who gets stopped and frisked to who receives the death penalty. Complex, secretive algorithms like COMPAS are supposed to help solve this problem by turning the process of making consequential decisions into a mechanically objective one – no subjectivity, no bias.

But as Wexler’s reporting shows, some of the variables that COMPAS considers (and apparently considers quite strongly) are just as subjective as the process it was designed to replace. Questions like:

Based on the screener’s observations, is this person a suspected or admitted gang member?

In your neighborhood, have some of your friends or family been crime victims?

How often do you have barely enough money to get by?

Wexler reports on the case of Glenn Rodríguez, a model inmate who was denied parole on the basis of his puzzlingly high COMPAS score:

Glenn Rodríguez had managed to work around this problem and show not only the presence of the error, but also its significance. He had been in prison so long, he later explained to me, that he knew inmates with similar backgrounds who were willing to let him see their COMPAS results. “This one guy, everything was the same except question 19,” he said. “I thought, this one answer is changing everything for me.” Then another inmate with a “yes” for that question was reassessed, and the single input switched to “no.” His final score dropped on a ten-point scale from 8 to 1. This was no red herring.

So what is question 19? The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”

Advocates of predictive models for criminal justice use often argue that computer systems can be more objective and transparent than human decisionmakers. But New York’s use of COMPAS for parole decisions shows that the opposite is also possible. An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.

This story was all too familiar to me from Emily Bosk’s work on similar decisionmaking systems in the child welfare system where case workers must answer similarly subjective questions about parental behaviors and problems in order to produce a seemingly objective score used to make decisions about removing children from home in cases of abuse and neglect. A statistical scoring system that takes subjective inputs (and it’s hard to imagine one that doesn’t) can’t produce a perfectly objective decision. To put it differently: this sort of algorithmic decisionmaking replaces your biases with someone else’s biases.
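The Rodríguez case is easy to see in miniature: when a single subjective yes/no item carries enough weight, flipping it moves an otherwise identical case across the whole scale. A toy sketch (the items and weights are invented for illustration and have nothing to do with the real, proprietary COMPAS model):

# Invented weights for a toy 1-10 risk score; not the real COMPAS model.
WEIGHTS = {
    "prior_infractions": 0.5,    # count taken from the official disciplinary record
    "q19_notable_issues": 7.0,   # the screener's subjective yes/no impression
}

def toy_score(prior_infractions, q19_notable_issues):
    raw = (WEIGHTS["prior_infractions"] * prior_infractions
           + WEIGHTS["q19_notable_issues"] * q19_notable_issues)
    return max(1, min(10, round(raw)))

# Two cases identical on the record, differing only in the subjective answer:
print(toy_score(prior_infractions=2, q19_notable_issues=1))  # 8
print(toy_score(prior_infractions=2, q19_notable_issues=0))  # 1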

Dan Hirschman is a professor of sociology at Brown University. He writes for scatterplot and is an editor of the ASA blog Work in Progress. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianYakking: Open your minds, and your data (formats)

Whether I am writing my own program, or choosing between existing solutions, one aspect of the decision making process which always weighs heavily on my mind is that of the input and output data formats.

I have been spending a lot of my work days recently working on converting data from a proprietary tool's export format into another tool's input format. This has involved a lot of XML diving, a lot more swearing, and a non-trivial amount of pain. This drove home to me once more that the format of input and output of data is such a critical part of software tooling that it must weigh as heavily as, or perhaps even more heavily than, the software's functionality.

As Tanenbaum tells us, the great thing about standards is that there are so many of them to choose from. XKCD tells us how that comes about. Data formats are many and varied, and suffer from specifications ranging from ones as vague as "plain text" to things as complex as the structure of data stored in custom database formats.

If you find yourself writing software which requires a brand new data format then, while I might caution you to examine carefully if it really does need a new format, you should ensure that you document the format carefully and precisely. Ideally give your format specification to a third party and get them to implement a reader and writer for your format, so that they can check that you've not missed anything. Tests and normative implementations can help prop up such an endeavour admirably.

Be sceptical of data formats which have "implementation specific" areas, or "vendor specific extension" space because this is where everyone will put the most important and useful data. Do not put such beasts into your format design. If you worry that you've made your design too limiting, deal with that after you have implemented your use-cases for the data format. Don't be afraid to version the format and extend later; but always ensure that a given version of the data format is well understood; and document what it means to be presented with data in a format version you do not normally process.
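One concrete way to keep that last promise is to make the version a mandatory field and to fail loudly on anything you do not recognise; a minimal sketch (the field names are invented for illustration):

import json

SUPPORTED_VERSIONS = {1, 2}

def load_record(path):
    """Read a record, refusing politely if the format version is unknown."""
    with open(path) as f:
        data = json.load(f)
    version = data.get("format_version")
    if version not in SUPPORTED_VERSIONS:
        raise ValueError("unknown format_version {!r}; this reader understands {}"
                         .format(version, sorted(SUPPORTED_VERSIONS)))
    if version == 1:
        # Version 1 stored a single "value"; normalise it to the version-2 shape.
        data["values"] = [data.pop("value")]
    return data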

Phew.


Given all that, I exhort you to consider carefully how your projects manage their input and output data, and for these things to be uppermost when you are choosing between different solutions to a problem at hand. Your homework is, as you may have grown to anticipate at this time, to look at your existing projects and check that their input and output data formats are well documented if appropriate.

CryptogramCommentary on US Election Security

Good commentaries from Ed Felten and Matt Blaze.

Both make a point that I have also been saying: hacks can undermine the legitimacy of an election, even if there is no actual voter or vote manipulation.

Felten:

The second lesson is that we should be paying more attention to attacks that aim to undermine the legitimacy of an election rather than changing the election's result. Election-stealing attacks have gotten most of the attention up to now -- and we are still vulnerable to them in some places -- but it appears that external threat actors may be more interested in attacking legitimacy.

Attacks on legitimacy could take several forms. An attacker could disrupt the operation of the election, for example, by corrupting voter registration databases so there is uncertainty about whether the correct people were allowed to vote. They could interfere with post-election tallying processes, so that incorrect results were reported -- an attack that might have the intended effect even if the results were eventually corrected. Or the attacker might fabricate evidence of an attack, and release the false evidence after the election.

Legitimacy attacks could be easier to carry out than election-stealing attacks, as well. For one thing, a legitimacy attacker will typically want the attack to be discovered, although they might want to avoid having the culprit identified. By contrast, an election-stealing attack must avoid detection in order to succeed. (If detected, it might function as a legitimacy attack.)

Blaze:

A hostile state actor who can compromise a handful of county networks might not even need to alter any actual votes to create considerable uncertainty about an election's legitimacy. It may be sufficient to simply plant some suspicious software on back end networks, create some suspicious audit files, or add some obviously bogus names to the voter rolls. If the preferred candidate wins, they can quietly do nothing (or, ideally, restore the compromised networks to their original states). If the "wrong" candidate wins, however, they could covertly reveal evidence that county election systems had been compromised, creating public doubt about whether the election had been "rigged". This could easily impair the ability of the true winner to effectively govern, at least for a while.

In other words, a hostile state actor interested in disruption may actually have an easier task than someone who wants to undetectably steal even a small local office. And a simple phishing and trojan horse email campaign like the one in the NSA report is potentially all that would be needed to carry this out.

Me:

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and -- by extension -- our democracy.

And, finally, a report from the Brennan Center for Justice on how to secure elections.

Krebs on SecurityWho is the GovRAT Author and Mirai Botmaster ‘Bestbuy’?

In February 2017, authorities in the United Kingdom arrested a 29-year-old U.K. man on suspicion of knocking more than 900,000 Germans offline in an attack tied to Mirai, a malware strain that enslaves Internet of Things (IoT) devices like security cameras and Internet routers for use in large-scale cyberattacks. Investigators haven’t yet released the man’s name, but news reports suggest he may be better known by the hacker handle “Bestbuy.” This post will follow a trail of clues back to one likely real-life identity of Bestbuy.

At the end of November 2016, a modified version of Mirai began spreading across the networks of German ISP Deutsche Telekom. This version of the Mirai worm spread so quickly that the very act of scanning for new infectable hosts overwhelmed the devices doing the scanning, causing outages for more than 900,000 customers. The same botnet had previously been tied to attacks on U.K. broadband providers Post Office and Talk Talk.


Security firm Tripwire published a writeup on that failed Mirai attack, noting that the domain names tied to servers used to coordinate the activities of the botnet were registered variously to a “Peter Parker” and “Spider man,” and to a street address in Israel (27 Hofit St). We’ll come back to Spider Man in a moment.

According to multiple security firms, the Mirai botnet responsible for the Deutsche Telekom outage was controlled via servers at the Internet address 62.113.238.138. Farsight Security, a company that maps which domain names are tied to which Internet addresses over time, reports that this address has hosted just nine domains.

The only one of those domains that is not related to Mirai is dyndn-web[dot]com, which according to a 2015 report from BlueCoat (now Symantec) was a domain tied to the use and sale of a keystroke logging remote access trojan (RAT) called “GovRAT.” The trojan is documented to have been used in numerous cyber espionage campaigns against governments, financial institutions, defense contractors and more than 100 corporations.

Another report on GovRAT — this one from security firm InfoArmor — shows that the GovRAT malware was sold on Dark Web cybercrime forums by a hacker or hackers who went by the nicknames BestBuy and “Popopret” (some experts believe these were just two different identities managed by the same cybercriminal).

The hacker "bestbuy" selling his Govrat trojan on the dark web forum "Hell." Image: InfoArmor.

The hacker “bestbuy” selling his GovRAT trojan on the dark web forum “Hell.” Image: InfoArmor.

GovRAT has been for sale on various other malware and exploit-related sites since at least 2014. On oday[dot]today, for example, GovRAT was sold by a user who picked the nickname Spdr, and who used the email address spdr01@gmail.com.

Recall that the domains used to control the Mirai botnet that hit Deutsche Telekom all had some form of Spider Man in the domain registration records. Also, recall that the controller used to manage the GovRAT trojan and that Mirai botnet were both at one time hosted on the same server with just a handful of other (Mirai-related) domains.

According to a separate report (PDF) from InfoArmor, GovRAT also was sold alongside a service that allows anyone to digitally sign their malware using code-signing certificates stolen from legitimate companies. InfoArmor said the digital signature it found related to the service was issued to an open source developer Singh Aditya, using the email address parkajackets@gmail.com.

Interestingly, both of these email addresses — parkajackets@gmail.com and spdr01@gmail.com — were connected to similarly-named user accounts at vDOS, for years the largest DDoS-for-hire service (that is, until KrebsOnSecurity last fall outed its proprietors as two 18-year-old Israeli men).

Last summer vDOS got massively hacked, and a copy of its user and payments databases was shared with this author and with U.S. federal law enforcement agencies. The leaked database shows that both of those email addresses are tied to accounts on vDOS named “bestbuy” (bestbuy and bestbuy2).

Spdr01’s sales listing for the GovRAT trojan on a malware and exploits site shows he used the email address spdr01@gmail.com

The leaked vDOS database also contained detailed records of the Internet addresses that vDOS customers used to log in to the attack-for-hire service. Those logs show that the bestbuy and bestbuy2 accounts logged in repeatedly from several different IP addresses in the United Kingdom and in Hong Kong.

The technical support logs from vDOS indicate that the reason the vDOS database shows two different accounts named “bestbuy” is that the vDOS administrators banned the original “bestbuy” account after it was seen being logged into from both the UK and Hong Kong. Bestbuy’s pleas to the vDOS administrators that he was not sharing the account and that the odd activity could be explained by his recent trip to Hong Kong did not move them to refund his money or reactivate his original account.

A number of clues in the data above suggest that the person responsible for both this Mirai botnet and GovRAT had ties to Israel. For one thing, the email address spdr01@gmail.com was used to register at least three domain names, all of which are tied back to a large family in Israel. What’s more, in several dark web postings, Bestbuy can be seen asking if anyone has any “weed for sale in Israel,” noting that he doesn’t want to risk receiving drugs in the mail.

The domains tied to spdr01@gmail.com led down a very deep rabbit hole that ultimately went nowhere useful for this investigation. But it appears the nickname “spdr01” and email spdr01@gmail.com was used as early as 2008 by a core member of the Israeli hacking forum and IRC chat room Binaryvision.co.il.

Visiting the Binaryvision archives page for this user, we can see Spdr was a highly technical user who contributed several articles on cybersecurity vulnerabilities and on mobile network security (Google Chrome or Google Translate deftly translates these articles from Hebrew to English).

I got in touch with multiple current members of Binaryvision and asked if anyone still kept in contact with Spdr from the old days. One of the members said he thought Spdr held dual Israeli and U.K. citizenship and that he would be approximately 30 years old now. Another said Spdr had recently become engaged to be married. None of those who shared what they recalled about Spdr wished to be identified for this story.

But a bit of searching on those users’ social networking accounts showed they had a friend in common that fit the above description. The Facebook profile for one Daniel Kaye using the Facebook alias “DanielKaye.il” (.il is the top level country code domain for Israel) shows that Mr. Kaye is now 29 years old and is or was engaged to be married to a young woman named Catherine in the United Kingdom.

The background image on Kaye’s Facebook profile is a picture of Hong Kong, and Kaye’s last action on Facebook was apparently to review a sports and recreation facility in Hong Kong.


Using Domaintools.com [full disclosure: Domaintools is an advertiser on this blog], I ran a “reverse WHOIS” search on the name “Daniel Kaye,” and it came back with exactly 103 current and historic domain names with this name in their records. One of them in particular caught my eye: Cathyjewels[dot]com, which appears to be tied to a homemade jewelry store located in the U.K. that never got off the ground.

Cathyjewels[dot]com was registered in 2014 to a Daniel Kaye in Egham, U.K., using the email address danielkaye02@gmail.com. I decided to run this email address through Socialnet, a plugin for the data analysis tool Maltego that scours dozens of social networking sites for user-defined terms. Socialnet reports that this email address is tied to an account at Gravatar — a service that lets users keep the same avatar at multiple Web sites. The name on that account? You guessed it: Spdr01.

The output from the Socialnet plugin for Maltego when one searches for the email address danielkaye02@gmail.com.

Daniel Kaye did not return multiple requests for comment sent via Facebook and the various email addresses mentioned here.

In case anyone wants to follow up on this research, I highlighted the major links between the data points mentioned in this post in the following mind map (created with the excellent and indispensable MindNode Pro for Mac).

A “mind map” tracing some of the research mentioned in this post.

Worse Than FailureCodeSOD: Swap the Workaround

Blane D is responsible for loading data into a Vertica 8.1 database for analysis. Vertica is a distributed, column-oriented store, for data-warehousing applications, and its driver has certain quirks.

For example, a common task that you might need to perform is swapping storage partitions around between tables to facilitate bulk data-loading. Thus, there is a SWAP_PARTITIONS_BETWEEN_TABLES() stored procedure. Unfortunately, if you call this function from within a prepared statement, one of two things will happen: the individual node handling the request will crash, or the entire cluster will crash.

No problem, right? Just don’t use a prepared statement. Unfortunately, if you use the ODBC driver for Python, every statement is converted to a prepared statement. There’s a JDBC driver, and a bridge to enable it from within Python, but it also has that problem, and it has the added cost of requiring a JVM to be running.

So Blane did what any of us would do in this situation: he created a hacky-workaround that does the job, but requires thorough apologies.

def run_horrible_partition_swap_hack(self, horrible_src_table, horrible_src_partition,
                                     terrible_dest_table, terrible_dest_partition):
    """
    First things first - I'm sorry, I am a terrible person.

    This is a horrible horrible hack to avoid Vertica's partition swap bug and should be removed once patched!

    What does this atrocity do?
    It passes our partition swap into the ODBC connection string so that it gets executed outside of a prepared
    statement... that's it... I'm sorry.
    """
    conn = self.get_connection(getattr(self, self.conn_name_attr))

    hacky_sql = "SELECT SWAP_PARTITIONS_BETWEEN_TABLES('{src_table}',{src_part},{dest_part},'{dest_table}')"\
        .format(src_table=horrible_src_table,
                src_part=horrible_src_partition,
                dest_table=terrible_dest_table,
                dest_part=terrible_dest_partition)

    even_hackier_sql = hacky_sql.replace(' ', '+')

    conn_string = ';'.join(["DSN={}".format(conn.host),
                            "DATABASE={}".format(conn.schema) if conn.schema else '',
                            "UID={}".format(conn.uid) if conn.uid else '',
                            "PWD={}".format(conn.password) if conn.password else '',
                            "ConnSettings={}".format(even_hackier_sql)])  # :puke:

    odbc_conn = pyodbc.connect(conn_string)

    odbc_conn.close()

Specifically, Blane leverages the driver’s ConnSettings option in the connection string, which allows you to execute a command when a client connects to the database. This is specifically meant for setting up session parameters, but it also has the added “benefit” of not performing that action using a prepared statement.
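For contrast, here is roughly what ConnSettings is intended for: running a harmless session-setup statement at connect time. (A sketch only; the DSN and schema names are placeholders, and spaces still have to be encoded as '+' per the driver's rules, just as in the workaround above.)

import pyodbc

# Intended use of ConnSettings: session setup such as picking a schema search
# path, not smuggling arbitrary SQL past the prepared-statement machinery.
conn = pyodbc.connect(
    "DSN=VerticaDSN;"                                     # placeholder DSN
    "ConnSettings=SET+SEARCH_PATH+TO+analytics,public"    # '+' stands in for spaces
)
conn.close()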

If it’s stupid but it works, it’s probably the vendor’s fault, I suppose.

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!

,

Planet DebianSteve Kemp: I've never been more proud

This morning I remembered I had a beefy virtual-server setup to run some kernel builds upon (when I was playing with Linux security modules), and I figured before I shut it down I should use the power to run some fuzzing.

As I was writing some code in Emacs at the time I figured "why not fuzz emacs?"

After a few hours this was the result:

 deagol ~ $ perl -e 'print "`" x ( 1024 * 1024  * 12);' > t.el
 deagol ~ $ /usr/bin/emacs --batch --script ./t.el
 ..
 ..
 Segmentation fault (core dumped)

Yup, evaluating a lisp file caused a segfault, due to a stack-overflow (no security implications). I've never been more proud, even though I have contributed code to GNU Emacs in the past.

CryptogramGoldenEye Malware

I don't have anything to say -- mostly because I'm otherwise busy -- about the malware known as GoldenEye, NotPetya, or ExPetr. But I wanted a post to park links.

Please add any good relevant links in the comments.

Planet Linux AustraliaDanielle Madeley: python-pkcs11 with the Nitrokey HSM

So my Nitrokey HSM arrived and it works great, thanks to the Nitrokey peeps for sending me one.

Because the OpenSC PKCS #11 module is a little more lightweight than some of the other vendors' modules, which often implement mechanisms that are not actually supported by the hardware (e.g. the Opencryptoki TPM module), I wrote up some documentation on how to use the device, focusing on how to extract the public keys for use outside of PKCS #11, as the Nitrokey doesn’t implement any of the public key functions.

Nitrokey with python-pkcs11

This also encouraged me to add a whole bunch more of the import/extraction functions for the diverse key formats, including getting very frustrated at the lack of documentation for little things like how OpenSSL stores EC public keys (the answer is as SubjectPublicKeyInfo from X.509), although I think there might be some operating system specific glitches with encoding some DER structures. I think I need to move from pyasn1 to asn1crypto.
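To make the SubjectPublicKeyInfo point concrete: the "PUBLIC KEY" PEM blocks that OpenSSL writes are just base64-wrapped DER SubjectPublicKeyInfo, so once the key material is extracted from the token it can be parsed with an ordinary library. A rough sketch using python cryptography (the file name is assumed; older versions of the library also want a backend argument):

from cryptography.hazmat.primitives.serialization import load_pem_public_key

# An OpenSSL-style "PUBLIC KEY" block is base64-wrapped DER SubjectPublicKeyInfo
# (from X.509), for EC keys just as for RSA ones.
with open("ec_pubkey.pem", "rb") as f:
    public_key = load_pem_public_key(f.read())

print(public_key.curve.name)  # e.g. "secp256r1", if this is an EC key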

Planet DebianReproducible builds folks: Reproducible Builds: week 114 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 25 and Saturday July 1 2017:

Upcoming and past events

Our next IRC meeting is scheduled for July 6th at 17:00 UTC (agenda). Topics to be discussed include an update on our next Summit, a potential NMU campaign, a press release for buster, branding, etc.

Toolchain development and fixes

  • James McCoy reviewed and merged Ximin Luo's script debpatch into the devscripts Git repository. This is useful for rebasing our patches onto new versions of Debian packages.

Packages fixed and bugs filed

Ximin Luo uploaded dash, sensible-utils and xz-utils to the deferred uploads queue with a delay of 14 days. (We have had patches for these core packages for over a year now and the original maintainers seem inactive so Debian conventions allow for this.)

Patches submitted upstream:

Reviews of unreproducible packages

4 package reviews have been added, 4 have been updated and 35 have been removed in this week, adding to our knowledge about identified issues.

One issue type has been updated:

One issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (68)
  • Daniel Schepler (1)
  • Michael Hudson-Doyle (1)
  • Scott Kitterman (6)

diffoscope development

tests.reproducible-builds.org

  • Vagrant Cascadian working on testing Debian:
    • Upgraded the 27 armhf build machines to stretch.
    • Fix mtu check to only display status when eth0 is present.
  • Helmut Grohne worked on testing Debian:
    • Limit diffoscope memory usage to 10GB virtual per process. It currently tends to use 50GB virtual, 36GB resident which is bad for everything else sharing the machine. (This is #865660)
  • Alexander Couzens working on testing LEDE:
    • Add multiple architectures / targets.
    • Limit LEDE and OpenWrt jobs to only allow one build at the same time using the jenkins build blocker plugin.
    • Move git log -1 > .html to node`document environment().
    • Update TODOs.
  • Mattia Rizzolo working on testing Debian:
    • Updated the maintenance script for postgresql-9.6.
    • Restored the postgresql database from backup after Holger accidentally purged it.
  • Holger Levsen working on:
    • Merging all the above commits.
    • Added a check for (known) Jenkins zombie jobs that reports them. (This is a long-known problem with Jenkins; deleted jobs sometimes come back…)
    • Upgraded the remaining amd64 nodes to stretch.
    • Accidentally purged postgres-9.4 from jenkins, so we could test our backups ;-)
    • Updated our stretch upgrade TODOs.

Misc.

This week's edition was written by Chris Lamb, Ximin Luo, Holger Levsen, Bernhard Wiedemann, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Worse Than FailureClassic WTF: The Proven Fix

It's a holiday weekend in the US, and I can't think of a better way to celebrate the history of the US than by having something go terribly wrong in a steel foundry. AMERICA! (original)--Remy

Photo Credit: Bryan Ledgard @ flickr

There are lots of ways to ruin a batch of steel.

Just like making a cake, add in too much of one ingredient, add an ingredient at the wrong time, or heat everything to the wrong temperature, and it could all end in disaster. But in the case of a steel mill, we're talking about a 150 ton cake made of red-hot molten iron that's worth millions of dollars. Obviously, the risk of messing things up is a little bit higher. So, to help keep potential financial disaster at bay, the plants remove part of the human error factor and rely upon automated systems to keep things humming along smoothly. Systems much like the ones made by the company where Robert M. was a development manager.

The systems that Robert's group developed were not turnkey solutions; instead the software that they produced was intended to interact with the plant's hardware at a very low level. Because of this — and the fact that nobody wanted to be "that guy" who caused a plant to lose a crapton of money — all bug fixes and enhancements had to run through a plant simulator system first. While the arrangement worked well, Robert was always grateful when he was allowed to bring on additional help. And this is why he was very interested when he heard that Vijay would be coming onto his team.

Considerable Generosity!

The company was run by the founder and his son. Phil, the son, was in charge of the "Advanced Technologies" group, which developed all sorts of neural networks and other things that might be found on the latest cover of ComputerWorld; they sounded very impressive but never quite became products. All of them had advanced degrees in computer science and other, related fields. Perhaps not coincidentally, all of the money-making products were developed and maintained by people without advanced degrees.

"I'm telling you Robert, you're going to really appreciate having Vijay around," said Phil, "He interviewed real strong and, get this. He has a PhD in Computer Science! You'll be thanking me by the end of next week, you'll see!"

Vijay's tenure, however, was temporary in nature. Until a project was available to truly engage Vijay's vast knowledge, he would be "on loan" to Robert's team to "improve his group's knowledge base" while gaining valuable real-world experience. "You'll want to exercise some care and feeding with that boy," warned Phil, "he likes to be right. But I have faith that you'll be able to handle him when he arrives first thing tomorrow!"

Welcome Aboard

When Vijay arrived on the scene, there was no mistaking him. His sunglasses and clothing made him seem like he jumped out of the latest J.Crew catalog, but his swagger and facial hair made him look more like Dog the Bounty Hunter.

Robert greeted Vijay warmly and started showing him around the office, introducing him to the receptionist, the other developers on his team, how to join the coffee club and other good information like that. Once acquainted with the development environment, Vijay's first assignment was to fix a minor bug. It was the sort of problem that you give to the new guy so that he can start to learn the code base. And later that afternoon, Vijay came to Robert's desk to announce that he had fixed the bug.

"Glad to hear that you tracked down the bug," Phil responded enthusiastically. "Did you run into any problems running the fixed code through the simulator?"

"I didn't need to!" replied Vijay, "it's a proven fix. It's perfect!"

Robert looked to a nearby colleague wondering if perhaps he had missed something Vijay had said.

"Vijay, it's not that I doubt your skills, but what exactly do you mean by a 'proven fix'?"

 

Bullet Proofing the Proof

Vijay left Robert's office and returned with a notebook. He explained that, basically, the idea is that you create a mathematical proof, via a formal method, that describes a system. What he had done was create a proof of how the system functioned, and then recalculate it after writing the actual fix.

While patting his notebook, he smiled and concluded, "therefore, this is my proof that the code will work — in writing!"

"Vijay,you work is quite impressive," Robert began, "but would you mind if we went over to the simulator and tried it out?"

Vijay was agreeable to the proposition and joined Robert at the simulator. As per their normal test plan, Robert installed the patch and set up the conditions for the bug. Vijay watched on silently, wearing a very stoic "of course it will, you idiot" expression during the whole process. And then his mood suddenly changed when Robert found that the bug was still there.

"You see - the annealing temperature that appears on the operator's screen still reads -255F. Perhaps—", Robert began before being abruptly cut off.

"But I proved that it was correct!" exclaimed Vijay while stabbing his notebook with his pointer finger, "Now you see here! Page 3! There! SEE?!?!"

Quietly, Robert offered, "Maybe there is an error in your proof," but it did no good. Muttering under his breath, Vijay stormed back to his desk and spent the remainder of the day (literally) pounding on his keyboard.

The next morning, the founder's son, Phil, stopped by Robert's office. "Bad news Robert," he opened, "as of this morning, Vijay will no longer be working with your group. He was pretty upset that your software was defective and that it didn't work when he fixed it... even when his fix was proven to be correct."


Planet DebianRaphaël Hertzog: My Free Software Activities in June 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 12 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • Released DLA-983-1 and DLA-984-1 on tiff3/tiff to fix 4 CVEs. I also updated our patch set to get back in sync with upstream since we had our own patches for a while and upstream ended up using a slightly different approach. I checked that the upstream fix really did fix the issues with the reproducer files that were available to us.
  • Handled CVE triage for a whole week.
  • Released DLA-1006-1 on libarchive (2 CVEs fixed by Markus Koschany, one by me).

Debian packaging

Django. As a last-minute update to Django in stretch before the release, I uploaded python-django 1:1.10.7-2 fixing two bugs (one of them release critical) and filed the corresponding unblock request.

schroot. I tried to prepare another last-minute update, this time for schroot. The goal was to fix the bash completion (#855283) and a problem encountered by the Debian sysadmins (#835104). Those issues are fixed in unstable/testing but my unblock request got turned into a post-release stretch update because the release managers wanted to give the package some more testing time in unstable. Even now, they are wondering whether they should accept the new systemd service file.

live-build, live-config and live-boot. On live-build, I merged a patch to add a keyboard shortcut for the advanced option menu entry (#864386). For live-config, I uploaded version 5.20170623 to fix a broken boot sequence when you have multiple partitions (#827665). For live-boot, I uploaded version 1:20170623 to fix the path to udevadm (#852570) and to avoid a file duplication in the initrd (#864385).

zim. I packaged a release candidate (0.67~rc2) in Debian Experimental and started to use it. I quickly discovered two annoying regressions that I reported upstream (here and here).

logidee-tools. This is a package I authored a long time ago and that I’m no longer actively using. It does still work but I sometimes wonder if it still has real users. Anyway I wanted to quickly replace the broken dependency on pgf but I ended up converting the Subversion repository to Git and I also added autopkgtests. At least those tests will inform me when the package no longer works… otherwise I would not notice since I’m no longer using it.

Bugs filed. I filed #865531 on lintian because the new check testsuite-autopkgtest-missing is giving some bad advice and probably does its check in a bad way. I also filed #865541 on sbuild because sbuild --apt-distupgrade can under some circumstances remove build-essential and break the build chroot. I filed an upstream ticket on publican to forward the request I received in #864648.

Sponsorship. I sponsored a jessie update for php-tcpdf (#814030) and dolibarr 5.0.4+dfsg3-1 for unstable. I sponsored many other packages, but all in the context of the pkg-security team.

pkg-security work

Now that the Stretch freeze is over, the team became more active again and I have been overwhelmed with the number of packages to review and sponsor:

  • knocker
  • recon-ng
  • dsniff
  • libnids
  • rfdump
  • snoopy
  • dirb
  • wcc
  • arpwatch
  • dhcpig
  • backdoor-factory

I also updated hashcat to a new upstream release (3.6.0) and had to discuss its weird versioning change with upstream. Looks like we will have to introduce an epoch to be able to get back in sync with upstream. 🙁 To be able to get in sync with Kali, I introduced a hashcat-meta source package (in contrib) providing hashcat-nvidia to make it easy to install hashcat for owners of NVidia hardware.

Misc stuff

Distro Tracker. I merged a small CSS fix from Aurélien Couderc (#858101) and added a missing constraint on the data model (found through an unexpected traceback that I received by email). I also updated the list of repositories shortly after the stretch release (#865070).

Salt formulas. As part of my Kali work, I set up a build daemon on a Debian stretch host and I encountered a couple of issues with my Salt rules. I reported one against salt-formula (here) and I pushed updates for debootstrap-formula, apache-formula and schroot-formula.

Thanks

See you next month for a new summary of my activities.


Planet DebianFoteini Tsiami: Internationalization, part two

Now that sch-scripts has been renamed to ltsp-manager and translated to English, it was time to set up a proper project site for it in launchpad: http://www.launchpad.net/ltsp-manager

The following subpages were created for LTSP Manager there:

  • Code: a review of all the project code, which currently only includes the master git repository.
  • Bugs: a tracker where bugs can be reported. I already filed a few bugs there!
  • Translations: translators will use this to localize LTSP Manager to their languages. It’s not quite ready yet.
  • Answers: a place to ask the LTSP Manager developers for anything regarding the project.

We currently have an initial version of LTSP Manager running in Debian Stretch, although more testing and more bug reports will be needed before we start the localization phase. Attaching a first screenshot!

 


Planet DebianArturo Borrero González: Netfilter Workshop 2017: I'm a new coreteam member!

nfws2017

I was invited to attend the Netfilter Workshop 2017 in Faro, Portugal this week, so I’m here with all the folks enjoying some days of talks, discussions and hacking around Netfilter and general linux networking.

The Coreteam of the Netfilter project, with active members Pablo Neira Ayuso (head), Jozsef Kadlecsik, Eric Leblond and Florian Westphal, has invited me to join them, and the appointment happened today.

You may contact me now at my new email address: arturo@netfilter.org

This is the result of my continued contributions to the Netfilter project over several years now (probably since 2012-2013). I’m really happy with this, and I appreciate their recognition. I will do my best in this new position. Thanks!

Regarding the workshop itself, we are having lots of interesting talks and discussions about the state of the Netfilter technology, open issues, missing features and where to go in the future.

Really interesting!

Don Martitransactions from a future software market

More on the third connection in Benkler’s Tripod, which was pretty general. These are just some notes on more concrete examples of how new kinds of direct connections between markets and peer production might work in the future.

Smart contracts should make it possible to enable these in a trustworthy, mostly decentralized, way.

Feature request I want emoji support on my blog, so I file, or find, a wishlist bug on the open source blog package I use: "Add emoji support." I then offer to enter into a smart contract that will be worthless to me if the bug is fixed on September 1, or give me my money back if the bug is unfixed at that date.

A developer realizes that fixing the bug would be easy, and wants to do it, so takes the other side of the contract. The developer's side will expire worthless if the bug is unfixed, and pay out if the bug is fixed.

"Unfixed" results will probably include bugs that are open, wontfix, invalid, or closed as duplicate of a bug that is still open.

"Fixed" results will include bugs closed as fixed, or any bug closed as a duplicate of a bug that is closed as fixed.

If the developer fixes the bug, and its status changes to fixed, then I lose money on the smart contract but get the feature I want. If the bug status is still unfixed, then I get my money back.
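Mechanically the contract is simple. Here is a toy sketch (mine, not from any real market) of how a two-sided contract like this could settle once an oracle reports the bug status at the deadline; the stakes and state names are invented:

 from dataclasses import dataclass

 # Outcomes the text above counts as "fixed": fixed, or duplicate of a fixed bug.
 FIXED_OUTCOMES = {"fixed", "duplicate-of-fixed"}

 @dataclass
 class BugFuture:
     user_stake: float  # "unfixed" side: the user who wants the feature
     dev_stake: float   # "fixed" side: the developer betting on a fix

     def settle(self, status_at_deadline):
         pool = self.user_stake + self.dev_stake
         if status_at_deadline in FIXED_OUTCOMES:
             # The user "loses" the contract but got the feature they wanted.
             return {"developer": pool, "user": 0.0}
         # The text only promises the user a refund; handing them the forfeited
         # developer stake as well is a simplifying assumption here.
         return {"developer": 0.0, "user": pool}

 print(BugFuture(user_stake=50.0, dev_stake=10.0).settle("fixed"))
 print(BugFuture(user_stake=50.0, dev_stake=10.0).settle("wontfix"))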

So far this is just one user paying one developer to write a feature. Not especially exciting. There is some interesting market design work to be done here, though. How can the developer signal serious interest in working on the bug, and get enough upside to be meaningful, without taking too much risk in the event the fix is not accepted on time?

Arbitrage I post the same offer, but another user realizes that the blog project can only support emoji if the template package that it depends on supports them. That user becomes an arbitrageur: takes the "fixed" side of my offer, and the "unfixed" side of the "Add emoji support" bug in the template project.

As an end user, I don't have to know the dependency relationship, and the market gives the arbitrageur an incentive to collect information about multiple dependent bugs into the best place to fix them.

Front-running Dudley Do-Right's open source project has a bug in it, users are offering to buy the "unfixed" side of the contract in order to incentivize a fix, and a trader realizes that Dudley would be unlikely to let the bug go unfixed. The trader takes the "fixed" side of the contract before Dudley wakes up. The deal means that the market gets information on the likelihood of the bug being fixed, but the developer doing the work does not profit from it.

This is a "picking up nickels in front of a steamroller" trading strategy. The front-runner is accepting the risk of Dudley burning out, writing a long Medium piece on how open source is full of FAIL, and never fixing a bug again.

Front-running game theory could be interesting. If developers get sufficiently annoyed by front-running, they could delay fixing certain bugs until after the end of the relevant contracts. A credible threat to do this might make front-runners get out of their positions at a loss.

CVE prediction A user of a static analysis tool finds a suspicious pattern in a section of a codebase, but cannot identify a specific vulnerability. The user offers to take one side of a smart contract that will pay off if a vulnerability matching a certain pattern is found. A software maintainer or key user can take the other side of these contracts, to encourage researchers to disclose information and focus attention on specific areas of the codebase.

Security information leakage Ernie and Bert discover a software vulnerability. Bert sells it to foreign spies. Ernie wants to get a piece of the action, too, but doesn't want Bert to know, so he trades on a relevant CVE prediction. Neither Bert nor the foreign spies know who is making the prediction, but the market movement gives white-hat researchers a clue on where the vulnerability can be found.

Open source metrics: Prices and volumes on bug futures could turn out to be a more credible signal of interest in a project than raw activity numbers. It may be worth using a bot to trade on a project you depend on, just to watch the market move. Likewise, new open source metrics could provide useful trading strategies. If sentiment analysis shows that a project is melting down, offer to take the "unfixed" side of the project's long-running bugs? (Of course, this is the same market action that incentivizes fixes, so betting that a project will fail is the same thing as paying them not to. My brain hurts.)

What's an "oracle"?

The "oracle" is the software component that moves information from the bug tracker to the smart contracts system. Every smart contract has to be tied to a given oracle that both sides trust to resolve it fairly.

For CVE prediction, the oracle is responsible for pattern matching on new CVEs, and feeding the info into the smart contract system. As with all of these, CVE prediction contracts are tied to a specific oracle.

Bots

Bots might have several roles.

  • Move investments out of duplicate bugs. (Take a "fixed" position in the original and an "unfixed" position in the duplicate, or vice versa.)

  • Make small investments in bugs that appear valid based on project history and interactions by trusted users.

  • Track activity across projects and social sites to identify qualified bug fixers who are unlikely to fix a bug within the time frame of a contract, and take "unfixed" positions on bugs relevant to them.

  • For companies: when a bug is mentioned in an internal customer support ticketing system, buy "unfixed" on that bug. Map confidential customer needs to possible fixers.

Don Martimore transactions from a future software market

Previously: Benkler’s Tripod, transactions from a future software market

Why would you want the added complexity of a market where anyone can take either side of a futures contract on the status of a software bug, and not just offer to pay people to fix bugs like a sensible person? IMHO it's worth trying not just because of the promise of lower transaction costs and more market liquidity (handwave) but because it enables other kinds of transactions. A few more.

Partial work I want a feature, and buy the "unfixed" side of a contract that I expect to lose. A developer decides to fix it, does the work, and posts a pull request that would close the bug. But the maintainer is on vacation, leaving her pull request hanging with a long comment thread. Another developer is willing to take on the political risk of merging the work, and buys out the original developer's position.

Prediction/incentivization With the right market design, a prediction that something won't happen is the same as an incentive to make it happen. If we make an attractive enough way for users to hedge their exposure to lack of innovation, we create a pool of wealth that can be captured by innovators. (Related: dominant assurance contracts)

Bug triage Much valuable work on bugs is in the form of modifying metadata: assigning a bug to the correct subsystem, identifying dependency relationships, cleaning up spam, and moving invalid bugs into a support ticket tracker or forum. This work is hard to reward, and infamously hard to find volunteers for. An active futures market could include both bots that trade bugs probabilistically based on status and activity, and active bug triagers who make small market gains from modifying metadata in a way that makes them more likely to be resolved.

Don MartiApplying proposed principles for content blocking

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

In 2015, Denelle Dixon at Mozilla wrote Proposed Principles for Content Blocking.

The principles are:

  • Content Neutrality: Content blocking software should focus on addressing potential user needs (such as on performance, security, and privacy) instead of blocking specific types of content (such as advertising).

  • Transparency & Control: The content blocking software should provide users with transparency and meaningful controls over the needs it is attempting to address.

  • Openness: Blocking should maintain a level playing field and should block under the same principles regardless of source of the content. Publishers and other content providers should be given ways to participate in an open Web ecosystem, instead of being placed in a permanent penalty box that closes off the Web to their products and services.

See also Nine Principles of Policing by Sir Robert Peel, who wrote,

[T]he police are the public and that the public are the police, the police being only members of the public who are paid to give full-time attention to duties which are incumbent on every citizen in the interests of community welfare and existence.

Web browser developers have similar responsibilities to those of Peel's ideal police: to build a browser to carry out the user's intent, or, when setting defaults, to understand widely held user norms and implement those, while giving users the affordances to change the defaults if they choose.

The question now is how to apply content blocking principles to today's web environment. Some qualities of today's situation are:

  • Tracking protection often doesn't have to be perfect, because adfraud. The browser can provide some protection, and influence the market in a positive direction, just by getting legit users below the noise floor of fraudbots.

  • Tracking protection has the potential to intensify a fingerprinting arms race that's already going on, by forcing more adtech to rely on fingerprinting in place of third-party cookies.

  • Fraud is bad, but not all anti-fraud is good. Anti-fraud technologies that track users can create the same security risks as other tracking—and enable adtech to keep promising real eyeballs on crappy sites. The "flight to quality" approach to anti-fraud does not share these problems.

  • Adtech and adfraud can peek at Mozilla's homework, but Mozilla can't see theirs. Open source projects must rely on unpredictable users, not unpredictable platform decisions, to create uncertainty.

Which suggests a few tactics—low-risk ways to apply content blocking principles to address today's adtech/adfraud problems.

Empower WebExtensions developers and users. Much of the tracking protection and anti-fingerprinting magic in Firefox is hidden behind preferences. This makes a lot of sense because it enables developers to integrate their work into the browser in parallel with user testing, and enables Tor Browser to do less patching. IMHO this work is also important to enable users to choose their own balance between privacy/security and breaking legacy sites.

Inform and nudge users who express an interest in privacy. Some users care about privacy, but don't have enough information about how protection choices match up with their expectations. If a user cares enough to turn on Do Not Track, change cookie settings, or install an ad blocker, then try suggesting a tracking protection setting or tool. Don't assume that just because a user has installed an ad blocker with deceptive privacy settings that the user would not choose privacy if asked clearly.

Understand and report on adfraud. Adfraud is more than just fake impressions and clicks. New techniques include attribution fraud: taking advantage of tracking to connect a bogus ad impression to a real sale. The complexity of attribution models makes this hard to track down. (Criteo and Steelhouse settled a lawsuit about this before discovery could reveal much.)

A multi-billion-dollar industry is devoted to spreading a story that minimizes adfraud, while independent research hints at a complex and lucrative adfraud scene. Remember how there were two Methbot stories: Methbot got a bogus block of IP addresses, and Methbot circumvented some widely used anti-fraud scripts. The ad networks dealt with the first one pretty quickly, but the second is still a work in progress.

The more that Internet freedom lovers can help marketers understand adfraud, and related problems such as brand-unsafe ad placements, the more that the content blocking story can be about users, legit sites, and brands dealing with problem tracking, and not just privacy nerds against all web business.

Planet DebianJohn Goerzen: Time, Frozen

We’re expecting a baby any time now. The last few days have had an odd quality of expectation: any time, our family will grow.

It makes time seem to freeze, to stand still.

We have Jacob, about to start fifth grade and middle school. But here he is, still a sweet and affectionate kid as ever. He loves to care for cats and seeks them out often. He still keeps an eye out for the stuffed butterfly he’s had since he was an infant, and will sometimes carry it and a favorite blanket around the house. He will also many days prepare the “Yellow House News” on his computer, with headlines about his day and some comics pasted in — before disappearing to play with Legos for awhile.

And Oliver, who will walk up to Laura and “give baby a hug” many times throughout the day — and sneak up to me, try to touch my arm, and say “doink” before running off before I can “doink” him back. It was Oliver that had asked for a baby sister for Christmas — before he knew he’d be getting one!

In the past week, we’ve had out the garden hose a couple of times. Both boys will enjoy sending mud down our slide, or getting out the “water slide” to play with, or just playing in mud. The rings of dirt in the bathtub testify to the fun that they had. One evening, I built a fire, we made brats and hot dogs, and then Laura and I sat visiting and watching their water antics for an hour after, laughter and cackles of delight filling the air, and cats resting on our laps.

These moments, or countless others like Oliver’s baseball games, flying the boys to a festival in Winfield, or their cuddles at bedtime, warm the heart. I remember their younger days too, with fond memories of taking them camping or building a computer with them. Sometimes a part of me wants to just keep soaking in things just as they are; being a parent means both taking pride in children’s accomplishments as they grow up, and sometimes also missing the quiet little voice that can be immensely excited by a caterpillar.

And yet, all four of us are so excited and eager to welcome a new life into our home. We are ready. I can’t wait to hold the baby, or to lay her to sleep, to see her loving and excited older brothers. We hope for a smooth birth, for mom and baby.

Here is the crib, ready, complete with a mobile with a cute bear (and even a plane). I can’t wait until there is a little person here to enjoy it.

,

Planet DebianAntoine Beaupré: My free software activities, June 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on Mercurial, sudo and Puppet.

Mercurial remote code execution

I issued DLA-1005-1 to resolve problems with the hg server --stdio command that could be abused by "remote authenticated users to launch the Python debugger, and consequently execute arbitrary code, by using --debugger as a repository name" (CVE-2017-9462).

Backporting the patch was already a little tricky because, as is often the case in our line of work, the code had changed significantly in newer versions. In particular, the commandline dispatcher had been refactored, which made the patch non-trivial to port. On the other hand, mercurial has an extensive test suite which allowed me to make those patches in all confidence. I also backported a part of the test suite to detect certain failures better and to fix the output so that it matches the backported code. The test suite is slow, however, which meant slow progress when working on this package.

I also noticed a strange issue with the test suite: all hardlink operations would fail. Somehow it seems that my new sbuild setup doesn't support doing hardlinks. I ended up building a tarball schroot to build those types of packages, as it seems the issue is related to the use of overlayfs in sbuild. The odd part is that my tests of overlayfs, following those instructions, show that it does support hardlinks, so there may be something fishy here that I misunderstand.
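Checking whether a given build location actually supports hardlinks only takes a few lines; a quick sketch (the path at the bottom is just an example):

 import os
 import tempfile

 def supports_hardlinks(directory):
     """Try to create a hardlink inside directory and report whether it works."""
     with tempfile.TemporaryDirectory(dir=directory) as d:
         src = os.path.join(d, "a")
         open(src, "w").close()
         try:
             os.link(src, os.path.join(d, "b"))
             return True
         except OSError:
             return False

 print(supports_hardlinks("/var/tmp"))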

This, however, allowed me to get a little more familiar with sbuild and the schroots. I also took this opportunity to optimize the builds by installing an apt-cacher-ng proxy to speed up builds, which will also be useful for regular system updates.

Puppet remote code execution

I have issued DLA-1012-1 to resolve a remote code execution attack against puppetmaster servers, from authenticated clients. To quote the advisory: "Versions of Puppet prior to 4.10.1 will deserialize data off the wire (from the agent to the server, in this case) with a attacker-specified format. This could be used to force YAML deserialization in an unsafe manner, which would lead to remote code execution."

The fix was non-trivial. Normally, this would have involved fixing the YAML parsing, but this was considered problematic because the ruby libraries themselves were vulnerable and it wasn't clear we could fix the problem completely by fixing YAML parsing. The update I proposed took the bold step of switching all clients to PSON and simply deny YAML parsing from the server. This means all clients need to be updated before the server can be updated, but thankfully, updated clients will run against an older server as well. Thanks to LeLutin at Koumbit for helping in testing patches to solve this issue.
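The bug class is easier to see in a few lines of Python than in Puppet's Ruby internals (this is an analogy only, not Puppet code): a deserializer that happily constructs arbitrary objects turns "parse the agent's data" into "run the agent's code", which is why restricting the accepted wire format matters.

 import yaml  # PyYAML, used only to illustrate the bug class

 doc = "!!python/object/apply:os.system ['echo pwned']"

 print(yaml.safe_load("{a: 1}"))  # plain data: fine
 try:
     yaml.safe_load(doc)          # the safe loader refuses object-constructing tags
 except yaml.YAMLError as err:
     print("rejected:", type(err).__name__)

 # yaml.unsafe_load(doc) would instead build the object and run `echo pwned`,
 # the same shape of problem as trusting an attacker-chosen format on the wire.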

Sudo privilege escalation

I have issued DLA-1011-1 to resolve an incomplete fix for a privilege escalation issue (CVE-2017-1000368 from CVE-2017-1000367). The backport was not quite trivial as the code had changed quite a lot since wheezy as well. Whereas mercurial's code was more complex, it's nice to see that sudo's code was actually simpler and more straightforward in newer versions, which is reassuring. I uploaded the packages for testing and uploaded them a year later.

I also took extra time to share the patch in the Debian bugtracker, so that people working on the issue in stable may benefit from the backported patch, if needed. One issue that came up during that work is that sudo doesn't have a test suite at all, so it is quite difficult to test changes and make sure they do not break anything.

Should we upload on fridays?

I brought up a discussion on the mailing list regarding uploads on fridays. With the sudo and puppet uploads pending, it felt really ... daring to upload both packages, on a friday. Years of sysadmin work hardwired me to be careful on fridays; as the saying goes: "don't deploy on a friday if you don't want to work on the weekend!"

Feedback was great, but I was surprised to find that most people are not worried about those issues. I have tried to counter some of the arguments that were brought up: I wonder if there is a disconnect here between the package maintainer / programmer work and the sysadmin work that is at the receiving end of that work. Having had to deal with broken updates myself in the past, I'm surprised this has never come up in the discussions yet, or that the response is so underwhelming.

So far, I'll try to balance the need for prompt security updates and the need for stable infrastructure. One does not, after all, go without the other...

Triage

I also did small fry triage:

Hopefully some of those will come to fruition shortly.

Other work

My other work this month was a little all over the place.

Stressant

Uploaded a new release (0.4.1) of stressant to split up the documentation from the main package, as the main package was taking up too much space according to grml developers.

The release also introduces a limited anonymity option, which hides serial numbers in the smartctl output.
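A minimal sketch of the kind of scrubbing that option implies (my own illustration, not stressant's actual code):

 import re

 def scrub_serial(smartctl_output):
     """Blank out the serial number line in smartctl -i style output."""
     return re.sub(r"(?m)^(Serial Number:).*$", r"\1 [hidden]", smartctl_output)

 sample = "Model Family: Example SSD\nSerial Number: S3Z9NB0K123456\n"
 print(scrub_serial(sample), end="")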

Debiman

Also did some small followup on the debiman project to fix the FAQ links.

Local server maintenance

I upgraded my main server to Debian stretch. This generally went well, although the upgrade itself took way more time than I would have liked (4 hours!). This is partly because I have a lot of cruft installed on the server, but also because of what I consider to be issues in the automation of major Debian upgrades. For example, I was prompted for changes in configuration files at seemingly random moments during the upgrade, and got different debconf prompts to answer. This should really be batched together, and unfortunately I had forgotten to use the home-made script I established when I was working at Koumbit which shortens the upgrade a bit.

I wish we would improve on our major upgrade mechanism. I documented possible solutions for this in the AutomatedUpgrade wiki page, but I'm not sure I see exactly where to go from here.

I had a few regressions after the upgrade:

  • the infrared remote control stopped working: still need to investigate
  • my home-grown full-disk encryption remote unlocking script broke, but upstream has a nice workaround, see Debian bug #866786
  • gdm3 breaks bluetooth support (Debian bug #805414 - to be fair, this is not a regression in stretch, it's just that I switched my workstation from lightdm to gdm3 after learning that the latter can do rootless X11!)

Docker and Subsonic

I did my first (and late?) foray into Docker and containers. My rationale was that I wanted to try out Subsonic, an impressive audio server which some friends have shown me. Since Subsonic is proprietary, I didn't want it to contaminate the rest of my server and it seemed like a great occasion to try out containers to keep things tidy. Containers may also allow me to transparently switch to the FLOSS fork LibreSonic once the trial period is over.

I have learned a lot and may write more about the details of that experience soon, for now you can look at the contributions I made to the unofficial Subsonic docker image, but also the LibreSonic one.

Since Subsonic also promotes album covers as first-class citizens, I used beets to download a lot of album covers, which was really nice. I look forward to using beets more, but first I'll need to implement two plugins.

Wallabako

I did a small release of wallabako to fix the build with the latest changes in the underlying wallabago library, which led me to ask upstream to make versioned releases.

I also looked into creating a separate documentation site but it looks like mkdocs doesn't like me very much: the table of contents is really ugly...

Small fry

That's about it! And that was supposed to be a slow month...

Planet DebianBen Hutchings: Debian LTS work, June 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 5 hours. I worked all 20 hours.

I spent most of my time working - together with other Linux kernel developers - on backporting and testing several versions of the fix for CVE-2017-1000364, part of the "Stack Clash" problem. I uploaded two updates to linux and issued DLA-993-1 and DLA-993-2. Unfortunately the latest version still causes regressions for some applications, which I will be investigating this month.

I also released a stable update on the Linux 3.2 longterm stable branch (3.2.89) and prepared another (3.2.90) which I released today.

Sociological ImagesTransfers of military equipment to police increase civilian and canine deaths

Human beings are prone to a cognitive bias called the Law of the Instrument. It’s the tendency to see everything as being malleable according to whatever tool you’re used to using. In other words, if you have a hammer, suddenly all the world’s problems look like nails to you.

Objects humans encounter, then, aren’t simply useful to them, or instrumental; they are transformative: they change our view of the world. Hammers do. And so do guns. “You are different with a gun in your hand,” wrote the philosopher Bruno Latour, “the gun is different with you holding it. You are another subject because you hold the gun; the gun is another object because it has entered into a relationship with you.”

In that case, what is the effect of transferring military grade equipment to police officers?

In 1996, the federal government passed a law giving the military permission to donate excess equipment to local police departments. Starting in 1998, millions of dollars worth of equipment was transferred each year. Then, after 9/11, there was a huge increase in transfers. In 2014, they amounted to the equivalent of 796.8 million dollars.

Image via The Washington Post:

Those concerned about police violence worried that police officers in possession of military equipment would be more likely to use violence against civilians, and new research suggests that they’re right.

Political scientist Casey Delehanty and his colleagues compared the number of civilians killed by police with the monetary value of transferred military equipment across 455 counties in four states. Controlling for other factors (e.g., race, poverty, drug use), they found that killings rose along with increasing transfers. In the case of the county that received the largest transfer of military equipment, killings more than doubled.

But maybe they got it wrong? High levels of military equipment transfers could be going to counties with rising levels of violence, such that it was increasingly violent confrontations that caused the transfers, not the transfers causing the confrontations.

Delehanty and his colleagues controlled for the possibility that they were pointing the causal arrow the wrong way by looking at the killings of dogs by police. Police forces wouldn’t receive military equipment transfers in response to an increase in violence by dogs, but if the police were becoming more violent as a response to having military equipment, we might expect more dogs to die. And they did.

Combined with research showing that police who use military equipment are more likely to be attacked themselves, literally everyone will be safer if we reduce transfers and remove military equipment from the police arsenal.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianVincent Bernat: Performance progression of IPv4 route lookup on Linux

TL;DR: Each of Linux 2.6.39, 3.6 and 4.0 brings notable performance improvements for the IPv4 route lookup process.


In a previous article, I explained how Linux implements an IPv4 routing table with compressed tries to offer excellent lookup times. The following graph shows the performance progression of Linux through history:

IPv4 route lookup performance

Two scenarios are tested:

  • 500,000 routes extracted from an Internet router (half of them are /24), and
  • 500,000 host routes (/32) tightly packed in 4 distinct subnets.

All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels[1] as well as current ones. The kernel configuration used is the default one with CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.

The measurements are done in a virtual machine with one vCPU[2]. The host is an Intel Core i5-4670K and the CPU governor was set to “performance”. The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).
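The timing arithmetic itself is straightforward. A toy sketch of it in Python rather than in the kernel module, with a made-up TSC frequency and made-up samples:

 from statistics import median

 TSC_HZ = 3.4e9  # assumed constant TSC frequency (example value)

 def ticks_to_ns(delta_ticks):
     """Convert a TSC delta to nanoseconds, assuming a constant clock."""
     return delta_ticks * 1e9 / TSC_HZ

 samples_ticks = [120, 118, 131, 119, 125]  # invented TSC deltas for five lookups
 print(round(median(ticks_to_ns(t) for t in samples_ticks), 1), "ns")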

The following kernel versions bring a notable performance improvement:

  • In Linux 2.6.39, commit 3630b7c050d9, David Miller removes the hash-based routing table implementation to switch to the LPC-trie implementation (available since Linux 2.6.13 as a compile-time option). This brings a small regression for the scenario with many host routes but boosts the performance for the general case.

  • In Linux 3.0, commit 281dc5c5ec0f, the improvement is not related to the network subsystem. Linus Torvalds disables the compiler size-optimization from the default configuration. It was believed that optimizing for size would help keeping the instruction cache efficient. However, compilers generated under-performing code on x86 when this option was enabled.

  • In Linux 3.6, commit f4530fa574df, David Miller adds an optimization to not evaluate IP rules when they are left unconfigured. From this point, the use of the CONFIG_IP_MULTIPLE_TABLES option doesn’t impact performance unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8). However, this has no effect on the benchmark as it directly calls fib_lookup() which doesn’t involve the cache.

  • In Linux 4.0, notably commit 9f9e636d4f89, Alexander Duyck adds several optimizations to the trie lookup algorithm. It really pays off!

  • In Linux 4.1, commit 0ddcf43d5d4a, Alexander Duyck collapses the local and main tables when no specific IP rules are configured. For non-local traffic, both those tables were looked up.


  1. Compiling old kernels with an updated userland may still require some small patches

  2. The kernels are compiled with the CONFIG_SMP option to use the hierarchical RCU and activate more of the same code paths as actual routers. However, any progress on parallelism goes unnoticed here.

CryptogramA Man-in-the-Middle Attack against a Password Reset System

This is nice work: "The Password Reset MitM Attack," by Nethanel Gelerntor, Senia Kalma, Bar Magnezi, and Hen Porcilan:

Abstract: We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim who either wishes to register in the attacking site or to access a particular resource on it.

The attack has several variants, including exploitation of a password reset process that relies on the victim's mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well.

Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed and evaluated two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in the security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process.
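To make the mechanism concrete, here is a toy model of the relay at the core of the attack. It is my own sketch, not code from the paper, and the challenge strings are invented: the attacker's registration page simply forwards each password-reset challenge to the victim and sends the answers back.

 class FakeRegistrationSite:
     """The attacker's site, bridging the victim's answers into the real reset flow."""

     def relay(self, reset_challenges, victim_answers):
         recovered = []
         for challenge in reset_challenges:      # e.g. CAPTCHA, security question, SMS code
             answer = victim_answers(challenge)  # the victim thinks this is for registration
             recovered.append((challenge, answer))
         return recovered

 site = FakeRegistrationSite()
 print(site.relay(["security question: first pet?", "code we just sent by SMS?"],
                  lambda c: "victim's answer to: " + c))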

Password resets have long been a weak security link.

BoingBoing Post.

Worse Than FailureCodeSOD: Classic WTF: When the Query String is Just Not Enough

It's a holiday weekend in the US, and as we prepare for the 4th of July, we have some query strings that are worth understanding. (original)--Remy

As Stephen A.'s client was walking him through their ASP.NET site, Stephen noticed a rather odd URL scheme. Instead of using the standard Query String -- i.e., http://their.site/Products/?ID=2 -- theirs used some form of URL-rewriting utilizing the "@" symbol in the request name: http://their.site/Products/@ID=2.aspx. Not being an expert on Search Engine Optimization, Stephen had just assumed it had something to do with that.

A few weeks later, when Stephen finally had a chance to take a look at the code, he noticed something rather different...

That's right; every "dynamic-looking" page was, in fact, static. It was going to be a long maintenance contract...


Don MartiSoftware: annoying speech or crappy product?

Zeynep Tufekci, in the New York Times:

Since most software is sold with an “as is” license, meaning the company is not legally liable for any issues with it even on day one, it has not made much sense to spend the extra money and time required to make software more secure quickly.

The software business is still stuck on the kind of licensing that might have made sense in the 8-bit micro days, when "personal computer productivity" was more aspirational than a real thing, and software licenses were printed on the backs of floppy sleeves.

Today, software is part of products that do real stuff, and it makes zero sense to ship a real product, that people's safety or security depends on, with the fine print "WE RESERVE THE RIGHT TO TOTALLY HALF-ASS OUR JOBS" or in business-speak, "SELLER DISCLAIMS THE IMPLIED WARRANTY OF MERCHANTABILITY."

But what about open source and collaboration and science, and all that stuff? Software can be both "product" and "speech". Should there be a warranty on speech? If I dig up my shell script for re-running the make command when a source file changes, and put it on the Internet, should I be putting a warranty on it?

It seems that there are two kinds of software: some is more product-like, and should have a grown-up warranty on it like a real business. And some software is more speech-like, and should have ethical requirements like a scientific paper, but not a product-like warranty.

What's the dividing line? Some ideas.

"productware is shipped as executables, freespeechware is shipped as source code" Not going to work for elevator_controller.php or a home router security tool written in JavaScript.

"productware is preinstalled, freespeechware is downloaded separately" That doesn't make sense when even implanted defibrillators can update over the net.

"productware is proprietary, freespeechware is open source" Companies could put all the fragile stuff in open source components, then use the DMCA and CFAA to enable them to treat the whole compilation as proprietary.

Software companies are built to be good at getting around rules. If a company can earn all its money in faraway Dutch Sandwich Land and be conveniently too broke to pay the IRS in the USA, then it's going to be hard to make it grow up licensing-wise without hurting other people first.

How about splitting out the legal advantages that the government offers to software and extending some to productware, others to freespeechware?

Freespeechware licenses

  • license may disclaim implied warranty

  • no anti-reverse-engineering clause in a freespeechware license is enforceable

  • freespeechware is not a "technological protection measure" under section 1201 of Title 17 of the United States Code (DMCA anticircumvention)

  • exploiting a flaw in freespeechware is never a violation of the Computer Fraud and Abuse Act

  • If the license allows it, a vendor may sell freespeechware, or a derivative work of it, as productware. (This could be as simple as following the You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. term of the GPL.)

Productware licenses:

  • license may not disclaim implied warranty

  • licensor and licensee may agree to limit reverse engineering rights

  • DMCA and CFAA apply (reformed of course, but that's another story)

It seems to me that there needs to be some kind of quid pro quo here. If a company that sells software wants to use government-granted legal powers to control its work, that has to be conditioned on not using those powers just to protect irresponsible releases.

,

Planet Linux AustraliaOpenSTEM: The Science of Cats

Ah, the comfortable cat! Most people agree that cats are experts at being comfortable and getting the best out of life, with the assistance of their human friends – but how did this come about? Geneticists and historians are continuing to study how cats and people came to live together and how cats came to organise themselves into such a good deal in their relationship with humans. Cats are often allowed liberties that few other animals, even domestic animals, can get away with – they are fed and usually pampered with comfortable beds (including human furniture), are kept warm, cuddled on demand; and, very often, are not even asked to provide anything except affection (on their terms!) in return. Often thought of as solitary animals, cats’ social behaviour is actually a lot more complex and recently further insights have been gained about how cats and humans came to enjoy the relationship that they have today.

Many people know that the Ancient Egyptians came to certain agreements with cats – cats are depicted in some of their art and mummified cats have been found. It is believed that cats may have been worshipped as representatives of the Cat Goddess, Bastet – interestingly enough, a goddess of war! Statues of cats from Ancient Egypt emphasise their regal bearing and tendency towards supercilious expressions. Cats were present in Egyptian art by 1950 B.C. and it was long thought that Egyptians were the first to domesticate the cat. However, in 2004 a cat was found buried with a human  on the island of Cyprus in the Mediterranean 9,500 years ago, making it the earliest known cat associated with humans. This date was many thousands of years earlier than Egyptian cats. In 2008 a site in the Nile Valley was found which contained the remains of 6 cats – a male, a female and 4 kittens, which seemed to have been cared for by people about 6,000 years ago.

African Wild Cat, photo by Sonelle, CC-BY-SA

It is now fairly well accepted that cats domesticated people, rather than the other way round! Papers refer to cats as having “self-domesticated”, which sounds in line with cat behaviour. Genetically all modern cats are related to African (also called Near Eastern) wild cats 8,000 years ago. There was an attempt to domesticate leopard cats about 7,500 years ago in China, but none of these animals contributed to the genetic material of the world’s modern cat populations. As humans in the Near East developed agriculture and started to live in settled villages, after 10,000 years ago, cats were attracted to these ready sources of food and more. The steady supply of food from agriculture allowed people to live in permanent villages. Unfortunately, these villages, stocked with food, also attracted other animals, such as rats and mice, not as welcome and potential carriers of disease. The rats and mice were a source of food for the cats who probably arrived in the villages as independent, nocturnal hunters, rather than as deliberately being encouraged by people.

Detail of cat from tomb of Nebamun

Once cats were living in close proximity to people, trust developed and soon cats were helping humans in the hunt, as is shown in this detail from an Egyptian tomb painting on the right. Over time, cats became pets and part of the family and followed farmers from Turkey across into Europe, as well as being painted sitting under dining tables in Egypt. People started to interfere with the breeding of cats and it is now thought that the Egyptians selected more social, rather than more territorial cats. Contrary to the popular belief that cats are innately solitary, in fact African Wild Cats have complex social behaviour, much of which has been inherited by the domestic cat. African wild cats live in loosely affiliated groups made up mostly of female cats who raise kittens together. There are some males associated with the group, but they tend to visit infrequently and have a larger range, visiting several of the groups of females and kittens. The female cats take turns at nursing, looking after the kittens and hunting. The adult females share food only with their own kittens and not with the other adults. Cats recognise who belongs to their group and who doesn’t and tend to be aggressive to those outside the group. Younger cats are more tolerant of strangers, until they form their own groups. Males are not usually social towards each other, but occasionally tolerate each other in loose ‘brotherhoods’.

In our homes we form the social group, which may include one or more cats. If there is more than one cat these may subdivide themselves into cliques or factions. Pairs of cats raised together often remain closely bonded and affectionate for life. Other cats (especially males) may isolate themselves from the group and do not want to interact with other cats. Cats that are happy on their own do not need other cats for company. It is more common to find stressed cats in multi-cat households. Cats will tolerate other cats best if they are introduced when young. After 2 years of age cats are less tolerant of newcomers to the group. Humans take the place of parents in their cats’ lives. Cats who grow up with humans retain some psychological traits from kittenhood and never achieve full psychological maturity.

At the time that humans were learning to manipulate the environment to their own advantage by domesticating plants and animals, cats started learning to manipulate us. They have now managed to achieve very comfortable and prosperous lives with humans and have followed humans around the planet. Cats arrived in Australia with the First Fleet, having found a very comfortable niche on sailing ships helping to control vermin. Matthew Flinders‘ cat, Trim, became famous as a result of the book Flinders wrote about him. However, cats have had a devastating effect on the native wildlife of Australia. They kill millions of native animals every year, possibly even millions each night. It is thought that they have been responsible for the extinction of numbers of native mice and small marsupial species. Cats are very efficient and deadly nocturnal hunters. It is recommended that all cats are kept restrained indoors or in runs, especially at night. We must not forget that our cuddly companions are still carnivorous predators.

Krebs on SecurityIs it Time to Can the CAN-SPAM Act?

Regulators at the U.S. Federal Trade Commission (FTC) are asking for public comment on the effectiveness of the CAN-SPAM Act, a 14-year-old federal law that seeks to crack down on unsolicited commercial email. Judging from an unscientific survey by this author, the FTC is bound to get an earful.


Signed into law by President George W. Bush in 2003, the “Controlling the Assault of Non-Solicited Pornography and Marketing Act” was passed in response to a rapid increase in junk email marketing.

The law makes it a misdemeanor to spoof the information in the “from:” field of any marketing message, and prohibits the sending of sexually-oriented spam without labeling it “sexually explicit.” The law also requires spammers to offer recipients a way to opt-out of receiving further messages, and to process unsubscribe requests within 10 business days.

The “CAN” in CAN-SPAM was a play on the verb “to can,” as in “to put an end to,” or “to throw away,” but critics of the law often refer to it as the YOU-CAN-SPAM Act, charging that it essentially legalized spamming. That’s partly because the law does not require spammers to get permission before they send junk email. But also because the act prevents states from enacting stronger anti-spam protections, and it bars individuals from suing spammers except under laws not specific to email.

Those same critics often argue that the law is rarely enforced, although a search on the FTC’s site for CAN-SPAM press releases produces quite a few civil suits brought by the commission against marketers over the years. Nevertheless, any law affecting Internet commerce is bound to need a few tweaks over the years, and CAN-SPAM has been showing its age for some time now.

Ron Guilmette, an anti-spam activist whose work has been profiled extensively on this blog, didn’t sugar-coat it, calling CAN-SPAM “a travesty that was foisted upon the American people by a small handful of powerful companies, most notably AOL and Microsoft, and by their obedient lackeys in Congress.”

According to Guilmette, the Act was deliberately fashioned so as to nullify California’s more restrictive anti-spam law, and it made it impossible for individual victims of spam to sue spam senders. Rather, he said, that right was reserved only for the same big companies that lobbied heavily for the passage of the CAN-SPAM Act.

“The entire Act should be thrown out and replaced,” Guilmette said. “It hasn’t worked to control spam, and it has in fact only served to make the problem worse.”

In the fix-it-don’t-junk-it camp is Joe Jerome, policy counsel for the Electronic Frontier Foundation (EFF), a nonprofit digital rights advocacy group. Jerome allowed that CAN-SPAM is far from perfect, but he said it has helped to set some ground rules.

“In her announcement on this effort, Acting Chairman Ohlhausen hinted that regulations can be excessive, outdated, or unnecessary,” Jerome said. “Nothing can be further from the case with respect to spam. CAN-SPAM was largely ineffective in stopping absolutely bad, malicious spammers, but it’s been incredibly important in creating a baseline for commercial email senders. Advertising transparency and easy opt-outs should not be viewed as a burden on companies, and I’d worry that weakening CAN-SPAM would set us back. If anything, we need stronger standards around opt-outs and quicker turn-around time, not less.”

Dan Balsam, an American lawyer who’s made a living suing spammers, has argued that CAN-SPAM is nowhere near as tough as it needs to be on junk emailers. Balsam argues that spammy marketers win as long as the federal law leaves enforcement up to state attorneys general, the FTC and Internet service providers.

“I would tell the FTC that it’s a travesty that Congress purports to usurp the states’ traditional authority to regulate advertising,” Balsam said via email. “I would tell the FTC that it’s ridiculous that the CAN-SPAM Act allows spam unless/until the person opts out, unlike e.g. Canada’s law. And I would tell the FTC that the CAN-SPAM Act isn’t working because there’s still obviously a spam problem.”

Cisco estimates that 65 percent of all email sent today is spam. That’s down only slightly from 2004 when CAN-SPAM took effect. At the time, Postini Inc. — an email filtering company later bought by Google — estimated that 70 percent of all email was spam.

Those figures may be misleading because a great deal of spam today is simply malicious email. Nobody harbored any illusions that CAN-SPAM could do much to stop the millions of phishing scams, malware and booby-trapped links being blasted out each day by cyber criminals. This type of spam is normally relayed via hacked servers and computers without the knowledge of their legitimate owners. Also, while the world’s major ISPs have done a pretty good job blocking most pornography spam, it’s still being blasted out en masse from some of the same criminals who are pumping malware spam.

Making life more miserable and expensive for malware spammers and phishers has been a major focus of my work, both here at KrebsOnSecurity and in my book, Spam Nation: The Inside Story of Organized Cybercrime. Stay tuned later this week for the results of a lengthy investigation into a spam gang that has stolen millions of Internet addresses to ply their trade (a story, by the way, that features prominently the work of the above-quoted anti-spammer Ron Guilmette).

What do you think about the CAN-SPAM law? Sound off in the comments below, and consider leaving a copy of your thoughts at the FTC’s CAN-SPAM comments page.

Planet DebianBits from Debian: New Debian Developers and Maintainers (May and June 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Alex Muntada (alexm)
  • Ilias Tsitsimpis (iliastsi)
  • Daniel Lenharo de Souza (lenharo)
  • Shih-Yuan Lee (fourdollars)
  • Roger Shimizu (rosh)

The following contributors were added as Debian Maintainers in the last two months:

  • James Valleroy
  • Ryan Tandy
  • Martin Kepplinger
  • Jean Baptiste Favre
  • Ana Cristina Custura
  • Unit 193

Congratulations!

Planet DebianRitesh Raj Sarraf: apt-offline 1.8.1 released

apt-offline 1.8.1 released.

This is a bug fix release fixing some python3 glitches related to module imports. Recommended for all users.

apt-offline (1.8.1) unstable; urgency=medium

  * Switch setuptools to invoke py3
  * No more argparse needed on py3
  * Fix genui.sh based on comments from pyqt mailing list
  * Bump version number to 1.8.1

 -- Ritesh Raj Sarraf <rrs@debian.org>  Sat, 01 Jul 2017 21:39:24 +0545
 

What is apt-offline

Description: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.

 


Planet DebianJunichi Uekawa: PLC network connection seems to be less reliable.

PLC network connection seems to be less reliable. Maybe it's too hot. Reconnecting it seems to make it better. Hmmm..

,

Planet DebianThorsten Alteholz: My Debian Activities in June 2017

FTP assistant

This month I marked 100 packages for accept and rejected zero packages. I promise, I will reject more in July!

Debian LTS

This was my thirty-sixth month of doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload went down again to 16h. During that time I did LTS uploads of:

  • [DLA 994-1] zziplib security update for seven CVEs
  • [DLA 995-1] swftools security update for two CVEs
  • [DLA 998-1] c-ares security update for one CVE
  • [DLA 1008-1] libxml2 security update for five CVEs

I also tested the proposed apache2 package prepared by Roberto and started to work on a new bind9 upload.

Last but not least I had five days of frontdesk duties.

Other stuff

This month was filled with updating lots of machines to Stretch. Everything went fine, so thanks a lot to everybody involved in that release.

Nevertheless I also did a DOPOM upload and now take care of otpw. Luckily most of the accumulated bugs already contained a patch, so that I just had to upload a new version and close them.

Planet DebianDaniel Silverstone: F/LOSS activity, June 2017

It seems to be becoming popular to send a mail each month detailing your free software work for that month. I have been slowly ramping my F/LOSS activity back up, after years away during which I worked on completing my degree. My final exam for that was in June 2017, and as such I am now in a position to try and get on with more F/LOSS work.

My focus, as you might expect, has been on Gitano, which reached 1.0 in time for Stretch's release and which is now heading gently toward a 1.1 release that we have timed for Debconf 2017. My friend and colleague Richard has been working hard on Gitano and related components during this time too, and I hope that Debconf will be an opportunity for him to meet many of my Debian friends too. But enough of that, back to the F/LOSS.

We've been running Gitano developer days roughly monthly since March of 2017, and the June developer day was attended by myself, Richard Maw, and Richard Ipsum. You are invited to read the wiki page for the developer day if you want to know exactly what we got up to, but a summary of my involvement that day is:

  • I chaired the review of our current task state for the project
  • I chaired the decision on the 1.1 timeline.
  • I completed a code branch which adds rudimentary hook support to Gitano and submitted it for code review.
  • I began to learn about git-multimail since we have a need to add support for it to Gitano

Other than that, related to Gitano during June I:

  • Reviewed Richard Ipsum's lua-scrypt patches for salt generation
  • Reviewed Richard Maw's work on centralising Gitano's patterns into a module.
  • Responded to reviews of my hook work, though I need to clean it up some before it'll be merged.

My non-Gitano F/LOSS related work in June has been entirely centred around the support I provide to the Lua community in the form of the Lua mailing list and website. The host on which it's run is ailing, and I've had to spend time trying to improve and replace that.

Hopefully I'll have more to say next month. Perhaps by doing this reporting I'll get more F/LOSS done. Of course, July's report will be sent out while I'm in Montréal for debconf 2017 (or at least for debcamp at that point) so hopefully more to say anyway.

Planet DebianKeith Packard: DRM-lease-4

DRM leasing part three (vblank)

The last couple of weeks have been consumed by getting frame sequence numbers and events handled within the leasing environment (and Vulkan) correctly.

Vulkan EXT_display_control extension

This little extension provides the bits necessary for applications to track the display of frames to the user.

VkResult
vkGetSwapchainCounterEXT(VkDevice                    device,
                         VkSwapchainKHR              swapchain,
                         VkSurfaceCounterFlagBitsEXT counter,
                         uint64_t                    *pCounterValue);

This function just retrieves the current frame count from the display associated with swapchain.

VkResult
vkRegisterDisplayEventEXT(VkDevice                    device,
                          VkDisplayKHR                display,
                          const VkDisplayEventInfoEXT *pDisplayEventInfo,
                          const VkAllocationCallbacks *pAllocator,
                          VkFence                     *pFence);

This function creates a fence that will be signaled when the specified event happens. Right now, the only event supported is when the first pixel of the next display refresh cycle leaves the display engine for the display. If you want something fancier (like two frames from now), you get to do that on your own using this basic function.
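To make the usage concrete, here's a minimal C sketch (mine, not from the extension spec or this post) that queries the swapchain's frame counter and then waits for the next first-pixel-out event. It assumes VK_EXT_display_control is enabled and that the device, swapchain and display handles were created elsewhere; the VK_SURFACE_COUNTER_VBLANK_EXT flag name is taken from the published extension headers, and in a real program both entry points would normally be fetched with vkGetDeviceProcAddr rather than called directly.

#include <stdint.h>
#include <stdio.h>
#include <vulkan/vulkan.h>

static void wait_next_vblank(VkDevice device, VkSwapchainKHR swapchain,
                             VkDisplayKHR display)
{
    uint64_t frame_count = 0;

    /* Current frame count for the display behind this swapchain. */
    vkGetSwapchainCounterEXT(device, swapchain,
                             VK_SURFACE_COUNTER_VBLANK_EXT, &frame_count);
    printf("current frame counter: %llu\n", (unsigned long long) frame_count);

    /* Fence that signals when the first pixel of the next refresh
       leaves the display engine. */
    VkDisplayEventInfoEXT event_info = {
        .sType = VK_STRUCTURE_TYPE_DISPLAY_EVENT_INFO_EXT,
        .displayEvent = VK_DISPLAY_EVENT_TYPE_FIRST_PIXEL_OUT_EXT,
    };
    VkFence fence = VK_NULL_HANDLE;

    if (vkRegisterDisplayEventEXT(device, display, &event_info,
                                  NULL, &fence) == VK_SUCCESS) {
        vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
        vkDestroyFence(device, fence, NULL);
    }
}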

drmWaitVBlank

drmWaitVBlank is the existing interface for all things sequence related and has three modes (always nice to have one function do three things, I think). It can:

  1. Query the current vblank number
  2. Block until a specified vblank number
  3. Queue an event to be delivered at a specific vblank number

This interface has a few issues:

  • It has been kludged into supporting multiple CRTCs by taking bits from the 'type' parameter to hold a 'pipe' number, which is the index in the kernel into the array of CRTCs.

  • It has a random selection of 'int' and 'long' datatypes in the interface, making it need special helpers for 32-bit apps running on a 64-bit kernel.

  • Times are in microseconds, frame counts are 32 bits. Vulkan does everything in nanoseconds and wants 64-bits of frame counts.

For leases, figuring out the index into the kernel list of crtcs is pretty tricky -- our lease has a subset of those crtcs, so we can't actually compute the global crtc index.

drmCrtcGetSequence

int drmCrtcGetSequence(int fd, uint32_t crtcId,
                       uint64_t *sequence, uint64_t *ns);

Here's a simple new function — hand it a crtc ID and it provides the current frame sequence number and the time when that frame started (in nanoseconds).

drmCrtcQueueSequence

int drmCrtcQueueSequence(int fd, uint32_t crtcId,
                         uint32_t flags, uint64_t sequence,
                         uint64_t user_data);

struct drm_event_crtc_sequence {
    struct drm_event    base;
    __u64               user_data;
    __u64               time_ns;
    __u64               sequence;
};

This will cause a CRTC_SEQUENCE event to be delivered at the start of the specified frame sequence. That event will include the frame when the event was actually generated (in case it's late), along with the time (in nanoseconds) when that frame was started. The event also includes a 64-bit user_data value, which can be used to hold a pointer to whatever data the application wants to see in the event handler.

The 'flags' argument contains a combination of:

#define DRM_CRTC_SEQUENCE_RELATIVE      0x00000001  /* sequence is relative to current */
#define DRM_CRTC_SEQUENCE_NEXT_ON_MISS      0x00000002  /* Use next sequence if we've missed */
#define DRM_CRTC_SEQUENCE_FIRST_PIXEL_OUT   0x00000004  /* Signal when first pixel is displayed */

These are similar to the values provided for the drmWaitVBlank function, except I've added a selector for when the event should be delivered to align with potential future additions to Vulkan. Right now, the only time you can ask for is first-pixel-out, which says that the event should correspond to the display of the first pixel on the screen.
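To show how these pieces fit together, here's a small sketch (mine, not from the kernel tree) that reads the current sequence on a CRTC, queues an event two frames ahead, and then picks the event up off the DRM file descriptor. The function prototypes follow the ones quoted above; the DRM_EVENT_CRTC_SEQUENCE type name and the header locations are assumptions about the drm-lease branches.

#include <stdint.h>
#include <stdio.h>
#include <unistd.h>
#include <drm.h>
#include <xf86drm.h>

static void wait_two_frames(int fd, uint32_t crtc_id)
{
    uint64_t sequence = 0, ns = 0;

    if (drmCrtcGetSequence(fd, crtc_id, &sequence, &ns) < 0)
        return;

    /* RELATIVE: "2" means two frames from now.  NEXT_ON_MISS: if we're
       already too late, fire on the next frame rather than never. */
    drmCrtcQueueSequence(fd, crtc_id,
                         DRM_CRTC_SEQUENCE_RELATIVE |
                         DRM_CRTC_SEQUENCE_NEXT_ON_MISS,
                         2, sequence + 2 /* user_data: expected frame */);

    char buf[1024];
    ssize_t len = read(fd, buf, sizeof buf);   /* blocks until events arrive */
    ssize_t off = 0;

    while (off + (ssize_t) sizeof(struct drm_event) <= len) {
        struct drm_event *e = (struct drm_event *)(buf + off);

        if (e->type == DRM_EVENT_CRTC_SEQUENCE) {   /* assumed constant name */
            struct drm_event_crtc_sequence *s = (void *) e;
            printf("frame %llu started at %llu ns (user_data %llu)\n",
                   (unsigned long long) s->sequence,
                   (unsigned long long) s->time_ns,
                   (unsigned long long) s->user_data);
        }
        off += e->length;
    }
}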

DRM events → Vulkan fences

With the kernel able to deliver a suitable event at the next frame, all the Vulkan code needed to do was create a fence and hook it up to such an event. The existing fence code only deals with rendering fences, so I added window system interface (WSI) fencing infrastructure and extended the radv driver to be able to handle both kinds of fences within that code.

Multiple waiting threads

I've now got three places which can be waiting for a DRM event to appear:

  1. Frame sequence fences.

  2. Wait for an idle image. Necessary when you want an image to draw the next frame to.

  3. Wait for the previous flip to complete. The kernel can only queue one flip at a time, so we have to make sure the previous flip is complete before queuing another one.

Vulkan allows these to be run from separate threads, so I needed to deal with multiple threads waiting for a specific DRM event at the same time.

XCB has the same problem and goes to great lengths to manage this with a set of locking and signaling primitives so that only one thread is ever doing poll or read from the socket at time. If another thread wants to read at the same time, it will block on a condition variable which is then signaled by the original reader thread at the appropriate time. It's all very complicated, and it didn't work reliably for a number of years.

I decided to punt and just create a separate thread for processing all DRM events. It blocks using poll(2) until some events are readable, processes those and then broadcasts to a condition variable to notify any waiting threads that 'something' has happened. Each waiting thread simply checks for the desired condition and if not satisfied, blocks on that condition variable. It's all very simple looking, and seems to work just fine.
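The shape of that reader thread is roughly the following. This is an illustrative C sketch of the pattern only, not the actual radv/WSI code, and handle_drm_events stands in for the real event parsing (as in the earlier sketch).

#include <poll.h>
#include <pthread.h>
#include <stdbool.h>
#include <unistd.h>

static pthread_mutex_t wsi_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  wsi_cond = PTHREAD_COND_INITIALIZER;

/* Placeholder: the real code parses the drm_event records here and
   updates the shared per-fence / per-image / per-flip state. */
static void handle_drm_events(int fd)
{
    char buf[1024];
    (void) read(fd, buf, sizeof buf);
}

static void *event_thread(void *arg)
{
    int fd = *(int *) arg;
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    for (;;) {
        poll(&pfd, 1, -1);                  /* sleep until the fd is readable */
        pthread_mutex_lock(&wsi_lock);
        handle_drm_events(fd);              /* update shared state under the lock */
        pthread_cond_broadcast(&wsi_cond);  /* 'something' happened: wake everyone */
        pthread_mutex_unlock(&wsi_lock);
    }
    return NULL;
}

/* Each waiter (fence wait, idle-image wait, flip-complete wait) simply
   re-checks its own predicate whenever it is woken. */
static void wait_for(bool (*done)(void *), void *ctx)
{
    pthread_mutex_lock(&wsi_lock);
    while (!done(ctx))
        pthread_cond_wait(&wsi_cond, &wsi_lock);
    pthread_mutex_unlock(&wsi_lock);
}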

Code Complete, Validation Remains

At this point, all of the necessary pieces are in place for the VR application to take advantage of an HMD using only existing Vulkan extensions. Those will be automatically mapped into DRM leases and DRM events as appropriate.

The VR compositor application is working pretty well; tests with Dota 2 show occasional jerky behavior in complex scenes, so there's clearly more work to be done somewhere. I need to go write a pile of tests to independently verify that my code is working. I wonder if I'll need to wire up some kind of light sensor so I can actually tell when frames get displayed as it's pretty easy to get consistent-but-wrong answers in this environment.

Source Code

  • Linux. This is based off of a reasonably current drm-next branch from Dave Airlie. 965 commits past 4.12 RC3.

    git://people.freedesktop.org/~keithp/linux drm-lease-v3

  • X server (which includes xf86-video-modesetting). This is pretty close to master.

    git://people.freedesktop.org/~keithp/xserver drm-lease

  • RandR protocol changes

    git://people.freedesktop.org/~keithp/randrproto drm-lease

  • xcb proto (no changes to libxcb sources, but it will need to be rebuilt)

    git://people.freedesktop.org/~keithp/xcb/proto drm-lease

  • DRM library. About a dozen patches behind master.

    git://people.freedesktop.org/~keithp/drm drm-lease

  • Mesa. Branched early this month (4 June), this is pretty far from master.

    git://people.freedesktop.org/~keithp/mesa drm-lease

Planet DebianRuss Allbery: End of month haul

For some reason, June is always incredibly busy for me. It's historically the most likely month in which I don't post anything here at all except reviews (and sometimes not even that). But I'm going to tell you about what books I bought (or were given to me) on the very last day of the month to break the pattern of no journal postings in June.

Ted Chiang — Arrival (Stories of Your Life) (sff collection)
Eoin Colfer — Artemis Fowl (sff)
Philip K. Dick — The Man Who Japed (sff)
Yoon Ha Lee — Raven Strategem (sff)
Paul K. Longmore — Why I Burned My Book (nonfiction)
Melina Marchetta — The Piper's Son (mainstream)
Jules Verne — For the Flag (sff, sort of)

This is a more than usually eclectic mix.

The Chiang is his Stories of Your Life collection reissued under the title of Arrival to cash in on the huge success of the movie based on one of his short stories. I'm not much of a short fiction reader, but I've heard so many good things about Chiang that I'm going to give it a try.

The Longmore is a set of essays about disability rights that has been on my radar for a while. I finally got pushed into buying it (plus the first Artemis Fowl book and the Marchetta) because I've been reading back through every review Light has written. (I wish I were even close to that amusingly snarky in reviews, and she manages to say in a few paragraphs what usually takes me a whole essay.)

Finally, the Dick and the Verne were gifts from a co-worker from a used book store in Ireland. Minor works by both authors, but nice, old copies of the books.

Rondam RamblingsMcConnell's Monster

Like a movie monster that keeps rising from the dead long after you think it has been dispatched, the American Health Care Act, and the Senate's sequel, the Better Care Reconciliation Act, simply refuse to die.  And also like movie monsters, if they are released from the laboratory into the world, they will maim and kill thousands of innocent people.  The numbers change from week to week, but the

,

Krebs on SecuritySo You Think You Can Spot a Skimmer?

This week marks the 50th anniversary of the automated teller machine — better known to most people as the ATM or cash machine. Thanks to the myriad methods thieves have devised to fleece unsuspecting cash machine users over the years, there are now more ways than ever to get ripped off at the ATM. Think you’re good at spotting the various scams? A newly released ATM fraud inspection guide may help you test your knowledge.

The first cash machine opened for business on June 27, 1967 at a Barclays bank branch in Enfield, north London, but ATM transactions back then didn’t remotely resemble the way ATMs work today.

The first ATM was installed in Enfield, in North London, on June 27, 1967. Image: Barclays Bank.

The cash machines of 1967 relied not on plastic cards but instead on paper checks that the bank would send to customers in the mail. Customers would take those checks — which had little punched-card holes printed across the surface — and feed them into the ATM, which would then validate the checks and dispense a small amount of cash.

This week, Barclays turned the ATM at the same location into a gold color to mark its golden anniversary, dressing the machine with velvet ropes and a red carpet leading up to the machine’s PIN pad.

The location of the world's first ATM, turned to gold to commemorate the cash machine's golden anniversary. Image: Barclays Bank.

The location of the world’s first ATM, turned to gold and commemorated with a plaque to mark the cash machine’s golden anniversary. Image: Barclays Bank.

Chances are, the users of that gold ATM have little to worry about from skimmer scammers. But the rest of us practically need a skimming-specific dictionary to keep up with today’s increasingly ingenious thieves.

These days there are an estimated three million ATMs around the globe, and a seemingly endless stream of innovative criminal skimming devices has introduced us over the years to a range of terms specific to cash machine scams like wiretapping, eavesdropping, card-trapping, cash-trapping, false fascias, shimming, black box attacks, bladder bombs (pump skimmers), gas attacks, and deep insert skimmers.

Think you’ve got what it takes to spot the telltale signs of a skimmer? Then have a look at the ATM Fraud Inspection Guide (PDF) from cash machine giant NCR Corp., which briefly touches on the most common forms of ATM skimming and their telltale signs.

For example, below are a few snippets from that guide showing different cash trapping devices made to siphon bills being dispensed from the ATM.

Cash-trapping devices. Source: NCR.

As sophisticated as many modern ATM skimmers may be, most of them can still be foiled by ATM customers simply covering the PIN pad with their hands while entering their PIN (the rare exceptions here involve expensive, complex fraud devices called “PIN pad overlays”).

The proliferation of skimming devices can make a trip to any ATM seem like a stressful experience, but keep in mind that skimmers aren’t the only thing that can go wrong at an ATM. It’s a good idea to visit only ATMs that are in well-lit and public areas, and to be aware of your surroundings as you approach the cash machine. If you visit a cash machine that looks strange, tampered with, or out of place, then try to find another ATM.

You are far more likely to encounter ATM skimmers over the weekend when the bank is closed (skimmer thieves especially favor long holiday weekends when the banks are closed on Monday). Also, if you have the choice between a stand-alone, free-standing ATM and one that is installed at a fixed location (particularly a bank) opt for the fixed-location machine, which is typically more secure against physical tampering.

"Deep insert" skimmers, top. Below, an ATM "shimming" device. Source: NCR.

“Deep insert” skimmers, top. Below, ATM “shimming” devices. Source: NCR.

Sociological ImagesRace, femininity, and benign nature in a vintage tobacco ad

Flashback Friday.

In Race, Ethnicity, and Sexuality, Joane Nagel looks at how these characteristics are used to create new national identities and frame colonial expansion. In particular, White female sexuality, presented as modest and appropriate, was often contrasted with the sexuality of colonized women, who were often depicted as promiscuous or immodest.

This 1860s advertisement for Peter Lorillard Snuff & Tobacco illustrates these differences. According to Toby and Will Musgrave, writing in An Empire of Plants, the ad drew on a purported Huron legend of a beautiful white spirit bringing them tobacco.

There are a few interesting things going on here. We have the association of femininity with a benign nature: the women are surrounded by various animals (monkeys, a fox and a rabbit, among others) who appear to pose no threat to the women or to one another. The background is lush and productive.

Racialized hierarchies are embedded in the personification of the “white spirit” as a White woman, descending from above to provide a precious gift to Native Americans, similar to imagery drawing on the idea of the “white man’s burden.”

And as often occurred (particularly as we entered the Victorian Era), there was a willingness to put non-White women’s bodies more obviously on display than the bodies of White women. The White woman above is actually less clothed than the American Indian woman, yet her arm and the white cloth are strategically placed to hide her breasts and crotch. On the other hand, the Native American woman’s breasts are fully displayed.

So, the ad provides a nice illustration of the personification of nations with women’s bodies, essentialized as close to nature, but arranged hierarchically according to race and perceived purity.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

(View original at https://thesocietypages.org/socimages)

Sam VargheseSage advice

A wise old owl sat on an oak;

The more he saw, the less he spoke;

The less he spoke, the more he heard;

Why aren’t we like this wise, old bird?

CryptogramGood Article About Google's Project Zero

Fortune magazine just published a good article about Google's Project Zero, which finds and publishes exploits in other companies' software products.

I have mixed feelings about it. The project does great work, and the Internet has benefited enormously from these efforts. But as long as it is embedded inside Google, it has to deal with accusations that it targets Google competitors.

Worse Than FailureError'd: Best Null I Ever Had

"Truly the best null I've ever had. Definitely would purchase again," wrote Andrew R.

 

"Apparently, the Department of Redundancy Department got a hold of the internet," writes Ken F.

 

Berend writes, "So, if I enter 'N' does that mean I'll be instantly hit by a death ray?"

 

"Move over, fake news, Google News has this thing," wrote Jack.

 

Evan C. writes, "I honestly wouldn't put it past parents in Canada to register their yet-to-be-born children for hockey 10 years in advance."

 

"I think that a problem has, um, something, to that computer," writes Tyler Z.

 

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

LongNow10 Years Ago: Brian Eno’s 77 Million Paintings in San Francisco, 02007


Long Now co-founders Stewart Brand (center)
and Brian Eno (right) in San Francisco, 02007

Exactly a decade ago today, in June 02007, Long Now hosted the North American Premiere of Brian Eno’s 77 Million Paintings. It was a celebration of Eno’s unique generative art work, as well as the inaugural event of our newly launched Long Now Membership program.

Here’s how we described the large scale art work at the time:

Conceived by Brian Eno as “visual music”, his latest artwork 77 Million Paintings is a constantly evolving sound and imagescape which continues his exploration into light as an artist’s medium and the aesthetic possibilities of “generative software”.

We presented 77 Million Paintings over three nights at San Francisco’s Yerba Buena Center for the Arts (YBCA) on June 29 & 30 and July 1, 02007.

The Friday and Saturday shows were packed with about 900 attendees each. On Sunday we hosted a special Long Now members night. The crowd was smaller with only our newly joined charter members plus Long Now staff and Board, including the artist himself.

Long Now co-founder Brian Eno at his 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Brian Eno came in from London for the event. While he’d shown this work at the Venice Biennale, the Milan Triennale, Tokyo, London and South Africa, this was its first showing in the U.S. (or anywhere in North America). The actual presentation was a unique large scale projection created collaboratively with San Francisco’s Obscura Digital creative studio.

The installation was in a large, dark room accompanied by generative ambient Eno music. The audience could sit in chairs at the back of the room, sink into bean bags, or lie down on rugs or the floor closer to the screens. Like the Ambient Painting at The Interval or other examples of Eno’s generative visual art, a series of high resolution images slowly fade in and over each other and out again at a glacial pace. In brilliant colors, constantly transforming at a rate so slow it is difficult to track. Until you notice it’s completely different.

Close up of 77 Million Paintings at the opening in San Francisco, 02007; photo by Robin Rupe

Long Now Executive Director Alexander Rose also spoke:

About the work: Brian Eno discusses 77 Million Paintings with Wired News (excerpt):

Eno: The pieces surprise me. I have 77 Million Paintings running in my studio a lot of the time. Occasionally I’ll look up from what I’m doing and I think, “God, I’ve never seen anything like that before!” And that’s a real thrill.
Wired News: When you look at it, do you feel like it’s something that you had a hand in creating?
Eno: Well, I know I did, but it’s a little bit like if you are dealing hands of cards and suddenly you deal four aces. You know it’s only another combination that’s no more or less likely than any of the other combinations of four cards you could deal. Nonetheless, some of the combinations are really striking. You think, “Wow — look at that one.” Sometimes some combination comes up and I know it’s some result of this system that I invented, but nonetheless I didn’t imagine such a thing could be produced by it.

The exterior of Yerba Buena Center for the Arts (YBCA) has its own beautiful illumination:

There was a simultaneous virtual version of 77 Million Paintings ongoing in Second Life:

Here’s video of the Second Life version of 77 Million Paintings. Looking at it today gives you some sense of what the 02007 installation was like in person:

We brought the prototype of the 10,000 Year Clock’s Chime Generator to YBCA (this was 7 years before it was converted into a table for the opening of The Interval):

The Chime Generator was outfitted with tubular bells at the time:

10,000 Year Clock Chime Generator prototype at 77 Million Paintings in San Francisco, 02007; photo by Scott Beale

After two days open to the public, the closing night event was a performance and a party exclusively for Long Now members. Our membership program was brand new then, and many charter members joined just in time to attend the event. So a happy anniversary to all of you who are celebrating a decade as a Long Now member!

Brian Eno, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Members flew in from around the country for the event. Long Now’s Founders were all there. This began a tradition of Long Now special events with members, which have included Longplayer / Long Conversation (also at YBCA) and our two Mechanicrawl events, which explored San Francisco’s mechanical wonders.

Here are a few more photos of Long Now staff and friends who attended:

Long Now co-founder Danny Hillis, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Burning Man co-founder Larry Harvey and Long Now co-founder Stewart Brand at 77 Million Paintings opening in San Francisco; photo by Scott Beale

Kevin Kelly and Louis Rosetto, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Lori Dorn and Jeffrey & Jillian of Because We Can at the 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Long Now staff Mikl-em & Danielle Engelman at the 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Thanks to Scott Beale / Laughing Squid, Robin Rupe, and myself for the photos used above. Please mouse over each to see the photo credit.

Brian Eno at 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

More photos from Scott Beale. More photos of the event by Mikl Em.

More on the production from Obscura Digital.

,

Cory DoctorowI’ll see you this weekend at Denver Comic-Con!




I just checked in for my o-dark-hundred flight to Denver tomorrow morning for this weekend’s Denver Comic-Con, where I’m appearing for several hours on Friday, Saturday and Sunday, including panels with some of my favorite writers, like John Scalzi, Richard Kadrey, Catherynne Valente and Scott Sigler:


Friday:


* 1:30-2:30pm The Future is Here :: Room 402
How have recent near-future works fared in preparing us for the realities of the current day? What can near-future works being published now tell us about what’s coming?
Mario Acevedo, Cory Doctorow, Sherry Ficklin, Richard Kadrey, Dan Wells

* 2:30-4:30pm Signing :: Tattered Cover Signing Booth


* 4:30-5:30pm Fight The Power! Fiction For Political Change :: Room 402
Some authors incorporate political themes and beliefs into their stories. In this tumultuous political climate, even discussions of a political nature in fiction can resonate with readers, and could even be a source of change. Our panelists will talk about what they have done in their books to cause change, and the (desired) results.
Charlie Jane Anders, Cory Doctorow, John Scalzi, Emily Singer, Ausma Zehanat Khan

Saturday:

* 12:00-1:00pm The Writing Process of Best Sellers :: Room 407
The authors of the today’s best sellers discuss their technical process and offer creative insight.
Eneasz Brodski, Cory Doctorow, John Scalzi, Catherynne Valente


* 1:30-2:30pm Creating the Anti-Hero :: Room 402
Millennial both read and write YA, and they’re sculpting the genre to meet this generation’s needs. Authors of recent YA titles discuss writing for the modern YA audience and how their books contribute to the genre.
Pierce Brown, Delilah Dawson, Cory Doctorow, Melissa Koons, Catherynne Valente, Dan Wells

* 3:30-4:30pm Millennials Rising – YA Literature Today :: Room 402
Millennial both read and write YA, and they’re sculpting the genre to meet this generation’s needs. Authors of recent YA titles discuss writing for the modern YA audience and how their books contribute to the genre.
Pierce Brown, Delilah Dawson, Cory Doctorow, Melissa Koons, Catherynne Valente, Dan Wells

* 4:30-6:30pm Signing :: Tattered Cover Signing Booth

Sunday:

11:00am-12:00pm Economics, Value and Motivating your Character :: Room 407
How do money and economics figure into writing compelling characters? The search for money in our daily lives fashions our character, and in fiction can be the cause of turning good people into bad, making characters do things that are, well, out-of-character. Money is the great motivator – find out how these authors use it to shape their characters and move along the story.
Dr. Jason Arentz, Cory Doctorow, Van Aaron Hughes, Matt Parrish, John Scalzi

12:30-2:30pm Signing :: Tattered Cover Signing Booth

3:00-4:00pm Urban Science Fiction :: DCCP4 – Keystone City Room
The sensibility and feel of Urban Fantasy, without the hocus-pocus. They tend to be stories that could be tomorrow, with the direct consequences of today’s technologies, today’s societies. Urban Science Fiction authors discuss writing the world we don’t yet live in, but could!
Cory Doctorow, Sue Duff, Richard Kadrey, Cynthia Richards, Scott Sigler

(Image: Pat Loika, CC-BY)

CryptogramThe Women of Bletchley Park

Really good article about the women who worked at Bletchley Park during World War II, breaking German Enigma-encrypted messages.

CryptogramWebsites Grabbing User-Form Data Before It's Submitted

Websites are sending information prematurely:

...we discovered NaviStone's code on sites run by Acurian, Quicken Loans, a continuing education center, a clothing store for plus-sized women, and a host of other retailers. Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.

This is important because it goes against what people expect:

In yesterday's report on Acurian Health, University of Washington law professor Ryan Calo told Gizmodo that giving users a "send" or "submit" button, but then sending the entered information regardless of whether the button is pressed or not, clearly violates a user's expectation of what will happen. Calo said it could violate a federal law against unfair and deceptive practices, as well as laws against deceptive trade practices in California and Massachusetts. A complaint on those grounds, Calo said, "would not be laughed out of court."

This kind of thing is going to happen more and more, in all sorts of areas of our lives. The Internet of Things is the Internet of sensors, and the Internet of surveillance. We've long passed the point where ordinary people have any technical understanding of the different ways networked computers violate their privacy. Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs.

Worse Than FailureThe Agreement

In addition to our “bread and butter” of bad code, bad bosses, worse co-workers and awful decision-making, we always love the chance to turn out occasional special events. This time around, our sponsors at Hired gave us the opportunity to build and film a sketch.

I’m super-excited for this one. It’s a bit more ambitious than some of our previous projects, and pulled together some of the best talent in the Pittsburgh comedy community to make it happen. Everyone who worked on it- on set or off- did an excellent job, and I couldn't be happier with the results.

Once again, special thanks to Hired, who not only helped us produce this sketch, but also helps us keep the site running. With Hired, instead of applying for jobs, your prospective employer will apply to interview you. You get placed in control of your job search, and Hired provides a “talent advocate” who can provide unbiased career advice and make sure you put your best foot forward. Sign up now, and find the best opportunities for your future with Hired.

And now, our feature presentation: The Agreement

Brought to you by:

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

,

Google AdsenseAdSense now understands Urdu

Today, we’re excited to announce the addition of Urdu, a language spoken by millions in Pakistan, India and many other countries around the world, to the family of AdSense supported languages.

Interest in Urdu-language content has been growing steadily over the last few years. AdSense provides an easy way for publishers to monetize the content they create in Urdu, and helps advertisers looking to connect with the growing online Urdu audience reach them with relevant ads.

To start monetizing your Urdu content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant. 
  2. Sign up for an AdSense account 
  3. Add the AdSense code to start displaying relevant ads to your users. 
Welcome to AdSense!



Posted by: AdSense Internationalization Team

CryptogramGirl Scouts to Offer Merit Badges in Cybersecurity

The Girl Scouts are going to be offering 18 merit badges in cybersecurity, to scouts as young as five years old.

CryptogramCIA Exploits Against Wireless Routers

WikiLeaks has published CherryBlossom, the CIA's program to hack into wireless routers. The program is about a decade old.

Four good news articles. Five. And a list of vulnerable routers.

Worse Than FailureNews Roundup: The Internet of Nope

Folks, we’ve got to talk about some of the headlines about the Internet of “Things”. If you’ve been paying even no attention to that space, you know that pretty much everything getting released is some combination of several WTFs, whether in conception, implementation, and let’s not forget security.

A diagram of IoT approaches

I get it. It’s a gold-rush business. We’ve got computers that are so small, so cheap, and so power-efficient, that we can slap the equivalent of a 1980s super-computer in a toilet seat. There's the potential to create products that make our lives better, that make the world better, and could carry us into a glowing future. It just sometimes feels like that's not what anybody's actually trying to make, though. Without even checking, I’m sure you can buy a WiFi enabled fidget spinner that posts the data to a smartphone app where you can send “fidges” to your friends, bragging about your RPMs.

We need this news-roundup, because when Alexa locks you out of your house because you didn’t pay for Amazon Prime this month, we can at least say “I told you so”. You think I’m joking, but Burger King wants in on that action, with its commercial that tries to trick your Google Assistant into searching for burgers. That’s also not the first time that a commercial has triggered voice commands, and I can guarantee that it isn’t going to be the last.

Now, maybe this is sour grapes. I bought a Nest thermostat before it was cool, and now three hardware generations on, I’m not getting software updates, and there are rumors about the backend being turned off someday. Maybe Nest needs a model more like “Hive Hub”. Hive is a startup with £500M invested, making it one of the only “smart home” companies with an actual business model. Of course, that business model is that you’ll pay $39.99 per month to turn your lights on and off.

At least you know that some of that money goes to keeping your smart-home secure. I’m kidding, of course- nobody spends any effort on making these devices secure. There are many, many high profile examples of IoT hacks. You hook your toaster up to the WiFi and suddenly it’s part of a botnet swarm mining BitCoins. One recent, high-profile example is the ZigBee Protocol, which powers many smart-home systems. It’s a complete security disaster, and opens up a new line of assault- instead of tricking a target to plug a thumb drive into their network, you can now put your payload in a light bulb.

Smart-homes aside, IoT in general is breeding ground for botnets. Sure, your uncle Jack will blindly click through every popup and put his computer password in anything that looks like a password box, but at least you can have some confidence that his Windows/Mac/Linux desktop has some rudimentary protections bundled with the OS. IoT vendors apparently don’t care.

Let’s take a break, and take a peek at a fun story about resetting a computerized lock. Sure, they could have just replaced the lock, but look at all the creative hackery they had to do to get around it.

With that out of the way, let’s talk about tea. Ever since the Keurig coffee maker went big, everyone’s been trying to be “the Keurig for waffles” or “the Keurig for bacon” or “the Keurig for juice”- the latter giving us the disaster that is the Juicero. Mash this up with the Internet of Things, and you get this WiFi enabled tea-maker, which can download recipes for brewing tea off the Internet. And don’t worry, it’ll always use the correct recipe because each pod is loaded with an RFID that not only identifies which recipe to use, but ensures that you’re not using any unauthorized tea.

In addition to the “Keurig, but for $X,” there’s also the ever popular “the FitBit, but for $X.” Here’s the FitBit for desks. It allows your desk to nag you about getting up, moving around, and it’ll upload your activity to the Internet while it’s at it. I’m sure we’re all really excited for when our “activity” gets logged for future review.

Speaking of FitBits, Qualcomm just filed some patents for putting that in your workout shoes. This is actually not a totally terrible idea- I mean, by standards of that tea pot, anyway. I share it here because they’re calling it “The Internet of Shoes” which is a funny way of saying, “our marketing team just gave up”.

Finally, since we’re talking about Internet connected gadgets that serve no real purpose, Google Glass got its first software update in three years. Apparently Google hasn’t sent the Glass to a farm upstate, where it can live with Google Reader, Google Wave, Google Hangouts, and all the other projects Google got bored of.

[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!

,

Krebs on Security‘Petya’ Ransomware Outbreak Goes Global

A new strain of ransomware dubbed “Petya” is worming its way around the world with alarming speed. The malware is spreading using a vulnerability in Microsoft Windows that the software giant patched in March 2017 — the same bug that was exploited by the recent and prolific WannaCry ransomware strain.

The ransom note that gets displayed on screens of Microsoft Windows computers infected with Petya.

According to multiple news reports, Ukraine appears to be among the hardest hit by Petya. The country’s government, some domestic banks and largest power companies all warned today that they were dealing with fallout from Petya infections.

Danish transport and energy firm Maersk said in a statement on its Web site that “We can confirm that Maersk IT systems are down across multiple sites and business units due to a cyber attack.” In addition, Russian energy giant Rosneft said on Twitter that it was facing a “powerful hacker attack.” However, neither company referenced ransomware or Petya.

Security firm Symantec confirmed that Petya uses the “Eternal Blue” exploit, a digital weapon that was believed to have been developed by the U.S. National Security Agency and in April 2017 leaked online by a hacker group calling itself the Shadow Brokers.

Microsoft released a patch for the Eternal Blue exploit in March (MS17-010), but many businesses put off installing the fix. Many of those that procrastinated were hit with the WannaCry ransomware attacks in May. U.S. intelligence agencies assess with medium confidence that WannaCry was the work of North Korean hackers.

Organizations and individuals who have not yet applied the Windows update for the Eternal Blue exploit should patch now. However, there are indications that Petya may have other tricks up its sleeve to spread inside of large networks.

Russian security firm Group-IB reports that Petya bundles a tool called “LSADump,” which can gather passwords and credential data from Windows computers and domain controllers on the network.

Petya seems to be primarily impacting organizations in Europe, however the malware is starting to show up in the United States. Legal Week reports that global law firm DLA Piper has experienced issues with its systems in the U.S. as a result of the outbreak.

Through its Twitter account, the Ukrainian Cyber Police said the attack appears to have been seeded through a software update mechanism built into M.E.Doc, an accounting program that companies working with the Ukrainian government need to use.

Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley, said Petya appears to have been well engineered to be destructive while masquerading as a ransomware strain.

Weaver noted that Petya’s ransom note includes the same Bitcoin address for every victim, whereas most ransomware strains create a custom Bitcoin payment address for each victim.

Also, he said, Petya urges victims to communicate with the extortionists via an email address, while the majority of ransomware strains require victims who wish to pay or communicate with the attackers to use Tor, a global anonymity network that can be used to host Web sites which can be very difficult to take down.

“I’m willing to say with at least moderate confidence that this was a deliberate, malicious, destructive attack or perhaps a test disguised as ransomware,” Weaver said. “The best way to put it is that Petya’s payment infrastructure is a fecal theater.”

Ransomware encrypts important documents and files on infected computers and then demands a ransom (usually in Bitcoin) for a digital key needed to unlock the files. With most ransomware strains, victims who do not have recent backups of their files are faced with a decision to either pay the ransom or kiss their files goodbye.

Ransomware attacks like Petya have become such a common pestilence that many companies are now reportedly stockpiling Bitcoin in case they need to quickly unlock files that are being held hostage by ransomware.

Security experts warn that Petya and other ransomware strains will continue to proliferate as long as companies delay patching and fail to develop a robust response plan for dealing with ransomware infestations.

According to ISACA, a nonprofit that advocates for professionals involved in information security, assurance, risk management and governance, 62 percent of organizations surveyed recently reported experiencing ransomware in 2016, but only 53 percent said they had a formal process in place to address it.

Update: 5:06 p.m. ET: Added quotes from Nicholas Weaver and links to an analysis by the Ukrainian cyber police.

Rondam RamblingsThere's something very odd about the USS Fitzgerald incident

For a US Navy warship to allow itself to be very nearly destroyed by a civilian cargo ship virtually requires an epic career-ending screwup.  The exact nature of that screwup has yet to be determined, and the Navy is understandably staying very tight-lipped about it.  But they have said one thing on the record which is almost certainly false: that the collision happened at 2:30 AM local time:

CryptogramFighting Leakers at Apple

Apple is fighting its own battle against leakers, using people and tactics from the NSA.

According to the hour-long presentation, Apple's Global Security team employs an undisclosed number of investigators around the world to prevent information from reaching competitors, counterfeiters, and the press, as well as hunt down the source when leaks do occur. Some of these investigators have previously worked at U.S. intelligence agencies like the National Security Agency (NSA), law enforcement agencies like the FBI and the U.S. Secret Service, and in the U.S. military.

The information is from an internal briefing, which was leaked.

Worse Than FailureNot so DDoS

Joe K was a developer at a company that provided a SaaS Natural Language Processing system. As Chief Engineer of the Data Science Team (a term that made him feel like some sort of mad professor), his duties included coding the Data Science Service. It provided the back-end for handling the complex, heavy-lifting type of processing that had to happen in real-time. Since it was very CPU-intensive, Joe spent a lot of time trying to battle latency. But that was the least of his problems.


The rest of the codebase was a cobbled-together mess that had been coded by the NLP researchers- scientists with no background in programming or computer science. Their mantra was “If it gets us the results we need, who cares how it looks behind the scenes?” This meant Joe’s well-designed data service somehow had to interface with applications made from a pile of ugly hacks. It was difficult at times, but he managed to get the job done while also keeping CPU usage to a minimum.

One day Joe was working away when Burt, the company CEO, burst into their humble basement computer lab in an obvious tizzy. Burt rarely visited the “egghead dungeon”, as he called it, so something had to be amiss. “JOE!” he cried out. “The production data science service is completely down! Every customer we have gave me an angry call within the last ten minutes!”

Considering this was an early-stage startup with only five customers, Burt’s assertion was probably true, if misleading. “Wow, ok Burt. Let me get right on that!” Joe offered, feeling flustered. He took a look at the error logging service and there was nothing to be found. He then attempted to SSH to each of the production servers, with success. He decided to check performance on the servers and an entire string of red flags shot straight up the proverbial flag pole. Every production server was at 100% CPU usage.

“I have an effect for you, Burt, but not a cause. I’ll have to dig deeper but it almost seems like… a Denial of Service attack?” Joe offered, not believing that would actually be the case. With only five whitelisted customers able to connect, all of them using the NLP system to its fullest shouldn’t come even close to causing this.

While looking further at the server logs, Joe got an instant message from Xander, the software engineer who worked on the dashboards, “Hey Joe, I noticed prod was down… could it be related to something I’m doing?”

“Ummm… maybe? What is it you are doing exactly?” Joe replied, with a new sense of concern. Xander’s dashboard shouldn’t have any interaction with the DSS, so it seemed like an odd question. Requests to the NLP site would initially come to a front-end server, and if there was some advanced analysis that needed to happen, that server would RPC to the DSS. After the response was computed, the front-end server would log the request and response to Xander’s dashboard system so it could monitor usage stats.

“Well, the dashboard is out of sync,” Xander explained. There had been a bug causing events to not make it to the dashboard system for the past month. They would need to be added to make the dashboard accurate. This could have been a simple change to the dashboard’s database, but instead Xander decided to replay all of the actual HTTP requests to the front end. Many of those requests triggered processing on the DSS- processing which had already been done. And since it was taking a long time, Xander had batched up the resent requests and was running them from three different machines, thus providing a remarkably good simulation of a DDoS.

“STOP YOUR PROCESS IMMEDIATELY AND DO THIS THE RIGHT WAY!” Joe shot back, all caps intended.

“Ok, ok, sorry. I’ll get this cleaned up,” Xander assured Joe. Within 15 minutes, the server CPU usage returned to normal levels and everything was great again. Joe was able to get Burt off his back and return to his normal duties.

A few minutes later, Joe’s IM dinged again with a message from Xander. "Hey Joe, sorry about that, LOL. But are we 100% sure that was the problem? Should I do it again just to be sure?"

If there was a way for Joe to use instant messaging to send a virtual strangulation to Xander, he would have done it. But a “HELL NO!!!” would have to suffice.

,

Planet Linux AustraliaDavid Rowe: Codec 2 Wideband

I’m spending a month or so improving the speech quality of a couple of Codec 2 modes. I have two aims:

  1. Make the 700 bit/s codec sound better, to improve speech quality on low SNR HF channels (beneath 0dB).
  2. Develop a higher quality mode in the 2000 to 3000 bit/s range that can be used on HF channels with modest SNRs (around 10dB).

I ran some numbers on the new OFDM modem and LDPC codes, and it turns out we can get 3000 bit/s of codec data through a 2000 Hz channel at down to 7dB SNR.
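
A quick way to sanity-check that figure is the Shannon capacity of the channel. This is just my back-of-envelope sketch (the 2000 Hz and 7dB numbers come from above; everything else is an assumption, not the actual OFDM/LDPC design):

import math

# Shannon capacity of a 2000 Hz channel at 7 dB SNR
bandwidth_hz = 2000.0
snr_db = 7.0
snr_linear = 10 ** (snr_db / 10)                      # about 5.0

capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)
print(f"Shannon capacity: {capacity_bps:.0f} bit/s")  # roughly 5200 bit/s

At around 5200 bit/s of theoretical capacity, a coded 3000 bit/s payload sits comfortably below the limit, which is consistent with the modem and LDPC numbers above.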

Now 3000 bit/s is broadband for me – I’ve spent years being very frugal with my bits while I play in low SNR HF land. However, it’s still a bit low for Opus, which kicks in at 6000 bit/s. I can’t squeeze 6000 bit/s through a 2000 Hz RF channel without higher order QAM constellations, which means SNRs approaching 20dB.
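
For the 6000 bit/s case, the spectral-efficiency arithmetic looks roughly like this (a sketch with assumed numbers – the symbol rate and code rate here are illustrative, not the FreeDV modem parameters):

channel_hz = 2000.0                 # RF channel bandwidth
opus_floor_bps = 6000.0             # lowest Opus rate mentioned above

payload_bits_per_symbol = opus_floor_bps / channel_hz                  # ~3, assuming ~1 symbol/s per Hz
assumed_code_rate = 0.5                                                # heavy FEC typical on HF
coded_bits_per_symbol = payload_bits_per_symbol / assumed_code_rate    # ~6, i.e. 64-QAM territory

print(f"{payload_bits_per_symbol:.1f} payload bits/symbol, "
      f"{coded_bits_per_symbol:.1f} coded bits/symbol")

Constellations that dense need the sort of SNRs quoted above, well beyond what low SNR HF channels provide.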

So – what can I do with 3000 bit/s and Codec 2? I decided to try wideband(-ish) audio – the sort of audio bandwidth you get from Skype or AM broadcast radio. So I spent a few weeks modifying Codec 2 to work at 16 kHz sample rate, and Jean Marc gave me a few tips on using DCTs to code the bits.
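
As a rough illustration of the DCT idea (my sketch only – the band layout, band count, and truncation below are assumptions, not the actual Codec 2 code):

import numpy as np
from scipy.fftpack import dct, idct

def code_amplitudes(band_log_amps, keep=12):
    """DCT the log band amplitudes, keep the first `keep` coefficients, invert."""
    coeffs = dct(band_log_amps, type=2, norm='ortho')
    coeffs[keep:] = 0.0            # crude stand-in for quantising/discarding bits
    return idct(coeffs, type=2, norm='ortho')

# 20 log-spaced bands of a made-up spectral envelope in dB
freqs = np.geomspace(100, 8000, 20)
envelope_db = 30 - 20 * np.log10(freqs / 100.0)        # simple tilted envelope
recovered_db = code_amplitudes(envelope_db)
print(np.round(envelope_db - recovered_db, 2))         # per-band reconstruction error

Because the envelope is smooth, most of the energy lands in the low-order DCT coefficients, so throwing away the rest costs very little – which is what makes DCT coding of amplitudes attractive.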

It’s early days but here are a few samples:

  1. Original Speech – Listen
  2. Codec 2 Model, original amplitudes and phases – Listen
  3. Synthetic phase, one bit voicing, original amplitudes – Listen
  4. Synthetic phase, one bit voicing, amplitudes at 1800 bit/s – Listen
  5. Simulated analog SSB, 300-2600Hz BPF, 10dB SNR – Listen

Couple of interesting points:

  • Sample (2) is as good as Codec 2 can do; it’s the unquantised model parameters (harmonic phases and amplitudes). It’s all downhill from here as we quantise or toss away parameters.
  • In (3) I’m using a one bit voicing model; this is very much a vocoder approach and shouldn’t work this well. MBE/MELP all say you need mixed excitation. Exploring that conundrum would be a good Masters degree topic.
  • In (3) I can hear the pitch estimator making a few mistakes, e.g. around “sheet” on the female.
  • The extra 4kHz of audio bandwidth doesn’t take many more bits to encode, as the ear has a log frequency response. It’s maybe 20% more bits than 4kHz audio (see the sketch after this list).
  • You can hear some words like “well” are muddy and indistinct in the 1800 bit/s sample (4). This usually means the formant (spectral) peaks are not well defined, so we might be tossing away a little too much information.
  • The clipping on the SSB sample (5) around the words “depth” and “hours” is an artifact of the PathSim AGC. But dat noise. It gets really fatiguing after a while.
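
As promised above, here is a quick check of the “maybe 20% more bits” figure, assuming the bit budget scales with the number of roughly log-spaced (ear-like) bands above an assumed 100 Hz floor – a sketch, not how Codec 2 actually allocates bits:

import math

f_low_hz = 100.0
bands_4k = math.log(4000.0 / f_low_hz)
bands_8k = math.log(8000.0 / f_low_hz)
extra_percent = 100.0 * (bands_8k / bands_4k - 1.0)
print(f"Extra coded bandwidth in log terms: {extra_percent:.0f}%")   # about 19%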

Wideband audio is a big paradigm shift for Push To Talk (PTT) radio. You can’t do this with analog radio: 2000 Hz of RF bandwidth, 8000 Hz of audio bandwidth. I’m not aware of any wideband PTT radio systems – they all work with at best 4000 Hz of audio bandwidth. DVSI has a wideband codec, but at a much higher bit rate (8000 bits/s).

Current wideband codecs shoot for artifact-free speech (and indeed general audio signals like music). Codec 2 wideband will still have noticeable artifacts, and probably won’t like music. The big question is: will end users prefer this over SSB, or say analog FM – at the same SNR? What will 8kHz audio sound like on your HT?

We shall see. I need to spend some time cleaning up the algorithms, chasing down a few bugs, and getting it all into C, but I plan to be testing over the air later this year.

Let me know if you want to help.

Play Along

Unquantised Codec 2 with 16 kHz sample rate:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 -o - | play -t raw -r 16000 -s -2 -

With “Phase 0” synthetic phase and 1 bit voicing:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 --phase0 --postfilter -o - | play -t raw -r 16000 -s -2 -

Links

FreeDV 2017 Road Map – this work is part of the “Codec 2 Quality” work package.

Codec 2 page – has an explanation of the way Codec 2 models speech with harmonic amplitudes and phases.

CryptogramSeparating the Paranoid from the Hacked

Sad story of someone whose computer became owned by a griefer:

The trouble began last year when he noticed strange things happening: files went missing from his computer; his Facebook picture was changed; and texts from his daughter didn't reach him or arrived changed.

"Nobody believed me," says Gary. "My wife and my brother thought I had lost my mind. They scheduled an appointment with a psychiatrist for me."

But he built up a body of evidence and called in a professional cybersecurity firm. It found that his email addresses had been compromised, his phone records hacked and altered, and an entire virtual internet interface created.

"All my communications were going through a man-in-the-middle unauthorised server," he explains.

It's the "psychiatrist" quote that got me. I regularly get e-mails from people explaining in graphic detail how their whole lives have been hacked. Most of them are just paranoid. But a few of them are probably legitimate. And I have no way of telling them apart.

This problem isn't going away. As computers permeate even more aspects of our lives, it's going to get even more debilitating. And we don't have any way, other than hiring a "professional cybersecurity firm," of telling the paranoids from the victims.

CryptogramThe FAA Is Arguing for Security by Obscurity

In a proposed rule, the FAA argues that software in the Embraer S.A. Model ERJ 190-300 airplane is secure because it's proprietary:

In addition, the operating systems for current airplane systems are usually and historically proprietary. Therefore, they are not as susceptible to corruption from worms, viruses, and other malicious actions as are more-widely used commercial operating systems, such as Microsoft Windows, because access to the design details of these proprietary operating systems is limited to the system developer and airplane integrator. Some systems installed on the Embraer Model ERJ 190-300 airplane will use operating systems that are widely used and commercially available from third-party software suppliers. The security vulnerabilities of these operating systems may be more widely known than are the vulnerabilities of proprietary operating systems that the avionics manufacturers currently use.

Longtime readers will immediately recognize the "security by obscurity" argument. Its main problem is that it's fragile. The information is likely less obscure than you think, and even if it is truly obscure, once it's published you've just lost all your security.

This is me from 2014, 2004, and 2002.

The comment period for this proposed rule is ongoing. If you comment, please be polite -- they're more likely to listen to you.

Worse Than FailureCodeSOD: Plurals Dones Rights

Today, submitter Adam shows us how thoughtless language assumptions made by programmers are also hilarious language assumptions:

"So we're querying a database for data matching *title* but then need to try again with plural/singular if we didn't get anything. Removing the trailing S is bad, but what really caught my eye was how we make a word plural. Never mind any English rules or if the word is actually Greek, Chinese, or Klingon."


if ((locs == NULL || locs->Length() == 0) && (title->EndsWith(@"s") || title->EndsWith(@"S")))
{
    title->RemoveCharAt(title->Length()-1);
    locs = data->GetLocationsForWord(title);
}
else if ((locs == NULL || locs->Length() == 0) && title->Length() > 0)
{
    WCHAR c = title->CharAt(title->Length()-1);
    if (c >= 'A' && c <= 'Z')
    {
        title->Append(@"S");
    }
    else
    {
        title->Append(@"s");
    }
    locs = data->GetLocationsForWord(title);
}

Untils nexts times: ευχαριστώs &s 再见s, Hochs!

Planet Linux AustraliaOpenSTEM: Guess the Artefact! – #2

Today’s Guess the Artefact! covers one of a set of artefacts which people often find confusing to identify. We often get questions about these artefacts from students and teachers alike, so here’s a chance to test your skills of observation. Remember – all heritage and archaeological material is covered by State or Federal legislation and should never be removed from its context. If possible, photograph the find in its context and then report it to your local museum or State Heritage body (the Dept of Environment and Heritage Protection in Qld; the Office of Environment and Heritage in NSW; the Dept of Environment, Planning and Sustainable Development in ACT; Heritage Victoria; the Dept of Environment, Water and Natural Resources in South Australia; the State Heritage Office in WA and the Heritage Council – Dept of Tourism and Culture in NT).

This artefact is made of stone. It measures about 12 x 8 x 3 cm. It fits easily and comfortably into an adult’s hand. The surface of the stone is mostly smooth and rounded, it looks a little like a river cobble. However, one side – the right-hand side in the photo above – is shaped so that 2 smooth sides meet in a straight, sharpish edge. Such formations do not occur on naturally rounded stones, which tells us that this was shaped by people and not just rounded in a river. The smoothed edges meeting in a sharp edge tell us that this is ground-stone technology. Ground stone technology is a technique used by people to create smooth, sharp edges on stones. People grind the stone against other rocks, occasionally using sand and water to facilitate the process, usually in a single direction. This forms a smooth surface which ends in a sharp edge.

Neolithic Axe

Ground stone technology is usually associated with the Neolithic period in Europe and Asia. In the northern hemisphere, this technology was primarily used by people who were learning to domesticate plants and animals. These early farmers learned to grind grains, such as wheat and barley, between two stones to make flour – thus breaking down the structure of the plant and making it easier to digest. Our modern mortar and pestle is a descendant of this process. Early farmers would have noticed that these actions produced smooth and sharp edges on the stones. These observations would have led them to apply this technique to other tools which they used and thus develop the ground-stone technology. Here (picture on right) we can see an Egyptian ground stone axe from the Neolithic period. The toolmaker has chosen an attractive red and white stone to make this axe-head.

In Japan this technology is much older than elsewhere in the northern hemisphere, and ground-stone axes have been found dating to 30,000 years ago during the Japanese Palaeolithic period. Until recently these were thought to be the oldest examples of ground-stone technology in the world. However, in 2016, Australian archaeologists Peter Hiscock, Sue O’Connor, Jane Balme and Tim Maloney reported, in an article in the journal Australian Archaeology, the finding of a tiny flake of stone (just over 1 cm long and 1/2 cm wide) from a ground stone axe in layers dated to 44,000 to 49,000 years ago at the site of Carpenter’s Gap in the Kimberley region of north-west Australia. This tiny flake of stone – easily missed by anyone not paying close attention – is an excellent example of the extreme importance of ‘archaeological context’. Archaeological material that remains in its original context (known as in situ) can be dated accurately and associated with other material from the same layers, thus allowing us to understand more about the material. Anything removed from the context usually cannot be dated and only very limited information can be learnt.

The find from the Kimberley makes Australia the oldest place in the world to have ground-stone technology. The tiny chip of stone, broken off a larger ground-stone artefact, probably an axe, was made by the ancestors of Aboriginal people in the millennia after they arrived on this continent. These early Australians did not practise agriculture, but they did eat various grains, which they learned to grind between stones to make flour. It is possible that whilst processing these grains they learned to grind stone tools as well. Our artefact, shown above, is undated. It was found, totally removed from its original context, stored under an old house in Brisbane. The artefact is useful as a teaching aid, allowing students to touch and hold a ground-stone axe made by Aboriginal people in Australia’s past. However, since it was removed from its original context at some point, we do not know how old it is, or even where it came from exactly.

Our artefact is a stone tool. Specifically, it is a ground stone axe, made using technology that dates back almost 50,000 years in Australia! These axes were usually made by rubbing a hard stone cobble against rocks by the side of a creek. Water from the creek was used as a lubricant, and often sand was added as an extra abrasive. The making of ground-stone axes often left long grooves in these rocks. These are called ‘grinding grooves’ and can still be found near some creeks in the landscape today, such as in Ku-ring-gai Chase National Park in Sydney. The ground-stone axes were usually hafted using sticks and lashings of plant fibre, to produce a tool that could be used for cutting vegetation, among other uses. Other stone tools look different to the one shown above, especially those made by flaking stone; however, smooth stones should always be carefully examined in case they are also ground-stone artefacts and not just simple stones!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners July Meeting: Teaching programming using video games

Jul 15 2017 12:30
Jul 15 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Andrew Pam will be demonstrating a range of video games that run natively on Linux and explicitly include programming skills as part of the game, including SpaceChem, Infinifactory, TIS-100, Shenzhen I/O, Else Heart.Break(), Hack 'n' Slash and Human Resource Machine. He will seek feedback on the suitability of these games for teaching programming skills to non-programmers and the possibility of group play in a classroom or workshop setting.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main July 2017 Meeting

Jul 4 2017 18:30
Jul 4 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, July 4, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

Come have a drink with us and talk about Linux.  If you have something cool to show, please bring it along!

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

Cory DoctorowAudio from my NYPL appearance with Edward Snowden


Last month, I appeared onstage with Edward Snowden at the NYPL, hosted by Paul Holdengraber, discussing my novel Walkaway. The library has just posted the audio! It was quite an evening.

,

Cory DoctorowBruce Sterling reviews WALKAWAY

Bruce Sterling, Locus Magazine: Walkaway is a real-deal, generically traditional science-fiction novel; it’s set in an undated future and it features weird set design, odd costumes, fights, romances, narrow escapes, cool weapons, even zeppelins. This is the best Cory Doctorow book ever. I don’t know if it’s destined to become an SF classic, mostly because it’s so advanced and different that it makes the whole genre look archaic.

For instance: in a normal science fiction novel, an author pals up with scientists and popularizes what the real experts are doing. Not here, though. Cory Doctorow is such an Internet policy wonk that he’s ‘‘popularizing’’ issues that only he has ever thought about. Walkaway is mostly about advancing and demolishing potential political arguments that have never been made by anybody but him…

…Walkaway is what science fiction can look like under modern cultural conditions. It’s ‘‘relevant,’’ it’s full of tremulous urgency, it’s Occupy gone exponential. It’s a novel of polarized culture-war in which all the combatants fast-talk past each other while occasionally getting slaughtered by drones. It makes Ed Snowden look like the first robin in spring.

…The sci-fi awesome and the authentically political rarely mix successfully. Cory had an SF brainwave and decided to exploit a cool plot element: people get uploaded into AIs. People often get killed horribly in Walkaway, and the notion that the grim victims of political struggle might get a silicon afterlife makes their fate more conceptually interesting. The concept’s handled brilliantly, too: these are the best portrayals of people-as-software that I’ve ever seen. They make previous disembodied AI brains look like glass jars from 1950s B-movies. That’s seductively interesting for a professional who wants to mutate the genre’s tropes, but it disturbs the book’s moral gravity. The concept makes death and suffering silly.

…I’m not worried about Cory’s literary fate. I’ve read a whole lot of science fiction novels. Few are so demanding and thought-provoking that I have to abandon the text and go for a long walk.

I won’t say there’s nothing else like Walkaway, because there have been some other books like it, but most of them started mass movements or attracted strange cults. There seems to be a whole lot of that activity going on nowadays. After this book, there’s gonna be more.