Planet Russell


CryptogramAmazon Patents Measures to Prevent In-Store Comparison Shopping

Amazon has been issued a patent on security measures that prevent people from comparison shopping while in the store. It's not a particularly sophisticated patent -- it basically detects when you're using the in-store Wi-Fi to visit a competitor's site and then blocks access -- but it is an indication of how retail has changed in recent years.

What's interesting is that Amazon is on the other side of this arms race. As an online retailer, it wants people to walk into stores and then comparison shop on its site. Yes, I know it's buying Whole Foods, but it's still predominantly an online retailer. Maybe it patented this to prevent stores from implementing the technology.

It's probably not nearly that strategic. It's hard to build a business strategy around a security measure that can be defeated with cellular access.

Worse Than FailureError'd: Perfectly Logical

"Outlook can't open an attachment because it claims that it was made in Outlook, which Outlook doesn't think is installed...or something," writes Gavin.

 

Mitch wrote, "So, the problems I'm having with activating Windows 10 is that I need to install Windows 10. Of course!"

 

"I don't expect 2018 to come around," writes Adam K., "Instead we'll all be transported back to 2014!"

 

"Here I thought that the world had gone mad, but then I remembered that I had a currency converter add-on installed," writes Shahim M.

 

John S. wrote, "It's good to know that the important notices are getting priority!"

 

Michael D. wrote, "It's all fun and games until someone tries to exit the conference room while someone else is quenching their thirst."

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet DebianArturo Borrero González: Backup router/switch configuration to a git repository


Most routers/switches out there store their configuration in plain text, which is nice for backups. I’m talking about Cisco, Juniper, HPE, etc. The configuration of our routers is changed several times a day by the operators, and so far we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version they may include change-tracking mechanisms as well. However, they mostly don’t integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-full for what we required: simply fetch the plain text config and put it into a git repo.

It's worth noting that the most important drawback of not triggering the change tracking from the router/switch itself is that we have to follow a polling approach: logging into each device, fetching the plain text config, and committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the synchronous behaviour and won’t see any changes until the next cron run.
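
For illustration, the cron hook can be a single crontab entry like the following (the script path and the 30-minute interval are just placeholders, not our actual setup):

# hypothetical crontab entry: poll the devices every 30 minutes
*/30 * * * *	/usr/local/bin/backup-network-config >/dev/null 2>&1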

In most cases, we lose authorship information as well. But that is not important for us right now; it's something we will have to solve in the future.

Also, some routers/switches lack basic SSH security improvements, like public-key authentication, so we end up having to hard-code the username and password in our worker script.

Since we have several devices of the same type, we just iterate over their names.

For example, this is what we use for HP Comware devices:

#!/bin/bash
# run this script by cron

USER="git"
PASSWORD="readonlyuser"
DEVICES="device1 device2 device3 device4"

FILE="flash:/startup.cfg"
GIT_DIR="myrepo"
GIT="/srv/git/${GIT_DIR}.git"

TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
	echo "E: no temp dir created" >&2
	exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
	echo "E: no git binary" >&2
	exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
	echo "E: no scp binary" >&2
	exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
	echo "E: no sshpass binary" >&2
	exit 1
fi

# clone git repo
cd $TMP_DIR
$GIT_BIN clone $GIT
cd $GIT_DIR

for device in $DEVICES; do
	mkdir -p $device
	cd $device

	# fetch cfg
	CONN="${USER}@${device}"
	$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:${FILE} .

	# commit
	$GIT_BIN add -A .
	$GIT_BIN commit -m "${device}: configuration change" \
		-m "A configuration change was detected" \
		--author="cron <cron@example.com>"

	$GIT_BIN push -f
	cd ..
done

# cleanup
rm -rf $TMP_DIR

You should create a read-only user ‘git’ on the devices. And beware that each device model stores the config file in a different place.

For reference, on HP Comware the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit
#

On Junos/Juniper, the file you should scp is /config/juniper.conf.gz, and the script should gunzip the data before committing (a rough sketch of that step follows the config snippet below). For the read-only user, try something like this:

system {
	[...]
	login {
		[...]
		class git {
			permissions maintenance;
			allow-commands scp.*;
		}
		user git {
			uid xxx;
			class git;
			authentication {
				encrypted-password "xxx";
			}
		}
	}
}
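
A rough sketch of that gunzip step, reusing the variables from the HP Comware script above (an illustration only, not a tested recipe; only the fetch-and-uncompress part of the loop is shown):

# fetch the compressed Junos config and uncompress it before committing
CONN="${USER}@${device}"
$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:/config/juniper.conf.gz .
gunzip -f juniper.conf.gz	# leaves juniper.conf in the device directory for git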

The file to scp on HP ProCurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Getting the device controlled directly by git (i.e. commit -> git hook -> device update), or at least having the device commit the changes to git by itself. I’m open to suggestions :-)
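
For the record, here is one very rough sketch of what that hook-driven flow could look like, as a post-receive hook in the bare repo. Everything in it is hypothetical: it reuses the one-directory-per-device layout from the script above and assumes the device accepts a config upload over scp, which a read-only user like the one described here would not:

#!/bin/bash
# hypothetical post-receive hook: push changed configs back to the devices
while read oldrev newrev refname; do
	for device in $(git diff --name-only "$oldrev" "$newrev" | cut -d/ -f1 | sort -u); do
		git show "${newrev}:${device}/startup.cfg" > /tmp/startup.cfg
		scp /tmp/startup.cfg "git@${device}:flash:/startup.cfg"
	done
done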

Planet DebianElena 'valhalla' Grandi: On brokenness, the live installer and being nice to people


This morning I read this post: blog.einval.com/2017/06/22#tro.

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation: people did multiple tests as the (quite large number of) images were being built, and a fix was released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.


Planet DebianSteve McIntyre: -1, Trolling

Here's a nice comment I received by email this morning. I guess somebody was upset by my last post?

From: Tec Services <tecservices911@gmail.com>
Date: Wed, 21 Jun 2017 22:30:26 -0700
To: steve@einval.com
Subject: its time for you to retire from debian...unbelievable..your
         the quality guy and fucked up the installer!

i cant ever remember in the hostory of computing someone releasing an installer
that does not work!!

wtf!!!

you need to be retired...due to being retarded..

and that this was dedicated to ian...what a
disaster..you should be ashames..he is probably roling in his grave from shame
right now....

It's nice to be appreciated.

Planet Linux AustraliaChris Neugebauer: Hire me!

tl;dr: I’ve recently moved to the San Francisco Bay Area and received my US Work Authorization, so now I’m looking for somewhere to work. I have a résumé and an e-mail address!

I’ve worked a lot in Free and Open Source Software communities over the last five years, both in Australia and overseas. While much of my focus has been on the Python community, I’ve also worked more broadly in the Open Source world. I’ve been doing this community work entirely as a volunteer, most of the time working in full-time software engineering jobs which haven’t related to my work in the Open Source world.

It’s pretty clear that I want to move into a job where I can use the skills I’ve been building as a volunteer over the last few years, and put them to good use both for my company and for the communities I serve.

What I’m interested in doing fits best into a developer advocacy or community management sort of role. Working full-time on helping people in tech be better at what they do would be just wonderful. That said, my background is in code, and working in software engineering with a like-minded company would also be pretty exciting (better still if I get to write a lot of Python).

  • Something with a strong developer relations element. I enjoy working with other developers, and I love having the opportunity to get them excited about things that I’m excited about. As a conference organiser, I’m very aware of the line between terrible marketing shilling, and genuine advocacy by and for developers: I want to help whoever I work for end up on the right side of that line.
  • Either in San Francisco, North of San Francisco, or Remote-Friendly. I live in Petaluma, a lovely town about 50 minutes north of San Francisco, with my wonderful partner, Josh. We’re pretty happy up here, but I’m happy to regularly commute as far as San Francisco. I’ll consider opportunities in other cities, but they’d need to primarily be remote.
  • Relevant to Open Source. The Open Source world is where my experience is, it’s where I know people, and it’s the world where I can be most credible. This doesn’t mean I need to be working on open source itself, but I’d love to be able to show up at OSCON or linux.conf.au and be excited to have my company’s name on my badge.

Why would I be good at this? I’ve been working on building and interacting with communities of developers, especially in the Free and Open Source Software world, for the last five years.

You can find a complete list of what I’ve done in my résumé, but here’s a selection of what I think’s notable:

  • Co-organised two editions of PyCon Australia, and led the linux.conf.au 2017 team. I’ve led PyCon AU from inception, to bidding, to successful execution, two years in a row. As the public face of PyCon AU, I made sure that the conference had the right people interested in speaking, and that we had many from the Australian Python community interested in attending. I took what I learned at PyCon AU and applied it to run linux.conf.au 2017, where our CFP attracted its largest ever response (beating the previous record by more than 30%).
  • Developed Registrasion, an open source conference ticket system. I designed and developed a ticket sales system that allowed for automation of the most significant time sinks that linux.conf.au and PyCon Australia registration staff had experienced in previous years. Registrasion was Open Sourced, and several other conferences are considering adopting it.
  • Given talks at countless open source and developer events, both in Australia, and overseas. I’ve presented at OSCON, PyCons in five countries, and myriad other conferences. I’ve presented on a whole lot of technical topics, and I’ve recently started talking more about the community-level projects I’ve been involved with.
  • Designed, ran, and grew PyCon Australia’s outreach and inclusion programmes. Each year, PyCon Australia has offered upwards of $10,000 (around 10% of conference budget) in grants to people who otherwise wouldn’t be able to attend the conference: these are not just speakers, but also people whose presence would improve the conference just by being there. I’ve led a team to assess applications for these grants, and led our outreach efforts to make sure we find the right people to receive them.
  • Served as a council member for Linux Australia. Linux Australia is the peak body for Open Source communities in Australia, as well as underwriting the region’s more popular Open Source and Developer conferences. In particular, I led a project to design governance policies to help make sure the conferences we underwrite are properly budgeted and planned.

So, if you know of anything going at the moment, I’d love to hear about it. I’m reachable by e-mail (mail@chrisjrn.com) but you can also find me on Twitter (@chrisjrn), or if you really need to, LinkedIn.

TEDAn updated design for TED Talks


It’s been a few years since the TED Talks video page was last updated, but a new design begins rolling out this week. The update aims to provide a simple, straightforward viewing experience for you while surfacing other ideas worth spreading that you might also like.

A few changes to highlight …

More talks to watch

Today there are about 2,500 TED Talks in the catalog, and each is unique. However, most of them are connected to other talks in some way — on similar topics, or given by the same speaker. Think of it as part of a conversation. That’s why, in our new design, it’s easier to see other talks you might be interested in. Those smart recommendations are shown along the right side of the screen.

As our library of talks grows, the updated design will help you discover the most relevant talks.

Beyond the video: More brain candy

Most ideas are rich in nuanced information far beyond what an 18-minute talk can contain. That’s why we collected deeper content around the idea for you to explore — like books by the speaker, articles relating to the talk, and ways to take action and get involved — in the Details section.

Many speakers provide annotations for viewers (now with clickable time codes that take you right to the relevant moment in the video) as well as their own resources and personal recommendations. You can find all of that extra content in the Footnotes and Reading list sections.

Transcripts, translations, and subtitling

Reaching a global community has always been a foundation of TED’s mission, so working to improve the experience for our non-English speaking viewers is an ongoing effort. This update gives you one-click access to our most requested subtitles (when available), displayed in their native endonyms. We’ve also improved the subtitles themselves, making the text easier for you to read across languages.

What’s next?

While there are strong visual differences, this update is just one step in a series of improvements we plan to make to how you view TED Talks on TED.com. We’d appreciate your feedback to measure our progress and influence our future changes!


LongNowThe Nuclear Bunker Preserving Movie History

During the Cold War, this underground bunker in Culpeper, Virginia was where the government would have taken the president if a nuclear war broke out. Now, the Library of Congress is using it to preserve all manner of films, from Casablanca to Harry Potter. The oldest films were made on nitrate, a fragile and highly combustible film base that shares the same chemical compound as gunpowder. Great Big Story takes us inside the vault, and introduces us to archivist George Willeman, the man in charge of restoring and preserving the earliest (and most incendiary) motion pictures.

Krebs on SecurityWhy So Many Top Hackers Hail from Russia

Conventional wisdom says one reason so many hackers seem to hail from Russia and parts of the former Soviet Union is that these countries have traditionally placed a much greater emphasis on teaching information technology in middle and high schools than educational institutions in the West have, and yet they lack a Silicon Valley-like pipeline to help talented IT experts channel their skills into high-paying jobs. This post explores the first part of that assumption by examining a breadth of open-source data.

The supply side of that conventional wisdom seems to be supported by an analysis of educational data from both the U.S. and Russia, which indicates there are several stark and important differences between how American students are taught and tested on IT subjects versus their counterparts in Eastern Europe.


Compared to the United States, quite a few more high school students in Russia choose to specialize in information technology subjects. One way to measure this is to look at the number of high school students in the two countries who opt to take the advanced placement exam for computer science.

According to an analysis (PDF) by The College Board, in the ten years between 2005 and 2016 a total of 270,000 high school students in the United States opted to take the national exam in computer science (the “Computer Science Advanced Placement” exam).

Compare that to the numbers from Russia: A 2014 study (PDF) on computer science (called “Informatics” in Russia) by the Perm State National Research University found that roughly 60,000 Russian students register each year to take their nation’s equivalent to the AP exam — known as the “Unified National Examination.” Extrapolating that annual 60,000 number over ten years suggests that more than twice as many people in Russia — 600,000 — have taken the computer science exam at the high school level over the past decade.

In “A National Talent Strategy,” an in-depth analysis from Microsoft Corp. on the outlook for information technology careers, the authors warn that, despite its critical and growing importance, computer science is taught in only a small minority of U.S. schools. The Microsoft study notes that although there currently are just over 42,000 high schools in the United States, only 2,100 of them were certified to teach the AP computer science course in 2011.

A HEAD START

If more people in Russia than in America decide to take the computer science exam in secondary school, it may be because Russian students are required to study the subject beginning at a much younger age. Russia’s Federal Educational Standards (FES) mandate that informatics be compulsory in middle school, with any school free to choose to include it in their high school curriculum at a basic or advanced level.

“In elementary school, elements of Informatics are taught within the core subjects ‘Mathematics’ and ‘Technology’,” the Perm University research paper notes. “Furthermore, each elementary school has the right to make [the] subject ‘Informatics’ part of its curriculum.”

The core components of the FES informatics curriculum for Russian middle schools are the following:

1. Theoretical foundations
2. Principles of computer’s functioning
3. Information technologies
4. Network technologies
5. Algorithmization
6. Languages and methods of programming
7. Modeling
8. Informatics and Society

SECONDARY SCHOOL

There also are stark differences in how computer science/informatics is taught in the two countries, as well as the level of mastery that exam-takers are expected to demonstrate in their respective exams.

Again, drawing from the Perm study on the objectives in Russia’s informatics exam, here’s a rundown of what that exam seeks to test:

Block 1: “Mathematical foundations of Informatics”,
Block 2: “Algorithmization and programming”, and
Block 3: “Information and computer technology.”

The testing materials consist of three parts.

Part 1 is a multiple-choice test with four given options, and it covers all the blocks. Relatively little time is set aside to complete this part.

Part 2 contains a set of tasks of basic, intermediate and advanced levels of complexity. These require brief answers such as a number or a sequence of characteristics.

Part 3 contains a set of tasks of an even higher level of complexity than advanced. These tasks usually involve writing a detailed answer in free form.

According to the Perm study, “in 2012, part 1 contained 13 tasks; Part 2, 15 tasks; and Part 3, 4 tasks. The examination covers the key topics from the Informatics school syllabus. The tasks with detailed answers are the most labor intensive. These include tasks on the analysis of algorithms, drawing up computer programs, among other types. The answers are checked by the experts of regional examination boards based on standard assessment criteria.”

Image: Perm State National Research University, Russia.

In the U.S., the content of the AP computer science exam is spelled out in this College Board document (PDF).

US Test Content Areas:

Computational Thinking Practices (P)

P1: Connecting Computing
P2: Creating Computational Artifacts
P3: Abstracting
P4: Analyzing Problems and Artifacts
P5: Communicating
P6: Collaborating

The Concept Outline:

Big Idea 1: Creativity
Big idea 2: Abstraction
Big Idea 3: Data and Information
Big Idea 4: Algorithms
Big idea 5: Programming
Big idea 6: The Internet
Big idea 7: Global Impact

ADMIRING THE PROBLEM

How do these two tests compare? Alan Paller, director of research for the SANS Institute — an information security education and training organization — says topics 2, 3, 4 and 6 in the Russian informatics curriculum above are the “basics” on which cybersecurity skills can be built, and they are present beginning in middle school for all Russian students.

“Very few middle schools teach this in the United States,” Paller said. “We don’t teach these topics in general and we definitely don’t test them. The Russians do and they’ve been doing this for the past 30 years. Which country will produce the most skilled cybersecurity people?”

Paller said the Russian curriculum virtually ensures kids have far more hands-on experience with computer programming and problem solving. For example, in the American AP test no programming language is specified and the learning objectives are:

“How are programs developed to help people and organizations?”
“How are programs used for creative expression?”
“How do computer programs implement algorithms?”
“How does abstraction make the development of computer programs possible?”
“How do people develop and test computer programs?”
“Which mathematical and logical concepts are fundamental to programming?”

“Notice there is almost no need to learn to program — I think they have to write one program (in collaboration with other students),” Paller wrote in an email to KrebsOnSecurity. “It’s like they’re teaching kids to admire it without learning to do it. The main reason that cyber education fails is that much of the time the students come out of school with almost no usable skills.”

THE WAY FORWARD

On the bright side, there are signs that computer science is becoming a more popular focus for U.S. high school students. According to the latest AP Test report (PDF) from the College Board, almost 58,000 Americans took the AP exam in computer science last year — up from 49,000 in 2015.

However, computer science still is far less popular than most other AP test subjects in the United States. More than a half million students opted for the English AP exam in 2016; 405,000 took English literature; almost 283,000 took AP government, while some 159,000 students went for an AP test called “Human Geography.”

A breakdown of subject specialization in the 2016 v. 2015 AP tests in the United States. Source: The College Board.

This is not particularly good news given the dearth of qualified cybersecurity professionals available to employers. ISACA, a non-profit information security advocacy group, estimates there will be a global shortage of two million cyber security professionals by 2019. A report from Frost & Sullivan and (ISC)2 prognosticates there will be more than 1.5 million cybersecurity jobs unfilled by 2020.

The IT recruitment problem is especially acute for companies in the United States. Unable to find enough qualified cybersecurity professionals to hire here in the U.S., companies increasingly are counting on hiring foreigners who have the skills they’re seeking. However, the Trump administration in April ordered a full review of the country’s high-skilled immigration visa program, a step that many believe could produce new rules to clamp down on companies that hire foreigners instead of Americans.

Some of Silicon Valley’s biggest players are urging policymakers to adopt a more forward-looking strategy to solving the skills gap crisis domestically. In its National Talent Strategy report (PDF), Microsoft said it spends 83 percent of its worldwide R&D budget in the United States.

“But companies across our industry cannot continue to focus R&D jobs in this country if we cannot fill them here,” reads the Microsoft report. “Unless the situation changes, there is a growing probability that unfilled jobs will migrate over time to countries that graduate larger numbers of individuals with the STEM backgrounds that the global economy so clearly needs.”

Microsoft is urging U.S. policymakers to adopt a nationwide program to strengthen K-12 STEM education by recruiting and training more teachers to teach it. The software giant also says states should be given more funding to broaden access to computer science in high school, and that computer science learning needs to start much earlier for U.S. students.

“In the short-term this represents an unrealized opportunity for American job growth,” Microsoft warned. “In the longer term this may spur the development of economic competition in a field that the United States pioneered.”

Planet DebianJohn Goerzen: First Experiences with Stretch

I’ve done my first upgrades to Debian stretch at this point. The results have been overall good. On the laptop my kids use, I helped my 10-year-old do it, and it worked flawlessly. On my workstation, I got a kernel panic on boot. Hmm.

Unfortunately, my system has to use the nv drivers, which leaves me with an 80×25 text console. It took some finagling (break=init in grub, then manually insmoding the appropriate stuff based on modules.dep for nouveau), but finally I got a console so I could see what was breaking. It appeared that init was crashing because it couldn’t find liblz4. A little digging showed that liblz4 is in /usr, and /usr wasn’t mounted. I’ve filed the bug on systemd-sysv for this.
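
For anyone who hasn’t had to do this: the finagling amounts to appending break=init to the kernel command line in grub, then loading the display stack by hand from the initramfs shell, something like the following (illustrative only; the exact module list and order come from reading modules.dep for nouveau on the affected kernel):

# from the initramfs shell reached via break=init; consult modules.dep
# for the full, kernel-specific dependency list
insmod /lib/modules/$(uname -r)/kernel/drivers/gpu/drm/drm.ko
insmod /lib/modules/$(uname -r)/kernel/drivers/gpu/drm/drm_kms_helper.ko
insmod /lib/modules/$(uname -r)/kernel/drivers/gpu/drm/ttm/ttm.ko
insmod /lib/modules/$(uname -r)/kernel/drivers/gpu/drm/nouveau/nouveau.ko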

I run root on ZFS, and further digging revealed that I had datasets named like this:

  • tank/hostname-1/ROOT
  • tank/hostname-1/usr
  • tank/hostname-1/var

This used to be fine. The mountpoint property of the usr dataset put it at /usr without incident. But it turns out that this won’t work now, unless I set ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs for some reason. So I renamed them so usr was under ROOT, and then the system booted.
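
The rename itself is a one-liner per dataset; a sketch using the dataset names above (whether var also needs to move depends on the setup):

# move the datasets underneath ROOT so they are handled with the rest of the root hierarchy
zfs rename tank/hostname-1/usr tank/hostname-1/ROOT/usr
zfs rename tank/hostname-1/var tank/hostname-1/ROOT/var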

Then I ran into samba not liking something in my bind interfaces line (to be fair, it did still say eth0 instead of br0). rpcbind was failing in postinst, though a reboot seems to have helped that. More annoying was that I had trouble logging into my system because resolv.conf was left empty (despite dns-* entries in /etc/network/interfaces and the presence of resolvconf). I eventually repaired that, and found that it kept removing my “search” line. Eventually I removed resolvconf.

Then mariadb’s postinst was silently failing. I eventually discovered it was sending info to syslog (odd), and /etc/init.d/apparmor teardown let it complete properly. It seems like there may have been an outdated /etc/apparmor.d/cache/usr.sbin.mysql out there for some reason.

Then there was XFCE. I use it with xmonad, and the session startup was really wonky. I had to zap my sessions, my panel config, etc. and start anew. I am still not entirely sure I have it right, but I at least have a usable system now.

Planet DebianDirk Eddelbuettel: nanotime 0.2.0

A new version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

Thanks to a metric ton of work by Leonardo Silvestri, the package now uses S4 classes internally allowing for greater consistency of operations on nanotime objects.

Changes in version 0.2.0 (2017-06-22)

  • Rewritten in S4 to provide more robust operations (#17 by Leonardo)

  • Ensure tz="" is treated as unset (Leonardo in #20)

  • Added format and tz arguments to nanotime, format, print (#22 by Leonardo and Dirk)

  • Ensure printing respect options()$max.print, ensure names are kept with vector (#23 by Leonardo)

  • Correct summary() by defining names<- (Leonardo in #25 fixing #24)

  • Report error on operations that are not meaningful for the type; handle NA, NaN, Inf, -Inf correctly (Leonardo in #27 fixing #26)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramNSA Insider Security Post-Snowden

According to a recently declassified report obtained under FOIA, the NSA's attempts to protect itself against insider attacks aren't going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department's inspector general completed in 2016.

[...]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of "privileged" users, who have greater power to access the N.S.A.'s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative -- called "Secure the Net" by the N.S.A. -- had some successes, it "did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data."

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 "Secure the Net" initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA's computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD's inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), "NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users." The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, "NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach."

There seem to be two possible explanations for the fact that the NSA couldn't track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA's crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

Worse Than FailureI Need More Space


Shawn W. was a newbie support tech at a small company. Just as he was beginning to familiarize himself with its operational quirks, he got a call from Jim: The Big Boss.

Dread seized Shawn. Aside from a handshake on Interview Day, the only "interaction" he'd had with Jim thus far was overhearing him tear into a different support rep about having to deal with "complicated computer crap" like changing passwords. No doubt, this call was bound to be a clinic in saintly patience.

"Tech Support," Shawn greeted. "How may—?"

"I'm out of space and I need more!" Jim barked over the line.

"Oh." Shawn mentally geared up for a memory or hard drive problem. "Did you get a warning or error mes—?"

"Just get up here and bring some more space with you!" Jim hung up.

"Oh, boy," Shawn muttered to himself.

Deciding that he was better off diagnosing the problem firsthand, Shawn trudged upstairs to Jim's office. To his pleasant surprise, he found it empty. He sank into the cushy executive-level chair. Jim hadn't been away long enough for any screensavers or lock screens to pop up, so Shawn had free rein to examine the machine.

There wasn't much to find. The only program running was a web browser, with a couple of tabs open to ESPN.com and an investment portfolio. The hardware itself was fairly new. CPU, memory, hard drive all looked fine.

"See, I'm out of space. Did you bring me more?"

Shawn glanced up to find Jim barreling toward him, steaming mug of coffee in hand. He braced himself as though facing down an oncoming freight train. "I'm not sure I see the problem yet. Can you show me what you were doing when you noticed you needed more space?"

Jim elbowed his way over to the mouse, closed the browser, then pointed to the monitor. "There! Can't you see I'm out of space?"

Indeed, Jim's desktop was full. So many shortcuts, documents, widgets, and other icons crowded the screen that the tropical desktop background was barely recognizable as such.

While staring at what resembled the aftermath of a Category 5 hurricane, Shawn debated his response. "OK, I see what you mean. Let's see if we can—"

"Can't you just get me more screen?" Jim pressed.

More screen? "You mean another monitor?" Shawn asked. "Well, yes, I could add a second monitor if you want one, but we could also organize your desktop a little and—"

"Good, get me one of those! Don't touch my icons!" Jim shooed Shawn away like so much lint. "Get out of my chair so I can get back to work."

A short while later, Shawn hooked up a second monitor to Jim's computer. This prompted a huge and unexpected grin from the boss. "I like you, you get things done. Those other guys would've taken a week to get me more space!"

Shawn nodded while stifling a snort. "Let me know if you need anything else."

Once Jim had left for the day, Shawn swung past the boss' office out of morbid curiosity. Jim had already scattered a few dozen shortcuts across his new real estate. Another lovely vacation destination was about to endure a serious littering problem.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianNorbert Preining: Signal handling in R

Recently I have been programming quite a lot in R, and today stumbled over the problem of implementing a kind of monitoring loop in R. Typically that would be an infinite loop with sleep calls, but I wanted to allow for waking up from the sleep by sending UNIX-style signals, in particular SIGINT. After some searching I found Beyond Exception Handling: Conditions and Restarts from the Advanced R book. But it didn’t really help me a lot in programming an interrupt handler.

My requirements were:

  • an interruption of the work-part should be immediately restarted
  • an interruption of the sleep-part should go immediately into the work-part

Unfortunately it seems not to be possible to ignore interrupts at all from within the R code. The best one can do is install interrupt handlers and try to repeat the code that was being executed when the interrupt happened. This is what I tried to implement with the code below. I still have to digest the documentation about conditions and restarts, and play around a lot, but at least this is an initial working version.

workfun <- function() {
  i <- 1
  do_repeat <- FALSE
  while (TRUE) {
    message("begin of the loop")
    withRestarts(
      {
        # do all the work here
        cat("Entering work part i =", i, "\n");
        Sys.sleep(10)
        i <- i + 1
        cat("finished work part\n")
      }, 
      gotSIG = function() { 
        message("interrupted while working, restarting work part")
        do_repeat <<- TRUE
        NULL
      }
    )
    if (do_repeat) {
      cat("restarting work loop\n")
      do_repeat <- FALSE
      next
    } else {
      cat("between work and sleep part\n")
    }
    withRestarts(
      {
        # do the sleep part here
        cat("Entering sleep part i =", i, "\n")
        Sys.sleep(10)
        i <- i + 1
        cat("finished sleep part\n")
      }, 
      gotSIG = function() {
        message("got work to do, waking up!")
        NULL
      }
    )
    message("end of the loop")
  }
}

cat("Current process:", Sys.getpid(), "\n");

withCallingHandlers({
    workfun()
  },
  interrupt = function(e) {
    invokeRestart("gotSIG")
  })
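
To poke the loop from another terminal, send SIGINT to the PID printed at startup (the PID below is just a placeholder):

kill -INT 12345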

While not perfect, I guess I have to live with this method for now.

CryptogramCeramic Knife Used in Israel Stabbing

I have no comment on the politics of this stabbing attack, and only note that the attacker used a ceramic knife -- that will go through metal detectors.

I have used a ceramic knife in the kitchen. It's sharp.

EDITED TO ADD (6/22): It looks like the knife had nothing to do with the attack discussed in the article.

Don MartiStuff I'm thankful for

I'm thankful that the sewing machine was invented a long time ago, not today. If the sewing machine were invented today, most sewing tutorials would be twice as long, because all the thread would come in proprietary cartridges, and you would usually have to hack the cartridge to get the type of thread you need in a cartridge that works with your machine.

Don Marti1. Write open source. 2. ??? 3. PROFIT

Studies keep showing that open source developers get paid more than people who develop software but do not contribute to open source.

Good recent piece: Tabs, spaces and your salary - how is it really? by Evelina Gabasova.

But why?

Is open source participation a way to signal that you have skills and are capable of cooperation with others?

Is open source a way to build connections and social capital so that you have more awareness of new job openings and can more easily move to a higher-paid position?

Does open source participation just increase your skills so that you do better work and get paid more for it?

Are open source codebases a complementary good to open source maintenance programming, so that a lower price for access to the codebase tends to drive up the price for maintenance programming labor?

Is "we hire open source people" just an excuse for bias, since the open source scene at least in the USA is less diverse than the general pool of programming job applicants?

Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.3 (and 0.2.2)

A new minor version 0.2.3 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

This version ensures that we set the TZDIR environment variable correctly on the old dreaded OS that does not come with proper timezone information---an issue which had come up while preparing the next (and awesome, trust me) release of nanotime. It also appears that I failed to blog about 0.2.2, another maintenance release, so changes for both are summarised next.

Changes in version 0.2.3 (2017-06-19)

  • On Windows, the TZDIR environment variable is now set in .onLoad()

  • Replaced init.c with registration code inside of RcppExports.cpp thanks to Rcpp 0.12.11.

Changes in version 0.2.2 (2017-04-20)

  • Synchronized with upstream CCTZ

  • The time_point object is instantiated explicitly for nanosecond use which appears to be required on macOS

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianJoey Hess: DIY professional grade solar panel installation

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help.

house with 4 solar panels on roof

I had three goals for this install:

  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.

My main concerns were, would I be able to find the rafters when installing the rails, and would the 5x3 foot panels be too unwieldy to get up on the roof by myself.

I was able to find the rafters, without needing a stud finder, after I removed the roof's vent caps, which exposed them. The shingles were on straight enough that I could follow the lines down and drill into the rafter on the first try every time. And I got the rails on spaced well and straight, although I could have spaced the FlashFeet out better (oops).

My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted.

Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there! After recovering from an (unrelated) achilles tendon strain, I got the panels installed today.

Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up.

roof next to the ground with a couple of cinderblock steps

The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But I MacGyvered a solution: attaching temporary clamps before bringing a panel up stopped it from sliding down while I was attaching it.

clamp temporarily attached to side of panel

I also finished the outside wiring today. Including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing.

some pipe with 5 wires running through it, attached to the side of the roof

While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to an MPPT controller, and running a single 150 volt cable to it.

Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...

CryptogramIs Continuing to Patch Windows XP a Mistake?

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it's not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing -- stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP's support life cycle for another few years and waited for attrition to shrink Windows XP's userbase to irrelevant levels. Or it could have claimed that this case is somehow "special," releasing a patch while still claiming that Windows XP isn't supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft's assertion that Windows XP isn't supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It's hard to say how that was possibly worth it.

This is a hard trade-off, and it's going to get much worse with the Internet of Things. Here's me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

Planet DebianReproducible builds folks: Reproducible Builds: week 112 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 11 and Saturday June 17 2017:

Upcoming events

Upstream patches and bugs filed

Reviews of unreproducible packages

1 package review has been added, 19 have been updated and 2 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Edmund Grimley Evans (1)

diffoscope development

tests.reproducible-builds.org

As you might have noticed, Debian stretch was released last week. Since then, Mattia and Holger renamed our testing suite to stretch and added a buster suite so that we keep our historic results for stretch visible and can continue our development work as usual. In this sense, happy hacking on buster; may it become the best Debian release ever and hopefully the first reproducible one!

  • Vagrant Cascadian:
  • Valerie Young: Add highlighting in navigation for the new nodes health pages.
  • Mattia Rizzolo:
    • Do not dump database ACL in the backups.
    • Deduplicate SSLCertificateFile directive into the common-directives-ssl macro
    • Apache: t.r-b.o: redirect /testing/ to /stretch/
    • db: s/testing/stretch/g
    • Start adding code to test buster...
  • Holger Levsen:
    • Update README.infrastructure to explain who has root access where.
    • reproducible_nodes_info.sh: correctly recognize zero builds per day.
    • Add build nodes health overview page, then split it in three: health overview, daily munin graphs and weekly munin graphs.
    • reproducible_worker.sh: improve handling of systemctl timeouts.
    • reproducible_build_service: sleep less and thus restart failed workers sooner.
    • Replace ftp.(de|uk|us).debian.org with deb.debian.org everywhere.
    • Performance page: also show local problems with _build_service.sh (which are autofixed after a maximum of 133.7 minutes).
    • Rename nodes_info job to html_nodes_info.
    • Add new node health check jobs, split off from maintenance jobs, run every 15 minutes.
      • Add two new checks: 1. for correct future (2019 is incorrect atm, and we sometimes got that). 2.) for writeable /tmp (sometimes happens on borked armhf nodes).
    • Add jobs for testing buster.
    • s/testing/stretch/g in all the code.
    • Finish the code to deal with buster.
    • Teach jessie and Ubuntu 16.04 how to debootstrap buster.

Axel Beckert is currently in the process of setting up eight LeMaker HiKey960 boards. These boards were sponsored by Hewlett Packard Enterprise and will be hosted by the SOSETH students association at ETH Zurich. Thanks to everyone involved here and also thanks to Martin Michlmayr and Steve Geary who initiated getting these boards to us.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Sociological ImagesOn “voluntary conformism,” or how we use our freedom to fit in

Originally posted at Montclair Socioblog.

“Freedom of opinion does not exist in America,” said de Tocqueville nearly two centuries ago. He might have held the same view today.

But how could a society that so values freedom and individualism be so demanding of conformity?  I had blogged about this in 2010 with references to old sitcoms, but for my class this semester I needed something more recent. Besides, Cosby now carries too much other baggage. ABC’s “black-ish”* came to the rescue.

The idea I was offering in class was, first, that our most cherished American values can conflict with one another. For example, our desire for family-like community can clash with our value on independence and freedom. Second, the American solution to this conflict between individual and group is often what Claude Fischer calls “voluntarism.”  We have freedom – you can voluntarily choose which groups to belong to. But once you choose to be a member, you have to conform.  The book I had assigned my class (My Freshman Year by Rebekah Nathan*) uses the phrase “voluntary conformism.”

In a recent episode of “black-ish,” the oldest daughter, Zoey, must choose which college to go to. She has been accepted at NYU, Miami, Vanderbilt, and Southern Cal. She leans heavily towards NYU, but her family, especially her father Dre, want her to stay close to home. The conflict is between Family – family togetherness, community – and Independence. If Zoey goes to NYU, she’ll be off on her own; if she stays in LA, she’ll be just a short drive from her family. New York also suggests values on Achievement, Success, even Risk-taking (“If I can make it there” etc.)

Zoey decides on NYU, and her father immediately tries to undermine that choice, reminding her of how cold and dangerous it will be. It’s typical sitcom-dad buffoonery, and his childishness tips us off that this position, imposing his will, is the wrong one. Zoey, acting more mature, simply goes out and buys a bright red winter coat.

The argument for Independence, Individual Choice, and Success is most clearly expressed by Pops (Dre’s father, who lives with them), and it’s the turning point in the show. Dre and his wife are complaining about the kids growing up too fast. Pops says, “Isn’t this what you wanted? Isn’t this why you both worked so hard — movin’ to this White-ass neighborhood, sendin’ her to that White-ass school so she could have all these White-ass opportunities? Let. Her. Go.”

That should be the end of it. The final scene should be the family bidding a tearful goodbye to Zoey at LAX. But a few moments later, we see Zoey talking to her two younger siblings (8-year old twins – Jack and Diane). They remind her of how much family fun they have at holidays. Zoey has to tell them that New York is far, so she won’t be coming back till Christmas – no Thanksgiving, no Halloween.

Jack reminds her about the baby that will arrive soon. “He won’t even know you.”

In the next scene, Zoey walks into her parents room carrying the red winter coat. “I need to return this.”

“Wrong size?” asks her father.

“Wrong state.”

She’s going to stay in LA and go to USC.

Over a half-century ago, David McClelland wrote that a basic but unstated tenet of American culture is: “I want to freely choose to do what others expect me to do.” Zoey has chosen to do what others want her to do – but she has made that individual choice independently. It’s “voluntary conformism,” and it’s the perfect American solution (or at least the perfect American sitcom solution).

* For those totally unfamiliar with the show, the premise is this: Dre Johnson, a Black man who grew up in a working-class Black neighborhood of LA, has become a well-off advertising man, married a doctor (her name is Rainbow, or usually Bow), and moved to a big house in an upscale neighborhood. They have four children, and the wife is pregnant with a fifth.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramThe Dangers of Secret Law

Last week, the Department of Justice released 18 new FISC opinions related to Section 702 as part of an EFF FOIA lawsuit. (Of course, they don't mention EFF or the lawsuit. They make it sound as if it was their idea.)

There's probably a lot in these opinions. In one Kafkaesque ruling, a defendant was denied access to the previous court rulings that were used by the court to decide against it:

...in 2014, the Foreign Intelligence Surveillance Court (FISC) rejected a service provider's request to obtain other FISC opinions that government attorneys had cited and relied on in court filings seeking to compel the provider's cooperation.

[...]

The provider's request came up amid legal briefing by both it and the DOJ concerning its challenge to a 702 order. After the DOJ cited two earlier FISC opinions that were not public at the time -- one from 2014 and another from 2008 -- the provider asked the court for access to those rulings.

The provider argued that without being able to review the previous FISC rulings, it could not fully understand the court's earlier decisions, much less effectively respond to DOJ's argument. The provider also argued that because attorneys with Top Secret security clearances represented it, they could review the rulings without posing a risk to national security.

The court disagreed in several respects. It found that the court's rules and Section 702 prohibited the documents release. It also rejected the provider's claim that the Constitution's Due Process Clause entitled it to the documents.

This kind of government secrecy is toxic to democracy. National security is important, but we will not survive if we become a country of secret court orders based on secret interpretations of secret law.

Worse Than FailureCodeSOD: A Lazy Cat

The innermost circle of Hell, as we all know, is trying to resolve printer driver issues for all eternity. Ben doesn’t work with the printers that we mere mortals deal with on a regular basis, though. He runs a printing press, three stories of spinning steel and plates and ink and rolls of paper that could crush a man.

Like most things, the press runs Linux: a highly customized, modified version of Linux. It’s a system that needs to be carefully configured, as “disaster recovery” has a slightly different meaning on this kind of heavy equipment. The documentation, while thorough and mostly clear, was obviously prepared by someone who speaks English as a second language. Thus, Ben wanted to check the shell scripts to better understand what they did.

The first thing he caught was that each script started with variable declarations like this:

GREP="/bin/grep"
CAT="/bin/cat"

In some cases, there were hundreds of such variable declarations because, presumably, someone doesn’t trust the PATH variable.

Now, it’s funny we bring up cat, as a common need in these scripts is to send a file to STDOUT. You’d think that cat is just the tool for the job, but you’d be mistaken. You need a shell function called cat_file:

# function send an file to STDOUT
#
# Usage: cat_file <Filename>
#

function cat_file ()
{
        local temp
        local error
        error=0
        if [ $# -ne 1 ]; then
                temp=""
                error=1
        else
                if [ -e ${1} ]; then
                        temp="`${CAT} ${1}`"
                else
                        temp=""
                        error=1
                fi
        fi
        echo "${temp}"
        return $((error))
}

This ‘belt and suspenders’ around cat ensures that you called it with parameters, that the parameters exist, and failing that, it… well… fails. Much like cat would, naturally. This gives you the great advantage, however, that instead of writing code like this:

dev="`cat /proc/dev/net | grep eth`"

You can instead write code like this:

dev="`cat_file /proc/dev/net | ${GREP} eth`"

Much better, yes?

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

Planet DebianVincent Bernat: IPv4 route lookup on Linux

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).


During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision. Some possible options are:

  • local delivery (RTN_LOCAL),
  • forwarding to a supplied next hop (RTN_UNICAST),
  • silent discard (RTN_BLACKHOLE).

Since 2.6.39, Linux stores routes in a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained, but it was removed [1] in Linux 3.6.

Route lookup in a trie

Looking up a route in a routing table means finding the most specific prefix matching the requested destination. Let’s assume the following routing table:

$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7  dev out3 weight 1
        nexthop via 203.0.113.9  dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1

Here are some examples of lookups and the associated results:

Destination IP   Next hop
192.0.2.49       203.0.113.3 via out1
192.0.2.50       203.0.113.3 via out1
192.0.2.51       203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200      203.0.113.5 via out2

A common structure for route lookup is the trie, a tree structure where each node’s key has its parent’s key as a prefix.

Lookup with a simple trie

The following trie encodes the previous routing table:

Simple routing trie

For each node, the prefix is known by its path from the root node and the prefix length is the current depth.

A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child. Otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking for 192.0.2.50, we will find the result in the corresponding leaf (at depth 32). However for 192.0.2.51, we will reach 192.0.2.50/31 but there is no second child. Therefore, we backtrack until the 192.0.2.0/25 routing entry.

Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (due to maximum depth being capped to 32).
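To make the walk described above concrete, here is a minimal sketch in Python of longest-prefix matching in a plain binary trie. It is an illustration only, not the kernel code: the Node class, the insert/lookup helpers and the next-hop strings are made up for this example, and instead of explicit backtracking it simply remembers the most specific entry seen on the way down, which yields the same result.

import ipaddress

class Node:
    def __init__(self):
        self.children = [None, None]   # child for bit 0 and child for bit 1
        self.entry = None              # next hop if a route terminates here

def insert(root, prefix, plen, nexthop):
    node = root
    for depth in range(plen):
        bit = (prefix >> (31 - depth)) & 1
        if node.children[bit] is None:
            node.children[bit] = Node()
        node = node.children[bit]
    node.entry = nexthop

def lookup(root, addr):
    node, best = root, root.entry      # the default route sits at the root
    for depth in range(32):
        node = node.children[(addr >> (31 - depth)) & 1]
        if node is None:
            break                      # dead end: fall back to the best match
        if node.entry is not None:
            best = node.entry          # most specific prefix seen so far
    return best

root = Node()
insert(root, int(ipaddress.ip_address("0.0.0.0")), 0, "203.0.113.5 via out2")
insert(root, int(ipaddress.ip_address("192.0.2.0")), 25, "203.0.113.7 or 203.0.113.9 (ECMP)")
insert(root, int(ipaddress.ip_address("192.0.2.50")), 32, "203.0.113.3 via out1")
print(lookup(root, int(ipaddress.ip_address("192.0.2.51"))))   # falls back to the /25

Looking up 192.0.2.51 finds no exact /32 leaf and falls back to the 192.0.2.0/25 entry, as in the backtracking example above.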

Quagga is an example of routing software still using this simple approach.

Lookup with a path-compressed trie

In the previous example, most nodes only have one child. This leads to a lot of unneeded bitwise comparisons and memory is also wasted on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:

Patricia trie

Since some bits have been ignored, on a match, a final check is executed to ensure all bits from the found entry match the input IP address. If not, we must act as if the entry wasn’t found (and backtrack to find a matching prefix). The following figure shows two IP addresses matching the same leaf:

Lookup in a Patricia trie

The reduction in the average depth of the tree compensates for the need to handle those false positives. Insertion and deletion of a routing entry are still easy enough.
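As a rough illustration of that final check, here is a small Python helper, assuming the same 32-bit integer representation as in the previous sketch; prefix_matches and the leaf fields are invented names, not kernel structures.

def prefix_matches(prefix, plen, addr):
    # True if the first plen bits of addr equal the first plen bits of prefix.
    if plen == 0:
        return True
    mask = (0xFFFFFFFF << (32 - plen)) & 0xFFFFFFFF
    return (addr & mask) == (prefix & mask)

# On a hit, the lookup would then do something like:
#   if not prefix_matches(leaf.prefix, leaf.plen, addr):
#       ...treat it as a miss and fall back to a shorter matching prefix...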

Many routing systems use Patricia trees.

Lookup with a level-compressed trie

In addition to path compression, level compression [2] detects parts of the trie that are densely populated and replaces them with a single node and an associated vector of 2^k children. This node will handle k input bits instead of just one. For example, here is a level-compressed version of our previous trie:

Level-compressed trie

Such a trie is called an LC-trie or LPC-trie and offers higher lookup performance compared to a radix tree.

A heuristic is used to decide how many bits a node should handle. On Linux, if the ratio of non-empty children to all children would be above 50% when the node handles an additional bit, the node gets this additional bit. On the other hand, if the current ratio is below 25%, the node loses the responsibility of one bit. Those values are not tunable.

Insertion and deletion become more complex, but lookup times are also improved.
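The two ideas can be sketched in a few lines of Python. This is only a simplified model of the behaviour described above: child_index, fill_ratio and wanted_bits are invented names, and the kernel estimates the post-inflation fill more precisely (using its counters of "full children") than this toy version does.

def child_index(addr, pos, bits):
    # Extract `bits` bits of a 32-bit address, starting at bit `pos` (MSB first).
    return (addr >> (32 - pos - bits)) & ((1 << bits) - 1)

# With a node handling `bits` input bits, one lookup step is a single vector
# access: candidate = node.children[child_index(addr, pos, bits)]

def fill_ratio(children):
    used = sum(1 for c in children if c is not None)
    return used / len(children)

def wanted_bits(node, k, used_after_inflate):
    # `used_after_inflate` is an estimate of how many of the 2**(k+1) slots
    # would be non-empty if the node consumed one more bit. The thresholds
    # mirror the 50%/25% rule described above.
    if used_after_inflate >= 0.50 * 2 ** (k + 1):
        return k + 1                   # inflate: handle one more bit
    if fill_ratio(node.children) < 0.25:
        return k - 1                   # halve: handle one bit less
    return k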

Implementation in Linux

The implementation for IPv4 in Linux exists since 2.6.13 (commit 19baf839ff4a) and is enabled by default since 2.6.39 (commit 3630b7c050d9).

Here is the representation of our example routing table in memory [3]:

Memory representation of a trie

There are several structures involved, notably struct key_vector for the trie nodes (embedded in a struct tnode for internal nodes), struct fib_alias for the routes attached to a leaf, and struct fib_info for the shared next-hop information.

The trie can be retrieved through /proc/net/fib_trie:

$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]

For internal nodes, the numbers after the prefix are:

  1. the number of bits handled by the node,
  2. the number of full children (they only handle one bit),
  3. the number of empty children.

Moreover, if the kernel was compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat [4]:

$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]

When a routing table is very dense, a node can handle many bits. For example, a densely populated routing table with 1 million entries packed in a /12 can have one internal node handling 20 bits. In this case, route lookup is essentially reduced to a lookup in a vector.

The following graph shows the number of internal nodes used relative to the number of routes for different scenarios (routes extracted from an Internet full view, /32 routes spread over 4 different subnets with various densities). When routes are densely packed, the number of internal nodes is quite limited.

Internal nodes and null pointers

Performance

So how performant is a route lookup? The maximum depth stays low (about 6 for a full view), so a lookup should be quite fast. With the help of a small kernel module, we can accurately benchmark [5] the fib_lookup() function:

Maximum depth and lookup time

The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and the lookup times are fast.

When forwarding at 10 Gbps, the time budget for a packet would be about 50 ns (a 64-byte packet is 512 bits, and 512 bits / 10 Gbps ≈ 51 ns). Since this is also the time needed for the route lookup alone in some cases, we wouldn’t be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores.

Another interesting figure is the time it takes to insert all those routes into the kernel. Linux is also quite efficient in this area since you can insert 2 million routes in less than 10 seconds:

Insertion time

Memory usage

The memory usage is available directly in /proc/net/fib_triestat. The statistic provided doesn’t account for the fib_info structures, but you should only have a handful of them (one for each possible next hop). As you can see on the graph below, the memory use is linear in the number of routes inserted, whatever the shape of the routes.

Memory usage

The results are quite good. With only 256 MiB, about 2 million routes can be stored!

Routing rules

Unless configured without CONFIG_IP_MULTIPLE_TABLES, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Linux will first look for a match in the local table. If it doesn’t find one, it will look in the main table and, as a last resort, in the default table.

Builtin tables

The local table contains routes for local delivery:

$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55

This table is populated automatically by the kernel when addresses are configured. Let’s look at the three last lines. When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:

  • a route for 192.168.117.55 for local unicast delivery to the IP address,
  • a route for 192.168.117.63 for broadcast delivery to the broadcast address,
  • a route for 192.168.117.0 for broadcast delivery to the network address.

When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives a special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:

$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms

--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms

The main table usually contains all the other routes:

$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100

The default route has been configured by some DHCP daemon. The connected route (scope link) has been automatically added by the kernel (proto kernel) when configuring an IP address on the eno1 interface.

The default table is empty and has little use. It was kept when the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using “classes” in Linux 2.1.15 [6].

Performance

Since Linux 4.1 (commit 0ddcf43d5d4a), when the set of rules is left unmodified, the main and local tables are merged and the lookup is done with this single table (and the default table if not empty). Without specific rules, there is no performance hit when enabling the support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:

Routing rules impact on performance

For some reason, the relation is linear when the number of rules is between 1 and 100 but the slope increases noticeably past this threshold. The second graph highlights the negative impact of the first rule (about 30 ns).

A common use of rules is to create virtual routers: interfaces are segregated into domains and when a datagram enters through an interface from domain A, it should use routing table A:

# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole

The blackhole rules may be removed if you are sure there is a default route in each routing table. For example, we add a blackhole default with a high metric to not override a regular default route:

# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20

To reduce the impact on performance when many interface-specific rules are used, interfaces can be attached to VRF instances and a single rule can be used to select the appropriate table:

# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default

The special l3mdev-table rule was automatically added when configuring the first VRF interface. This rule will select the routing table associated to the VRF owning the input (or output) interface.

VRF was introduced in Linux 4.3 (commit 193125dbd8eb), the performance was greatly enhanced in Linux 4.8 (commit 7889681f4a6c) and the special routing rule was also introduced in Linux 4.8 (commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more details about it in the kernel documentation.

Conclusion

The takeaways from this article are:

  • route lookup times hardly increase with the number of routes,
  • densely packed /32 routes lead to amazingly fast route lookups,
  • memory use is low (128 MiB per million routes),
  • no optimization is done on routing rules.

  1. The routing cache was subject to reasonably easy to launch denial of service attacks. It was also believed to be inefficient for high-volume sites like Google, but I have first-hand experience that this was not the case for moderately high-volume sites.

  2. “IP-address lookup using LC-tries”, IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999.

  3. For internal nodes, the key_vector structure is embedded into a tnode structure. This structure contains information rarely used during lookup, notably the reference to the parent that is usually not needed for backtracking as Linux keeps the nearest candidate in a variable. 

  4. One leaf can contain several routes (struct fib_alias is a list). The number of “prefixes” can therefore be greater than the number of leaves. The system also keeps statistics about the distribution of the internal nodes relative to the number of bits they handle. In our example, all the three internal nodes are handling 2 bits. 

  5. The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor was set to performance). The kernel is Linux 4.11. The benchmark is single-threaded. It runs a warm-up phase, then executes about 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC.

  6. Fun fact: the documentation of this first attempt at more flexible routing is still available in today’s kernel tree and explains the usage of the “default class”.

Don MartiCatching up to Safari?

Earlier this month, Apple Safari pulled ahead of other mainstream browsers in tracking protection. Tracking protection in the browser is no longer a question of should the browser do it, but which browser best protects its users. But Apple's early lead doesn't mean that another browser can't catch up.

Tracking protection is still hard. You have to provide good protection from third-party tracking, which users generally don't want, without breaking legit third-party services such as content delivery networks, single sign-on systems, and shopping carts. Protection is a balance, similar to the problem of filtering spam while delivering legit mail. Just as spam filtering helps enable legit email marketing, tracking protection tends to enable legit advertising that supports journalism and cultural works.

In the long run, just as we have seen with spam filters, it will be more important to make protection hard to predict than to run the perfect protection out of the box. A spam filter, or browser, that always does the same thing will be analyzed and worked around. A mail service that changes policies to respond to current spam runs, or an unpredictable ecosystem of tracking protection add-ons that browser users can install in unpredictable combinations, is likely to be harder.

But most users aren't in the habit of installing add-ons, so browsers will probably have to give them a nudge, like Microsoft Windows does when it nags the user to pick an antivirus package (or did, the last time I checked). So the decentralized way to catch up to Apple could end up being something like:

  • When new tracking protection methods show up in the privacy literature, quietly build the needed browser add-on APIs to make it possible for new add-ons to implement them.

  • Do user research to guide the content and timing of nudges. (Some atypical users prefer to be tracked, and should be offered a chance to silence the warnings by affirmatively choosing a do-nothing protection option.)

  • Help users share information about the pros and cons of different tools. If a tool saves lots of bandwidth and battery life but breaks some site's comment form, help the user make the right choice.

  • Sponsor innovation challenges to incentivize development, testing, and promotion of diverse tracking protection tools.

Any surveillance marketer can install and test a copy of Safari, but working around an explosion of tracking protection tools would be harder. How to set priorities when they don't know which tools will get popular?

What about adfraud?

Tracking protection strategies have to take adfraud into account. Marketers have two choices for how to deal with adfraud:

  • flight to quality

  • extra surveillance

Flight to quality is better in the long run. But it's a problem from the point of view of adtech intermediaries because it moves more ad money to high-reputation sites, and the whole point of adtech is to reach big-money eyeballs on cheap sites. Adtech firms would rather see surveillance-heavy responses to adfraud. One way to help shift marketing budgets away from surveillance, and toward flight to quality, is to make the returns on surveillance investments less predictable.

This is possible to do without making value judgments about certain kinds of sites. If you like a site enough to let it see your personal info, you should be able to do it, even if in my humble opinion it's a crappy site. But you can have this option without extending to all crappy sites the confidence that they'll be able to live on leaked data from unaware users.

,

Planet DebianSteve McIntyre: So, Stretch happened...

Things mostly went very well, and we've released Debian 9 this weekend past. Many many people worked together to make this possible, and I'd like to extend my own thanks to all of them.

As a project, we decided to dedicate Stretch to our late founder Ian Murdock. He did much of the early work to get Debian going, and inspired many more to help him. I had the good fortune to meet up with Ian years ago at a meetup attached to a Usenix conference, and I remember clearly he was a genuinely nice guy with good ideas. We'll miss him.

For my part in the release process, again I was responsible for producing our official installation and live images. Release day itself went OK, but as is typical the process ran late into Saturday night / early Sunday morning. We made and tested lots of different images, although numbers were down from previous releases as we've stopped making the full CD sets now.

Sunday was the day for the release party in Cambridge. As is traditional, a group of us met up at a local hostelry for some revelry! We hid inside the pub to escape from the ridiculously hot weather we're having at the moment.

Party

Due to a combination of the lack of sleep and the heat, I nearly forgot to even take any photos - apologies to the extra folks who'd been around earlier whom I missed with the camera... :-(

Planet DebianAndreas Bombe: New Blog

So I finally got myself a blog to write about my software and hardware projects, my work in Debian and, I guess, stuff. Readers of planet.debian.org, hi! If you can see this I got the configuration right.

For the curious, I’m using a static site generator for this blog — Hugo to be specific — like all the cool kids do these days.

TEDListen in on couples therapy with Esther Perel, Tabby’s star dims again, and more

Behold, your recap of TED-related news:

The truth about couples. Ever wonder what goes on in couples therapy? You may want to tune in to Esther Perel’s new podcast “Where Should We Begin?” Each episode invites the reader to listen to a real session with a real couple working out real issues, from a Christian couple bored with their sex life to a couple dealing with the aftermath of an affair, learning how to cope and communicate, express and excite. Perel hopes her audience will walk away with a sense of “truth” surrounding relationships — and maybe take away something for their own relationships. As she says: “You very quickly realize that you are standing in front of the mirror, and that the people that you are listening to are going to give you the words and the language for the conversations you want to have.” The first four episodes of “Where Should We Begin?” are available on Audible, with new episodes added every Friday. (Watch Perel’s TED Talk)

Three TEDsters join the Media Lab. MIT’s Media Lab has chosen its Director’s Fellows for 2017, inviting nine extraordinary people to spend two years working with each other, MIT faculty and students to move their work forward. Two of the new Fellows are TED speakers — Adam Foss and Jamila Raqib — and a third is a TED Fellow, activist Esra’a Al Shafei. In a press release, Media Lab Director (and fellow TED speaker) Joi Ito said the new crop of fellows “aligns with our mission to create a better future for all,” with an emphasis on “civic engagement, social change, education, and creative disruption.” (Watch Foss’ TED Talk and Raqib’s TED Talk)

The mystery of KIC 8462852 deepens. Tabby’s Star, notorious for “dipping,” is making headlines again with a dimming event that started in May. Astronomer Tabetha Boyajian, the star’s namesake, has been trying to crack the mystery since the flickering was noticed in 2011. The star’s dimming is erratic—sometimes losing up to 20 percent of its brightness—and has prompted a variety of potential explanations. Some say it’s space debris, others say it’s asteroids. Many blame aliens. Nobody knows for sure, still, but you can follow Boyajian on Twitter for updates. (Watch Boyajian’s TED talk)

AI: friend or foe? The big fear with AI is that humanity will be replaced or overrun, but Nicholas Christakis has been entertaining an alternative view: how can AI complement human beings? In a new study conducted at Yale, Christakis experimented with human and AI interaction. Subjects worked with anonymous AI bots in a collaborative color-coordination game, and the bots were programmed with varying behavioral randomness — in other words, they made mistakes. Christakis’ findings showed that even when paired with error-prone AI, human performance still improved. Groups solved problems 55.6% faster when paired with bots—particularly when faced with difficult problems. “The bots can help humans to help themselves,” Christakis said. (Watch Christakis’ TED Talk)

A bundle of news from TED architects. Alejandro Aravena’s Chile-based design team, Elemental, won the competition to design the Art Mill, a new museum in Doha, Qatar. The museum site is now occupied by Qatar Flour Mills, and Elemental’s design pays homage to the large grain silos it will replace. Meanwhile, The Shed, a new building in New York City designed by Liz Diller and David Rockwell, recently underwent testing. The building is designed in two parts: an eight-level tower and a teflon-based outer shell that can slide on runners over an adjacent plaza. The large shell moves using a gantry crane, and only requires the horsepower of a Toyota Prius.  When covering the plaza, the shell can house art exhibits and performances. Finally, Joshua Prince-Ramus’ architecture firm, REX, got the nod to design a new performing arts center at Brown University. It will feature a large performance hall, smaller rehearsal spaces, and rooms for practice and instruction—as well as a lobby and cafe. (Watch Aravena’s TED Talk, Diller’s TED Talk, Rockwell’s TED Talk, and Prince-Ramus’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this round-up.


TEDA noninvasive method for deep brain stimulation, a new class of Emerging Explorers, and much more

As usual, the TED community has lots of news to share this week. Below, some highlights.

Surface-level brain stimulation. The delivery of an electric current to the part of the brain involved in movement control, known as deep brain stimulation, is sometimes used to treat people with Parkinson’s disease, depression, epilepsy and obsessive compulsive disorder. However, the process isn’t risk-free — and there are few people who possess the skill set to open a skull and implant electrodes in the brain. A new study, of which MIT’s Ed Boyden was the senior author, has found a noninvasive method: placing electrodes on the scalp rather than in the skull. This may make deep brain stimulation available to more patients and allow the technique to be more easily adapted to treat other disorders. (Watch Boyden’s TED Talk)

Rooms for refugees. Airbnb unveiled a new platform, Welcome, which provides housing to refugees and evacuees free of charge. Using its extensive network, Airbnb is partnering with global and local organizations that will have access to Welcome in order to pair refugees with available lodging. The company aims to provide temporary housing for 100,000 displaced persons over the next five years. Airbnb co-founder, Joe Gebbia, urges anybody with a spare room to “play a small role in tackling this global challenge”; so far, 6,000 people have answered his call. (Watch Gebbia’s TED Talk)

A TEDster joins The Shed. Kevin Slavin has been named Chief Science and Technology Officer of The Shed. Set to open in 2019, The Shed is a uniquely-designed space in New York City that will bring together leading thinkers in the arts, the humanities and the sciences to create innovative art. Slavin’s multidisciplinary—or, as he puts it, anti-disciplinary—mindset seems a perfect fit for The Shed’s mission of “experimentation, innovation, and collaboration.” Slavin, who was behind the popular game Drop 7, has run a research lab at MIT’s Media Lab, and has showcased his work in MoMA, among other museums. The Shed was designed by TEDsters Liz Diller and David Rockwell. (Watch Slavin’s TED Talk, Diller’s TED Talk and Rockwell’s TED Talk)

Playing with politics. Designing a video to feel as close to real life as possible often means intricate graphics and astutely crafted scripts. For game development studio Klang, it also means replicating politics. That’s why Klang has brought on Lawrence Lessig to build the political framework for their new game, Seed. Described as “a boundless journey for human survival, fuelled by discovery, collaboration and genuine emotion,” Seed is a vast multiplayer game whose simulation continues even after a player has logged off. Players are promised “endless exploration of a living, breathing exoplanet” and can traverse this new planet forming colonies, developing relationships, and collaborating with other players. Thanks to Lessig, they can also choose their form of government and appointed officials. While the game will not center on politics, Lessig’s contributions will help the game evolve to more realistically resemble real life. (Watch Lessig’s TED Talk)

A new class of explorers. National Geographic has announced this year’s Emerging Explorers. TED Speaker Anand Varma and TED Fellows Keolu Fox and Danielle N. Lee are among them. Varma is a photographer who uses the medium to turn science into stories, as he did in his TED talk about threats faced by bees. Fox’s work connects the human genome to disease; he advocates for more diversity in the field of genetics. He believes that indigenous peoples should be included in genome sequencing not only for the sake of social justice, but for science. Studying Inuit genetics, for example, may provide insight into how they keep a traditionally fat-rich diet but have low rates of heart disease. Danielle N. Lee studies practical applications for rodents—like the African giant pouched rats trained to locate landmines. The rats are highly trainable and low-maintenance, and Lee’s research aims to tap into this unlikely resource. (Watch Varma’s TED Talk, Fox’s TED Talk and Lee’s TED Talk)

Collaborative fellowship awarded to former head of DARPA. Joining the ranks of past fellows Ruth Bader Ginsburg, Deborah Tannen and Amos Tversky is Arati Prabhakar, who has been selected for the 2017-18 fellowship at Stanford’s Center for Advanced Study in the Behavioral Sciences (CASBS). While Prabhakar’s field of expertise is in electrical engineering and applied physics, she is one of 37 fellows of various backgrounds ranging from architecture to law, and religion to statistics, to join the program. CASBS seeks to solve societal problems through interdisciplinary collaborative projects and research. At the heart of this mission is their fellowship program, says associate director Sally Schroeder. “Fellows represent all that is great about this place. It’s imperative that we continue to attract the highest quality, innovative thinkers, and we’re confident we’ve reached that standard of excellence once again with the 2017-18 class.” (Watch Prabhakar’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Worse Than FailureThe CMS From Hell

Hortus Deliciarum - Hell

Contracting can be really hit or miss. Sometimes, you're given a desk and equipment and treated just like an employee, except better paid and exempt from team-building exercises. Sometimes, however, you're isolated in your home office, never speaking to anyone, working on tedious, boring crap they can't convince their normal staff to do.

Eric was contracted to perform basic website updating tasks for a government agency. Most of the work consisted of receiving documents, uploading them to the server, and adding them to a page. There were 4 document categories, each one organized by year. Dull as dishwater, but easy.

The site was hosted by a third party in a shared hosting environment. It ran on a CMS produced by another party. WTFCMS was used in many high-profile sites, so the agency figured it had to be good. Eric was given login credentials and—in the way of techies given boring tasks everywhere—immediately began automating the task at hand.

Step 1 of this automation was to get a list of articles with their IDs. Eric was pleased to discover that the browser-based interface for the CMS used a JSON request to get the list of pages. With the help of good old jq, he soon had that running in a BASH shell script. To get the list of children for an article, he passed the article's ID to the getChildren endpoint.

Usually, in a hierarchy like this, there's some magic number that means "root element." Eric tried sending a series of likely candidates, like 0, -1, MAX_INT, and MIN_INT. It turned out to be -1 ... but he also got a valid list when he passed in 0.

Curious, he thought to himself. This appears to be a list of articles ... and hey, here's the ones I got for this site. These others ...? No way.

Sure enough, passing in a parent ID of 0 had gotten Eric some sort of super-root: every article across every site in the entire CMS system. Vulnerability number 1.

Step 2 was to take the ID list and get the article data so he could associate the new file with it. This wasn't nearly as simple. There was no good way to get the text of the article from the JSON interface; the CMS populated the articles server-side.

Eric was in too deep to stop now, though. He wrote a scraper for the edit page, using an XML parser to handle the HTML form that held the article text. Once he had the text, he compared it by hand to the POST request sent from his Firefox instance to ensure he had the right data.

And he did ... mostly. Turns out, the form was manipulated by on-page Javascript before being submitted: fields were disabled or enabled, date/time formats were tweaked, and the like. Eric threw together some more scripting to get the job done, but now he wasn't sure if he would hit an edge case or somehow break the database if he ran it. Still, he soldiered on.

Step 3 was to upload the files so they could be linked to the article. With Firebug open, Eric went about adding an upload.

Now, WTFCMS seemed to offer the usual flow: enter a name, select a file, and click Upload to both upload the file and save it as the given name. When he got to step 2, however, the file was uploaded immediately—but he still had to click the Upload button to "save" it.

What happens if I click Cancel? Eric wondered. No, never mind, I don't want to know. What does the POST look like?

It was a mess of garbage. Eric was able to find the file he uploaded, and the name he'd given it ... and also a bunch of server-side information the user shouldn't be privy to, let alone be able to tamper with. Things like, say, the directory on the server where the file should be saved. Vulnerability number 2.

The response to the POST contained, unexpectedly, HTML. That HTML contained an iframe. The iframe contained an iframe. iframe2 contained iframe3; iframe3 contained a form. In that form were two fields: a submit button, reading "Upload", and a hidden form field containing the path of the uploaded file. In theory, he could change that to read anything on the server. Now he had both read and write access to any arbitrary destination in the CMS, maybe even on the server itself. Vulnerability number 3.

It was at this point that Eric gave up on his script altogether. This is the kind of task that Selenium IDE is perfect for. He just kept his head down, hoped that the server had some kind of validation to prevent curious techies like himself from actually exploiting any of these potential vulnerabilities, and served out the rest of his contract.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianFoteini Tsiami: Internationalization, part one

The first part of internationalizing a Greek application is, of course, translating all the Greek text to English. I already knew how to open a user interface (.ui) file with Glade and how to translate/save it from there, and mail the result to the developers.

If only it was that simple! I learned that the code of most open source software is kept on version control systems, which fortunately are a bit similar to Wikis, which I was familiar with, so I didn’t have a lot of trouble understanding the concepts. Thanks to a very brief git crash course from my mentors, I was able to quickly start translating, committing, and even pushing back the updated files.

The other tricky part was internationalizing the Python source code. There, Glade couldn’t be used; a text editor like Pluma was needed. And the messages were part of the source code, so I had to be extra careful not to break the syntax. The English text then needed to be wrapped in _(), which performs the gettext call that dynamically translates the messages into the user’s language.
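For readers unfamiliar with the pattern, here is a minimal sketch of what that _() wrapping looks like in Python; the "myapp" domain and the locale directory are placeholders, not the project’s actual values.

import gettext

# Load the catalog for the user's language; fall back to the original English
# strings if no translation is installed.
t = gettext.translation("myapp", localedir="/usr/share/locale", fallback=True)
_ = t.gettext

print(_("Choose a user"))   # translated at runtime if a catalog provides it

The strings marked with _() are later extracted (with xgettext or similar tools) into .po files that translators work on.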

All this was very educative, but now that the first part of the internationalization, i.e. the Greek-to-English translations, are over, I think I’ll take some time to read more about the tools that I used!


Planet DebianNorbert Preining: TeX Live 2017 hits Debian/unstable

Yesterday I uploaded the first packages of TeX Live 2017 to Debian/unstable, meaning that the new release cycle has started. Debian/stretch was released over the weekend, and this opened up unstable for new developments. The upload comprised the following packages: asymptote, cm-super, context, context-modules, texlive-base, texlive-bin, texlive-extra, texlive-extra, texlive-lang, texworks, xindy.

I mentioned already in a previous post the following changes:

  • several packages have been merged, some are dropped (e.g. texlive-htmlxml) and one new package (texlive-plain-generic) has been added
  • luatex got updated to 1.0.4, and is now considered stable
  • updmap and fmtutil now require either -sys or -user
  • tlmgr got a shell mode (interactive/scripting interface) and a new feature to add arbitrary TEXMF trees (conf auxtrees)

The last two changes are described together with other news (easy TEXMF tree management) in the TeX Live release post. These changes more or less sum up the new infrastructure developments in TeX Live 2017.

Since the last release to unstable (which happened in 2017-01-23) about half a year of package updates have accumulated, below is an approximate list of updates (not split into new/updated, though).

Enjoy the brave new world of TeX Live 2017, and please report bugs to the BTS!

Updated/new packages:
academicons, achemso, acmart, acro, actuarialangle, actuarialsymbol, adobemapping, alkalami, amiri, animate, aomart, apa6, apxproof, arabluatex, archaeologie, arsclassica, autoaligne, autobreak, autosp, axodraw2, babel, babel-azerbaijani, babel-english, babel-french, babel-indonesian, babel-japanese, babel-malay, babel-ukrainian, bangorexam, baskervaldx, baskervillef, bchart, beamer, beamerswitch, bgteubner, biblatex-abnt, biblatex-anonymous, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-caspervector, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-claves, biblatex-enc, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-ieee, biblatex-iso690, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-oxref, biblatex-philosophy, biblatex-publist, biblatex-shortfields, biblatex-subseries, bibtexperllibs, bidi, biochemistry-colors, bookcover, boondox, bredzenie, breqn, bxbase, bxcalc, bxdvidriver, bxjalipsum, bxjaprnind, bxjscls, bxnewfont, bxorigcapt, bxpapersize, bxpdfver, cabin, callouts, chemfig, chemformula, chemmacros, chemschemex, childdoc, circuitikz, cje, cjhebrew, cjk-gs-integrate, cmpj, cochineal, combofont, context, conv-xkv, correctmathalign, covington, cquthesis, crimson, crossrefware, csbulletin, csplain, csquotes, css-colors, cstldoc, ctex, currency, cweb, datetime2-french, datetime2-german, datetime2-romanian, datetime2-ukrainian, dehyph-exptl, disser, docsurvey, dox, draftfigure, drawmatrix, dtk, dviinfox, easyformat, ebproof, elements, endheads, enotez, eqnalign, erewhon, eulerpx, expex, exsheets, factura, facture, fancyhdr, fbb, fei, fetamont, fibeamer, fithesis, fixme, fmtcount, fnspe, fontmfizz, fontools, fonts-churchslavonic, fontspec, footnotehyper, forest, gandhi, genealogytree, glossaries, glossaries-extra, gofonts, gotoh, graphics, graphics-def, graphics-pln, grayhints, gregoriotex, gtrlib-largetrees, gzt, halloweenmath, handout, hang, heuristica, hlist, hobby, hvfloat, hyperref, hyperxmp, ifptex, ijsra, japanese-otf-uptex, jlreq, jmlr, jsclasses, jslectureplanner, karnaugh-map, keyfloat, knowledge, komacv, koma-script, kotex-oblivoir, l3, l3build, ladder, langsci, latex, latex2e, latex2man, latex3, latexbug, latexindent, latexmk, latex-mr, leaflet, leipzig, libertine, libertinegc, libertinus, libertinust1math, lion-msc, lni, longdivision, lshort-chinese, ltb2bib, lualatex-math, lualibs, luamesh, luamplib, luaotfload, luapackageloader, luatexja, luatexko, lwarp, make4ht, marginnote, markdown, mathalfa, mathpunctspace, mathtools, mcexam, mcf2graph, media9, minidocument, modular, montserrat, morewrites, mpostinl, mptrees, mucproc, musixtex, mwcls, mweights, nameauth, newpx, newtx, newtxtt, nfssext-cfr, nlctdoc, novel, numspell, nwejm, oberdiek, ocgx2, oplotsymbl, optidef, oscola, overlays, pagecolor, pdflatexpicscale, pdfpages, pdfx, perfectcut, pgfplots, phonenumbers, phonrule, pkuthss, platex, platex-tools, polski, preview, program, proofread, prooftrees, pst-3dplot, pst-barcode, pst-eucl, pst-func, pst-ode, pst-pdf, pst-plot, pstricks, pstricks-add, pst-solides3d, pst-spinner, pst-tools, pst-tree, pst-vehicle, ptex2pdf, ptex-base, ptex-fontmaps, pxbase, pxchfon, pxrubrica, pythonhighlight, quran, ran_toks, reledmac, repere, resphilosophica, revquantum, rputover, rubik, rutitlepage, sansmathfonts, scratch, seealso, sesstime, siunitx, skdoc, songs, spectralsequences, stackengine, stage, sttools, studenthandouts, svg, tcolorbox, tex4ebook, tex4ht, texosquery, 
texproposal, thaienum, thalie, thesis-ekf, thuthesis, tikz-kalender, tikzmark, tikz-optics, tikz-palattice, tikzpeople, tikzsymbols, titlepic, tl17, tqft, tracklang, tudscr, tugboat-plain, turabian-formatting, txuprcal, typoaid, udesoftec, uhhassignment, ukrainian, ulthese, unamthesis, unfonts-core, unfonts-extra, unicode-math, uplatex, upmethodology, uptex-base, urcls, variablelm, varsfromjobname, visualtikz, xassoccnt, xcharter, xcntperchap, xecjk, xepersian, xetexko, xevlna, xgreek, xsavebox, xsim, ycbook.

LongNowThe Industrial Sublime: Edward Burtynsky Takes the Long View

“Oil Bunkering #1, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

The New Yorker recently profiled photographer, former SALT speaker, and 02016 sponsor of the Conversations at the Interval livestream Edward Burtynsky and his quest to document a changing planet in the anthropocene age.

“What I am interested in is how to describe large-scale human systems that impress themselves upon the land,” Burtynsky told New Yorker staff writer Raffi Khatchadourian as they surveyed the decimated, oil-covered landscapes of Lagos, Nigeria from a helicopter.

“Saw Mills #1, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

For over three decades, Edward Burtynsky has been taking large-format photographs of industrial landscapes which include mining locations around the globe and the building of Three Gorges Dam in China. His work has been noted for beautiful images which are often at odds with their subject’s negative environmental impacts.

Photograph by Benedicte Kurzen / Noor for The New Yorker

“This is the sublime of our time,” said Burtynsky in his 02008 SALT Talk, which included a formal proposal for a permanent art gallery in the chamber that encloses the 10,000-year Clock, as well as the results of his research into methods of capturing images that might have the best chance to survive in the long-term.

“Oil Bunkering #4, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

As Khatchadourian notes, Burtynsky’s approach has at times attracted controversy:

Over the years, greater skepticism has been voiced about […] Burtynsky’s inclination to depict toxic landscapes in visually arresting terms. A critic responding to “Oil” wondered whether the fusing of beauty with monumentalism, of extreme photographic detachment with extreme ecological damage, could trigger only apathy as a response. [Curator] Paul Roth had a different view: “Maybe these people are a bit immune to the sublime—being terribly anxious while also being attracted to the beauty of an image.”

“Oil Bunkering #2, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

Burtynsky does not seek to be heavy-handed or pedantic in his work, but neither does he seek to be amoral. The environmental and human rights issues are directly shown, rather than explicitly proclaimed.

“Oil Bunkering #5, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

In recent years Burtynsky’s work has focused on water, including oil spills around the world, like the ones he was documenting in Lagos, a city he calls a “hyper crucible of globalism.”

As the global consequences of human activity have become unmistakably pressing, Burtynsky has connected his photography more directly with environmentalism. “There has been a discussion for a long time about climate change, but we don’t seem to be ceasing anything,” he says. “That has begun to bring a sense of urgency to me.”

Burtynsky is currently working on the film Anthropocene, which documents unprecedented human impact on the natural world.

Read The New Yorker profile of Burtynsky in full.

,

Planet DebianJeremy Bicha: GNOME Tweak Tool 3.25.3

Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26.

I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot.

The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

Here are a few visual highlights of this release.

The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

For details of these and other changes, see the commit log or the NEWS file.

GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September.

Planet DebianShirish Agarwal: Seizures, Vigo and bi-pedal motion

Dear all, an update is in order. While talking to my physiotherapist a couple of days ago, I came to know the correct term for what I was experiencing. I had experienced a convulsive ‘seizure‘, spasms being a part of it. Reading the wikipedia entry and the associated links/entries it seems I am and was very very lucky.

The hospital or any hospital is a very bad bad place. I have seen all horror movies which people say are disturbing but have never been disturbed as much as I was in hospital. I couldn’t help but hear people’s screams and saw so many cases which turned critical. At times it was not easy to remain positive but dunno from where there was a will to live which pushed me and is still pushing me.

One of the things that was painful for a long time was the almost constant stream of injections that were injected into me. It was almost an afterthought that the nurse put a Vigo in me.

Similar to the Vigo injected in me.

While the above medical device is similar, mine had a cross, the needle was much shorter and is injected into the vein. After that, all injections are injected into that, including a common liquid which is salt and water, and something commonly given to patients to stabilize them first. I am not remembering the name at the moment.

I also had a urine bag which was attached to my penis in a non-invasive manner. Both my grandfather and grandma used to cry when things went wrong, while I didn’t feel any pain except when the urine bag was detached and attached again, so it seems things have improved there.

I was also very conscious of getting bed sores as both my grandpa and grandma had them when in hospital. As I had no strength I had to beg, plead, do everything to make sure that every few hours I was turned from one side to the other. I also had an air bag which is supposed to alleviate or relieve this condition.

Constant physiotherapy every day for a while slowly increased my strength, and slowly both the Vigo and the feeding tube put inside my throat were removed.

I have no remembrance as to when they had put the feeding tube as it was all rubber and felt bad when it came out.

Further physiotherapy helped me crawl to the top of the bed; the bed was around 6 feet in length and more than enough so I could turn to both sides without falling over.

Few days later I found I could also sit up using my legs as a lever and that gave confidence to the doctors to remove the air bed so I could crawl more easily.

Couple of more days later I stood on my feet for the first time and it was like I had lead legs. Each step was painful but the sense and feeling of independence won over whatever pain was there.

I had to endure wet wipes from nurses and ward boys in place of a shower every day, and while they were always respectful it felt humiliating.

The first time I had a bath after 2 weeks or something, every part of my body cried and I felt like a weakling. I had thought I wouldn’t be able to do justice to the physiotherapy session which was soon after, but after the session I was back to feeling normal.

For a while I was doing the penguin waddle, which, while painful, also had humor in it. I did think of shooting the penguin waddle but decided against it as I was half-naked most of the time (the hospital clothes never fit me properly).

Cut to today and I was able to climb up and down the stairs on my own and circled my own block, slowly but was able to do it on my own by myself.

While I always had a sense of wonderment for bi-pedal motion as well as all other means of transport, I found much more respect for walking. I live near a fast food joint so I see a lot of youngsters posing in different ways with their legs to show interest to their mates. And this I know happens both on the conscious and sub-conscious levels. To be able to see and discern that also put a sense of wonder in nature’s creations.

All in all, I’m probably around 40% independent and still 60% interdependent. I know I have to be patient with myself and those around me and explain to others what I’m going through.

For e.g. I still tend to spill things and still can’t touch-type much.

So, the road is long. I can only pray and hope the best for anybody who is in my condition, and do pray that nobody goes through what I went through, especially not children.

I am also hoping that things like DxtER and range of non-invasive treatments make their way into India and the developing world at large.

Anybody who is overweight and is either disgusted with or doesn’t like the gym route, I would recommend doing sessions with a physiotherapist that you can trust. You have to trust that her judgement will push you a bit more, but not so much that the gains you make are toppled over.

I still get dizziness spells while doing therapy, but I will work to break that, as I know dizziness doesn’t help me.

I hope my writings give strength and understanding to somebody who is going through it, or to relatives and/or caregivers, so they know the mental status of the person who’s going through it.

Till later and sorry it became so long.

Update – I forgot to share this inspirational story from my city which I shared with a friend days ago. Add to that, she is from my city. What it doesn’t share is that Triund is a magical place. I had visited once with a friend who had elf ears (he had put on elf ears) and it is the kind of place the alchemist talks about, a place where imagination does turn wild and there is magic in the air.


Filed under: Miscellenous Tagged: #air bag, #bed sores, #convulsive epileptic seizure, #crawling, #horror, #humiliation, #nakedness, #penguin waddle, #physiotherapy, #planet-debian, #spilling things, #urine bag, #Vigo medical device

Planet DebianVasudev Kamath: Update: - Shell pipelines with subprocess crate and use of Exec::shell function

In my previous post I used the Exec::shell function from the subprocess crate and passed it a string generated by interpolating the --author argument. This string was then run by the shell via Exec::shell. After publishing the post I got a ping on IRC from Jonas Smedegaard and Paul Wise that I should replace Exec::shell, as it might be prone to errors or shell injection vulnerabilities. Indeed they were right; in my hurry I had not completely read the function documentation, which clearly mentions this fact.

When invoking this function, be careful not to interpolate arguments into the string run by the shell, such as Exec::shell(format!("sort {}", filename)). Such code is prone to errors and, if filename comes from an untrusted source, to shell injection attacks. Instead, use Exec::cmd("sort").arg(filename).

Though I'm not directly taking input from an untrusted source, it's still possible that the string I get back from the git log command contains an oddly formatted string with characters of a different encoding, which could possibly break Exec::shell, as I'm not sanitizing the shell command. When we use Exec::cmd and pass arguments using .args chaining, the library takes care of creating a safe command line. So I went in and modified the function to use Exec::cmd instead of Exec::shell.

Below is updated function.

fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    Exec::cmd("git")
     .args(&["clone", "--bare", repo, tempdir.path().to_str().unwrap()])
     .stdout(subprocess::NullFile)
     .stderr(subprocess::NullFile)
     .popen()?;

    let author_process = {
        Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
        Exec::shell(OsStr::new("sort -u"))
    }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();
    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let author_string = format!("--author={}", author);
        let first = {
            Exec::cmd("/usr/bin/git")
             .args(&["log", "--format=%ad",
                    "--date=format:%Y",
                    "--reverse",
                    &author_string])
             .cwd(tempdir.path()) | Exec::shell(OsStr::new("head -n1"))
        }.capture()?;

        let latest = {
            Exec::cmd("/usr/bin/git")
             .args(&["log", "--format=%ad", "--date=format:%Y", &author_string])
             .cwd(tempdir.path()) | Exec::shell("head -n1")
        }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };

        notices.push(cnotice);
    }

    Ok(notices)
}

I still use Exec::shell for generating the author list; this is not problematic, as I'm not interpolating arguments to create the command string.
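Purely as an illustration (not what the current code does), even that author-list pipeline could be written without Exec::shell, using the same | chaining on Exec::cmd; the format string then needs no shell quoting at all:

// Sketch only: build the author list without going through a shell at all.
let author_process = {
    Exec::cmd("git")
        .args(&["log", "--format=%an <%ae>"])
        .cwd(tempdir.path())
    | Exec::cmd("sort").arg("-u")
}.capture()?;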

Sociological ImagesAre Millennials having less sex? Or more? And what’s coming next?

Based on analyses of General Social Survey data, a well-designed and respected source of data about American life, members of the Millennial generation are acquiring about the same number of sexual partners as the Baby Boomers. This data suggests that the big generational leap was between the Boomers and the generation before them, not the Boomers and everyone that came after. And rising behavioral permissiveness definitely didn’t start with the Millennials. Sexually speaking, Millennials look a lot like their parents at the same age and are perhaps even less sexually active than Generation X.

Is it true?

It doesn’t seem like it should be true. In terms of attitudes, American society is much more sexually permissive than it was for Boomers, and Millennials are especially more permissive. Boomers had to personally take America through the sexual revolution at a time when sexual permissiveness was still radical, while Generation X had to contend with a previously unknown fatal sexually transmitted pandemic. In comparison, the Millennials have it so easy. Why aren’t they having sex with more people?

A new study using data from the National Survey of Family Growth (NSFG) (hat tip Paula England) contrasts with previous studies and reports an increase. It finds that nine out of ten Millennial women had non-marital sex by the time they were 25 years old, compared to eight out of ten Baby Boomers. And, among those, Millennials reported two additional total sexual partners (6.5 vs. 4.6).

Nonmarital Sex by Age 25, Paul Hemez

Are Millennials acquiring more sexual partners after all?

I’m not sure. The NSFG report used “early” Millennials (only ones born between 1981 and 1990). In a not-yet-released book, the psychologist Jean Twenge uses another survey — the Youth Risk Behavior Surveillance System — to argue that the next generation (born between 1995 and 2002), which she calls the “iGen,” is even less likely to be sexually active than the Millennials. According to her analysis, 37% of 9th graders in 1995 (born in 1981, arguably the first Millennial year) had lost their virginity, compared to 34% in 2005, and 24% in 2015.

Percentage of high school students who have ever had sex, by grade. Youth Risk Behavior Surveillance System, 1991-2015.

iGen, Jean Twenge

If Twenge is right, then we’re seeing a decline in the rate of sexual initiation and possibly partner acquisition that starts somewhere near the transition between Gen X and Millennial, proceeds apace throughout the Millennial years, and is continuing — Twenge argues accelerating — among the iGens. So, if the new NSFG report finds an increase in sexual partners between the Millennials and the Boomers, it might be because they sampled on “early” Millennials, those closer to Gen Xers, on the top side of the decline.

Honestly, I don’t know. It’s interesting though. And it’s curious why the big changes in sexually permissive attitudes haven’t translated into equally permissive behaviors, or have actually accompanied a decrease in sexual behavior. It depends a lot on how you chop up the data, too. Generations, after all, are artificial categories. And variables like “nonmarital sex by age 25” are specific and may get us different findings than other measures. Sociological questions have lots of moving parts and it looks as if we’re still figuring this one out.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianHideki Yamane: PoC: use Sphinx for debian-policy

Before the party, we held our monthly study meeting and I gave a talk about a tiny hack for the debian-policy document.

debian-policy was converted from debian-sgml to docbook in 4.0.0, and my proposal is to "move forward to Sphinx".

Here's a sample, and you can also get the PoC source from my GitHub repo and check it.

CryptogramNew Technique to Hijack Social Media Accounts

Access Now has documented it being used against a Twitter user, but it also works against other social media accounts:

With the Doubleswitch attack, a hijacker takes control of a victim's account through one of several attack vectors. People who have not enabled an app-based form of multifactor authentication for their accounts are especially vulnerable. For instance, an attacker could trick you into revealing your password through phishing. If you don't have multifactor authentication, you lack a secondary line of defense. Once in control, the hijacker can then send messages and also subtly change your account information, including your username. The original username for your account is now available, allowing the hijacker to register for an account using that original username, while providing different login credentials.

Three news stories.

Worse Than FailureRepresentative Line: Highly Functional

For a brief period of time, say, about 3–4 years ago, if you wanted to sound really smart, you’d bring up “functional programming”. Name-dropping LISP or even better, Haskell during an interview marked you as a cut above the hoi polloi. Even I, surly and too smart for this, fell into the trap of calling JavaScript “LISP with curly braces”, just because it had closures.

Still, functional programming features have percolated through other languages because they work. They’re another tool for the job, and like any tool, when used by the inexpert, someone might lose a finger. Or perhaps someone should lose a finger, if only as a warning to others.

For example, what if you wanted to execute a loop 100 times in JavaScript? You could use a crummy old for loop, but that’s not functional. The functional solution comes from an anonymous submitter:

Array.apply(null, {length: 99}).map(Number.call, Number).forEach(function (element, index) {
// do some more crazy stuff
});

This is actually an amazing abuse of JavaScript’s faculties, and I thought I had seen the worst depredations one could visit on JavaScript while working with Angular code. When I first read this line, my initial reaction was, “oh, that’s not so bad.” Then I tried to trace through its logic. Then I realized, no, this is actually really bad. Not just extraneous arrays bad, but full abuse of JavaScript bad. Like call Language Protective Services bad. This is easier to explain if you look at it backwards.

forEach applies a function to each element in the array, supplying the element and the index of that element.

Number.call invokes the Number function, used to convert things into numbers (shocking, I know), but it allows you to supply the this against which the function is executed. map takes a callback function, and supplies an array item for the currentValue, the index, and the whole array as parameters. map also allows you to specify what this is for the callback itself, which they set to be Number, the function they’re calling.

So, remember, map expects a callback in the form f(currentValue, index, array). We’re supplying a function: call(thisValue, numberToConvert). So, the end result of map in this function is that we’re going to emit an array with each element equal to its own index, which makes the forEach look a bit silly.

Finally, at the front, we call Array.apply, which is mostly the same as Array.call, with a difference in how arguments get passed. This allows the developer to deftly avoid writing new Array(99), which would produce an array of the same length (though as a sparse array whose empty slots map would skip), but would look offensively object-oriented.

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Planet DebianMichal Čihař: Call for Weblate translations

Weblate 2.15 is almost ready (I expect no further code changes), so it's a really great time to contribute to its translations! Weblate 2.15 should be released early next week.

As you might expect, Weblate is translated using Weblate, so the contributions should be really easy. In case there is something unclear, you can look into Weblate documentation.

I'd especially like to see improvements in the Italian translation, which was one of the first in Weblate's beginnings but hasn't received much love in recent years.

Filed under: Debian English SUSE Weblate

,

Planet Linux AustraliaOpenSTEM: HASS Additional Activities

OK, so you’ve got the core work covered for the term and now you have all those reports to write and admin to catch up on. Well, the OpenSTEM™ Understanding Our World® HASS plus Science material has heaps of activities which help students to practise core curricular skills and can keep students occupied. Here are some ideas:

 Aunt Madge’s Suitcase Activity

Aunt Madge

Aunt Madge is a perennial favourite with students of all ages. In this activity, students use clues to follow Aunt Madge around the world trying to return her forgotten suitcase. There’s a wide range of locations to choose from on every continent – both natural and constructed places. This activity can be tailored for group work, or the whole class, and by adjusting the number of locations to be found, the teacher can adjust to the available time, anywhere from 10-15 minutes to a whole lesson. Younger students enjoy matching the pictures of locations and trying to find the countries on the map. Older students can find out further information about the locations on the information sheets. Teachers can even choose a theme for the locations (such as “Ancient History” or “Aboriginal Places”) and see if students can guess what it is.

 Ancient Sailing Ships Activity

Sailing Ships (History + Science)

Students in Years 3 to 6 have undertaken the Ancient Sailing Ships activity this term; however, there is vast scope for additional aspects to this activity. Have students compared the performance of square-rigged versus lateen sails? How about varying the number of masts? Have students raced the vessels against each other? (A water trough and a fan are all that’s needed for some exciting races.) Teachers can encourage the students to examine the effects of other changes to ship design, such as adding a keel or any other innovations students can come up with, which can be tested. Perhaps classes or grades can even race their ships against each other.

Trade and Barter Activity

Students in years 5 and 6 in particular enjoy the Trade and Barter activity, which teaches them the basics of Economics without them even realising it! This activity covers so many different aspects of the curriculum, that it is always a good one to revisit, even though it was not in this term’s units. Students enjoy the challenge and will find the activity different each time. It is a particularly good choice for a large chunk of time, or for smaller groups; perhaps a more experienced group can coach other students. The section of the activity which has students developing their own system of writing is one that lends itself to extension and can even be spun off as a separate activity.

Games from the Past

Kids Playing Tag

Students of all ages enjoy many of the games listed in the resource Games From The Past. Several of these games are best done whilst running around outside, so if that is an option, then choose from the Aboriginal, Chinese or Zulu games. Many of these games can be played by large groups. Older students might like to try recreating some of the rules for some of the games of Ancient Egypt or the Aztecs. If this resource wasn’t part of the resources for your particular unit, it can be downloaded from the OpenSTEM™ site directly.

 

Class Discussions

The b) and c) sections of the Teacher Handbooks contain suggestions for topics of discussion – such as Women Explorers or global citizenship, or ideas for drawings that the students can do. These can also be undertaken as additional activities. Teachers could divide students into groups to research and explore particular aspects of these topics, or stage debates, allowing students to practise persuasive writing skills as well.

OpenSTEM A0 world map: Country Outlines and Ice Age Coastline

Adding events to a timeline, or to the class calendar, is also a good way to practise core skills.

The OpenSTEM™ Our World map is used as the perfect complement to many of the Understanding Our World® units. This map comes blank and country names are added to the map during activities. The end of term is also a good chance for students to continue adding country names to the map. These can be cut out of the resource World Countries, which supplies the names in a suitable font size. Students can use the resource World Maps to match the country names to their locations.

We hope you find these suggestions useful!

Enjoy the winter holidays – not too long now to a nice, cosy break!

Planet DebianSimon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smoothly, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I’d do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn’t) so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

This fails because scdaemon is not installed. Isn’t a smartcard common enough that it should be installed by default on a GNOME Desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I recalled that I want pcscd installed, since I work with smartcards in general.

root@latte:~# apt-get install pcscd

Now gpg --card-status works!

jas@latte:~$ gpg --card-status

Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 

You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 

Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.

root@latte:~# apt-get install dirmngr

Below I proceed to trust the clouds to find my key.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 

Now the public key and the secret key stub are available locally.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ 

I am now able to sign data with the smartcard, yay!

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 

Encrypting to myself will not work smoothly though.

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting

jas@latte:~$ 

The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.

jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----

hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 

So everything is fine, isn’t it? Alas, not quite.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 

Tracking this down, I now realize that GNOME’s keyring is used for SSH but GnuPG’s gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.

jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 

Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line to ~/.gnupg/gpg-agent.conf as follows. When you log in, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 

Log out from GNOME and log in again. Now you should see ssh-add -L working.

jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 

Topics for further discussion or research include 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or if it is better to always use gpg-agent for GnuPG/SSH; 5) whether something could/should be done to automatically infer the trust setting for a secret key.

Enjoy!

Planet DebianAlexander Wirt: alioth needs your help

It may look like the decision for pagure as the alioth replacement is already finalized, but that’s not really true. I got a lot of feedback and tips in the last weeks, which made me postpone my decision. Several alternative systems were recommended to me; here are a few examples:

and probably several others. I won’t be able to evaluate all of those systems in advance of our sprint. That’s where you come in: if you are familiar with one of those systems, or want to get familiar with them, join us on our mailing list and create a wiki page below https://wiki.debian.org/Alioth/GitNext with a review of your system.

What do we need to know?

  • Feature set compared to current alioth
  • Feature set compared to a popular system like github
  • Some implementation designs
  • Some information about scaling (expect something like 15,000 growing to 25,000 repos)
  • Support for other version control systems
  • Advantages: why should we choose that system
  • Disadvantages: why shouldn’t we choose that system
  • License
  • Other interesting features
  • Details about extensibility
  • A really nice thing would be a working vagrant box / vagrantfile + ansible/puppet to test things

If you want to start on such a review, please announce it on the mailing list.

If you have questions, ask me on IRC, Twitter or mail. Thanks for your help!

Rondam RamblingsTrumpcare and the TPP: Republicans have learned nothing from history

As long as I'm ranting about Republican hypocrisy, I feel I should say a word about the secretive and thoroughly undemocratic process being employed by them to pass the Trumpcare bill.  If history is any guide, this will come back to bite them badly.  But Republicans don't seem to learn from history.  (Neither do Democrats, actually, but they aren't the ones trying to take my health insurance

Planet DebianEriberto Mota: How to migrate from Debian Jessie to Stretch

Welcome to Debian Stretch!

Yesterday, 17 June 2017, Debian 9 (Stretch) was released. I would like to talk about some basic procedures and rules for migrating from Debian 8 (Jessie).

Initial steps

  • The first thing to do is to read the release notes. This is essential in order to know about possible bugs and special situations.
  • The second step is to fully update Jessie before migrating to Stretch. To do that, still inside Debian 8, run the following commands:
# apt-get update
# apt-get dist-upgrade

Migrating

  • Edit the /etc/apt/sources.list file and change all the jessie names to stretch. Below is an example of the contents of this file (it may vary according to your needs):
deb http://ftp.br.debian.org/debian/ stretch main
deb-src http://ftp.br.debian.org/debian/ stretch main
                                                                                                                                
deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main
  • Then run:
# apt-get update
# apt-get dist-upgrade

If there is any problem, read the error messages and try to solve it. Whether or not you manage to solve it, run the command again:

# apt-get dist-upgrade

If new problems appear, try to solve them. Search for solutions on Google if necessary. But usually everything will go well and you should not have problems.

Changes to configuration files

While you are migrating, some messages about changes to configuration files may be shown. This can leave some users lost, not knowing what to do. Don't panic.

These messages are presented in two ways: as plain text in the shell or as a blue message window. The text below is an example of a message in the shell:

Configuration file '/etc/rsyslog.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** rsyslog.conf (Y/I/N/O/D/Z) [default=N] ?

The screen below is an example of the message shown in a window:

In both cases, it is recommended that you choose to install the new version of the configuration file. This is because the new configuration file will be fully adapted to the newly installed services and may have many new or different options. But don't worry, your configurations will not be lost; there will be a backup of them. So, in the shell, choose the "Y" option and, in the case of the window, choose the "install the package maintainer's version" option. It is very important to note down the name of each modified file. In the case of the window above, it is the /etc/samba/smb.conf file. In the case of the shell, the file was /etc/rsyslog.conf.

After completing the migration, you will be able to see both the new configuration file and the original one. If the new file was installed after a choice made in the shell, the original file (the one you had before) will have the same name with the .dpkg-old extension. In the case of a choice made via the window, the original file will be kept with the .ucf-old extension. In both cases, you can review the modifications that were made and reconfigure your new file according to your needs.

If you need help seeing the differences between the files, you can use the diff command to compare them. Always diff from the new file to the original one. It is as if you wanted to see what to change in the new file to make it equal to the original. Example:

# diff -Naur /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

At first sight, the lines marked with "+" would need to be added to the new file to make it look like the previous one, and the ones marked with "-" would need to be removed. But be careful: it is normal for some lines to be different, since the configuration file was made for a new version of the service or application it belongs to. So, change only the lines that are really necessary and that you had changed in the previous file. See the example:

+daemon.*;mail.*;\
+ news.err;\
+ *.=debug;*.=info;\
+ *.=notice;*.=warn |/dev/xconsole
+*.* @sam

In my case, I had originally only changed the last line. So, in the new configuration file, I am only interested in adding that line. Well, if you were the one who made the previous configuration, you will know the right thing to do. Usually, there won't be many differences between the files.

Another option for viewing the differences between files is the mcdiff command, which is provided by the mc package. Example:

# mcdiff /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

Problems with graphical environments and applications

You may have some problems with graphical environments, such as GNOME, KDE etc., or with applications such as Mozilla Firefox. In those cases, the problem is probably in the configuration files of these components, which live in the user's home directory. To check, create a new user in Debian and test with it. If everything works, make a backup of the previous configuration (or rename it) and let the application create a new configuration. For example, for Mozilla Firefox, go to the user's home directory and, with Firefox closed, rename the .mozilla directory to .mozilla.bak, then start Firefox and test.

Feeling unsure?

If you feel very unsure, install Debian 8, with a graphical environment and other things, in a virtual machine and migrate it to Debian 9 to test and learn. I suggest VirtualBox as the virtualizer.

Have fun!

 

Rondam RamblingsAnd the Oscar for Most Extreme Hypocrisy by a Republican goes to...

Newt Gingrich!  For saying that "the president “technically” can’t even obstruct justice" after leading the charge to impeach Bill Clinton for obstructing justice.  Congratulations, Mr. Gingrich!  Being the most hypocritical Republican is quite an achievement in this day and age.

Planet DebianMichal Čihař: python-gammu for Windows

It has been a few months since I started providing Windows binaries for Gammu, but other parts of the family were still missing. Today, I'm adding python-gammu.

Unlike previous attempts, which used cross-compilation on Linux using Wine, this is also based on AppVeyor. I still don't have to touch Windows to do it, which is nice :-). This was introduced in python-gammu 2.9 and depends on Gammu 1.38.4.

What is good about this is that pip install python-gammu should now work with binary packages if you're using Python 3.5 or 3.6.

Maybe I'll find time to look into providing Wammu as well, but it's trickier there as it doesn't support Python 3, while python-gammu for Windows can currently only be built for Python 3.5 and 3.6 (due to MSVC dependencies of older Python versions).

Filed under: Debian English Gammu python-gammu Wammu

Planet DebianVasudev Kamath: Rust - Shell like Process pipelines using subprocess crate

I had to extract copyright information from the git repository of the crate upstream. The need arose as part of updating debcargo, a tool to create Debian package sources from Rust crates.

The general idea behind taking copyright information from git is to extract the first and latest contribution year for every author/committer. This can be easily achieved using the following shell snippet:

for author in $(git log --format="%an" | sort -u); do
   author_email=$(git log --format="%an <%ae>" --author="$author" | head -n1)
   first=$(git \
   log --author="$author" --date=format:%Y --format="%ad" --reverse \
             | head -n1)
   latest=$(git log --author="$author" --date=format:%Y --format="%ad" \
             | head -n1)
   if [ $first -eq $latest ]; then
       echo "$first, $author_email"
   else
       echo "$first-$latest, $author_email"
   fi
done

Now the challenge was to execute these commands in Rust and get the required answer. As a first step I looked at std::process, the standard library's default support for executing commands.

My idea was to execute the first command to extract the authors into a Rust vector or array, and then have the 2 remaining commands extract the years in a loop. (I do not need the additional author_email command in Rust, as I can easily get both the name and the email from the first command, the one used in the for loop of the shell snippet, and use it inside another loop.) So I set up the 3 commands outside the loop with input and output redirected; the following snippet should give you some idea of what I tried to do.

use std::process::{Command, Stdio};

let authors_command = Command::new("/usr/bin/git")
             .arg("log")
             .arg("--format=%an <%ae>")   // no shell involved, so no extra quoting needed
             .stdout(Stdio::piped())
             .spawn()?;
let output = authors_command.wait_with_output()?;
let authors: Vec<String> = String::from_utf8_lossy(&output.stdout)
             .split('\n')
             .map(|s| s.to_string())
             .collect();
let head_n1 = Command::new("/usr/bin/head")
             .arg("-n1")
             .stdin(Stdio::piped())
             .stdout(Stdio::piped())
             .spawn()?;
for author in &authors {
             ...
}

And inside the loop I would create the 2 additional git commands, read their output via a pipe and feed it to the head command. This is where I learned that it is not as straightforward as it looks :-). The std::process::Command type implements neither the Copy nor the Clone trait, which means that after one use of a command value I give up its ownership! And here I started fighting with the borrow checker. I needed to duplicate declarations to make sure I had the required commands available all the time. Additionally I needed to handle error output at every point, which created too many nested statements, thereby complicating the program and reducing its readability.
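To give an idea of the plumbing involved, here is a minimal sketch (my own illustration, with a hypothetical first_year_of helper, not code from debcargo) of wiring just one leg of the pipeline by hand with std::process:

use std::process::{Command, Stdio};

// Run `git log ... --reverse` and feed its stdout into `head -n1` by hand.
fn first_year_of(author: &str) -> std::io::Result<String> {
    let git = Command::new("git")
        .args(&["log", "--format=%ad", "--date=format:%Y", "--reverse"])
        .arg(format!("--author={}", author))
        .stdout(Stdio::piped())
        .spawn()?;

    let head = Command::new("head")
        .arg("-n1")
        // hand git's stdout over as head's stdin
        .stdin(git.stdout.expect("stdout was piped"))
        .stdout(Stdio::piped())
        .spawn()?;

    // (a real version would also reap the git child)
    let out = head.wait_with_output()?;
    Ok(String::from_utf8_lossy(&out.stdout).trim().to_string())
}

And this covers only one of the three commands needed per author, with no error context attached; multiply that by every stage and the nesting adds up quickly.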

When it all started getting out of control I had second thoughts and wondered if it would be better to write this as a shell script, ship it along with debcargo and call the script from the Rust program. This would satisfy my need, but I would have to ship an additional script along with debcargo, which I was not really happy with.

Then a search on crates.io revealed subprocess, a crate designed to be similar to the subprocess module from Python! Though the crate is not highly downloaded, it still looked promising: in particular, its Exec type implements a trait called BitOr, which allows use of the | operator to chain commands. Additionally it allows executing full shell commands without the additional chaining of arguments that was done in the above snippet. The end result is a much simplified, easy to read and correct function which does what was needed. Below is the function I wrote to extract copyright information from the git repo.

fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    Exec::shell(OsStr::new(format!("git clone --bare {} {}",
                                repo,
                                tempdir.path().to_str().unwrap())
                              .as_str())).stdout(subprocess::NullFile)
                              .stderr(subprocess::NullFile)
                              .popen()?;

    let author_process = {
         Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
         Exec::shell(OsStr::new("sort -u"))
     }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();
    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let reverse_command = format!("git log --author=\"{}\" --format=%ad --date=format:%Y \
                                    --reverse",
                                   author);
        let command = format!("git log --author=\"{}\" --format=%ad --date=format:%Y",
                           author);
        let first = {
             Exec::shell(OsStr::new(&reverse_command)).cwd(tempdir.path()) |
             Exec::shell(OsStr::new("head -n1"))
         }.capture()?;

         let latest = {
             Exec::shell(OsStr::new(&command)).cwd(tempdir.path()) | Exec::shell("head -n1")
         }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };

        notices.push(cnotice);
    }

    Ok(notices)
}

Of course it is not as short as the shell or probably the Python code, but that is fine: Rust is a systems programming language (intended to replace C/C++), and doing complex shell work (complex due to the need for shell pipelines) in approximately 50 lines of code in a safe and secure way is very much acceptable. Besides, the code is about as readable as a plain shell snippet, thanks to the | operator implemented by the subprocess crate.

Planet DebianHideki Yamane: Debian9 release party in Tokyo

We celebrated the Debian 9 "stretch" release in Tokyo (thanks to Cybozu, Inc. for the venue).








We enjoyed beer, wine, sake, soft drinks, pizza, sandwiches, snacks and cake & coffee (a Nicaraguan one, which reminds me of DebConf12 :)

Planet DebianBits from Debian: Debian 9.0 Stretch has been released!

Stretch has been released

Let yourself be embraced by the purple rubber toy octopus! We're happy to announce the release of Debian 9.0, codenamed Stretch.

Want to install it? Choose your favourite installation media among Blu-ray Discs, DVDs, CDs and USB sticks. Then read the installation manual.

Already a happy Debian user and you only want to upgrade? You can easily upgrade from your current Debian 8 Jessie installation, please read the release notes.

Do you want to celebrate the release? Share the banner from this blog in your blog or your website!

Planet DebianJonathan Carter: AIMS Desktop 2017.1 is available!

Back at DebConf 15 in Germany, I gave a talk on AIMS Desktop (which was then based on Ubuntu), and on our intentions and rationale for wanting to move it over to being Debian based.

Today, alongside the Debian 9 release, we release AIMS Desktop 2017.1, the first AIMS Desktop release based on Debian. For Debian 10, we’d like to get the last remaining AIMS Desktop packages into Debian so that it can be a Debian pure blend.

Students trying out a release candidate at AIMS South Africa

It’s tailored to the needs of students, lecturers and researchers at the African Institute for Mathematical Sciences. We’re releasing it to the public in the hope that it could be useful for other tertiary education users with an interest in maths and science software. If you run a mirror at your university, it would also be great if you could host a copy; we added an rsync location on the downloads page which you could use to keep it up to date.

Planet DebianJonathan Carter: Debian 9 is available!

Congratulations to everyone who has played a part in the creation of Debian GNU/Linux 9.0! It’s a great release, I’ve installed the pre-release versions for friends, family and colleagues and so far the feedback has been very positive.

This release is dedicated to Ian Murdock, who founded the Debian project in 1993, and sadly passed away on 28 December 2015. On the Debian ISO files, a dedication statement is available at /doc/dedication/dedication-9.0.txt

Here’s a copy of the dedication text:

Dedicated to Ian Murdock
------------------------

Ian Murdock, the founder of the Debian project, passed away
on 28th December 2015 at his home in San Francisco. He was 42.

It is difficult to exaggerate Ian's contribution to Free
Software. He led the Debian Project from its inception in
1993 to 1996, wrote the Debian manifesto in January 1994 and
nurtured the fledgling project throughout his studies at
Purdue University.

Ian went on to be founding director of Linux International,
CTO of the Free Standards Group and later the Linux
Foundation, and leader of Project Indiana at Sun
Microsystems, which he described as "taking the lesson
that Linux has brought to the operating system and providing
that for Solaris".

Debian's success is testament to Ian's vision. He inspired
countless people around the world to contribute their own free
time and skills. More than 350 distributions are known to be
derived from Debian.

We therefore dedicate Debian 9 "stretch" to Ian.

-- The Debian Developers

During this development cycle, the amount of source packages in Debian grew from around 21 000 to around 25 000 packages, which means that there’s a whole bunch of new things Debian can make your computer do. If you find something new in this release that you like, post about it on your favourite social networks, using the hashtag #newinstretch – or look it up to see what others have discovered!

Debian Administration Debian Stretch Released

Today the Debian project is pleased to announce the release of the next stable release of Debian GNU/Linux, code-named Stretch.

Planet DebianBenjamin Mako Hill: The Community Data Science Collective Dataverse

I’m pleased to announce the Community Data Science Collective Dataverse. Our dataverse is an archival repository for datasets created by the Community Data Science Collective. The dataverse won’t replace work that collective members have been doing for years to document and distribute data from our research. What we hope it will do is get our data — like our published manuscripts — into the hands of folks in the “forever” business.

Over the past few years, the Community Data Science Collective has published several papers where an important part of the contribution is a dataset. These include:

Recently, we’ve also begun producing replication datasets to go alongside our empirical papers. So far, this includes:

In the case of each of the first group of papers, where the dataset was a part of the contribution, we uploaded code and data to a website we created. Of course, even if we do a wonderful job of keeping these websites maintained over time, eventually our research group will cease to exist. When that happens, the data will eventually disappear as well.

The text of our papers will be maintained long after we’re gone in the journal or conference proceedings’ publisher’s archival storage and in our universities’ institutional archives. But what about the data? Since the data is a core part — perhaps the core part — of the contribution of these papers, the data should be archived permanently as well.

Toward that end, our group has created a dataverse. Our dataverse is a repository within the Harvard Dataverse where we have been uploading archival copies of datasets over the last six months. All five of the papers described above are uploaded already. The Scratch dataset, due to access control restrictions, isn’t listed on the main page, but it’s online on the site. Moving forward, we’ll be populating it with new datasets we create as well as replication datasets for our future empirical papers. We’re currently preparing several more.

The primary point of the CDSC Dataverse is not to provide you with a way to get our data, although you’re certainly welcome to use it that way, and it might help make some of it more discoverable. The websites we’ve created (like the ones for redirects and for page protection) will continue to exist and be maintained. The Dataverse is insurance so that if, and when, those websites go down, our data will still be accessible.


This post was also published on the Community Data Science Collective blog.

,

Planet DebianAlexander Wirt: Survey about alioth replacement

To get some idea about the expectations and current usage of alioth I created a survey. Please take part in it if you are an alioth user. If you need some background about the coming alioth replacement I recommend to read the great lwn article written by anarcat.

Krebs on SecurityCredit Card Breach at Buckle Stores

The Buckle Inc., a clothier that operates more than 450 stores in 44 U.S. states, disclosed Friday that its retail locations were hit by malicious software designed to steal customer credit card data. The disclosure came hours after KrebsOnSecurity contacted the company regarding reports from sources in the financial sector about a possible breach at the retailer.

buckle

On Friday morning, KrebsOnSecurity contacted The Buckle after receiving multiple tips from sources in the financial industry about a pattern of fraud on customer credit and debit cards which suggested a breach of point-of-sale systems at Buckle stores across the country.

Later Friday evening, The Buckle Inc. released a statement saying that point-of-sale malware was indeed found installed on cash registers at Buckle retail stores, and that the company believes the malware was stealing customer credit card data between Oct. 28, 2016 and April 14, 2017. The Buckle said purchases made on its online store were not affected.

As with the recent POS-malware based breach at Kmart, The Buckle said all of its stores are equipped with EMV-capable card terminals, meaning the point-of-sale machines can accommodate newer, more secure chip-based credit and debit cards. The malware copies account data stored on the card’s magnetic stripe. Armed with that information, thieves can clone the cards and use them to buy high-priced merchandise from electronics stores and big box retailers.

The trouble is that not all banks have issued chip-enabled cards, which are far more expensive and difficult for thieves to counterfeit. Customers who shopped at compromised Buckle stores using a chip-based card would not be in danger of having their cards cloned and used elsewhere, but the stolen card data could still be used for e-commerce fraud.

Visa said in March 2017 there were more than 421 million Visa chip cards in the country, representing 58 percent of Visa cards. According to Visa, counterfeit fraud has been declining month over month — down 58 percent at chip-enabled merchants in December 2016 when compared to the previous year.

The United States is the last of the G20 nations to make the shift to chip-based cards. Visa has said it typically took about three years after the liability shifts in other countries before 90% of payment card transactions were “chip-on-chip,” or generated by a chip card used at a chip-based terminal.

Virtually every other country that has made the jump to chip-based cards saw fraud trends shifting from card-present to card-not-present (online, phone) fraud as it became more difficult for thieves to counterfeit physical credit cards. Data collected by consumer credit bureau Experian suggests that e-commerce fraud increased 33 percent last year over 2015.

TED5 TED Radio Hour episodes that explore what it’s like to be human

TED Radio Hour started in 2013, and while I’ve only been working on the show for about a year, it’s one of my favorite parts of my job. We work with an incredibly creative team over at NPR, and helping them weave different ideas into a narrative each week adds a whole new dimension to the talks.

On Friday, the podcast published its 100th episode. The theme is A Better You, and in the hour we explore the many ways we as humans try to improve ourselves. We look at the role of our own minds when it comes to self-improvement, and the tension in play between the internal and the external in this struggle.

New to the show, or looking to dip back into the archive? Below are five of my favorite episodes so far that explore what it means to be human.

The Hero’s Journey

What makes a hero? Why are we so drawn to stories of lone figures, battling against the odds? We talk about space and galaxies far, far away a lot at TED, but in this episode we went one step further and explored how the concept of the Hero’s Journey relates to the Star Wars universe – and the ideas of TED speakers. Dame Ellen MacArthur shares the transformative impact of her solo sailing trip around the world. Jarrett J. Krosoczka pays homage to the surprising figures that formed his path in life. George Takei tells his powerful story of being held in a Japanese-American internment camp during WWII, and how he managed to forgive, and even love, the country that treated him this way. We finish up the hour with Ismael Nazario’s story of spending 300 days in solitary confinement before he was even convicted of a crime, and how this ultimately set him on a journey to help others.

Anthropocene

In this episode, four speakers make the case that we are now living in a new geological age called the Anthropocene, where the main force impacting the earth – is us. Kenneth Lacovara opens the show by taking us on a tour of the earth’s ages so far. Next Emma Marris calls us to connect with nature in a new way so we’ll actually want to protect it. Then, Peter Ward looks at what past extinctions can tell us about the earth – and ourselves. Finally Cary Fowler takes us deep within a vault in Svalbard, where a group of scientists are storing seeds in an attempt to ultimately preserve our species. While the subject could easily be a ‘doom and gloom’ look at the state of our planet, ultimately it left me hopeful and optimistic for our ability to solve some of these monumental problems. If you haven’t yet heard of the Anthropocene, I promise that after this episode you’ll start coming across it everywhere.

The Power of Design

Doing an episode on design seemed like an obvious choice, and we were excited about the challenge of creating an episode about such a visual discipline for radio. We looked at the ways good or bad design affects us, and the ways we can make things more elegant and beautiful. Tony Fadell starts out the episode by bringing us back to basics, calling out the importance of noticing design flaws in the world around us in order to solve problems. Marc Kushner predicts how architectural design is going to be increasingly shaped by public perception and social media. Airbnb co-founder Joe Gebbia takes us inside the design process that helped people establish enough trust to open up their homes to complete strangers. Next we take an insightful design history lesson with Alice Rawsthorn to pay homage to bold and innovative design thinkers of the past, and their impact on the present. We often think of humans as having a monopoly on design, but our final speaker in this episode, Janine Benyus, examines the incredible design lessons we can take from the natural world.

Beyond Tolerance

We throw around the word ‘tolerance’ a lot – especially in the last year as politics has grown even more polarized. But how can we push past mere tolerance to true understanding and empathy? I remember when we first started talking about this episode Guy said he wanted it to be a deep dive into things you wouldn’t talk about at the dinner table, and we did just that: from race, to politics, to abortion, all the way to Israeli-Palestinian relations. Arthur Brooks tackles the question of how liberals and conservatives can work together – and why it’s so crucial. Diversity advocate Vernā Myers gives some powerful advice on how to conquer our unconscious biases. In the fraught and often painful debate around abortion, Aspen Baker emphasizes the need to listen: to be pro-voice, rather than pro-life or pro-choice. Finally Aziz Abu Sarah describes the tours he leads which bring Jews, Muslims and Christians across borders to break bread and forge new cultural ties.

Headspace

What I really love about this episode is that it takes a dense and difficult subject – mental health – and approaches it with this very human optimism, ultimately celebrating the resilience and power of our minds. The show opens up with Andrew Solomon, one of my favorite TED speakers, who shares what he has learned from his battle with depression, including how he forged meaning and identity from his experience with the illness. He has some fascinating and beautiful ideas around mental health and personality, which still resonate so strongly with me. Next, Alix Generous explains some of the misconceptions around Asperger’s Syndrome; she beautifully articulates the gap between her “complex inner life” and how she communicates with the world. David Anderson looks at the biology of emotion and how our brains function, painting a picture of how new research could revolutionize the way we understand and care for our mental health. Our fourth speaker, psychologist Guy Winch, gives some strong takeaways on how we can incorporate caring for our ‘emotional health’ in our daily lives.

Happy listening! To find out more about the show, follow us on Facebook and Twitter.


,

Planet DebianBits from Debian: Upcoming Debian 9.0 Stretch!

Stretch is coming on 2017-06-17

The Debian Release Team in coordination with several other teams are preparing the last bits needed for releasing Debian 9 Stretch. Please, be patient! Lots of steps are involved and some of them take some time, such as building the images, propagating the release through the mirror network, and rebuilding the Debian website so that "stable" points to Debian 9.

Follow the live coverage of the release on https://micronews.debian.org or the @debian profile in your favorite social network! We'll spread the word about what's new in Debian 9, how the release process is progressing over the weekend, and facts about Debian and the wide community of volunteer contributors that make it possible.

CryptogramFriday Squid Blogging: Squids from Space Video Game

An early preview.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramNSA Links WannaCry to North Korea

There's evidence:

Though the assessment is not conclusive, the preponderance of the evidence points to Pyongyang. It includes the range of computer Internet protocol addresses in China historically used by the RGB, and the assessment is consistent with intelligence gathered recently by other Western spy agencies. It states that the hackers behind WannaCry are also called "the Lazarus Group," a name used by private-sector researchers.

One of the agencies reported that a prototype of WannaCry ransomware was found this spring in a non-Western bank. That data point was a "building block" for the North Korea assessment, the individual said.

Honestly, I don't know what to think. I am skeptical, but I am willing to be convinced. (Here's the grugq, also trying to figure it out.) What I would like to see is the NSA evidence in more detail than they're probably comfortable releasing.

More commentary. Slashdot thread.

Planet DebianElena 'valhalla' Grandi: Travel piecepack v0.1

Travel piecepack v0.1

[Photo: social.gl-como.it/photos/valha…]

A piecepack (www.piecepack.org) set of generic board game pieces is nice to have around in case of a sudden spontaneous need for gaming, but carrying my full set (www.trueelena.org/fantastic/fe) takes some room, and is not going to fit in my daily bag.

I've been thinking for a while that a half-size set could be useful, and between yesterday and today I've actually managed to put together the first version.

It's (2d) printed on both sides of a single sheet of heavy paper, laminated and then cut, comes with both the basic suites and the playing card expansion and fits in a mint tin divided by origami boxes.

It's just version 0.1 because there are a few issues. First of all, I'm not happy with the manual way I drew the page: ideally it would have been generated programmatically from the same SVG files as the 3D piecepack (with the ability to generate other expansions), but reading paths from one SVG and writing them into another is apparently not supported in an easy way by the libraries I could find, and looking for a way to do it was starting to take much more time than just doing it by hand.

I also still have to assemble the dice; in the picture above I'm just using the ones from the 3d-printed set, but they are a bit too big and only four of them fit in the mint tin. I already have the faces printed, so this is going to be fixed in the next few days.

Source files are available in the same git repository as the 3d-printable piecepack git.trueelena.org/cgit.cgi/3d/, with the big limitation mentioned above; updates will also be pushed there, just don't hold your breath for it :)

Planet DebianMichal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue was over one month long, so it's time to process it and include the new projects.

This time, the newly hosted projects include:

We now also host a few new Minetest mods:

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service alive. You can make them on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Sociological ImagesLonely Hearts: Estranged Fathers on Father’s Day

I work with one of the most heartbroken groups of people in the world: fathers whose adult children want nothing to do with them. While every day has its challenges, Father’s Day—with its parade of families and feel-good ads—makes it especially difficult for these Dads to avoid the feelings of shame, guilt and regret always lurking just beyond the reach of that well-practiced compartmentalization. Like birthdays, and other holidays, Father’s Day creates the wish, hope, or prayer that maybe today, please today, let me hear something, anything from my kid.

Many of these men are not only fathers but grandfathers who were once an intimate part of their grandchildren’s lives. Or, more tragically, they discovered they were grandfathers through a Facebook page, if they hadn’t yet been blocked. Or, they learn from an unwitting relative bearing excited congratulations, now surprised by the look of grief and shock that greets the newly announced grandfather. Hmm, what did I do with those cigars I put aside for this occasion?

And it’s not just being involved as a grandfather that gets denied. The estrangement may foreclose the opportunity to celebrate other developmental milestones he always assumed he’d attend, such as college graduations, engagement parties, or weddings. Maybe he was invited to the wedding but told he wouldn’t get to walk his daughter down the aisle because that privilege was being reserved for her father-in-law whom she’s decided is a much better father than he ever was.

Most people assume that a Dad would have to do something pretty terrible to make an adult child not want to have contact. My clinical experience working with estranged parents doesn’t bear this out. While those cases clearly exist, many parents get cut out as a result of the child needing to feel more independent and less enmeshed with the parent or parents. A not insignificant number of estrangements are influenced by a troubled or compelling son-in-law or daughter-in-law. Sometimes a parent’s divorce creates the opportunity for one parent to negatively influence the child against the other parent, or introduce people who compete for the parent’s love, attention or resources. In a highly individualistic culture such as ours, divorce may cause the child to view a parent more as an individual with relative strengths and weaknesses rather than a family unit of which they’re a part.

Little binds adult children to their parents today beyond whether or not the adult child wants that relationship. And a not insignificant number decide that they don’t.

While my clinical work hasn’t shown fathers to be more vulnerable to estrangement than mothers, they do seem to be more at risk of a lower level of investment from their adult children. A recent Pew survey found that women more commonly say their grown children turn to them for emotional support while men more commonly say this “hardly ever” or “never” occurs. This same study reported that half of adults say they are closer with their mothers, while only 15 percent say they are closer with their fathers.

So, yes, let’s take a moment to celebrate fathers everywhere. And another to feel empathy for those Dads who won’t have any contact with their child on Father’s Day.

Or any other day.

Josh Coleman is Co-Chair, Council on Contemporary Families, and author most recently of When Parents Hurt. Originally posted at Families as They Really Are.

(View original at https://thesocietypages.org/socimages)

CryptogramGaming Google News

Turns out that it's surprisingly easy to game:

It appears that news sites deemed legitimate by Google News are being modified by third parties. These sites are then exploited to redirect to the spam content. It appears that the compromised sites are examining the referrer and redirecting visitors coming from Google News.

Worse Than FailureError'd: @TitleOfErrord

"I asked my son, @Firstname, and he is indeed rather @Emotion about going to @ThemePark!" wrote Chris @LASTNAME.

 

"I think Google assumes there is only one exit on the highway," writes Balaprasanna S.

 

Axel C. writes, "So what you're saying here is that something went wrong?"

 

"Hmmmm...YMMV, but that's not quite the company that I would want to follow," wrote Rob H.

 

"You know, I also confuse San Francisco with San Jose all the time. I mean, they just sound so much alike!" writes Mike S.

 

Mike G. writes, "Sure, it's a little avant garde, but I hear this film was nominated for an award."

 

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

,

Planet DebianMike Hommey: Announcing git-cinnabar 0.5.0 beta 2

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.
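In day-to-day use the helper is driven through ordinary git commands; a minimal sketch (the repository URL is just an example) looks like this:

  # clone a Mercurial repository through the hg:: remote helper
  git clone hg::https://www.mercurial-scm.org/repo/hg
  # pull and push afterwards work like with any other git remote
  git pull
  git push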

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 1?

  • Enabled support for clonebundles for faster clones when the server provides them.
  • Git packs created by git-cinnabar are now smaller.
  • Added a new git cinnabar upgrade command to handle metadata upgrade separately from fsck.
  • Metadata upgrade is now significantly faster.
  • git cinnabar fsck is also faster.
  • Both now also use significantly less memory.
  • Updated git to 2.13.1 for git-cinnabar-helper.

Harald WelteHow the Osmocom GSM stack is funded

As the topic has been raised on twitter, I thought I might share a bit of insight into the funding of the Osmocom Cellular Infrastructure Projects.

Keep in mind: Osmocom is a much larger umbrella project, and beyond the network-side cellular stack it is home to many different community-based projects around open source mobile communications. All of those started more or less as just-for-fun projects, nothing serious, just a hobby [1]

The projects implementing the network-side protocol stacks and network elements of GSM/GPRS/EGPRS/UMTS cellular networks are somewhat the exception to that, as they have to some extent become professionalized. We call those projects collectively the Cellular Infrastructure projects inside Osmocom. This post is about that part of Osmocom only.

History

From late 2008 through 2009, People like Holger and I were working on bs11-abis and later OpenBSC only in our spare time. The name Osmocom didn't even exist back then. There was a strong technical community with contributions from Sylvain Munaut, Andreas Eversberg, Daniel Willmann, Jan Luebbe and a few others. None of this would have been possible if it wasn't for all the help we got from Dieter Spaar with the BS-11 [2]. We all had our dayjob in other places, and OpenBSC work was really just a hobby. People were working on it, because it was where no FOSS hacker has gone before. It was cool. It was a big and pleasant challenge to enter the closed telecom space as pure autodidacts.

Holger and I were doing freelance contract development work on Open Source projects for many years before. I was mostly doing Linux related contracting, while Holger has been active in all kinds of areas throughout the FOSS software stack.

In 2010, Holger and I saw the first interest from companies in OpenBSC, including Netzing AG and On-Waves ehf. So we were able to spend at least some of our paid time on OpenBSC/Osmocom related contract work, and were thus able to do less other work. We also continued to spend tons of spare time in bringing Osmocom forward. Also, the amount of contract work we did was only a fraction of the many more hours of spare time.

In 2011, Holger and I decided to start the company sysmocom in order to generate more funding for the Osmocom GSM projects by means of financing software development by product sales. So rather than doing freelance work for companies who bought their BTS hardware from other places (and spent huge amounts of cash on that), we decided that we wanted to be a full solution supplier, who can offer a complete product based on all hardware and software required to run small GSM networks.

The only problem is: We still needed an actual BTS for that. Through some reverse engineering of existing products we figured out who one of the ODM suppliers for the hardware + PHY layer was, and decided to develop the OsmoBTS software to run on it. We inherited some of the early code from work done by Andreas Eversberg on the jolly/bts branch of OsmocomBB (thanks), but much was missing at the time.

What followed was Holger and me working for several years for free [3], without any salary, in order to complete the OsmoBTS software, build an embedded Linux distribution around it based on OE/poky, write documentation, etc. and complete the first sysmocom product: the sysmoBTS 1002

We did that not because we want to get rich, or because we want to run a business. We did it simply because we saw an opportunity to generate funding for the Osmocom projects and make them more sustainable and successful. And because we believe there is a big, gaping, huge vacuum in terms of absence of FOSS in the cellular telecom sphere.

Funding by means of sysmocom product sales

Once we started to sell the sysmoBTS products, we were able to fund Osmocom related development from the profits made on hardware / full-system product sales. Every single unit sold made a big contribution towards funding both the maintenance as well as the ongoing development on new features.

This source of funding continues to be an important factor today.

Funding by means of R&D contracts

The probably best and most welcome method of funding Osmocom related work is by means of R&D projects in which a customer funds our work to extend the Osmocom GSM stack in one particular area where he has a particular need that the existing code cannot fulfill yet.

This kind of project is the ideal match, as it shows where the true strength of FOSS is: Each of those customers did not have to fund the development of a GSM stack from scratch. Rather, they only had to fund those bits that were missing for their particular application.

Our reference for this is and has been On-Waves, who have been funding development of their required features (and bug fixing etc.) since 2010.

We've of course had many other projects from a variety of customers over the years. Last, but not least, we had a customer who willingly co-funded (together with funds from the NLnet foundation and lots of unpaid effort by sysmocom) the 3G/3.5G support in the Osmocom stack.

The problem here is:

  • we have not been able to secure anywhere nearly as many of those R&D projects within the cellular industry, despite believing we have a very good foundation upon which we can build. I've been writing many exciting technical project proposals
  • you almost exclusively get funding only for new features. But it's very hard to get funding for the core maintenance work. The bug-fixing, code review, code refactoring, testing, etc.

So as a result, the profit margin you have on selling R&D projects is basically used to (do a bad job of) fund those bits and pieces that nobody wants to pay for.

Funding by means of customer support

There is a way to generate funding for development by providing support services. We've had some success with this, but primarily alongside the actual hardware/system sales - not so much in terms of pure software-only support.

Also, providing support services from a R&D company means:

  • either you distract your developers by handling support inquiries. This means they will have less time to work on actual code, and likely get side tracked by too many issues that make it hard to focus
  • or you have to hire separate support staff. This of course means that the size of the support business has to be sufficiently large to not only cover the costs of hiring + training support staff, but also still generate funding for the actual software R&D.

We tried the second option for a short while, but have fallen back to the first for now. There's simply not sufficient user/admin type support business to justify dedicated staff for that.

Funding by means of cross-subsidizing from other business areas

sysmocom also started to do some non-Osmocom projects in order to generate revenue that we can feed again into Osmocom projects. I'm not at liberty to discuss them in detail, but basically we've been doing pretty much anything from

  • custom embedded Linux board designs
  • M2M devices with GSM modems
  • consulting gigs
  • public tendered research projects

Profits from all those areas went again into Osmocom development.

Last, but not least, we also operate the sysmocom webshop. The profit we make on those products also is again immediately re-invested into Osmocom development.

Funding by grants

We've had some success in securing funding from NLnet Foundation for specific features. While this is useful, the size of their projects grants of up to EUR 30k is not a good fit for the scale of the tasks we have at hand inside Osmocom. You may think that's a considerable amount of money? Well, that translates to 2-3 man-months of work at a bare cost-covering rate. At a team size of 6 developers, you would theoretically have churned through that in two weeks. Also, their focus is (understandably) on Internet and IT security, and not so much cellular communications.

There are of course other options for grants, such as government research grants and the like. However, they require long-term planning, they require you to match (i.e. pay yourself) a significant portion, and basically mandate that you hire one extra person for doing all the required paperwork and reporting. So all in all, not a particularly attractive option for a very small company consisting of die hard engineers.

Funding by more BTS ports

At sysmocom, we've been doing some ports of the OsmoBTS + OsmoPCU software to other hardware, and supporting those other BTS vendors with porting, R&D and support services.

If sysmocom was a classic BTS vendor, we would not help our "competition". However, we are not. sysmocom exists to help Osmocom, and we strongly believe in open systems and architectures, without a single point of failure, a single supplier for any component or any type of vendor lock-in.

So we happily help third parties to get Osmocom running on their hardware, either with a proprietary PHY or with OsmoTRX.

However, we expect that those BTS vendors also understand their responsibility to share the development and maintenance effort of the stack. Preferably by dedicating some of their own staff to work in the Osmocom community. Alternatively, sysmocom can perform that work as a paid service. But that's a double-edged sword: We don't want to be a single point of failure.

Osmocom funding outside of sysmocom

Osmocom is of course more than sysmocom. Even the cellular infrastructure projects inside Osmocom are true, community-based, open, collaborative development projects. Anyone can contribute.

Over the years, there have been code contributions by e.g. Fairwaves. They, too, build GSM base station hardware and use that as a means to not only recover the R&D on the hardware, but also to contribute to Osmocom. At some point a few years ago, there was a lot of work from them in the area of OsmoTRX, OsmoBTS and OsmoPCU. Unfortunately, in more recent years, they have not been able to keep up the level of contributions.

There are other companies engaged in activities with and around Osmocom. There's Rhizomatica, an NGO helping indigenous communities to run their own cellular networks. They have been funding some of our efforts, but being an NGO helping rural regions in developing countries, they of course also don't have the deep pockets. Ideally, we'd want to be the ones contributing to them, not the other way around.

State of funding

During recent years we've been making some progress in securing funding from players we cannot name [4]. We're also making occasional progress in convincing BTS suppliers to chip in their share. Unfortunately there are more who don't live up to their responsibility than those who do. I might start calling them out by name one day. The wider community and the public actually deserve to know who plays by FOSS rules and who doesn't. That's not shaming, it's just stating bare facts.

Which brings us to:

  • sysmocom is in an office that's actually too small for the team, equipment and stock. But we certainly cannot afford more space.
  • we cannot pay our employees what they could earn working at similar positions in other companies. So working at sysmocom requires dedication to the cause :)
  • Holger and I have invested way more time than we have ever paid ourselves, even more so considering the opportunity cost of what we would have earned if we'd continued on our freelance Open Source hacker path
  • we're [just barely] managing to pay for 6 developers dedicated to Osmocom development on our payroll based on the various funding sources indicated above

Nevertheless, I doubt that any team this small has ever implemented an end-to-end GSM/GPRS/EGPRS network from RAN to Core at a comparable feature set. My deepest respects to everyone involved. The big task now is to make it sustainable.

Summary

So as you can see, there's quite a bit of funding around. However, it always falls short of what's needed to implement all parts properly, and is not even quite sufficient to keep maintaining the status quo in a proper and tested way. That can often be frustrating (mostly to us, but sometimes also to users who run into regressions and other bugs). There's so much more potential. So many things we wanted to add or clean up for a long time, but too few people interested in joining in and helping out, financially or by writing code.

One thing that is often a challenge when dealing with traditional customers: We are not first developing a product and then selling it ready-made. In fact, in FOSS this would be more or less suicidal: We'd have to invest man-years upfront, but then once it is finished, everyone can use it without having to partake in that investment.

So instead, the FOSS model requires the customers/users to chip in early during the R&D phase, in order to then subsequently harvest the fruits of that.

I think the lack of a FOSS mindset across the cellular / telecom industry is the biggest constraining factor here. I saw the same thing some 15-20 years ago in the Linux world. Trust me, it takes a lot of dedication to the cause to endure this lack of comprehension so many years later.

[1]just like Linux has started out.
[2]while you will not find a lot of commits from Dieter in the code, he has been playing a key role in doing a lot of prototyping, reverse engineering and debugging!
[3]sysmocom is 100% privately held by Holger and me, we intentionally have no external investors and are proud to never had to take a bank loan. So all we could invest was our own money and, most of all, time.
[4]contrary to the FOSS world, a lot of aspects are confidential in business, and we're not at liberty to disclose the identities of all our customers

Harald WelteFOSS misconceptions, still in 2017

The lack of basic FOSS understanding in Telecom

Given that the Free and Open Source movement has been around at least since the 1980ies, it puzzles me that people still seem to have such fundamental misconceptions about it.

Something that really triggered me was an article at LightReading [1] which quotes Ulf Ewaldsson, a leading Ericsson executive, with

"I have yet to understand why we would open source something we think is really good software"

This completely misses the point. FOSS is not about making a charity donation of a finished product to the planet.

FOSS is about sharing the development costs among multiple players, and avoiding that everyone has to reimplement the wheel. Macro-economically, it is complete and utter nonsense that each 3GPP specification gets implemented two dozen times, by at least a dozen different entities. As a result, products are way more expensive than needed.

If large Telco players (whether operators or equipment manufacturers) were to collaboratively develop code just as much as they collaboratively develop the protocol specifications, there would be no need for replicating all of this work.

As a result, everyone could produce cellular network elements at reduced cost, sharing the R&D expenses, and competing in key areas, such as who can come up with the most energy-efficient implementation, or can produce the most reliable hardware, the best receiver sensitivity, the best and most fair scheduling implementation, or whatever else. But some 80% of the code could probably be shared, as e.g. encoding and decoding messages according to a given publicly released 3GPP specification document is not where those equipment suppliers actually compete.

So my dear cellular operator executives: Next time you're cursing about the prohibitively expensive pricing that your equipment suppliers quote you: You only have to pay that much because everyone is reimplementing the wheel over and over again.

Equally, my dear cellular infrastructure suppliers: You are all dying one by one, as it's hard to develop everything from scratch. Over the years, many of you have died. One wonders if we might still have more players left if some of you had started to cooperate in developing FOSS, at least in those areas where you're not competing. You could replicate what Linux is doing in the operating system market. There's no need for a phalanx of different proprietary flavors of Unix-like OSs. It's way too expensive, and it's not an area in which most companies need to or want to compete anyway.

Management Summary

You don't first develop an entire product until it is finished and then release it as open source. This makes little economic sense in a lot of cases, as you've already invested in developing 100% of it. Instead, you actually develop a new product collaboratively as FOSS in order to not have to invest 100% but maybe only 30% or even less. You get a multiple of your R&D investment back, because you're not only getting your own code, but all the other code that other community members implemented. You of course also get other benefits, such as peer review of the code, more ideas (not all bright people work inside one given company), etc.

[1]that article is actually a heavily opinionated post by somebody who appears to have been pushing his own anti-FOSS agenda for some time. The author is misinformed about the fact that the TIP has always included projects under both FRAND and FOSS terms. As a TIP member I can attest to that fact. I'm only referencing it here for the purpose of that Ericsson quote.

Planet DebianJeremy Bicha: #newinstretch : Latest WebKitGTK+

GNOME Web (Epiphany) in Debian 9 "Stretch"

Debian 9 “Stretch”, the latest stable version of the venerable Linux distribution, will be released in a few days. I pushed a last-minute change to get the latest security and feature update of WebKitGTK+ (packaged as webkit2gtk 2.16.3) in before release.

Carlos Garcia Campos discusses what’s new in 2.16, but there are many, many more improvements since the 2.6 version in Debian 8.

Like many things in Debian, this was a team effort from many people. Thank you to the WebKitGTK+ developers, WebKitGTK+ maintainers in Debian, Debian Release Managers, Debian Stable Release Managers, Debian Security Team, Ubuntu Security Team, and testers who all had some part in making this happen.

As with Debian 8, there is no guaranteed security support for webkit2gtk for Debian 9. This time though, there is a chance of periodic security updates without needing to get the updates through backports.

If you would like to help test the next proposed update, please contact me so that I can help coordinate this.

Krebs on SecurityInside a Porn-Pimping Spam Botnet

For several months I’ve been poking at a decent-sized spam botnet that appears to be used mainly for promoting adult dating sites. Having hit a wall in my research, I decided it might be good to publish what I’ve unearthed so far to see if this dovetails with any other research out there.

In late October 2016, an anonymous source shared with KrebsOnSecurity.com a list of nearly 100 URLs that — when loaded into a Firefox browser — each displayed what appeared to be a crude but otherwise effective text-based panel designed to report in real time how many “bots” were reporting in for duty.

Here’s a set of archived screenshots of those counters illustrating how these various botnet controllers keep a running tab of how many “activebots” — hacked servers set up to relay spam — are sitting idly by and waiting for instructions.

One of the more than 100 panels linked to the same porn spamming operation. In October 2016, these 100 panels reported a total of 1.2 million active bots operating simultaneously.

At the time, it was unclear to me how this apparent botnet was being used, and since then the total number of bots reporting in each day has shrunk considerably. During the week the above-linked screen shots were taken, this botnet had more than 1.2 million zombie machines or servers reporting each day (that screen shot archive includes roughly half of the panels found). These days, the total number of servers reporting in to this spam network fluctuates between 50,000 and 100,000.

Thanks to a tip from an anti-spam activist who asked not to be named, I was able to see that the botnet appears to be busy promoting a seemingly endless network of adult dating Web sites connected to just two companies: CyberErotica, and Deniro Marketing LLC (a.k.a. AmateurMatch).

As affiliate marketing programs go, CyberErotica stretches way back — perhaps to the beginning. According to TechCrunch, CyberErotica is said to have launched the first online affiliate marketing firm in 1994.

In 2001, CyberErotica’s parent firm Voice Media settled a lawsuit with the U.S. Federal Trade Commission, which alleged that the adult affiliate program was misrepresenting its service as free while it dinged subscribers for monthly charges and made it difficult for them to cancel.

In 2010, Deniro Marketing found itself the subject of a class-action lawsuit that alleged the company employed spammers to promote an online dating service that was overrun with automated, fake profiles of young women. Those allegations ended in an undisclosed settlement after the judge in the case tossed out the spamming claim because the statute of limitations on those charges had expired.

What’s unusual (and somewhat lame) about this botnet is that — through a variety of botnet reporting panels that are still displaying data — we can get live, real-time updates about the size and status of this crime machine. No authentication or credentials needed. So much for operational security!

The “mind map” pictured below contains enough information for nearly anyone to duplicate this research, and includes the full Web address of the botnet reporting panels that are currently online and responding with live updates. I was unable to load these panels in a Google Chrome browser (perhaps the XML data on the page is missing some key components), but they loaded fine in Mozilla Firefox.

But a note of caution: I’d strongly encourage anyone interested in following my research to take care before visiting these panels, preferably doing so from a disposable “virtual” machine that runs something other than Microsoft Windows.

That’s because spammers are usually involved in the distribution of malicious software, and spammers who maintain vast networks of apparently compromised systems are almost always involved in creating or at least commissioning the creation of said malware. Worse, porn spammers are some of the lowest of the low, so it’s only prudent to behave as if any and all of their online assets are actively hostile or malicious.

A “mind map” tracing some of the research mentioned in this post.

FOLLOW THE HONEY

So how did KrebsOnSecurity tie the spam that was sent to promote these two adult dating schemes to the network of spam botnet panels that I mentioned at the outset of this post? I should say it helped immensely that one anti-spam source maintains a comprehensive, historic collection of spam samples, and that this source shared more than a half dozen related spam samples. Here’s one of them.

All of those spams had similar information included in their “headers” — the metadata that accompanies all email messages.

Received: from minitanth.info-88.top (037008194168.suwalki.vectranet.pl [37.8.194.168])
Received: from exundancyc.megabulkmessage225.com (109241011223.slupsk.vectranet.pl [109.241.11.223])
Received: from disfrockinga.message-49.top (unknown [78.88.215.251])
Received: from offenders.megabulkmessage223.com (088156021226.olsztyn.vectranet.pl [88.156.21.226])
Received: from snaileaterl.inboxmsg-228.top (109241018033.lask.vectranet.pl [109.241.18.33])
Received: from soapberryl.inboxmsg-242.top (037008209142.suwalki.vectranet.pl [37.8.209.142])
Received: from dicrostonyxc.inboxmsg-230.top (088156042129.olsztyn.vectranet.pl [88.156.42.129])

To learn more about what information you can glean from email headers, see this post. But for now, here’s a crash course for our purposes. The so-called “fully qualified domain names” or FQDNs in the list above can be found just to the right of the open parentheses in each line.

When this information is present in the headers (and not simply listed as “unknown”) it is the fully-verified, real name of the machine that sent the message (at least as far as the domain name system is concerned). The dotted address to the right in brackets on each line is the numeric Internet address of the actual machine that sent the message.

The information to the left of the open parentheses is called the “HELO/EHLO string,” and an email server administrator can set this information to display whatever he wants: It could be set to bush[dot]whitehouse[dot]gov. Happily, in this case the spammer seems to have been consistent in the naming convention used to identify the sending domains and subdomains.
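As a quick aside (not part of the original investigation), the FQDN and the bracketed address from a header line like the first one above can be sanity-checked with ordinary DNS lookups; keep in mind these records have long since changed or lapsed, so don't expect current answers:

  # forward-resolve the sending machine's verified name
  dig +short A minitanth.info-88.top
  # reverse-resolve the bracketed IP to see the vectranet.pl customer hostname
  dig +short -x 37.8.194.168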

Back in October 2016 (when these spam messages were sent) the FQDN “minitanth.info-88[dot]top” resolved to a specific IP address: 37.8.194.168. Using passive DNS tools from Farsight Security — which keeps a historic record of which domain names map to which IP addresses — I was able to find that the spammer who set up the domain info-88[dot]top had associated the domain with hundreds of third-level subdomains (e.g. minithanth.info-88[dot]top, achoretsq.info-88[dot]top, etc.).

It was also clear that this spammer controlled a great many top-level domain names, and that he had countless third-level subdomains assigned to every domain name. This type of spamming is known as “snowshoe” spamming.

“Like a snowshoe spreads the load of a traveler across a wide area of snow, snowshoe spamming is a technique used by spammers to spread spam output across many IPs and domains, in order to dilute reputation metrics and evade filters,” writes anti-spam group Spamhaus in its useful spam glossary.

WORKING BACKWARDS

So, armed with all of that information, it took just one or two short steps to locate the IP addresses of the corresponding botnet reporting panels. Quite simply, one does DNS lookups to find the names of the name servers that were providing DNS service for each of this spammer’s second-level domains.

Once one has all of the name server names, one simply does yet more DNS lookups — one for each of the name server names — in order to get the corresponding IP address for each one.

With that list of IP addresses in hand, a trusted source volunteered to perform a series of scans on the addresses using “Nmap,” a powerful and free tool that can map out any individual virtual doorways or “ports” that are open on targeted systems. In this case, an Nmap scan against that list of IPs showed they were all listening for incoming connections on Port 10001.

From there, I took the IP address list and plugged each address individually into the URL field of a browser window in Mozilla Firefox, and then added “:10001” to the end of the address. After that, each address happily loaded a Web page displaying the number of bots connecting to each IP address at any given time.
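Put together, that chain can be reproduced with a handful of standard tools. The sketch below uses a placeholder domain and a documentation IP address rather than the spammer's actual infrastructure:

  # find the name servers for one of the spammer's second-level domains
  dig +short NS example-spam-domain.top
  # resolve each name server name to its IP address
  dig +short A ns1.example-spam-domain.top
  # check whether the botnet reporting panel port is listening
  nmap -p 10001 203.0.113.10
  # fetch the panel and look for the "current activebots=" counter
  curl -s http://203.0.113.10:10001/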

Here’s the output of one controller that’s currently getting pinged by more than 12,000 systems configured to relay porn spam (the relevant part is the first bit on the second line below — “current activebots=”). Currently, the entire botnet (counting the active bots from all working bot panels) seems to hover around 80,000 systems.

[Screenshot: botnet reporting panel output]

At the time, the spam being relayed through these systems was advertising sites that tried to get visitors to sign up for online chat and dating sites apparently affiliated with Deniro Marketing and CyberErotica.

Seeking more information, I began searching the Web for information about CyberErotica’s affiliate offerings and I found that the affiliate program’s marketing division is run by a guy who uses the email address scott@cecash.com.

A Google search quickly reveals that scott@cecash.com also advertises he can be reached using the ICQ instant messenger address of 55687349. I checked icq.com’s member lookup page, and found the name attached to ICQ# 55687349 is “Scott Philips.”

Mr. Philips didn’t return messages seeking comment. But I couldn’t help wonder about the similarity between that name and a convicted Australian porn spammer named Scott Phillips (NB: two “l’s in Phillips).

In 2010, Scott Gregory Phillips was fined AUD $2 million for running a business that employed people to create fake profiles on dating websites in a bid to obtain the mobile phone numbers of dating website users. Phillips’ operation then sent SMS texts such as “get laid, text your number to…”, and then charged $5 on the mobile accounts of people who replied.

Phillips’ Facebook page and Quora profile would have us believe he has turned his life around and is now making a living through day trading. Reached via email, Phillips said he is a loyal reader who long ago quit the spam business.

“I haven’t been in the spam business since 2002 or so,” Phillips said. “I did some SMS spam in 2005, got about 18 million bucks worth of fines for it, and went straight.”

Phillips says he builds “automated commodity trading systems” now, and that virtually all modern spam is botnet-based.

“As far as I know the spam industry is 100% botnet these days, and not a viable proposition for adult sites,” he told KrebsOnSecurity.

Well, it’s certainly a viable proposition for some spammer. The most frustrating aspect of this research is that — in spite of the virtually non-existent operational security employed by whoever built this particular crime machine, I still have no real data on how the botnet is being built, what type of malicious software may be involved, or who’s responsible.

If anyone has additional research or information on this botnet, please don’t hesitate to leave a comment below or get in touch with me directly.

Cory DoctorowTalking about contestable futures on the Imaginary Worlds podcast

I’m in the latest episode of Imaginary Worlds, “Imagining the Internet” (MP3), talking about the future as a contestable place that we can’t predict, but that we can influence.


We were promised flying cars and we got Twitter instead. That’s the common complaint against sci-fi authors. But some writers did imagine the telecommunications that changed our world for better or worse. Cory Doctorow, Ada Palmer, Jo Walton and Arizona State University professor Ed Finn look at the cyberpunks and their predecessors. And artist Paul St George talks about why he’s fascinated by a Skype-like machine from the Victorian era.

CryptogramMillennials and Secret Leaking

I hesitate to blog this, because it's an example of everything that's wrong with pop psychology. Malcolm Harris writes about millennials, and has a theory of why millennials leak secrets. My guess is that you could write a similar essay about every named generation, every age group, and so on.

Worse Than FailureCodeSOD: Classic WTF: Hacker Proof Booleans

We continue our summer break with a classic case of outsmarting oneself in the stupidest way. Original -- Remy

"Years ago, long before I'd actually started programming, I spent my time learning about computers and data concepts by messing around with, believe it or not, cheat devices for video games," wrote Rena K., "The one I used primarily provided a RAM editor and some other tools which allowed me to tool around with the internal game files and I even get into muddling around with the game data all in the interest of seeing what would happen."

"As such, by the time my inflated hacker ego and I got into programming professionally, I was already pretty familiar with basic things like data types and binary. I was feeling pretty darn L33T."

"However, this mindset lead to me thinking that someone could potentially 'steal my program' by replacing my name with theirs in a hex editor and claiming to have made it themselves. (Which wasn't unheard of in the little game hacking communities I was in...) So I used the h4x0r sk1llz I'd picked up to make my program hacker-proof."

"Of course I knew full well how boolean variables worked, but I'd read somewhere that in VB6, boolean types were actually integers. From this, I concluded that it was technically possible that a boolean variable could hold a value that was neither true or false. Of course there was no way to do this from within VB, so that could only mean someone was monkeying around with something they shouldn't. I needed a way to detect this."

if var = true then
    doThings()
elseif var = false then
    doOtherThings()
else
    MsgBox("omfg haxor alert")
    End 'terminate program
end if

"I kept up adding the above to my code for years until I grew up enough to realize that it didn't do a darn thing. For the record though, nobody ever managed to 'steal my program'."


Do you have any confessions you'd like to make? Send them on in.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet DebianRhonda D'Vine: Apollo 440

It's been a while. And currently I shouldn't even post but rather pack my stuff because I'll get the keys to my flat in 6 days. Yay!

But, for packing I need a good sound track. And today it is Apollo 440. I saw them live at the Sundance Festival here in Vienna 20 years ago. It's been a while, but their music still gives me power to pull through.

So, without further ado, here are their songs:

  • Ain't Talkin' 'Bout Dub: This is the song I first stumbled upon, and got me into them.
  • Stop The Rock: This was featured in a movie I enjoyed, with a great dancing scene. :)
  • Krupa: Also a very up-cheering song!

As always, enjoy!

/music | permanent link | Comments: 2 | Flattr this

Planet DebianEnrico Zini: 5 years of Debian Diversity Statement

The Debian Project welcomes and encourages participation by everyone.

No matter how you identify yourself or how others perceive you: we welcome you. We welcome contributions from everyone as long as they interact constructively with our community.

While much of the work for our project is technical in nature, we value and encourage contributions from those with expertise in other areas, and welcome them into our community.

The Debian Diversity Statement has recently turned 5 years old, and I still find it the best diversity statement I know of, one of the most welcoming texts I've seen, and the result of one of the best project-wide mailing list discussions I can remember.

,

Planet DebianJoey Hess: not tabletop solar

Borrowed a pickup truck today to fetch my new solar panels. This is 1 kilowatt of power on my picnic table.

[Photo: solar panels on picnic table]

Planet DebianSteve Kemp: Porting pfctl to Linux

If you have a bunch of machines running OpenBSD for firewalling purposes, which is pretty standard, you might start to use source-control to maintain the rulesets. You might go further, and use some kind of integration testing to deploy changes from your revision control system into production.

Of course before you deploy any pf.conf file you need to test that the file contents are valid/correct. If your integration system doesn't run on OpenBSD though you have a couple of choices:

  • Run a test-job that SSH's to the live systems, and tests syntax.
    • Via pfctl -n -f /path/to/rules/pf.conf.
  • Write a tool on your Linux hosts to parse and validate the rules.

I looked at this last year and got pretty far, but then got distracted. So the other day I picked it up again. It turns out that if you're patient it's not hard to use bison to generate some C code, then glue it together such that you can validate your firewall rules on a Linux system.

  deagol ~/pf.ctl $ ./pfctl ./pf.conf
  ./pf.conf:298: macro 'undefined_variable' not defined
  ./pf.conf:298: syntax error

Unfortunately I had to remove quite a lot of code to get the tool to compile, which means that while some failures like the one above are caught, others are missed. The example above reads:

vlans="{vlan1,vlan2}"
..
pass out on $vlans proto udp from $undefined_variable

Unfortunately the following line does not raise an error:

pass out on vlan12 inet proto tcp from <unknown> to $http_server port {80,443}

That comes about because looking up the value of the table named unknown just silently fails. In slowly removing more and more code to make it compile, I lost the ability to keep track of table definitions (both their names and their values). Thus fetching a table by name has become a NOP, and a bogus name will result in no error.

Now it is possible, with more care, that you could use a hashtable library, or similar, to simulate these things. But I kinda stalled, again.

(Similar things happen with fetching a proto by name, I just hardcoded inet, gre, icmp, icmp6, etc. Things that I'd actually use.)

Might be a fun project for somebody with some time anyway! Download the OpenBSD source, e.g. from a github mirror - yeah, yeah, but still. CVS? No thanks! - Then poke around beneath sbin/pfctl/. The main file you'll want to grab is parse.y, although you'll need to setup a bunch of headers too, and write yourself a Makefile. Here's a hint:

  deagol ~/pf.ctl $ tree
  .
  ├── inc
  │   ├── net
  │   │   └── pfvar.h
  │   ├── queue.h
  │   └── sys
  │       ├── _null.h
  │       ├── refcnt.h
  │       └── tree.h
  ├── Makefile
  ├── parse.y
  ├── pf.conf
  ├── pfctl.h
  ├── pfctl_parser.h
  └── y.tab.c

  3 directories, 11 files
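For what it's worth, the Makefile hinted at above doesn't need to be fancy. A rough sketch of the commands it might drive (assuming you add a small main() of your own that feeds pf.conf to the generated parser, which is not shown in the tree) could be:

  # generate y.tab.c (and y.tab.h) from the grammar
  yacc -d parse.y        # or: bison -y -d parse.y
  # build the standalone validator against the local header stubs in inc/
  cc -I inc -o pfctl y.tab.c main.c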

Planet DebianMichael Prokop: Grml 2017.05 – Codename Freedatensuppe

The Debian stretch release is going to happen soon (on 2017-06-17) and since our latest Grml release is based on a very recent version of Debian stretch, I'm taking this as an opportunity to announce it also here. So by the end of May we released a new stable release of Grml (the Debian based live system focusing on system administrators' needs), known as version 2017.05 with codename Freedatensuppe.

Details about the changes of the new release are available in the official release notes and as usual the ISOs are available via grml.org/download.

With this new Grml release we finally made the switch from file-rc to systemd. From a user’s point of view this doesn’t change that much, though to prevent having to answer even more mails regarding the switch I wrote down some thoughts in Grml’s FAQ. There are some things that we still need to improve and sort out, but overall the switch to systemd so far went better than anticipated (thanks a lot to the pkg-systemd folks, especially Felipe Sateler and Michael Biebl!).

And last but not least, Darshaka Pathirana helped me a lot with the systemd integration and polishing the release, many thanks!

Happy Grml-ing!

Planet DebianDaniel Pocock: Croissants, Qatar and a Food Computer Meetup in Zurich

In my last blog, I described the plan to hold a meeting in Zurich about the OpenAg Food Computer.

The Meetup page has been gathering momentum but we are still well within the capacity of the room and catering budget so if you are in Zurich, please join us.

Thanks to our supporters

The meeting now has sponsorship from three organizations, Project 21 at ETH, the Debian Project and Free Software Foundation of Europe.

Sponsorship funds help with travel expenses and refreshments.

Food is always in the news

In my previous blog, I referred to a number of food supply problems that have occurred recently. There have been more in the news this week: a potential croissant shortage in France due to the rising cost of butter and Qatar's efforts to air-lift 4,000 cows from the US and Australia, among other things, due to the Saudi Arabia embargo.

The food computer isn't an immediate solution to these problems but it appears to be a helpful step in the right direction.

CryptogramData vs. Analysis in Counterterrorism

This article argues that Britain's counterterrorism problem isn't lack of data, it's lack of analysis.

Cory DoctorowHow to get a signed, personalized copy of Walkaway sent to your door!


The main body of the tour for my novel Walkaway is done (though there are still upcoming stops at Denver Comic-Con, San Diego Comic-Con, the Burbank Public Library and Defcon in Las Vegas), but you can still get signed, personalized copies of Walkaway!

My local, fantastic indie bookstore, Dark Delicacies, has a good supply of Walkaways, and since I pass by it most days, they’ve generously offered to take special orders for me to stop in and personalize so they can ship them anywhere in the world.

You can reach them at +1 818-556-6660, or darkdel@darkdel.com.

Planet DebianHolger Levsen: 20170614-stretch-vim

Changed defaults for vim in Stretch

So apparently vim in Stretch comes with some new defaults, most notably that the mouse is now enabled and there is incremental search, which I find… challenging.

As a reminder for my future self, these need to go into ~/.vimrc (or /etc/vim/vimrc) to revert those changes:

set mouse=
set noincsearch

Sociological Images“Luxury” versus “discount” pricing and the meaning of the number 9

I discovered a nice gem of an insight this week in an article called The 11 Ways That Consumers Are Hopeless at Math: the symbolism of the number 9.

We’re all familiar with the convention of pricing items one penny below a round number: $1.99 instead of $2.00, $39.99 instead of $40.00, etc. Psychologically, marketers know that this works. We’re more likely to buy something at $89.99 than we are at $90.00.

It’s not, though, because we are tricked by that extra penny for our pockets. It’s because, so argues Derek Thompson, the .99 symbolizes “discount.” It is more than just a number, it has a meaning. It now says to us not just 9, but also You are getting a deal. It doesn’t matter if it’s a carton of eggs for $2.99 or a dishwasher for $299.99. In both cases, putting two 9s at the end makes us feel like smart shoppers.

To bring this point home, in those moments when we’re not looking for a deal, the number 9 has the opposite effect. When marketers want to sell a “luxury” item, they generally don’t use the 9s. They simply state the round number price. The whole point of buying a luxury item is to spend a lot of money because you have the money to spend. It shouldn’t feel like a deal; it should feel like an indulgence. Thompson uses the example of lobster at a high-end restaurant. They don’t sell it to you for $99.99. That looks cheap. They ask you for the $100. And, if you’ve got the money and you’re in the mood, it feels good exactly in part because there are no 9s.

Definitely no 9s:

Photo by artjour street art flickr creative commons.

Not yet convinced? Consider as an example this price tag for a flat screen television. Originally priced at $2,300.00, but discounted to $1,999.99. Suddenly on sale and a whole lot of 9s:

Photo by Paul Swansen flickr creative commons; cropped.
Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianSjoerd Simons: Debian armhf VM on arm64 server

At Collabora one of the many things we do is build Debian derivatives/overlays for customers on a variety of architectures including 32 bit and 64 bit ARM systems. And just as Debian does, our OBS system builds on native systems rather than emulators.

Luckily with the advent of ARM server systems some years ago building natively for those systems has been a lot less painful than it used to be. For 32 bit ARM we've been relying on Calxeda blade servers, however Calxeda unfortunately tanked ages ago and the hardware is starting to show its age (though luckily Debian Stretch does support it properly, so at least the software is still fresh).

On the 64 bit ARM side, we're running on Gigabyte MP30-AR1 based servers which can run 32 bit arm code (As opposed to e.g. ThunderX based servers which can only run 64 bit code). As such running armhf VMs on them to act as build slaves seems a good choice, but setting that up is a bit more involved than it might appear.

The first pitfall is that there is no standard bootloader or boot firmware available in Debian to boot on the "virt" machine emulated by qemu (I didn't want to use an emulation of a real machine). That also means there is nothing to pick the kernel inside the guest at boot time, nor anything that can e.g. make the guest network-boot, which means direct kernel booting needs to be used.

The second pitfall was that the current Debian Stretch armhf kernel isn't built with support for the generic PCI host controller which the qemu virtual machine exposes, which means no storage and no network show up in the guest. Hopefully that will get solved soonish (Debian bug 864726) and can be in a Stretch update; until then a custom kernel package built with the patch attached to the bug report is required, but I won't go into that any further in this post.

So on the happy assumption that we have a kernel that works, the challenge left is to nicely manage direct kernel loading. Or more specifically, how to ensure the host boots the kernel the guest has installed via the standard apt tools, without having to copy kernels around between guest and host; which essentially comes down to exposing /boot from the guest to the host. The solution we picked is to use qemu's 9pfs support to share a folder from the host and use that as /boot of the guest. For the 9p folder the "mapped" security mode seems needed, as the "none" mode seems to get confused by dpkg (Debian bug 864718).

As we're using libvirt as our virtual machine manager the remainder of how to glue it all together will be mostly specific to that.

The first step is to install the system, mostly as normal. One can directly boot into the vmlinuz and initrd.gz provided by the normal Stretch armhf netboot installer (downloaded into e.g. /tmp). The setup overall is straightforward with a few small tweaks:

  • /srv/armhf-vm-boot is set up to be the 9p shared folder (this should exist and be owned by the libvirt-qemu user) that will be used for sharing /boot later
  • the kernel args are set up to point root= at the root partition intended to be used in the VM; adjust for your usage.
  • The image file is set to use the virtio bus, which doesn't seem to be the default.

Apart from those tweaks the resulting example command is similar to the one that can be found in the virt-install man-page:

virt-install --name armhf-vm --arch armv7l --memory 512 \
  --disk /srv/armhf-vm.img,bus=virtio \
  --filesystem /srv/armhf-vm-boot,virtio-boot,mode=mapped \
  --boot=kernel=/tmp/vmlinuz,initrd=/tmp/initrd.gz,kernel_args="console=ttyAMA0,root=/dev/vda1"

Run through the install as you normally would. Towards the end the installer will likely complain it can't figure out how to install a bootloader, which is fine. Just before ending the install/reboot, switch to the shell and copy the /boot/vmlinuz and /boot/initrd.img from the target system to the host in some fashion (e.g. chroot into /target and use scp from the installed system). This is required as the installer doesn't support 9p; to boot the system an initramfs with the modules needed to mount the root fs is required, and that is provided by the installed initramfs :). Once that's all moved around, the installer can be finished.

Next, booting the installed system. For that, adjust the libvirt config (e.g. using virsh edit and tuning the xml) to use the kernel and initramfs copied from the installer rather than the installer ones. Spool up the VM again and it should happily boot into a freshly installed Debian system.

To finalize on the guest side, /boot should be moved onto the shared 9pfs; the fstab entry for the new /boot should look something like:

virtio-boot /boot  9p trans=virtio,version=9p2000.L,x-systemd.automount 0 0

With that setup, it's just a matter of shuffling the files in /boot around to the new filesystem and the guest is done (make sure vmlinuz/initrd.img stay symlinks). Kernel upgrades will work as normal and be visible to the host.
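
A minimal sketch of that shuffle inside the guest, assuming the fstab entry above is already in place (double-check the symlinks before removing anything):

mv /boot /boot.orig
mkdir /boot
mount /boot
cp -a /boot.orig/. /boot/
ls -l /boot/vmlinuz /boot/initrd.img   # should still be symlinks
rm -rf /boot.orig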

Now on the host side there is one extra hoop to jump through: as the guest uses the 9p mapped security model, symlinks in the guest will be normal files on the host containing the symlink target. To resolve that, we've used libvirt's qemu hook support to set up a proper symlink before the guest is started. Below is the script we ended up using as an example (/etc/libvirt/hooks/qemu):

#!/bin/sh
# libvirt qemu hook: recreate the virtio-vmlinuz/virtio-initrd.img symlinks
# before a guest starts. With the 9p "mapped" security model the guest's
# symlinks show up on the host as plain files containing the link target,
# so we read those files and turn them back into real symlinks.

vm=$1
action=$2
bootdir=/srv/${vm}-boot

# Only act when the guest is about to be started.
if [ "${action}" != "prepare" ] ; then
  exit 0
fi

# Skip guests that don't have a shared boot directory.
if [ ! -d "${bootdir}" ] ; then
  exit 0
fi

ln -sf $(basename $(cat ${bootdir}/vmlinuz))  ${bootdir}/virtio-vmlinuz
ln -sf $(basename $(cat ${bootdir}/initrd.img))  ${bootdir}/virtio-initrd.img

With that in place, we can simply point the libvirt definition to use /srv/${vm}-boot/virtio-{vmlinuz,initrd.img} as the kernel/initramfs for the machine and it'll automatically get the latest kernel/initramfs as installed by the guest when the VM is started.
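
For reference, a rough sketch of the corresponding direct kernel boot settings in the domain XML (edited via virsh edit; the cmdline value is an assumption carried over from the install):

virsh edit armhf-vm
# inside the <os> element:
#   <kernel>/srv/armhf-vm-boot/virtio-vmlinuz</kernel>
#   <initrd>/srv/armhf-vm-boot/virtio-initrd.img</initrd>
#   <cmdline>console=ttyAMA0 root=/dev/vda1</cmdline>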

Just one final rough edge remains: when doing a reboot from inside the VM, libvirt leaves qemu to handle that rather than restarting qemu. This unfortunately means a reboot won't pick up a new kernel if there is one; for now we've solved this by configuring libvirt to stop the VM on reboot instead. As we typically only reboot VMs on kernel (security) upgrades this is a bit tedious, but it avoids rebooting with an older kernel/initramfs than intended.
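
That "stop on reboot" behaviour maps to libvirt's on_reboot setting in the domain XML; as a sketch:

virsh edit armhf-vm
# at the top level of the <domain> definition:
#   <on_reboot>destroy</on_reboot>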

Planet DebianSjoerd Simons: Debian Jessie on Raspberry Pi 2

Apart from being somewhat slow, one of the downsides of the original Raspberry Pi SoC was that it had an old ARM11 core which implements the ARMv6 architecture. This was particularly unfortunate as most common distributions (Debian, Ubuntu, Fedora, etc) standardized on the ARMv7-A architecture as a minimum for their ARM hardfloat ports. Which is one of the reasons for Raspbian and the various other RPI specific distributions.

Happily, with the new Raspberry Pi 2 using Cortex-A7 cores (which implement the ARMv7-A architecture) this issue is out of the way, which means that a standard Debian hardfloat userland will run just fine. So the obvious first thing to do when an RPI 2 appeared on my desk was to put together a quick Debian Jessie image for it.

The result of which can be found at: https://images.collabora.co.uk/rpi2/

Log in as root with password debian (obviously do change the password and create a normal user after booting). The image is 3G, so it should fit on any SD card marketed as 4G or bigger. Using bmap-tools for flashing is recommended, otherwise you'll be waiting for 2.5G of zeros to be written to the card, which tends to be rather boring. Note that the image is really basic and will just get you to a login prompt on either serial or hdmi; batteries are very much not included, but can be apt-getted :).
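
A hedged example of the bmap-tools route (the image filename, the matching .bmap file and the SD card device are all assumptions; triple-check the device before writing to it):

apt-get install bmap-tools
bmaptool copy --bmap jessie-rpi2.img.bmap jessie-rpi2.img.gz /dev/mmcblk0
# without a .bmap file: bmaptool copy --nobmap jessie-rpi2.img.gz /dev/mmcblk0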

Technically, this image is simply a Debian Jessie debootstrap with a few extra packages for hardware support. Unlike Raspbian, the first partition (which contains the firmware & kernel files to boot the system) is mounted on /boot/firmware rather than on /boot. This is because the VideoCore expects the first partition to be a FAT filesystem, but mounting FAT on /boot really doesn't work right on Debian systems as it contains files managed by dpkg (e.g. the kernel package) which requires a POSIX compatible filesystem. Essentially the same reason why Debian is using /boot/efi for the ESP partition on Intel systems rather than mounting it on /boot directly.

For reference, the RPI2 specific packages in this image are from https://repositories.collabora.co.uk/debian/ in the jessie distribution and rpi2 component (this repository is enabled by default on the image). The relevant packages there are:

  • linux: Current 3.18 based package from Debian experimental (3.18.5-1~exp1 at the time of this writing) with a stack of patches on top from the raspberrypi github repository and tweaked to build an rpi2 flavour as the patchset isn't multiplatform capable :(
  • raspberrypi-firmware-nokernel: Firmware package and misc libraries packages taken from Raspbian, with a slight tweak to install in /boot/firmware rather than /boot.
  • flash-kernel: Current flash-kernel package from debian experimental, with a small addition to detect the RPI 2 and "flash" the kernel to /boot/firmware/kernel7.img (which is what the GPU will try to boot on this board).

For the future, it would be nice to see the Raspberry Pi 2 supported out of the box on Debian. For that to happen, the most important thing would be to have some mainline kernel support for this board (supporting multiplatform!) so it can be built as part of Debian's armmp kernel flavour. And ideally, having the firmware load a bootloader (such as u-boot) rather than a kernel directly, to allow for a much more flexible boot sequence and support for using an initramfs (u-boot has some support for the original Raspberry Pi, so adding Raspberry Pi 2 support should hopefully not be too tricky).

Update: An updated image (20150705) is available with the latest packages from Jessie and a GPG key that's not expired :).

Planet DebianMike Gabriel: Ayatana Indicators

In the near future various upstream projects related to the Ubuntu desktop experience as we have known it so far may become only sporadically maintained or even fully unmaintained. Ubuntu will switch to the Gnome desktop environment with 18.04 LTS as its default desktop, maybe even earlier. The Application Indicators [1] brought into being by Canonical Ltd. will not be needed in Gnome (AFAIK) any more. We can expect the Application Indicator related projects to become unmaintained upstream. (In fact I have recently been offered continuation of upstream maintenance of libdbusmenu.)

Historical Background

This all started at the Ubuntu Developer Summit 2012, when Canonical Ltd. announced Ubuntu as the successor of Windows XP in business offices. The Unity Greeter received a Remote Login Service enhancement: since then it supports Remote Login to Windows Terminal Servers. The question came up why Remote Login to Linux servers--maybe even Ubuntu machines--was not on the agenda. It turned out that it wasn't even a discussable topic. At that time, I started looking into the Unity Greeter code, adding support for X2Go Logon into Unity Greeter. I never really stopped looking at the greeter code from time to time.

Since then, it turned into some sort of a hobby... While looking into the Unity Greeter code over the past years and actually forking Unity Greeter as Arctica Greeter [2] in September 2015, I also started looking into the Application Indicators concept just recently. And I must say, the more I have been looking into it, the more I have started liking the concept behind Application Indicators. The basic idea is awesome. However, lately all indicators became more and more Ubuntu-centric and IMHO too polluted by code related to the just declared dead Ubuntu phablet project.

Forking Application Indicators

Saying all this, I recently forked Application Indicators as Ayatana Indicators. At the moment I am both upstream and Debian package maintainer in one person. Ideally, this is only temporary and more people will join in. (I heard some Unity 7 maintainers are thinking about switching to Ayatana Indicators for the now community-maintained Unity 7.) The goal is to provide Ayatana Indicators generically to all desktop environments that want to use them, either as default or optionally. Release-wise, the idea is to strictly differentiate between upstream and Debian downstream in the release cycles of the various related components.

I hope no one is too concerned about the choice of name, as the "Ayatana" word was actually first used for upstream efforts inside Ubuntu [3]. Using the Ayatana term for the indicator forks is meant as honouring the previously undertaken efforts. I have seen very good work so far while going through the indicators' code. The upstream code must not be distro-specific, but, of course, can be distro-aware.

Contributions Welcome

The Ayatana Indicators upstream project components are currently hosted on Github under the umbrella of the Arctica Project. Regarding Debian, first uploads have recently been accepted to Debian experimental. The Debian packages are maintained under the umbrella of the revived Ayatana Packagers team [4].

Meet you at the Ayatana Indicators BoF at DebConf 17 (hopefully)

For DebConf 17 (yeah, I am going there, if all plans work out well!!!!) I have submitted a BoF on this topic (let's hope, it gets accepted...). I'd like to give a quick overview on the current status of above named efforts and reasonings behind my commitment to the work. Most of the time during that BoF I would like to get into discussion with desktop maintainers, possibly upstream developers, Ubuntu developers, etc. Anyone who sees an asset in the Indicators approach is welcome to share and contribute.

References

Planet DebianNicolas Dandrimont: DebConf 17 bursaries: update your status now!

TL;DR: if you applied for a DebConf 17 travel bursary, and you haven’t accepted it yet, login to the DebConf website and update your status before June 20th or your bursary grant will be gone.

*blows dust off the blog*

As you might be aware, DebConf 17 is coming soon and it’s gonna be the biggest DebConf in Montréal ever.

Of course, what makes DebConf great is the people who come together to work on Debian, share their achievements, and help draft our cunning plans to take over the world. Also cheese. Lots and lots of cheese.

To that end, the DebConf team had initially budgeted US$40,000 for travel grants ($30,000 for contributors, $10,000 for diversity and inclusion grants), allowing the bursaries team to bring people from all around the world who couldn’t have made it to the conference.

Our team of volunteers rated the 188 applications, we’ve made a ranking (technically, two rankings: one on contribution grounds and one on D&I grounds), and we finally sent out a first round of grants last week.

After the first round, the team made a new budget assessment, and thanks to the support of our outstanding sponsors, an extra $15,000 has been allocated for travel stipends during this week’s team meeting, with the blessing of the DPL.

We’ve therefore been able to send a second round of grants today.

Now, if you got a grant, you have two things to do: you need to accept your grant, and you need to update your requested amount. Both of those steps allow us to use our budget more wisely: having grants expire frees money up to get more people to the conference earlier. Having updated amounts gives us a better view of our overall budget. (You can only lower your requested amount, as we can’t inflate our budget)

Our system has sent mails to everyone, but it’s easy enough to let that email slip (or to not receive it for some reason). It takes 30 seconds to look at the status of your request on the DebConf 17 website, and even less to do the few clicks needed for you to accept the grant. Please do so now! OK, it might take a few minutes if your SSO certificate has expired and you have to look up the docs to renew it.

The deadline for the first round of travel grants (which went out last week) is June 20th. The deadline for the second round (which went out today) is June 24th. If somehow you can’t login to the website before the deadline, the bursaries team has an email address you can use.

We want to send out a third round of grants on June 25th, using the money people freed up: our current acceptance ratio is around 40%, and a lot of very strong applications have been deferred. We don’t want them to wait up until July to get a definitive answer, so thanks for helping us!

À bientôt à Montréal !

Worse Than FailureClassic WTF: The Accidental Hire

At least we get a summer break, I suppose. Not like over at Doghouse Insurance. Original -- Remy

Doghouse Insurance (as we'll call them) was not a pleasant place to work. Despite being a very successful player in their industry, the atmosphere inside Doghouse was filled with a constant, frenzied panic. If Joe Developer didn't delay his upcoming vacation and put in those weekend hours, he might risk the timely delivery of his team's module, which might risk delaying the entire project, which might risk the company's earnings potential, which might risk the collapse of the global economy. And that's just for the Employee Password Change Webpage project; I can't even begin to fathom the overarching devastation that would ensue from a delayed critical project.

To make matters worse, the primary business application that poor souls like Vinny maintained was a complete nightmare. It was developed during the company's "database simplification" era and consisted of hundreds of different "virtual attribute tables" stuffed into four real tables; it was a classic case of The Inner-Platform Effect. But amidst all this gloom and despair was an upbeat fellow named Chris who accidentally became a part of the Doghouse Insurance team.

Chris interviewed with Doghouse Insurance back in 2002 for a developer position on the Data Warehouse team. With the large pool of available candidates at the time, Chris didn't make the cut and the opening was awarded to someone else. However, Doghouse never communicated this to him and instead offered him a job.

It was an awkward first day; Chris showed up and no one knew what to do with him. They obviously couldn't immediately fire him (it would risk a lawsuit, which might risk a -- oh, you know the drill) and, since all teams were short-staffed, they couldn't increase the headcount on one team because that would be unfair to all of the other managers. After a few weeks, it was finally decided: Chris would be the Source Control Guy.

Doghouse Insurance didn't really have a need for a Source Control Guy and Chris didn't really have any experience being a Source Control Guy. It was a perfect match. After a few months, Chris figured out how to manage Doghouse's Source Control System and became responsible for the entirety of SCS-related tasks: adding new users, resetting forgotten passwords, creating new repositories, and -- well -- that was pretty much it.

While everyone else stressed out and practically killed themselves over deadlines, Chris mostly sat around all day, waiting for that occasional "I forgot my source control password" email. He never gloated nor complained and instead made himself available to listen to his coworkers' grievances and tales of woe. Chris would offer up whatever advice he could and would generally lighten the mood of anyone who stopped by his desk for a chat. His cubicle became the sole oasis of sanity in the frantic world of Doghouse Insurance.

Although Vinny is no longer at Doghouse Insurance (he actually left after following Chris' advice), he still does keep in touch with Chris. And Vinny is happy to report that, should you ever find yourself unfortunate enough to work at Doghouse Insurance, you can still find Chris there, managing the Source Control System and eager to chat about the insanity that is Doghouse.

[Advertisement] Scale your release pipelines, creating secure, reliable, reusable deployments with one click. Download and learn more today!

Planet DebianDirk Eddelbuettel: #7: C++14, R and Travis -- A useful hack

Welcome to the seventh post in the rarely relevant R ramblings series, or R4 for short.

We took a short break as several conferences and other events interfered during the month of May, keeping us busy and away from this series. But we are back now with a short and useful hack I came up with this weekend.

The topic is C++14, i.e. the newest formally approved language standard for C++, and its support in R and on Travis CI. With the release of R 3.4.0 a few weeks ago, R now formally supports C++14. Which is great.

But there be devils. A little known fact is that R hangs on to its configuration settings from its own compile time. That matters in cases such as the one we are looking at here: Travis CI. Travis is a tremendously useful and widely-deployed service, most commonly connected to GitHub driving "continuous integration" (the 'CI') testing after each commit. But Travis CI, for as useful as it is, is also maddeningly conservative, still forcing everybody to live and die by Ubuntu 14.04 (http://releases.ubuntu.com/14.04/). So while we all benefit from the fine work by Michael who faithfully provides Ubuntu binaries for distribution via CRAN (based on the Debian builds provided by yours truly), we are stuck with Ubuntu 14.04. Which means that while Michael can provide us with current R 3.4.0, it will be built on ancient Ubuntu 14.04.

Why does this matter, you ask? Well, if you just try to turn the very C++14 support added to R 3.4.0 on in the binary running on Travis, you get this error:

** libs
Error in .shlib_internal(args) : 
  C++14 standard requested but CXX14 is not defined

And you get it whether or not you define CXX14 in the session.

So R (in version 3.4.0) may want to use C++14 (because a package we submitted requests it), but having been built on the dreaded Ubuntu 14.04, it just can't oblige. Even when we supply a newer compiler. Because R hangs on to its compile-time settings rather than current environment variables. And that means no C++14 as its compile-time compiler was too ancient. Trust me, I tried: adding not only g++-6 (from a suitable repo) but also adding C++14 as the value for CXX_STD. Alas, no mas.

The trick to overcome this is twofold, and fairly straightforward. First off, we just rely on the fact that g++ version 6 defaults to C++14. So by supplying g++-6, we are in the green. We have C++14 by default without requiring extra options. Sweet.
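
One way to supply g++-6 on Travis and have R pick it up, as a sketch (the PPA and the ~/.R/Makevars override are assumptions about one workable setup, not necessarily what was used here):

sudo add-apt-repository -y ppa:ubuntu-toolchain-r/test
sudo apt-get update -qq
sudo apt-get install -y g++-6
mkdir -p ~/.R
echo "CXX = g++-6" >> ~/.R/Makevars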

The remainder is to tell R to not try to enable C++14 even though we are using it. How? By removing CXX_STD=C++14 on the fly and just for Travis. And this can be done easily with a small configure script which conditions on being on Travis by checking two environment variables:

#!/bin/bash

## Travis can let us run R 3.4.0 (from CRAN and the PPAs) but this R version
## does not know about C++14.  Even though we can select CXX_STD = C++14, R
## will fail as the version we use there was built in too old an environment,
## namely Ubuntu "trusty" 14.04.
##
## So we install g++-6 from another repo and rely on the fact that it
## defaults to C++14.  Sadly, we need R to not fail and hence, just on
## Travis, remove the C++14 instruction

if [[ "${CI}" == "true" ]]; then
    if [[ "${TRAVIS}" == "true" ]]; then 
        echo "** Overriding src/Makevars and removing C++14 on Travis only"
        sed -i 's|CXX_STD = CXX14||' src/Makevars
    fi
fi

I have deployed this now for two sets of builds in two distinct repositories for two "under-development" packages not yet on CRAN, and it just works. In case you turn on C++14 via SystemRequirements: in the file DESCRIPTION, the script needs to modify that file instead; a hypothetical sketch follows.
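
A hypothetical variant of the same Travis-only tweak for that case (assuming the field holds nothing but the C++14 entry; adjust the pattern otherwise):

if [[ "${CI}" == "true" ]] && [[ "${TRAVIS}" == "true" ]]; then
    echo "** Removing the C++14 SystemRequirements on Travis only"
    sed -i '/^SystemRequirements: C++14$/d' DESCRIPTION
fi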

So to sum up, there it is: C++14 with R 3.4.0 on Travis. Only takes a quick Travis-only modification.

,

Planet DebianReproducible builds folks: Reproducible Builds: week 111 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 4 and Saturday June 10 2017:

Past and upcoming events

On June 10th, Chris Lamb presented at the Hong Kong Open Source Conference 2017 on reproducible builds.

Patches and bugs filed

Reviews of unreproducible packages

7 package reviews have been added, 10 have been updated and 14 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (4)
  • Chris Lamb (1)
  • Christoph Biedl (1)
  • Niko Tyni (1)

Two FTBFS issues of LEDE (exposed in our setup) were found and were fixed:

diffoscope development

  • Chris Lamb: Some code style improvements

tests.reproducible-builds.org:

Alexander 'lynxis' Couzens made some changes for testing LEDE and OpenWrt:

  • Build tar before downloading everything: On systems without tar --sort=name we need to compile tar before downloading everything
  • Set CONFIG_AUTOREMOVE to reduce required space
  • Create a workaround for signing keys: LEDE signs the release with a signing key, but generates the signing key if it's not present. To have a reproducible release we need to take care of signing keys.
  • openwrt_get_banner(): use staging_dir instead of build_dir because the former is persistent among the two builds.
  • Don't build all packages to improve development speed for now.
  • Only build one board instead of all boards. Reducing the build time improves developing speed. Once the image is reproducible we will enable more boards.
  • Disable node_cleanup_tmpdirs

Hans-Christoph Steiner, for testing F-Droid:

  • Do full git reset/clean like Jenkins does
  • hard code WORKSPACE dir names, as WORKSPACE cannot be generated from $0 as it's a temporary name.

Daniel Shahaf, for testing Debian:

  • Remote scheduler:
    • English fix to error message.
    • Allow multiple architectures in one invocation.
    • Refactor: Break out a helper function. Rename variable to disambiguate with scheduling_args.message.
  • Include timestamps in logs
  • Set timestamps to second resolution (was millisecond by default).

Holger 'h01ger' Levsen, for testing Debian:

  • Improvements to the breakages page:
    • List broken packages and diffoscope problems first, and t.r-b.o problems last.
    • Reword, drop 'caused by'.
  • Add niceness to our list of variations, running with niceness of 11 for the first build and niceness of 10 for the second one. Thanks to Vagrant for the idea.
  • Automatic scheduler:
    • Reschedule after 12h packages that failed with error 404
    • Run scheduler every 3h instead of every 6h
  • Add basic README about the infrastructure and merge Vagrants notes about his console host.

Misc.

This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityMicrosoft, Adobe Ship Critical Fixes

Microsoft today released security updates to fix almost a hundred flaws in its various Windows operating systems and related software. One bug is so serious that Microsoft is issuing patches for it on Windows XP and other operating systems the company no longer officially supports. Separately, Adobe has pushed critical updates for its Flash and Shockwave players, two programs most users would probably be better off without.

According to security firm Qualys, 27 of the 94 security holes Microsoft patches with today’s release can be exploited remotely by malware or miscreants to seize complete control over vulnerable systems with little or no interaction on the part of the user.

Microsoft this month is fixing another serious flaw (CVE-2017-8543) present in most versions of Windows that resides in the feature of the operating system which handles file and printer sharing (also known as “Server Message Block” or the SMB service).

SMB vulnerabilities can be extremely dangerous if left unpatched on a local (internal) corporate network. That’s because a single piece of malware that exploits this SMB flaw within a network could be used to replicate itself to all vulnerable systems very quickly.

It is this very “wormlike” capability — a flaw in Microsoft’s SMB service — that was harnessed for spreading by WannaCry, the global ransomware contagion last month that held files for ransom at countless organizations and shut down at least 16 hospitals in the United Kingdom.

According to Microsoft, this newer SMB flaw is already being exploited in the wild. The vulnerability affects Windows Server 2016, 2012, 2008 as well as desktop systems like Windows 10, 7 and 8.1.

The SMB flaw — like the one that WannaCry leveraged — also affects older, unsupported versions of Windows such as Windows XP and Windows Server 2003. And, as with that SMB flaw, Microsoft has made the unusual decision to make fixes for this newer SMB bug available for those older versions. Users running XP or Server 2003 can get the update for this flaw here.

“Our decision today to release these security updates for platforms not in extended support should not be viewed as a departure from our standard servicing policies,” wrote Eric Doerr, general manager of Microsoft’s Security Response Center.

“Based on an assessment of the current threat landscape by our security engineers, we made the decision to make updates available more broadly,” Doerr wrote. “As always, we recommend customers upgrade to the latest platforms. The best protection is to be on a modern, up-to-date system that incorporates the latest defense-in-depth innovations. Older systems, even if fully up-to-date, lack the latest security features and advancements.”

The default browsers on Windows — Internet Explorer or Edge — get their usual slew of updates this month for many of these critical, remotely exploitable bugs. Qualys says organizations using Microsoft Outlook should pay special attention to a newly patched bug in the popular mail program because attackers can send malicious email and take complete control over the recipient’s Windows machine when users merely view a specially crafted email in Outlook.

Separately, Adobe has issued updates to fix critical security problems with both its Flash Player and Shockwave Player. If you have Shockwave installed, please consider removing it now.

For starters, hardly any sites require this plugin to view content. More importantly, Adobe has a history of patching Shockwave’s built-in version of Flash several versions behind the stand-alone Flash plugin version. As a result Shockwave has been a high security risk to have installed for many years now. For more on this trend, see Why You Should Ditch Adobe Shockwave.

Same goes for Adobe Flash Player, which probably most users can get by with these days just enabling it in the rare instance that it’s required. I recommend for users who have an affirmative need for Flash to leave it disabled until that need arises. Otherwise, get rid of it.

Adobe patches dangerous new Flash flaws all the time, and Flash bugs are still the most frequently exploited by exploit kits — malware booby traps that get stitched into the fabric of hacked and malicious Web sites so that visiting browsers running vulnerable versions of Flash get automatically seeded with malware.

For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today to version 26.0.0.126. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

Chrome and IE should auto-install the latest Flash version on browser restart (users may need to manually check for updates and/or restart the browser to get the latest Flash version). Chrome users may need to restart the browser to install or automatically download the latest version. When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then.

As always, if you experience any issues downloading or installing any of these updates, please leave a note about it in the comments below.

Update, May 16, 10:38 a.m. ET: Microsoft has revised its bulletin on the vulnerability for which it issued Windows XP fixes (CVE-2017-8543) to clarify that the problem fixed by the patch is in the Windows Search service, not the SMB service as Microsoft previously stated in the bulletin. The original bulletin from Microsoft’s Security Response Center incorrectly stated that SMB was part of this vulnerability: rather, it has nothing to do with this vulnerability and was not patched. The vulnerability is in Windows Search only. I’m mentioning it here because a Windows user or admin thinking that turning off SMB or blocking SMB would stop all vectors to this attack would be wrong and still vulnerable without the patch. All an attacker needs to do is get some code to talk to Windows Search in a malformed way – even locally – to exploit this Windows Search flaw.

Planet DebianJonathan Wiltshire: What to expect on Debian release day

Nearly two years ago I wrote about what to expect on Jessie release day. Shockingly enough, the process for Stretch to be released should be almost identical.

TEDSneak preview lineup unveiled for Africa’s next TED Conference

On August 27, an extraordinary group of people will gather in Arusha, Tanzania, for TEDGlobal 2017, a four-day TED Conference for “those with a genuine interest in the betterment of the continent,” says curator Emeka Okafor.

As Okafor puts it: “Africa has an opportunity to reframe the future of work, cultural production, entrepreneurship, agribusiness. We are witnessing the emergence of new educational and civic models. But there is, on the flip side, a set of looming challenges that include the youth bulge and under-/unemployment, a food crisis, a risky dependency on commodities, slow industrializations, fledgling and fragile political systems. There is a need for a greater sense of urgency.”

He hopes the speakers at TEDGlobal will catalyze discussion around “the need to recognize and amplify solutions from within Africa and the global diaspora.”

Who are these TED speakers? A group of people with “fresh, unique perspectives in their initiatives, pronouncements and work,” Okafor says. “Doers as well as thinkers — and contrarians in some cases.” The curation team, which includes TED head curator Chris Anderson, went looking for speakers who take “a hands-on approach to solution implementation, with global-level thinking.”

Here’s the first sneak preview — a shortlist of speakers who, taken together, give a sense of the breadth and topics to expect, from tech to the arts to committed activism and leadership. Look for the long list of 35–40 speakers in upcoming weeks.

The TEDGlobal 2017 conference happens August 27–30, 2017, in Arusha, Tanzania. Apply to attend >>

Kamau Gachigi, Maker

“In five to ten years, Kenya will truly have a national innovation system, i.e. a system that by its design audits its population for talented makers and engineers and ensures that their skills become a boon to the economy and society.” — Kamau Gachigi on Engineering for Change

Dr. Kamau Gachigi is the executive director of Gearbox, Kenya’s first open makerspace for rapid prototyping, based in Nairobi. Before establishing Gearbox, Gachigi headed the University of Nairobi’s Science and Technology Park, where he founded a Fab Lab full of manufacturing and prototyping tools in 2009, then built another one at the Riruta Satellite in an impoverished neighborhood in the city. At Gearbox, he empowers Kenya’s next generation of creators to build their visions. @kamaufablab

Mohammed Dewji, Business leader

“My vision is to facilitate the development of a poverty-free Tanzania. A future where the opportunities for Tanzanians are limitless.” — Mohammed Dewji

Mohammed Dewji is a Tanzanian businessman, entrepreneur, philanthropist, and former politician. He serves as the President and CEO of MeTL Group, a Tanzanian conglomerate operating in 11 African countries. The Group operates in areas as diverse as trading, agriculture, manufacturing, energy and petroleum, financial services, mobile telephony, infrastructure and real estate, transport, logistics and distribution. He served as Member of Parliament for Singida-Urban from 2005 until his retirement in 2015. Dewji is also the Founder and Trustee of the Mo Dewji Foundation, focused on health, education and community development across Tanzania. @moodewji

Meron Estefanos, Refugee activist

“Q: What’s a project you would like to move forward at TEDGlobal?
A: Bringing change to Eritrea.” —Meron Estefanos

Meron Estefanos is an Eritrean human rights activist, and the host and presenter of Radio Erena’s weekly program “Voices of Eritrean Refugees,” aired from Paris. Estefanos is executive director of the Eritrean Initiative on Refugee Rights (EIRR), advocating for the rights of Eritrean refugees, victims of trafficking, and victims of torture. Ms Estefanos has been key in identifying victims throughout the world who have been blackmailed to pay ransom for kidnapped family members, and was a key witness in the first trial in Europe to target such blackmailers. She is co-author of Human Trafficking in the Sinai: Refugees between Life and Death and The Human Trafficking Cycle: Sinai and Beyond, and was featured in the film Sound of Torture. She was nominated for the 2014 Raoul Wallenberg Award for her work on human rights and victims of trafficking. @meronina

Touria El Glaoui, Art fair founder

“I’m looking forward to discussing the roles we play as leaders and tributaries in redressing disparities within arts ecosystems. The art fair is one model which has had a direct effect on the ways in which audiences engage with art, and its global outlook has contributed to a highly mobile and dynamic means of interaction.” — Touria El Glaoui

Touria El Glaoui is the founding director of the 1:54 Contemporary African Art Fair, which takes place in London and New York every year and, in 2018, launches in Marrakech. The fair highlights work from artists and galleries across Africa and the diaspora, bringing visibility in global art markets to vital upcoming visions. El Glaoui began her career in the banking industry before founding 1:54 in 2013. Parallel to her career, Touria has organised and co-curated exhibitions of her father’s work, the Moroccan artist Hassan El Glaoui, in London and Morocco. @154artfair

Gus Casely-Hayford, Historian

“Technological, demographic, economic and environmental change are recasting the world profoundly and rapidly. The sentiment that we are traveling through unprecedented times has left many feeling deeply unsettled, but there may well be lessons to learn from history — particularly African history — lessons that show how brilliant leadership and strategic intervention have galvanised and united peoples around inspirational ideas.” — Gus Casely-Hayford

Dr. Gus Casely-Hayford is a curator and cultural historian who writes, lectures and broadcasts widely on African culture. He has presented two series of The Lost Kingdoms of Africa for the BBC and has lectured widely on African art and culture, advising national and international bodies on heritage and culture. He is currently developing a National Portrait Gallery exhibition that will tell the story of abolition of slavery through 18th- and 19th-century portraits — an opportunity to bring many of the most important paintings of black figures together in Britain for the first time.

Oshiorenoya Agabi, Computational neuroscientist

“Koniku eventually aims to build a device that is capable of thinking in the biological sense, like a human being. We think we can do this in the next two to five years.” — Oshiorenoya Agabi on IndieBio.co

With his startup Koniku, Oshiorenoya Agabi is working to integrate biological neurons and silicon computer chips, to build computers that can think like humans can. Faster, cleverer computer chips are key to solving the next big batch of computing problems, like particle detection or sophisticated climate modeling — and to get there, we need to move beyond the limitations of silicon, Agabi believes. Born and raised in Lagos, Nigeria, Agabi is now based in the SF Bay Area, where he and his lab mates are working on the puzzle of connecting silicon to biological systems.

Natsai Audrey Chieza, Design researcher

Photo: Natsai Audrey Chieza

Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. She is founder and creative director of Faber Futures, a creative R&D studio that conceptualises, prototypes and evaluates the resilience of biomaterials emerging through the convergence of bio-fabrication, digital fabrication and traditional craft processes. As Resident Designer at the Department of Biochemical Engineering, University College London, she established a design-led microbiology protocol that replaces synthetic pigments with natural dyes excreted by bacteria — producing silk scarves dyed brilliant blues, reds and pinks. The process demands a rethink of the entire system of fashion and textile production — and is also a way to examine issues like resource scarcity, provenance and cultural specificity. @natsaiaudrey

Stay tuned for more amazing speakers, including leaders, creators, and more than a few truth-tellers … learn more >>


CryptogramSecurity Flaws in 4G VoLTE

Research paper: "Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone," by Patrick Ventuzelo, Olivier Le Moal, and Thomas Coudray.

Abstract: VoLTE (Voice over LTE) is a technology implemented by many operators over the world. Unlike previous 2G/3G technologies, VoLTE offers the possibility to use the end-to-end IP networks to handle voice communications. This technology uses VoIP (Voice over IP) standards over IMS (IP Multimedia Subsystem) networks. In this paper, we will first introduce the basics of VoLTE technology. We will then demonstrate how to use an Android phone to communicate with VoLTE networks and what normal VoLTE communications look like. Finally, we will describe different issues and implementations' problems. We will present vulnerabilities, both passive and active, and attacks that can be done using VoLTE Android smartphones to attack subscribers and operators' infrastructures. Some of these vulnerabilities are new and not previously disclosed: they may allow an attacker to silently retrieve private pieces of information on targeted subscribers, such as their geolocation.

News article. Slashdot thread.

Worse Than FailureCodeSOD: Classic WTF: It's Like Calling Assert

We continue our summer vacation with this gem -- a unique way to interact with structured exception handling, to be sure. Original. --Remy

When we go from language to language and platform to platform, a whole lot of “little things” change about how we write code: typing, syntax, error handling, etc. Good developers try to adapt to a new language by reading the documentation, asking experienced colleagues, and trying to follow best practices. “Certain Developers,” however, try to make the language adapt to their way of doing things.

Adrien Kunysz discovered the following code, written by a "Certain Developer" who wasn't a fan of the try...catch…finally approach called for in .NET/Java development and exception handling.

   /**
    * Like calling assert(false) in C.
    */
   protected final void BUG (String msg) {
       Exception e = null;
       try { throw new Exception (); } catch (Exception c) { e = c; }
       logger.fatal (msg, e);
       System.exit (1);
   }

And I’m sure that, by commenting “Like calling assert(false) in C,” the author doesn’t mean assert.h, but means my_assert.h. After all, who is C – or any other language – to tell him how errors should be handled?

UPDATE: Fixed Typos and language. I swear, at 7:00AM this looked fine to me...

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 182 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change and we are thus still a little behind our objective.

The security tracker currently lists 44 packages with a known CVE and the dla-needed.txt file 42. The number of open issues is close to last month.

Thanks to our sponsors

New sponsors are in bold (none this month unfortunately).


,

Sociological Images2/3rds of sexual minorities now identify as bisexual, but it depends

Originally posted at Inequality by (Interior) Design.

I’ve been following a couple different data sets that track the size of the LGB(T) population in the United States for a few years. There’s a good amount of evidence that all points in the same direction: those identifying as lesbian, gay, bisexual, and possibly transgender too are all on the rise. Just how large of an increase is subject to a bit of disagreement, but the larger trend is undeniable. Much of the reporting on this shift treats this as a fact that equally blankets the entirety of the U.S. population (or only deals superficially with the really interesting demographic questions concerning the specific groups within the population that account for this change).

In a previous post, I separated the L’s, G’s and B’s because I suspected that more of this shift was accounted for by bisexuals than is often discussed in any critical way (*the GSS does not presently have a question that allows us to separate anyone identifying as transgender or outside the gender binary). Between 2008 and 2016, the proportion of the population identifying as lesbian or gay went from 1.6% to 2.4%. During the same period, those identifying as bisexual jumped from 1.1% to 3.3%. It’s a big shift and it’s even bigger when you look at how pronounced it is among the groups who primarily account for this change: women, people of color, and young people.

The thing about sexual identities though, is that they’re just like other kinds of meaningful identities in that they intersect with other identities in ways that produce different sorts of meanings depending upon what kinds of configurations of identities they happen to be combined with (like age, race, and gender). For instance, as a sexual identity, bisexual is more common than both lesbian and gay combined. But, bisexuality is gendered. Among women, “bisexual” is a more common sexual identity than is “lesbian”; but among men, “gay” is a more common sexual identity than “bisexual”–though this has shifted a bit over the 8 years GSS has been asking questions about sexual orientation. And so too is bisexuality a racialized identity in that the above gendered trend is more true of white and black men than men of other races.

Consider this: between 2008 and 2016, among young people (18-34 years old), those identifying as lesbian or gay went from 2.7% to 3.0%, while those identifying as “bisexual” increased twofold, from 2.6% to 5.3%.  But, look at how this more general change among young people looks when we break it down by gender.

Looked at this way, bisexuality as a sexual identity has more than doubled in recent years. Among 18-34 year old women in 2016, the GSS found 8% identifying as bisexual.  You have to be careful with GSS data once you start parsing the data too much as the sample sizes decrease substantially once we start breaking things down by more than gender and age. But, just for fun, I wanted to look into how this trend looked when we examined it among different racial groups (GSS only has codes for white, black, and other).

Here, you can see a couple things.  But one of the big stories I see is that “bisexual” identity appears to be particularly absent among Black men in the U.S. And, among young men identifying as a race other than Black or white, bisexuality is a much more common identity than is gay. It’s also true that the proportions of gay and bisexual men in each group appear to jump around year to year.  The general trend follows the larger pattern – toward more sexual minority identities.  But, it’s less straightforward than that when we actually look at the shift among a few specific racial groups within one gender.  Now, look at this trend among women.

Here, we clearly see the larger trend that “bisexual” appears to be a more common sexual identity than “lesbian.” But, look at Black women in 2016.  In 2016, just shy of one in five Black women between the ages of 18 and 34 identified as lesbian or bisexual (19%) in the GSS sample! And about two thirds of those women are identifying as bisexual (12.4%) rather than as lesbian (6.6%). Similarly, and mirroring the larger trend that “bisexual” is more common among women while “gay” is more popular among men, “lesbian” is a noticeably absent identity among women identifying as a race other than Black or white just as “gay” is less present among men identifying as a race other than Black or white.

Below is all that information in a single chart.  I felt it was a little less intuitive to read in this form. But this is the combined information from the two graphs preceding this if it’s helpful to see it in one chart.


What these shifts mean is a larger question. But it’s one that will require an intersectional lens to interpret. And this matters because bisexuality is a less-discussed sexual identification–so much so that “bi erasure” is used to address the problem of challenging the legitimacy or even existence of this sexual identity. As a sexual identification in the U.S., however, “bisexual” is actually more common than “gay” and “lesbian” identifications combined.

And yet, whether bisexual-identifying people will or do see themselves as part of a distinct sexual minority is more of an open question. All of this makes me feel that we need to consider more carefully whether to group bisexuals with lesbian women and gay men when reporting shifts in the LGB population. Whatever is done, we should care about bisexuality (particularly among women), because this is a sexual identification that is becoming much more common than is sometimes recognized.

Tristan Bridges, PhD is a professor at The College at Brockport, SUNY. He is the co-editor of Exploring Masculinities: Identity, Inequality, Continuity, and Change with C.J. Pascoe and studies gender and sexual identity and inequality. You can follow him on Twitter here. Tristan also blogs regularly at Inequality by (Interior) Design.

(View original at https://thesocietypages.org/socimages)

CryptogramHealthcare Industry Cybersecurity Report

New US government report: "Report on Improving Cybersecurity in the Health Care Industry." It's pretty scathing, but nothing in it will surprise regular readers of this blog.

It's worth reading the executive summary, and then skimming the recommendations. Recommendations are in six areas.

The Task Force identified six high-level imperatives by which to organize its recommendations and action items. The imperatives are:

  1. Define and streamline leadership, governance, and expectations for health care industry cybersecurity.

  2. Increase the security and resilience of medical devices and health IT.

  3. Develop the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities.

  4. Increase health care industry readiness through improved cybersecurity awareness and education.

  5. Identify mechanisms to protect research and development efforts and intellectual property from attacks or exposure.

  6. Improve information sharing of industry threats, weaknesses, and mitigations.

News article.

Slashdot thread.

Worse Than FailureClassic WTF: Server Room Fans and More Server Room Fun

The Daily WTF is taking a short summer break this week, and as the temperatures around here are edging up towards "Oh God I Want to Die" degrees Fahrenheit, I thought it'd be great to kick off this week of classic articles with some broiling hot server room hijinks. -- Remy

"It's that time of year again," Robert Rossegger wrote, "you know, when the underpowered air conditioner just can't cope with the non-winter weather? Fortunately, we have a solution for that... and all we need to do is just keep an extra eye on people walking near the (completely ajar) server room door."

 

"For as long as anyone can remember," Mike E wrote, "the fax machine in one particular office was a bit spotty whenever it was wet out. After having the telco test the lines from the DMARC to the office, I replaced the hardware, looked for water leaks all along the run, and found precisely nothing. The telco disavowed all responsibility, so the best solution I could offer was to tell the users affected by this to look out the window and, if raining, go to another fax machine."

"One day, we had the telco out adding a T1 and they had the cap off of the vault where our cables come in to the building. Being curious by nature, I wandered over when nobody was around and wound up taking this picture. After emailing same to the district manager of the telco, suddenly we had the truck out for an extra day (accompanied by one very sullen technician) and the fax machine worked perfectly from then on."

 

"I found this when I came back in to work after some time off," writes Sam Nicholson, "that drive is actually earmarked for 'off-site backup'. Also, this is what passes for a server rack at this particular software company. Yes, it's made of wood."

 

"Some people use 'proper electrical wiring'," writes Mike, "others use 'extension cords'. We, on the other hand, apparently do this."

 

"I was staying at a hotel in Manhattan and somehow took a wrong turn and wound up in the stairwell," wrote Dan, "not only is all their equipment in a public place (without even a door), it's mostly hanging from cables in several places."

 

"I spotted this in China," writes Matt, "This poor switch was bolted to a column in the middle of some metal shop about 4m above ground. There were many more curious things, but I decided to keep a low profile and stop taking pictures."

 

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 2, week 9

The OpenSTEM™ Understanding Our World® units have only 9 weeks per term, so this is the last week! Our youngest students are looking at some Aboriginal Places; slightly older students are thinking about what their school and local area were like when their parents and grandparents were children; and students in years 3 to 6 are completing their presentations and anything else that might be outstanding from the term.

Foundation/Prep/Kindy

Students in the stand-alone Foundation/Prep/Kindy class (Unit F.2) examine Aboriginal Places this week. Students examine which places are special to Aboriginal people, and how these places should be cared for by Aboriginal people and the broader community. Several of the Australian places in the Aunt Madge’s Suitcase Activity can be used to support this discussion in the classroom. Students in an integrated Foundation/Prep/Kindy and Year 1 class (Unit F.6), as well as Year 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) students consider life in the times of their parents and grandparents, with specific reference to their school, or the local area studied during this unit. Teachers may wish to invite older members of the community (including interested parents and/or grandparents) in to the class to describe their memories of the area in former years. Were any of them past students of the school? This is a great opportunity for students to come up with their own questions about life in past times.

Years 3 to 6

Aunt Madge

Students in Year 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are finishing off their presentations and any outstanding work this week. Sometimes the middle of term can be very rushed and so it’s always good to have some breathing space at the end to catch up on anything that might have been squeezed out before. For those classes where everyone is up-to-date and looking for extra activities, the Aunt Madge’s Suitcase Activity is always popular with students and can be used to support their learning. Teachers may wish to select a range of destinations appropriate to the work covered during the term and encourage students to think about how those destinations relate to the material covered in class. Destinations may be selected by continent or theme – e.g. natural places or historical sites. A further advantage of Aunt Madge is that the activity can be tailored to fit the available time – from 5 or 10 minutes for a single destination, to 45 minutes or more for a full selection; and played in groups, or as a whole class, allowing some students to undertake the activity while other students may be catching up on other work. Students may also wish to revisit aspects of the Ancient Sailing Ships Activity and expand on their investigations.

Although this is the last week of this term’s units, we will have some more suggestions for extra activities next week – particularly those that keep the students busy while teachers attend to marking or compiling of reports.

Don MartiApple's kangaroo cookie robot

I'm looking forward to trying "Intelligent Tracking Prevention" in Apple Safari. But first, let's watch an old TV commercial for MSN.

Today, a spam filter seems like a must-have feature for any email service. But MSN started talking about its spam filtering back when Sanford Wallace, the "Spam King," was saying stuff like this.

I have to admit that some people hate me, but I have to tell you something about hate. If sending an electronic advertisement through email warrants hate, then my answer to those people is "Get a life. Don't hate somebody for sending an advertisement through email." There are people out there that also like us.

According to spammers, spam filtering was just Internet nerds complaining about something that regular users actually like. But the spam debate ended when big online services, starting with MSN, started talking about how they build for their real users instead of for Wallace's hypothetical spam-loving users.

If you missed the email spam debate, don't worry. Wallace's talking points about spam filters constantly get recycled by surveillance marketers talking about tracking protection. But now it's not email spam that users supposedly crave. Today, the Interactive Advertising Bureau tells us that users want ads that "follow them around" from site to site.

Enough background. Just as the email spam debate ended with MSN's campaign, the third-party web tracking debate ended on June 5, 2017.

With Intelligent Tracking Prevention, WebKit strikes a balance between user privacy and websites’ need for on-device storage. That said, we are aware that this feature may create challenges for legitimate website storage, i.e. storage not intended for cross-site tracking.

If you need it in bullet points, here it is.

  • Nifty machine learning technology is coming in on the user's side.

  • "Legitimate" uses do not include cross-site tracking.

  • Safari's protection is automatic and client-side, so no blocklist politics.

Surveillance marketers come up with all kinds of hypothetical reasons why users might prefer targeted ads. But in the real world, Apple invests time and effort to understand user experience. When Apple communicates about a feature, it's because that feature is likely to keep a user satisfied enough to buy more Apple devices. We can't read their confidential user research, but we can see what the company learned from it based on how they communicate about products.

(Imagine for a minute that Apple's user research had found that real live users are more like the Interactive Advertising Bureau's idea of a user. We might see announcements more like "Safari automatically shares your health and financial information with brands you love!" Anybody got one of those to share?)

Saving an out-of-touch ad industry

Advertising supports journalism and cultural works that would not otherwise exist. It's too important not to save. Bob Hoffman asks,

[H]ow can we encourage an acceptable version of online advertising that will allow us to enjoy the things we like about the web without the insufferable annoyance of the current online ad model?

The browser has to be part of the answer. If the browser does its job, as Safari is doing, it can play a vital role in re-connecting users with legit advertising—just as users have come to trust legit email newsletters now that they have effective spam filters.

Safari's Intelligent Tracking Prevention is not the final answer any more than Paul Graham's "A plan for spam" was the final spam filter. Adtech will evade protection tools just as spammers did, and protection will have to keep getting better. But at least now we can finally say debate over, game on.

With New Browser Tech, Apple Preserves Privacy and Google Preserves Trackers

An Ad Network That Works With Fake News Sites Just Launched An Anti–Fake News Initiative

Google Slammed For Blocking Ads While Allowing User Tracking

Introducing FilterBubbler: A WebExtension built using React/Redux

Forget far-right populism – crypto-anarchists are the new masters

Risks to brands under new EU regulations

Breitbart ads plummet nearly 90 percent in three months as Trump’s troubles mount

Be Careful Celebrating Google’s New Ad Blocker. Here’s What’s Really Going On.

‘We know the industry is a mess’: Marketers share challenges at Digiday Programmatic Marketing Summit

FIREBALL – The Chinese Malware of 250 Million Computers Infected

Verified bot laundering 2. Not funny. Just die

Publisher reliance on tech providers is ‘insane’: A Digiday+ town hall with The Washington Post’s Jarrod Dicker

Why pseudonymization is not the silver bullet for GDPR.

A level playing field for companies and consumers

,

CryptogramFriday Squid Blogging: Sex Is Traumatic for the Female Dumpling Squid

The more they mate, the sooner they die. Academic paper (paywall). News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramNSA Document Outlining Russian Attempts to Hack Voter Rolls

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational -- and there's no evidence that they had any actual effect -- they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia's military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company's network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA's analysis ends. We don't know whether those 122 targeted attacks were successful, or what their effects were if so. We don't know whether other election software companies besides VR Systems were targeted, or what the GRU's overall plan was -- if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect -- anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC -- one of the states that VR Systems supports -- but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don't know what happened next, if anything. VR Systems isn't commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn't much of a smoking gun, it's yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI's affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner's NSA account or her personal account, but in either case, it's incredibly sloppy tradecraft.

With President Trump's election, the issue of Russian interference in last year's campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It's interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It's easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there's a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don't know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won't be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don't have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and -- by extension -- our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what's historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

TEDTwo surprising strategies for effective innovation

Picture this: Three kids are given a LEGO set with the pieces to build a fire department. All of them want to build as many new toys as possible.

The first kid goes straight for the easy wins. He puts a tiny red hat on a tiny minifig: presto, a firefighter! In this way, he quickly makes several simple toys. The second kid goes by intuition. He chooses the pieces he’s drawn to and imagines how he could combine them. The third takes a different strategy altogether: She picks up axles, wheels, base plates; pieces she can’t use now but knows she’ll need later if she wants to build complex toys.

By the time they’re finished playing, which kid will have created the most new toys?

Common lore favors the second kid’s strategy — innovation by intuition or visionary foresight. “Innovation has been more of an art than a science,” says Martin Reeves (TED Talk: How to build a business that lasts 100 years), a senior partner and managing director at BCG, and global director of BCG’s think tank. “We think it’s dependent on intuition or personality or luck.”

A new study, led by Reeves and Thomas Fink from the London Institute of Mathematical Sciences, shows that’s not the case.

“Innovation is an unpredictable process, but one with predictable features,” says Reeves. “It’s not just a matter of luck. It’s possible to have a strategy of innovation.”

The study found that the second kid, guided only by intuition and vision, is the least likely to succeed. The other two are the ones to emulate, but the secret is knowing how and when to use each of their tactics.   

The Impatient Strategy

Let’s go back to the first kid, the one who started by putting hats on the figurines. His strategy is familiar to entrepreneurs: he’s creating the minimum viable product, or the simplest, fastest version of a finished product.

Reeves calls that an “impatient strategy.” It’s fast, iterative, and bare bones.  

When you’re breaking into a market that’s fairly new, an impatient strategy is the best way to go. “Look for simple solutions,” says Reeves.    

For example, that’s what Uber did when it first launched. The industry was young and easy to disrupt, so the app combined technologies that already existed to create a simple black-car service. Only later did it become the sprawling company it is today, looking ahead to things like the future of self-driving cars.   

The Patient Strategy

An impatient strategy might be effective early on, but eventually, it stops working.

Enter the third kid from our LEGO story. She’s not worried about speed; she’s focused on the end point she wants to reach. It’ll take her longer to build a toy, but she’s more likely to create a toy that’s elaborate (think: a fire truck) and more sophisticated than the first kid’s firefighters in hats. 

Reeves calls this a “patient strategy.” It’s complex, forward-looking, and relatively slow.   

A patient strategy is too costly for most startups. It requires resources and access, and it risks investing a lot in a product that doesn’t take off. “It becomes a big company game,” says Reeves.  

For example, Apple is known to make investments in technologies that often pay off later, many years after acquisition or initial patenting. That’s the hallmark of a patient strategy.    

When to Switch Your Strategy  

The most successful entrepreneurs use both strategies. They’re fast and agile when their industry is young; patient and forward-looking as their industry gets more advanced.  

How do you know when to switch? “Think of this as a search,” says Reeves. “Understand the maturity of your space by looking at the complexity of the products that you and your competitors are creating.”  

As the products get more complex, your strategy should get more patient.

Of course, the rest of the business needs to follow suit. “Adjust all aspects of your business to match your strategy,” says Reeves. “An impatient strategy is fast and agile, but you also need to prepare yourself to change your approach and structure later.”


Sociological ImagesMocking perfect gender performances (because the rule is to break rules)

Both men and women face a lot of pressure to perform masculinity and femininity respectively. But, ironically, people who rigidly conform to rules about gender, those who enact perfect performances of masculinity or femininity, are often the butt of jokes. Many of us, for example, think the male body builder is kind of gross; we suspect that he may be compensating for something, dumb like a rock, or even narcissistic. Likewise, when we see a bleach blond teetering in stilettos and pulling up her strapless mini, many of us think she must be stupid and shallow, with nothing between her ears but fashion tips.

The fact that we live in a world where there are different expectations for men’s and women’s behavior, in other words, doesn’t mean that we’re just robots acting out those expectations. We actually tend to mock slavish adherence to those rules, even as we carefully negotiate them (breaking some rules, but not too many, and not the really important ones).

In any case, I thought of this when I saw this ad. The woman at the other end of the table is doing (at least some version of) femininity flawlessly.  The hair is perfect, her lips exactly the right shade of pink, her shoulders are bare. But… it isn’t enough.  The man behind the menu has “lost interest.”

It’s unfortunate that we spend so much time telling women that the most important thing about them is that they conform to expectations of feminine beauty when, in reality, living up to those expectations means performing an identity that we disdain.

We do it to men, too.  We expect guys to be strictly masculine, and when they turn out to be jocks and frat boys, we wonder why they can’t be nicer or more well-rounded.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Know Your Bits!

"I know software can't always be perfect, but things like this make me want to shut down my PC and say that's enough computering for the day," writes Timothy W.

 

Richard S. wrote, "I suppose you don't really need an email body when you have everything you need in the subject."

 

"I recently inherited a project from a contractor that left the project," writes Michael H., "I have never seen code quite like his, and that is NOT a compliment."

 

Bruce C. writes, "The fact that this won't ship to NZ is kind of immaterial - the REAL question is do I feel like spending $10.95 for new or can I settle for used at a discount?"

 

"I'm sure their product is great, but I don't want to be an early adopter if I can help it," writes Jaime A.

 

"To be fair, the email did catch my attention," wrote Philip G.

 

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners June Meeting: Debian 9 release party!

Jun 17 2017 12:30
Jun 17 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Debian Linux version 9 (codename "Stretch") is scheduled for release on 17 June 2017.  Join us in celebrating the release and assisting anyone who would like to install or upgrade to the new version!


There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


,

CryptogramSafety and Security and the Internet of Things

Ross Anderson blogged about his new paper on security and safety concerns about the Internet of Things. (See also this short video.)

It's very much along the lines of what I've been writing.

Worse Than FailureThe Gassed Pump

“Staff augmentation,” was a fancy way of saying, “hey, contractors get more per hour, but we don’t have to provide benefits so they are cheaper,” but Stuart T was happy to get more per hour, and even happier to know that he’d be on to his next gig within a few months. That was how he ended up working for a national chain of gas-station/convenience stores. His job was to build a “new mobile experience for customer loyalty” (aka, wrapping their website up as an app that can also interact with QR codes).

At least, that’s what he was working on before Miranda stormed into his cube. “Stuart, we need your help. ProdTrack is down, and I can’t fix it, because I’ve got to be at a mandatory meeting in ten minutes.”

A close-up of a gas pump

ProdTrack was their inventory system that powered their point-of-sale terminals. For it to be down company wide was a big problem, and essentially rendered most of their stores inoperable. “Geeze, what mandatory meeting is more important than that?”

“The annual team-building exercise,” Miranda said, using a string of profanity for punctuation. “They’ve got a ‘no excuses’ policy, so I have to go, ‘or else’, but we also need to get this fixed.”

Miranda knew exactly what was wrong. ProdTrack could only support 14 product categories. But one store- store number 924- had decided that it needed 15. So they added a 15th category to the database, threw a few products into the category, and crossed their fingers. Now, all the stores were crashing.

“You’ll need to look at the StoreSQLUpdates and the StoreSQLUpdateStatements tables,” Miranda said. “And probably dig into the ProductDataPump.exe app. Just do a quick fix- we’re releasing an update that supports any number of categories in three weeks or so; we just need to hold this together till then.”

With that starting point, Stuart started digging in. First, he puzzled over the tables Miranda had mentioned. StoreSQLUpdates looked like this:

ID     | STATEMENT                                   | STATEMENT_ORDER
145938 | DELETE FROM SupplierInfo                    | 90
148939 | INSERT INTO ProductInfo VALUES(12348, 3, 6) | 112

Was this an audit table? What was StoreSQLUpdateStatements then?

ID     | STATEMENT
168597 | INSERT INTO StoreSQLUpdates(statement, statement_order) VALUES (‘DELETE FROM SupplierInfo’, 90)
168598 | INSERT INTO StoreSQLUpdates(statement, statement_order) VALUES (‘INSERT INTO ProductInfo VALUES(12348, 3, 6)’, 112)

Stuart stared at his screen, and started asking questions. Not questions about what he was looking at, but questions about the life choices that had brought him to this point, questions about whether it was really that bad an idea to start drinking at work, and questions about the true nature of madness- if the world was mad, and he was the only sane person left, didn’t that make him the most insane person of all?

He hoped the mandatory team building exercise was the worst experience of Miranda’s life, as he sent her a quick, “WTF?” email message. She obviously still had her cellphone handy, as she replied minutes later:

Oh, yeah, that’s for data-sync. Retail locations have flaky internet, and keep a local copy of the data. That’s what’s blowing up. Check ProductDataPump.exe.

Stuart did. ProductDataPump.exe was a VB.Net program in a single file, with one beautifully named method, RunIt, that contained nearly 2,000 lines of code. Some saintly soul had done him the favor of including a page of documentation at the top of the method, and it started with an apology, then explained the data flow.

Here’s what actually happened: a central database at corporate powered ProdTrack. When any data changed there, those changes got logged into StoreSQLUpdateStatements. A program called ProductDataShift.exe scanned that table, and when new rows appeared, it executed the statements in StoreSQLUpdateStatements (which placed the actual DML commands into StoreSQLUpdates).

Once an hour, ProductDataPump.exe would run. It would attempt to connect to each retail location. If it could, it would read the contents of the central StoreSQLUpdates and the local StoreSQLUpdates, sorting by the order column, and through a bit of faith and luck, would hopefully synchronize the two databases.
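
To make that hourly run a bit more concrete, here is a minimal sketch of the sync logic it describes, written in TypeScript rather than the original VB.Net. Every name in it (StoreSQLUpdate, syncStore, applyStatement) is hypothetical and only stands in for the real plumbing; the row shape mirrors the StoreSQLUpdates table shown above.

// Hypothetical sketch only; not the actual ProductDataPump.exe code.
// The row shape mirrors the StoreSQLUpdates table above; applyStatement
// stands in for whatever runs the DML against a store's local database.
interface StoreSQLUpdate {
  id: number;
  statement: string;       // raw DML, e.g. "DELETE FROM SupplierInfo"
  statementOrder: number;  // the order column the sync sorts on
}

function syncStore(
  centralUpdates: StoreSQLUpdate[],
  localUpdates: StoreSQLUpdate[],
  applyStatement: (sql: string) => void
): void {
  // Merge both sets of statements and replay them in order,
  // hoping the two databases end up in the same state.
  const merged = [...centralUpdates, ...localUpdates].sort(
    (a, b) => a.statementOrder - b.statementOrder
  );

  for (const update of merged) {
    const sql = update.statement.trim();
    if (sql !== "") {
      applyStatement(sql); // execute the raw DML verbatim
    }
  }
}

// Example: replaying the two rows from the table above against a fake store.
syncStore(
  [
    { id: 145938, statement: "DELETE FROM SupplierInfo", statementOrder: 90 },
    { id: 148939, statement: "INSERT INTO ProductInfo VALUES(12348, 3, 6)", statementOrder: 112 },
  ],
  [],
  (sql) => console.log("would run:", sql)
);

Even in this idealized form, the core problem is visible: the sync is just raw SQL strings replayed in a hoped-for order, with "a bit of faith and luck" standing in for real conflict resolution.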

Buried in the 2,000 line method, at about line 1,751, was a block that actually executed the statements:

If bolUseSQL Then
    For Each sTmp As String In sProductsTableSQL
        sTmp = sTmp.Trim()
        If sTmp <> "" Then
            SQLUpdatesSQL(lngIDSQL, sTmp, dbQR5)
        End If
    Next sTmp
End If

Once he was done screaming at the insanity of the entire process, Stuart looked at the way product categories worked. Store 924 didn’t carry anything in the ALCOHOL category, due to state Blue Laws, but had added a PRODUCE category. None of the other stores had a PRODUCE category (if they carried any produce, they just put it in PREPARED_FOODS). Fixing the glitch that caused the application to crash when it had too many categories would take weeks, at least- and Miranda already told him a fix was coming. All he had to do was keep it from crashing until then.

Into the StoreSQLUpdates table, he added a DELETE statement that would delete every category that contained zero items. That would fix the immediate problem, but when the ProductDataPump.exe ran, it would just copy the broken categories back around. So Stuart patched the program with the worst fix he ever came up with.

If bolUseSQL Then
    For Each sTmp As String In sProductsTableSQL
        sTmp = sTmp.Trim()
        If sTmp <> "" Then
            If nStoreNumber = 924 And sTmp.Contains("ALCOHOL") Then
                Continue For
            ElseIf nStoreNumber <> 924 And sTmp.Contains("PRODUCE") Then
                Continue For
            Else
                SQLUpdatesSQL(lngIDSQL, sTmp, dbQR5)
            End If
        End If
    Next sTmp
End If
[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!

Planet Linux AustraliaLev Lafayette: Heredocs with Gaussian and Slurm

Gaussian is a well-known computational chemistry package, and it is sometimes subject to debate over its license (e.g., the terms state that researchers who develop competing software packages are not permitted to use the software, compare performance, etc.). Whilst I have some strong opinions about such a license, they will be elaborated at another time. The purpose here is to illustrate the use of heredocs with Slurm.

read more

,

LongNowHow Can We Create a Manual For Civilization?

“WHAT BOOKS would you want to restart civilization from scratch?”

The Long Now Foundation has been involved in and inspired by projects centered on that question since launching in 01996. (See, for example, The Rosetta Project, Westinghouse Time Capsules, The Human Document Project, The Survivor Library, The Toaster Project, The Crypt of Civilization, and the Voyager Record.) For years, Executive Director Alexander Rose has been in discussions on how to create a record of humanity and technology for our descendants. In 02014, Long Now began building it.

The Manual For Civilization is working toward a living, crowd-curated library of 3,500 books put forward by the Long Now community and on display at The Interval. To stack the shelves, we solicited book recommendations from Long Now members and supporters, special guest curators like Long Now founders Stewart Brand and Brian Eno, past Seminar speakers like George Dyson and Neal Stephenson, subject experts Maria Popova and Violet Blue, and volunteer curators like Alan Beatts, Michael Pujals, and Heath Rezabek.

Neal Stephenson selecting books for The Manual For Civilization


The physical collection in The Interval grounds the catalog, and also provided the size constraint of the number of books. But the Long Now community is global, and the reality is that few Long Now members have had the opportunity to peruse our Bay Area-bound library.

Today, we’re getting ready to digitize the Manual so that the library can be shared with the world. We are partnering with the Internet Archive, who have created a special collection for the Manual, and, for the first time, we are sharing a selection of the titles in our collection as a temporary browse-only catalog on Libib (currently showing about 800 of the current 1,400 selections). To help make this digitization effort happen, we will need to raise approximately $100,000 to scan all the books and post them online, making the library accessible to everyone. If you are interested in helping support this effort, please contact nick@longnow.org.

The Origins of the Manual (01751-02014)

“Final Steps in Shaping a Goblet,” from Diderot’s Encyclopedie.


Framing the library’s focus as “restarting civilization” may seem apocalyptic or predictive on its face, but that is not the intention. Rather, the hope is to create a curatorial principle that inspires valuable conversation that reframes how we think about where civilization has come so far, where it might go in the future, and what tools are necessary to get it there.

In that sense, The Manual For Civilization is the latest in a centuries-long genealogy of ambitious projects to catalog and, crucially, democratize the most essential human knowledge. Inherent in each project—from Denis Diderot’s famous Encyclopedie to Long Now Co-Founder Stewart Brand’s countercultural bible Whole Earth Catalog to The Manual—is a theory of civilization. There is also, as will be discussed further below, a bias depending on which curatorial principle is emphasized and of course, who does that curation.

“Figurative system of human knowledge” from the Encyclopedie. Knowledge was divided into branches of memory, reason and imagination.


When Diderot began editing the Encyclopedie in 01751, the ideas of the Enlightenment held sway only amongst learned philosophes. Power rested in the hands of the clerics. Diderot considered the Encyclopedie as a deliberate attempt to “change the way people think” by democratizing the ideals of the Enlightenment. Controversially, the Encyclopedie’s central organizing principle was based on reason, rather than the authority of the church. In the entry for encyclopedia, he wrote:

The goal of an encyclopedia is to assemble all the knowledge scattered on the surface of the earth, to demonstrate the general system to the people with whom we live, & to transmit it to the people who will come after us, so that the works of centuries past is not useless to the centuries which follow, that our descendants, by becoming more learned, may become more virtuous & happier, & that we do not die without having merited being part of the human race.

Diderot would continue editing the Encyclopedie over the next fifteen years, amassing thousands of entries and enlisting the help of some of the Enlightenment’s most brilliant minds as contributors, including Voltaire, Rousseau, and Montesquieu. Diderot’s 35 volumes constituted a:

tremendous storehouse of fact and propaganda that swept Europe and taught it what ‘reason,’ ‘rights,’ ‘authority,’ ‘government,’ ‘liberty,’ ‘equality,’ and related social principles are or should be. The work was subversive in its tendency, not in its advocacy: it took for granted toleration, the march of mind exemplified by science, and the good of the whole people….The eleven volumes of plates were in themselves a revolutionary force, for they made public what had previously been kept secret by the guilds, and they supported the philosophe doctrine that the dissemination of knowledge was the high road to emancipation.

Stewart Brand’s Whole Earth Catalog divided knowledge into sections based on whole systems thinking.


Two hundred years later, while reflecting on the legacy of the Whole Earth Catalog (01968), Stewart Brand wrote that the Catalog and the Encyclopedie shared a similar aim: to hand “the tools of a whole civilization to its citizens.” Like the Encyclopedie, Brand wrote, the Whole Earth Catalog sought to decentralize authority and redistribute it to individuals through access to knowledge, or tools. Diderot’s Encyclopedie, wrote Brand, “was the leading tool of the Enlightenment.”

Though the first commune-bound readers of the Whole Earth Catalog— those “bands of adventurous malcontents who were setting out to reinvent civilization”— did not exactly restart civilization, their process held “surprising value.” Brand wrote that as the decades passed, the Catalog’s true legacy was glimpsed in the personal computer revolution that followed, which was informed by the same process:

The personal-computer revolution was a direct result of that value system. It was initiated and carried to fruition by youthful longhairs, on purpose, with striking consistency between what was intended and what was accomplished. The impulse was to decentralize authority—to undermine the high priests and air-conditioned mainframes of information technology and hand their power to absolutely everybody.

“Here are the tools to make your life better. And to make the world better,” Brand wrote in his foreword to the Millennium Whole Earth Catalog (01994)—the last edition published. “That they’re the same tools is our theory of civilization.”

A Merry Prankster tarot card of Stewart Brand linking the curatorial principles of the Catalog to the formation of the World Wide Web.


In the inaugural Whole Earth Catalog, Brand declared that “We are as gods and might as well get good at it.” But we are as gods only because of our ancestors’ diligence. The promise of a technologically advancing future is predicated on millennia of accumulated knowledge. Civilization has taken a lot of work to build, and it demands a great deal of know-how to sustain. And as modern life increasingly encourages specialization, familiarity across that accumulated knowledge’s breadth can wane. Our ability to collaborate is a strength, but beyond a point we risk losing comprehension of the infrastructure—both physical and intellectual—that supports our modern lives. How can we retain that knowledge?

Stewart Brand at The Interval as the Manual For Civilization is constructed.


These questions inspired Long Now to build The Manual For Civilization. In developing the experience of The Interval, we integrated the Manual of Civilization book collection into the design layout as two floors of bookshelves that would face outward in The Interval space. The first floor shelves would be open and accessible for browsing, and the upper shelves would be accessible by staff, reached from the front by a tall ladder, or from the opposite side, since the shelves are open to the Long Now office above.

The Interval at Long Now in San Francisco.


As the opening date of The Interval approached in the summer of 02014, we knew we had a lot of empty shelves to fill, but had already started assembling the catalog as well as physical copies of books. In one pre-opening party, we had a bucket brigade of supporters passing physical books in the door, up the spiral staircase, to people on ladders who arranged the books on shelves, about 1,000 volumes that evening!

Kevin Kelly, with Alexander Rose, selects books from his personal library for The Manual For Civilization


We’ve had over 2,500 submissions and recommendations to the collection so far, with approximately 1,400 approved for inclusion in The Manual by our director Alexander Rose. Currently, 1,007 physical books reside in the Manual’s bookshelves. 861 titles from the collection are available to view on Libib.

Our plan is to solicit more book lists and recommendations until the list grows to about 5,000 from which we will edit the collection down to the 3,500 or so volumes that can fit on the shelves. We began the collection by using four broad categories to structure the collection:

  • Long-term Thinking, Past and Future: these include books on history as well as futurism and many books by Long Now speakers.
  • Rigorous Science Fiction: especially works that build richly imagined possible worlds to help us think about the future.
  • The Cultural Canon: great works of literature, poetry, philosophy, religion.
  • Mechanics of Civilization: “how-to” books for critical skills and technology, for example books on navigation, growing and gathering food, midwifery, forging tools.

Beyond these categories, we are exploring other ways to organize and catalog the collection, and to locate books on shelves. With any scheme though, we want to preserve the experience and delight of serendipitous discovery, of going to the bookshelf to look for one thing, and discovering three or four other things you are curious about.

Via humorous webcomic XKCD. 


We also hope to open up the discussion so that we can have an ongoing conversation about which books are in and out of the collection at any point in time, and why. With any curatorial principle comes a bias. This bias is problematic, but can be mitigated in a variety of ways. Wikipedia, for example, makes it possible for anybody to edit and contribute to its catalog. In the case of the Manual, we are committed to evolving our curatorial principle over time, the hope being that as we move through the Long Now, this living collection is responsive, adaptive and open.

We’ve already had a few valuable learning experiences. When the Manual launched, Long Now member and Brainpickings founder Maria Popova contemplated Stewart Brand’s selections for the Manual, and had “only one lament:”

One would’ve hoped that a lens on rebuilding human civilization would transcend the hegemony of the white male slant and would, at minimum, include a more equal gender balance of perspectives — of Brand’s 76 books, only one is written by a woman, one features a female co-author, and one is edited by a woman. It’s rather heartbreaking to see that someone as visionary as Brand doesn’t consider literature by women worthy of representing humanity in the long run. Let’s hope the Long Now balances the equation a bit more fairly as they move forward with the remaining entries in their 3,500-book collaborative library.

Long Now member Maria Popova.


Long Now immediately reached out to Popova and invited her to contribute her own list for the Manual. In selecting it, she found it especially challenging to reconcile the curatorial constraints of the Manual with her desire to offer a diverse and balanced representation of essential human knowledge:

I faced a disquieting and inevitable realization: The predicament of diversity is like a Russian nesting doll — once we crack one layer, there’s always another, a fractal-like subdivision that begins at the infinite and approaches the infinitesimal, getting exponentially granular with each layer, but can never be fully finished. If we take, for instance, the “women problem” — to paraphrase Margaret Atwood — then what about Black women? Black queer women? Non-Western Black queer women? Non-English-speaking non-Western Black queer women? Non-English-speaking non-Western Black queer women of Jewish descent? And on and on. Due to that infinite fractal progression, no attempt to “solve” diversity — especially no thirty-item list — could ever hope to be complete. The same goes for other variables like genre or subject: For every aficionado of fiction, there’s one of drama, then 17th-century drama, then 17th-century Italian drama, and so on.

The inherent biases in catalogs like the Manual must be acknowledged, and ideally mitigated through open conversation, if such catalogs are to persist over the long term. Over time, we believe that the conversation about what goes into Manual will become as rich and interesting as the collection in the Manual itself.

FURTHER READING

Watch Jimmy Wales’ 02006 SALT Talk on Wikipedia and the future of free culture.

Watch Internet Archive founder Brewster Kahle’s 02011 SALT Talk on universal access to all knowledge.

Selections for The Manual For Civilization from members of the Long Now community:

Projects that inspired the Manual For Civilization:

  • The Rosetta Project: A multi-millennial micro-etched disk with a record of thousands of the world’s languages.
  • Westinghouse Time Capsules: Two time capsules (they actually coined the term for this project) by Westinghouse buried at World’s Fair sites, one in 01939 and the other in 01965, to be recovered in 5000 years.  They also did the very smart thing of making a “Book of Record” and an above ground duplicate of the contents on display.
  • The Human Document Project: A German project to create a record of humanity that will last one million years.
  • Crypt of Civilization: An airtight chamber located at Oglethorpe University in Atlanta, Georgia. The crypt consists of preserved artifacts scheduled to be opened in the year 8113 AD.
  • The Voyager Record: The Voyager Golden Records are phonograph records that were included aboard both Voyager spacecraft, which were launched in 1977. They contain sounds and images selected to portray the diversity of life and culture on Earth, and are intended for any intelligent extraterrestrial life form, or far future humans, who may find them.
  • Georgia Guidestones: The four granite Guidestones are covered in inscriptions written in 8 major languages that describe the tenets of their imagined Age of Reason.
  • Doomsday Chests by Noah Raford
  • The Forever Book an idea by Kevin Kelly
  • Global Village Construction Set
  • “History of Humanity” project
  • The Library of Utility
  • The Memory of Mankind project
  • The Great Pyramid project
  • Digital Clay Tablets
  • Arnano sapphire and glass data storage

Content that has been discussed to be used for these projects:

  • The Gingery books are great first pass on how to re-start manufacturing technology
  • wikiHow has a lot of great info and it is continuously updated.  The entry on how to deliver a baby seems like a particularly handy one…
  • The Foxfire Books on homespun technology seem to have a slightly less industrial take than the Gingery books, and are pretty comprehensive
  • The Let’s Say You’ve Gone Back in Time poster to help you restart civilization by Ryan North the creator of the awesome Dinosaur Comics
  • The Way Things Work by David Macaulay.  This is a fantastic book, but it might leave people thinking that all technology is powered by woolly mammoths and angels.
  • The Harvard Classics, originally known as Dr. Eliot’s Five Foot Shelf, are often referred to as an item that should go into a record like this.
  • Encyclopedia Britannica: People often suggest using the latest version that is now out of copyright.  I believe this is the 13th edition, but so far I have only found digital copies of the 11th.
  • The Domesday Book: the record of the great survey of England completed in 1086.  It would be interesting to find surveys and censuses from around the world.
  • The Mormon Genealogical Data:  This is also held in a bunker outside Salt Lake City Utah, but it might be nice to have a record of gene lines for a future civilization to better understand its past.
  • The Top 100 Project Gutenberg books: If you are concerned with archiving works in copyright this is a great source to find texts that are free to use.
  • The Internet Archive: An archive of complete snapshots of the web as well as thousands of books and videos.  Incidentally you would also get all of our scanned page content from the Rosetta Project with this.
  • Wikipedia: The text only version of this is actually not that large, and could be archived fairly easily.  Also one of the few sources that is beginning to get filled out in many languages and is also not held under a copyright.
  • How to field dress a deer: PDF pocket version from Penn State College of Agricultural Science (living in Northern California, I think this one will be especially handy).
  • The Toaster Project
  • The Panlex Project of cross linked language dictionaries
  • The Survivor Library

 

LongNowSelect Interval Talk Videos Now Online

Neal Stephenson at The Interval

As we mark the 3-year anniversary of the Conversations at The Interval lecture series, we’ve released video of more than a dozen Interval talks for the first time. HD video of fifteen select talks is now on The Interval website, free for everyone to enjoy.

Production of these talks was funded by donations from The Elkes Foundation, Because We Can, and Margaret & Will Hearst. Thanks to them and the ongoing support of Long Now members we can share these, and all our Long Now videos, free with the public.

Speakers featured include Long Now co-founder Stewart Brand; science fiction authors Neal Stephenson (shown above), Kim Stanley Robinson, and Andy Weir; artist Jonathon Keats; and internet archivist Jason Scott. Talk subjects range from interplanetary travel to the Internet of Things; digital preservation to the science of human taste; deep time art to the connection between genetics & ideology. You can see the full list of videos here.

Don’t know which to watch first? Here are a few suggestions:

Our first Interval speaker ever was Wired editor Adam Rogers who discussed the 10,000 year history of booze and the science behind it. You can see in the video and below how much The Interval has changed in 3 years.

Adam Rogers: Proof: The Science of Booze (May 02014)

Watch Adam Rogers at The Interval, May 02014

In the 01980s Jason Scott was an active participant in dial-up Bulletin Board Systems–one of the earliest networked communities. When he realized the content of the BBS era was in danger of disappearing he became, almost by accident, an archivist. Ever since, while some take the short-term view, Jason has stepped up to be a good ancestor for those who will inhabit the future networked world.

Jason Scott: The Web in an Eye Blink (February 02015).

How does the frame of long-term thinking change the way we consider the present, past, and future of refugees and others migrating under duress? We assembled a panel of academics and experienced non-profit workers to discuss this important topic.

The Refugee Reality panel discussion (February 02016).

Watch Refugee Reality at The Interval, February 02016

If you enjoy Long Now’s long-running Seminar podcast and the videos of that series, we think you’ll enjoy these talks, too. An audio podcast of Interval talks will launch soon!

Our Interval series continues with new talks every month. Next up is our special daytime talk by authors Neal Stephenson and Nicole Galland on Wednesday, June 14. And then on June 27, geologist Miles Traer discusses “The Geological Reveal,” a deep-time rock-record history of the SF Bay Area.

June 14, 02017 at 12pm: Neal Stephenson & Nicole Galland at The Interval
June 27, 02017: Miles Traer at The Interval

 

If you can’t make The Interval talks in person we have a video livestream
exclusively for Long Now members. So you can watch live from anywhere.

We also have short clips from Interval talks on the Long Now YouTube and Facebook pages. We hope you enjoy them, and if so that you’ll share them with others. Here’s a clip of Kim Stanley Robinson’s 02016 talk:

Google AdsenseBuilding a better web for everyone

Cross-posted from The Keyword

The vast majority of online content creators fund their work with advertising. That means they want the ads that run on their sites to be compelling, useful and engaging--ones that people actually want to see and interact with. But the reality is, it’s far too common that people encounter annoying, intrusive ads on the web--like the kind that blare music unexpectedly, or force you to wait 10 seconds before you can see the content on the page. These frustrating experiences can lead some people to block all ads--taking a big toll on the content creators, journalists, web developers and videographers who depend on ads to fund their content creation.

We believe online ads should be better. That’s why we joined the Coalition for Better Ads, an industry group dedicated to improving online ads. The group’s recently announced Better Ads Standards provide clear, public, data-driven guidance for how the industry can improve ads for consumers, and today I’d like to share how we plan to support it.

New tools for publishers

The new Ad Experience Report helps publishers understand how the Better Ads Standards apply to their own websites. It provides screenshots and videos of annoying ad experiences we’ve identified to make it easy to find and fix the issues. For a full list of ads to use instead, publishers can visit our new best practices guide.


The Ad Experience Report

“We’ve always put our users first and support the Coalition’s Better Ads efforts and standards. The report’s videos and screenshots are incredibly helpful and make the Coalition’s research actionable for our teams. We’re impressed with the level of detail and transparency Google is providing and commend this initiative.”
- Troy Young, President, Hearst Digital Media

As part of our efforts to maintain a sustainable web for everyone, we want to help publishers with good ad experiences get paid for their work. With Funding Choices, now in beta, publishers can show a customized message to visitors using an ad blocker, inviting them to either enable ads on their site, or pay for a pass that removes all ads on that site through the new Google Contributor.

“Looking at the past few years, we’ve come to realize that the rise of ad blockers has negatively impacted potential revenue across all of our properties, particularly in Europe. Funding Choices allows us to have a conversation with visitors using ad blockers on how our business works, and provide them a choice to whitelist or contribute to our newsroom. We’ve found that people are generally open to whitelisting once they understand how content gets created.”
- Marc Boswell, SVP, Sales Operations & Client Services, Business Insider

Funding Choices is available to publishers in North America, U.K., Germany, Australia and New Zealand and will be rolling out in other countries later this year. Publishers should visit our new best practices guide for tips on crafting the right message for their audience.

Chrome support for the Better Ads Standards

Chrome has always focused on giving you the best possible experience browsing the web. For example, it prevents pop-ups in new tabs based on the fact that they are annoying. In dialogue with the Coalition and other industry groups, we plan to have Chrome stop showing ads (including those owned or served by Google) on websites that are not compliant with the Better Ads Standards starting in early 2018.

Looking ahead

We believe these changes will ensure all content creators, big and small, can continue to have a sustainable way to fund their work with online advertising.

We look forward to working with the Coalition as they develop marketplace guidelines for supporting the Better Ads Standards, and are committed to working closely with the entire industry—including groups like the IAB, IAB Europe, the DCN, the WFA, the ANA and the 4A’s, advertisers, agencies and publishers—to roll out these changes in a way that makes sense for users and the broader ads ecosystem.

Posted by Sridhar Ramaswamy
Senior Vice President, Ads and Commerce

Sociological ImagesTrump’s election made people less private about anti-immigrant attitudes

Originally posted at Montclair SocioBlog.

Did Donald Trump’s campaign and election cry havoc and unleash the dogs of racism?

Last June, hauling out Sykes and Matza’s concept of “neutralization,” I argued that Trump’s constant denigration of “political correctness” allowed his supporters to neutralize norms against racism. If the denigration of political correctness means that the people who condemn racism are wrong or bad, then what they are condemning must be OK. The logic might not be impeccable, but it works. I argued that I wasn’t sure that Trump had caused an increase in racist attitudes, but he gave people a license to express those attitudes.

Aziz Ansari made a similar point on Saturday Night Live  the day after the inauguration. (Apologies if you have to wait through an ad.)


Ansari’s version is much better than mine, and it reached a slightly larger audience. But there’s another important difference. I was talking about the message Trump supporters took from Trump himself before the election. Ansari is talking about the message they got from the electorate. The election changed their perceptions of the norms about expressing anti-immigrant views.

It’s as though minds of half the country had been clouded by “pluralistic ignorance.” That’s the scenario where each person thinks X but assumes that everyone else thinks Y, so everyone keeps their true ideas to themselves. Then something happens that reveals those previously hidden preferences, and suddenly everyone feels liberated. From the Trump supporters’ perspective, Trump was Dorothy throwing a bucket of water to dissolve the Wicked Witch of Political Correctness, and now all the Munchkins were free to express their feelings about Mexicans and Muslims (and perhaps others).

The 20% increase in hate crimes after the election certainly suggests that racists felt liberated by the election. Unfortunately, because of the uncontrolled variables and the small sample size of just one candidate winning one election, this bit of evidence is hardly conclusive.

But we now have some experimental evidence on the topic. In the two weeks before the election, three economists surveyed Mechanical Turkers in eight very Red states. The MTurkers were asked, in addition to other on-topic questions, if they would authorize the economists to make a donation on their behalf to a strongly anti-immigrant group, FAIR.  Mturkers were told, “The founder of FAIR is John Tanton, author of `The Immigration Invasion’ who wrote ‘I’ve come to the point of view that for European-American society and culture to persist requires a European-American majority, and a clear one at that.’”

For half the MTurkers, the donation question included this:

Note: just like any other answer to this survey, also your donation decision will be completely anonymous. No one, not even the researchers, will be able to match your decision to your name. [emphasis in original]

The other half were told:

Important: in order to ensure the quality of the data collected, a member of the research team might personally contact you to verify your answers to the next question and the following ones. [emphasis in original]

In addition, some of the people were told that Trump was certain to win their state. Others were given no information.

The results strongly support the idea that norms – or at least people’s perceptions of the norms – make a big difference. People who thought their choice would be public were much less likely to make the anti-immigrant donation – 34% vs. 54%. But that difference held true only for people who were not told how popular Trump was in their state. Among the people who were told basically, “Trump is a sure thing to win your state,” knowing that their donation would be public (i.e., known to the surveyors) made no difference in their willingness to donate.

If the researchers had stopped there, they would have been able to say

  • there seems to be a norm against giving money to anti-immigrant groups
  • the knowledge of that norm makes people less willing to make a donation to blatantly racist anti-immigrant group if even one stranger can know about that donation
  • if people think that many others in their state support an anti-immigrant candidate, they no longer feel that they need to keep their anti-immigrant views to themselves

Thanks to the results of the election, though, they didn’t have to stop there. The election gave the researchers a natural experiment to find out if the norms – or at least perceptions of the norms – had changed. Had Trump’s victory caused the scales of pluralistic ignorance to fall from the eyes of these Red-state Turkers?

The answer was yes. The election had the same effect as did the information about Trump support in the person’s state. It obliterated the difference between the public and private conditions.

To people who were reluctant to let their agreement with FAIR be known, Trump’s victory said, “It’s OK. You can come out of the closet. You’re among friends, and there are more of us than you thought.”

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramSurveillance Intermediaries

Interesting law-journal article: "Surveillance Intermediaries," by Alan Z. Rozenshtein.

Abstract: Apple's 2016 fight against a court order commanding it to help the FBI unlock the iPhone of one of the San Bernardino terrorists exemplifies how central the question of regulating government surveillance has become in American politics and law. But scholarly attempts to answer this question have suffered from a serious omission: scholars have ignored how government surveillance is checked by "surveillance intermediaries," the companies like Apple, Google, and Facebook that dominate digital communications and data storage, and on whose cooperation government surveillance relies. This Article fills this gap in the scholarly literature, providing the first comprehensive analysis of how surveillance intermediaries constrain the surveillance executive. In so doing, it enhances our conceptual understanding of, and thus our ability to improve, the institutional design of government surveillance.

Surveillance intermediaries have the financial and ideological incentives to resist government requests for user data. Their techniques of resistance are: proceduralism and litigiousness that reject voluntary cooperation in favor of minimal compliance and aggressive litigation; technological unilateralism that designs products and services to make surveillance harder; and policy mobilization that rallies legislative and public opinion to limit surveillance. Surveillance intermediaries also enhance the "surveillance separation of powers"; they make the surveillance executive more subject to inter-branch constraints from Congress and the courts, and to intra-branch constraints from foreign-relations and economics agencies as well as the surveillance executive's own surveillance-limiting components.

The normative implications of this descriptive account are important and cross-cutting. Surveillance intermediaries can both improve and worsen the "surveillance frontier": the set of tradeoffs -- between public safety, privacy, and economic growth -- from which we choose surveillance policy. And while intermediaries enhance surveillance self-government when they mobilize public opinion and strengthen the surveillance separation of powers, they undermine it when their unilateral technological changes prevent the government from exercising its lawful surveillance authorities.

Worse Than FailureCodeSOD: A Promise of Timing

Asynchronous programming is hard, and there’s never been a perfect way to solve that problem. One of the most widely used solutions is the “promise” or “future”. We wrap the asynchronous process up in a, well, promise of a future result. “Someday, there will be data here, I hope.” The real beauty of promises comes from their composability- “getData promises to fetch some records, and then the calling method can promise to display them.”
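
As a toy illustration of that composability, here is what such a chain looks like with plain Promises in TypeScript (hypothetical code, not taken from the story below; getData and displayRecords are made-up names):

// Hypothetical example of composing promises; the names are invented.
function getData(): Promise<string[]> {
  // Pretend this is an asynchronous fetch that eventually resolves with records.
  return new Promise((resolve) =>
    setTimeout(() => resolve(["record 1", "record 2"]), 100)
  );
}

function displayRecords(records: string[]): void {
  records.forEach((record) => console.log(record));
}

// getData promises to fetch some records, and then the caller promises to
// display them; each .then() returns a new promise, which is what lets
// further steps keep chaining onto the result.
getData()
  .then((records) => displayRecords(records))
  .catch((err) => console.error("fetch failed:", err));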

Of course, it’s still asynchronous, and when an application has multiple asynchronous processes happening at the same time, “weird behavior” can happen, thanks to timing issues. Keith W encountered one of those timing-related Heisenbugs, and became immediately suspicious about how it was getting invoked:

this.disableUI("Loading form...");
this.init(workflowId, ordinal).done(() => this.enableUI("Form loaded!"));

init isn’t passing data to the next promise in the chain. So what actually happens in the init method?

public init(workflowId: string, ordinal: number): JQueryPromise<any> {
  var self = this;

  var promise = $.Deferred<boolean>();

  $.when(
    $.ajax({
      type: "GET",
      url: getEndpoint("work", "GetFormInputForm"),
      data: { id: workflowId, ordinal: ordinal },
      contentType: "application/json",
      success(result) {
        if (result) {
          var formType = result.FormType || Forms.FormType.Standard;
          self.formMetaDataExcludingValues = result;
          var customForm = new Forms.CustomForm(formType, Forms.FormEditorMode.DataInput, result);
          self.CustomForm(customForm);
          self.flatControlList = customForm.flattenControls();
          ko.utils.arrayForEach(self.flatControlList, (cnt) => {
            var row = { name: cnt.name(), value: {}, displayText: "", required: cnt.required() };
            if (cnt instanceof Forms.SelectControl) {
              row.value = cnt.values();
            } else if (cnt instanceof Forms.ContactControl || cnt instanceof Forms.OrganisationControl) {
              row.displayText = cnt.displayName();
              row.value = cnt.value();
            } else if (cnt instanceof Forms.SequenceControl) {
              row.value = "";
            } else if (cnt instanceof Forms.AssetControl) {
              row.value = cnt.value();
              row.displayText = row.value ? cnt.displayName() : null;
            } else if (cnt instanceof Forms.CategoryControl) {
              row.value = cnt.cachedValue;
            } else {
              row.value = cnt.value();
            }

            self.defaultValues.push(new ControlValue(row));
          });
        }
      }
    })
  ).done(() => {
    $.ajax({
      type: "GET",
      url: url,
      data: { id: workflowId, ordinal: ordinal, numberOfInstances: self.NoOfInstances() },
      contentType: "application/json",
      success(result) {
        if (result) {
          var formType = result.FormType || Forms.FormType.Standard;
          if (!multiple) {
            self.CurrentInstanceNo(1);

            // store a blank form control values
            self.saveCurrentFormValues().done(() => self.applyKoBinding().done(() => {
              console.info("Data Saved Once.");
            }));
          } else {
            self.Forms([]);
            self.CurrentInstanceNo(self.NoOfInstances());
            self.saveAllFormValues(result).done(() => self.applyKoBinding().done(() => {
              console.info("Data Saved for all.");
            }));
          }
        }
      }
    })
  })

  promise.resolve(true);
  return promise;
}

This code is awful at first glance: complicated spaghetti, with all sorts of stateful logic manipulating properties of the owning object. But that’s only the first glance- it’s actually worse.

Let’s highlight the important parts:

public init(workflowId: string, ordinal: number): JQueryPromise<any> {
  var promise = $.Deferred<boolean>();

  //big pile of asynchronous garbage

  promise.resolve(true);
  return promise;
}

Yes, this init method returns a promise, but that promise isn’t contingent on any of the other promises made here. promise.resolve(true) essentially renders this promise synchronous, not asynchronous: instead of waiting for all the pending requests to complete, it immediately claims to have succeeded and fulfilled its promise.

No wonder there’s a…







… timing issue.
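
For comparison, here is a hedged sketch (not the actual fix) of how init could hand back a promise that only settles after both requests finish: the deferred object disappears, and the chained ajax calls are returned instead. The helper names buildFormFromMetadata and populateFormInstances, and the "GetFormInstances" endpoint, are made up stand-ins; the original code referenced an undefined url for the second request.

public init(workflowId: string, ordinal: number): JQueryPromise<any> {
  const self = this;
  return $.ajax({
    type: "GET",
    url: getEndpoint("work", "GetFormInputForm"),
    data: { id: workflowId, ordinal: ordinal },
    contentType: "application/json"
  }).then(result => {
    // hypothetical helper standing in for the first success() body
    self.buildFormFromMetadata(result);
    return $.ajax({
      type: "GET",
      url: getEndpoint("work", "GetFormInstances"), // hypothetical endpoint name
      data: { id: workflowId, ordinal: ordinal, numberOfInstances: self.NoOfInstances() },
      contentType: "application/json"
    });
  }).then(result => self.populateFormInstances(result)); // hypothetical stand-in for the second success() body
}

Because each then() waits on the promise returned by the step before it, the caller’s done(() => this.enableUI("Form loaded!")) would only fire once both responses had actually been processed.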

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

LongNowNeal Stephenson and Nicole Galland speak at The Interval: June 14, 02017

Authors Neal Stephenson & Nicole Galland at The Interval on Wednesday, June 14

Next week is the 3-year anniversary of Long Now’s Interval cafe and bar opening to the public in San Francisco. Since 02014 we’ve produced sixty-six long-term thinking lectures at The Interval.

On this milestone week we’re pleased to welcome authors Neal Stephenson and Nicole Galland on Wednesday, June 14, 02017 to discuss their novel The Rise and Fall of D.O.D.O. which debuts next week. This will be a special daytime event that starts at noon Pacific Time.

Tickets for this event sold out in less than an hour to Long Now members; one of our member benefits is early access to buy Interval event tickets. But, as we do for all our talks, we will host a live video stream of this event—starting at 12:30pm PT (UTC -7:00). We hope the earlier start will allow Long Now members and fans around the world to tune in, as it’s a more convenient hour in many time zones than our usual evening talks.

You do have to be a Long Now member to watch on the Long Now site. Membership starts at $8/month and comes with lots of benefits. So we hope you will consider joining if you are not yet a member; and if you are, we hope you will spread the word!

The Rise and Fall of D.O.D.O.

The Rise and Fall of D.O.D.O. features time travel, ancient texts, 19th century technology, a language expert as protagonist, and magic. Sounds like the perfect summer reading for imaginative long-term thinkers!

This is Neal’s second visit to The Interval. In 02015 he talked about his novel Seveneves with Stewart Brand. Video of that talk is now live on The Interval site. He and Nicole will be joined in conversation by Long Now’s Executive Director Alexander Rose.

Neal’s long-time connection with The Long Now Foundation goes back to 01998 when he offered some design ideas about the 10,000 Year Clock to our co-founder Danny Hillis. Those sketches became the basis for his novel Anathem, published a decade later, which Neal launched with a Long Now event in San Francisco. In 02014 he not only gave us a list of books for the Manual for Civilization project, but he personally donated to help us build The Interval.

All of this makes his and Nicole’s visit during The Interval anniversary week even more special. We’re excited to learn more about the book and to share this event live with our members everywhere. We hope you will join us! More about the June 14 event.

Neal Stephenson & Nicole Galland at The Interval, June 14 02017

,

Planet Linux AustraliaDanielle Madeley: Applied PKCS#11

The most involved thing I’ve had to learn this year is how to actually use PKCS #11 to talk to crypto hardware. It’s actually not that clear. Most of the examples are buried in random bits of C from vendors like Oracle or IBM; and the spec itself is pretty dense. Especially when it comes to understanding how you actually use it, and what all the bits and pieces do.

In honour of our Prime Minister saying he should have NOBUS access into our cryptography, which is why we should all start using hardware encryption modules (did you know you can use your TPM?), and thus in order to save the next girl 6 months of poking around on a piece of hardware she doesn’t really *get*, I started a document: Applied PKCS#11.

The later sections refer to the API exposed by python-pkcs11, but the first part is generally relevant. Hopefully it makes sense; I’m super keen to get feedback if I’ve made any huge logical leaps, etc.

Krebs on SecurityFollowing the Money Hobbled vDOS Attack-for-Hire Service

A new report proves the value of following the money in the fight against dodgy cybercrime services known as “booters” or “stressers” — virtual hired muscle that can be rented to knock nearly any website offline.

Last fall, two 18-year-old Israeli men were arrested for allegedly running vDOS, perhaps the most successful booter service of all time. The young men were detained within hours of being named in a story on this blog as the co-proprietors of the service (KrebsOnSecurity.com would later suffer a three-day outage as a result of an attack that was alleged to have been purchased in retribution for my reporting on vDOS).

The vDos home page.

That initial vDOS story was based on data shared by an anonymous source who had hacked vDOS and obtained its private user and attack database. The story showed how the service made approximately $600,000 over just two of the four years it was in operation. Most of those profits came in the form of credit card payments via PayPal.

But prior to vDOS’s takedown in September 2016, the service was already under siege thanks to work done by a group of academic researchers who teamed up with PayPal to identify and close accounts that vDOS and other booter services were using to process customer payments. The researchers found that their interventions cut profits in half for the popular booter service, and helped reduce the number of attacks coming out of it by at least 40 percent.

At the height of vDOS’s profitability in mid-2015, the DDoS-for-hire service was earning its proprietors more than $42,000 a month in PayPal and Bitcoin payments from thousands of subscribers. That’s according to an analysis of the leaked vDOS database performed by researchers at New York University.

As detailed in August 2015’s “Stress-Testing the Booter Services, Financially,” the researchers posed as buyers of nearly two dozen booter services — including vDOS —  in a bid to discover the PayPal accounts that booter services were using to accept payments. In response to their investigations, PayPal began seizing booter service PayPal accounts and balances, effectively launching their own preemptive denial-of-service attacks against the payment infrastructure for these services.

Those tactics worked, according to a paper the NYU researchers published today (PDF) at the WEIS 2017 workshop at the University of California, San Diego.

“We find that VDoS’s revenue was increasing and peaked at over $42,000/month for the month before the start of PayPal’s payment intervention and then started declining to just over $20,000/month for the last full month of revenue,” the paper notes.

The NYU researchers found that vDOS had extremely low costs, and virtually all of its business was profit. Customers would pay up front for a subscription to the service, which was sold in booter packages priced from $5 to $300. The prices were based partly on the overall number of seconds that an attack may last (e.g., an hour would be worth 3,600 attack seconds).

In just two of its four years in operation vDOS was responsible for launching 915,000 DDoS attacks, the paper notes. In adding up all the attack seconds from those 915,000 DDoS attacks, the researchers found vDOS was responsible for 48 “attack years” — the total amount of DDoS time faced by the victims of vDOS.
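
As a rough back-of-the-envelope check (not a figure from the paper itself): 48 years is about 1.5 billion seconds, so spread across 915,000 attacks that works out to an average of roughly 1,650 attack seconds, a bit under half an hour, per attack.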

“As VDoS’s revenue and active subscriber base dwindled, so did the amount of harmful DDoS attacks launched by VDoS,” the NYU researchers wrote. “The peak attack time we found was slightly under 100,000 attacks and 5 attack years per month when VDoS’s revenue was slightly over $30,000/month. This decreased to slightly under 60,000 attacks and 3 attack years during the last month for which we have attack data. Unfortunately, we have incomplete attack data and likely missed the peak of VDoS’s attack volume. However, the payment intervention correlates to a 40% decrease in attack volume, which equates to 40,000 fewer attacks and 2 fewer attack years per month.”

Although a small percentage of vDOS customers shifted to paying for their monthly subscriptions in Bitcoin after their preferred PayPal methods were no longer available, the researchers found that most customers who relied on PayPal simply went away and never came back.

“Near the middle of August 2015, the payment intervention that limited vDOS’s ability to accept PayPal payments began to take its toll on vDOS,” the researchers wrote. “Disrupting vDOS’s PayPal payment channel had a noticeable effect on both recurring and new revenue. By August 2015, payments from the PayPal channel decreased by $12,458 (44%) from an average of $28,523 over the previous five months. The Bitcoin payment channel increased by $6,360 (71%), but did not fully compensate for lost revenue from PayPal.”

The next month, vDOS established a number of ad-hoc payment methods, such as other third-party payment processors that accept credit card payments. However, most of these methods were short lived, likely due to the payment processors learning about the nature of their illicit DDoS service and terminating their accounts, the researchers observed.

“The revenue from these other regulated payment channels dwindled over a ten month period from $18,167 in September 2015 to $1,700 during June 2016,” the NYU team wrote. “The last month of the database leak in July 2016 shows no other forms payments other than Bitcoin.”

Other developments since vDOS’s demise in September 2016 have conspired to deal a series of body blows to the booter service industry. In October 2016, Hackforums — until recently the most bustling marketplace on the Internet where people could compare and purchase booter services — announced it was permanently banning the sale and advertising of these services on the forum.

In December 2016, authorities in the United States and Europe arrested nearly three-dozen people suspected of patronizing booter services. The enforcement action was a stated attempt by authorities to communicate to the public that people can go to jail for hiring booter services.

In April 2017, a U.K. man who ran a booter service that delivered some 1.7 million denial-of-service attacks against victims worldwide was sentenced to two years in prison.

Prosecutors in Israel say they are preparing formal charges against the two young Israeli men arrested last year on suspicion of running vDOS.

Check out the full NYU paper here (PDF).

CryptogramSpear Phishing Attacks

Really interesting research: "Unpacking Spear Phishing Susceptibility," by Zinaida Benenson, Freya Gassmann, and Robert Landwirth.

Abstract: We report the results of a field experiment where we sent to over 1200 university students an email or a Facebook message with a link to (non-existing) party pictures from a non-existing person, and later asked them about the reasons for their link clicking behavior. We registered a significant difference in clicking rates: 20% of email versus 42.5% of Facebook recipients clicked. The most frequently reported reason for clicking was curiosity (34%), followed by the explanations that the message fit recipient's expectations (27%). Moreover, 16% thought that they might know the sender. These results show that people's decisional heuristics are relatively easy to misuse in a targeted attack, making defense especially challenging.

Black Hat presentation on the research.

Worse Than FailureThe Insurance Plan

When designing a new feature of an application, among other things, you always want to decide how it will be used. Is it single-threaded, or will it need to happen in parallel? Will only one user do it at a time, or does it need to support asynchronous access? Will every user want to do it in the same way, or will they each want something just a little different?

Ants in a sewer.

Charlie C. worked for a modestly sized financial startup that had gained some traction. The company had grown to about 100 people. They had garnered about 300 customers, and they were building software that would solve a problem that was causing regulators all manner of headaches.

The system essentially provided the ability for customers to swap bond insurance contracts. These swaps could be between any two, three or four parties, depending upon the type of transaction. One day, one of the customers asked the CEO if they could implement an approval workflow. Specifically, before letting some low level trader execute a swap, it would be routed to a group with approval authority. If approved, the transaction would be routed to the other party. If not, it would be rejected back to the initiator.

The CEO asked the development manager, who came to Charlie and told him to just hard-code some flags in the main transaction record, and check for them as special cases at every relevant place in the code. Charlie knew that if one customer wanted it, then most (if not all) of them would want it. Accordingly, he explained as much and said he would make it table-driven.

"Won't that take longer to build?" the boss inquired.

"Of course it will take longer, but it's a whole lot more supportable than special-casing the code all over, and corrupting the main transaction record," Charlie replied. With 300 customers and transactions involving 2, 3 or 4 parties, there were almost 8 billion permutations; even a tiny fraction of customers requesting special handling would make it absolutely unsupportable. "Think of it like… insurance."

The manager insisted on the hard coding, but Charlie ignored him, knowing all too well the consequences of listening to his boss.

A few months later, the work was done. On the very last day of work, as Charlie was doing some final cleanup, his boss walked over with the CEO in tow. They were both in a bit of a panic. "Another customer has asked for special workflow processing, but in a slightly different way. How much time will it take to add the additional workflow logic?"

Charlie looked at his boss, laughed, and said "About thirty minutes."

"But it took months this time!"

"And this is why I don't listen to you." Charlie went on to explain that it would take about a half hour to create the relevant groups, users and flags to simulate the new workflow, and that he would try to start the servers, queue managers and 16 copies of their application (one for each user in the scenario) on his PC to demonstrate that it would perform the required routing.

At the appointed time, both CEO and development manager returned and watched as Charlie made his machine groan through the motions to perform the desired flow processing. While it was busy thrashing, he used the opportunity to point out that they had no demo environment, that the developer workstations were simply not powerful enough to perform these sorts of demos, and he apologized for taking so much of their time. As the CEO listened to the hard disk grinding away, he agreed and offered to let them buy more powerful developer workstations.

After about an hour of going through all the different scenarios, the CEO was pleased, thanked Charlie for his time and told him to put in an order for the workstations the next day. Then they walked away.

Later that day, the development manager was transferred to manage another team and someone else was put in charge of Charlie's project. Shortly thereafter, Charlie grabbed the other developers and put together specs for their dream machines, which were ordered the next day.

Over the next year, configurations for more than a thousand different special case workflows were added, and not a single byte of code had to change.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Don MartiApple user research revealed, sort of

This is not normally the blog to come to for Apple fan posts (my ThinkPad, desktop Linux, cold dead hands, and so on) but really good work here on "Intelligent Tracking Prevention" in Apple Safari.

Looks like the spawn of Privacy Badger and cookie double-keying, designed to balance user protection from surveillance marketing with minimal breakage of sites that depend on third-party resources.

(Now all the webmasters will fix stuff to make it work with Intelligent Tracking Prevention, which makes it easier for other browsers and privacy tools to justify their own features to protect users. Of course, now the surveillance marketers will rely more on passive fingerprinting, and Apple has an advantage there because there are fewer different Safari-capable devices. But browsers need to fix fingerprinting anyway.)

Apple does massive amounts of user research and it's fun to watch the results leak through when they communicate about features. Looks like they have found that users care about being "followed" from site to site by ads, and that users are still pretty good at applied behavioral economics. The side effect of tracking protection, of course, is that it takes high-reputation sites out of competition with the bottom-feeders to reach their own audiences, so Intelligent Tracking Prevention is great news for publishers too.

Meanwhile, I don't get Google's weak "filter" thing. Looks like a transparently publisher-hostile move (since it blocks some potentially big-money ads without addressing the problem of site commodification), unless I'm missing something.

,

Sociological ImagesWhy are we so committed to the coal miner?

Originally posted at There’s Research on That!

With a group of coal miners standing behind him, President Donald Trump signed an executive order in his first 100 days reversing Obama-era climate change policies, claiming that he would bring back coal while putting miners to work. Yet, can or will coal mining jobs come back, and will this lead to economic and social development in places like Appalachia?

Probably not.

Much research has shown that the loss of mining jobs in the U.S. is largely due to mechanization and labor-cutting management practices — not environmental protections. Thus, placing the blame on climate change policies is unfounded. Instead, it’s used to scapegoat environmentalists and draw our attention away from corporations and changes in the global economy.

Even if Trump’s executive order could bring back the jobs, it might not have the effects coal miners are hoping for. Researchers find that mining does not always lead to economic growth and well-being, so keeping coal mines open does not guarantee prosperity. A study found that in West Virginia, the counties with coal mines have some of the highest poverty and unemployment rates compared to surrounding counties without active mines.

Moreover, sociologist William Freudenberg argues that economies based solely around mining are prone to booms and busts, subject to the whims of the industry. Towns in Appalachian coal country and the Bakken oil fields of North Dakota become “addicted” to extraction. But dependence on fossil fuel industries is economically precarious.

Why don’t these facts change miners’ deep ties to mining as a way of life? Because many have strong cultural connections to mining, often coming from multiple generations of miners. Through her experiences working in a coal mine, anthropologist Jessica Smith Roylston saw how the miner identity connects with masculine ideals of hard work and providing for one’s family.

Photo by nottsexminer; flickr creative commons.

Industry has tapped into these sentiments to generate public support and weave the industry into the fabric of community life. Mining companies, particularly in Appalachia, have actively worked to create a positive image through public relations and other cultural and political tactics, such as sponsoring high school football tournaments and billboard ads.

These corporate strategies place the blame on outsiders and environmentalists, provide a cover for environmentally destructive and job-cutting industry practices, and keep coal politically relevant.

Erik Kojola is a PhD student in the Department of Sociology at the University of Minnesota interested in the environment, labor, social movements and political economy.

(View original at https://thesocietypages.org/socimages)

CryptogramCIA's Pandemic Toolkit

WikiLeaks is still dumping CIA cyberweapons on the Internet. Its latest dump is something called "Pandemic":

The Pandemic leak does not explain what the CIA's initial infection vector is, but does describe it as a persistent implant.

"As the name suggests, a single computer on a local network with shared drives that is infected with the 'Pandemic' implant will act like a 'Patient Zero' in the spread of a disease," WikiLeaks said in its summary description. "'Pandemic' targets remote users by replacing application code on-the-fly with a Trojaned version if the program is retrieved from the infected machine."

The key to evading detection is its ability to modify or replace requested files in transit, hiding its activity by never touching the original file. The new attack then executes only on the machine requesting the file.

Version 1.1 of Pandemic, according to the CIA's documentation, can target and replace up to 20 different files with a maximum size of 800MB for a single replacement file.

"It will infect remote computers if the user executes programs stored on the pandemic file server," WikiLeaks said. "Although not explicitly stated in the documents, it seems technically feasible that remote computers that provide file shares themselves become new pandemic file servers on the local network to reach new targets."

The CIA describes Pandemic as a tool that runs as kernel shellcode that installs a file system filter driver. The driver is used to replace a file with a payload when a user on the local network accesses the file over SMB.

WikiLeaks page. News article.

EDITED TO ADD: In this case, Wikileaks has withheld the tool itself and just released the documentation.

Worse Than FailureCodeSOD: Variation on a Theme

If you’re not already aware, the Daily WTF is open source. We went the route of building our own CMS mostly because our application needs are pretty light. We don’t need themes, we don’t need WYSIWYG editors, we don’t need asset uploads. Also, with home-grown code, we know what’s in it, what it does, and any problems in the code are our own.

Which brings us to WordPress, land of the themes. There’s a cottage industry around building WordPress themes, and it’s a busy enough space that there are specialists in developing themes for specific industries. Alessandro ended up doing some work in the real estate business, tweaking a WP theme to change the way certain images would get displayed in a slide show.

$pic1 = get_field('pic_1');
$pic2 = get_field('pic_2');
$pic3 = get_field('pic_3');
$pic4 = get_field('pic_4');
$pic5 = get_field('pic_5');
$pic6 = get_field('pic_6');
$pic7 = get_field('pic_7');
$pic8 = get_field('pic_8');
$pic9 = get_field('pic_9');
$pic10 = get_field('pic_10');

if ($pic1!=""){
  if (strpos($pic1,'php') === false) {
  if (strpos($pic1,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic1); } else {
  if (strpos($pic1,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic1); } else {
  $pic1r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic1);
  echo '<li><a href="'.$pic1r.'" class="lbp_primary"><img src="'.$pic1r.'" /></a></li>';
  }
  }
  }
}
if ($pic2!=""){
  if (strpos($pic2,'php') === false) {
  if (strpos($pic2,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic2); } else {
  if (strpos($pic2,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic2); } else {
  $pic2r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic2);
  echo '<li><a href="'.$pic2r.'" class="lbp_primary"><img src="'.$pic2r.'" /></a></li>';
  }
  }
  }
}
if ($pic3!=""){
  if (strpos($pic3,'php') === false) {
  if (strpos($pic3,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic3); } else {
  if (strpos($pic3,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic3); } else {
  $pic3r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic3);
  echo '<li><a href="'.$pic3r.'" class="lbp_primary"><img src="'.$pic3r.'" /></a></li>';
  }
  }
  }
}
if ($pic4!=""){
  if (strpos($pic4,'php') === false) {
  if (strpos($pic4,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic4); } else {
  if (strpos($pic4,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic4); } else {
  $pic4r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic4);
  echo '<li><a href="'.$pic4r.'" class="lbp_primary"><img src="'.$pic4r.'" /></a></li>';
  }
  }
  }
}
if ($pic5!=""){
  if (strpos($pic5,'php') === false) {
  if (strpos($pic5,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic5); } else {
  if (strpos($pic5,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic5); } else {
  $pic5r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic5);
  echo '<li><a href="'.$pic5r.'" class="lbp_primary"><img src="'.$pic5r.'" /></a></li>';
  }
  }
  }
}
if ($pic6!=""){
  if (strpos($pic6,'php') === false) {
  if (strpos($pic6,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic6); } else {
  if (strpos($pic6,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic6); } else {
  $pic6r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic6);
  echo '<li><a href="'.$pic6r.'" class="lbp_primary"><img src="'.$pic6r.'" /></a></li>';
  }
  }
  }
}
if ($pic7!=""){
  if (strpos($pic7,'php') === false) {
  if (strpos($pic7,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic7); } else {
  if (strpos($pic7,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic7); } else {
  $pic7r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic7);
  echo '<li><a href="'.$pic7r.'" class="lbp_primary"><img src="'.$pic7r.'" /></a></li>';
  }
  }
  }
}
if ($pic8!=""){
  if (strpos($pic8,'php') === false) {
  if (strpos($pic8,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic8); } else {
  if (strpos($pic8,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic8); } else {
  $pic8r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic8);
  echo '<li><a href="'.$pic8r.'" class="lbp_primary"><img src="'.$pic8r.'" /></a></li>';
  }
  }
  }
}
if ($pic9!=""){
  if (strpos($pic9,'php') === false) {
  if (strpos($pic9,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic9); } else {
  if (strpos($pic9,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic9); } else {
  $pic9r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic9);

      if( file_exists( $pic9 ) ) {
        echo '<li><a href="'.$pic9r.'" class="lbp_primary"><img src="'.$pic9r.'" /></a></li>';
      }
  }
  }
  }
}
if ($pic10!=""){
  if (strpos($pic10,'php') === false) {
  if (strpos($pic10,'floorplan')) { $floor =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic10); } else {
  if (strpos($pic10,'EPC')) { $epc =  str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic10); } else {
  $pic10r = str_replace("[HIDDEN-URL]","/wp-content/uploads/",$pic10);
  echo '<li><a href="'.$pic10r.'" class="lbp_primary"><img src="'.$pic10r.'" /></a></li>';
  }
  }
  }
}

As you can see, the loop was unrolled for performance, and the extraneous tab characters were removed to keep the filesize down. It's easy to add or remove images from the slide-show, by doing a simple copy/paste action. This is some top-shelf code here.
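
For contrast, here is a hedged sketch of the same logic as a loop. It is written in TypeScript rather than the theme's PHP, and the declared get_field and echo functions merely stand in for the theme's own helpers:

declare function get_field(name: string): string;  // stand-in for the theme's field helper
declare function echo(html: string): void;          // stand-in for PHP's echo

let floor: string | undefined;  // kept for later use, as in the original
let epc: string | undefined;

for (let i = 1; i <= 10; i++) {
  const pic = get_field(`pic_${i}`);
  if (!pic || pic.includes("php")) { continue; }
  const url = pic.replace("[HIDDEN-URL]", "/wp-content/uploads/");
  if (pic.includes("floorplan")) {
    floor = url;
  } else if (pic.includes("EPC")) {
    epc = url;
  } else {
    // (the original also wrapped pic_9's output in a file_exists() check; omitted here)
    echo(`<li><a href="${url}" class="lbp_primary"><img src="${url}" /></a></li>`);
  }
}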

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main June 2017 Meeting

Jun 6 2017 18:30
Jun 6 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, June 6, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

Come have a drink with us and talk about Linux.  If you have something cool to show, please bring it along!

The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 2, week 8

This week we are starting into the last stretch of the term. Students are well into their final sections of work. Our youngest students are thinking about how we care for places, slightly older students are displaying their posters and older students are giving their presentations.

Foundation/Prep/Kindy to Year 3

Our youngest students doing the stand-alone Foundation/Prep/Kindy unit (F.2) are thinking about how we look after different places this week. Students in integrated Foundation/Prep/Kindy and Year 1 classes, doing Unit F.6, are displaying their posters on an issue in their local environment. These posters were prepared in preceding weeks and can now be displayed either at school or in a local library or hall. The teacher may choose to invite parents to view the posters as well. Students in Years 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) also have posters to display on a range of issues, either at the school, in a local place, such as a park, or even a local heritage place. Discussions around points of view and the intended audience of the posters can help students to gain a more in-depth understanding and critique their own work.

Years 3 to 6

Students in Years 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are in the second of 3 weeks set aside for their presentations. The presentations cover a significant body of work and thus 3 weeks of lessons are set aside for the presentations, as well as for finishing any other sections of work not yet completed. Year 3 students are considering extreme climate areas of Australia and other parts of the world, such as the Sahara Desert, Arctic and Antarctica and Mount Everest, by studying explorers such as Edmund Hillary and Tenzing Norgay, Robert Scott and Pawel Strzelecki. Year 4 students are studying explorers and the environments and animals of Africa and South America, such as Francisco Pizarro, the Giant Vampire Bat, Vasco Da Gama and the Cape Lion. Year 5 students are studying explorers, environments and animals of North America, such as Henry Hudson, Hernando de Soto and the Great Auk. Year 6 students are studying explorers, environments and indigenous peoples of Asia, such as Vitus Bering, Zheng He, Marco Polo, the Mongols and the Rus.

Planet Linux AustraliaColin Charles: Speaking in June 2017

I will be at several events in June 2017:

  • db tech showcase 2017 – 16-17 June 2017 – Tokyo, Japan. I’m giving a talk about best practices around MySQL High Availability.
  • O’Reilly Velocity 2017 – 19-22 June 2017 – San Jose, California, USA. I’m giving a tutorial about best practices around MySQL High Availability. Use code CC20 for a 20% discount.

I look forward to meeting with you at either of these events, to discuss all things MySQL (High Availability, security, cloud, etc.), and how Percona can help you.

As I write this, I’m in Budva, Montenegro, for the Percona engineering meeting.

Planet Linux AustraliaBen Martin: Six is the magic number

I have talked about controlling robot arms with 4 or 5 motors and the maths involved in turning a desired x,y,z target into servo angles. Things get a little too interesting with 6 motors, as you end up with a great many solutions to a positioning problem and need to work out a 'best' choice.


So I finally got MoveIt! to work to control a six-motor arm using ROS. I now also know that using MoveIt on lower-order arms isn't going to give you much love. Six is the magic number (plus a claw motor) to get things working, and patience is your best friend in getting the configuration and software setup going.

This was great as MoveIt was the last corner of the ROS stack that I hadn't managed to get to work for me. The great part is that the knowledge I gained playing with MoveIt will work on larger more accurate and expensive robot arms.

,

Planet Linux AustraliaLev Lafayette: The Why and How of HPC-Cloud Hybrids with OpenStack

High performance computing and cloud computing have traditionally been seen as separate solutions to separate problems, dealing with issues of performance and flexibility respectively. In a diverse research environment however, both sets of compute requirements can occur. In addition to the administrative benefits in combining both requirements into a single unified system, opportunities are provided for incremental expansion.

read more

,

CryptogramFriday Squid Blogging: Squid as Prey

There's lots of video of squid as undersea predators. This is one of the few instances of squid as prey (from a deep submersible in the Pacific):

"We saw brittle stars capturing a squid from the water column while it was swimming. I didn't know that was possible. And then there was a tussle among the brittle stars to see who got to have the squid," says France.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesCity design and car ownership: Infrastructure needs for carlessness

Flashback Friday.

The percent of carless households in any given city correlates very well with the percent of homes built before 1940. So what happened in the 40s?

According to Left for LeDroit, it was suburbs:

The suburban housing model was — and, for the most part, still is — based on several main principles, most significantly, the uniformity of housing sizes (usually large) and the separation of residential and commercial uses. Both larger lots and the separation of uses create longer distances between any two points, requiring a greater effort to go between home, work, and the grocery store.

These longer distances between daily destinations made walking impractical and the lower population densities made public transit financially unsustainable. The only solution was the private automobile, which, coincidentally, benefited from massive government subsidies in the form of highway building and a subsidized oil infrastructure and industry.

Neighborhoods designed after World War II are designed for cars, not pedestrians; the opposite is true for neighborhoods designed before 1940. Whether or not one owns a car, and how far one drives if they do, then, is dependent on the type of city, not personal characteristics like environmental friendliness.  Ezra Klein puts it nicely:

In practice, this doesn’t feel like a decision imposed by the cold realities of infrastructure. We get attached to our cars. We get attached to our bikes. We name our subway systems. We brag about our short walks to work. People attach stories to their lives. But at the end of the day, they orient their lives around pretty practical judgments about how best to live. If you need a car to get where you’re going, you’re likely to own one. If you rarely use your car, have to move it a couple of times a week to avoid street cleaning, can barely find parking and have trouble avoiding tickets, you’re going to think hard about giving it up. It’s not about good or bad or red or blue. It’s about infrastructure.

Word.

Neither Ezra nor Left for LeDroit, however, point out that every city, whether it was built for pedestrians or cars, is full of people without cars. In the case of car-dependent cities, this is mostly people who can’t afford to buy or own a car. And these people, in these cities, are royally screwed. Los Angeles, for example, is the most expensive place in the U.S. to own a car and residents are highly car-dependent; lower income people who can’t afford a car must spend extraordinary amounts of time using our mediocre public transportation system, such that carlessness contributes significantly to unemployment.

Originally posted in 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowNew Yorkers! I’ll see you tomorrow at Bookcon on the Walkaway tour (then SF, Chicago, Denver…) (!)

I just got to NYC for Bookcon, where I’m appearing tomorrow, at a “guest bookseller” event with John Scalzi at 11 at the Tor Booth (3008) (we’ll be talking up books we love!); then a panel with Charlie Jane Anders, Annalee Newitz and John Scalzi at 3PM (room 1E10), and finally a signing with Scalzi at 4:15PM in the autographing area.


It’s part of the Walkaway tour, which has me flying to San Francisco tomorrow night after Bookcon to attend the Bay Area Book Festival on Sunday, where I’m on two panels: Science Fiction and the Resistance, with Charlie Jane Anders, John Scalzi and Annalee Newitz; and When Reality Meets Science Fiction, with Meg Elison, Zachary Mason, and Annalee Newitz.

Next weekend, I’m in Chicago for the Printers Row festival, where I’ll be “in conversation” with Mary Robinette Kowal on Sunday at 1130AM. I return to Chicago the next week for ALA, before heading to Denver for Denver Comic-Con (I’ll also be at San Diego Comic-Con and Defcon later in the summer).

Walkaway continues to go gangbusters: this morning’s profile and review in the LA Times by Scott Timberg was a fine thing to wake up to, especially William Gibson’s commentary: “Literary naturalism is the unrecognized secret ingredient in a lot of my favorite science fiction. The characters are sexual beings, socioeconomic beings, products of thoroughly imagined cultures, etc. Naturalism, which I suppose more people would call realism today, was very thin on the ground in much of 20th Century genre sf. If the characters have sufficiently convincing lives, that organically balances talkiness and theory, and Cory’s fiction amply demonstrates that he knows that.”

I’m also extremely pleased with the number of purchasers who’ve availed themselves of my “fair trade” ebook and audiobook store, where you can buy electronic versions of all my books in a way that doubles my share of the sale-price, while delivering you a product with no DRM and no “license agreement” — just like with a real book!


(Image: Gary Coronado/Los Angeles Times)

CryptogramWannaCry and Vulnerabilities

There is plenty of blame to go around for the WannaCry ransomware that spread throughout the Internet earlier this month, disrupting work at hospitals, factories, businesses, and universities. First, there are the writers of the malicious software, which blocks victims' access to their computers until they pay a fee. Then there are the users who didn't install the Windows security patch that would have prevented an attack. A small portion of the blame falls on Microsoft, which wrote the insecure code in the first place. One could certainly condemn the Shadow Brokers, a group of hackers with links to Russia who stole and published the National Security Agency attack tools that included the exploit code used in the ransomware. But before all of this, there was the NSA, which found the vulnerability years ago and decided to exploit it rather than disclose it.

All software contains bugs or errors in the code. Some of these bugs have security implications, granting an attacker unauthorized access to or control of a computer. These vulnerabilities are rampant in the software we all use. A piece of software as large and complex as Microsoft Windows will contain hundreds of them, maybe more. These vulnerabilities have obvious criminal uses that can be neutralized if patched. Modern software is patched all the time -- either on a fixed schedule, such as once a month with Microsoft, or whenever required, as with the Chrome browser.

When the US government discovers a vulnerability in a piece of software, however, it decides between two competing equities. It can keep it secret and use it offensively, to gather foreign intelligence, help execute search warrants, or deliver malware. Or it can alert the software vendor and see that the vulnerability is patched, protecting the country -- and, for that matter, the world -- from similar attacks by foreign governments and cybercriminals. It's an either-or choice. As former US Assistant Attorney General Jack Goldsmith has said, "Every offensive weapon is a (potential) chink in our defense -- and vice versa."

This is all well-trod ground, and in 2010 the US government put in place an interagency Vulnerabilities Equities Process (VEP) to help balance the trade-off. The details are largely secret, but a 2014 blog post by then President Barack Obama's cybersecurity coordinator, Michael Daniel, laid out the criteria that the government uses to decide when to keep a software flaw undisclosed. The post's contents were unsurprising, listing questions such as "How much is the vulnerable system used in the core Internet infrastructure, in other critical infrastructure systems, in the US economy, and/or in national security systems?" and "Does the vulnerability, if left unpatched, impose significant risk?" They were balanced by questions like "How badly do we need the intelligence we think we can get from exploiting the vulnerability?" Elsewhere, Daniel has noted that the US government discloses to vendors the "overwhelming majority" of the vulnerabilities that it discovers -- 91 percent, according to NSA Director Michael S. Rogers.

The particular vulnerability in WannaCry is code-named EternalBlue, and it was discovered by the US government -- most likely the NSA -- sometime before 2014. The Washington Post reported both how useful the bug was for attack and how much the NSA worried about it being used by others. It was a reasonable concern: many of our national security and critical infrastructure systems contain the vulnerable software, which imposed significant risk if left unpatched. And yet it was left unpatched.

There's a lot we don't know about the VEP. The Washington Post says that the NSA used EternalBlue "for more than five years," which implies that it was discovered after the 2010 process was put in place. It's not clear if all vulnerabilities are given such consideration, or if bugs are periodically reviewed to determine if they should be disclosed. That said, any VEP that allows something as dangerous as EternalBlue -- or the Cisco vulnerabilities that the Shadow Brokers leaked last August -- to remain unpatched for years isn't serving national security very well. As a former NSA employee said, the quality of intelligence that could be gathered was "unreal." But so was the potential damage. The NSA must avoid hoarding vulnerabilities.

Perhaps the NSA thought that no one else would discover EternalBlue. That's another one of Daniel's criteria: "How likely is it that someone else will discover the vulnerability?" This is often referred to as NOBUS, short for "nobody but us." Can the NSA discover vulnerabilities that no one else will? Or are vulnerabilities discovered by one intelligence agency likely to be discovered by another, or by cybercriminals?

In the past few months, the tech community has acquired some data about this question. In one study, two colleagues from Harvard and I examined over 4,300 disclosed vulnerabilities in common software and concluded that 15 to 20 percent of them are rediscovered within a year. Separately, researchers at the Rand Corporation looked at a different and much smaller data set and concluded that fewer than six percent of vulnerabilities are rediscovered within a year. The questions the two papers ask are slightly different and the results are not directly comparable (we'll both be discussing these results in more detail at the Black Hat Conference in July), but clearly, more research is needed.

People inside the NSA are quick to discount these studies, saying that the data don't reflect their reality. They claim that there are entire classes of vulnerabilities the NSA uses that are not known in the research world, making rediscovery less likely. This may be true, but the evidence we have from the Shadow Brokers is that the vulnerabilities that the NSA keeps secret aren't consistently different from those that researchers discover. And given the alarming ease with which both the NSA and CIA are having their attack tools stolen, rediscovery isn't limited to independent security research.

But even if it is difficult to make definitive statements about vulnerability rediscovery, it is clear that vulnerabilities are plentiful. Any vulnerabilities that are discovered and used for offense should only remain secret for as short a time as possible. I have proposed six months, with the right to appeal for another six months in exceptional circumstances. The United States should satisfy its offensive requirements through a steady stream of newly discovered vulnerabilities that, when fixed, also improve the country's defense.

The VEP needs to be reformed and strengthened as well. A report from last year by Ari Schwartz and Rob Knake, who both previously worked on cybersecurity policy at the White House National Security Council, makes some good suggestions on how to further formalize the process, increase its transparency and oversight, and ensure periodic review of the vulnerabilities that are kept secret and used for offense. This is the least we can do. A bill recently introduced in both the Senate and the House calls for this and more.

In the case of EternalBlue, the VEP did have some positive effects. When the NSA realized that the Shadow Brokers had stolen the tool, it alerted Microsoft, which released a patch in March. This prevented a true disaster when the Shadow Brokers exposed the vulnerability on the Internet. It was only unpatched systems that were susceptible to WannaCry a month later, including versions of Windows so old that Microsoft normally didn't support them. Although the NSA must take its share of the responsibility, no matter how good the VEP is, or how many vulnerabilities the NSA reports and the vendors fix, security won't improve unless users download and install patches, and organizations take responsibility for keeping their software and systems up to date. That is one of the important lessons to be learned from WannaCry.

This essay originally appeared in Foreign Affairs.

Worse Than FailureError'd: A World Turned Upside Down

John A. wrote, "Wait, so 'Cancel' is 'Continue' and 'OK' is really 'Cancel'!?"

 

"Not only that; we are thankful too!" writes Bob S.

 

"With those NaN folks viewing my profile on Academaia.edu, it's only a matter of time before the world beats a path to my door!" writes Jason K.

 

Andrea S. wrote, "I'm not sure what fail(ed) but it definitely fail(ed)."

 

"Never thought I'd say it, but thank goodness for unpatched OS! Now I don't have to pay the parking meter!" writes Juan J.

 

"Congrats on your new job... Oh wait...what?!" Leslie A. writes.

 

Tony wrote, "My laptop seems to think it's REALLY good at charging its battery."

 

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

,

CryptogramPasswords at the Border

The password-manager 1Password has just implemented a travel mode that tries to protect users while crossing borders. It doesn't make much sense. To enable it, you have to create a list of passwords you feel safe traveling with, and then you can turn on the mode that only gives you access to those passwords. But since you can turn it off at will, a border official can just demand you do so. Better would be some sort of time lock where you are unable to turn it off at the border.

There are a bunch of tricks you can use to ensure that you are unable to decrypt your devices, even if someone demands that you do. Back in 2009, I described such a scheme, and mentioned some other tricks the year before. Here's more. They work with any password manager, including my own Password Safe.

There's a problem, though. Everything you do along these lines is problematic, because 1) you don't want to ever lie to a customs official, and 2) any steps you take to make your data inaccessible are in themselves suspicious. Your best defense is not to have anything incriminating on your computer or in the various social media accounts you use. (This advice was given to Australian citizens by their Department of Immigration and Border Protection specifically to Muslim pilgrims returning from the hajj. Bizarrely, an Australian MP complained when Muslims repeated that advice.)

The EFF has a comprehensive guide to both the tech and policy of securing your electronics for border crossings.

Planet Linux AustraliaDanielle Madeley: Update on python-pkcs11

I spent a bit of time fleshing out the support matrix for python-pkcs11 and getting things that aren’t SoftHSM into CI for integration testing (there’s still no one-command rollout for BuildBot connected to GitHub, but I got there in the end).

The nice folks at Nitrokey are also sending me some devices to widen the compatibility matrix. Also happy to make it work with CloudHSM if someone at Amazon wants to hook me up!

I also put together API docs that hopefully help to explain how to actually use the thing and added support for RFC3279 to pyasn1_modules (so you can encode your elliptic curve parameters).

Next goal is to open up my Django HSM integrations to add encrypted database fields, encrypted file storage and various other offloads onto the HSM. Also look at supporting certificate objects for all that wonderful stuff.

Krebs on SecurityOneLogin: Breach Exposed Ability to Decrypt Data

OneLogin, an online service that lets users manage logins to sites and apps from a single platform, says it has suffered a security breach in which customer data was compromised, including the ability to decrypt encrypted data.

Headquartered in San Francisco, OneLogin provides single sign-on and identity management for cloud-based applications. OneLogin counts among its customers some 2,000 companies in 44 countries, over 300 app vendors and more than 70 software-as-a-service providers.

A breach that allowed intruders to decrypt customer data could be extremely damaging for affected customers. After OneLogin customers sign into their account, the service takes care of remembering and supplying the customer’s usernames and passwords for all of their other applications.

In a brief blog post Wednesday, OneLogin chief information security officer Alvaro Hoyos wrote that the company detected unauthorized access to OneLogin data.

“Today we detected unauthorized access to OneLogin data in our US data region. We have since blocked this unauthorized access, reported the matter to law enforcement, and are working with an independent security firm to determine how the unauthorized access happened and verify the extent of the impact of this incident. We want our customers to know that the trust they have placed in us is paramount.”

“While our investigation is still ongoing, we have already reached out to impacted customers with specific recommended remediation steps and are actively working to determine how best to prevent such an incident from occurring in the future and will update our customers as these improvements are implemented.”

OneLogin’s blog post includes no other details, aside from a reference to the company’s compliance page. The company has not yet responded to a request for comment. However, Motherboard has obtained a copy of a message OneLogin reportedly sent to its customers about the incident, and that missive contains a critical piece of information:

“Customer data was compromised, including the ability to decrypt encrypted data,” reads the message OneLogin sent to customers.

According to Motherboard, the message also directed customers to a list of required steps to minimize any damage from the breach, such as generating new API keys and OAuth tokens (OAuth being a system for logging into accounts), creating new security certificates as well as credentials; recycling any secrets stored in OneLogin’s Secure Notes feature; and having end-users update their passwords.

Gartner Inc. financial fraud analyst Avivah Litan said she has long discouraged companies from using cloud-based single sign-on services, arguing that they are the digital equivalent to an organization putting all of its eggs in one basket.

“It’s just such a massive single point of failure,” Litan said. “And this breach shows that other [cloud-based single sign-on] services are vulnerable, too. This is a big deal and it’s disruptive for victim customers, because they have to now change the inner guts of their authentication systems and there’s a lot of employee inconvenience while that’s going on.”

KrebsOnSecurity will likely update this story throughout the day as more details become available.

Update 7:54 p.m. ET: OneLogin posted an update to its blog with more details about the breach:

“Our review has shown that a threat actor obtained access to a set of AWS keys and used them to access the AWS API from an intermediate host with another, smaller service provider in the US. Evidence shows the attack started on May 31, 2017 around 2 am PST. Through the AWS API, the actor created several instances in our infrastructure to do reconnaissance. OneLogin staff was alerted of unusual database activity around 9 am PST and within minutes shut down the affected instance as well as the AWS keys that were used to create it.”

Customer impact

“The threat actor was able to access database tables that contain information about users, apps, and various types of keys. While we encrypt certain sensitive data at rest, at this time we cannot rule out the possibility that the threat actor also obtained the ability to decrypt data. We are thus erring on the side of caution and recommending actions our customers should take, which we have already communicated to our customers.”

Worse Than FailureCodeSOD: Switched Over

Twelve years ago, a company decided they needed a website. They didn’t have any web developers, and they didn’t want to hire any, so they threw a PHP manual at the new hire who happened to “be good with computers”, and called it a day.

Ms. “Good With Computers” actually learned something from the experience, and moved on to a lucrative career in web development. Unfortunately, she left behind the code she learned by doing, and now Bert has been brought in to clean it up.

Take this block, which translates an airport code into a location name…

    switch ($v) {
        case "EIN": $ori = "Eindhoven";
            break;
        case "EIN": $ori = "Eindhoven";
            break;
        case "EIN": $ori = "Eindhoven";
            break;
        case "EIN": $ori = "Eindhoven";
            break;
        case "EIN": $ori = "Eindhoven";
            break;
        case "EIN": $ori = "Eindhoven";
            break;
        default: break;
    }

    switch ($b) {
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        case "WRO": $des = "Wroclaw";
            break;
        default: break;
    }

There are hundreds of lines of this, all following the same pattern: for each possible airport code, a switch block with the same case repeated many times. These blocks don’t exist inside a function, either; they’re slapped right into the global namespace.

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

Krebs on SecurityCredit Card Breach at Kmart Stores. Again.

For the second time in less than three years, Kmart Stores is battling a malware-based security breach of its store credit card processing systems.

Last week I began hearing from smaller banks and credit unions who said they strongly suspected another card breach at Kmart. Some of those institutions received alerts from the credit card companies about batches of stolen cards that all had one thing in common: They were all used at Kmart locations.

Asked to respond to rumors about a card breach, Kmart’s parent company Sears Holdings said some of its payment systems were infected with malicious software:

“We recently became aware that Sears Holdings was a victim of a security incident involving unauthorized credit card activity following certain customer purchases at some of our Kmart stores. We immediately launched a thorough investigation and engaged leading third party forensic experts to review our systems and secure the affected part of our network.”

“Our Kmart store payment data systems were infected with a form of malicious code that was undetectable by current anti-virus systems and application controls. Once aware of the new malicious code, we quickly removed it and contained the event. We are confident that our customers can safely use their credit and debit cards in our retail stores.”

“Based on the forensic investigation, NO PERSONAL identifying information (including names, addresses, social security numbers, and email addresses) was obtained by those criminally responsible. However, we believe certain credit card numbers have been compromised. Nevertheless, in light of our EMV compliant point of sale systems, which rolled out last year, we believe the exposure to cardholder data that can be used to create counterfeit cards is limited. There is also no evidence that kmart.com or Sears customers were impacted.”

Sears spokesman Chris Brathwaite said the company is not commenting on how many of Kmart’s 735 locations nationwide may have been impacted or how long the breach is believed to have persisted, saying the investigation is ongoing.

“Given the criminal nature of this attack, Kmart is working closely with federal law enforcement authorities, our banking partners, and IT security firms in this ongoing investigation,” Sears Holdings said in its statement. “We are actively enhancing our defenses in light of this new form of malware. Data security is of critical importance to our company, and we continuously review and improve the safeguards that protect our data in response to changing technology and new threats.”

In October 2014, Sears announced a very similar breach in which the company also stressed that the data stolen did not include customer names, email addresses or other personal information. 

Both breaches involved malware designed to steal credit and debit card data from hacked point-of-sale (POS) devices. The malware copies account data stored on the card’s magnetic stripe. Armed with that information, thieves can effectively clone the cards and use them to buy high-priced merchandise from electronics stores and big box retailers.

At least two financial industry sources told KrebsOnSecurity that the breach does not appear to be affecting all Kmart stores. Those same sources said that if the breach had hit all Kmart locations, they would expect to be seeing much bigger alerts from the credit card companies about accounts that are potentially compromised.

All Kmart stores in the United States now have credit card terminals capable of processing transactions from more secure chip-based cards. The chip essentially makes the cards far more difficult and expensive to counterfeit. But not all banks have issued customers chip-enabled cards yet, so this latest breach at Kmart likely impacts mainly Kmart customers who shopped at the store using non-chip-enabled cards.

Visa said in March 2017 there were more than 421 million Visa chip cards in the country, representing 58 percent of Visa cards. According to Visa, counterfeit fraud has been declining month over month — down 58 percent at chip-enabled merchants in December 2016 when compared to the previous year.

Sears also has released a FAQ (PDF) that includes a bit more information about this breach disclosure.

,

Google AdsenseIntroducing page-level enforcements and a new Policy center

As a publisher you face many challenges. One of the broadest and most encompassing of these is growing your user base while making sure your content remains high-quality and policy compliant. Your feedback has helped us understand this challenge, and we’re always working to improve. A few weeks ago, we announced two new AdSense features: page-level enforcements and a new Policy center. Today, we’re excited to let you know that these features are available globally for all AdSense publishers.

Page-level enforcements for more granular policy actions
To allow more precise enforcements, and provide you with feedback about policy issues as we identify them, we’re introducing page-level enforcements. A page-level enforcement affects individual pages where violations of the AdSense Program Policies are found. As a result, ad serving is restricted or disabled on those pages. Ads will continue to serve where no policy violations have been found, either at the page- or site-level.

When a new policy violation on one of your pages is identified, you’ll receive an email notification and ad serving will be restricted on that page. As this is a new feature, you may already have current page-level enforcements that were not surfaced through these email notifications. To make sure you’re not missing anything, head over to the new Policy center to review existing violations.

After you've addressed all policy violations on a page, you may request a review (previously known as an “appeal”). Reviews typically take one week but can sometimes take longer. We'll restore ad serving on the affected page or pages if a page is reviewed at your request and no policy violations are found. Alternatively, you can simply remove the AdSense ad code from that page and the page-level enforcement will disappear from the Policy center in about a week.

More transparency with the new AdSense Policy center

The AdSense Policy center is a one-stop shop for everything you need to know about policy actions that affect your sites and pages. You’ll be able to see:
  • Non-compliant page(s) or site(s)
  • Why a page or site is non-compliant
  • Steps needed to make your page or site compliant 
  • Steps to request a review of the actioned page(s) or site(s)


Follow these steps to see your current page-level enforcements, and request a review of the actioned page(s):
  1. Sign in to your AdSense account.
  2. In the left navigation panel, click Settings, then click Policy center.
  3. In the "Page-level enforcements" section, find the site or sites that have page-level violations and click Show details.
  4. In the "Page" section, click the Down arrow to learn more about the enforcement, the violation(s) on the page, and how to fix them. 
  5. Click Request review and tick the box after you’ve made sure the violations on the page are fixed.

Our beta participants provided a lot of great feedback and suggestions on how to make the AdSense Policy center as useful as possible. We’re constantly looking to improve the clarity with which we communicate our policies and policy enforcements, so let us know what you think through the “Send feedback” link in the AdSense menu.

Learn more about these updates in the AdSense Help Center or head over to the Policy center to try it out.

Posted by: John Brown, Head of Publisher Policy Communications, 
Richard Zippel, Publisher Quality Product Manager and 
Nick Radicevic, AdSense Product Manager

Sociological ImagesSexism in ratings of intelligence across the life cycle

The average man thinks he’s smarter than the average woman. And women generally agree.

It starts early. At the age of five, most girls and boys think that their own sex is the smartest, a finding consistent with the idea that people tend to think more highly of people like themselves. Around age six, though, right when gender stereotypes tend to take hold among children, girls start reporting that they think boys are smarter, while boys continue to favor themselves and their male peers.

They may have learned this from their parents. Both mothers and fathers tend to think that their sons are smarter than their daughters. They’re more likely to ask Google if their son is a “genius” (though also whether they’re “stupid”). Regarding their daughters, they’re more likely to inquire about attractiveness.

Image via New York Times.

Once in college, the trend continues. Male students overestimate the extent to which their male peers have “mastered” biology, for example, and underestimate their female peers’ mastery, even when grades and outspokenness are accounted for. To put a number on it, male students with a 3.00 G.P.A. were evaluated as equally smart as female students with a 3.75 G.P.A.

When young scholars go professional, the bias persists. More so than women, men go into and succeed in fields that are believed to require raw, innate brilliance, while women more so than men go into and succeed in fields that are believed to require only hard work.

Once in a field, if brilliance can be attributed to a man instead of a woman, it often will be. Within the field of economics, for example, solo-authored work increases a woman’s likelihood of getting tenure, and a paper co-authored with another woman helps as well, but a paper co-authored with a man has zero effect. Male authors, meanwhile, are given credit in all cases.

In negotiations over raises and promotions at work, women are more likely to be lied to, on the assumption that they’re not smart enough to figure out that they’re being given false information.

——————————

Overall, and across countries, men rate themselves as higher in analytical intelligence than women, and often women agree. Women are often rated as more verbally and emotionally intelligent, but the analytical types of intelligence (such as mathematical and spatial) are more strongly valued. When intelligence is not socially constructed as male, it’s constructed as masculine. Hypothetical figures presented as intelligent are judged as more masculine than less intelligent ones.

All this matters.

By age 6, some girls have already started opting out of playing games that they’re told are for “really, really smart” children. The same internalized sexism may lead young women to avoid academic disciplines that are believed to require raw intelligence. And, over the life course, women may be less likely than men to take advantage of career opportunities that they believe demand analytical thinking.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowBad news: tech is making us more unequal. Good news: tech can make us more equal.

My latest Guardian column is Technology is making the world more unequal. Only technology can fix this; in it, I argue that surveillance and control technology allow ruling elites to hold onto power despite the destabilizing effects of their bad decisions — but that technology also allows people to form dissident groups and protect them from intrusive states.

The question, then, isn’t whether technology makes the world more equal and prosperous, but how to use technology to attain those goals.


This is the question my novel Walkaway grapples with: which technologies make the future better, and how can we use them to defend ourselves against the technologies that make the future worse?

After all, there comes a point when the bill for guarding your wealth exceeds the cost of redistributing some of it, so you won’t need so many guards.

But that’s where technology comes in: surveillance technology makes guarding the elites much cheaper than it’s ever been. GCHQ and the NSA have managed to put the entire planet under continuous surveillance. Less technologically advanced countries can play along: Ethiopia was one of the world’s first “turnkey surveillance states”, a country with a manifestly terrible, looting elite class that has kept guillotines and firing squads at bay through buying in sophisticated spying technology from European suppliers, and using this to figure out which dissidents, opposition politicians and journalists represent a threat, so it can subject them to arbitrary detention, torture and, in some cases, execution.

As technology pervades, spying becomes cheaper and inequality becomes more stable – but not infinitely stable. With enough inequality over enough time, the cherished idiocies of the ruling elites will eventually cause a collapse. All technology does is delay it, which is terrible news, since the longer a foolish policy is in place, the more of a policy-debt we incur, and the worse the payback will be: lost generations, rising seas, etc.

That’s the bad news.


Technology is making the world more unequal. Only technology can fix this
[Cory Doctorow/The Guardian]

CryptogramPost-Quantum RSA

Interesting research on a version of RSA that is secure against a quantum computer:

Post-quantum RSA

Daniel J. Bernstein, Nadia Heninger, Paul Lou, and Luke Valenta

Abstract: This paper proposes RSA parameters for which (1) key generation, encryption, decryption, signing, and verification are feasible on today's computers while (2) all known attacks are infeasible, even assuming highly scalable quantum computers. As part of the performance analysis, this paper introduces a new algorithm to generate a batch of primes. As part of the attack analysis, this paper introduces a new quantum factorization algorithm that is often much faster than Shor's algorithm and much faster than pre-quantum factorization algorithms. Initial pqRSA implementation results are provided.
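
To make the batch-of-primes idea concrete, here is a toy multi-prime RSA sketch in Python. It is my own illustration of the general structure, not the paper’s algorithm or parameter choices, and the sizes are shrunk to the point of being cryptographically useless so that it runs instantly (pqRSA works with 4096-bit primes and terabyte-scale moduli).

    import math
    from sympy import randprime

    PRIME_BITS = 64   # pqRSA uses 4096-bit primes
    NUM_PRIMES = 8    # pqRSA uses enormously many primes per key
    E = 65537         # a generic public exponent, not the paper's choice

    # Generate a batch of distinct primes whose product becomes the modulus.
    while True:
        primes = {randprime(2**(PRIME_BITS - 1), 2**PRIME_BITS) for _ in range(NUM_PRIMES)}
        phi = math.prod(p - 1 for p in primes)
        if len(primes) == NUM_PRIMES and math.gcd(E, phi) == 1:
            break

    n = math.prod(primes)   # public modulus: the product of the whole batch
    d = pow(E, -1, phi)     # private exponent (requires Python 3.8+)

    message = 42
    ciphertext = pow(message, E, n)
    assert pow(ciphertext, d, n) == message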

Worse Than FailureDot-Matrix Ragnarok

Fresh out of college and out of money, Johan K. started work at Midgard Manufacturing as a junior developer. His supervisor Ragna made it clear: he would only be responsible for coding error handlers.

“Our plant equipment is several decades old,” she said, “and we have to rely on the manufacturer-provided documentation for adequate coverage. To be honest, none of the senior developers want to bother. So instead of bothering one of them, you’ll be doing it.”

Johan didn’t have a problem with this. He knew he’d have to earn his chops at Midgard. But after he was given the stack of manuals that towered over his monitor, he wondered just how long it would take.

Pride Goeth Before a Fall

One month later, he had made it through a hundred pages of the first manual. Ragna checked in periodically, more interested in how complete his code was than how quickly he was getting through his list of error conditions.

the painting 'Thor and the Midgard Serpent'

The manual he was working on was for a steel press. Vintage 1985, the steel press’s firmware - using Microsoft Xenix - relied on the flakiest of communication protocols: passing information via temp file. Every night around 2 AM, the press would go down for maintenance. A shell script, written by Johan, would read data from the temp file the steel press had created, enter it into a SQL database, then delete the file from disk.

Half the lines of code in Johan’s script were if/then blocks, one for each condition listed in the manual.

# if the temp file was created successfully
...
# if the local file was written successfully
...
# if the temp file was successfully removed from the firmware
...

That last one got Johan thinking. When had an rm command ever failed silently? Would he really need to check that the file was removed? The conditions under which an rm could fail seemed as impossible as, well, the end of the world. However, as it was listed as an error condition in the manual, he had to put something in.

echo "Ragnarok has come, the Midgard serpent awakes" > /dev/lpr

In this unlikely event, an error would be spit out to the old dot matrix printer in the office, so someone would see it. For good measure, he also had an email sent to all@midgardmanufacturing.se and an SMS text fired off. Johan thought that should take care of it.

Hour of the Wolf

Several months later, Johan had finished coding the conditions in the first manual. His script had been tested and deployed, and it now monitored the steel press. Things had gone well for a few weeks.

Then, at 3 AM, his phone started buzzing.

The screen filled with text messages. Each read the same: Ragnarok has come, the Midgard serpent awakes. His inbox overflowed with a thousand emails, each with the same text as the alert. At first he thought it was a prank by one of the senior developers.

Then he remembered.

He rushed to the office, half-awake. Entering the unlit room, he heard something tearing and squealing from the other side. After stumbling over some office chairs, he found the source: the dot matrix printer. It had chewed through an entire box of feed paper.

Ragnarok has come, the Midgard serpent awakes
Ragnarok has come, the Midgard serpent awakes
Ragnarok has come, the Midgard serpent awakes
...

Not Worthy of Valhalla

Ragna was surprisingly forgiving of Johan, who wrote out a proper error message, emailed only to himself and a few other staff, so that his script could fail gracefully.

The senior developers, though, weren’t so kind. His cubicle walls were papered with sheets of line feed, glued so they couldn’t be pulled off, the apocalyptic words smeared in ribbon ink. Someone impersonating Loki from the Marvel movies would randomly message him from an unlisted number. He even found a plush snake wrapped around his computer monitor one morning.

Johan thought it was a good lesson in hubris. He knew it could have been worse: at least it hadn’t gone out to all of Midgard’s customers.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Don MartiThe third connection in Benkler's Tripod

Here's a classic article by Yochai Benkler: Coase's Penguin, or Linux and the Nature of the Firm.

Benkler builds on the work of Ronald Coase, whose The Nature of the Firm explains how transaction costs determine when organizing work inside a firm is more efficient than buying it on the market. Benkler adds a third organizational model, peer production. Peer production, commonly seen in open source projects, is good at matching creative people to rewarding problems.

As peer production relies on opening up access to resources for a relatively unbounded set of agents, freeing them to define and pursue an unbounded set of projects that are the best outcome of combining a particular individual or set of individuals with a particular set of resources, this open set of agents is likely to be more productive than the same set could have been if divided into bounded sets in firms.

Firms, markets, and peer production all have their advantages, and in the real world, most productive activity is mixed.

  • Managers in firms manage some production directly and trade in markets for other production. This connection in the firms/markets/peer production tripod is as old as firms.

  • The open source software business is the second connection. Managers in firms both manage software production directly and sponsor peer production projects, or manage employees who participate in projects.

But what about the third possible connection between legs of the tripod? Is it possible to make a direct connection between peer production and markets, one that doesn't go through firms? And why would you want to connect peer production directly to markets in the first place? Not just because that's where the money is, but because markets are a good tool for getting information out of people, and projects need information. Stefan Kooths, Markus Langenfurth, and Nadine Kalwey wrote, in "Open-Source Software: An Economic Assessment" (PDF),

Developers lack key information due to the absence of pricing in open-source software. They do not have information concerning customers’ willingness to pay (= actual preferences), based on which production decisions would be made in the market process. Because of the absence of this information, supply does not automatically develop in line with the needs of the users, which may manifest itself as oversupply (excessive supply) or undersupply (excessive demand). Furthermore, the functional deficits in the software market also work their way up to the upstream factor markets (in particular, the labor market for developers) and–depending on the financing model of the open-source software development–to the downstream or parallel complementary markets (e.g., service markets) as well.

Because the open-source model at its core deliberately rejects the use of the market as a coordination mechanism and prevents the formation of price information, the above market functions cannot be satisfied by the open-source model. This results in a systematic disadvantage in the provision of software in the open-source model as compared to the proprietary production process.

The workaround is to connect peer production to markets by way of firms. But the more that connections between markets and peer production projects have to go through firms, the more chances there are to lose information. That's not because firms are necessarily dysfunctional (although most are, in different ways). A firm might rationally choose to pay for a feature it predicts will bring in 100 new users at $5,000 each, instead of a feature that adds $1,000 of value for each of 1,000 existing users but whose absence won't stop those users from renewing.

Some ways to connect peer production to markets are already working. Crowdfunding for software projects and Patreon are furthest along, both offering support for developers who have already built a reputation.

A decentralized form of connection is Tokens, which Balaji S. Srinivasan describes as a tradeable version of API keys. If I believe that your network service will be useful to me in the future, I can pre-buy access to it. If I think your service will really catch on, I can buy a bunch of extra tokens and sell them later, without needing to involve you. (and if your service needs network effects, now I have an incentive to promote it, so that there will be a seller's market for the tokens I hold.)
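
As a purely illustrative sketch (the class, names and numbers below are invented; a real token system would add cryptographic ownership, pricing and usually a shared ledger), the core bookkeeping is just prepaid, transferable access credits that the service honors for whoever currently holds them:

    # Toy ledger of "tradeable API keys": prepaid access credits the service
    # will honor for whoever holds them, so they can change hands on a
    # secondary market without involving the service operator.

    class TokenLedger:
        def __init__(self):
            self.balances = {}  # holder -> remaining prepaid API calls

        def issue(self, buyer, calls):
            """The service sells `calls` worth of future access to `buyer`."""
            self.balances[buyer] = self.balances.get(buyer, 0) + calls

        def transfer(self, seller, buyer, calls):
            """Holders trade tokens among themselves; the service is not involved."""
            if self.balances.get(seller, 0) < calls:
                raise ValueError("not enough tokens")
            self.balances[seller] -= calls
            self.balances[buyer] = self.balances.get(buyer, 0) + calls

        def redeem(self, holder):
            """Spend one token to make one API call."""
            if self.balances.get(holder, 0) < 1:
                raise ValueError("no tokens left")
            self.balances[holder] -= 1

    ledger = TokenLedger()
    ledger.issue("early-believer", 1000)                     # pre-buying access before launch
    ledger.transfer("early-believer", "late-adopter", 400)   # resold later, possibly at a profit
    ledger.redeem("late-adopter")                            # one API call consumed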

Dominant assurance contracts, by Alexander Tabarrok, build on the crowdfunding model, with the extra twist that the person proposing the project has to put up some seed money that is divided among backers if the project fails to secure funding. This is supposed to bring in extra investment early on, before a project looks likely to meet its goal.
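
A toy settlement function makes the twist concrete (again, my own illustration of the description above, not Tabarrok's formal model):

    # Toy dominant assurance contract settlement: if funding fails, backers get
    # their pledges back plus a share of the proposer's escrowed seed money, so
    # pledging is never worse than staying out.

    def settle(pledges, goal, seed):
        """pledges: {backer: amount}; goal: funding target; seed: proposer's escrow."""
        total = sum(pledges.values())
        if total >= goal:
            # Success: pledges fund the project, the proposer gets the seed back.
            return {"project": total, "proposer_refund": seed,
                    "backer_payouts": {b: 0 for b in pledges}}
        # Failure: refund every pledge and pay out the seed as a bonus.
        bonus = seed / len(pledges) if pledges else 0
        return {"project": 0, "proposer_refund": 0,
                "backer_payouts": {b: amount + bonus for b, amount in pledges.items()}}

    print(settle({"alice": 40, "bob": 30}, goal=100, seed=10))   # fails: refunds plus bonus
    print(settle({"alice": 60, "bob": 50}, goal=100, seed=10))   # succeeds: project funded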

Tom W. Bell's "SPEX", in Prediction Markets for Promoting the Progress of Sciences and the Useful Arts, is a proposed market to facilitate transactions in a variety of prediction certificates, each one of which promises to pay its bearer in the event that an associated claim about science, technology, or public policy comes true. The SPEX looks promising as a way for investors to hedge their exposure to lack of innovation. If you own data centers and need energy, take a short position in SPEX contracts on cold fusion. (Or, more likely, buy into a SPEX fund that invests for your industry.) The SPEX looks like a way to connect the market to more difficult problems than the kinds of incremental innovation that tend to be funded through the VC system.

What happens when the software industry is forced to grow up?

I'm starting to think that finishing the tripod, with better links from markets to peer production, is going to matter a lot more soon, because of the software quality problem.

Today's software, both proprietary and open source, is distributed under ¯\_(ツ)_/¯ terms. "Disclaimer of implied warranty of merchantability" is lawyer-speak for "we reserve the right to half-ass our jobs lol." As Zeynep Tufekci wrote in the New York Times, "The World Is Getting Hacked. Why Don’t We Do More to Stop It?" At some point the users are going to get fed up, and we're going to have to. An industry as large and wealthy as software, still sticking to Homebrew Computer Club-era disclaimers, is like a 40-something-year-old startup bro doing crimes and claiming that they're just boyish hijinks. This whole disclaimer of implied warranty thing is making us look stupid, people. (No, I'm not for warranties on software that counts as a scientific or technical communication, or on bona fide collaborative development, but on a product product? Come on.)

Grown-up software liability policy is coming, but we're not ready for it. Quality software is not just a technically hard problem. Today, we're set up to move fast, break things, and ship dancing pigs—with incentives more powerful than incentives to build secure software. Yes, you get the occasional DARPA initiative or tool to facilitate incremental cleanup, but most software is incentivized through too many layers of principal-agent problems. Everything is broken.

If governments try to fix software liability before the software scene can fix the incentives problem, then we will end up with a stifled, slowed-down software scene, a few incumbent software companies living on regulatory capture, and probably not much real security benefit for users. But what if users (directly or through their insurance companies) are willing to pay to avoid the costs of broken software, in markets, and open source developers are willing to participate in peer production to make quality software, but software firms are not set up to connect them?

What if there is another way to connect the "I would rather pay a little more and not get h@x0r3d!" demand to the "I would code that right and release it in open source, if someone would pay for it" supply?