Planet Russell


Worse Than Failure: Announcements: Build Totally Non-WTF Products at Inedo

As our friends at HIRED will attest, finding a good workplace is tough, for both the employee and the employer. Fortunately, when it comes to looking for developer talent, Inedo has a bit of an advantage: in addition to being a DevOps products company, we publish The Daily WTF.

Not too long ago, I shared a Support Analyst role here and ended up hiring fellow TDWTFer Ben Lubar to join the Inedo team. He's often on the front lines, supporting our customer base; but he's also done some interesting dev projects (including a SourceGear Vault to Git migration tool).

Today, we're looking for another developer to work from our Cleveland office. Our code is all in .NET, but we have a lot of integrations; so if you can write C# fairly comfortably but know Docker really well, then that's a great fit. The reason is that, as a software product company that builds tools for other developers, you'll do more than just write C# - in fact, a big part of the job will be resisting the urge to write mountains of code that don't actually solve a real problem. More often than not, a bit of support, a tutorial, an extension/plug-in, and better documentation go a heck of a lot further than new core product code.

We do have a couple of job postings for the position (one on Inedo.com, the other on Indeed), and you're welcome to read those to get a feel for the actual bullet points. But if you're reading this and are interested in learning more, you can use the VIP line and bypass the normal process: just shoot me an email directly at apapadimoulis at inedo dot com with "[TDWTF/Inedo] .NET Developer" as the subject and your resume attached.

Oh, we're also looking for a Community Manager, to help with both The Daily WTF and Inedo communities. So if you know anyone who might be interested in that, send them my way!

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Cryptogram: Now It's Easier than Ever to Steal Someone's Keys

The website key.me will make a duplicate key from a digital photo.

If a friend or coworker leaves their keys unattended for a few seconds, you know what to do.

Planet Linux Australia: Rusty Russell: Broadband Speeds, 2 Years Later

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit/s
  • Average download speed in November 2014 was 22.8Mbit/s
  • Average download speed in November 2016 was 36.2Mbit/s
  • Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit/s
  • Average upload speed in November 2016 was 4.3Mbit/s

So in the last two years, we’ve seen a 26% annual increase in download speed and a 22% annual increase in upload speed, bringing the eight-year average down from 36/37% to 33% per annum. The divergence of download and upload improvements is concerning (I previously assumed they were the same, but for a peer-to-peer system we have to design for the lesser of the two).
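For anyone checking the arithmetic, the percentages above are compound annual growth rates. A small Python sketch, using the OFCOM numbers from the bullets above:

def cagr(start, end, years):
    # Compound annual growth rate: (end / start) ** (1 / years) - 1
    return (end / start) ** (1 / years) - 1

print(f"download, 2014-2016: {cagr(22.8, 36.2, 2):.0%}")  # ~26% per annum
print(f"upload,   2014-2016: {cagr(2.9, 4.3, 2):.0%}")    # ~22% per annum
print(f"download, 2008-2016: {cagr(3.6, 36.2, 8):.0%}")   # ~33% per annum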

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 report and Q1 2017 report.

  • Annual global average speed increase, Q1 2015 – Q1 2016: 23%
  • Annual global average speed increase, Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.
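One way to arrive at that combined figure (my reading of the arithmetic, not necessarily the method used) is the geometric mean of the two annual rates:

# Average annual growth across Q1 2015 - Q1 2017, from the two yearly rates
rate = ((1 + 0.23) * (1 + 0.15)) ** 0.5 - 1
print(f"{rate:.1%}")  # 18.9%, i.e. roughly 19% per annum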

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: over the last 9 years the Akamai numbers suggest the US has increased by 19% per annum and the UK by almost 21%. The gloss seems to be coming off the UK fixed-broadband numbers, but they still show a 22% annual upload increase over the last two years. Even Australia and the Philippines have managed almost 21%.

Worse Than Failure: Open Sources


Here's how open-source is supposed to work: A group releases a product, with the source code freely available. Someone finds a problem. They solve the problem, issue a pull request, and the creators merge that into the product, making it better for everyone.

Here's another way open-source is supposed to work: A group releases a product, with the source code freely available. Someone finds a problem, but they can't fix it themselves, so they issue a bug report. Someone else fixes the problem, issues a pull request, and the creators merge that into the product, making it better for everyone.

Here's one way open-source works: Someone creates a product. It gets popular—and I mean really, really popular, practically overnight. The creator didn't ask for this. They have no idea what to do with success. They try their best to keep up, but they can't keep on top of everything all the time. They haven't even set up a build pipeline yet. They're flying by the seat of their pants. One day, unwisely choosing to program with a fever, they commit broken code and push it up to GitHub, triggering an automatic release. Fifteen thousand downstream dependencies find their build broken and show up to scold the creator for not running tests before releasing.

Here's another way open-source works: A group creates a product. It gets popular. Some time later, there are 600 open issues and over 50 pending pull requests. The creator hasn't commented in a year, but people keep vainly trying to improve the product.

Here's another way open-source works: A group creates a product. They decide to avoid the above PR disaster by using some off-site bug tracker. Someone files a bug. Then another bug. Then 5 or 10 more. The creator goes on a rampage, insisting that everyone is using it wrong, and deletes all the bug reports, banning the users who submitted them. The product continues to gain success, and more and more people file bugs, only to find the bug reports summarily closed. Sometimes people get lucky and their reports will be closed, then re-opened when another dev decides to fix the problem.

Here's another way open-source works: Group A creates a product. Group B creates a product, and uses the first product as their support forum. That forum gets hacked. Group B files a bug to Group A, rightly worried about the security of the software they use. After all, if the forum has a remote code exploit, maybe they should move to something newer, maybe written in Ruby instead of PHP. One of the developers from Group A, today's submitter, offers to investigate.

Many forums allow admins to edit the themes for the site directly; for example, NodeBB provides admins a textbox in which they can paste CSS to tweak the theme to their liking. This forum figures that since the admins are already on the wrong side of the airtight hatchway, they can inject bits of PHP code directly into the forum header. Code like, say, saving a poison payload to disk that creates an endpoint accepting arbitrary file uploads so they can root the box.

But how did the hacker get access to that admin panel? Surely that's a security flaw, right? Turns out they'd found a weak link in the security chain and applied just enough force to break their way in. If you work in security, I'm sure you won't be surprised to hear the flaw: one of the admins had a weak password, one right out of a classic dictionary list.

Here's another way open-source works: 15% of the NPM ecosystem was controlled by people with weak passwords. Someone hacks them all, tells NPM how they did it, and gets a mass forced-reset on everyone's passwords. Everyone is more secure.

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

Planet Debian: Thadeu Lima de Souza Cascardo: News on Debian on apexqtmo

I had been using my Samsung Galaxy S Relay 4G for almost three years when I decided to get a new phone. I would use this new phone for daily tasks and take the chance to get a new model for hacking in the future. My apexqtmo would still be my companion and would now be more available for real hacking.

And so it also happened that its power button got stuck. It was not the first time, but now it would happen every so often, and would require me to disassemble it. So I managed to remove the plastic button and leave it with a hole so I could press the button with a screwdriver or a paperclip. That was the excuse I needed to get it running Debian only.

Though it's now always plugged into my laptop, I got the chance to hack on it in my scarce free time. Once I managed to get a kernel I built myself running on it, I started fixing things like enabling devtmpfs. I didn't insist much on running systemd, though, and kept with System V. The Xorg issues were either on the server or the client, depending on which client I ran.

I decided to give a chance to running the Android userspace on a chroot, but gave up after some work to get some firmware loaded.

I managed to get the ALSA controls right after saving them inside a chroot on my CyanogenMod system. Then, restoring them on Debian allowed me to play songs. Unfortunately, it seems I broke the audio jack when disassembling it. Otherwise, it would have been a great portable audio player. I even wrote a small program that would allow me to control mpd by swiping on the touchscreen.
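That program isn't reproduced here, but as an illustration of the idea, here is a minimal sketch (mine, not the author's code) that maps horizontal swipes to mpc commands. The device path and swipe threshold are assumptions; it needs python-evdev and the mpc client installed:

import subprocess
from evdev import InputDevice, ecodes

# Hypothetical touchscreen device node; adjust for your hardware.
dev = InputDevice('/dev/input/event0')
start_x = last_x = None

for event in dev.read_loop():
    # Track the X coordinate while the finger is down (some drivers
    # report ABS_MT_POSITION_X instead of ABS_X).
    if event.type == ecodes.EV_ABS and event.code == ecodes.ABS_X:
        last_x = event.value
        if start_x is None:
            start_x = event.value
    # On finger-up (BTN_TOUCH released), decide whether it was a swipe.
    elif (event.type == ecodes.EV_KEY and
          event.code == ecodes.BTN_TOUCH and event.value == 0):
        if start_x is not None and last_x is not None:
            delta = last_x - start_x
            if delta > 200:            # arbitrary threshold: swipe right
                subprocess.call(['mpc', 'next'])
            elif delta < -200:         # swipe left
                subprocess.call(['mpc', 'prev'])
        start_x = last_x = None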

Then, as the Debian release approached, I decided to investigate the framebuffer issue closely. I ended up finding out that it was really a bug in the driver, and after fixing it, the X server and client crashes were gone. It was beautiful to get some desktop environment running with the right colors, get a calculator started and really use the phone as a mobile device.

There are two lessons or findings here for me. The first one is that the current environments are really lacking. Even something like GPE can't work. The buttons are tiny, and scrollbars are still the only way of scrolling, some of the time. No automatic virtual keyboards. So, there needs to be some investing in the existing environments, and maybe even the development of new environments for these kinds of devices. This was something I expected somehow, but it's still disappointing to know that we had so many of those developed in the past and now gone. I really miss Maemo. Running something like Qtopia would mean grabbing very old unmaintained software not available in Debian. There is still matchbox, but it's as subpar as the others I tested.

The second lesson is that building a userspace to run on old kernels will still hit the problem of broken drivers. In my particular case, unless I wrote code for using Ion instead of the framebuffer, I would have had that problem. Or it would require me to add code to xorg-xserver that is not appropriate. Or fix the kernel drivers in the available kernel source trees. But this does not scale much better than doing the right thing and adding upstream support for these devices.

So, I decided it was time I started working on upstream support for my device. I have it in progress and may send some upstream patches soon. I have USB and MMC/SDcard working fine. DRM is still a challenge, but thanks to Rob Clark, it's something I expect to get working soon, and after that, I would certainly celebrate. Maybe even consider starting the work on other devices a little sooner.

Trying to review my post on GNU on smartphones, here is where I would put some of the status of my device and some extra notes.

On Halium

I am really glad people started this project. This was one of the things I criticized: that though Ubuntu Phone and FirefoxOS built on Android userspace, they were not easily portable to many devices out there.

But as I am looking for a more pure GNU experience, let's call it that, Halium does not help much in that direction. But I'd like to see it flourish and allow people to use more OSes on more devices.

Unfortunately, it suffers from similar problems as the strategy I was trying to go with. If you have a device with a very old kernel, you won't be able to run some of the latest userspace, even with Android userspace help. So, lots of devices would be left unsupported, unless we start working on some upstream support.

On RYF Hardware

My device is one of the worst out there. It's a modem that has a peripheral CPU. Much has already been said about Qualcomm chips being some of the least freedom-friendly. Ironically, they have some of the best upstream support, as far as I found out while doing this upstreaming work. Guess we'll have to wait for opencores, openrisc and risc-v to catch up here.

Diversity

Though I have been experimenting with Debian, the upstream work would surely benefit lots of other OSes out there, mainly GNU+Linux based ones, but also other non-GNU Linux based ones. Not so much for other kernels.

On other options

After the demise of Ubuntu Phone, I am glad to see UBports catching up. I hope the project is sustainable and produces more releases for more devices.

Rooting

This needs documentation. Most of the procedures rely on booting a recovery system, which means we are already past the root requirement. We simply boot our own system, then. However, for some debugging strategies, getting root on the OEM system is useful. So, try to get root on your system, but beware of malware out there.

Booting

Most of these devices will have their bootloaders in there. They may be unlocked, allowing unsigned kernels to be booted. Replacing these bootloaders is still going to be a challenge for another future phase. But adding a second bootloader, one that is freedom-respecting and that gives the user more control over the boot step, is possible once you have some good upstream support. One could either use kexec for that, or try to use the same device tree for U-Boot, and use the knowledge of the device drivers for Linux when writing drivers for U-Boot, GRUB or Libreboot.

Installation

If you have root on your OEM system, this is something that could be worked on. Otherwise, there is magic-device-tool, whose approach is one that could be used.

Kernels

While I am working on adding Linux upstream support for my device, it would be wonderful to see more kernels supporting those gadgets. Hopefully, some of the device driver writing and reverse engineering could help with that, though I am not too optimistic. But there is hope.

Basic kernel drivers

Adding the basic support, like USB and MMC, after clocks, gpios, regulators and whatnot, is the first step on a long road. But it would allow using the device as a board computer, under better control of the user. Hopefully, lots of the electronic garbage out there could find some use as control gadgets. Instead of buying a new board, just grab your old phone and put it to some nice use.

Sensors, input devices, LEDs

These are usually easy too. Some sensors may depend on your modem or some userspace code that is not that easily reverse engineered. But others would just require some device tree work, or some small input driver.

Graphics

Here, things may get complicated. Even basic video output is something I have some trouble with. Thanks to some other people's work, I have hope at least for my device. And using the vendor's Linux source code, some framebuffer support should be possible, even some DRM driver. But OpenGL or other 3D acceleration support requires much more work than that, and, at this moment, it's not something I am counting on. I am thankful for the work lots of people have been doing in this area, nonetheless.

Wireless

Be it Wifi or Bluetooth, things get ugly here. The vendor driver might be available. Rewriting it would take a long time. Even then, it would most likely require some non-free firmware loading. Using USB OTG here might be an option.

Modem/GSM

The work of the Replicant folks on that is what gives me some hope that it might be possible to get this working. It is something I will leave until after I have a good interface experience in my hands.

GPS

The problem is similar to the Modem/GSM one: some code lives in userspace, sometimes talking to the modem is a requirement to get GPS access, and so on.

Shells

This is where I would like to see new projects, even if they just work on making current software more friendly to these form factors. I am considering doing some work there, though that's not really my area of expertise.

Next steps

For me, my next steps are getting what I have working upstream, keep working on DRM support, packaging GPE, then experimenting with some compositor code. In the middle of that, trying to get some other devices started.

But documenting some of my work is something I realized I need to do more often, and this post is an attempt at that.


LongNow: The AI Cargo Cult: The Myth of a Superhuman Artificial Intelligence

In a widely-shared essay first published in Backchannel, Kevin Kelly, a Long Now co-founder and Founding Editor of Wired Magazine, argues that the inevitable rise of superhuman artificial intelligence—long predicted by leaders in science and technology—is a myth based on misperceptions without evidence.

Kevin is now Editor at Large at Wired and has spoken in the Seminar and Interval speaking series several times, including on his most recent book, The Inevitable—sharing his ideas on the future of technology and how our culture responds to it. Kevin was part of the team that developed the idea of the Manual for Civilization, making a series of selections for it, and has also put forth a similar idea called the Library of Utility.

Read Kevin Kelly’s essay in full (LINK).

Planet Debian: Shirish Agarwal: Debian 9 release party at Co-hive

Dear all, This would be a biggish one so please have a chai/coffee or something stronger as it would take a while.

I would start with an attempt at some alcohol humor. While some people know that I have had a series of convulsive epileptic seizures, I had shared bits about it in another post as well. Recovery is going through allopathic medicines as well as physiotherapy, which I go to every alternate day.

One of the exercises that I do in physiotherapy sessions is walking cross-legged on a line. While doing it today, it occurred to me that this is the same test that a police inspector would make you do if they caught you drinking or suspected you of drunk driving. While some in the police force now also have breath analyzer machines to determine alcohol content in the breath and body (and ways to deceive them are also there), the above exercise is still an integral part of the examination. Now a few of my friends who do drink have made an expertise of walking on a line, while I, due to this neurological disorder, still have issues walking on a line. So while I don't see a drinking party in the near future (6 months at least), if I ever do get caught with a friend who is drunk (by association I would also be a suspect) by a policeman who doesn't have a breath analyzer machine, I could be in a lot of trouble. In addition, if I tell him I have a neurological disorder, I am bound to land up in a cell as he will think I'm trying to make a fool of him. If you are able to picture the situation, I'm sure you will get a couple of laughs.

Now coming to the release party, I was a bit apprehensive. It had been quite a while since I had faced an audience, and just coming out of illness I didn't know how well or ill-prepared I would be for the session. I had given up exercising two days before the event as I wanted to have a loose body, loose limbs all over. I also took a mild sedative (1mg) the day before just so I would have a good night's sleep and be able to focus all my energies on the big day. (I don't recommend sedatives unless the doctor prescribes them; I did have a doctor's prescription, so I was able to have a nice sleep.) I didn't do any Debian study, as I hoped my somewhat long experience with both Ubuntu and Debian would help me.

On the d-day, I had asked Dhanesh (the organizer of the event) to accompany me from home to the venue and back, as I was unsure of the journey: it was around 9-10 km from my place, and while I had been to the venue a couple of years back, I had just a mild remembrance of the place.

Anyways, Dhanesh complied with my request and together we reached the venue before the appointed 1500 hrs. As it was a Sunday, I was unsure how many people would turn up, as people usually like to cozy up on a Sunday.

Around 1530 hrs, everybody showed up.

The whole group

It included a couple of co-organizers, with most people being newbies; so while I had thought of showing how to contribute via reporting bugs or putting up patches, I had to set that aside and explain how things work in the free software and open-source world. We didn't get into the debate of free vs open-source or free/open-source/open-core, as that would have been counter-productive and probably confusing for newbies.

We did, however, share the Debian tree structure:

Debian tree structure discussion

I was stumped by /var and /proc. I hadn't taken my lappy, as it is expensive (a Lenovo ThinkPad I love very dearly) and I was unsure if I would be able to take care of it (weight-wise). Dhanesh had told me that he had Debian on his lappy + zsh, both of which are my favourites.

Back at home, I realized /var has been relegated to holding apache/server logs and stuff like that. I do recall (vaguely) a thread on debian-devel about removing /var, although that discussion went nowhere.

One of the snags that we hit early on was that nobody had enough space on their hdd to install Debian comfortably. It took a while to get an external hdd and push some of the content from somebody's lappy to the external drive to make space for the installation.

I did share the /, /home, optional swap layout, while Dhanesh helped by sharing about having a separate /boot partition as well, which I had forgotten. I can't even begin to remember the number of times having a separate /boot partition has helped me on all of my systems.

That done, we did try to install/show Debian 9 without network, but were hit with #866629 and so weren't able to complete the installation. We had got the latest 9.0.1, as I had seen Steve's message about issues with the live images, but even then we were hit with the above bug. As shared in the bug history, it might be a good idea to treat the last couple of RCs (Release Candidate releases) as pre-release parties, so people have a chance to report bugs and get them fixed. It was also nice to see Praveen raising the seriousness of the bug shared above.

The next day I also filed #866971, as I had mistaken the release for a regular release and not the live instance. I have shared my rationale there and hope the right thing happens.

As installation takes a bit of time, we used it to talk about Google's Summer of Code and the absence of Debian from GSoC this year. I should have directed them to an itsfoss article I wrote some time ago, and also shared that Debian is looking at having a similar apprenticeship within Debian itself. There were questions about why Debian would want to take on the administrative overhead; my response was that it probably had to do with Debian wanting more control over the process. While Debian has had some great luck getting all the seats it asks for in GSoC, the ball is always in Google's court. Removing that uncertainty would be beneficial to Debian both in the short term and the long term. One interesting stat that was shared with me was that something akin to 89 students from India had been selected this year for GSoC, even with the lower stipend, and that is a far cry from the 10-15 students who are able to complete GSoC every year. Let's see what happens this year.

One interesting fact/gossip I shared with them is that Google uses a modified/forked Debian internally, which it probably would never release.

There were quite a few queries about GSoC, which led into how contributions are made and how git has become the master of all VCSes (Version Control Systems). I do have pet bugs about git, though (the biggest one being that in places/countries without much bandwidth, git fails many a time while cloning). I *think* the bug has been reported enough times, but I haven't seen any improvements yet. There is a need for a solution like wget and wget -c, so git just works even under the most trying circumstances (bandwidth-wise).

We also shared what a repo is, and Dhanesh helpfully did a git shortlog to show people how commits are commented. It was from his official work, so he couldn't show anything apart from the shortlog.

I also shared how non-technical people can help with regard to documentation and artwork, but didn't get into any concrete examples, although wiki.debian.org would have been a good start. I also don't know if it's a fact or not, but it seems that moinmoin (the current wiki solution used by Debian) has got a sectional edit feature, which I used some time back. If moinmoin has got this feature, then it is on par with mediawiki, although I do know that mediawiki has a lot more features going for it.

Dhanesh did manage to install Debian 8.0.7 (while 8.8.0, which might have been better, was the last release). The debian-installer (d-i) looks the same, even though I know there are improvements with respect to UEFI and many updated components.

There are and were many bugs which I wanted to share but didn't know if it was the right forum or not, e.g. #597176, which probably needs improvements in other libraries, along with the JPEG 2000 support bug #604859, both of which I'm subscribed to.

We also had a talk about code documentation, code readability, and python (as almost everything in Debian is based on python). I had actually wanted to show metrics.debian.net, but had seen it was down the day before, and checked again to see it is down now as well, hence reported it; it will hopefully come up on the Debian BTS some time soonish. The idea was to share that Debian does have/use other programming languages as well, and is only limited by people interested in a specific language being willing to package and maintain packages in that language.

As Dhanesh was part of Firefox OS and a Campus Ambassador, we did discuss what went wrong in the Firefox OS deployment and Firefox as a whole, specifically between the community and Mozilla the corporation.

There was a lot more that I wasn't able to share, as I had become tired; otherwise I would have shared that the ncurses debian-installer interface, although a bit ugly to look at, is still good, as it has speech for visually differently-abled people as well as people with poor sight.

There is and was lots more to share about Debian, but I was conscious that I might over-burden the audience, and my stamina was also stretched.

We also shared about automated builds, but didn't show either travis.debian.net or jenkins.debian.net.

We also had a small discussion about Artificial Intelligence, Robotics and how the outlook for people coming in Computer Science looks today.

One thing I forgot to share: we did do the cake-cutting together.

Dhanesh-Shirish cutting cake together

Dhanesh is on the left while I’m on the right.

We did have a cute 3-4 year old boy as our guest as well, but didn't get good pictures of him.

Lastly, the Debian cake itself

Debian 9 cake

We also talked about the kernel and subsystem maintainers and how Linus is the overlord.

Look forward to comments. I am hoping Dhanesh might recollect some points that I might have missed/forgotten.


Filed under: Miscellaneous Tagged: #alcohol humor, #co-hive, #crystal-gazing, #Debian bugs, #Debian cake, #Debian-release-party-9-pune, #live-installer, #moinmoin, #planet-debian, debian-installer, Pune

Cryptogram: Dubai Deploying Autonomous Robotic Police Cars

It's hard to tell how much of this story is real and how much is aspirational, but it really is only a matter of time:

About the size of a child's electric toy car, the driverless vehicles will patrol different areas of the city to boost security and hunt for unusual activity, all the while scanning crowds for potential persons of interest to police and known criminals.

Cory Doctorow: My presentation from ConveyUX

Last March, I traveled to Seattle to present at the ConveyUX conference, with a keynote called “Dark Patterns and Bad Business Models”, the video for which has now been posted: “The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

Cory Doctorow: Interview with Wired UK’s Upvote podcast

Back in May, I stopped by Wired UK while on my British tour for my novel Walkaway to talk about the novel, surveillance, elections, and, of course, DRM. (MP3)

Planet Debian: Patrick Matthäi: Bug in Stretch Linux vmxnet3 driver

Hey,

Am I the only one experiencing #864642 (vmxnet3: Reports suspect GRO implementation on vSphere hosts / one VM crashes)? Unfortunately, I still have not got any answer on my report that would let me help track this issue down.

Help is welcome :)

Rondam Ramblings: A brief history of political discourse in the United States

1776 When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel

Google Adsense: Introducing AdSense Native ads

Today we’re introducing the new AdSense Native ads -- a suite of ad formats designed to match the look and feel of your site, providing a great user experience for your visitors. AdSense Native ads come in three categories: In-feed, In-article, and Matched content*. They can all be used at once or individually and are designed for:

  • A great user experience: they fit naturally on your site and use high quality advertiser elements, such as high resolution images, longer titles and descriptions, to provide a more attractive experience for your visitors. 
  • A great look and feel across different screen sizes: the ads are built to look great on mobile, desktop, and tablet. 
  • Ease of use: easy-to-use editing tools help you make the ads look great on your site.

Native In-feed opens up new revenue opportunity in your feeds
Available to all publishers, In-feed ads slot neatly inside your feeds, e.g. a list of articles or products on your site. These ads are highly customizable to match the look and feel of your feed content and offer new places to show ads.

Native In-article offers a better advertising experience
Available to all publishers, In-article ads are optimized by Google to help you put great-looking ads between the paragraphs of your pages. In-article ads use high-quality advertising elements and offer a great reading experience to your visitors.

Matched Content* drives more users to your content 
Available to publishers that meet the eligibility criteria, Matched content is a content recommendation tool that helps you promote your content to visitors and potentially increase revenue, page views, and time spent on site. Publishers that are eligible for the “Allow ads” feature can also show relevant ads within their Matched content units, creating an additional revenue opportunity in this placement.

Getting started with AdSense Native ads 
AdSense Native ads can be placed together, or separately, to customize your website’s ad experience. Use In-feed ads inside your feed (e.g. a list of articles, or products), In-article ads between the paragraphs of your pages, and Matched content ads directly below your articles.  When deciding your native strategy, keep the content best practices in mind.

To get started with AdSense Native ads:

  • Sign in to your AdSense account
  • In the left navigation panel, click My ads
  • Click +New ad unit
  • Select your ad category: In-article, In-feed or Matched content

We'd love to hear what you think about these ad formats in the comments section below this post.

Posted by: Violetta Kalathaki - AdSense Product Manager

*This format has already been available to eligible publishers, and is now part of AdSense Native ads.

Sociological Images: Political baseball: Happiness, unhappiness, and whose team is winning

Are conservatives happier than liberals? Arthur Brooks thinks so. Brooks is president of the American Enterprise Institute, a conservative think tank. And he’s happy, or at least he comes across as happy in his monthly op-eds for the Times. In those op-eds, he sometimes claims that conservatives are generally happier. When you’re talking about the relation between political views and happiness, though — as I do — you ought to consider who is in power. Otherwise, it’s like asking whether Yankee fans are happier than Red Sox fans without checking the AL East standings.

Now that the 2016 GSS data is out, we can compare the happiness of liberals and conservatives during the Bush years and the Obama years. The GSS happiness item asks, “Taken all together, how would you say things are these days –  would you say that you are very happy, pretty happy, or not too happy?”

On the question of who is “very happy,” it looks as though Brooks is onto something. More of the people on the right are very happy in both periods. But also note that in the Obama years about 12% of those very happy folks (five out of 40) stopped being very happy.

But something else was happening during the Obama years. It wasn’t just that some of the very happy conservatives weren’t quite so happy. The opposition to Obama was not about happiness. Neither was the Trump campaign with its unrelenting negativism. What it showed was that a lot of people on the right were angry. None of that sunny Reaganesque “Morning in America” for them. That feeling is reflected in the numbers who told the GSS that they were “not too happy.”

Among extreme conservatives, the percent who were not happy doubled during the Obama years. The increase in unhappiness was about 60% for those who identified themselves as “conservative” (neither slight nor extreme). In the last eight years, the more conservative a person’s views, the greater the likelihood of being not too happy. The pattern is reversed for liberals during the Bush years: unhappiness rises as you move further left.

The graphs also show that for those in the middle of the spectrum – about 60% of the people – politics makes no discernible change in happiness. Their proportion of “not too happy” remained stable regardless of who was in the White House. Those middle categories do give support to Brooks’s idea that conservatives are generally somewhat happier. But as you move further out on the political spectrum, the link between political views and happiness depends much more on which side is winning. Just as at Fenway or the Stadium, the fans who are cheering – or booing – the loudest are the ones whose happiness will be most affected by their team’s won-lost record.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

Sociological Images: Algorithms replace your biases with someone else’s

Originally posted at Scatterplot.

There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. As Cathy O’Neil nicely shows in her Weapons of Math Destruction (and associated columns), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and criminal justice. The algorithms O’Neil studies are characterized by their opacity, their scale, and their capacity to damage.

Much of the public debate has focused on a class of algorithms employed in criminal justice, especially in sentencing and parole decisions. As scholars like Bernard Harcourt and Jonathan Simon have noted, criminal justice has been a testing ground for algorithmic decisionmaking since the early 20th century. But most of these early efforts had limited reach (low scale), and they were often published in scholarly venues (low opacity). Modern algorithms are proprietary, and are increasingly employed to decide the sentences or parole decisions for entire states.

“Code of Silence,” Rebecca Wexler’s new piece in Washington Monthly, explores one such influential algorithm: COMPAS (also the subject of an extensive, if contested, ProPublica report). Like O’Neil, Wexler focuses on the problem of opacity. The COMPAS algorithm is owned by a for-profit company, Northpointe, and the details of the algorithm are protected by trade secret law. The problems here are both obvious and massive, as Wexler documents.

Beyond the issue of secrecy, though, one issue struck me in reading Wexler’s account. One of the main justifications for a tool like COMPAS is that it reduces subjectivity in decisionmaking. The problems here are real: we know that decisionmakers at every point in the criminal justice system treat white and black individuals differently, from who gets stopped and frisked to who receives the death penalty. Complex, secretive algorithms like COMPAS are supposed to help solve this problem by turning the process of making consequential decisions into a mechanically objective one – no subjectivity, no bias.

But as Wexler’s reporting shows, some of the variables that COMPAS considers (and apparently considers quite strongly) are just as subjective as the process it was designed to replace. Questions like:

Based on the screener’s observations, is this person a suspected or admitted gang member?

In your neighborhood, have some of your friends or family been crime victims?

How often do you have barely enough money to get by?

Wexler reports on the case of Glenn Rodríguez, a model inmate who was denied parole on the basis of his puzzlingly high COMPAS score:

Glenn Rodríguez had managed to work around this problem and show not only the presence of the error, but also its significance. He had been in prison so long, he later explained to me, that he knew inmates with similar backgrounds who were willing to let him see their COMPAS results. “This one guy, everything was the same except question 19,” he said. “I thought, this one answer is changing everything for me.” Then another inmate with a “yes” for that question was reassessed, and the single input switched to “no.” His final score dropped on a ten-point scale from 8 to 1. This was no red herring.

So what is question 19? The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”

Advocates of predictive models for criminal justice use often argue that computer systems can be more objective and transparent than human decisionmakers. But New York’s use of COMPAS for parole decisions shows that the opposite is also possible. An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.

This story was all too familiar to me from Emily Bosk’s work on similar decisionmaking systems in the child welfare system where case workers must answer similarly subjective questions about parental behaviors and problems in order to produce a seemingly objective score used to make decisions about removing children from home in cases of abuse and neglect. A statistical scoring system that takes subjective inputs (and it’s hard to imagine one that doesn’t) can’t produce a perfectly objective decision. To put it differently: this sort of algorithmic decisionmaking replaces your biases with someone else’s biases.

Dan Hirschman is a professor of sociology at Brown University. He writes for scatterplot and is an editor of the ASA blog Work in Progress. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Commentary on US Election Security

Good commentaries from Ed Felten and Matt Blaze.

Both make a point that I have also been saying: hacks can undermine the legitimacy of an election, even if there is no actual voter or vote manipulation.

Felten:

The second lesson is that we should be paying more attention to attacks that aim to undermine the legitimacy of an election rather than changing the election's result. Election-stealing attacks have gotten most of the attention up to now -- and we are still vulnerable to them in some places -- but it appears that external threat actors may be more interested in attacking legitimacy.

Attacks on legitimacy could take several forms. An attacker could disrupt the operation of the election, for example, by corrupting voter registration databases so there is uncertainty about whether the correct people were allowed to vote. They could interfere with post-election tallying processes, so that incorrect results were reported -- an attack that might have the intended effect even if the results were eventually corrected. Or the attacker might fabricate evidence of an attack, and release the false evidence after the election.

Legitimacy attacks could be easier to carry out than election-stealing attacks, as well. For one thing, a legitimacy attacker will typically want the attack to be discovered, although they might want to avoid having the culprit identified. By contrast, an election-stealing attack must avoid detection in order to succeed. (If detected, it might function as a legitimacy attack.)

Blaze:

A hostile state actor who can compromise a handful of county networks might not even need to alter any actual votes to create considerable uncertainty about an election's legitimacy. It may be sufficient to simply plant some suspicious software on back end networks, create some suspicious audit files, or add some obviously bogus names to the voter rolls. If the preferred candidate wins, they can quietly do nothing (or, ideally, restore the compromised networks to their original states). If the "wrong" candidate wins, however, they could covertly reveal evidence that county election systems had been compromised, creating public doubt about whether the election had been "rigged". This could easily impair the ability of the true winner to effectively govern, at least for a while.

In other words, a hostile state actor interested in disruption may actually have an easier task than someone who wants to undetectably steal even a small local office. And a simple phishing and trojan horse email campaign like the one in the NSA report is potentially all that would be needed to carry this out.

Me:

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and, by extension, our democracy.

And, finally, a report from the Brennan Center for Justice on how to secure elections.

Krebs on Security: Who is the GovRAT Author and Mirai Botmaster ‘Bestbuy’?

In February 2017, authorities in the United Kingdom arrested a 29-year-old U.K. man on suspicion of knocking more than 900,000 Germans offline in an attack tied to Mirai, a malware strain that enslaves Internet of Things (IoT) devices like security cameras and Internet routers for use in large-scale cyberattacks. Investigators haven’t yet released the man’s name, but news reports suggest he may be better known by the hacker handle “Bestbuy.” This post will follow a trail of clues back to one likely real-life identity of Bestbuy.

At the end of November 2016, a modified version of Mirai began spreading across the networks of German ISP Deutsche Telekom. This version of the Mirai worm spread so quickly that the very act of scanning for new infectable hosts overwhelmed the devices doing the scanning, causing outages for more than 900,000 customers. The same botnet had previously been tied to attacks on U.K. broadband providers Post Office and TalkTalk.


Security firm Tripwire published a writeup on that failed Mirai attack, noting that the domain names tied to servers used to coordinate the activities of the botnet were registered variously to a “Peter Parker” and “Spider man,” and to a street address in Israel (27 Hofit St). We’ll come back to Spider Man in a moment.

According to multiple security firms, the Mirai botnet responsible for the Deutsche Telekom outage was controlled via servers at the Internet address 62.113.238.138. Farsight Security, a company that maps which domain names are tied to which Internet addresses over time, reports that this address has hosted just nine domains.

The only one of those domains that is not related to Mirai is dyndn-web[dot]com, which according to a 2015 report from BlueCoat (now Symantec) was a domain tied to the use and sale of a keystroke logging remote access trojan (RAT) called “GovRAT.” The trojan is documented to have been used in numerous cyber espionage campaigns against governments, financial institutions, defense contractors and more than 100 corporations.

Another report on GovRAT — this one from security firm InfoArmor — shows that the GovRAT malware was sold on Dark Web cybercrime forums by a hacker or hackers who went by the nicknames BestBuy and “Popopret” (some experts believe these were just two different identities managed by the same cybercriminal).

The hacker "bestbuy" selling his Govrat trojan on the dark web forum "Hell." Image: InfoArmor.

The hacker “bestbuy” selling his GovRAT trojan on the dark web forum “Hell.” Image: InfoArmor.

GovRAT has been for sale on various other malware and exploit-related sites since at least 2014. On oday[dot]today, for example, GovRAT was sold by a user who picked the nickname Spdr, and who used the email address spdr01@gmail.com.

Recall that the domains used to control the Mirai botnet that hit Deutsche Telekom all had some form of Spider Man in the domain registration records. Also, recall that the controller used to manage the GovRAT trojan and that Mirai botnet were both at one time hosted on the same server with just a handful of other (Mirai-related) domains.

According to a separate report (PDF) from InfoArmor, GovRAT also was sold alongside a service that allows anyone to digitally sign their malware using code-signing certificates stolen from legitimate companies. InfoArmor said the digital signature it found related to the service was issued to an open source developer, Singh Aditya, using the email address parkajackets@gmail.com.

Interestingly, both of these email addresses — parkajackets@gmail.com and spdr01@gmail.com — were connected to similarly-named user accounts at vDOS, for years the largest DDoS-for-hire service (that is, until KrebsOnSecurity last fall outed its proprietors as two 18-year-old Israeli men).

Last summer vDOS got massively hacked, and a copy of its user and payments databases was shared with this author and with U.S. federal law enforcement agencies. The leaked database shows that both of those email addresses are tied to accounts on vDOS named “bestbuy” (bestbuy and bestbuy2).


Spdr01’s sales listing for the GovRAT trojan on a malware and exploits site shows he used the email address spdr01@gmail.com

The leaked vDOS database also contained detailed records of the Internet addresses that vDOS customers used to log in to the attack-for-hire service. Those logs show that the bestbuy and bestbuy2 accounts logged in repeatedly from several different IP addresses in the United Kingdom and in Hong Kong.

The technical support logs from vDOS indicate that the reason the vDOS database shows two different accounts named “bestbuy” is the vDOS administrators banned the original “bestbuy” account after it was seen logged into the account from both the UK and Hong Kong. Bestbuy’s pleas to the vDOS administrators that he was not sharing the account and that the odd activity could be explained by his recent trip to Hong Kong did not move them to refund his money or reactivate his original account.

A number of clues in the data above suggest that the person responsible for both this Mirai botnet and GovRAT had ties to Israel. For one thing, the email address spdr01@gmail.com was used to register at least three domain names, all of which are tied back to a large family in Israel. What’s more, in several dark web postings, Bestbuy can be seen asking if anyone has any “weed for sale in Israel,” noting that he doesn’t want to risk receiving drugs in the mail.

The domains tied to spdr01@gmail.com led down a very deep rabbit hole that ultimately went nowhere useful for this investigation. But it appears the nickname “spdr01” and email spdr01@gmail.com was used as early as 2008 by a core member of the Israeli hacking forum and IRC chat room Binaryvision.co.il.

Visiting the Binaryvision archives page for this user, we can see Spdr was a highly technical user who contributed several articles on cybersecurity vulnerabilities and on mobile network security (Google Chrome or Google Translate deftly translates these articles from Hebrew to English).

I got in touch with multiple current members of Binaryvision and asked if anyone still kept in contact with Spdr from the old days. One of the members said he thought Spdr held dual Israeli and U.K. citizenship, and that he would be approximately 30 years old at the moment. Another said Spdr was engaged to be married recently. None of those who shared what they recalled about Spdr wished to be identified for this story.

But a bit of searching on those users’ social networking accounts showed they had a friend in common that fit the above description. The Facebook profile for one Daniel Kaye using the Facebook alias “DanielKaye.il” (.il is the top level country code domain for Israel) shows that Mr. Kaye is now 29 years old and is or was engaged to be married to a young woman named Catherine in the United Kingdom.

The background image on Kaye’s Facebook profile is a picture of Hong Kong, and Kaye’s last action on Facebook was apparently to review a sports and recreation facility in Hong Kong.


Using Domaintools.com [full disclosure: Domaintools is an advertiser on this blog], I ran a “reverse WHOIS” search on the name “Daniel Kaye,” and it came back with exactly 103 current and historic domain names with this name in their records. One of them in particular caught my eye: Cathyjewels[dot]com, which appears to be tied to a homemade jewelry store located in the U.K. that never got off the ground.

Cathyjewels[dot]com was registered in 2014 to a Daniel Kaye in Egham, U.K., using the email address danielkaye02@gmail.com. I decided to run this email address through Socialnet, a plugin for the data analysis tool Maltego that scours dozens of social networking sites for user-defined terms. Socialnet reports that this email address is tied to an account at Gravatar — a service that lets users keep the same avatar at multiple Web sites. The name on that account? You guessed it: Spdr01.


The output from the Socialnet plugin for Maltego when one searches for the email address danielkaye02@gmail.com.

Daniel Kaye did not return multiple requests for comment sent via Facebook and the various email addresses mentioned here.

In case anyone wants to follow up on this research, I highlighted the major links between the data points mentioned in this post in the following mind map (created with the excellent and indispensable MindNode Pro for Mac).

A “mind map” tracing some of the research mentioned in this post.

Worse Than Failure: CodeSOD: Swap the Workaround

Blane D is responsible for loading data into a Vertica 8.1 database for analysis. Vertica is a distributed, column-oriented store, for data-warehousing applications, and its driver has certain quirks.

For example, a common task that you might need to perform is swapping storage partitions around between tables to facilitate bulk data-loading. Thus, there is a SWAP_PARTITIONS_BETWEEN_TABLES() stored procedure. Unfortunately, if you call this function from within a prepared statement, one of two things will happen: the individual node handling the request will crash, or the entire cluster will crash.

No problem, right? Just don't use a prepared statement. Unfortunately, if you use the ODBC driver for Python, every statement is converted to a prepared statement. There's a JDBC driver, and a bridge to enable it from within Python, but it also has that problem, and it has the added cost of requiring a JVM running.
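To make the trap concrete, this is roughly what the naive version looks like (the DSN and table names here are made up). Per the article, this is exactly the code that takes down a node, or the whole cluster, because pyodbc prepares the statement under the hood:

import pyodbc

# WARNING: per the article, do NOT run this against a live cluster;
# pyodbc converts the statement into a prepared statement, which
# triggers the crash described above.
conn = pyodbc.connect('DSN=vertica')  # hypothetical DSN
cursor = conn.cursor()
cursor.execute(
    "SELECT SWAP_PARTITIONS_BETWEEN_TABLES('staging', 20170101, 20170101, 'prod')"
)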

So Blane did what any of us would do in this situation: he created a hacky-workaround that does the job, but requires thorough apologies.

def run_horrible_partition_swap_hack(self, horrible_src_table, horrible_src_partition,
                                     terrible_dest_table, terrible_dest_partition):
    """
    First things first - I'm sorry, I am a terrible person.

    This is a horrible horrible hack to avoid Vertica's partition swap bug and should be removed once patched!

    What does this atrocity do?
    It passes our partition swap into the ODBC connection string so that it gets executed outside of a prepared
    statement... that's it... I'm sorry.
    """
    conn = self.get_connection(getattr(self, self.conn_name_attr))

    hacky_sql = "SELECT SWAP_PARTITIONS_BETWEEN_TABLES('{src_table}',{src_part},{dest_part},'{dest_table}')"\
        .format(src_table=horrible_src_table,
                src_part=horrible_src_partition,
                dest_table=terrible_dest_table,
                dest_part=terrible_dest_partition)

    even_hackier_sql = hacky_sql.replace(' ', '+')

    conn_string = ';'.join(["DSN={}".format(conn.host),
                            "DATABASE={}".format(conn.schema) if conn.schema else '',
                            "UID={}".format(conn.uid) if conn.uid else '',
                            "PWD={}".format(conn.password) if conn.password else '',
                            "ConnSettings={}".format(even_hackier_sql)])  # :puke:

    odbc_conn = pyodbc.connect(conn_string)

    odbc_conn.close()

Specifically, Blane leverages the driver’s ConnSettings option in the connection string, which allows you to execute a command when a client connects to the database. This is specifically meant for setting up session parameters, but it also has the added “benefit” of not performing that action using a prepared statement.

If it’s stupid but it works, it’s probably the vendor’s fault, I suppose.

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!


Planet Debian: Steve Kemp: I've never been more proud

This morning I remembered I had a beefy virtual-server setup to run some kernel builds upon (from when I was playing with Linux security modules), and I figured before I shut it down I should use the power to run some fuzzing.

As I was writing some code in Emacs at the time I figured "why not fuzz emacs?"

After a few hours this was the result:

 deagol ~ $ perl -e 'print "`" x ( 1024 * 1024  * 12);' > t.el
 deagol ~ $ /usr/bin/emacs --batch --script ./t.el
 ..
 ..
 Segmentation fault (core dumped)

Yup, evaluating a Lisp file caused a segfault, due to a stack overflow (no security implications). I've never been more proud, even though I have contributed code to GNU Emacs in the past.
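For anyone who wants to reproduce this kind of brute-force fuzzing, a minimal sketch of the loop might look like the following; the file name and sizes are illustrative, not a record of what actually ran:

import subprocess

# Feed Emacs ever-larger pathological inputs in batch mode and watch for
# crashes: a negative return code means the process died from a signal
# (e.g. -11 for SIGSEGV).
for megabytes in (1, 4, 8, 12):
    with open("t.el", "w") as f:
        f.write("`" * (1024 * 1024 * megabytes))  # deeply nested backquotes
    result = subprocess.run(
        ["/usr/bin/emacs", "--batch", "--script", "./t.el"],
        capture_output=True,
    )
    print(megabytes, "MB ->", result.returncode)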

CryptogramGoldenEye Malware

I don't have anything to say -- mostly because I'm otherwise busy -- about the malware known as GoldenEye, NotPetya, or ExPetr. But I wanted a post to park links.

Please add any good relevant links in the comments.

Planet Linux AustraliaDanielle Madeley: python-pkcs11 with the Nitrokey HSM

So my Nitrokey HSM arrived and it works great, thanks to the Nitrokey peeps for sending me one.

Because the OpenSC PKCS #11 module is a little more lightweight than some of the other vendors’ modules, which often implement mechanisms that are not actually supported by the hardware (e.g. the Opencryptoki TPM module), I wrote up some documentation on how to use the device, focusing on how to extract the public keys for use outside of PKCS #11, as the Nitrokey doesn’t implement any of the public-key functions.

Nitrokey with python-pkcs11

This also encouraged me to add a whole bunch more import/extraction functions for the various key formats, which included getting very frustrated at the lack of documentation for little things like how OpenSSL stores EC public keys (the answer: as SubjectPublicKeyInfo from X.509), although I think there might be some operating-system-specific glitches with encoding some DER structures. I think I need to move from pyasn1 to asn1crypto.
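As a taste of the workflow, here is a hypothetical sketch of pulling an RSA public key off the Nitrokey for use outside PKCS #11; the module path, token label and PIN are assumptions that will vary by setup:

import pkcs11
from pkcs11.util.rsa import encode_rsa_public_key

# Load the OpenSC PKCS #11 module and locate the token (label varies).
lib = pkcs11.lib('/usr/lib/x86_64-linux-gnu/opensc-pkcs11.so')
token = lib.get_token(token_label='SmartCard-HSM (UserPIN)')

with token.open(user_pin='123456') as session:
    pub = session.get_key(object_class=pkcs11.ObjectClass.PUBLIC_KEY,
                          key_type=pkcs11.KeyType.RSA)
    der = encode_rsa_public_key(pub)  # DER-encoded RSAPublicKey

# The DER blob can now be fed to OpenSSL or any other non-PKCS #11 tool.
with open('key.der', 'wb') as f:
    f.write(der)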

Planet DebianReproducible builds folks: Reproducible Builds: week 114 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 25 and Saturday July 1 2017:

Upcoming and past events

Our next IRC meeting is scheduled for July 6th at 17:00 UTC (agenda). Topics to be discussed include an update on our next Summit, a potential NMU campaign, a press release for buster, branding, etc.

Toolchain development and fixes

  • James McCoy reviewed and merged Ximin Luo's script debpatch into the devscripts Git repository. This is useful for rebasing our patches onto new versions of Debian packages.

Packages fixed and bugs filed

Ximin Luo uploaded dash, sensible-utils and xz-utils to the deferred uploads queue with a delay of 14 days. (We have had patches for these core packages for over a year now and the original maintainers seem inactive so Debian conventions allow for this.)

Patches submitted upstream:

Reviews of unreproducible packages

4 package reviews have been added, 4 have been updated and 35 have been removed this week, adding to our knowledge about identified issues.

One issue type has been updated:

One issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (68)
  • Daniel Schepler (1)
  • Michael Hudson-Doyle (1)
  • Scott Kitterman (6)

diffoscope development

tests.reproducible-builds.org

  • Vagrant Cascadian working on testing Debian:
    • Upgraded the 27 armhf build machines to stretch.
    • Fix mtu check to only display status when eth0 is present.
  • Helmut Grohne worked on testing Debian:
    • Limit diffoscope memory usage to 10GB virtual per process. It currently tends to use 50GB virtual and 36GB resident, which is bad for everything else sharing the machine. (This is #865660)
  • Alexander Couzens working on testing LEDE:
    • Add multiple architectures / targets.
    • Limit LEDE and OpenWrt jobs to only allow one build at the same time using the jenkins build blocker plugin.
    • Move git log -1 > .html to node_document_environment().
    • Update TODOs.
  • Mattia Rizzolo working on testing Debian:
    • Updated the maintenance script for postgresql-9.6.
    • Restored the postgresql database from backup after Holger accidentally purged it.
  • Holger Levsen working on:
    • Merging all the above commits.
    • Added a check to detect (known) Jenkins zombie jobs and report them. (This is a long-known problem with Jenkins; deleted jobs sometimes come back…)
    • Upgraded the remaining amd64 nodes to stretch.
    • Accidentally purged postgres-9.4 from jenkins, so we could test our backups ;-)
    • Updated our stretch upgrade TODOs.

Misc.

This week's edition was written by Chris Lamb, Ximin Luo, Holger Levsen, Bernhard Wiedemann, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Worse Than FailureClassic WTF: The Proven Fix

It's a holiday weekend in the US, and I can't think of a better way to celebrate the history of the US than by having something go terribly wrong in a steel foundry. AMERICA! (original)--Remy

Photo Credit: Bryan Ledgard @ flickr

There are lots of ways to ruin a batch of steel.

Just like making a cake, add in too much of one ingredient, add an ingredient at the wrong time, or heat everything to the wrong temperature, and it could all end in disaster. But in the case of a steel mill, we're talking about a 150 ton cake made of red-hot molten iron that's worth millions of dollars. Obviously, the risk of messing things up is a little bit higher. So, to help keep potential financial disaster at bay, the plants remove part of the human error factor and rely upon automated systems to keep things humming along smoothly. Systems much like the ones made by the company where Robert M. was a development manager.

The systems that Robert's group developed were not turnkey solutions; instead the software that they produced was intended to interact with the plant's hardware at a very low level. Because of this — and the fact that nobody wanted to be "that guy" who caused a plant to lose a crapton of money — all bug fixes and enhancements had to run through a plant simulator system first. While the arrangement worked well, Robert was always grateful when he was allowed to bring on additional help. And this is why he was very interested when he heard that Vijay would be coming onto his team.

Considerable Generosity!

The company was run by the founder and his son. Phil, the son, was in charge of the "Advanced Technologies" group, which developed all sorts of neural networks and other things that might be found on the latest cover of ComputerWorld; they sounded very impressive but never quite became products. All of them had advanced degrees in computer science and other, related fields. Perhaps not coincidentally, all of the money-making products were developed and maintained by people without advanced degrees.

"I'm telling you Robert, you're going to really appreciate having Vijay around," said Phil, "He interviewed real strong and, get this. He has a PhD in Computer Science! You'll be thanking me by the end of next week, you'll see!"

Vijay's tenure, however, was temporary in nature. Until a project was available to truly engage Vijay's vast knowledge, he would be "on loan" to Robert's team to "improve his group's knowledge base" while gaining valuable real-world experience. "You'll want to exercise some care and feeding with that boy," warned Phil, "he likes to be right. But I have faith that you'll be able to handle him when he arrives first thing tomorrow!"

Welcome Aboard

When Vijay arrived on the scene, there was no mistaking him. His sunglasses and clothing made him seem like he jumped out of the latest J.Crew catalog, but his swagger and facial hair made him look more like Dog the Bounty Hunter.

Robert greeted Vijay warmly and started showing him around the office, introducing him to the receptionist, the other developers on his team, how to join the coffee club and other good information like that. Once acquainted with the development environment, Vijay's first assignment was to fix a minor bug. It was the sort of problem that you give to the new guy so that he can start to learn the code base. And later that afternoon, Vijay came to Robert's desk to announce that he had fixed the bug.

"Glad to hear that you tracked down the bug," Phil responded enthusiastically. "Did you run into any problems running the fixed code through the simulator?"

"I didn't need to!" replied Vijay, "it's a proven fix. It's perfect!"

Robert looked to a nearby colleague wondering if perhaps he had missed something Vijay had said.

"Vijay, it's not that I doubt your skills, but what exactly do you mean by a 'proven fix'?"

 

Bullet Proofing the Proof

Vijay left Robert's office and returned with a notebook. He explained that, basically, the idea is that you create a mathematical proof via a formal method that describes a system. What he had done was create a proof of how the system functioned, and then recalculate after writing the actual fix.

While patting his notebook, he smiled and concluded, "therefore, this is my proof that the code will work — in writing!"

"Vijay,you work is quite impressive," Robert began, "but would you mind if we went over to the simulator and tried it out?"

Vijay was agreeable to the proposition and joined Robert at the simulator. As per their normal test plan, Robert installed the patch and set up the conditions for the bug. Vijay watched on silently with a very stoic "of course it will work, you idiot" expression during the whole process. And then his mood suddenly changed when Robert found that the bug was still there.

"You see - the annealing temperature that appears on the operator's screen still reads -255F. Perhaps—", Robert began before being abruptly cut off.

"But I proved that it was correct!" exclaimed Vijay while stabbing his notebook with his pointer finger, "Now you see here! Page 3! There! SEE?!?!"

Quietly, Robert offered, "Maybe there is an error in your proof," but it did no good. Muttering under his breath, Vijay stormed back to his desk and spent the remainder of the day (literally) pounding on his keyboard.

The next morning, the founder's son, Phil, stopped by Robert's office. "Bad news Robert," he opened, "as of this morning, Vijay will no longer be working with your group. He was pretty upset that your software was defective and that it didn't work when he fixed it... even when his fix was proven to be correct."

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianRaphaël Hertzog: My Free Software Activities in June 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

I was allocated 12 hours to work on security updates for Debian 7 Wheezy. During this time I did the following:

  • Released DLA-983-1 and DLA-984-1 on tiff3/tiff to fix 4 CVE. I also updated our patch set to get back in sync with upstream since we had our own patches for a while and upstream ended up using a slightly different approach. I checked that the upstream fix did really fix the issues with the reproducer files that were available to us.
  • Handled CVE triage for a whole week.
  • Released DLA-1006-1 on libarchive (2 CVE fixed by Markus Koschany, one by me).

Debian packaging

Django. A last-minute update to Django in stretch before the release: I uploaded python-django 1:1.10.7-2, fixing two bugs (one of which was release-critical), and filed the corresponding unblock request.

schroot. I tried to prepare another last-minute update, this time for schroot. The goal was to fix the bash completion (#855283) and a problem encountered by the Debian sysadmins (#835104). Those issues are fixed in unstable/testing but my unblock request got turned into a post-release stretch update because the release managers wanted to give the package some more testing time in unstable. Even now, they are wondering whether they should accept the new systemd service file.

live-build, live-config and live-boot. On live-build, I merged a patch to add a keyboard shortcut for the advanced option menu entry (#864386). For live-config, I uploaded version 5.20170623 to fix a broken boot sequence when you have multiple partitions (#827665). For live-boot, I uploaded version 1:20170623 to fix the path to udevadm (#852570) and to avoid a file duplication in the initrd (#864385).

zim. I packaged a release candidate (0.67~rc2) in Debian Experimental and started to use it. I quickly discovered two annoying regressions that I reported upstream (here and here).

logidee-tools. This is a package I authored a long time ago and that I’m no longer actively using. It does still work but I sometimes wonder if it still has real users. Anyway I wanted to quickly replace the broken dependency on pgf but I ended up converting the Subversion repository to Git and I also added autopkgtests. At least those tests will inform me when the package no longer works… otherwise I would not notice since I’m no longer using it.

Bugs filed. I filed #865531 on lintian because the new check testsuite-autopkgtest-missing is giving some bad advice and probably does its check in a bad way. I also filed #865541 on sbuild because sbuild --apt-distupgrade can under some circumstances remove build-essential and break the build chroot. I filed an upstream ticket on publican to forward the request I received in #864648.

Sponsorship. I sponsored a jessie update for php-tcpdf (#814030) and dolibarr 5.0.4+dfsg3-1 for unstable. I sponsored many other packages, but all in the context of the pkg-security team.

pkg-security work

Now that the Stretch freeze is over, the team became more active again and I have been overwhelmed with the number of packages to review and sponsor:

  • knocker
  • recon-ng
  • dsniff
  • libnids
  • rfdump
  • snoopy
  • dirb
  • wcc
  • arpwatch
  • dhcpig
  • backdoor-factory

I also updated hashcat to a new upstream release (3.6.0) and had to discuss its weird versioning change with upstream. Looks like we will have to introduce an epoch to be able to get back in sync with upstream. 🙁 To get in sync with Kali, I introduced a hashcat-meta source package (in contrib) providing hashcat-nvidia, to make it easy to install hashcat for owners of NVidia hardware.

Misc stuff

Distro Tracker. I merged a small CSS fix from Aurélien Couderc (#858101) and added a missing constraint on the data model (found through an unexpected traceback that I received by email). I also updated the list of repositories shortly after the stretch release (#865070).

Salt formulas. As part of my Kali work, I set up a build daemon on a Debian stretch host and encountered a couple of issues with my Salt rules. I reported one against salt-formula (here) and pushed updates for debootstrap-formula, apache-formula and schroot-formula.

Thanks

See you next month for a new summary of my activities.

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

Planet DebianFoteini Tsiami: Internationalization, part two

Now that sch-scripts has been renamed to ltsp-manager and translated to English, it was time to set up a proper project site for it in launchpad: http://www.launchpad.net/ltsp-manager

The following subpages were created for LTSP Manager there:

  • Code: a review of all the project code, which currently only includes the master git repository.
  • Bugs: a tracker where bugs can be reported. I already filed a few bugs there!
  • Translations: translators will use this to localize LTSP Manager to their languages. It’s not quite ready yet.
  • Answers: a place to ask the LTSP Manager developers for anything regarding the project.

We currently have an initial version of LTSP Manager running in Debian Stretch, although more testing and more bug reports will be needed before we start the localization phase. Attaching a first screenshot!

 


Planet DebianArturo Borrero González: Netfilter Workshop 2017: I'm new coreteam member!


I was invited to attend the Netfilter Workshop 2017 in Faro, Portugal this week, so I’m here with all the folks enjoying some days of talks, discussions and hacking around Netfilter and general linux networking.

The Coreteam of the Netfilter project, with active members Pablo Neira Ayuso (head), Jozsef Kadlecsik, Eric Leblond and Florian Westphal, has invited me to join them, and the appointment happened today.

You may contact me now at my new email address: arturo@netfilter.org

This is the result of my continued contribution to the Netfilter project over several years now (probably since 2012-2013). I’m really happy with this, and I appreciate their recognition. I will do my best in this new position. Thanks!

Regarding the workshop itself, we are having lots of interesting talks and discussions about the state of the Netfilter technology, open issues, missing features and where to go in the future.

Really interesting!

Don Martitransactions from a future software market

More on the third connection in Benkler’s Tripod, which was pretty general. This is just some notes on more concrete examples of how new kinds of direct connections between markets and peer production might work in the future.

Smart contracts should make it possible to enable these in a trustworthy, mostly decentralized, way.

Feature request I want emoji support on my blog, so I file, or find, a wishlist bug on the open source blog package I use: "Add emoji support." I then offer to enter into a smart contract that will be worthless to me if the bug is fixed on September 1, or give me my money back if the bug is unfixed at that date.

A developer realizes that fixing the bug would be easy, and wants to do it, so takes the other side of the contract. The developer's side will expire worthless if the bug is unfixed, and pay out if the bug is fixed.

"Unfixed" results will probably include bugs that are open, wontfix, invalid, or closed as duplicate of a bug that is still open.

"Fixed" results will include bugs closed as fixed, or any bug closed as a duplicate of a bug that is closed as fixed.

If the developer fixes the bug, and its status changes to fixed, then I lose money on the smart contract but get the feature I want. If the bug status is still unfixed, then I get my money back.
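To make the mechanics concrete, here is a minimal sketch of that contract logic; every name is hypothetical, and escrow is modeled as winner-takes-the-pot, which reduces to "money back" when the other side stakes nothing:

from dataclasses import dataclass, field

@dataclass
class Bug:
    status: str                 # "open", "fixed", "wontfix", "invalid", "duplicate"
    duplicate_of: "Bug" = None  # set when status == "duplicate"

def resolve(bug: Bug) -> str:
    # Follow duplicate chains; anything not ultimately fixed is "unfixed".
    while bug.status == "duplicate":
        bug = bug.duplicate_of
    return "fixed" if bug.status == "fixed" else "unfixed"

@dataclass
class BugFuture:
    bug: Bug
    stakes: dict = field(default_factory=lambda: {"fixed": {}, "unfixed": {}})

    def take_side(self, trader: str, side: str, amount: float) -> None:
        self.stakes[side][trader] = self.stakes[side].get(trader, 0.0) + amount

    def settle(self) -> dict:
        # Winners split the whole pot pro rata to their stakes.
        side = resolve(self.bug)
        pot = sum(sum(s.values()) for s in self.stakes.values())
        winners = self.stakes[side]
        total = sum(winners.values())
        if total == 0:
            return {}  # nobody on the winning side; policy left open
        return {t: pot * a / total for t, a in winners.items()}

# The emoji example: the user funds "unfixed" (a refund if nothing happens),
# the developer takes "fixed" and collects both stakes on success.
bug = Bug(status="open")
contract = BugFuture(bug)
contract.take_side("user", "unfixed", 50.0)
contract.take_side("developer", "fixed", 50.0)
bug.status = "fixed"
print(contract.settle())  # {'developer': 100.0}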

So far this is just one user paying one developer to write a feature. Not especially exciting. There is some interesting market design work to be done here, though. How can the developer signal serious interest in working on the bug, and get enough upside to be meaningful, without taking too much risk in the event the fix is not accepted on time?

Arbitrage I post the same offer, but another user realizes that the blog project can only support emoji if the template package that it depends on supports them. That user becomes an arbitrageur: takes the "fixed" side of my offer, and the "unfixed" side of the "Add emoji support" bug in the template project.

As an end user, I don't have to know the dependency relationship, and the market gives the arbitrageur an incentive to collect information about multiple dependent bugs into the best place to fix them.

Front-running Dudley Do-Right's open source project has a bug in it, users are offering to buy the "unfixed" side of the contract in order to incentivize a fix, and a trader realizes that Dudley would be unlikely to let the bug go unfixed. The trader takes the "fixed" side of the contract before Dudley wakes up. The deal means that the market gets information on the likelihood of the bug being fixed, but the developer doing the work does not profit from it.

This is a "picking up nickels in front of a steamroller" trading strategy. The front-runner is accepting the risk of Dudley burning out, writing a long Medium piece on how open source is full of FAIL, and never fixing a bug again.

Front-running game theory could be interesting. If developers get sufficiently annoyed by front-running, they could delay fixing certain bugs until after the end of the relevant contracts. A credible threat to do this might make front-runners get out of their positions at a loss.

CVE prediction A user of a static analysis tool finds a suspicious pattern in a section of a codebase, but cannot identify a specific vulnerability. The user offers to take one side of a smart contract that will pay off if a vulnerability matching a certain pattern is found. A software maintainer or key user can take the other side of these contracts, to encourage researchers to disclose information and focus attention on specific areas of the codebase.

Security information leakage Ernie and Bert discover a software vulnerability. Bert sells it to foreign spies. Ernie wants to get a piece of the action, too, but doesn't want Bert to know, so he trades on a relevant CVE prediction. Neither Bert nor the foreign spies know who is making the prediction, but the market movement gives white-hat researchers a clue on where the vulnerability can be found.

Open source metrics: Prices and volumes on bug futures could turn out to be a more credible signal of interest in a project than raw activity numbers. It may be worth using a bot to trade on a project you depend on, just to watch the market move. Likewise, new open source metrics could provide useful trading strategies. If sentiment analysis shows that a project is melting down, offer to take the "unfixed" side of the project's long-running bugs? (Of course, this is the same market action that incentivizes fixes, so betting that a project will fail is the same thing as paying them not to. My brain hurts.)

What's an "oracle"?

The "oracle" is the software component that moves information from the bug tracker to the smart contracts system. Every smart contract has to be tied to a given oracle that both sides trust to resolve it fairly.

For CVE prediction, the oracle is responsible for pattern matching on new CVEs, and feeding the info into the smart contract system. As with all of these, CVE prediction contracts are tied to a specific oracle.
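As a sketch of what such an oracle's polling loop might look like; the feed URL, the JSON shape, and the report_match() hook are all assumptions made for illustration:

import json
import re
import urllib.request

# Watch newly published CVEs for a pattern and report matches to the
# contract system, which then resolves any contracts tied to this oracle.
PATTERN = re.compile(r"heap overflow.*image parser", re.I)
FEED = "https://example.org/cve-feed.json"  # stand-in for a real CVE feed

def poll(report_match):
    with urllib.request.urlopen(FEED) as resp:
        for cve in json.load(resp):
            if PATTERN.search(cve.get("description", "")):
                report_match(cve["id"])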

Bots

Bots might have several roles.

  • Move investments out of duplicate bugs. (Take a "fixed" position in the original and an "unfixed" position in the duplicate, or vice versa.)

  • Make small investments in bugs that appear valid based on project history and interactions by trusted users.

  • Track activity across projects and social sites to identify qualified bug fixers who are unlikely to fix a bug within the time frame of a contract, and take "unfixed" positions on bugs relevant to them.

  • For companies: when a bug is mentioned in an internal customer support ticketing system, buy "unfixed" on that bug. Map confidential customer needs to possible fixers.

Don MartiApplying proposed principles for content blocking

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

In 2015, Denelle Dixon at Mozilla wrote Proposed Principles for Content Blocking.

The principles are:

  • Content Neutrality: Content blocking software should focus on addressing potential user needs (such as on performance, security, and privacy) instead of blocking specific types of content (such as advertising).

  • Transparency & Control: The content blocking software should provide users with transparency and meaningful controls over the needs it is attempting to address.

  • Openness: Blocking should maintain a level playing field and should block under the same principles regardless of source of the content. Publishers and other content providers should be given ways to participate in an open Web ecosystem, instead of being placed in a permanent penalty box that closes off the Web to their products and services.

See also Nine Principles of Policing by Sir Robert Peel, who wrote,

[T]he police are the public and that the public are the police, the police being only members of the public who are paid to give full-time attention to duties which are incumbent on every citizen in the interests of community welfare and existence.

Web browser developers have similar responsibilities to those of Peel's ideal police: to build a browser to carry out the user's intent, or, when setting defaults, to understand widely held user norms and implement those, while giving users the affordances to change the defaults if they choose.

The question now is how to apply content blocking principles to today's web environment. Some qualities of today's situation are:

  • Tracking protection often doesn't have to be perfect, because adfraud. The browser can provide some protection, and influence the market in a positive direction, just by getting legit users below the noise floor of fraudbots.

  • Tracking protection has the potential to intensify a fingerprinting arms race that's already going on, by forcing more adtech to rely on fingerprinting in place of third-party cookies.

  • Fraud is bad, but not all anti-fraud is good. Anti-fraud technologies that track users can create the same security risks as other tracking—and enable adtech to keep promising real eyeballs on crappy sites. The "flight to quality" approach to anti-fraud does not share these problems.

  • Adtech and adfraud can peek at Mozilla's homework, but Mozilla can't see theirs. Open source projects must rely on unpredictable users, not unpredictable platform decisions, to create uncertainty.

Which suggests a few tactics—low-risk ways to apply content blocking principles to address today's adtech/adfraud problems.

Empower WebExtensions developers and users. Much of the tracking protection and anti-fingerprinting magic in Firefox is hidden behind preferences. This makes a lot of sense because it enables developers to integrate their work into the browser in parallel with user testing, and enables Tor Browser to do less patching. IMHO this work is also important to enable users to choose their own balance between privacy/security and breaking legacy sites.

Inform and nudge users who express an interest in privacy. Some users care about privacy, but don't have enough information about how protection choices match up with their expectations. If a user cares enough to turn on Do Not Track, change cookie settings, or install an ad blocker, then try suggesting a tracking protection setting or tool. Don't assume that, just because a user has installed an ad blocker with deceptive privacy settings, the user would not choose privacy if asked clearly.

Understand and report on adfraud. Adfraud is more than just fake impressions and clicks. New techniques include attribution fraud: taking advantage of tracking to connect a bogus ad impression to a real sale. The complexity of attribution models makes this hard to track down. (Criteo and Steelhouse settled a lawsuit about this before discovery could reveal much.)

A multi-billion-dollar industry is devoted to spreading a story that minimizes adfraud, while independent research hints at a complex and lucrative adfraud scene. Remember how there were two Methbot stories: Methbot got a bogus block of IP addresses, and Methbot circumvented some widely used anti-fraud scripts. The ad networks dealt with the first one pretty quickly, but the second is still a work in progress.

The more that Internet freedom lovers can help marketers understand adfraud, and related problems such as brand-unsafe ad placements, the more that the content blocking story can be about users, legit sites, and brands dealing with problem tracking, and not just privacy nerds against all web business.

Planet DebianJohn Goerzen: Time, Frozen

We’re expecting a baby any time now. The last few days have had an odd quality of expectation: any time, our family will grow.

It makes time seem to freeze, to stand still.

We have Jacob, about to start fifth grade and middle school. But here he is, still as sweet and affectionate a kid as ever. He loves to care for cats and seeks them out often. He still keeps an eye out for the stuffed butterfly he’s had since he was an infant, and will sometimes carry it and a favorite blanket around the house. He will also many days prepare the “Yellow House News” on his computer, with headlines about his day and some comics pasted in — before disappearing to play with Legos for awhile.

And Oliver, who will walk up to Laura and “give baby a hug” many times throughout the day — and sneak up to me, try to touch my arm, and say “doink” before running off before I can “doink” him back. It was Oliver that had asked for a baby sister for Christmas — before he knew he’d be getting one!

In the past week, we’ve had out the garden hose a couple of times. Both boys will enjoy sending mud down our slide, or getting out the “water slide” to play with, or just playing in mud. The rings of dirt in the bathtub testify to the fun that they had. One evening, I built a fire, we made brats and hot dogs, and then Laura and I sat visiting and watching their water antics for an hour after, laughter and cackles of delight filling the air, and cats resting on our laps.

These moments, or countless others like Oliver’s baseball games, flying the boys to a festival in Winfield, or their cuddles at bedtime, warm the heart. I remember their younger days too, with fond memories of taking them camping or building a computer with them. Sometimes a part of me wants to just keep soaking in things just as they are; being a parent means both taking pride in children’s accomplishments as they grow up, and sometimes also missing the quiet little voice that can be immensely excited by a caterpillar.

And yet, all four of us are so excited and eager to welcome a new life into our home. We are ready. I can’t wait to hold the baby, or to lay her to sleep, to see her loving and excited older brothers. We hope for a smooth birth, for mom and baby.

Here is the crib, ready, complete with a mobile with a cute bear (and even a plane). I can’t wait until there is a little person here to enjoy it.

,

Planet DebianAntoine Beaupré: My free software activities, June 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This time I worked on Mercurial, sudo and Puppet.

Mercurial remote code execution

I issued DLA-1005-1 to resolve problems with the hg server --stdio command that could be abused by "remote authenticated users to launch the Python debugger, and consequently execute arbitrary code, by using --debugger as a repository name" (CVE-2017-9462).

Backporting the patch was already a little tricky because, as is often the case in our line of work, the code had changed significantly in newer versions. In particular, the command-line dispatcher had been refactored, which made the patch non-trivial to port. On the other hand, mercurial has an extensive test suite which allowed me to make those patches with full confidence. I also backported a part of the test suite to detect certain failures better and to fix the output so that it matches the backported code. The test suite is slow, however, which meant slow progress when working on this package.

I also noticed a strange issue with the test suite: all hardlink operations would fail. Somehow it seems that my new sbuild setup doesn't support doing hardlinks. I ended up building a tarball schroot to build those types of packages, as it seems the issue is related to the use of overlayfs in sbuild. The odd part is that my tests of overlayfs, following those instructions, show that it does support hardlinks, so there may be something fishy here that I misunderstand.
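A quick probe along these lines (a hypothetical sketch, run inside the build environment) helps tell a filesystem limitation apart from a test-suite bug:

import os
import tempfile

# Create a file and try to hardlink it in the current directory; on a
# filesystem without hardlink support, os.link() raises OSError.
with tempfile.TemporaryDirectory(dir=".") as d:
    src = os.path.join(d, "a")
    dst = os.path.join(d, "b")
    open(src, "w").close()
    try:
        os.link(src, dst)
        print("hardlinks OK:", os.path.samefile(src, dst))
    except OSError as e:
        print("hardlink failed:", e)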

This, however, allowed me to get a little more familiar with sbuild and the schroots. I also took this opportunity to optimize the builds by installing an apt-cacher-ng proxy to speed up builds, which will also be useful for regular system updates.

Puppet remote code execution

I have issued DLA-1012-1 to resolve a remote code execution attack against puppetmaster servers, from authenticated clients. To quote the advisory: "Versions of Puppet prior to 4.10.1 will deserialize data off the wire (from the agent to the server, in this case) with a attacker-specified format. This could be used to force YAML deserialization in an unsafe manner, which would lead to remote code execution."

The fix was non-trivial. Normally, this would have involved fixing the YAML parsing, but this was considered problematic because the ruby libraries themselves were vulnerable and it wasn't clear we could fix the problem completely by fixing YAML parsing. The update I proposed took the bold step of switching all clients to PSON and simply deny YAML parsing from the server. This means all clients need to be updated before the server can be updated, but thankfully, updated clients will run against an older server as well. Thanks to LeLutin at Koumbit for helping in testing patches to solve this issue.

Sudo privilege escalation

I have issued DLA-1011-1 to resolve an incomplete fix for a privilege escalation issue (CVE-2017-1000368 from CVE-2017-1000367). The backport was not quite trivial as the code had changed quite a lot since wheezy as well. Whereas mercurial's code was more complex, it's nice to see that sudo's code was actually simpler and more straightforward in newer versions, which is reassuring. I uploaded the packages for testing and released them a day later.

I also took extra time to share the patch in the Debian bugtracker, so that people working on the issue in stable may benefit from the backported patch, if needed. One issue that came up during that work is that sudo doesn't have a test suite at all, so it is quite difficult to test changes and make sure they do not break anything.

Should we upload on fridays?

I brought up a discussion on the mailing list regarding uploads on fridays. With the sudo and puppet uploads pending, it felt really ... daring to upload both packages, on a friday. Years of sysadmin work hardwired me to be careful on fridays; as the saying goes: "don't deploy on a friday if you don't want to work on the weekend!"

Feedback was great, but I was surprised to find that most people are not worried about those issues. I have tried to counter some of the arguments that were brought up: I wonder if there could be a disconnect here between the package maintainer / programmer work and the sysadmin work that is at the receiving end of that work. Having had to deal with broken updates myself in the past, I'm surprised this has never come up in the discussions yet, or that the response is so underwhelming.

So far, I'll try to balance the need for prompt security updates and the need for stable infrastructure. One does not, after all, go without the other...

Triage

I also did small fry triage:

Hopefully some of those will come to fruition shortly.

Other work

My other work this month was a little all over the place.

Stressant

Uploaded a new release (0.4.1) of stressant to split up the documentation from the main package, as the main package was taking up too much space according to grml developers.

The release also introduces a limited anonymity option, which blocks the display of serial numbers in the smartctl output.

Debiman

Also did some small followup on the debiman project to fix the FAQ links.

Local server maintenance

I upgraded my main server to Debian stretch. This generally went well, although the upgrade itself took way more time than I would have liked (4 hours!). This is partly because I have a lot of cruft installed on the server, but also because of what I consider to be issues in the automation of major Debian upgrades. For example, I was prompted for changes in configuration files at seemingly random moments during the upgrade, and got different debconf prompts to answer. This should really be batched together, and unfortunately I had forgotten to use the home-made script I put together when I was working at Koumbit, which shortens the upgrade a bit.

I wish we would improve on our major upgrade mechanism. I documented possible solutions for this in the AutomatedUpgrade wiki page, but I'm not sure I see exactly where to go from here.

I had a few regressions after the upgrade:

  • the infrared remote control stopped working: still need to investigate
  • my home-grown full-disk encryption remote unlocking script broke, but upstream has a nice workaround, see Debian bug #866786
  • gdm3 breaks bluetooth support (Debian bug #805414 - to be fair, this is not a regression in stretch, it's just that I switched my workstation from lightdm to gdm3 after learning that the latter can do rootless X11!)

Docker and Subsonic

I did my first (and late?) foray into Docker and containers. My rationale was that I wanted to try out Subsonic, an impressive audio server which some friends have shown me. Since Subsonic is proprietary, I didn't want it to contaminate the rest of my server and it seemed like a great occasion to try out containers to keep things tidy. Containers may also allow me to transparently switch to the FLOSS fork LibreSonic once the trial period is over.

I have learned a lot and may write more about the details of that experience soon; for now you can look at the contributions I made to the unofficial Subsonic docker image, but also the LibreSonic one.

Since Subsonic also promotes album covers as first-class citizens, I used beets to download a lot of album covers, which was really nice. I look forward to using beets more, but first I'll need to implement two plugins.

Wallabako

I did a small release of wallabako to fix the build with the latest changes in the underlying wallabago library, which led me to ask upstream to make versioned releases.

I also looked into creating a separate documentation site but it looks like mkdocs doesn't like me very much: the table of contents is really ugly...

Small fry

That's about it! And that was supposed to be a slow month...

Planet DebianBen Hutchings: Debian LTS work, June 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 5 hours. I worked all 20 hours.

I spent most of my time working - together with other Linux kernel developers - on backporting and testing several versions of the fix for CVE-2017-1000364, part of the "Stack Clash" problem. I uploaded two updates to linux and issued DLA-993-1 and DLA-993-2. Unfortunately the latest version still causes regressions for some applications, which I will be investigating this month.

I also released a stable update on the Linux 3.2 longterm stable branch (3.2.89) and prepared another (3.2.90) which I released today.

Sociological ImagesTransfers of military equipment to police increases civilian and canine death

Human beings are prone to a cognitive bias called the Law of the Instrument. It’s the tendency to see everything as being malleable according to whatever tool you’re used to using. In other words, if you have a hammer, suddenly all the world’s problems look like nails to you.

Objects humans encounter, then, aren’t simply useful to them, or instrumental; they are transformative: they change our view of the world. Hammers do. And so do guns. “You are different with a gun in your hand,” wrote the philosopher Bruno Latour, “the gun is different with you holding it. You are another subject because you hold the gun; the gun is another object because it has entered into a relationship with you.”

In that case, what is the effect of transferring military grade equipment to police officers?

In 1996, the federal government passed a law giving the military permission to donate excess equipment to local police departments. Starting in 1998, millions of dollars’ worth of equipment was transferred each year. Then, after 9/11, there was a huge increase in transfers. In 2014, they amounted to the equivalent of 796.8 million dollars.

Image via The Washington Post:

Those concerned about police violence worried that police officers in possession of military equipment would be more likely to use violence against civilians, and new research suggests that they’re right.

Political scientist Casey Delehanty and his colleagues compared the number of civilians killed by police with the monetary value of transferred military equipment across 455 counties in four states. Controlling for other factors (e.g., race, poverty, drug use), they found that killings rose along with increasing transfers. In the case of the county that received the largest transfer of military equipment, killings more than doubled.

But maybe they got it wrong? High levels of military equipment transfers could be going to counties with rising levels of violence, such that it was increasingly violent confrontations that caused the transfers, not the transfers causing the confrontations.

Delehanty and his colleagues controlled for the possibility that they were pointing the causal arrow the wrong way by looking at the killings of dogs by police. Police forces wouldn’t receive military equipment transfers in response to an increase in violence by dogs, but if the police were becoming more violent as a response to having military equipment, we might expect more dogs to die. And they did.

Combined with research showing that police who use military equipment are more likely to be attacked themselves, literally everyone will be safer if we reduce transfers and remove military equipment from the police arsenal.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianVincent Bernat: Performance progression of IPv4 route lookup on Linux

TL;DR: Each of Linux 2.6.39, 3.6 and 4.0 brings notable performance improvements for the IPv4 route lookup process.


In a previous article, I explained how Linux implements an IPv4 routing table with compressed tries to offer excellent lookup times. The following graph shows the performance progression of Linux through history:

IPv4 route lookup performance

Two scenarios are tested:

  • 500,000 routes extracted from an Internet router (half of them are /24), and
  • 500,000 host routes (/32) tightly packed in 4 distinct subnets.

All kernels are compiled with GCC 4.9 (from Debian Jessie). This version is able to compile older kernels [1] as well as current ones. The kernel configuration used is the default one with CONFIG_SMP and CONFIG_IP_MULTIPLE_TABLES options enabled (however, no IP rules are used). Some other unrelated options are enabled to be able to boot them in a virtual machine and run the benchmark.

The measurements are done in a virtual machine with one vCPU [2]. The host is an Intel Core i5-4670K and the CPU governor was set to “performance”. The benchmark is single-threaded. Implemented as a kernel module, it calls fib_lookup() with various destinations in 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC (and converted to nanoseconds by assuming a constant clock).
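As a rough user-space illustration of the conversion and the median step (the 3.4 GHz figure matches the i5-4670K's nominal clock but is an assumption here, as are the sample values):

from statistics import median

TSC_HZ = 3.4e9  # assumed-constant TSC frequency

def ticks_to_ns(ticks: float) -> float:
    return ticks * 1e9 / TSC_HZ

# Stand-in for 100,000 measured lookup timings, in TSC ticks; the median
# suppresses occasional outliers (interrupts, cache effects).
raw_deltas = [120, 118, 119, 500, 121, 118]
print("median lookup: %.1f ns" % ticks_to_ns(median(raw_deltas)))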

The following kernel versions bring a notable performance improvement:

  • In Linux 2.6.39, commit 3630b7c050d9, David Miller removes the hash-based routing table implementation to switch to the LPC-trie implementation (available since Linux 2.6.13 as a compile-time option). This brings a small regression for the scenario with many host routes but boosts the performance for the general case.

  • In Linux 3.0, commit 281dc5c5ec0f, the improvement is not related to the network subsystem. Linus Torvalds disables the compiler size-optimization from the default configuration. It was believed that optimizing for size would help keeping the instruction cache efficient. However, compilers generated under-performing code on x86 when this option was enabled.

  • In Linux 3.6, commit f4530fa574df, David Miller adds an optimization to not evaluate IP rules when they are left unconfigured. From this point, the use of the CONFIG_IP_MULTIPLE_TABLES option doesn’t impact the performances unless some IP rules are configured. This version also removes the route cache (commit 5e9965c15ba8). However, this has no effect on the benchmark as it directly calls fib_lookup() which doesn’t involve the cache.

  • In Linux 4.0, notably commit 9f9e636d4f89, Alexander Duyck adds several optimizations to the trie lookup algorithm. It really pays off!

  • In Linux 4.1, commit 0ddcf43d5d4a, Alexander Duyck collapses the local and main tables when no specific IP rules are configured. For non-local traffic, both those tables were looked up.


  1. Compiling old kernels with an updated userland may still require some small patches

  2. The kernels are compiled with the CONFIG_SMP option to use the hierarchical RCU and activate more of the same code paths as actual routers. However, progress on parallelism goes unnoticed by this single-threaded benchmark. 

CryptogramA Man-in-the-Middle Attack against a Password Reset System

This is nice work: "The Password Reset MitM Attack," by Nethanel Gelernter, Senia Kalma, Bar Magnezi, and Hen Porcilan:

Abstract: We present the password reset MitM (PRMitM) attack and show how it can be used to take over user accounts. The PRMitM attack exploits the similarity of the registration and password reset processes to launch a man in the middle (MitM) attack at the application level. The attacker initiates a password reset process with a website and forwards every challenge to the victim who either wishes to register in the attacking site or to access a particular resource on it.

The attack has several variants, including exploitation of a password reset process that relies on the victim's mobile phone, using either SMS or phone call. We evaluated the PRMitM attacks on Google and Facebook users in several experiments, and found that their password reset process is vulnerable to the PRMitM attack. Other websites and some popular mobile applications are vulnerable as well.

Although solutions seem trivial in some cases, our experiments show that the straightforward solutions are not as effective as expected. We designed and evaluated two secure password reset processes and evaluated them on users of Google and Facebook. Our results indicate a significant improvement in the security. Since millions of accounts are currently vulnerable to the PRMitM attack, we also present a list of recommendations for implementing and auditing the password reset process.

Password resets have long been a weak security link.

BoingBoing Post.

Worse Than FailureCodeSOD: Classic WTF: When the Query String is Just Not Enough

It's a holiday weekend in the US, and as we prepare for the 4th of July, we have some query strings that are worth understanding. (original)--Remy

As Stephen A.'s client was walking him through their ASP.NET site, Stephen noticed a rather odd URL scheme. Instead of using the standard Query String -- i.e., http://their.site/Products/?ID=2 -- theirs used some form of URL-rewriting utilizing the "@" symbol in the request name: http://their.site/Products/@ID=2.aspx. Not being an expert on Search Engine Optimization, Stephen had just assumed it had something to do with that.

A few weeks later, when Stephen finally had a chance to take a look at the code, he noticed something rather different...

That's right; every "dynamic-looking" page was, in fact, static. It was going to be a long maintenance contract...

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Don MartiSoftware: annoying speech or crappy product?

Zeynep Tufekci, in the New York Times:

Since most software is sold with an “as is” license, meaning the company is not legally liable for any issues with it even on day one, it has not made much sense to spend the extra money and time required to make software more secure quickly.

The software business is still stuck on the kind of licensing that might have made sense in the 8-bit micro days, when "personal computer productivity" was more aspirational than a real thing, and software licenses were printed on the backs of floppy sleeves.

Today, software is part of products that do real stuff, and it makes zero sense to ship a real product, that people's safety or security depends on, with the fine print "WE RESERVE THE RIGHT TO TOTALLY HALF-ASS OUR JOBS" or in business-speak, "SELLER DISCLAIMS THE IMPLIED WARRANTY OF MERCHANTABILITY."

But what about open source and collaboration and science, and all that stuff? Software can be both "product" and "speech". Should there be a warranty on speech? If I dig up my shell script for re-running the make command when a source file changes, and put it on the Internet, should I be putting a warranty on it?

It seems that there are two kinds of software: some is more product-like, and should have a grown-up warranty on it like a real business. And some software is more speech-like, and should have ethical requirements like a scientific paper, but not a product-like warranty.

What's the dividing line? Some ideas.

"productware is shipped as executables, freespeechware is shipped as source code" Not going to work for elevator_controller.php or a home router security tool written in JavaScript.

"productware is preinstalled, freespeechware is downloaded separately" That doesn't make sense when even implanted defibrillators can update over the net.

"productware is proprietary, freespeechware is open source" Companies could put all the fragile stuff in open source components, then use the DMCA and CFAA to enable them to treat the whole compilation as proprietary.

Software companies are built to be good at getting around rules. If a company can earn all its money in faraway Dutch Sandwich Land and be conveniently too broke to pay the IRS in the USA, then it's going to be hard to make it grow up licensing-wise without hurting other people first.

How about splitting out the legal advantages that the government offers to software and extending some to productware, others to freespeechware?

Freespeechware licenses

  • license may disclaim implied warranty

  • no anti-reverse-engineering clause in a freespeechware license is enforceable

  • freespeechware is not a "technological protection measure" under section 1201 of Title 17 of the United States Code (DMCA anticircumvention)

  • exploiting a flaw in freespeechware is never a violation of the Computer Fraud and Abuse Act

  • If the license allows it, a vendor may sell freespeechware, or a derivative work of it, as productware. (This could be as simple as following the "You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee" term of the GPL.)

Productware licenses:

  • license may not disclaim implied warranty

  • licensor and licensee may agree to limit reverse engineering rights

  • DMCA and CFAA apply (reformed of course, but that's another story)

It seems to me that there needs to be some kind of quid pro quo here. If a company that sells software wants to use government-granted legal powers to control its work, that has to be conditioned on not using those powers just to protect irresponsible releases.

,

Planet Linux AustraliaOpenSTEM: The Science of Cats

Ah, the comfortable cat! Most people agree that cats are experts at being comfortable and getting the best out of life, with the assistance of their human friends – but how did this come about? Geneticists and historians are continuing to study how cats and people came to live together and how cats came to organise themselves into such a good deal in their relationship with humans. Cats are often allowed liberties that few other animals, even domestic animals, can get away with – they are fed and usually pampered with comfortable beds (including human furniture), are kept warm, cuddled on demand; and, very often, are not even asked to provide anything except affection (on their terms!) in return. Though often thought of as solitary animals, cats actually have much more complex social behaviour, and recently further insights have been gained about how cats and humans came to enjoy the relationship that they have today.

Many people know that the Ancient Egyptians came to certain agreements with cats – cats are depicted in some of their art and mummified cats have been found. It is believed that cats may have been worshipped as representatives of the Cat Goddess, Bastet – interestingly enough, a goddess of war! Statues of cats from Ancient Egypt emphasise their regal bearing and tendency towards supercilious expressions. Cats were present in Egyptian art by 1950 B.C. and it was long thought that Egyptians were the first to domesticate the cat. However, in 2004 a cat was found buried with a human on the island of Cyprus in the Mediterranean 9,500 years ago, making it the earliest known cat associated with humans. This date is many thousands of years earlier than the Egyptian cats. In 2008 a site in the Nile Valley was found which contained the remains of 6 cats – a male, a female and 4 kittens, which seemed to have been cared for by people about 6,000 years ago.

African Wild Cat, photo by Sonelle, CC-BY-SA

It is now fairly well accepted that cats domesticated people, rather than the other way round! Papers refer to cats as having "self-domesticated", which sounds in line with cat behaviour. Genetically, all modern cats are related to the African (also called Near Eastern) wild cats of 8,000 years ago. There was an attempt to domesticate leopard cats about 7,500 years ago in China, but none of these animals contributed to the genetic material of the world's modern cat populations. As humans in the Near East developed agriculture and started to live in settled villages, after 10,000 years ago, cats were attracted to these ready sources of food and more. The steady supply of food from agriculture allowed people to live in permanent villages. Unfortunately, these villages, stocked with food, also attracted other, less welcome animals, such as rats and mice, which were potential carriers of disease. The rats and mice were a source of food for the cats, who probably arrived in the villages as independent, nocturnal hunters, rather than being deliberately encouraged by people.

Detail of cat from tomb of Nebamun

Once cats were living in close proximity to people, trust developed and soon cats were helping humans in the hunt, as is shown in this detail from an Egyptian tomb painting on the right. Over time, cats became pets and part of the family and followed farmers from Turkey across into Europe, as well as being painted sitting under dining tables in Egypt. People started to interfere with the breeding of cats and it is now thought that the Egyptians selected more social, rather than more territorial cats. Contrary to the popular belief that cats are innately solitary, in fact African Wild Cats have complex social behaviour, much of which has been inherited by the domestic cat. African wild cats live in loosely affiliated groups made up mostly of female cats who raise kittens together. There are some males associated with the group, but they tend to visit infrequently and have a larger range, visiting several of the groups of females and kittens. The female cats take turns at nursing, looking after the kittens and hunting. The adult females share food only with their own kittens and not with the other adults. Cats recognise who belongs to their group and who doesn’t and tend to be aggressive to those outside the group. Younger cats are more tolerant of strangers, until they form their own groups. Males are not usually social towards each other, but occasionally tolerate each other in loose ‘brotherhoods’.

In our homes we form the social group, which may include one or more cats. If there is more than one cat, these may subdivide themselves into cliques or factions. Pairs of cats raised together often remain closely bonded and affectionate for life. Other cats (especially males) may isolate themselves from the group and not want to interact with other cats. Cats that are happy on their own do not need other cats for company. It is more common to find stressed cats in multi-cat households. Cats will tolerate other cats best if they are introduced when young. After 2 years of age cats are less tolerant of newcomers to the group. Humans take the place of parents in their cats’ lives. Cats who grow up with humans retain some psychological traits from kittenhood and never achieve full psychological maturity.

At the time that humans were learning to manipulate the environment to their own advantage by domesticating plants and animals, cats started learning to manipulate us. They have now managed to achieve very comfortable and prosperous lives with humans and have followed humans around the planet. Cats arrived in Australia with the First Fleet, having found a very comfortable niche on sailing ships helping to control vermin. Matthew Flinders’ cat, Trim, became famous as a result of the book Flinders wrote about him. However, cats have had a devastating effect on the native wildlife of Australia. They kill millions of native animals every year – by some estimates, millions each night. It is thought that they have been responsible for the extinction of a number of native mouse and small marsupial species. Cats are very efficient and deadly nocturnal hunters. It is recommended that all cats be kept restrained indoors or in runs, especially at night. We must not forget that our cuddly companions are still carnivorous predators.

Krebs on Security: Is it Time to Can the CAN-SPAM Act?

Regulators at the U.S. Federal Trade Commission (FTC) are asking for public comment on the effectiveness of the CAN-SPAM Act, a 14-year-old federal law that seeks to crack down on unsolicited commercial email. Judging from an unscientific survey by this author, the FTC is bound to get an earful.


Signed into law by President George W. Bush in 2003, the “Controlling the Assault of Non-Solicited Pornography and Marketing Act” was passed in response to a rapid increase in junk email marketing.

The law makes it a misdemeanor to spoof the information in the “from:” field of any marketing message, and prohibits the sending of sexually-oriented spam without labeling it “sexually explicit.” The law also requires spammers to offer recipients a way to opt-out of receiving further messages, and to process unsubscribe requests within 10 business days.

The “CAN” in CAN-SPAM was a play on the verb “to can,” as in “to put an end to,” or “to throw away,” but critics of the law often refer to it as the YOU-CAN-SPAM Act, charging that it essentially legalized spamming. That’s partly because the law does not require spammers to get permission before they send junk email. But also because the act prevents states from enacting stronger anti-spam protections, and it bars individuals from suing spammers except under laws not specific to email.

Those same critics often argue that the law is rarely enforced, although a search on the FTC’s site for CAN-SPAM press releases produces quite a few civil suits brought by the commission against marketers over the years. Nevertheless, any law affecting Internet commerce is bound to need a few tweaks over the years, and CAN-SPAM has been showing its age for some time now.

Ron Guilmette, an anti-spam activist whose work has been profiled extensively on this blog, didn’t sugar-coat it, calling CAN-SPAM “a travesty that was foisted upon the American people by a small handful of powerful companies, most notably AOL and Microsoft, and by their obedient lackeys in Congress.”

According to Guilmette, the Act was deliberately fashioned so as to nullify California’s more restrictive anti-spam law, and it made it impossible for individual victims of spam to sue spam senders. Rather, he said, that right was reserved only for the same big companies that lobbied heavily for the passage of the CAN-SPAM Act.

“The entire Act should be thrown out and replaced,” Guilmette said. “It hasn’t worked to control spam, and it has in fact only served to make the problem worse.”

In the fix-it-don’t-junk-it camp is Joe Jerome, policy counsel for the Electronic Frontier Foundation (EFF), a nonprofit digital rights advocacy group. Jerome allowed that CAN-SPAM is far from perfect, but he said it has helped to set some ground rules.

“In her announcement on this effort, Acting Chairman Ohlhausen hinted that regulations can be excessive, outdated, or unnecessary,” Jerome said. “Nothing can be further from the case with respect to spam. CAN-SPAM was largely ineffective in stopping absolutely bad, malicious spammers, but it’s been incredibly important in creating a baseline for commercial email senders. Advertising transparency and easy opt-outs should not be viewed as a burden on companies, and I’d worry that weakening CAN-SPAM would set us back. If anything, we need stronger standards around opt-outs and quicker turn-around time, not less.”

Dan Balsam, an American lawyer who’s made a living suing spammers, has argued that CAN-SPAM is nowhere near as tough as it needs to be on junk emailers. Balsam argues that spammy marketers win as long as the federal law leaves enforcement up to state attorneys general, the FTC and Internet service providers.

“I would tell the FTC that it’s a travesty that Congress purports to usurp the states’ traditional authority to regulate advertising,” Balsam said via email. “I would tell the FTC that it’s ridiculous that the CAN-SPAM Act allows spam unless/until the person opts out, unlike e.g. Canada’s law. And I would tell the FTC that the CAN-SPAM Act isn’t working because there’s still obviously a spam problem.”

Cisco estimates that 65 percent of all email sent today is spam. That’s down only slightly from 2004 when CAN-SPAM took effect. At the time, Postini Inc. — an email filtering company later bought by Google — estimated that 70 percent of all email was spam.

Those figures may be misleading because a great deal of spam today is simply malicious email. Nobody harbored any illusions that CAN-SPAM could do much to stop the millions of phishing scams, malware and booby-trapped links being blasted out each day by cyber criminals. This type of spam is normally relayed via hacked servers and computers without the knowledge of their legitimate owners. Also, while the world’s major ISPs have done a pretty good job blocking most pornography spam, it’s still being blasted out en masse from some of the same criminals who are pumping malware spam.

Making life more miserable and expensive for malware spammers and phishers has been a major focus of my work, both here at KrebsOnSecurity and in my book, Spam Nation: The Inside Story of Organized Cybercrime. Stay tuned later this week for the results of a lengthy investigation into a spam gang that has stolen millions of Internet addresses to ply their trade (a story, by the way, that prominently features the work of the above-quoted anti-spammer Ron Guilmette).

What do you think about the CAN-SPAM law? Sound off in the comments below, and consider leaving a copy of your thoughts at the FTC’s CAN-SPAM comments page.

Planet Debian: Bits from Debian: New Debian Developers and Maintainers (May and June 2017)

The following contributors got their Debian Developer accounts in the last two months:

  • Alex Muntada (alexm)
  • Ilias Tsitsimpis (iliastsi)
  • Daniel Lenharo de Souza (lenharo)
  • Shih-Yuan Lee (fourdollars)
  • Roger Shimizu (rosh)

The following contributors were added as Debian Maintainers in the last two months:

  • James Valleroy
  • Ryan Tandy
  • Martin Kepplinger
  • Jean Baptiste Favre
  • Ana Cristina Custura
  • Unit 193

Congratulations!

Planet Debian: Ritesh Raj Sarraf: apt-offline 1.8.1 released

apt-offline 1.8.1 released.

This is a bug fix release fixing some python3 glitches related to module imports. Recommended for all users.

apt-offline (1.8.1) unstable; urgency=medium

  * Switch setuptools to invoke py3
  * No more argparse needed on py3
  * Fix genui.sh based on comments from pyqt mailing list
  * Bump version number to 1.8.1

 -- Ritesh Raj Sarraf <rrs@debian.org>  Sat, 01 Jul 2017 21:39:24 +0545
 

What is apt-offline

Description: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running Windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.
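
For the unfamiliar, a typical round trip looks something like this (a sketch of the usual three subcommands; exact option names may vary between versions, so check apt-offline --help on yours):

# On the disconnected machine: record what APT needs
apt-offline set /tmp/apt-offline.sig --update --upgrade

# On any internet-connected machine (need not be running Debian):
apt-offline get /tmp/apt-offline.sig --bundle /tmp/apt-offline.zip

# Back on the disconnected machine: feed the downloads to APT
apt-offline install /tmp/apt-offline.zip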

 


Planet Debian: Junichi Uekawa: PLC network connection seems to be less reliable.

PLC network connection seems to be less reliable. Maybe it's too hot. Reconnecting it seems to make it better. Hmmm..

,

Planet Debian: Thorsten Alteholz: My Debian Activities in June 2017

FTP assistant

This month I marked 100 packages for accept and rejected zero packages. I promise, I will reject more in July!

Debian LTS

This was my thirty-sixth month of working for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload went down again to 16h. During that time I did LTS uploads of:

  • [DLA 994-1] zziplib security update for seven CVEs
  • [DLA 995-1] swftools security update for two CVEs
  • [DLA 998-1] c-ares security update for one CVE
  • [DLA 1008-1] libxml2 security update for five CVEs

I also tested the proposed apache2 package prepared by Roberto and started to work on a new bind9 upload.

Last but not least I had five days of frontdesk duties.

Other stuff

This month was filled with updating lots of machines to Stretch. Everything went fine, so thanks a lot to everybody involved in that release.

Nevertheless I also did a DOPOM upload and now take care of otpw. Luckily most of the accumulated bugs already contained a patch, so that I just had to upload a new version and close them.

Planet Debian: Daniel Silverstone: F/LOSS activity, June 2017

It seems to be becoming popular to send a mail each month detailing your free software work for that month. I have been slowly ramping my F/LOSS activity back up, after years away while I completed my degree. My final exam for that was in June 2017, and as such I am now in a position to try and get on with more F/LOSS work.

My focus, as you might expect, has been on Gitano, which reached 1.0 in time for Stretch's release and is now heading gently toward a 1.1 release timed for Debconf 2017. My friend and colleague Richard has been working hard on Gitano and related components during this time too, and I hope that Debconf will be an opportunity for him to meet many of my Debian friends. But enough of that, back to the F/LOSS.

We've been running Gitano developer days roughly monthly since March of 2017, and the June developer day was attended by myself, Richard Maw, and Richard Ipsum. You are invited to read the wiki page for the developer day if you want to know exactly what we got up to, but a summary of my involvement that day is:

  • I chaired the review of our current task state for the project
  • I chaired the decision on the 1.1 timeline.
  • I completed a code branch which adds rudimentary hook support to Gitano and submitted it for code review.
  • I began to learn about git-multimail since we have a need to add support for it to Gitano

Other than that, related to Gitano during June I:

  • Reviewed Richard Ipsum's lua-scrypt patches for salt generation
  • Reviewed Richard Maw's work on centralising Gitano's patterns into a module.
  • Responded to reviews of my hook work, though I need to clean it up some before it'll be merged.

My non-Gitano F/LOSS related work in June has been entirely centred around the support I provide to the Lua community in the form of the Lua mailing list and website. The host on which it's run is ailing, and I've had to spend time trying to improve and replace that.

Hopefully I'll have more to say next month. Perhaps by doing this reporting I'll get more F/LOSS done. Of course, July's report will be sent out while I'm in Montréal for debconf 2017 (or at least for debcamp at that point) so hopefully more to say anyway.

Planet Debian: Keith Packard: DRM-lease-4

DRM leasing part three (vblank)

The last couple of weeks have been consumed by getting frame sequence numbers and events handled within the leasing environment (and Vulkan) correctly.

Vulkan EXT_display_control extension

This little extension provides the bits necessary for applications to track the display of frames to the user.

VkResult
vkGetSwapchainCounterEXT(VkDevice                    device,
                         VkSwapchainKHR              swapchain,
                         VkSurfaceCounterFlagBitsEXT counter,
                         uint64_t                    *pCounterValue);

This function just retrieves the current frame count from the display associated with swapchain.
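
As a rough usage sketch (my illustration, not code from the extension spec: it assumes VK_EXT_display_control was enabled at device creation, that device and swapchain already exist, and that the entry point is loaded with vkGetDeviceProcAddr as usual for extension functions; the vblank counter is the only counter the extension defines):

/* Sketch: query the current frame count for a swapchain. */
PFN_vkGetSwapchainCounterEXT pfnGetSwapchainCounterEXT =
    (PFN_vkGetSwapchainCounterEXT)
    vkGetDeviceProcAddr(device, "vkGetSwapchainCounterEXT");

uint64_t frame;
if (pfnGetSwapchainCounterEXT(device, swapchain,
                              VK_SURFACE_COUNTER_VBLANK_EXT,
                              &frame) == VK_SUCCESS) {
    /* 'frame' now holds the display's current frame counter. */
}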

VkResult
vkRegisterDisplayEventEXT(VkDevice                     device,
                          VkDisplayKHR                 display,
                          const VkDisplayEventInfoEXT  *pDisplayEventInfo,
                          const VkAllocationCallbacks  *pAllocator,
                          VkFence                      *pFence);

This function creates a fence that will be signaled when the specified event happens. Right now, the only event supported is when the first pixel of the next display refresh cycle leaves the display engine for the display. If you want something fancier (like two frames from now), you get to do that on your own using this basic function.
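
A matching sketch, under the same assumptions (pfnRegisterDisplayEventEXT is a locally loaded function pointer, as above; the returned fence then works with the ordinary fence-wait machinery):

/* Sketch: get a fence that signals at the next first-pixel-out. */
VkDisplayEventInfoEXT event_info = {
    .sType = VK_STRUCTURE_TYPE_DISPLAY_EVENT_INFO_EXT,
    .displayEvent = VK_DISPLAY_EVENT_TYPE_FIRST_PIXEL_OUT_EXT,
};
VkFence fence;
if (pfnRegisterDisplayEventEXT(device, display, &event_info,
                               NULL, &fence) == VK_SUCCESS) {
    /* Block until the first pixel of the next refresh leaves the display engine. */
    vkWaitForFences(device, 1, &fence, VK_TRUE, UINT64_MAX);
}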

drmWaitVBlank

drmWaitVBlank is the existing interface for all things sequence related and has three modes (always nice to have one function do three things, I think). It can:

  1. Query the current vblank number
  2. Block until a specified vblank number
  3. Queue an event to be delivered at a specific vblank number

This interface has a few issues:

  • It has been kludged into supporting multiple CRTCs by taking bits from the 'type' parameter to hold a 'pipe' number, which is the index in the kernel into the array of CRTCs.

  • It has a random selection of 'int' and 'long' datatypes in the interface, making it need special helpers for 32-bit apps running on a 64-bit kernel.

  • Times are in microseconds, frame counts are 32 bits. Vulkan does everything in nanoseconds and wants 64-bits of frame counts.

For leases, figuring out the index into the kernel list of crtcs is pretty tricky -- our lease has a subset of those crtcs, so we can't actually compute the global crtc index.

drmCrtcGetSequence

int drmCrtcGetSequence(int fd, uint32_t crtcId,
                       uint64_t *sequence, uint64_t *ns);

Here's a simple new function — hand it a crtc ID and it provides the current frame sequence number and the time when that frame started (in nanoseconds).

drmCrtcQueueSequence

int drmCrtcQueueSequence(int fd, uint32_t crtcId,
                         uint32_t flags, uint64_t sequence,
                         uint64_t user_data);

struct drm_event_crtc_sequence {
    struct drm_event    base;
    __u64               user_data;
    __u64               time_ns;
    __u64               sequence;
};

This will cause a CRTC_SEQUENCE event to be delivered at the start of the specified frame sequence. That event will include the frame when the event was actually generated (in case it's late), along with the time (in nanoseconds) when that frame was started. The event also includes a 64-bit user_data value, which can be used to hold a pointer to whatever data the application wants to see in the event handler.

The 'flags' argument contains a combination of:

#define DRM_CRTC_SEQUENCE_RELATIVE          0x00000001  /* sequence is relative to current */
#define DRM_CRTC_SEQUENCE_NEXT_ON_MISS      0x00000002  /* Use next sequence if we've missed */
#define DRM_CRTC_SEQUENCE_FIRST_PIXEL_OUT   0x00000004  /* Signal when first pixel is displayed */

These are similar to the values provided for the drmWaitVBlank function, except I've added a selector for when the event should be delivered to align with potential future additions to Vulkan. Right now, the only time you can ask for is first-pixel-out, which says that the event should correspond to the display of the first pixel on the screen.
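
Putting the two new calls together, here is a minimal user-space sketch (my illustration, not code from the series: error handling is elided, the fd is assumed to be a DRM fd covering the CRTC, and DRM_EVENT_CRTC_SEQUENCE is assumed to be the event type the kernel delivers for queued sequences):

#include <stdint.h>
#include <unistd.h>
#include <xf86drm.h>    /* drmCrtcGetSequence, drmCrtcQueueSequence */

/* Sketch: queue an event for the next frame on a CRTC, then wait for it. */
static void wait_next_frame(int fd, uint32_t crtc_id)
{
    uint64_t sequence, ns;

    drmCrtcGetSequence(fd, crtc_id, &sequence, &ns);
    drmCrtcQueueSequence(fd, crtc_id, DRM_CRTC_SEQUENCE_FIRST_PIXEL_OUT,
                         sequence + 1, 0 /* user_data */);

    char buf[1024];
    ssize_t len = read(fd, buf, sizeof buf);    /* blocks until events arrive */
    for (ssize_t i = 0; i < len; ) {
        struct drm_event *e = (struct drm_event *) &buf[i];
        if (e->type == DRM_EVENT_CRTC_SEQUENCE) {
            struct drm_event_crtc_sequence *s =
                (struct drm_event_crtc_sequence *) e;
            /* s->sequence is the frame that just started;
             * s->time_ns is when it started, in nanoseconds. */
        }
        i += e->length;
    }
}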

DRM events → Vulkan fences

With the kernel able to deliver a suitable event at the next frame, all the Vulkan code needed was to create a fence and hook it up to such an event. The existing fence code only deals with rendering fences, so I added window system interface (WSI) fencing infrastructure and extended the radv driver to be able to handle both kinds of fences within that code.

Multiple waiting threads

I've now got three places which can be waiting for a DRM event to appear:

  1. Frame sequence fences.

  2. Wait for an idle image. Necessary when you want an image to draw the next frame to.

  3. Wait for the previous flip to complete. The kernel can only queue one flip at a time, so we have to make sure the previous flip is complete before queuing another one.

Vulkan allows these to be run from separate threads, so I needed to deal with multiple threads waiting for a specific DRM event at the same time.

XCB has the same problem and goes to great lengths to manage this with a set of locking and signaling primitives so that only one thread is ever doing poll or read from the socket at a time. If another thread wants to read at the same time, it will block on a condition variable which is then signaled by the original reader thread at the appropriate time. It's all very complicated, and it didn't work reliably for a number of years.

I decided to punt and just create a separate thread for processing all DRM events. It blocks using poll(2) until some events are readable, processes those and then broadcasts to a condition variable to notify any waiting threads that 'something' has happened. Each waiting thread simply checks for the desired condition and if not satisfied, blocks on that condition variable. It's all very simple looking, and seems to work just fine.
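
In outline, the pattern looks something like this (a simplified sketch of the shape, not the actual driver code; process_drm_events() is a hypothetical stand-in for whatever reads the fd and updates shared state):

#include <poll.h>
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t event_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  event_cond = PTHREAD_COND_INITIALIZER;

void process_drm_events(int fd);    /* hypothetical: drains and handles events */

/* The single event thread: wait for the fd to become readable, handle
 * whatever arrived, then wake every waiter to re-check its condition. */
static void *event_thread(void *arg)
{
    int fd = *(int *) arg;
    struct pollfd pfd = { .fd = fd, .events = POLLIN };

    for (;;) {
        poll(&pfd, 1, -1);
        pthread_mutex_lock(&event_lock);
        process_drm_events(fd);
        pthread_cond_broadcast(&event_cond);
        pthread_mutex_unlock(&event_lock);
    }
    return NULL;
}

/* Any number of waiting threads: block until their own condition holds. */
static void wait_for(bool (*done)(void *), void *data)
{
    pthread_mutex_lock(&event_lock);
    while (!done(data))
        pthread_cond_wait(&event_cond, &event_lock);
    pthread_mutex_unlock(&event_lock);
}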

Code Complete, Validation Remains

At this point, all of the necessary pieces are in place for the VR application to take advantage of an HMD using only existing Vulkan extensions. Those will be automatically mapped into DRM leases and DRM events as appropriate.

The VR compositor application is working pretty well; tests with Dota 2 show occasional jerky behavior in complex scenes, so there's clearly more work to be done somewhere. I need to go write a pile of tests to independently verify that my code is working. I wonder if I'll need to wire up some kind of light sensor so I can actually tell when frames get displayed as it's pretty easy to get consistent-but-wrong answers in this environment.

Source Code

  • Linux. This is based off of a reasonably current drm-next branch from Dave Airlie. 965 commits past 4.12 RC3.

    git://people.freedesktop.org/~keithp/linux drm-lease-v3

  • X server (which includes xf86-video-modesetting). This is pretty close to master.

    git://people.freedesktop.org/~keithp/xserver drm-lease

  • RandR protocol changes

    git://people.freedesktop.org/~keithp/randrproto drm-lease

  • xcb proto (no changes to libxcb sources, but it will need to be rebuilt)

    git://people.freedesktop.org/~keithp/xcb/proto drm-lease

  • DRM library. About a dozen patches behind master.

    git://people.freedesktop.org/~keithp/drm drm-lease

  • Mesa. Branched early this month (4 June), this is pretty far from master.

    git://people.freedesktop.org/~keithp/mesa drm-lease

Planet Debian: Russ Allbery: End of month haul

For some reason, June is always incredibly busy for me. It's historically the most likely month in which I don't post anything here at all except reviews (and sometimes not even that). But I'm going to tell you about what books I bought (or was given) on the very last day of the month to break the pattern of no journal postings in June.

Ted Chiang — Arrival (Stories of Your Life) (sff collection)
Eoin Colfer — Artemis Fowl (sff)
Philip K. Dick — The Man Who Japed (sff)
Yoon Ha Lee — Raven Strategem (sff)
Paul K. Longmore — Why I Burned My Book (nonfiction)
Melina Marchetta — The Piper's Son (mainstream)
Jules Verne — For the Flag (sff, sort of)

This is a more than usually eclectic mix.

The Chiang is his Stories of Your Life collection reissued under the title of Arrival to cash in on the huge success of the movie based on one of his short stories. I'm not much of a short fiction reader, but I've heard so many good things about Chiang that I'm going to give it a try.

The Longmore is a set of essays about disability rights that has been on my radar for a while. I finally got pushed into buying it (plus the first Artemis Fowl book and the Marchetta) because I've been reading back through every review Light has written. (I wish I were even close to that amusingly snarky in reviews, and she manages to say in a few paragraphs what usually takes me a whole essay.)

Finally, the Dick and the Verne were gifts from a co-worker from a used book store in Ireland. Minor works by both authors, but nice, old copies of the books.

Planet Debian: Russ Allbery: Review: Make It Stick

Review: Make It Stick, by Peter C. Brown, et al.

Author: Peter C. Brown
Author: Henry L. Roediger III
Author: Mark A. McDaniel
Publisher: Belknap Press
Copyright: 2014
ISBN: 0-674-72901-3
Format: Kindle
Pages: 255

Another read for the work book club.

"People generally are going about learning in the wrong ways." This is the first sentence of the preface of this book by two scientists (Roediger and McDaniel are both psychology researchers specializing in memory) and a novelist and former management consultant (Brown). The goal of Make It Stick is to apply empirical scientific research to the problem of learning, specifically retention of information for long-term use. The authors aim to convince the reader that subjective impressions of the effectiveness of study habits are highly deceptive, and that scientific evidence points strongly towards mildly counter-intuitive learning methods that don't feel like they're producing as good of results.

I have such profound mixed feelings about this book.

Let's start with the good. Make It Stick is a book containing actual science. The authors quote the studies, results, and scientific argument at length. There are copious footnotes and an index, as well as recommended reading. And the science is concrete and believable, as is the overlaid interpretation based on cognitive and memory research.

The book's primary argument is that short-term and long-term memory are very different things, that what we're trying to achieve when we say "learning" is based heavily on long-term memory and recall of facts for an extended time after study, and that building this type of recall requires not letting our short-term memory do all the work. We tend towards study patterns that show obvious short-term improvement and that produce an increased feeling of effortless recall of the material, but those study patterns are training short-term memory and mean the knowledge slips away quickly. Choosing learning methods that instead make us struggle a little with what we're learning are significantly better. It's that struggle that leads to committing the material to long-term memory and building good recall pathways for it.

On top of this convincingly-presented foundation, the authors walk through learning methods that feel worse in the moment but have better long-term effects: mixing practice of different related things (different types of solids when doing geometry problems, different pitches in batting practice) and switching types before you've mastered the one you're working on, forcing yourself to interpret and analyze material (such as writing a few paragraphs of summary in your own words) instead of re-reading it, and practicing material at spaced intervals far enough apart that you've forgotten some of the material and have to struggle to recall it. Possibly the most useful insight here (at least for me) was the role of testing in learning, not as just a way of measuring progress, but as a learning tool. Frequent, spaced, cumulative testing forces exactly the type of recall that builds long-term memory. The tests themselves help improve our retention of what we're learning. It's bad news for people like me who were delighted to leave school and not have to take a test again, but viewing tests as a more effective learning tool than re-reading and review (which they are) does cast them in a far more positive light.

This is all solid stuff, and I'm very glad the research underlying this book exists and that I now know about it. But there are some significant problems with its presentation.

The first is that there just isn't much here. The two long paragraphs above summarize nearly all of the useful content of this book. The authors certainly provide more elaboration, and I haven't talked about all of the study methods they mention or some of the useful examples of their application. But 80% of it is there, and the book is intentionally repetitive (because it tries to follow the authors' advice on learning theory). Make It Stick therefore becomes tedious and boring, particularly in the first four chapters. I was saying a lot of "yes, yes, you said that already" and falling asleep while trying to read it. The summaries at the end of the book are a bit better, but you will probably not need most of this book to get the core ideas.

And then there's chapter five, which ends in a train wreck.

Chapter five is on cognitive biases, and I see why the authors wanted to include it. The Dunning-Kruger effect is directly relevant to their topic. It undermines our ability to learn, and is yet another thing that testing helps avoid. Their discussion of Daniel Kahneman's two system theory (your fast, automatic, subconscious reactions and your slow, thoughtful, conscious processing) is somewhat less directly relevant, but it's interesting stuff, and it's at least somewhat related to the short-term and long-term memory dichotomy. But some of the stories they choose to use to illustrate this are... deeply unfortunate. Specifically, the authors decided to use US police work in multiple places as their example of choice for two-system thinking, and treat it completely uncritically.

Some of you are probably already wincing because you can see where this is going.

They interview a cop who, during scenario training for traffic stops, was surprised by the car trunk popping open and a man armed with a shotgun popping out of it. To this day, he still presses down on the trunk of the car as he walks up; it's become part of his checklist for every traffic stop. This would be a good example if the authors realized how badly his training has failed and deconstructed it, but they're apparently oblivious. I wanted to reach into the book and shake them. People have a limited number of things they can track and follow as part of a procedure, and some bad trainer has completely wasted part of this cop's attention in every traffic stop and thereby made him less safe! Just calculate the chances that someone would be curled up in an unlocked trunk with a shotgun and a cop would just happen to stop that car for some random reason, compared to any other threat the cop could use that same attention to watch for. This is exactly the type of scenario that's highly memorable but extremely improbable and therefore badly breaks human risk analysis. It's what Bruce Schneier calls a movie plot threat. The correct reaction to movie plot threats is to ignore them; wasting effort on mitigating them means not having that effort to spend on mitigating some other less memorable but more likely threat.

This isn't the worst, though. The worst is the very next paragraph, also from police training, of showing up at a domestic call, seeing an armed person on the porch who stands up and walks away when ordered to drop their weapon, and not being sure how to react, resulting in that person (in the simulated exercise) killing the cop before they did anything. The authors actually use this as an example of how the cop was using system two and needed to train to use system one in that situation to react faster, and that this is part of the point of the training.

Those of us who have been paying attention to the real world know what using system one here means: the person on the porch gets shot if they're black and doesn't get shot if they're white. The authors studiously refuse to even hint at this problem.

I would have been perfectly happy if this book avoided the unconscious bias aspect of system one thinking. It's a bit far afield of the point of the book, and the authors are doubtless trying to stay apolitical. But that's why you pick some other example. You cannot just drop this kind of thing on the page and then refuse to even comment on it! It's like writing a chapter about the effect of mass transit on economic development, choosing Atlanta as one of your case studies, and then never mentioning race.

Also, some editor seriously should have taken an ax to the sentence where the authors (for no justified reason) elaborate a story to describe a cop maiming a person, solely to make a cliched joke about how masculinity is defined by testicles and how people who lose body parts are less human. Thanks, book.

This was bad enough that it dominated my memory of this chapter, but, reviewing the book for this review, I see it was just a few badly chosen examples at the end of the chapter and one pointless story at the start. The rest of the chapter is okay, although it largely summarizes things covered better in other books. The most useful part that's relevant to the topic of the book is probably the discussion of peer instruction. Just skip over all the police bits; you won't be missing anything.

Thankfully, the rest of the book mostly avoids failing quite this hard. Chapter six does open with the authors obliviously falling for a string of textbook examples of survivorship bias (immediately after the chapter on cognitive biases!), but they shortly thereafter settle down to the accurate and satisfying work of critiquing theories of learning methods and types of intelligence. And by critiquing, I mean pointing out that they're mostly unscientific bullshit, which is fighting the good fight as far as I'm concerned.

So, mixed feelings. The science seems solid, and is practical and directly applicable to my life. Make It Stick does an okay job at presenting it, but gets tedious and boring in places, particularly near the beginning. And there are a few train-wreck examples that had me yelling at the book and scribbling notes, which wasn't really the cure for boredom I was looking for. I recommend being aware of this research, and I'm glad the authors wrote this book, but I can't really recommend the book itself as a reading experience.

Rating: 6 out of 10

Planet Debian: Paul Wise: FLOSS Activities June 2017

Changes

Issues

Review

Administration

  • Debian: redirect 2 users to support channels, redirect 1 person to the mirrors team, investigate SMTP TLS question, fix ACL issue, restart dead exim4 service
  • Debian mentors: service restarts, security updates & reboot
  • Debian QA: deploy my changes
  • Debian website: release related rebuilds, rebuild installation-guide
  • Debian wiki: whitelist several email addresses, whitelist 1 domain
  • Debian package tracker: deploy my changes
  • Debian derivatives census: deploy my changes
  • Openmoko: security updates & reboots.

Communication

Sponsors

All work was done on a volunteer basis.

Rondam Ramblings: McConnell's Monster

Like a movie monster that keeps rising from the dead long after you think it has been dispatched, the American Health Care Act, and the Senate's sequel, the Better Care Reconciliation Act, simply refuse to die. And also like movie monsters, if they are released from the laboratory into the world, they will maim and kill thousands of innocent people. The numbers change from week to week, but the

,

Planet Debian: Chris Lamb: Free software activities in June 2017

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Support Debian "buster". (commit)
    • Set TRAVIS=true environment variable when running autopkgtests. (#45)
  • Updated the documentation in django-slack, my library to easily post messages to the Slack group-messaging utility to link to Slack's own message formatting documentation. (#66)
  • Added "buster" support to local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the DebConf configuration tool. (commit)

Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source. Multiple third-parties then can come to a consensus on whether a build was compromised or not.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Chaired our monthly IRC meeting. (Summary, logs, etc.)
  • Presented at Hong Kong Open Source Conference 2017.
  • Presented at LinuxCon China.
  • Submitted the following patches to fix reproducibility-related toolchain issues within Debian:
    • cracklib2: Ensuring /var/cache/cracklib/src-dicts are reproducible. (#865623)
    • fontconfig: Ensuring the cache files are reproducible. (#864082)
    • nfstrace: Make the PDF footers reproducible. (#865751)
  • Submitted 6 patches to fix specific reproducibility issues in cd-hit, janus, qmidinet, singularity-container, tigervnc & xabacus.
  • Submitted a wishlist request to the TeX mailing list to ensure that PDF files are reproducible even if generated from a difficult path, after identifying the underlying cause. (Thread)
  • Categorised a large number of packages and issues in the Reproducible Builds notes.git repository.
  • Worked on publishing our weekly reports. (#110, #111, #112 & #113)
  • Updated our website with 13 missing talks (e291180), updated the metadata for some existing talks (650a201) and added OpenEmbedded to the projects page (12dfcf0).

I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add libarchive-cpio-perl with the !nocheck build profile. (01e408e)
  • Add dpkg-dev dependency build profile. (f998bbe)


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list. However, I:


Debian LTS


This month I have been paid to work 16 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 974-1 fixing a command injection vulnerability in picocom, a dumb-terminal emulation program.
  • Issued DLA 972-1 which patches a double-free vulnerability in the openldap LDAP server.
  • Issued DLA 976-1 which corrects a buffer over-read vulnerability in the yodl ("Your Own Document Language") document processor.
  • Issued DLA 985-1 to address a vulnerability in libsndfile (a library for reading/writing audio files) where a specially-crafted AIFF file could result in an out-of-bounds memory read.
  • Issued DLA 990-1 to fix an infinite loop vulnerability in expat, an XML parsing library.
  • Issued DLA 999-1 for the openvpn VPN server — if clients used a HTTP proxy with NTLM authentication, a man-in-the-middle attacker could cause the client to crash or disclose stack memory that was likely to contain the proxy password.

Uploads

  • bfs (1.0.2-1) — New upstream release, add basic/smoke autopkgtests.
  • installation-birthday (5) — Add some basic autopkgtest smoke tests and correct the Vcs-{Git,Browser} headers.
  • python-django:
    • 1:1.11.2-1 — New upstream minor release & backport an upstream patch to prevent a test failure if the source is not writable. (#816435)
    • 1:1.11.2-2 — Upload to unstable, use !nocheck profile for build dependencies that are only required for tests and various packaging updates.

I also made the following non-maintainer uploads (NMUs):

  • kluppe (0.6.20-1.1) — Fix segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863421)
  • porg (2:0.10-1.1) — Fix broken LD_PRELOAD path for libporg-log.so. (#863495)
  • ganeti-instance-debootstrap (0.16-2.1) — Fix "illegal option for fgrep" error by using "--" to escape the search needle. (#864025)
  • pavuk (0.9.35-6.1) — Fix segmentation fault when opening the "Limitations" window due to pointer truncation in src/gtkmulticol.[ch]. (#863492)
  • timemachine (0.3.3-2.1) — Fix two segmentation faults in src/gtkmeter.c and gtkmeterscale.c caused by passing a truncated pointers using guint instead of a GtkType. (#863420)
  • jackeq (0.5.9-2.1) — Fix another segmentation fault caused by passing a truncated pointer instead of a GtkType. (#863416)

Debian bugs filed

  • debhelper: Don't run dh_installdocs if nodoc is specified in DEB_BUILD_PROFILES? (#865869)
  • python-blessed: Non-deterministically FTBFS due to unreliable timing in tests. (#864337)
  • apt: Please print a better error message if zero certificates are loaded from the system CA store. (#866377)

Cryptogram: Food Supplier Passes Squid Off as Octopus

According to a lawsuit (main article behind paywall), "a Miami-based food vendor and its supplier have been misrepresenting their squid as octopus in an effort to boost profits."

Krebs on Security: So You Think You Can Spot a Skimmer?

This week marks the 50th anniversary of the automated teller machine — better known to most people as the ATM or cash machine. Thanks to the myriad methods thieves have devised to fleece unsuspecting cash machine users over the years, there are now more ways than ever to get ripped off at the ATM. Think you’re good at spotting the various scams? A newly released ATM fraud inspection guide may help you test your knowledge.

The first cash machine opened for business on June 27, 1967 at a Barclays bank branch in Enfield, north London, but ATM transactions back then didn’t remotely resemble the way ATMs work today.

The first ATM was installed in Enfield, in North London, on June 27, 1967. Image: Barclays Bank.

The cash machines of 1967 relied not on plastic cards but instead on paper checks that the bank would send to customers in the mail. Customers would take those checks — which had little punched-card holes printed across the surface — and feed them into the ATM, which would then validate the checks and dispense a small amount of cash.

This week, Barclays turned the ATM at the same location gold to mark its golden anniversary, dressing the machine with velvet ropes and a red carpet leading up to the machine’s PIN pad.

The location of the world’s first ATM, turned to gold and commemorated with a plaque to mark the cash machine’s golden anniversary. Image: Barclays Bank.

Chances are, the users of that gold ATM have little to worry about from skimmer scammers. But the rest of us practically need a skimming-specific dictionary to keep up with today’s increasingly ingenious thieves.

These days there are an estimated three million ATMs around the globe, and a seemingly endless stream of innovative criminal skimming devices has introduced us over the years to a range of terms specific to cash machine scams like wiretapping, eavesdropping, card-trapping, cash-trapping, false fascias, shimming, black box attacks, bladder bombs (pump skimmers), gas attacks, and deep insert skimmers.

Think you’ve got what it takes to spot the telltale signs of a skimmer? Then have a look at the ATM Fraud Inspection Guide (PDF) from cash machine giant NCR Corp., which briefly touches on the most common forms of ATM skimming and their telltale signs.

For example, below are a few snippets from that guide showing different cash trapping devices made to siphon bills being dispensed from the ATM.

Cash-trapping devices. Source: NCR.

As sophisticated as many modern ATM skimmers may be, most of them can still be foiled by ATM customers simply covering the PIN pad with their hands while entering their PIN (the rare exceptions here involve expensive, complex fraud devices called “PIN pad overlays”).

The proliferation of skimming devices can make a trip to any ATM seem like a stressful experience, but keep in mind that skimmers aren’t the only thing that can go wrong at an ATM. It’s a good idea to visit only ATMs that are in well-lit and public areas, and to be aware of your surroundings as you approach the cash machine. If you visit a cash machine that looks strange, tampered with, or out of place, then try to find another ATM.

You are far more likely to encounter ATM skimmers over the weekend when the bank is closed (skimmer thieves especially favor long holiday weekends when the banks are closed on Monday). Also, if you have the choice between a stand-alone, free-standing ATM and one that is installed at a fixed location (particularly a bank) opt for the fixed-location machine, which is typically more secure against physical tampering.

"Deep insert" skimmers, top. Below, an ATM "shimming" device. Source: NCR.

“Deep insert” skimmers, top. Below, ATM “shimming” devices. Source: NCR.

Sociological Images: Race, femininity, and benign nature in a vintage tobacco ad

Flashback Friday.

In Race, Ethnicity, and Sexuality, Joane Nagel looks at how these characteristics are used to create new national identities and frame colonial expansion. In particular, White female sexuality, presented as modest and appropriate, was often contrasted with the sexuality of colonized women, who were often depicted as promiscuous or immodest.

This 1860s advertisement for Peter Lorillard Snuff & Tobacco illustrates these differences. According to Toby and Will Musgrave, writing in An Empire of Plants, the ad drew on a purported Huron legend of a beautiful white spirit bringing them tobacco.

There are a few interesting things going on here. We have the association of femininity with a benign nature: the women are surrounded by various animals (monkeys, a fox and a rabbit, among others) who appear to pose no threat to the women or to one another. The background is lush and productive.

Racialized hierarchies are embedded in the personification of the “white spirit” as a White woman, descending from above to provide a precious gift to Native Americans, similar to imagery drawing on the idea of the “white man’s burden.”

And as often occurred (particularly as we entered the Victorian Era), there was a willingness to put non-White women’s bodies more obviously on display than the bodies of White women. The White woman above is actually less clothed than the American Indian woman, yet her arm and the white cloth are strategically placed to hide her breasts and crotch. On the other hand, the Native American woman’s breasts are fully displayed.

So, the ad provides a nice illustration of the personification of nations with women’s bodies, essentialized as close to nature, but arranged hierarchically according to race and perceived purity.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.

(View original at https://thesocietypages.org/socimages)

Sam Varghese: Sage advice

A wise old owl sat on an oak;

The more he saw, the less he spoke;

The less he spoke, the more he heard;

Why aren’t we like this wise, old bird?

Cryptogram: Good Article About Google's Project Zero

Fortune magazine just published a good article about Google's Project Zero, which finds and publishes exploits in other companies' software products.

I have mixed feelings about it. The project does great work, and the Internet has benefited enormously from these efforts. But as long as it is embedded inside Google, it has to deal with accusations that it targets Google competitors.

Planet Debian: Arturo Borrero González: About the OutlawCountry Linux malware


Today I noticed the internet buzz about OutlawCountry, a new alleged Linux malware created by the CIA and leaked by Wikileaks.

The malware redirects traffic from the victim to a control server in order to spy or whatever. To redirect this traffic, they use simple Netfilter NAT rules injected in the kernel.

According to many sites commenting on the issue, it seems that there is something wrong with the Linux kernel Netfilter subsystem, but I read the leaked docs, and what they do is load a custom kernel module in order to be able to load Netfilter NAT table/rules with more priority than the default ones (overriding any config the system may have).

Isn’t that clear? The attacker is loading a custom kernel module as root on your machine. They don’t use Netfilter to break into your system. The problem is not Netfilter; the problem is your whole machine being under their control.

With root control of the machine, they could simply use any mechanism, like kpatch or whatever, to replace your whole running kernel with a new one, with full access to memory, networking, file system et al.

They probably use a rootkit or the like to take over the system.

Worse Than Failure: Error'd: Best Null I Ever Had

"Truly the best null I've ever had. Definitely would purchase again," wrote Andrew R.

 

"Apparently, the Department of Redundancy Department got a hold of the internet," writes Ken F.

 

Berend writes, "So, if I enter 'N' does that mean I'll be instantly hit by a death ray?"

 

"Move over, fake news, Google News has this thing," wrote Jack.

 

Evan C. writes, "I honestly wouldn't put it past parents in Canada to register their yet-to-be-born children for hockey 10 years in advance."

 

"I think that a problem has, um, something, to that computer," writes Tyler Z.

 


Planet Debian: Michal Čihař: Weblate 2.15

Weblate 2.15 has been released today. It is slightly behind schedule, mostly because of my vacation. As with 2.14, there are quite a lot of security improvements based on reports we got from our HackerOne program, as well as various new features.

Full list of changes:

  • Show more related translations in other translations.
  • Add option to see translations of current unit to other languages.
  • Use 4 plural forms for Lithuanian by default.
  • Fixed upload for monolingual files of different format.
  • Improved error messages on failed authentication.
  • Keep page state when removing word from glossary.
  • Added direct link to edit secondary language translation.
  • Added Perl format quality check.
  • Added support for rejecting reused passwords.
  • Extended toolbar for editing RTL languages.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Planet Debian: Daniel Pocock: A FOSScamp by the beach

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline, so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

LongNow: 10 Years Ago: Brian Eno’s 77 Million Paintings in San Francisco, 02007


Long Now co-founders Stewart Brand (center)
and Brian Eno (right) in San Francisco, 02007

Exactly a decade ago today, in June 02007, Long Now hosted the North American Premiere of Brian Eno’s 77 Million Paintings. It was a celebration of Eno’s unique generative art work, as well as the inaugural event of our newly launched Long Now Membership program.

Here’s how we described the large scale art work at the time:

Conceived by Brian Eno as “visual music”, his latest artwork 77 Million Paintings is a constantly evolving sound and imagescape which continues his exploration into light as an artist’s medium and the aesthetic possibilities of “generative software”.

We presented 77 Million Paintings over three nights at San Francisco’s Yerba Buena Center for the Arts (YBCA) on June 29 & 30 and July 1, 02007.

The Friday and Saturday shows were packed with about 900 attendees each. On Sunday we hosted a special Long Now members night. The crowd was smaller with only our newly joined charter members plus Long Now staff and Board, including the artist himself.

Long Now co-founder Brian Eno at his 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Brian Eno came in from London for the event. While he’d shown this work at the Venice Biennale, the Milan Triennale, and in Tokyo, London and South Africa, this was its first showing in the U.S. (or anywhere in North America). The actual presentation was a unique large scale projection created collaboratively with San Francisco’s Obscura Digital creative studio.

The installation was in a large, dark room accompanied by generative ambient Eno music. The audience could sit in chairs at the back of the room, sink into bean bags, or lie down on rugs or the floor closer to the screens. Like the Ambient Painting at The Interval or other examples of Eno’s generative visual art, a series of high resolution images slowly fade in and over each other and out again at a glacial pace. In brilliant colors, constantly transforming at a rate so slow it is difficult to track. Until you notice it’s completely different.

Close up of 77 Million Paintings at the opening in San Francisco, 02007; photo by Robin Rupe

Long Now Executive Director Alexander Rose also spoke:

About the work: Brian Eno discusses 77 Million Paintings with Wired News (excerpt):

Eno: The pieces surprise me. I have 77 Million Paintings running in my studio a lot of the time. Occasionally I’ll look up from what I’m doing and I think, “God, I’ve never seen anything like that before!” And that’s a real thrill.
Wired News: When you look at it, do you feel like it’s something that you had a hand in creating?
Eno: Well, I know I did, but it’s a little bit like if you are dealing hands of cards and suddenly you deal four aces. You know it’s only another combination that’s no more or less likely than any of the other combinations of four cards you could deal. Nonetheless, some of the combinations are really striking. You think, “Wow — look at that one.” Sometimes some combination comes up and I know it’s some result of this system that I invented, but nonetheless I didn’t imagine such a thing could be produced by it.

The exterior of Yerba Buena Center for the Arts (YBCA) has its own beautiful illumination:

There was a simultaneous virtual version of 77 Million Paintings ongoing in Second Life:

Here’s video of the Second Life version of 77 Million Paintings. Looking at it today gives you some sense of what the 02007 installation was like in person:

We brought the prototype of the 10,000 Year Clock’s Chime Generator to YBCA (this was 7 years before it was converted into a table for the opening of The Interval):

The Chime Generator was outfitted with tubular bells at the time:

10,000 Year Clock Chime Generator prototype at 77 Million Paintings in San Francisco, 02007; photo by Scott Beale

After two days open to the public, the closing night event was a performance and a party exclusively for Long Now members. Our membership program was brand new then, and many charter members joined just in time to attend the event. So a happy anniversary to all of you who are celebrating a decade as a Long Now member!

Brian Eno, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Members flew in from around the country for the event. Long Now’s Founders were all there. This began a tradition of Long Now special events with members which have included Longplayer / Long Conversation which was also at YBCA and our two Mechanicrawl events which explored of San Francisco’s mechanical wonders.

Here are a few more photos of Long Now staff and friends who attended:

Long Now co-founder Danny Hillis, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Burning Man co-founder Larry Harvey and Long Now co-founder Stewart Brand at 77 Million Paintings opening in San Francisco; photo by Scott Beale

Kevin Kelly and Louis Rosetto, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Lori Dorn and Jeffrey & Jillian of Because We Can at the 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Long Now staff Mikl-em & Danielle Engelman at the 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Thanks to Scott Beale / Laughing Squid, Robin Rupe, and myself for the photos used above. Please mouse over each to see the photo credit.

Brian Eno at 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

More photos from Scott Beale. More photos of the event by Mikl Em.

More on the production from Obscura Digital.

,

Cory Doctorow: I’ll see you this weekend at Denver Comic-Con!




I just checked in for my o-dark-hundred flight to Denver tomorrow morning for this weekend’s Denver Comic-Con, where I’m appearing for several hours on Friday, Saturday and Sunday, including panels with some of my favorite writers, like John Scalzi, Richard Kadrey, Catherynne Valente and Scott Sigler:


Friday:


* 1:30-2:30pm The Future is Here :: Room 402
How have recent near-future works fared in preparing us for the realities of the current day? What can near-future works being published now tell us about what’s coming?
Mario Acevedo, Cory Doctorow, Sherry Ficklin, Richard Kadrey, Dan Wells

* 2:30-4:30pm Signing :: Tattered Cover Signing Booth


* 4:30-5:30pm Fight The Power! Fiction For Political Change :: Room 402
Some authors incorporate political themes and beliefs into their stories. In this tumultuous political climate, even discussions of a political nature in fiction can resonate with readers, and could even be a source of change. Our panelists will talk about what they have done in their books to cause change, and the (desired) results.
Charlie Jane Anders, Cory Doctorow, John Scalzi, Emily Singer, Ausma Zehanat Khan

Saturday:

* 12:00-1:00pm The Writing Process of Best Sellers :: Room 407
The authors of today’s best sellers discuss their technical process and offer creative insight.
Eneasz Brodski, Cory Doctorow, John Scalzi, Catherynne Valente


* 1:30-2:30pm Creating the Anti-Hero :: Room 402
Pierce Brown, Delilah Dawson, Cory Doctorow, Melissa Koons, Catherynne Valente, Dan Wells

* 3:30-4:30pm Millennials Rising – YA Literature Today :: Room 402
Millennials both read and write YA, and they’re sculpting the genre to meet this generation’s needs. Authors of recent YA titles discuss writing for the modern YA audience and how their books contribute to the genre.
Pierce Brown, Delilah Dawson, Cory Doctorow, Melissa Koons, Catherynne Valente, Dan Wells

* 4:30-6:30pm Signing :: Tattered Cover Signing Booth

Sunday:

11:00am-12:00pm Economics, Value and Motivating your Character :: Room 407
How do money and economics figure into writing compelling characters? The search for money in our daily lives fashions our character, and in fiction it can be the cause of turning good people into bad, making characters do things that are, well, out-of-character. Money is the great motivator – find out how these authors use it to shape their characters and move the story along.
Dr. Jason Arentz, Cory Doctorow, Van Aaron Hughes, Matt Parrish, John Scalzi

12:30-2:30pm Signing :: Tattered Cover Signing Booth

3:00-4:00pm Urban Science Fiction :: DCCP4 – Keystone City Room
The sensibility and feel of Urban Fantasy, without the hocus-pocus. These tend to be stories that could take place tomorrow, dealing with the direct consequences of today’s technologies and societies. Urban Science Fiction authors discuss writing the world we don’t yet live in, but could!
Cory Doctorow, Sue Duff, Richard Kadrey, Cynthia Richards, Scott Sigler

(Image: Pat Loika, CC-BY)

CryptogramThe Women of Bletchley Park

Really good article about the women who worked at Bletchley Park during World War II, breaking German Enigma-encrypted messages.

CryptogramWebsites Grabbing User-Form Data Before It's Submitted

Websites are sending information prematurely:

...we discovered NaviStone's code on sites run by Acurian, Quicken Loans, a continuing education center, a clothing store for plus-sized women, and a host of other retailers. Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.

This is important because it goes against what people expect:

In yesterday's report on Acurian Health, University of Washington law professor Ryan Calo told Gizmodo that giving users a "send" or "submit" button, but then sending the entered information regardless of whether the button is pressed or not, clearly violates a user's expectation of what will happen. Calo said it could violate a federal law against unfair and deceptive practices, as well as laws against deceptive trade practices in California and Massachusetts. A complaint on those grounds, Calo said, "would not be laughed out of court."

This kind of thing is going to happen more and more, in all sorts of areas of our lives. The Internet of Things is the Internet of sensors, and the Internet of surveillance. We've long passed the point where ordinary people have any technical understanding of the different ways networked computers violate their privacy. Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs.

Worse Than FailureThe Agreement

In addition to our “bread and butter” of bad code, bad bosses, worse co-workers and awful decision-making, we always love the chance to turn out the occasional special event. This time around, our sponsors at Hired gave us the opportunity to build and film a sketch.

I’m super-excited for this one. It’s a bit more ambitious than some of our previous projects, and pulled together some of the best talent in the Pittsburgh comedy community to make it happen. Everyone who worked on it- on set or off- did an excellent job, and I couldn't be happier with the results.

Once again, special thanks to Hired, who not only helped us produce this sketch, but also helps us keep the site running. With Hired, instead of applying for jobs, your prospective employer will apply to interview you. You get placed in control of your job search, and Hired provides a “talent advocate” who can provide unbiased career advice and make sure you put your best foot forward. Sign up now, and find the best opportunities for your future with Hired.

And now, our feature presentation: The Agreement

Brought to you by:

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

,

Planet DebianSteve Kemp: Yet more linux security module craziness ..

I've recently been looking at linux security modules. My first two experiments helped me learn:

My First module - whitelist_lsm.c

This looked for the presence of an xattr, and if present allowed execution of binaries.

I learned about the Kernel build-system, and how to write a simple LSM.

My second module - hashcheck_lsm.c

This looked for the presence of a "known-good" SHA1 hash xattr, and if it matched the actual hash of the file on-disk allowed execution.

I learned how to hash the contents of a file, from kernel-space.

Both allowed me to learn things, but both were a little pointless. They were not fine-grained enough to allow different things to be done by different users. (i.e. If you allowed "alice" to run "wget" you'd also allow www-data to do the same.)

So, assuming you wanted to do your security job more neatly, what would you want? You'd want to allow/deny execution of commands based upon:

  • The user who was invoking them.
  • The path of the binary itself.

So your local users could run "bad" commands, but "www-data" (post-compromise) couldn't.

Obviously you don't want to have to recompile your kernel to change the rules of who can execute what. So you think to yourself "I'll write those rules down in a file". But of course reading a file from kernel-space is tricky. And parsing any list of rules, in a file, from kernel-space would be prone to buffer-related problems.

So I had a crazy idea:

  • When a user attempts to execute a program.
  • Call back to user-space to see if that should be permitted.
    • Give the user-space binary the UID of the invoker, and the path to the command they're trying to execute.

Calling userspace? Every time a command is to be executed? Crazy. But it just might work.

One problem I had with this approach is that userspace might not even be available when you're booting. So I set up a flag to enable this stuff:

# echo 1 >/proc/sys/kernel/can-exec/enabled

Now the kernel will invoke the following on every command:

/sbin/can-exec $UID $PATH

Because the kernel waits for this command to complete - as it reads the exit-code - you cannot execute any child-processes from it, as you'd end up in recursive hell; but you can certainly read files, write to syslog, etc. My initial implementation was as basic as this:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <syslog.h>

int main( int argc, char *argv[] )
{
  // ... argument-count checking elided ...

  // Get the UID + Program
  int uid = atoi( argv[1] );
  char *prg = argv[2];

  // syslog
  openlog ("can-exec", LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1);
  syslog (LOG_NOTICE, "UID:%d CMD:%s", uid, prg );

  // root can do all.
  if ( uid == 0 )
    return 0;

  // nobody
  if ( uid == 65534 ) {
    if ( ( strcmp( prg , "/bin/sh" ) == 0 ) ||
         ( strcmp( prg , "/usr/bin/id" ) == 0 ) ) {
      fprintf(stderr, "Allowing 'nobody' access to shell/id\n" );
      return 0;
    }
  }

  fprintf(stderr, "Denied\n" );
  return -1;
}

Although the UIDs are hard-coded, it actually worked! Yay!

I updated the code to convert the UID to a username, then check executables via the file /etc/can-exec/$USERNAME.conf, and this also worked.

I don't expect anybody to actually use this code, but I do think I've reached a point where I can pretend I've written a useful (or non-pointless) LSM at last. That means I can stop.

Google AdsenseAdSense now understands Urdu

Today, we’re excited to announce the addition of Urdu, a language spoken by millions in Pakistan, India and many other countries around the world, to the family of AdSense supported languages.

Interest in Urdu-language content has been growing steadily over the last few years. AdSense provides an easy way for publishers to monetize the content they create in Urdu, and helps advertisers looking to connect with the growing online Urdu audience reach them with relevant ads.

To start monetizing your Urdu content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.
Welcome to AdSense!



Posted by: AdSense Internationalization Team

CryptogramGirl Scouts to Offer Merit Badges in Cybersecurity

The Girl Scouts are going to be offering 18 merit badges in cybersecurity, to scouts as young as five years old.

CryptogramCIA Exploits Against Wireless Routers

WikiLeaks has published CherryBlossom, the CIA's program to hack into wireless routers. The program is about a decade old.

Four good news articles. Five. And a list of vulnerable routers.

Worse Than FailureNews Roundup: The Internet of Nope

Folks, we’ve got to talk about some of the headlines about the Internet of “Things”. If you’ve been paying any attention at all to that space, you know that pretty much everything getting released is some combination of several WTFs, whether in conception, implementation, or- let’s not forget- security.

A diagram of IoT approaches

I get it. It’s a gold-rush business. We’ve got computers that are so small, so cheap, and so power-efficient, that we can slap the equivalent of a 1980s super-computer in a toilet seat. There's the potential to create products that make our lives better, that make the world better, and could carry us into a glowing future. It just sometimes feels like that's not what anybody's actually trying to make, though. Without even checking, I’m sure you can buy a WiFi enabled fidget spinner that posts the data to a smartphone app where you can send “fidges” to your friends, bragging about your RPMs.

We need this news-roundup, because when Alexa locks you out of your house because you didn’t pay for Amazon Prime this month, we can at least say “I told you so”. You think I’m joking, but Burger King wants in on that action, with its commercial that tries to trick your Google Assistant into searching for burgers. That’s also not the first time that a commercial has triggered voice commands, and I can guarantee that it isn’t going to be the last.

Now, maybe this is sour grapes. I bought a Nest thermostat before it was cool, and now three hardware generations on, I’m not getting software updates, and there are rumors about the backend being turned off someday. Maybe Nest needs a model more like “Hive Hub”. Hive is a startup with £500M invested, making it one of the only “smart home” companies with an actual business model. Of course, that business model is that you’ll pay $39.99 per month to turn your lights on and off.

At least you know that some of that money goes to keeping your smart-home secure. I’m kidding, of course- nobody spends any effort on making these devices secure. There are many, many high profile examples of IoT hacks. You hook your toaster up to the WiFi and suddenly it’s part of a botnet swarm mining Bitcoin. One recent, high-profile example is the ZigBee Protocol, which powers many smart-home systems. It’s a complete security disaster, and opens up a new line of assault- instead of tricking a target into plugging a thumb drive into their network, you can now put your payload in a light bulb.

Smart-homes aside, IoT in general is breeding ground for botnets. Sure, your uncle Jack will blindly click through every popup and put his computer password in anything that looks like a password box, but at least you can have some confidence that his Windows/Mac/Linux desktop has some rudimentary protections bundled with the OS. IoT vendors apparently don’t care.

Let’s take a break, and take a peek at a fun story about resetting a computerized lock. Sure, they could have just replaced the lock, but look at all the creative hackery they had to do to get around it.

With that out of the way, let’s talk about tea. Ever since the Keurig coffee maker went big, everyone’s been trying to be “the Keurig for waffles” or “the Keurig for bacon” or “the Keurig for juice”- the latter giving us the disaster that is the Juicero. Mash this up with the Internet of Things, and you get this WiFi enabled tea-maker, which can download recipes for brewing tea off the Internet. And don’t worry, it’ll always use the correct recipe, because each pod is loaded with an RFID tag that not only identifies which recipe to use, but ensures that you’re not using any unauthorized tea.

In addition to the “Keurig, but for $X,” there’s also the ever popular “the FitBit, but for $X.” Here’s the FitBit for desks. It allows your desk to nag you about getting up, moving around, and it’ll upload your activity to the Internet while it’s at it. I’m sure we’re all really excited for when our “activity” gets logged for future review.

Speaking of FitBits, Qualcomm just filed some patents for putting that in your workout shoes. This is actually not a totally terrible idea- I mean, by the standards of that tea pot, anyway. I share it here because they’re calling it “The Internet of Shoes”, which is a funny way of saying, “our marketing team just gave up”.

Finally, since we’re talking about Internet connected gadgets that serve no real purpose, Google Glass got its first software update in three years. Apparently Google hasn’t sent the Glass to a farm upstate, where it can live with Google Reader, Google Wave, Google Hangouts, and all the other projects Google got bored of.

[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!

,

Krebs on Security‘Petya’ Ransomware Outbreak Goes Global

A new strain of ransomware dubbed “Petya” is worming its way around the world with alarming speed. The malware is spreading using a vulnerability in Microsoft Windows that the software giant patched in March 2017 — the same bug that was exploited by the recent and prolific WannaCry ransomware strain.

The ransom note that gets displayed on screens of Microsoft Windows computers infected with Petya.

According to multiple news reports, Ukraine appears to be among the hardest hit by Petya. The country’s government, some domestic banks and largest power companies all warned today that they were dealing with fallout from Petya infections.

Danish transport and energy firm Maersk said in a statement on its Web site that “We can confirm that Maersk IT systems are down across multiple sites and business units due to a cyber attack.” In addition, Russian energy giant Rosneft said on Twitter that it was facing a “powerful hacker attack.” However, neither company referenced ransomware or Petya.

Security firm Symantec confirmed that Petya uses the “Eternal Blue” exploit, a digital weapon that was believed to have been developed by the U.S. National Security Agency and in April 2017 leaked online by a hacker group calling itself the Shadow Brokers.

Microsoft released a patch for the Eternal Blue exploit in March (MS17-010), but many businesses put off installing the fix. Many of those that procrastinated were hit with the WannaCry ransomware attacks in May. U.S. intelligence agencies assess with medium confidence that WannaCry was the work of North Korean hackers.

Organizations and individuals who have not yet applied the Windows update for the Eternal Blue exploit should patch now. However, there are indications that Petya may have other tricks up its sleeve to spread inside of large networks.

Russian security firm Group-IB reports that Petya bundles a tool called “LSADump,” which can gather passwords and credential data from Windows computers and domain controllers on the network.

Petya seems to be primarily impacting organizations in Europe; however, the malware is starting to show up in the United States. Legal Week reports that global law firm DLA Piper has experienced issues with its systems in the U.S. as a result of the outbreak.

Through its Twitter account, the Ukrainian Cyber Police said the attack appears to have been seeded through a software update mechanism built into M.E.Doc, an accounting program that companies working with the Ukrainian government need to use.

Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley, said Petya appears to have been well engineered to be destructive while masquerading as a ransomware strain.

Weaver noted that Petya’s ransom note includes the same Bitcoin address for every victim, whereas most ransomware strains create a custom Bitcoin payment address for each victim.

Also, he said, Petya urges victims to communicate with the extortionists via an email address, while the majority of ransomware strains require victims who wish to pay or communicate with the attackers to use Tor, a global anonymity network that can be used to host Web sites which can be very difficult to take down.

“I’m willing to say with at least moderate confidence that this was a deliberate, malicious, destructive attack or perhaps a test disguised as ransomware,” Weaver said. “The best way to put it is that Petya’s payment infrastructure is a fecal theater.”

Ransomware encrypts important documents and files on infected computers and then demands a ransom (usually in Bitcoin) for a digital key needed to unlock the files. With most ransomware strains, victims who do not have recent backups of their files are faced with a decision to either pay the ransom or kiss their files goodbye.

Ransomware attacks like Petya have become such a common pestilence that many companies are now reportedly stockpiling Bitcoin in case they need to quickly unlock files that are being held hostage by ransomware.

Security experts warn that Petya and other ransomware strains will continue to proliferate as long as companies delay patching and fail to develop a robust response plan for dealing with ransomware infestations.

According to ISACA, a nonprofit that advocates for professionals involved in information security, assurance, risk management and governance, 62 percent of organizations surveyed recently reported experiencing ransomware in 2016, but only 53 percent said they had a formal process in place to address it.

Update: 5:06 p.m. ET: Added quotes from Nicholas Weaver and links to an analysis by the Ukrainian cyber police.

Planet DebianDaniel Pocock: How did the world ever work without Facebook?

Almost every day, somebody tells me there is no way they could survive without some social media like Facebook or Twitter: otherwise mature adults, fearful that without these dubious services they would have no human contact ever again, that they would die of hunger and that the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common, but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence, while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media, but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their homes.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than blowing away your Facebook and Twitter account and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember and those may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10-20 friendships are in this category, and you should spend all your time with these people rather than letting your time be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the more well known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.

Planet DebianReproducible builds folks: Reproducible Builds: week 113 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 18 and Saturday June 24 2017:

Upcoming and Past events

Our next IRC meeting is scheduled for the 6th of July at 17:00 UTC with this agenda currently:

  1. Introductions
  2. Reproducible Builds Summit update
  3. NMU campaign for buster
  4. Press release: Debian is doing Reproducible Builds for Buster
  5. Reproducible Builds Branding & Logo
  6. should we become an SPI member
  7. Next meeting
  8. Any other business

On June 19th, Chris Lamb presented at LinuxCon China 2017 on Reproducible Builds.

On June 23rd, Vagrant Cascadian held a Reproducible Builds question and answer session at Open Source Bridge.

Reproducible work in other projects

LEDE: firmware-utils and mtd-utils/mkfs.jffs2 now honor SOURCE_DATE_EPOCH.

Toolchain development and fixes

There was discussion on #782654 about packaging bazel for Debian.

Dan Kegel wrote a patch to use ar deterministically for Homebrew, a package manager for macOS.

Dan Kegel worked on using SOURCE_DATE_EPOCH and other reproducibility fixes in fpm, a multi-platform package builder.

The Fedora Haskell team disabled parallel builds to achieve reproducible builds.

Bernhard M. Wiedemann submitted many patches upstream:

Packages fixed and bugs filed

Patches submitted upstream:

Other patches filed in Debian:

Reviews of unreproducible packages

573 package reviews have been added, 154 have been updated and 9 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (98)

diffoscope development

Version 83 was uploaded to unstable by Chris Lamb. This moved the previous changes, which had been uploaded to experimental, into unstable; it included contributions from previous weeks.

You can read about these changes in our previous weeks' posts, or view the changelog directly (raw form).

We plan to maintain a backport of this and future versions in stretch-backports.

Ximin Luo also worked on better html-dir output for very very large diffs such as those for GCC. So far, this includes unreleased work on a PartialString data structure which will form a core part of a new and more intelligent recursive display algorithm.

strip-nondeterminism development

Version 0.035-1 was uploaded to unstable from experimental by Chris Lamb. It included contributions from:

  • Bernhard M. Wiedemann
    • Add CPIO handler and test case.
  • Chris Lamb
    • Packaging improvements.

Later in the week Mattia Rizzolo uploaded 0.035-2 with some improvements to the autopkgtest and to the general packaging.

We currently don't plan to maintain a backport in stretch-backports like we did for jessie-backports. Please speak up if you think otherwise.

reproducible-website development

  • Chris Lamb:
    • Add OpenEmbedded to projects page after a discussion at LinuxCon China.
    • Update some metadata for existing talks.
    • Add 13 missing talks.

tests.reproducible-builds.org

  • Alexander 'lynxis' Couzens
    • LEDE: do a quick sha256sum before calling diffoscope. The LEDE build consists of 1000 packages; using diffoscope to detect whether two packages are identical takes 3 seconds on average, while calling sha256sum on those small packages takes less than a second, so this reduces the runtime from 3h to 2h (roughly). For Debian package builds this is negligible, as each build takes several minutes anyway, so adding 3 seconds to each build doesn't matter much.
    • LEDE/OpenWrt: move toolchain.html creation to the remote node, as this is where the toolchain is built.
    • LEDE: remove debugging output for images.
    • LEDE: fixup HTML creation for toolchain, build path, downloaded software and GIT commit used.
  • Mattia 'mapreri' Rizzolo:
    • Debian: introduce Buster.
    • Debian: explain how to migrate from squid3 (in jessie) to squid (in stretch).
  • Holger 'h01ger' Levsen:
    • Debian:
      • Add jenkins jobs to create schroots and configure pbuilder for Buster.
      • Add Buster to README/about jenkins.d.n.
      • Teach jessie and ubuntu 16.04 systems how to debootstrap Buster.
      • Only update indexes and pkg_sets every 30min, as the jobs now run for almost 15 min since we test four suites (compared to three before).
      • Create HTML dashboard, live status and dd-list pages less often.
      • (Almost) stop scheduling old packages in stretch, new versions will still be scheduled and tested as usual.
      • Increase scheduling limits, especially for untested, new and depwait packages.
      • Replace Stretch with Buster in the repository comparison page.
      • Only keep build_service logs for a day, not three.
      • Add check for hanging mounts to node health checks.
      • Add check for haveged to node health checks.
      • Disable ntp.service on hosts running in the future, needed on stretch.
      • Install amd64 kernels on all i386 systems. There is a performance issue with i386 kernels, for which a bug should be filed. Installing the amd64 kernel is a sufficient workaround, but it breaks our 32/64 bit kernel variation on i386.
    • LEDE, OpenWrt: Fix up links and split TODO list.
    • Upgrade i386 systems (used for Debian) and pb3+4-amd64 (used for coreboot, LEDE, OpenWrt, NetBSD, Fedora and Arch Linux tests) to Stretch
    • jenkins: use java 8 as required by jenkins >= 2.60.1

Misc.

This week's edition was written by Ximin Luo, Holger Levsen, Bernhard M. Wiedemann, Mattia Rizzolo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Rondam RamblingsThere's something very odd about the USS Fitzgerald incident

For a US Navy warship to allow itself to be very nearly destroyed by a civilian cargo ship virtually requires an epic career-ending screwup.  The exact nature of that screwup has yet to be determined, and the Navy is understandably staying very tight-lipped about it.  But they have said one thing on the record which is almost certainly false: that the collision happened at 2:30 AM local time:

Planet DebianColin Watson: New address book

I’ve had a kludgy mess of electronic address books for most of two decades, and have got rather fed up with it. My stack consisted of:

  • ~/.mutt/aliases, a flat text file consisting of mutt alias commands
  • lbdb configuration to query ~/.mutt/aliases, Debian’s LDAP database, and Canonical’s LDAP database, so that I can search by name with Ctrl-t in mutt when composing a new message
  • Google Contacts, which I used from Android and was completely separate from all of the above

The biggest practical problem with this was that I had the address book that was most convenient for me to add things to (Google Contacts) and the one I used when sending email, and no sensible way to merge them or move things between them. I also wasn’t especially comfortable with having all my contact information in a proprietary web service.

My goals for a replacement address book system were:

  • free software throughout
  • storage under my control
  • single common database
  • minimal manual transcription when consolidating existing databases
  • integration with Android such that I can continue using the same contacts, messaging, etc. apps
  • integration with mutt such that I can continue using the same query interface
  • not having to write my own software, because honestly

I think I have all this now!

New stack

The obvious basic technology to use is CardDAV: it’s fairly complex, admittedly, but lots of software supports it and one of my goals was not having to write my own thing. This meant I needed a CardDAV server, some way to sync the database to and from both Android and the system where I run mutt, and whatever query glue was necessary to get mutt to understand vCards.

There are lots of different alternatives here, and if anything the problem was an embarrassment of choice. In the end I just decided to go for things that looked roughly the right shape for me and tried not to spend too much time in analysis paralysis.

CardDAV server

I went with Xandikos for the server, largely because I know Jelmer and have generally had pretty good experiences with their software, but also because using Git for history of the backend storage seems like something my future self will thank me for.

It isn’t packaged in stretch, but it’s in Debian unstable, so I installed it from there.

Rather than the standalone mode suggested on the web page, I decided to set it up in what felt like a more robust way using WSGI. I installed uwsgi, uwsgi-plugin-python3, and libapache2-mod-proxy-uwsgi, and created the following file in /etc/uwsgi/apps-available/xandikos.ini which I then symlinked into /etc/uwsgi/apps-enabled/xandikos.ini:

[uwsgi]
socket = 127.0.0.1:8801
uid = xandikos
gid = xandikos
umask = 022
master = true
cheaper = 2
processes = 4
plugin = python3
module = xandikos.wsgi:app
env = XANDIKOSPATH=/srv/xandikos/collections

The port number was arbitrary, as was the path. You need to create the xandikos user and group first (adduser --system --group --no-create-home --disabled-login xandikos). I created /srv/xandikos owned by xandikos:xandikos and mode 0700, and I recommend setting a umask as shown above since uwsgi’s default umask is 000 (!). You should also run sudo -u xandikos xandikos -d /srv/xandikos/collections --autocreate and then Ctrl-c it after a short time (I think it would be nicer if there were a way to ask the WSGI wrapper to do this).

For Apache setup, I kept it reasonably simple: I ran a2enmod proxy_uwsgi, used htpasswd to create /etc/apache2/xandikos.passwd with a username and password for myself, added a virtual host in /etc/apache2/sites-available/xandikos.conf, and enabled it with a2ensite xandikos:

<VirtualHost *:443>
        ServerName xandikos.example.org
        ServerAdmin me@example.org

        ErrorLog /var/log/apache2/xandikos-error.log
        TransferLog /var/log/apache2/xandikos-access.log

        <Location />
                ProxyPass "uwsgi://127.0.0.1:8801/"
                AuthType Basic
                AuthName "Xandikos"
                AuthBasicProvider file
                AuthUserFile "/etc/apache2/xandikos.passwd"
                Require valid-user
        </Location>
</VirtualHost>

Then service apache2 reload, set the new virtual host up with Let’s Encrypt, reloaded again, and off we go.

Android integration

I installed DAVdroid from the Play Store: it cost a few pounds, but I was OK with that since it’s GPLv3 and I’m happy to help fund free software. I created two accounts, one for my existing Google Contacts database (and in fact calendaring as well, although I don’t intend to switch over to self-hosting that just yet), and one for the new Xandikos instance. The Google setup was a bit fiddly because I have two-step verification turned on so I had to create an app-specific password. The Xandikos setup was straightforward: base URL, username, password, and done.

Since I didn’t completely trust the new setup yet, I followed what seemed like the most robust option from the DAVdroid contacts syncing documentation, and used the stock contacts app to export my Google Contacts account to a .vcf file and then import that into the appropriate DAVdroid account (which showed up automatically). This seemed straightforward and everything got pushed to Xandikos. There are some weird delays in syncing contacts that I don’t entirely understand, but it all seems to get there in the end.

mutt integration

First off I needed to sync the contacts. (In fact I happen to run mutt on the same system where I run Xandikos at the moment, but I don’t want to rely on that, and going through the CardDAV server means that I don’t have to poke holes for myself using filesystem permissions.) I used vdirsyncer for this. In ~/.vdirsyncer/config:

[general]
status_path = "~/.vdirsyncer/status/"

[pair contacts]
a = "contacts_local"
b = "contacts_remote"
collections = ["from a", "from b"]

[storage contacts_local]
type = "filesystem"
path = "~/.contacts/"
fileext = ".vcf"

[storage contacts_remote]
type = "carddav"
url = "<Xandikos base URL>"
username = "<my username>"
password = "<my password>"

Running vdirsyncer discover and vdirsyncer sync then synced everything into ~/.contacts/. I added an hourly crontab entry to run vdirsyncer -v WARNING sync.

Next, I needed a command-line address book tool based on this. khard looked about right and is in stretch, so I installed that. In ~/.config/khard/khard.conf (this is mostly just the example configuration, but I preferred to sort by first name since not all my contacts have neat first/last names):

[addressbooks]
[[contacts]]
path = ~/.contacts/<UUID of my contacts collection>/

[general]
debug = no
default_action = list
editor = vim
merge_editor = vimdiff

[contact table]
# display names by first or last name: first_name / last_name
display = first_name
# group by address book: yes / no
group_by_addressbook = no
# reverse table ordering: yes / no
reverse = no
# append nicknames to name column: yes / no
show_nicknames = no
# show uid table column: yes / no
show_uids = yes
# sort by first or last name: first_name / last_name
sort = first_name

[vcard]
# extend contacts with your own private objects
# these objects are stored with a leading "X-" before the object name in the vcard files
# every object label may only contain letters, digits and the - character
# example:
#   private_objects = Jabber, Skype, Twitter
private_objects = Jabber, Skype, Twitter
# preferred vcard version: 3.0 / 4.0
preferred_version = 3.0
# Look into source vcf files to speed up search queries: yes / no
search_in_source_files = no
# skip unparsable vcard files: yes / no
skip_unparsable = no

Now khard list shows all my contacts. So far so good. Apparently there are some awkward vCard compatibility issues with creating or modifying contacts from the khard end. I’ve tried adding one address from ~/.mutt/aliases using khard and it seems to at least minimally work for me, but I haven’t explored this very much yet.

I had to install python3-vobject 0.9.4.1-1 from experimental to fix eventable/vobject#39, which affected saving certain vCard files.

Finally, mutt integration. I already had set query_command="lbdbq '%s'" in ~/.muttrc, and I wanted to keep that in place since I still wanted to use LDAP querying as well. I had to write a very small amount of code for this (perhaps I should contribute this to lbdb upstream?), in ~/.lbdb/modules/m_khard:

#! /bin/sh

m_khard_query () {
    khard email --parsable --remove-first-line --search-in-source-files "$1"
}

My full ~/.lbdb/rc now reads as follows (you probably won’t want the LDAP stuff, but I’ve included it here for completeness):

MODULES_PATH="$MODULES_PATH $HOME/.lbdb/modules"
METHODS='m_muttalias m_khard m_ldap'
LDAP_NICKS='debian canonical'

Next steps

I’ve deleted one account from Google Contacts just to make sure that everything still works (e.g. I can still search for it when composing a new message), but I haven’t yet deleted everything. I won’t be adding anything new there though.

I need to push everything from ~/.mutt/aliases into the new system. This is only about 30 contacts so shouldn’t take too long.

Overall this feels like a big improvement! It wasn’t a trivial amount of setup for just me, but it means I have both better usability for myself and more independence from proprietary services, and I think I can add extra users with much less effort if I need to.

Postscript

A day later and I’ve consolidated all my accounts from Google Contacts and ~/.mutt/aliases into the new system, with the exception of one group that I had defined as a mutt alias and need to work out what to do with. This all went smoothly.

I’ve filed the new lbdb module as #866178, and the python3-vobject bug as #866181.

CryptogramFighting Leakers at Apple

Apple is fighting its own battle against leakers, using people and tactics from the NSA.

According to the hour-long presentation, Apple's Global Security team employs an undisclosed number of investigators around the world to prevent information from reaching competitors, counterfeiters, and the press, as well as hunt down the source when leaks do occur. Some of these investigators have previously worked at U.S. intelligence agencies like the National Security Agency (NSA), law enforcement agencies like the FBI and the U.S. Secret Service, and in the U.S. military.

The information is from an internal briefing, which was leaked.

Worse Than FailureNot so DDoS

Joe K was a developer at a company that provided a SaaS Natural Language Processing system. As Chief Engineer of the Data Science Team (a title that made him feel like some sort of mad professor), his duties included coding the Data Science Service. It provided the back-end for handling the complex, heavy-lifting type of processing that had to happen in real-time. Since it was very CPU-intensive, Joe spent a lot of time trying to battle latency. But that was the least of his problems.

The rest of the codebase was a cobbled-together mess that had been coded by the NLP researchers- scientists with no background in programming or computer science. Their mantra was “If it gets us the results we need, who cares how it looks behind the scenes?” This meant Joe’s well-designed data service somehow had to interface with applications made from a pile of ugly hacks. It was difficult at times, but he managed to get the job done while also keeping CPU usage to a minimum.

One day Joe was working away when Burt, the company CEO, burst into their humble basement computer lab in an obvious tizzy. Burt rarely visited the “egghead dungeon”, as he called it, so something had to be amiss. “JOE!” he cried out. “The production data science service is completely down! Every customer we have gave me an angry call within the last ten minutes!”

Considering this was an early-stage startup with only five customers, Burt’s assertion was probably true, if misleading. “Wow, ok Burt. Let me get right on that!” Joe offered, feeling flustered. He took a look at the error logging service and there was nothing to be found. He then attempted to SSH to each of the production servers, with success. He decided to check performance on the servers and an entire string of red flags shot straight up the proverbial flag pole. Every production server was at 100% CPU usage.

“I have an effect for you, Burt, but not a cause. I’ll have to dig deeper but it almost seems like… a Denial of Service attack?” Joe offered, not believing that would actually be the case. With only five whitelisted customers able to connect, all of them using the NLP system to its fullest shouldn’t come even close to causing this.

While looking further at the server logs, Joe got an instant message from Xander, the software engineer who worked on the dashboards, “Hey Joe, I noticed prod was down… could it be related to something I’m doing?”

“Ummm… maybe? What is it you are doing exactly?” Joe replied, with a new sense of concern. Xander’s dashboard shouldn’t have any interaction with the DSS, so it seemed like an odd question. Requests to the NLP site would initially come to a front-end server, and if there was some advanced analysis that needed to happen, that server would RPC to the DSS. After the response was computed, the front-end server would log the request and response to Xander’s dashboard system so it could monitor usage stats.

“Well, the dashboard is out of sync,” Xander explained. There had been a bug causing events to not make it to the dashboard system for the past month. They would need to be added to make the dashboard accurate. This could have been a simple change to the dashboard’s database, but instead Xander decided to replay all of the actual HTTP requests to the front end. Many of those requests triggered processing on the DSS- processing which had already been done. And since it was taking a long time, Xander had batched up the resent requests and was running them from three different machines, thus providing a remarkably good simulation of a DDoS.

“STOP YOUR PROCESS IMMEDIATELY AND DO THIS THE RIGHT WAY!” Joe shot back, all caps intended.

“Ok, ok, sorry. I’ll get this cleaned up,” Xander assured Joe. Within 15 minutes, the server CPU usage returned to normal levels and everything was great again. Joe was able to get Burt off his back and return to his normal duties.

A few minutes later, Joe’s IM dinged again with a message from Xander. "Hey Joe, sorry about that, LOL. But are we 100% sure that was the problem? Should I do it again just to be sure?"

If there was a way for Joe to use instant messaging to send a virtual strangulation to Xander, he would have done it. But a “HELL NO!!!” would have to suffice.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianJoey Hess: 12 to 24 volt house conversion

Upgrading my solar panels involved switching the house from 12 volts to 24 volts. No reasonably priced charge controllers can handle 1 kW of PV at 12 volts.

There might not be a lot of people who need to do this; entirely 12 volt off-grid houses are not super common, and most upgrades these days probably involve rooftop microinverters, and so a switch from DC to AC. I did not find a lot of references online for converting a whole house's voltage from 12V to 24V.

To prepare, I first checked that all the fuses and breakers were rated for > 24 volts. (Actually, > 30 volts because it will be 26 volts or so when charging.) Also, I checked for any shady wiring, and verified that all the wires I could see in the attic and wiring closet were reasonably sized (10AWG) and in good shape.

Then I:

  1. Turned off every light, unplugged every plug, pulled every fuse and flipped every breaker.
  2. Rewired the battery bank from 12V to 24V.
  3. Connected the battery bank to the new charge controller.
  4. Engaged the main breaker, and waited for anything strange.
  5. Screwed in one fuse at a time.

lighting

The house used all fluorescent lights, and they have ballasts rated for only 12V. While they work at 24V, they might blow out sooner or overheat. In fact one died this evening, and while it was flickering before, I suspect the 24V did it in. It makes sense to replace them with more efficient LED lights anyway. I found some 12-24V DC LED lights for regular screw-in (edison) light fixtures. Does not seem very common; Amazon only had a few models and they shipped from China.

Also, I ordered a 15 foot long, 300 LED strip light, which runs on 24V DC and has an adhesive backing. Great stuff -- it can be cut to different lengths and stuck anywhere. I installed some underneath the cook stove hood and the kitchen cabinets, which didn't have lights before.

Similar LED strips are used in some desktop lamps. My lamp was 12V only (barely lit at 24V), but I was able to replace its LED strip, upgrading it to 24V and three times as bright.

(Christmas lights are another option; many LED christmas lights run on 24V.)

appliances

The power supply I use for my Lenovo laptop in the house is a vehicle DC-DC converter, and is rated for 12-24V. It seems to be running fine at 26V; it did not get warm even when charging the laptop up from empty.

I'm using buck converters to run various USB powered (5V) ARM boxes such as my sheevaplug. They're quarter sized, so fit anywhere, and are very efficient.

My satellite internet receiver is running with a large buck converter, feeding 12V to an inverter, feeding a 30V DC power supply. That triple conversion is inefficient, but it works for now.

The ceiling fan runs on 24V, and does not seem to run much faster than on 12V. It may be rated for 12-24V. Can't seem to find any info about it.

The radio is a 12V car radio. I used an LM317 to run it on 24V, to avoid the RF interference a buck converter would have produced. This is a very inefficient conversion; half of the power is wasted as heat. But since I can stream internet radio all day now via satellite, I'll not use the FM radio very often.

Fridge... still running on propane for now, but I have an idea for a way to build a cold storage battery that will use excess power from the PV array, and keep a fridge at a constant 34 degrees F. Next home improvement project in the queue.

Planet DebianBenjamin Mako Hill: Learning to Code in One’s Own Language

I recently published a paper with Sayamindu Dasgupta that provides evidence in support of the idea that kids can learn to code more quickly when they are programming in their own language.

Millions of young people from around the world are learning to code. Often, during their learning experiences, these youth are using visual block-based programming languages like Scratch, App Inventor, and Code.org Studio. In block-based programming languages, coders manipulate visual, snap-together blocks that represent code constructs instead of textual symbols and commands that are found in more traditional programming languages.

The textual symbols used in nearly all non-block-based programming languages are drawn from English—consider “if” statements and “for” loops for common examples. Keywords in block-based languages, on the other hand, are often translated into different human languages. For example, depending on the language preference of the user, an identical set of computing instructions in Scratch can be represented in many different human languages:

Examples of a short piece of Scratch code shown in four different human languages: English, Italian, Norwegian Bokmål, and German.

Although my research with Sayamindu Dasgupta focuses on learning, both Sayamindu and I worked on local language technologies before coming back to academia. As a result, we were both interested in how the increasing translation of programming languages might be making it easier for non-English speaking kids to learn to code.

After all, a large body of education research has shown that early-stage education is more effective when instruction is in the language that the learner speaks at home. Based on this research, we hypothesized that children learning to code with block-based programming languages translated to their mother-tongues will have better learning outcomes than children using the blocks in English.

We sought to test this hypothesis in Scratch, an informal learning community built around a block-based programming language. We were helped by the fact that Scratch is translated into many languages and has a large number of learners from around the world.

To measure learning, we built on some of our own previous work and looked at learners’ cumulative block repertoires—similar to a code vocabulary. By observing a learner’s cumulative block repertoire over time, we can measure how quickly their code vocabulary is growing.

Using this data, we compared the rate of growth of cumulative block repertoire between learners from non-English speaking countries using Scratch in English to learners from the same countries using Scratch in their local language. To identify non-English speakers, we considered Scratch users who reported themselves as coming from five primarily non-English speaking countries: Portugal, Italy, Brazil, Germany, and Norway. We chose these five countries because they each have one very widely spoken language that is not English and because Scratch is almost fully translated into that language.

Even after controlling for a number of factors like social engagement on the Scratch website, user productivity, and time spent on projects, we found that learners from these countries who use Scratch in their local language have a higher rate of cumulative block repertoire growth than their counterparts using Scratch in English. This faster growth was despite having a lower initial block repertoire. The graph below visualizes our results for two “prototypical” learners who start with the same initial block repertoire: one learner who uses the English interface, and a second learner who uses their native language.

Summary of the results of our model for two prototypical individuals.

Our results are in line with what theories of education have to say about learning in one’s own language. Our findings also represent good news for designers of block-based programming languages who have spent considerable amounts of effort in making their programming languages translatable. It’s also good news for the volunteers who have spent many hours translating blocks and user interfaces.

Although we find support for our hypothesis, we should stress that our findings are both limited and incomplete. For example, because we focus on estimating the differences between Scratch learners, our comparisons are between kids who all managed to successfully use Scratch. Before Scratch was translated, kids with little working knowledge of English or the Latin script might not have been able to use Scratch at all. Because of translation, many of these children are now able to learn to code.


This blog post and the work that it describes is a collaborative project with Sayamindu Dasgupta. Sayamindu also published a very similar version of the blog post in several places. Our paper is open access and you can read it here. The paper was published in the proceedings of the ACM Learning @ Scale Conference. We also recently gave a talk about this work at the International Communication Association’s annual conference. We received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Nathan TeBlunthuis at the University of Washington. Financial support came from the US National Science Foundation.

,

Planet Linux AustraliaDavid Rowe: Codec 2 Wideband

I’m spending a month or so improving the speech quality of a couple of Codec 2 modes. I have two aims:

  1. Make the 700 bit/s codec sound better, to improve speech quality on low SNR HF channels (beneath 0dB).
  2. Develop a higher quality mode in the 2000 to 3000 bit/s range that can be used on HF channels with modest SNRs (around 10dB).

I ran some numbers on the new OFDM modem and LDPC codes, and it turns out we can get 3000 bit/s of codec data through a 2000 Hz channel at down to 7dB SNR.

Now 3000 bit/s is broadband for me – I’ve spent years being very frugal with my bits while I play in low SNR HF land. However it’s still a bit low for Opus which kicks in at 6000 bit/s. I can’t squeeze 6000 bit/s through a 2000 Hz RF channel without higher order QAM constellations which means SNRs approaching 20dB.
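As a rough sanity check on those numbers, the Shannon limit C = B log2(1 + SNR) of a 2000 Hz channel can be worked out directly. This is only the theoretical ceiling, not the actual OFDM/LDPC link budget:

from math import log2

def shannon_capacity(bandwidth_hz, snr_db):
    """Shannon channel capacity C = B * log2(1 + SNR), in bit/s."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * log2(1 + snr_linear)

print(shannon_capacity(2000, 7))   # ~5176 bit/s: 3000 bit/s of codec data plus FEC fits
print(shannon_capacity(2000, 20))  # ~13316 bit/s: the sort of regime 6000 bit/s needs in practice

So 3000 bit/s at 7dB SNR is comfortably inside the theoretical limit, while the 3 bit/s/Hz needed for Opus-rate data is only reachable in practice with the denser constellations mentioned above.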

So – what can I do with 3000 bit/s and Codec 2? I decided to try wideband(-ish) audio – the sort of audio bandwidth you get from Skype or AM broadcast radio. So I spent a few weeks modifying Codec 2 to work at 16 kHz sample rate, and Jean Marc gave me a few tips on using DCTs to code the bits.

It’s early days but here are a few samples:

Description Sample
1 Original Speech Listen
2 Codec 2 Model, original amplitudes and phases Listen
3 Synthetic phase, one bit voicing, original amplitudes Listen
4 Synthetic phase, one bit voicing, amplitudes at 1800 bit/s Listen
5 Simulated analog SSB, 300-2600Hz BPF, 10dB SNR Listen

Couple of interesting points:

  • Sample (2) is as good as Codec 2 can do; it's the unquantised model parameters (harmonic phases and amplitudes). It's all downhill from here as we quantise or toss away parameters.
  • In (3) I’m using a one bit voicing model; this is very vocoder-like and shouldn’t work this well. MBE/MELP all say you need mixed excitation. Exploring that conundrum would be a good Masters degree topic.
  • In (3) I can hear the pitch estimator making a few mistakes, e.g. around “sheet” on the female.
  • The extra 4kHz of audio bandwidth doesn’t take many more bits to encode, as the ear has a log frequency response. It’s maybe 20% more bits than 4kHz audio.
  • You can hear that some words like “well” are muddy and indistinct in the 1800 bit/s sample (4). This usually means the formant (spectral) peaks are not well defined, so we might be tossing away a little too much information.
  • The clipping on the SSB sample (5) around the words “depth” and “hours” is an artifact of the PathSim AGC. But dat noise. It gets really fatiguing after a while.

Wideband audio is a big paradigm shift for Push To Talk (PTT) radio. You can’t do this with analog radio: 2000 Hz of RF bandwidth, 8000 Hz of audio bandwidth. I’m not aware of any wideband PTT radio systems – they all work at best 4000 Hz audio bandwidth. DVSI has a wideband codec, but at a much higher bit rate (8000 bits/s).

Current wideband codecs shoot for artifact-free speech (and indeed general audio signals like music). Codec 2 wideband will still have noticeable artifacts, and probably won’t like music. Big question is will end users prefer this over SSB, or say analog FM – at the same SNR? What will 8kHz audio sound like on your HT?

We shall see. I need to spend some time cleaning up the algorithms, chasing down a few bugs, and getting it all into C, but I plan to be testing over the air later this year.

Let me know if you want to help.

Play Along

Unquantised Codec 2 with 16 kHz sample rate:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 -o - | play -t raw -r 16000 -s -2 -

With “Phase 0” synthetic phase and 1 bit voicing:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 --phase0 --postfilter -o - | play -t raw -r 16000 -s -2 -

Links

FreeDV 2017 Road Map – this work is part of the “Codec 2 Quality” work package.

Codec 2 page – has an explanation of the way Codec 2 models speech with harmonic amplitudes and phases.

Planet DebianNiels Thykier: debhelper 10.5.1 now available in unstable

Earlier today, I uploaded debhelper version 10.5.1 to unstable.  The following are some highlights compared to version 10.2.5:

  • debhelper now supports the “meson+ninja” build system. Kudos to Michael Biebl.
  • Better cross building support in the “makefile” build system (PKG_CONFIG is set to the multi-arched version of pkg-config). Kudos to Helmut Grohne.
  • New dh_missing helper to take over dh_install --list-missing/--fail-missing while being able to see files installed from other helpers. Kudos to Michael Stapelberg.
  • dh_installman now logs what files it has installed so the new dh_missing helper can “see” them as installed.
  • Improve documentation (e.g. compare and contrast the dh_link config file with ln(1) to assist people who are familiar with ln(1))
  • Avoid triggering a race-condition with libtool by ensuring that dh_auto_install runs make with -j1 when libtool is detected (see Debian bug #861627)
  • Optimizations and parallel processing (more on this later)

There are also some changes to the upcoming compat 11

  • Use “/run” as “run state dir” for autoconf
  • dh_installman will now guess the language of a manpage from the path name before using the extension.

 


Filed under: Debhelper, Debian

Planet DebianJoey Hess: DIY solar upgrade complete-ish

Success! I received the Tracer4215BN charge controller where UPS accidentally-on-purpose delivered it to a neighbor, and got it connected up, and the battery bank rewired to 24V in a couple hours.

charge controller reading 66.1V at 3.4 amps on panels, charging battery at 29.0V at 7.6A

Here it's charging the batteries at 220 watts, and that picture was taken at 5 pm, when the light hits the panels at nearly a 90 degree angle. Compare with the old panels, where the maximum I ever recorded at high noon was 90 watts. I've made more power since 4:30 pm than I used to be able to make in a day! \o/

Planet DebianAlexander Wirt: Stretch Backports available

With the release of stretch we are pleased to open the doors for stretch-backports and jessie-backports-sloppy. \o/

As usual with a new release we will change a few things for the backports service.

What to upload where

As a reminder, uploads to a release-backports pocket are to be taken from release + 1, uploads to a release-backports-sloppy pocket are to be taken from release + 2. Which means:

Source Distribution    Backports Distribution    Sloppy Distribution
buster                 stretch-backports         jessie-backports-sloppy
stretch                jessie-backports          -

Deprecation of LTS support for backports

As an experiment, we started supporting backports for as long as there is LTS support. Unfortunately it didn't work out: most maintainers didn't want to support oldoldstable-backports (squeeze) for the lifetime of LTS. So things started to rot in squeeze and most packages didn't receive updates. After long discussions we decided to deprecate LTS support for backports. From now on squeeze-backports(-sloppy) is closed and will not receive any updates. Expect it to get removed from the mirrors and moved to the archive in the near future.

BSA handling

We - the backports team - didn’t scale well in processing BSA requests. To get things better in the future we decided to change the process a little bit. If you upload a package which fixes security problems please fill out the BSA template and create a ticket in the rt tracker (see https://backports.debian.org/Contribute/#index3h2 for details).

Stretching the rules

From time to time it's necessary to not follow the backports rules, for example when a package needs to be in testing or a version needs to be in Debian. If you think you have one of those cases, please talk to us on the list before uploading the package.

Thanks

Thanks have to go out to all the people making backports possible, and that includes up front the backporters themselves, who upload the packages and track and update them on a regular basis, but also the buildd team making the autobuilding possible and the ftp masters for creating the suites in the first place.

We wish you a happy stretch :) Alex, on behalf of the Backports Team

CryptogramSeparating the Paranoid from the Hacked

Sad story of someone whose computer became owned by a griefer:

The trouble began last year when he noticed strange things happening: files went missing from his computer; his Facebook picture was changed; and texts from his daughter didn't reach him or arrived changed.

"Nobody believed me," says Gary. "My wife and my brother thought I had lost my mind. They scheduled an appointment with a psychiatrist for me."

But he built up a body of evidence and called in a professional cybersecurity firm. It found that his email addresses had been compromised, his phone records hacked and altered, and an entire virtual internet interface created.

"All my communications were going through a man-in-the-middle unauthorised server," he explains.

It's the "psychiatrist" quote that got me. I regularly get e-mails from people explaining in graphic detail how their whole lives have been hacked. Most of them are just paranoid. But a few of them are probably legitimate. And I have no way of telling them apart.

This problem isn't going away. As computers permeate even more aspects of our lives, it's going to get even more debilitating. And we don't have any way, other than hiring a "professional cybersecurity firm," of telling the paranoids from the victims.

Planet DebianJonathan Dowland: Coming in from the cold

I've been using a Mac day-to-day since around 2014, initially as a refreshing break from the disappointment I felt with GNOME3, but since then a few coincidences have kept me on the platform. Something happened earlier in the year that made me start to think about a move back to Linux on the desktop. My next work hardware refresh is due in March next year, which gives me about nine months to "un-plumb" myself from the Mac ecosystem. From the top of my head, here's the things I'm going to have to address:

  • the command modifier key (⌘). It's a small thing but its use on the Mac platform is very consistent, and since it's not used at all within terminals, there's never a clash between window management and terminal applications. Compared to the morass of modifier keys on Linux, I will miss it. It's possible that if I settle on a desktop environment and spend some time configuring it, I can get to a similarly comfortable place. Similarly, I'd like to stick to one clipboard and, if possible, ignore the select-to-copy, middle-click-to-paste one entirely. This may be an issue for older software.

  • The Mac hardware trackpad and gestures are honestly fantastic. I still have some residual muscle memory of using the Thinkpad trackpoint, and so I'm weaning myself off the trackpad by using an external thinkpad keyboard with the work Mac, and increasingly using a x61s where possible.

  • SizeUp. I wrote about this in useful mac programs. It's a window management helper that lets you use keyboard shortcuts to move and resize windows. I may need something similar, depending on what desktop environment I settle on. (I'm currently evaluating Awesome WM).

  • 1Password. These days I think a password manager is an essential piece of software, and 1Password is a very, very good example of one. There are several other options now, but sadly none that seem remotely as nice as 1Password. Ryan C Gordon wrote 1pass, a Linux-compatible tool to read a 1Password keychain, but it's quite raw and needs some love. By coincidence that's currently his focus, and one can support him in this work via his Patreon.

  • Font rendering. Both monospace and regular fonts look fantastic out of the box on a Mac, and it can be quite hard to switch back and forth between a Mac and Linux due to the difference in quality. I think this is a mixture of ensuring the font rendering software on Linux is configured properly, but also that I install a reasonable selection of fonts.

I think that's probably it: not a big list! Notably, I'm not locked into iTunes, which I avoid where possible; Apple's Photo app (formerly iPhoto) which is a bit of a disaster; nor Time Machine, which is excellent, but I have a backup system for other things in place that I can use.

CryptogramThe FAA Is Arguing for Security by Obscurity

In a proposed rule by the FAA, it argues that software in an Embraer S.A. Model ERJ 190-300 airplane is secure because it's proprietary:

In addition, the operating systems for current airplane systems are usually and historically proprietary. Therefore, they are not as susceptible to corruption from worms, viruses, and other malicious actions as are more-widely used commercial operating systems, such as Microsoft Windows, because access to the design details of these proprietary operating systems is limited to the system developer and airplane integrator. Some systems installed on the Embraer Model ERJ 190-300 airplane will use operating systems that are widely used and commercially available from third-party software suppliers. The security vulnerabilities of these operating systems may be more widely known than are the vulnerabilities of proprietary operating systems that the avionics manufacturers currently use.

Longtime readers will immediately recognize the "security by obscurity" argument. Its main problem is that it's fragile. The information is likely less obscure than you think, and even if it is truly obscure, once it's published you've just lost all your security.

This is me from 2014, 2004, and 2002.

The comment period for this proposed rule is ongoing. If you comment, please be polite -- they're more likely to listen to you.

Worse Than FailureCodeSOD: Plurals Dones Rights

Today, submitter Adam shows us how thoughtless language assumptions made by programmers are also hilarious language assumptions:

"So we're querying a database for data matching *title* but then need to try again with plural/singular if we didn't get anything. Removing the trailing S is bad, but what really caught my eye was how we make a word plural. Never mind any English rules or if the word is actually Greek, Chinese, or Klingon."


if ((locs == NULL || locs->Length() == 0) && (title->EndsWith(@"s") || title->EndsWith(@"S")))
{
    title->RemoveCharAt(title->Length()-1);
    locs = data->GetLocationsForWord(title);
}
else if ((locs == NULL || locs->Length() == 0) && title->Length() > 0)
{
    WCHAR c = title->CharAt(title->Length()-1);
    if (c >= 'A' && c <= 'Z')
    {
        title->Append(@"S");
    }
    else
    {
        title->Append(@"s");
    }
    locs = data->GetLocationsForWord(title);
}

Untils nexts times: ευχαριστώs &s 再见s, Hochs!

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaOpenSTEM: Guess the Artefact! – #2

Today’s Guess the Artefact! covers one of a set of artefacts which are often found confusing to recognise. We often get questions about these artefacts, from students and teachers alike, so here’s a chance to test your skills of observation. Remember – all heritage and archaeological material is covered by State or Federal legislation and should never be removed from its context. If possible, photograph the find in its context and then report it to your local museum or State Heritage body (the Dept of Environment and Heritage Protection in Qld; the Office of Environment and Heritage in NSW; the Dept of Environment, Planning and Sustainable Development in ACT; Heritage Victoria; the Dept of Environment, Water and Natural Resources in South Australia; the State Heritage Office in WA and the Heritage Council – Dept of Tourism and Culture in NT).

This artefact is made of stone. It measures about 12 x 8 x 3 cm. It fits easily and comfortably into an adult’s hand. The surface of the stone is mostly smooth and rounded, it looks a little like a river cobble. However, one side – the right-hand side in the photo above – is shaped so that 2 smooth sides meet in a straight, sharpish edge. Such formations do not occur on naturally rounded stones, which tells us that this was shaped by people and not just rounded in a river. The smoothed edges meeting in a sharp edge tell us that this is ground-stone technology. Ground stone technology is a technique used by people to create smooth, sharp edges on stones. People grind the stone against other rocks, occasionally using sand and water to facilitate the process, usually in a single direction. This forms a smooth surface which ends in a sharp edge.

Neolithic Axe

Ground stone technology is usually associated with the Neolithic period in Europe and Asia. In the northern hemisphere, this technology was primarily used by people who were learning to domesticate plants and animals. These early farmers learned to grind grains, such as wheat and barley, between two stones to make flour – thus breaking down the structure of the plant and making it easier to digest. Our modern mortar and pestle is a descendant of this process. Early farmers would have noticed that these actions produced smooth and sharp edges on the stones. These observations would have led them to apply this technique to other tools which they used and thus develop the ground-stone technology. Here (picture on right) we can see an Egyptian ground stone axe from the Neolithic period. The toolmaker has chosen an attractive red and white stone to make this axe-head.

In Japan this technology is much older than elsewhere in the northern hemisphere, and ground-stone axes have been found dating to 30,000 years ago during the Japanese Palaeolithic period. Until recently these were thought to be the oldest examples of ground-stone technology in the world. However, in 2016, the Australian archaeologists Peter Hiscock, Sue O’Connor, Jane Balme and Tim Maloney reported, in an article in the journal Australian Archaeology, the finding of a tiny flake of stone (just over 1 cm long and 1/2 cm wide) from a ground stone axe in layers dated to 44,000 to 49,000 years ago at the site of Carpenter’s Gap in the Kimberley region of north-west Australia. This tiny flake of stone – easily missed by anyone not paying close attention – is an excellent example of the extreme importance of ‘archaeological context’. Archaeological material that remains in its original context (known as in situ) can be dated accurately and associated with other material from the same layers, thus allowing us to understand more about the material. Anything removed from the context usually can not be dated and only very limited information can be learnt.

The find from the Kimberley makes Australia the oldest place in the world to have ground-stone technology. The tiny chip of stone, broken off a larger ground-stone artefact, probably an axe, was made by the ancestors of Aboriginal people in the millennia after they arrived on this continent. These early Australians did not practise agriculture, but they did eat various grains, which they learned to grind between stones to make flour. It is possible that whilst processing these grains they learned to grind stone tools as well. Our artefact, shown above, is undated. It was found, totally removed from its original context, stored under an old house in Brisbane. The artefact is useful as a teaching aid, allowing students to touch and hold a ground-stone axe made by Aboriginal people in Australia’s past. However, since it was removed from its original context at some point, we do not know how old it is, or even where it came from exactly.

Our artefact is a stone tool. Specifically, it is a ground stone axe, made using technology that dates back almost 50,000 years in Australia! These axes were usually made by rubbing a hard stone cobble against rocks by the side of a creek. Water from the creek was used as a lubricant, and often sand was added as an extra abrasive. The making of ground-stone axes often left long grooves in these rocks. These are called ‘grinding grooves’ and can still be found near some creeks in the landscape today, such as in Kuringai Chase National Park in Sydney. The ground-stone axes were usually hafted using sticks and lashings of plant fibre, to produce a tool that could be used for cutting vegetation or other uses. Other stone tools look different to the one shown above, especially those made by flaking stone; however, smooth stones should always be carefully examined in case they are also ground-stone artefacts and not just simple stones!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners July Meeting

Jul 15 2017 12:30
Jul 15 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Workshop to be announced.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 15, 2017 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main July 2017 Meeting

Jul 4 2017 18:30
Jul 4 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, July 4, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

Come have a drink with us and talk about Linux.  If you have something cool to show, please bring it along!


Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 4, 2017 - 18:30

Cory DoctorowAudio from my NYPL appearance with Edward Snowden


Last month, I appeared onstage with Edward Snowden at the NYPL, hosted by Paul Holdengraber, discussing my novel Walkaway. The library has just posted the audio! It was quite an evening.

,

Cory DoctorowBruce Sterling reviews WALKAWAY

Bruce Sterling, Locus Magazine: Walkaway is a real-deal, generically traditional science-fiction novel; it’s set in an undated future and it features weird set design, odd costumes, fights, romances, narrow escapes, cool weapons, even zeppelins. This is the best Cory Doctorow book ever. I don’t know if it’s destined to become an SF classic, mostly because it’s so advanced and different that it makes the whole genre look archaic.

For instance: in a normal science fiction novel, an author pals up with scientists and popularizes what the real experts are doing. Not here, though. Cory Doctorow is such an Internet policy wonk that he’s ‘‘popularizing’’ issues that only he has ever thought about. Walkaway is mostly about advancing and demolishing potential political arguments that have never been made by anybody but him…

…Walkaway is what science fiction can look like under modern cultural conditions. It’s ‘‘relevant,’’ it’s full of tremulous urgency, it’s Occupy gone exponential. It’s a novel of polarized culture-war in which all the combatants fast-talk past each other while occasionally getting slaughtered by drones. It makes Ed Snowden look like the first robin in spring.

…The sci-fi awesome and the authentically political rarely mix successfully. Cory had an SF brainwave and decided to exploit a cool plot element: people get uploaded into AIs. People often get killed horribly in Walkaway, and the notion that the grim victims of political struggle might get a silicon afterlife makes their fate more conceptually interesting. The concept’s handled brilliantly, too: these are the best portrayals of people-as-software that I’ve ever seen. They make previous disembodied AI brains look like glass jars from 1950s B-movies. That’s seductively interesting for a professional who wants to mutate the genre’s tropes, but it disturbs the book’s moral gravity. The concept makes death and suffering silly.

…I’m not worried about Cory’s literary fate. I’ve read a whole lot of science fiction novels. Few are so demanding and thought-provoking that I have to abandon the text and go for a long walk.

I won’t say there’s nothing else like Walkaway, because there have been some other books like it, but most of them started mass movements or attracted strange cults. There seems to be a whole lot of that activity going on nowadays. After this book, there’s gonna be more.

Planet DebianAndreas Bombe: PDP-8/e Replicated — Introduction

I am creating a replica of the DEC PDP-8/e architecture in an FPGA from schematics of the original hardware. So how did I end up with a project like this?

The story begins with me wanting to have a computer with one of those front panels that have many, many lights where you can really see, in real time, what the computer is doing while it is executing code. Not because I am nostalgic for a prior experience with any of those — I was born a bit too late for that and my first computer as a kid was a Commodore 64.

Now, the front panel era ended around 40 years ago with the advent of microprocessors and computers of that age and older that are complete and working are hard to find and not cheap. And even if you do, there’s the issue of weight, size (complete systems with peripherals fill at least a rack) and power consumption. So what to do — build myself a small one with modern technology of course.

While there’s many computer architectures of that era to choose from, the various PDP machines by DEC are significant and well known (and documented) due to their large numbers. The most important are probably the 12 bit PDP-8, the 16 bit PDP-11 and the 36 bit PDP-10. While the PDP-11 is enticing because of the possibility to run UNIX I wanted to start with something simpler, so I chose the PDP-8.

My implementation on display next to a real PDP-8/e at VCFe 18.0

The Original

DEC started the PDP-8 line of computers (Programmed Data Processors), designed as low cost machines, in 1965. It is a quite minimalist 12 bit architecture based on the earlier PDP-5, and by minimalist I mean seriously minimal. If you are familiar with early 8 bit microprocessors like the 6502 or 8080 you will find them luxuriously equipped in comparison.

The PDP-8 base architecture has a program counter (PC) and an accumulator (AC)1. That’s it. There are no pointer or index registers2. There is no stack. It has addition and AND instructions but subtractions and OR operations have to be manually coded. The optional Extended Arithmetic Element adds the MQ register but that’s really it for visible registers. The Wikipedia page on the PDP-8 has a good detailed description.
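To make that minimalism concrete, here is a small Python sketch (12-bit words, ignoring the link bit) of how subtraction and OR have to be synthesized from the complement, increment, add and AND operations the machine actually provides. It is an illustration of the idea, not PDP-8 assembly:

MASK = 0o7777  # a 12-bit word

def neg(x):
    """Two's complement negate: CMA (complement AC), then IAC (increment AC)."""
    return ((~x & MASK) + 1) & MASK

def sub(a, b):
    """a - b coded as a + (-b): CMA IAC on b, then TAD a."""
    return (a + neg(b)) & MASK

def bitor(a, b):
    """OR built from AND and complement via De Morgan: a | b == ~(~a & ~b)."""
    return ~((~a & MASK) & (~b & MASK)) & MASK

assert sub(0o0005, 0o0003) == 0o0002
assert bitor(0o0011, 0o0006) == 0o0017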

Regarding technology, the PDP-8 series has been in production long enough to get the whole range of implementations from discrete transistor logic to microprocessors. The 8/e which I target was right in the middle, implemented in TTL logic where each IC contains multiple logic elements. This allowed the CPU itself (including timing generator) to fit on three large circuit boards plugged into a backplane. Complete systems would have at least another board for the front panel and multiple boards for the core memory, then additional boards for whatever options and peripherals were desired.

Design Choices and Comparisons

I’m not the only one who had the idea to build something like that, of course. Among the other modern PDP-8 implementations with a front panel, probably the most prominent project is the Spare Time Gizmos SBC6120 which is a PDP-8 single board computer built around the Harris/Intersil HD-6120 microprocessor, which implements the PDP-8 architecture, combined with a nice front panel. Another is the PiDP-8/I, which is another nice front panel (modeled after the 8/i which has even more lights) driven by the simh simulator running under Linux on a Raspberry Pi.

My goal is to get front panel lights that appear exactly like the real ones in operation. This necessitates driving the lights at full speed as they change with every instruction or even within instructions for some display selections. For example, if you run a tight loop that does nothing but increment AC while displaying that register, it would appear that all lights are lit at equal but less than full brightness. The reason is that the loop runs at such a high speed that even the most significant bit, which is blinking the slowest, is too fast to see flicker. Hence they are all effectively 50% on, just at different frequencies, and appear to be constantly lit at the same brightness.
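That intuition is easy to verify in a few lines: every bit of a free-running counter is on for exactly half of its period, just at a frequency that halves with each more significant bit. A quick simulation sketch:

def duty_cycles(bits=12, steps=4096):
    """Fraction of time each AC bit is lit while AC increments freely."""
    on_counts = [0] * bits
    for ac in range(steps):
        for bit in range(bits):
            if (ac >> bit) & 1:
                on_counts[bit] += 1
    return [count / steps for count in on_counts]

print(duty_cycles())  # -> [0.5, 0.5, ..., 0.5]: all bits at half brightness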

This is where the other projects lack what I am looking for. The PiDP-8/I is a multiplexed display which updates at something like 30 Hz or 60 Hz, taking whatever value is current in the simulation software at the time. All the states the lights took in between are lost and consequently there is flickering where there shouldn’t be. On the SBC6120 at least the address lines appear to update at full speed, as these are the actual RAM address lines. However the 6120 microprocessor used does not have the required data for the indicator display externally available. Instead, the SBC6120 runs an interrupt at 30 Hz to trap into its firmware/monitor program which then reads the current state and writes it to the front panel display, which is essentially just another peripheral. Another considerable problem with the SBC6120 is its use of the 6100 microprocessor family ICs, which are themselves long out of production and not trivial (or cheap) to come by.

Given that the way to go is to drive all lights in step with every cycle3, this can be done by software running on a dedicated microcontroller — which is how I started — or by implementing a real CPU with all the needed outputs in an FPGA — which is the project I am writing about.

In the next post I will give an overview of the hardware I built so far and some of the features that are yet to be implemented.


  1. With an associated link bit which is a little different from a carry bit in that it is treated as a thirteenth bit, i.e. it will be flipped rather than set when a carry occurs.
  2. Although there are 8 specially treated memory addresses that will pre-increment when used in indirect addressing.
  3. Basic cycles on the PDP-8/e are 1.4 µs for memory modifying cycles and fast cycles of 1.2 µs for everything else. Instructions can be one to three cycles long.

Planet DebianSteinar H. Gunderson: Frame queue management in Nageru 1.6.1

Nageru 1.6.1 is on its way, and what was intended to only be a release centered around monitoring improvements (more specifically a full set of native Prometheus metrics) actually ended up getting a fairly substantial change to how Nageru manages its frame queues. To understand what's changing and why, it's useful to first understand the history of Nageru's queue management. Nageru 1.0.0 started out with a fairly simple scheme, but with some basics that are still relevant today: One of the input cards was deemed the master card, and whenever it delivers a frame, the master clock ticks and an output frame is produced. (There are some subtleties about dropped frames and/or the master card changing frame rates, but I'm going to ignore them, since they're not important to the discussion.)

To this end, every card keeps a preallocated frame queue; when a card delivers a frame, it's put into the queue, and when the master clock ticks, it tries picking out one frame from each of the other card's queues to mix together. Note that “mix” here could be as simple as picking one input and throwing all the other ones away; the queueing algorithm doesn't care, it just feeds all of them to the theme and lets that run whatever GPU code it needs to match the user's preferences.

The only thing that really keeps the queues bounded is that the frames in them are preallocated (in GPU memory), so if one queue gets longer than 16 frames, Nageru starts dropping it. But is 16 the right number? There are two conflicting demands here, ignoring memory usage:

  • You want to keep the latency down.
  • You don't want to run out of frames in the queue if you can avoid it; if you drop too aggressively, you could find yourself at the next frame with nothing in the queue, because the input card hasn't delivered it yet when the master card ticks. (You could argue one should delay the output in this case, but for how long? And if you're using HDMI/SDI output, you have no such luxury.)

The 1.0.0 scheme does about as well as one could possibly hope in never dropping frames, but unfortunately, it can be pretty poor at latency. For instance, if your master card runs at 50 Hz and you have a 60 Hz card, the latter will eventually build up a delay of 16 * 16.7 ms = 266.7 ms—clearly unacceptable, and rather unneeded.

You could ask the user to specify a queue length, but the user probably doesn't know, and also shouldn't really have to care—more knobs to twiddle are a bad thing, and even more so knobs the user is expected to twiddle. Thus, Nageru 1.2.0 introduced queue autotuning; it keeps a running estimate on how big the queue needs to be to avoid underruns, simply based on experience. If we've been dropping frames on a queue and then there's an underrun, the “safe queue length” is increased by one, and if the queue has been having excess frames for more than a thousand successive master clock ticks, we reduce it by one again. Whenever the queue has more than this “safe” number, we drop frames.
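In rough pseudocode, the scheme looks something like the following. This is a simplified sketch of the idea, not the actual Nageru source:

class QueueAutotuner:
    """Grow the safe length after an underrun that follows drops; shrink it
    again after about a thousand successive ticks with frames to spare."""

    def __init__(self):
        self.safe_len = 0              # running estimate of the needed queue length
        self.excess_ticks = 0          # successive ticks with more frames than needed
        self.dropped_recently = False

    def on_master_tick(self, queue_len):
        """Called once per master clock tick; returns how many frames to drop."""
        if queue_len == 0:             # underrun
            if self.dropped_recently:  # our own dropping caused it: back off
                self.safe_len += 1
                self.dropped_recently = False
            self.excess_ticks = 0
            return 0
        if queue_len > 1:              # more than the one frame we consume now
            self.excess_ticks += 1
            if self.excess_ticks >= 1000:
                self.safe_len = max(0, self.safe_len - 1)
                self.excess_ticks = 0
        else:
            self.excess_ticks = 0
        drop = max(0, queue_len - max(1, self.safe_len))
        if drop > 0:
            self.dropped_recently = True
        return drop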

This was simple, effective and largely fixed the problem. However, when adding metrics, I noticed a peculiar effect: Not all of my devices have equally good clocks. In particular, when setting up for 1080p50, my output card's internal clock (which assumes the role of the master clock when using HDMI/SDI output) seems to tick at about 49.9998 Hz, and my simple home camcorder delivers frames at about 49.9995 Hz. Over the course of an hour, this means it produces one more frame than it should… which should of course be dropped. Having an SDI setup with synchronized clocks (blackburst/tri-level) would of course fix this problem, but most people are not so lucky with their cameras, not to mention the price of PC graphics cards with SDI outputs!

However, this happens very slowly, which means that for a significant amount of time, the two clocks will very nearly be in sync, and thus racing. Who ticks first is determined largely by luck in the jitter (normal is maybe 1ms, but occasionally, you'll see delayed delivery of as much as 10 ms), and this means that the “1000 frames” estimate is likely to be thrown off, and the result is hundreds of dropped frames and underruns in that period. Once the clocks have diverged enough again, you're off the hook, but again, this isn't a good place to be.

Thus, Nageru 1.6.1 changes the algorithm yet again, incorporating more data to build an explicit jitter model. 1.5.0 was already timestamping each frame to be able to measure end-to-end latency precisely (now also exposed in Prometheus metrics), but from 1.6.1, those timestamps are actually used in the queueing algorithm. I ran several eight- to twelve-hour tests and simply stored all the event arrivals to a file, and then simulated a few different algorithms (including the old one) to see how they fared in measures such as latency and number of drops/underruns.

I won't go into the full details of the new queueing algorithm (see the commit if you're interested), but the gist is: Based on the last 5000 frames, it tries to estimate the maximum possible jitter for each input (i.e., how late the frame could possibly be). Based on this as well as clock offsets, it determines whether it's really sure that there will be an input frame available on the next master tick even if it drops the queue, and then trims the queue to fit.
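The core idea can be caricatured in a few lines: remember the worst observed lateness over a sliding window, and only trim the queue when even a worst-case-late frame would still beat the next master tick. Again, this is a sketch of the concept rather than Nageru's actual implementation:

from collections import deque

class JitterModel:
    """Track worst-case frame lateness over the last N arrivals."""

    def __init__(self, window=5000):
        self.lateness = deque(maxlen=window)  # observed (actual - expected), in seconds

    def observe(self, expected_ts, actual_ts):
        self.lateness.append(actual_ts - expected_ts)

    def max_jitter(self):
        return max(self.lateness, default=0.0)

def frames_safe_to_drop(queue_len, next_expected_ts, next_master_tick_ts, model):
    """Trim down to a single queued frame only if the next input frame,
    arriving as late as we have ever seen one arrive, would still make
    the next master tick."""
    worst_case_arrival = next_expected_ts + model.max_jitter()
    if worst_case_arrival <= next_master_tick_ts:
        return max(0, queue_len - 1)
    return 0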

The result is pretty satisfying; here's the end-to-end latency of my camera being sent through to the SDI output:

As you can see, the latency goes up, up, up until Nageru figures it's now safe to drop a frame, and then does it in one clean drop event; no more hundreds on drops involved. There are very late frame arrivals involved in this run—two extra frame drops, to be precise—but the algorithm simply determines immediately that they are outliers, and drops them without letting them linger in the queue. (Immediate dropping is usually preferred to sticking around for a bit and then dropping it later, as it means you only get one disturbance event in your stream as opposed to two. Of course, you can only do it if you're reasonably sure it won't lead to more underruns later.)

Nageru 1.6.1 will ship before Solskogen, as I intend to run it there :-) And there will probably be lovely premade Grafana dashboards from the Prometheus data. Although it would have been a lot nicer if Grafana were more packaging-friendly, so I could pick it up from stock Debian and run it on armhf. Hrmf. :-)

Krebs on SecurityGot Robocalled? Don’t Get Mad; Get Busy.

Several times a week my cell phone receives the telephonic equivalent of spam: A robocall. On each occasion the call seems to come from a local number, but when I answer there is that telltale pause followed by an automated voice pitching some product or service. So when I heard from a reader who chose to hang on the line and see where one of these robocalls led him, I decided to dig deeper. This is the story of that investigation. Hopefully, it will inspire readers to do their own digging and help bury this annoying and intrusive practice.

The reader — Cedric (he asked to keep his last name out of this story) had grown increasingly aggravated with the calls as well, until one day he opted to play along by telling a white lie to the automated voice response system that called him: Yes, he said, yes he definitely was interested in credit repair services.

“I lied about my name and played like I needed credit repair to buy a home,” Cedric said. “I eventually wound up speaking with a representative at creditfix.com.”

The number that called Cedric — 314-754-0123 — was not in service when Cedric tried it back, suggesting it had been spoofed to make it look like it was coming from his local area. However, pivoting off of creditfix.com opened up some useful avenues of investigation.

Creditfix is hosted on a server at the Internet address 208.95.62.8. According to records maintained by Farsight Security — a company that tracks which Internet addresses correspond to which domain names — that server hosts or recently hosted dozens of other Web sites (the full list is here).

Most of these domains appear tied to various credit repair services owned or run by a guy named Michael LaSala and registered to a mail drop in Las Vegas. Looking closer at who owns the 208.95.62.8 address, we find it is registered to System Admin, LLC, a Florida company that lists LaSala as a manager, according to a lookup at the Florida Secretary of State’s office.

An Internet search for the company’s address turns up a filing by System Admin LLC with the U.S. Federal Communications Commission (FCC). That filing shows that the CEO of System Admin is Martin Toha, an entrepreneur probably best known for founding voip.com, a voice-over-IP (VOIP) service that allows customers to make telephone calls over the Internet.

Emails to the contact address at Creditfix.com elicited a response from a Sean in Creditfix’s compliance department. Sean told KrebsOnSecurity that mine was the second complaint his company had received about robocalls. Sean said he was convinced that his employer was scammed by a lead generation company that is using robocalls to quickly and illegally gin up referrals, which generate commissions for the lead generation firm.

Creditfix said the robocall leads it received appear to have been referred by Little Brook Media, a marketing firm in New York City. Little Brook Media did not respond to multiple requests for comment.

Robocalls are permitted for political candidates, but beyond that if the recording is a sales message and you haven’t given your written permission to get calls from the company on the other end, the call is illegal. According to the Federal Trade Commission (FTC), companies are using auto-dialers to send out thousands of phone calls every minute for an incredibly low cost.

“The companies that use this technology don’t bother to screen for numbers on the national Do Not Call Registry,” the FTC notes in an advisory on its site. “If a company doesn’t care about obeying the law, you can be sure they’re trying to scam you.”

Mr. Toha confirmed that Creditfix was one of his clients, but said none of his clients want leads from robocalls for that very reason. Toha said the problem is that many companies buy marketing leads but don’t always know where those leads come from or how they are procured.

“A lot of times clients don’t know the companies that the ad agency or marketing agency works with,” Toha said. “You submit yourself as a publisher to a network of publishers, and what they do is provide calls to marketers.”

Robby Birnbaum is a debt relief attorney in Florida and president of the National Association of Credit Services Organizations. Birnbaum said no company wants to buy leads from robocalls, and that marketers who fabricate leads this way are not in business for long.

But he said those that end up buying leads from robocall marketers are often smaller mom-and-pop debt relief shops, and that these companies soon find themselves being sued by what Birnbaum called “frequent filers,” lawyers who make a living suing companies for violating laws against robocalls.

“It’s been a problem in this industry for a while, but robocalls affect every single business that wants to reach consumers,” Birnbaum said. He noted that the best practice is for companies to require lead generators to append to each customer file information about how and from where the lead was generated.

“A lot of these lead companies will not provide that, and when my clients insist on it, those companies have plenty of other customers who will buy those leads,” Birnbaum said. “The phone companies can block many of these robocalls, but they don’t.”

That may be about to change. The FCC recently approved new rules that would let phone companies block robocallers from using numbers they aren’t supposed to be using.

“If a robocaller decides to spoof another phone number — making it appear that they’re calling from a different line to hide their identity — phone providers would be able to block them if they use a number that clearly can’t exist because it hasn’t been assigned or that an existing subscriber has asked not to have spoofed,” reads a story at The Verge.

The FCC estimates that there are more than 2.4 billion robocalls made every month, or roughly seven calls per person per month. The FTC received nearly 3.5 million robocall complaints in fiscal year 2016, an increase of 60 percent from the year prior.

The newest trend in robocalls is the “ringless voicemail,” in which the marketing pitch lands directly in your voicemail inbox without ringing the phone. The FCC also is considering new rules to prohibit ringless voicemails.

Readers may be able to avoid some marketing calls by registering their mobile number with the Do Not Call registry, but the list appears to do little to deter robocallers. If and when you do receive robocalls, consider reporting them to the FTC.

Some wireless providers now offer additional services and features to help block automated calls. For example, AT&T offers wireless customers its free Call Protect app, which screens incoming calls and flags those that are likely spam calls. See the FCC’s robocall resource page for links to resources at your mobile provider.

In addition, there are a number of third-party mobile apps designed to block spammy calls, such as Nomorobo and TrueCaller.

Update, June 27, 2017, 3:04 p.m. ET: Corrected spelling of Michael LaSala.

Planet DebianLars Wirzenius: Obnam 1.22 released (backup application)

I've just released version 1.22 of Obnam, my backup application. It is the first release for this year. Packages are available on code.liw.fi/debian and in Debian unstable, and source is in git. A summary of the user-visible changes is below.

For those interested in living dangerously and accidentally on purpose deleting all their data, the link below shows the status and roadmap for FORMAT GREEN ALBATROSS. http://distix.obnam.org/obnam-dev/182bd772889544d5867e1a0ce4e76652.html

Version 1.22, released 2017-06-25

  • Lars Wirzenius made Obnam log the full text of an Obnam exception/error message with more than one line. In particular this applies to encryption error messages, which now log the gpg output.

  • Lars Wirzenius made obnam restore require absolute paths for files to be restored.

  • Lars Wirzenius made obnam forget use a little less memory. The amount depends on the number of generations and the chunks they refer to.

  • Jan Niggemann updated the German translation of the Obnam manual to match recent changes in the English version.

  • SanskritFritz and Ian Cambell fixed the kdirstat plugin.

  • Lars Wirzenius changed Obnam to hide a Python stack trace when there's a problem with the SSH connection (e.g., failure to authenticate, or existing connection breaks).

  • Lars Wirzenius made the Green Albatross version of obnam forget actually free chunks that are no longer used.

Planet DebianShirish Agarwal: Dreams don’t cost a penny, mumma’s boy :)

This one I promise will be short 🙂

After the last two updates, I thought it was time to share something positive. While I’m doing the hard work (of physiotherapy, which is gruelling), the only luxury I have nowadays is falling into dreams, which are also rare. The dream I’m going to share is totally unrealistic, as my mum hates travel, but I’m sure many sons and daughters would identify with it. Almost all her travel outside of visiting relatives has been because I dragged her into it.

In the dream, I am going to a Debconf, get a bursary and the conference is being held somewhere in Europe, maybe Paris (2019 probably) 🙂 . I reach and attend the conference, present and generally have a great time sharing and learning from my peers. As we all do in these uncertain times, I too phone home every couple of days assuring her that I’m well and in the best of health. The weather is mild like it was in South Africa and this time I had come packed with woollens, so all was well.

Just the day before the conference is to end, I call up mum and she tells me about a specific hotel/hostel which I should check out. I am somewhat surprised that she knows of a specific hotel/hostel in a specific place, but I accede to her request. I go there to find a small, quiet, quaint bed & breakfast place (dunno if Paris has such kinds of places), something which my mum would like. Intuitively, I ask at the reception to look in the register. After seeing my passport, he too accedes to my request and shows me the logbook of all visitors registered in the last few days (for some reason privacy is not a concern then). I am surprised to find my mother’s name in the register; she had turned up just a day or two before.

I again request the reception to be able to go to room abcd without being announced and to have some sort of key (like for maintenance or laundry as an excuse). The reception calls the manager and, after looking at copies of my mother’s passport and mine, they somehow accede to the request, with help located nearby just in case something goes wrong.

I disguise my voice and announce myself as either Room Service or Maintenance, and we are both surprised and elated to see each other. After talking a while, I go back to the reception and register myself as her guest.

The next week is a whirlwind as I come to know of a hop-on, hop-off bus service in Paris similar to the one in South Africa. I buy a small booklet and we go through all the museums, the vineyards, or whatever it is that Paris has to offer. IIRC there is also a lock and key on a famous bridge. We also do that. She is constantly surprised at the different activities the city shows her and the mother-son bond becomes much better.

I had shared the same dream with her and she laughed. In reality, she is happy and comfortable in the confines of her home.

Still I hope some mother-daughter, son-father, son-mother or any other such family combination takes this entry to heart and enriches each other by travelling together.


Filed under: Miscellenous Tagged: #dream, #mother, #planet-debian, travel

,

Planet DebianLisandro Damián Nicanor Pérez Meyer: Qt 5.7 submodules that didn't make it to Stretch but will be in testing

There are two Qt 5.7 submodules that we could not package in time for Stretch but which are/will be available in their 5.7 versions in testing. These are qtdeclarative-render2d-plugin and qtvirtualkeyboard.

declarative-render2d-plugin makes use of the Raster paint engine instead of OpenGL to render the contents of a scene graph, thus making it useful when Qt Quick2 applications are run on a system without OpenGL 2 enabled hardware. Using it might require tweaking Debian's /etc/X11/Xsession.d/90qt5-opengl. On Qt 5.9 and newer this plugin is merged into Qt GUI, so there should be no need to perform any action on the user's behalf.

Debian's VirtualKeyboard currently has a gotcha: we are not building it with the embedded code it ships. Upstream ships 3rd party code but lacks a way to detect and use the system versions of them. See QTBUG-59594, patches are welcomed. Please note that we prefer patches sent directly upstream to the current dev revision, we will be happy to backport patches if necessary.
Yes, this means no hunspell, openwnn, pinyin, tcime nor lipi-toolkit/t9write support.


Planet DebianSteve Kemp: Linux security modules, round two.

So recently I wrote a Linux Security Module (LSM) which would deny execution of commands unless an extended attribute was present on the executable in question.

The whitelist-LSM worked well, but it soon became apparent that it was a little pointless. Most security changes are pointless unless you define what you're defending against - your "threat model".

In my case it was written largely as a learning experience, but also because I figured it could be useful. However it wasn't actually that useful, because you soon realize that you have to whitelist too much:

  • The redis-server binary must be executable, to the redis-user, otherwise it won't run.
  • /usr/bin/git must be executable to the git user.

In short there comes a point where user alice must run executable blah. If alice can run it, then so can mallory. At which point you realize the exercise is not so useful.

Taking a step back I realized that what I wanted to prevent was the execution of unknown/unexpected and malicious binaries. How do you identify known-good binaries? Well, hashes & checksums are good. So for my second attempt I figured I'd not look for a mere "flag" on a binary, but instead look for a valid hash.

Now my second LSM is invoked for every binary that is executed by a user:

  • When a binary is executed the sha1 hash is calculated of the files contents.
  • If that matches the value stored in an extended attribute the execution is permitted.
    • If the extended-attribute is missing, or the checksum doesn't match, then the execution is denied.

In practice this is the same behaviour as the previous LSM - a binary is either executable, because there is a good hash, or it is not, because it is missing or bogus. If somebody deploys a binary rootkit this will definitely stop it from executing, but of course there is a huge hole - scripting-languages:

  • If /usr/bin/perl is whitelisted then /usr/bin/perl /tmp/exploit.pl will succeed.
  • If /usr/bin/python is whitelisted then the same applies.

Despite that, the project was worthwhile: I can clearly describe what it is designed to achieve ("Deny the execution of unknown binaries", and "Deny binaries that have been modified"), and I learned how to hash a file from kernel-space - which was surprisingly simple.
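The userspace half of this, labelling known-good binaries, is simple. Here is a sketch of a labelling tool; the attribute name is illustrative, and a real deployment would use whatever name the LSM actually reads (kernel-enforced attributes typically live in the security.* namespace):

import hashlib
import os
import sys

ATTR = "user.whitelist.sha1"  # illustrative name; match whatever your LSM expects

def label(path):
    """Hash a binary and store the hex digest in an extended attribute,
    mirroring what the LSM recomputes and compares at exec time."""
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    os.setxattr(path, ATTR, digest.encode())  # Linux-only call
    print(path, digest)

if __name__ == "__main__":
    for binary in sys.argv[1:]:
        label(binary)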

(Yes I know about IMA and EVM - this was a simple project for learning purposes. Public-key signatures will be something I'll look at next/soon/later. :)

Perhaps the only other thing to explore is the complexity in allowing/denying actions based on the user - in a human-readable fashion, not via UIDs. So www-data can execute some programs, alice can run a different set of binaries, and git can only run /usr/bin/git.

Of course down that path lies apparmour, selinux, and madness..

Planet DebianIngo Juergensmann: Upgrade to Debian Stretch - GlusterFS fails to mount

Before I upgraded from Jessie to Stretch, everything worked like a charm with glusterfs in Debian. But after I upgraded the first VM to Debian Stretch I discovered that glusterfs-client was unable to mount the storage on Jessie servers. I got this in the glusterfs log:

[2017-06-24 12:51:53.240389] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 12:51:54.534826] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 12:51:54.534896] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
[2017-06-24 12:51:56.668254] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-24 12:51:56.671649] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-06-24 12:51:56.671669] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/le)
[2017-06-24 12:51:57.014502] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbea36c4a20] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x55fbbaed06f4] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55fbbaeca444] ) 0-: received signum (0), shutting down
[2017-06-24 12:51:57.014564] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/etc/letsencrypt.sh/certs'.
[2017-06-24 16:44:45.501056] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 16:44:45.504038] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 16:44:45.504084] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount

After some searches on the Internet I found Debian #858495, but no solution for my problem. Some search results recommended to set "option rpc-auth-allow-insecure on", but this didn't help. In the end I joined #gluster on Freenode and got some hints there:

JoeJulian | ij__: debian breaks apart ipv4 and ipv6. You'll need to remove the ipv6 ::1 address from localhost in /etc/hosts or recombine your ip stack (it's a sysctl thing)
JoeJulian | It has to do with the decisions made by the debian distro designers. All debian versions should have that problem. (yes, server side).

Removing ::1 from /etc/hosts and from the lo interface did the trick, and I could mount glusterfs storage from Jessie servers in my Stretch VMs again. However, when I upgraded the glusterfs storages to Stretch as well, this workaround didn't work anymore. Some more searching on the Internet led me to this posting on the glusterfs mailing list:

We had seen a similar issue and Rajesh has provided a detailed explanation on why at [1]. I'd suggest you to not to change glusterd.vol but execute "gluster volume set <volname> transport.address-family inet" to allow Gluster to listen on IPv4 by default.

Setting this option instantly fixed my issues with mounting glusterfs storages.
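
Since the option is set per volume, it has to be applied to each affected volume. For the volume behind the mount above, that is a single command on one of the gluster servers (a sketch - "le" is the volfile id from my mount, your volume name may differ):

gluster volume set le transport.address-family inet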

So, whatever is wrong with glusterfs in Debian, it seems to have something to do with IPv4 and IPv6. With IPv6 disabled in glusterfs, it works. I added this information to #858495.


Planet DebianRiku Voipio: Cross-compiling with debian stretch

Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick exact-steps guide. But first - while you could do all this in your desktop PC's rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "strech_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch

Then we set up the cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64

Now we have a nice build environment, let's choose something more complicated than the usual kernel/BusyBox to cross-build, qemu:

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b

Now that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests. Or some of the build-dependencies may not be multiarch-enabled. So work continues :)
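
For the unit-test case, skipping them is just an environment variable for dpkg-buildpackage. A minimal sketch, reusing the qemu build from above:

# skip build-time unit tests while cross-building (sketch)
DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -aarm64 -j6 -b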

CryptogramAmazon Patents Measures to Prevent In-Store Comparison Shopping

Amazon has been issued a patent on security measures that prevent people from comparison shopping while in the store. It's not a particularly sophisticated patent -- it basically detects when you're using the in-store Wi-Fi to visit a competitor's site and then blocks access -- but it is an indication of how retail has changed in recent years.

What's interesting is that Amazon is on the other side of this arms race. As an on-line retailer, it wants people to walk into stores and then comparison shop on its site. Yes, I know it's buying Whole Foods, but it's still predominantly an online retailer. Maybe it patented this to prevent stores from implementing the technology.

It's probably not nearly that strategic. It's hard to build a business strategy around a security measure that can be defeated with cellular access.

Planet Linux AustraliaLev Lafayette: Duolingo Plus is Extremely Broken

After using Duolingo for over a year and accumulating almost 100,000 points, I thought I would do the right thing and pay for the Plus service. It was exactly the right time, as I would be travelling overseas and the ability to do lessons offline and have them sync later seemed ideal.

For the first few days it seemed to be operating fine; I had downloaded the German tree and was working my way through it. Then I downloaded the French tree, and several problems started to emerge.


Planet DebianNorbert Preining: Calibre 3 for Debian

I have updated my Calibre Debian repository to include packages of the current Calibre 3.1.1. As with the previous packages, I kept RAR support in to allow me to read comic books. I have also forwarded my changes to the maintainer of Calibre in Debian, so maybe we will soon have official packages, too.

The repository location hasn’t changed, see below.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13
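
After adding those lines and importing the key, installing is the usual apt routine - a quick sketch, assuming an apt-based system:

apt-get update
apt-get install calibre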

Enjoy

,

Planet DebianJoachim Breitner: The perils of live demonstrations

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I had given the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <cdsmith@gmail.com>
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.
    
    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly2. In the end, I could save face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it, and it took me a while, but it turned out that the problem was not in CodeWorld, or in the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.


  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

CryptogramFriday Squid Blogging: Injured Giant Squid Video

A paddleboarder had a run-in with an injured giant squid. Video. Here's the real story.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJoey Hess: PV array is hot

Only took a couple hours to wire up and mount the combiner box.

PV combiner box with breakers

Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to.

PV combiner box wiring

And the new PV array is hot!

multimeter reading 66.8 VDC

Update: The panels have an open-circuit voltage of 35.89 V and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.

Cory DoctorowCanada: Trump shows us what happens when “good” politicians demand surveillance powers

The CBC asked me to write an editorial for their package about Canadian identity and politics, timed with the 150th anniversary of the founding of the settler state on indigenous lands. They’ve assigned several writers to expand on themes in the Canadian national anthem, and my line was “We stand on guard for thee.”

I wrote about bill C-51, a reckless, sweeping mass surveillance bill that now-PM Trudeau got his MPs to support when he was in opposition, promising to reform the bill once he came to power.


The situation is analogous to Barack Obama’s history with mass surveillance in the USA: when Obama was a Senator, he shepherded legislation to immunize the phone companies for their complicity with illegal spying under GW Bush, promising to fix the situation when he came to power. Instead, he built out a fearsome surveillance apparatus that he handed to the paranoid, racist Donald Trump, who now gets to use that surveillance system to target his enemies, including 11 million undocumented people in America, and people of Muslim origin.


Now-PM Justin Trudeau has finally tabled some reforms to C-51, but they leave the bill’s worst provisions intact. Even if Canadians trust Trudeau to use these spying powers wisely, they can’t afford to bet that Trudeau’s successors will not abuse them.


Within living memory, our loved ones were persecuted, hounded to suicide, imprisoned for activities that we recognize today as normal and right: being gay, smoking pot, demanding that settler governments honour their treaties with First Nations. The legitimization of these activities only took place because we had a private sphere in which to agitate for them.

Today, there are people you love, people I love, who sorrow over their secrets about their lives and values and ambitions, who will go to their graves with that sorrow in their minds — unless we give them the private space to choose the time and manner of their disclosure, so as to maximize the chances that we will be their allies in their struggles. If we are to stand on guard for the future of Canada, let us stand on guard for these people, for they are us.


What happens after the ‘good’ politicians give away our rights? Cory Doctorow shares a cautionary tale.

[Cory Doctorow/CBC]


(Image: Jean-Marc Carisse, CC-BY; Trump’s Hair)

CryptogramThe Secret Code of Beatrix Potter

Interesting:

As codes go, Potter's wasn't inordinately complicated. As Wiltshire explains, it was a "mono-alphabetic substitution cipher code," in which each letter of the alphabet was replaced by a symbol -- the kind of thing they teach you in Cub Scouts. The real trouble was Potter's own fluency with it. She quickly learned to write the code so fast that each sheet looked, even to Linder's trained eye, like a maze of scribbles.

TEDWhy TED takes two weeks off every summer

TED.com is about to go quiet for two weeks. No new TED Talks will be posted on the web until Monday, July 10, 2017, while most of the TED staff takes our annual two-week vacation.

Yes, we all (or almost all) go on vacation at the same time. No, we don’t all go to the same place.

We’ve been doing it this way now for eight years. Our summer break is a little lifehack that solves the problem of a company in perpetual-startup mode where something new is always going on and everyone has raging FOMO. We avoid the fear of missing out on emails and new projects and blah blah blah … by making sure that nothing is going on.

I love how the inventor of this holiday, TED’s founding head of media June Cohen, once explained it: “When you have a team of passionate, dedicated overachievers, you don’t need to push them to work harder, you need to help them rest. By taking the same two weeks off, it makes sure everyone takes vacation,” she said. “Planning a vacation is hard — most of us still feel a little guilty to take two weeks off, and we’d be likely to cancel when something inevitably comes up. This creates an enforced rest period, which is so important for productivity and happiness.”

Bonus: “It’s efficient,” she said. “In most companies, people stagger their vacations through the summer. But this means you can never quite get things done all summer long. You never have all the right people in the room.”

So, as the bartender said: You don’t have to go home, but you can’t stay here. We won’t post new TED Talks on the web for the next two weeks. (Though — check out audio talks on iTunes, where we’re curating two weeks of talks on the theme of Journeys.) The office is three-quarters empty. And we stay off email. The whole point is that vacation time should be truly restful, and we should be able to recharge without having to check in or worry about what we’re missing back at the office.

See you on Monday, July 10!

Note: This piece was first posted on July 17, 2014. It was updated on July 27, 2015, again on July 20, 2016, and again on June 23, 2017.


LongNowThe Artangel Longplayer Letters: Iain Sinclair writes to Alan Moore

Iain Sinclair (left) chose Alan Moore as the recipient of his Longplayer letter.


In November 02015, Manuel Arriaga wrote a letter to Giles Fraser as part of the Artangel Longplayer Letters series. The series is a relay-style correspondence: The first letter was written by Brian Eno to Nassim Taleb. Nassim Taleb then wrote to Stewart Brand, and Stewart wrote to Esther Dyson, who wrote to Carne Ross, who wrote to John Burnside, who wrote to Manuel Arriaga, who wrote to Giles Fraser; that letter remains unanswered.

In June 02017, the Longplayer Trust initiated a new correspondence, beginning with Iain Sinclair, a writer and filmmaker whose recent work focuses on the psychogeography of London, writing to graphic novel writer Alan Moore, who will respond with a letter to a recipient of his choosing.

The discussion thus far has focused on the extent and ways government and technology can foster long-term thinking. You can find the previous correspondences here.


Hackney: 30 January 2017

Dear Alan,

We are being invited, by means of predatory technologies neither of us advocate or employ, to consider ‘long-term thinking’. But already I’m coughing up the fishbone of that hyphen and going into electroconvulsive spasms over this requirement to think about thinking – and at a late stage in my own terrestrial transit when I know all too well that there is no longterm. The diminishing future, protected by a feeble envelope of identity, has already been used up, wantonly. And the past was always a looped mistake plaguing us with repeated flares of shame. Those smells and textures, wet and warm, cabbage and custard, get sharper even as our faculties fail. I pick them up very easily by fingering the close-planted acres of your Jerusalem. The first great English scratch-and-sniff epic.

“And now,” as Sebald said, “I am living the wrong life.” Having tried, for too many years, to muddy the waters with untrustworthy fictions and ‘alternative truths’, books that detoured into other books, I am now colonised by longplaying images and private obsessions, vinyl ghosts in an unmapped digital multiverse. This is the fate we must accept, some more gratefully than others, before we let the whole slithery viscous mess go and sink into nothingness.

“Unexplained but not suspicious,” they concluded about the premature death of George Michael. They could, just as easily, have been talking about his life. About all our lives.

I remember Jeremy Prynne, when I first came across him, being affronted (and amused) by a request from a Canadian academic/poet for: ‘an example of your thought’. ‘Like a lump of basalt,’ he snorted. Reaching for his geological toffee-hammer. Thinking was something else: an energy field, a process that happened outside and beyond the will of the thinker. Like snow. Or waves. Or television. And with the unstated aim of eliminating egoic interference. A solitary amputated ‘thought’, framed for display, would be as horrifying as that morning radio interlude when listeners channel-hop or make their cups of tea: Thought for the Day. Hospital homilies with ecumenical bent for an immobile and chemically-coshed constituency.

But I do think (misrepresent, subvert) about a notion you once expressed: time as a solid. ‘Eternalism’ as a sort of Swedenborgian block – like a form of discontinued public housing in which pastpresentfuture coexist, shoulder-to-shoulder: legions of the persistent and half-erased dead, fictional avatars more real now than their creators, the unborn, aborted and nearly-born, and the vegetative buddhas on hard benches, all whispering and jabbering and going about their meaningless business. Each of them invisible to the others. Probably in Northampton. Probably in a few streets of Northampton. Your beloved Boroughs. Which are also burrows (and Burroughs). They are hidden in plain sight in that narcoleptic trance between slow-waking and swift-dying, adrift in the nuclear fusion of dusk and dawn. In moving meadows by some unmoving river. They sweat uphill on arterial roads: tramps, pilgrims, levellers, ranters, bootmakers, working mothers festooned with infants, prostitutes, immigrants, damaged seers and local artists, incarcerated poets and skewed uncles around a snooker table in some defunct and cobwebby Labour Club. And all the ones who are still waiting to become Alan Moore.

“The living can assist the imagination of the dead,” Yeats wrote in A Vision. I started my journey through London with that sentence and I’ve never got beyond it. The ambition remains: to be ventriloquised, tapped, channelled. “Life rewritten by life,” as Brian Catling puts it. Longterm is a deranged Xerox printer spewing out copies of copies, until the image is bleached to snowblind illegibility. Examine any seriously popular production, any universally endorsed philosophy, and you can peel it back, layer by layer, to some obscure and unheralded madman in a cluttered cabin, muttering to himself and sketching occulted diagrams of influences and interconnections. Successive reboots bring the unspeakable (better left in silence) closer to the ear, a process infinitely accommodated now by the speed of the digital web. Where nothing is true and none of it matters. And you finish with Donald Trump. Ubu of the internet.

“The illusion of mortality, post-Einstein,” you say. The neighbourly undead patrol their limitless limits: “soiled simultaneity.” That pregnant now in which the past is struggling to suppress its dreadful future. To escape the cull of gravity.  I have never been able to deal in abstractions. I like detail, glinting particulars. Anecdotes.

After noticing uniformed kids tramping, every morning, to their flatpack Academy by the canal, infested with hissing earworms, Nuremberg headphones, tablets held out in front of them like tiny trays of cocktail sausages, I registered a boy and a girl talking very quietly, not wanting to break the concentration of an older girl – who is reading as she walks: James Joyce. And, at the same time, under a Shoreditch railway bridge, there appeared, above a set of recycling bins (‘Trade Mixed Glass’), a portrait of the Ulysses author, with one blackened lens and an unnecessary title: REBEL. Which set me ‘thinking’ about your Jerusalem and the way you tap Lucia Joyce, or recover the aftershock of her Northampton confinement by total immersion in the Babel of Finnegans Wake. Your speculative punt calls up the Burroughs notion of the ‘image vine’: once you have committed to a single image (or word), the next one is fated to follow. By the time of those methadone-managed twilight years in Kansas, Burroughs had exorcised the demons that made him write, the karma of shooting his wife in Mexico City. He used up the days that were left in attending to his cats, making splat-art with his guns and recording his dreams.

“Couldn’t find my room as usual in the Land of the Dead. Followed by bounty hunters.” Postmortem, Bill is still looking for trigger episodes. “It seems that cities are being moved from one place to another.” The Place of Dead Roads, he calls it. And in Jerusalem, you catch very well those freaks of random, punctured illumination. “Each vital second of her life was there as an exquisite moving miniature, filled with the most intense significance and limned in colours so profound they blazed, yet not set in any noticeable order.”

My hunch is this: that Eternalism, the long-player’s ultimate longplay, is located in residues of sleep, in the community of sleepers, between worlds, beyond mortality. I dreamed my genesis in sweat of sleep. There was a dream, one of a series, that I failed to record, but which felt like a reprise of aspects of that film with which we were both involved, The Cardinal and the Corpse. So many of the cast are now dead, locked up, disappeared, but still in play, their voices, their persons, that they secure territory (and time), a privileged past. We were in the Princelet Street synagogue, climbing the stairs (as you did in that house with the peeling pink door), towards an attic chamber that was also a curtained confessional box. With Chris Petit, obviously, as the hovering cardinal (actually a madhouse keeper from Sligo). We managed a ritual exchange of velvet cricket caps before the world outside the window started to spin, day to night, years to centuries, stars going out, suns born, like The House on the Borderland. Martin Stone, laying out a pattern of white lines on his black case, told us that he had just found, in an abandoned villa outside Nice, a lavishly inscribed copy of the first edition that once belonged to Aleister Crowley. But he had decided to keep it.

If the integrity of time breaks down, place is confirmed. I’m thinking about the Northampton Boroughs, about your friend and mentor, Steve Moore, on Shooter’s Hill. And how Steve sourced the dream of what would happen, his abrupt transference, while sticking around, polishing the Japanese energy shield, long enough to confirm his own predictions, and to allow others to appreciate the narrative arc of his death. That long, long preparation – in vision and domestic reality – you describe in City of Disappearances.

The trick then, the quest we’re all on, is to identify and honour those neural pathways: the trench you print out in Jerusalem, worn by steel-shod boots, between Northampton and Lambeth. The fugue of movement. A man who is here. Who vanishes. And reappears. Is he the same? Are you? Something carries this walker, like John Clare, out on an English road: foot-foundered, gobbling at verges, sleeping in ditches. In the expectation of reconnecting with an extinguished muse: youth, innocence, desire. That is the only longplay I have encountered: one journey fading into the next. No thought. No thinking. Drift. Reverie. As you say, ‘Panoramic portrait over lofty landscape.’ Every time.


Iain Sinclair was born in Cardiff. He left almost immediately. He has lived and worked around Hackney for almost 50 years, but the local terrain is as strange and enticing as ever. Books – including Lud Heat, Downriver, London Orbital and American Smoke – have been published. And there have been filmic collaborations with Chris Petit and Andrew Kötting, among others. Sinclair has recently completed The Last London, the final volume in a long sequence.

Alan Moore was born in Northampton in 1953 and is a writer, performer, recording artist, activist and magician. His comic-book work includes Lost Girls with Melinda Gebbie, From Hell with Eddie Campbell and The League of Extraordinary Gentlemen with Kevin O’Neill. He has worked with director Mitch Jenkins on the Showpieces cycle of short films and on forthcoming feature film The Show, while his novels include Voice of the Fire (1996) and his current epic Jerusalem (2016). Only about half as frightening as he looks, he lives in Northampton with his wife and collaborator Melinda Gebbie.

TEDTEDWomen update: Black Lives Matter wins Sydney Peace Prize

Founders of the Black Lives Matter movement — from left, Alicia Garza, Patrisse Cullors and Opal Tometi, interviewed onstage by TEDWomen cohost Mia Birdsong at TEDWomen 2016 in San Francisco. Photo: Marla Aufmuth / TED

Cross-posted from TEDWomen curator Pat Mitchell’s blog on the Huffington Post.

Last month, the Black Lives Matter movement was awarded the Sydney Peace Prize, a global prize that honors those who pursue “peace with justice.” Past honorees include South African Archbishop Desmond Tutu and Irish President Mary Robinson.

The prize “recognizes the vital contributions of leading global peacemakers, creates a platform so that their voices are heard, and supports their vital work for a fairer world.” Winners receive $50,000 to help them continue their work.

One of the highlights of last year’s TEDWomen was a conversation with Black Lives Matter founders Alicia Garza, Patrisse Cullors and Opal Tometi. They spoke with Mia Birdsong about the movement and their commitment to working collaboratively for change. As Tometi told Birdsong: “We need to acknowledge that different people contribute different strengths, and that in order for our entire team to flourish, we have to allow them to share and allow them to shine.”

This year’s TEDWomen conference (registration is open), which will be held in New Orleans November 1–3, 2017, will expand on many of the themes Garza, Cullors, Tometi and Birdsong touched on during their conversation last year. This year’s conference theme is Bridges — and we’ll be looking at how individuals and organizations create bridges between races, cultures, people, and places — and, as modeled by the Black Lives Matter movement, how we build bridges to a more equal and just world.

In announcing the award, the Sydney Peace Foundation said, “This is the first time that a movement and not a person has been awarded the peace prize — a timely choice. Climate change is escalating fast, increasing inequality and racism are feeding divisiveness, and we are in the middle of the worst refugee crisis since World War II. Yet many establishment leaders across the world stick their heads in the sand or turn their backs on justice, fairness and equality.”

Founders Garza, Cullors and Tometi will travel to Australia later this year to formally accept the prize.

Congratulations to them!


Planet DebianElena 'valhalla' Grandi: On brokenness, the live installer and being nice to people

This morning I read this post: blog.einval.com/2017/06/22#tro.

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

Planet DebianJonathan Dowland: WD drive head parking update

An update for my post on Western Digital Hard Drive head parking: disabling the head parking completely stopped the Load_Cycle_Count S.M.A.R.T. attribute from incrementing. This probably comes at the cost of power usage, but I am not able to assess the impact of that as I'm not currently monitoring the power draw of the NAS (although that's on my TODO list).
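
For reference, one common way to do this on Linux is with idle3-tools, and to watch the counter with smartmontools. A sketch, assuming /dev/sda is the WD drive (the tool and device name are assumptions on my part; the post above doesn't say which method was used):

# show the current idle3 (head parking) timer (requires idle3-tools)
idle3ctl -g /dev/sda
# disable head parking; takes effect after the drive is power-cycled
idle3ctl -d /dev/sda
# verify the counter has stopped incrementing (requires smartmontools)
smartctl -A /dev/sda | grep Load_Cycle_Count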

Sociological ImagesIs it ethical to give your child “every advantage”?

Flashback Friday.

Stiff competition for entrance to private preschools and kindergartens in Manhattan has created a test prep market for children under 5. The New York Times profiled Bright Kids NYC. The owner confesses that “the parents of the 120 children her staff tutored [in 2010] spent an average of $1,000 on test prep for their 4-year-olds.”  This, of course, makes admission to schools for the gifted a matter of class privilege as well as intelligence.

The article also tells the story of a woman without the resources to get her child, Chase, professional tutoring:

Ms. Stewart, a single mom working two jobs, didn’t think the process was fair. She had heard widespread reports of wealthy families preparing their children for the kindergarten gifted test with $90 workbooks, $145-an-hour tutoring and weekend “boot camps.”

Ms. Stewart used a booklet the city provided and reviewed the 16 sample questions with Chase. “I was online trying to find sample tests,” she said. “But everything was $50 or more. I couldn’t afford that.”

Ms. Stewart can’t afford tutoring for Chase; other parents can. It’s unfair that entrance into kindergarten level programs is being gamed by people with resources, disadvantaging the most disadvantaged kids from the get go. I think many people will agree.

But the more insidious value, the one that almost no one would identify as problematic, is the idea that all parents should do everything they can to give their child advantages. Even Ms. Stewart thinks so. “They want to help their kids,” she said. “If I could buy it, I would, too.”

Somehow, in the attachment to the idea that we should all help our kids get every advantage, the fact that advantaging your child disadvantages other people’s children gets lost.  If it advantages your child, it must be advantaging him over someone else; otherwise it’s not an advantage, you see?

I felt like this belief (that you should give your child every advantage) and its invisible partner (that doing so is hurting other people’s children) were rife in the FAQs on the Bright Kids NYC website.

Isn’t my child too young to be tutored?

These programs are very competitive, the answers say, and you need to make sure your kid does better than other children.  It’s never too soon to gain an advantage.

My child is already bright, why does he or she need to be prepared?

Because being bright isn’t enough.  If you get your kid tutoring, she’ll be able to show she’s bright in exactly the right way. All those other bright kids that can’t get tutoring won’t get in because, after all, being bright isn’t enough.

Is it fair to “prep” for the standardized testing?

Of course it’s fair, the website claims!  It’s not only fair, it’s “rational”!  What parent wouldn’t give their child an advantage!?  They avoid actually answering the question. Instead, they make kids who don’t get tutoring invisible and then suggest that you’d be crazy not to enroll your child in the program.

My friend says that her child got a very high ERB [score] without prepping.  My kid should be able to do the same.

Don’t be foolish, the website responds. This isn’t about being bright, remember. Besides, your friend is lying. They’re spending $700,000 on their kid’s schooling (aren’t we all!?) and we can’t disclose our clients but, trust us, they either forked over a grand to Bright Kids NYC or to test administrators.

Test prep for kindergartners seems like a pretty blatant example of class privilege. But, of course, the argument that advantaging your own kid necessarily involves disadvantaging someone else’s applies to all sorts of things, from tutoring, to a leisurely summer with which to study for the SAT, to financial support during their unpaid internships, to helping them buy a house and, thus, keeping home prices high.

I think it’s worth re-evaluating. Is giving your kid every advantage the moral thing to do?

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianBits from Debian: Hewlett Packard Enterprise Platinum Sponsor of DebConf17

We are very pleased to announce that Hewlett Packard Enterprise (HPE) has committed support to DebConf17 as a Platinum sponsor.

"Hewlett Packard Enterprise is excited to support Debian's annual developer conference again this year", said Steve Geary, Senior Director R&D at Hewlett Packard Enterprise. "As Platinum sponsors and member of the Debian community, HPE is committed to supporting Debconf. The conference, community and open distribution are foundational to the development of The Machine research program and will our bring our Memory Driven Computing agenda to life."

HPE is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

With this additional commitment as Platinum Sponsor, HPE helps make our annual conference possible, and directly supports the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Hewlett Packard Enterprise, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf17 website at https://debconf17.debconf.org.

Krebs on SecurityFBI: Extortion, CEO Fraud Among Top Online Fraud Complaints in 2016

Online extortion, tech support scams and phishing attacks that spoof the boss were among the most costly cyber scams reported by consumers and businesses last year, according to new figures from the FBI’s Internet Crime Complaint Center (IC3).

The IC3 report released Thursday correctly identifies some of the most prevalent and insidious forms of cybercrimes today, but the total financial losses tied to each crime type also underscore how infrequently victims actually report such crimes to law enforcement.

Source: Internet Crime Complaint Center (IC3).

For example, the IC3 said it received 17,146 extortion-related complaints, with an adjusted financial loss totaling just over $15 million. In that category, the report identified 2,673 complaints identified as ransomware — malicious software that scrambles a victim’s most important files and holds them hostage unless and until the victim pays a ransom (usually in a virtual currency like Bitcoin).

According to the IC3, the losses associated with those ransomware complaints totaled slightly more than $2.4 million. Writing for BleepingComputer.com — a tech support forum I’ve long recommended that helps countless ransomware victims — Catalin Cimpanu observes that the FBI’s ransomware numbers “are ridiculously small compared to what happens in the real world, where ransomware is one of today’s most prevalent cyber-threats.”

“The only explanation is that people are paying ransoms, restoring from backups, or reinstalling PCs without filing a complaint with authorities,” Cimpanu writes.

It’s difficult to know what percentage of ransomware victims paid the ransom or were able to restore from backups, but one thing is for sure: relatively few victims are reporting cyber fraud to federal investigators.

The report notes that only an estimated 15 percent of the nation’s fraud victims report their crimes to law enforcement. For 2016, 298,728 complaints were received, with a total victim loss of $1.33 billion.

If that 15 percent estimate is close to accurate, that means the real cost of cyber fraud for Americans last year was probably closer to $9 billion ($1.33 billion / 0.15 is roughly $8.9 billion), and the losses from ransomware attacks upwards of $16 million.

The IC3 reports that last year it received slightly more than 12,000 complaints about CEO fraud attacks — e-mail scams in which the attacker spoofs the boss and tricks an employee at the organization into wiring funds to the fraudster. The fraud-fighting agency said losses from CEO fraud (also known as the “business email compromise” or BEC scam) totaled more than $360 million.

Applying that same 15 percent rule, that brings the likely actual losses from CEO fraud schemes to around $2.4 billion last year.

Some 10,850 businesses and consumers reported being targeted by tech support scams last year, with the total reported loss at around $7.8 million. Perhaps unsurprisingly, the IC3 report observed that victims in older age groups reported the highest losses.

Many other, more established types of Internet crimes — such as romance scams and advanced fee fraud — earned top rankings in the report. Check out the full report here (PDF). The FBI urges all victims of computer crimes to report the incidents at IC3.gov. The IC3 unit is part of the FBI’s Cyber Operations Section, and it uses the reports to compile and refer cases for investigation and prosecution.

Source: IC3

Worse Than FailureError'd: Perfectly Logical

"Outlook can't open an attachment because it claims that it was made in Outlook, which Outlook doesn't think is installed...or something," writes Gavin.

 

Mitch wrote, "So, the problems I'm having with activating Windows 10 is that I need to install Windows 10. Of course!"

 

"I don't expect 2018 to come around," writes Adam K., "Instead we'll all be transported back to 2014!"

 

"Here I thought that the world had gone mad, but then I remembered that I had a currency converter add-on installed," writes Shahim M.

 

John S. wrote, "It's good to know that the important notices are getting priority!"

 

Michael D. wrote, "It's all fun and games until someone tries to exit the conference room while someone else is quenching their thirst."

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet DebianArturo Borrero González: Backup router/switch configuration to a git repository

Most routers/switches out there store their configuration in plain text, which is nice for backups. I’m talking about Cisco, Juniper, HPE, etc. The configuration of our routers is changed several times a day by the operators, and we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version perhaps they include changes-tracking mechanisms as well. However, they mostly don’t integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-full for what we required: simply fetch the plain text config and put it into a git repo.

Worth noting that the most important drawback of not triggering the change-tracking from the router/switch is that we have to follow a polling approach: logging into each device, fetching the plain text config, and committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the sync behaviour and won’t see any changes until the next cron run.

In most cases, we lose authorship information as well. But that was not important for us right now. In the future this is something that we will have to solve.

Also, some routers/switches lack basic SSH security improvements, like public-key authentication, so we end up having to hard-code user/pass in our worker script.

Since we have several devices of the same type, we just iterate over their names.

For example, this is what we use for hp comware devices:

#!/bin/bash
# run this script by cron

USER="git"
PASSWORD="readonlyuser"
DEVICES="device1 device2 device3 device4"

FILE="flash:/startup.cfg"
GIT_DIR="myrepo"
GIT="/srv/git/${GIT_DIR}.git"

TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
	echo "E: no temp dir created" >&2
	exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
	echo "E: no git binary" >&2
	exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
	echo "E: no scp binary" >&2
	exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
	echo "E: no sshpass binary" >&2
	exit 1
fi

# clone git repo
cd $TMP_DIR
$GIT_BIN clone $GIT
cd $GIT_DIR

for device in $DEVICES; do
	mkdir -p $device
	cd $device

	# fetch cfg
	CONN="${USER}@${device}"
	$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:${FILE} .

	# commit
	$GIT_BIN add -A .
	$GIT_BIN commit -m "${device}: configuration change" \
		-m "A configuration change was detected" \
		--author="cron <cron@example.com>"

	$GIT_BIN push -f
	cd ..
done

# cleanup
rm -rf $TMP_DIR

You should create a read-only user ‘git’ on the devices. And beware that each device model stores the config file in a different place.

For reference, in HP comware, the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit
#

In Junos/Juniper, the file you should scp is /config/juniper.conf.gz, and the script should gunzip the data before committing (see the fetch sketch after the login configuration below). For the read-only user, try something like this:

system {
	[...]
	login {
		[...]
		class git {
			permissions maintenance;
			allow-commands scp.*;
		}
		user git {
			uid xxx;
			class git;
			authentication {
				encrypted-password "xxx";
			}
		}
	}
}
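
On the fetch side, the Junos variant of the loop body just needs that decompression step before the commit. A sketch, reusing the variables from the script above:

# Junos: fetch the compressed config and gunzip it before committing
$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:/config/juniper.conf.gz .
gunzip -f juniper.conf.gz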

The file to scp in HP procurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Get the device controlled directly by git (i.e. commit –> git hook –> device update) or at least have the device commit the changes to git by itself. I’m open to suggestions :-)
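
For the record, the "git hook" half of that ideal could look something like this. A minimal sketch of a post-receive hook in the bare repo; the device name, file layout and credentials are assumptions, and unlike the backup above it would need a read-write account on the device:

#!/bin/bash
# post-receive hook sketch: push the newly committed config back to the device
USER="git"                # would need write access for this direction
PASSWORD="readwriteuser"  # assumption: password auth, as in the backup script
DEVICE="device1"
FILE="flash:/startup.cfg"

# check out the latest commit into a temporary work tree
TMP_DIR="$(mktemp -d)"
git --work-tree="$TMP_DIR" checkout -f

# copy the device's config from the repo layout back onto the device
sshpass -p "$PASSWORD" scp "${TMP_DIR}/${DEVICE}/startup.cfg" "${USER}@${DEVICE}:${FILE}"

rm -rf "$TMP_DIR"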

Don MartiFun with dlvr.it

Check it out—I'm "on Facebook" again. Just fixed my gateway through dlvr.it. If you're reading this on Facebook, that's why.

Dlvr.it is a nifty service that will post to social sites from an RSS feed. If you don't run your own linklog feed, the good news is that Pocket will generate RSS feeds from the articles you save, so if you want to share links with people still on Facebook, the combination of Pocket and dlvr.it makes that easy to do without actually spending human eyeball time there.

There's a story about Thomas Nelson, Jr., leader of the Virginia Militia in the Revolutionary War.

During the siege and battle Nelson led the Virginia Militia whom he had personally organized and supplied with his own funds. Legend had it that Nelson ordered his artillery to direct their fire on his own house which was occupied by Cornwallis, offering five guineas to the first man who hit the house.

Would Facebook's owners do the same, now that we know that foreign interests use Facebook to subvert America? Probably not. The Nelson story is just an unconfirmed patriotic anecdote, and we can't expect that kind of thing from today's post-patriotic investor class. Anyway, just seeing if I can move Facebook's bots/eyeballs ratio up a little.

,

Planet Linux AustraliaChris Neugebauer: Hire me!

tl;dr: I’ve recently moved to the San Francisco Bay Area and received my US Work Authorization, so now I’m looking for somewhere to work. I have a résumé and an e-mail address!

I’ve worked a lot in Free and Open Source Software communities over the last five years, both in Australia and overseas. While much of my focus has been on the Python community, I’ve also worked more broadly in the Open Source world. I’ve been doing this community work entirely as a volunteer, most of the time working in full-time software engineering jobs which haven’t related to my work in the Open Source world.

It’s pretty clear that I want to move into a job where I can use the skills I’ve been volunteering for the last few years, and put them to good use both for my company, and for the communities I serve.

What I’m interested in doing fits best into a developer advocacy or community management sort of role. Working full-time on helping people in tech be better at what they do would be just wonderful. That said, my background is in code, and working in software engineering with a like-minded company would also be pretty exciting (better still if I get to write a lot of Python).

  • Something with a strong developer relations element. I enjoy working with other developers, and I love having the opportunity to get them excited about things that I’m excited about. As a conference organiser, I’m very aware of the line between terrible marketing shilling, and genuine advocacy by and for developers: I want to help whoever I work for end up on the right side of that line.
  • Either in San Francisco, North of San Francisco, or Remote-Friendly. I live in Petaluma, a lovely town about 50 minutes north of San Francisco, with my wonderful partner, Josh. We’re pretty happy up here, but I’m happy to regularly commute as far as San Francisco. I’ll consider opportunities in other cities, but they’d need to primarily be remote.
  • Relevant to Open Source. The Open Source world is where my experience is, it’s where I know people, and it’s the world where I can be most credible. This doesn’t mean I need to be working on open source itself, but I’d love to be able to show up at OSCON or linux.conf.au and be excited to have my company’s name on my badge.

Why would I be good at this? I’ve been working on building and interacting with communities of developers, especially in the Free and Open Source Software world, for the last five years.

You can find a complete list of what I’ve done in my résumé, but here’s a selection of what I think’s notable:

  • Co-organised two editions of PyCon Australia, and led the linux.conf.au 2017 team. I’ve led PyCon AU, from inception, to bidding, to successful execution for two years in a row. As the public face of PyCon AU, I made sure that the conference had the right people interested in speaking, and that we had many from the Australian Python community interested in attending. I took what I learned at PyCon AU and applied it to run linux.conf.au 2017, where our CFP attracted its largest ever response (beating the previous record by more than 30%).
  • Developed Registrasion, an open source conference ticket system. I designed and developed a ticket sales system that allowed for automation of the most significant time sinks that linux.conf.au and PyCon Australia registration staff had experienced in previous years. Registrasion was Open Sourced, and several other conferences are considering adopting it.
  • Given talks at countless open source and developer events, both in Australia, and overseas. I’ve presented at OSCON, PyCons in five countries, and myriad other conferences. I’ve presented on a whole lot of technical topics, and I’ve recently started talking more about the community-level projects I’ve been involved with.
  • Designed, ran, and grew PyCon Australia’s outreach and inclusion programmes. Each year, PyCon Australia has offered upwards of $10,000 (around 10% of conference budget) in grants to people who otherwise wouldn’t be able to attend the conference: this is not just speakers, but people whose presence would improve the conference just by being there. I’ve led a team to assess applications for these grants, and lead our outreach efforts to make sure we find the right people to receive these grants.
  • Served as a council member for Linux Australia. Linux Australia is the peak body for Open Source communities in Australia, and it underwrites the region’s most popular Open Source and developer conferences. In particular, I led a project to design governance policies to help make sure the conferences we underwrite are properly budgeted and planned.

So, if you know of anything going at the moment, I’d love to hear about it. I’m reachable by e-mail (mail@chrisjrn.com) but you can also find me on Twitter (@chrisjrn), or if you really need to, LinkedIn.

TEDAn updated design for TED Talks

TED Talks design

It’s been a few years since the TED Talks video page was last updated, but a new design begins rolling out this week. The update aims to provide a simple, straightforward viewing experience for you while surfacing other ideas worth spreading that you might also like.

A few changes to highlight …

More talks to watch

Today there are about 2,500 TED Talks in the catalog, and each is unique. However, most of them are connected to other talks in some way — on similar topics, or given by the same speaker. Think of it as part of a conversation. That’s why, in our new design, it’s easier to see other talks you might be interested in. Those smart recommendations are shown along the right side of the screen.

As our library of talks grows, the updated design will help you discover the most relevant talks.

Beyond the video: More brain candy

Most ideas are rich in nuanced information, far beyond what an 18-minute talk can contain. That’s why we collected deeper content around the idea for you to explore — like books by the speaker, articles relating to the talk, and ways to take action and get involved — in the Details section.

Many speakers provide annotations for viewers (now with clickable time codes that take you right to the relevant moment in the video) as well as their own resources and personal recommendations. You can find all of that extra content in the Footnotes and Reading list sections.

Transcripts, translations, and subtitling

Reaching a global community has always been a foundation of TED’s mission, so working to improve the experience for our non-English speaking viewers is an ongoing effort. This update gives you one-click access to our most requested subtitles (when available), displayed in their native endonyms. We’ve also improved the subtitles themselves, making the text easier for you to read across languages.

What’s next?

While there are strong visual differences, this update is but one mark in a series of improvements we plan on making for how you view TED Talks on TED.com. We’d appreciate your feedback to measure our progress and influence our future changes!


LongNowThe Nuclear Bunker Preserving Movie History

During the Cold War, this underground bunker in Culpeper, Virginia was where the government would have taken the president if a nuclear war broke out. Now, the Library of Congress is using it to preserve all manner of films, from Casablanca to Harry Potter. The oldest films were made on nitrate, a fragile and highly combustible film base that shares the same chemical compound as gunpowder. Great Big Story takes us inside the vault, and introduces us to archivist George Willeman, the man in charge of restoring and preserving the earliest (and most incendiary) motion pictures.

Krebs on SecurityWhy So Many Top Hackers Hail from Russia

Conventional wisdom says one reason so many hackers seem to hail from Russia and parts of the former Soviet Union is that schools in these countries have traditionally placed a much greater emphasis on teaching information technology in middle and high school than their counterparts in the West, and yet these countries lack a Silicon Valley-like pipeline to help talented IT experts channel their skills into high-paying jobs. This post explores the first part of that assumption by examining a breadth of open-source data.

The supply side of that conventional wisdom seems to be supported by an analysis of educational data from both the U.S. and Russia, which indicates there are several stark and important differences between how American students are taught and tested on IT subjects versus their counterparts in Eastern Europe.

Compared to the United States there are quite a few more high school students in Russia who choose to specialize in information technology subjects. One way to measure this is to look at the number of high school students in the two countries who opt to take the advanced placement exam for computer science.

According to an analysis (PDF) by The College Board, in the ten years between 2005 and 2016 a total of 270,000 high school students in the United States opted to take the national exam in computer science (the “Computer Science Advanced Placement” exam).

Compare that to the numbers from Russia: A 2014 study (PDF) on computer science (called “Informatics” in Russia) by the Perm State National Research University found that roughly 60,000 Russian students register each year to take their nation’s equivalent to the AP exam — known as the “Unified National Examination.” Extrapolating that annual 60,000 number over ten years suggests that more than twice as many people in Russia — 600,000 — have taken the computer science exam at the high school level over the past decade.

In “A National Talent Strategy,” an in-depth analysis from Microsoft Corp. on the outlook for information technology careers, the authors warn that, despite its critical and growing importance, computer science is taught in only a small minority of U.S. schools. The Microsoft study notes that although there currently are just over 42,000 high schools in the United States, only 2,100 of them were certified to teach the AP computer science course in 2011.

A HEAD START

If more people in Russia than in America decide to take the computer science exam in secondary school, it may be because Russian students are required to study the subject beginning at a much younger age. Russia’s Federal Educational Standards (FES) mandate that informatics be compulsory in middle school, with any school free to choose to include it in their high school curriculum at a basic or advanced level.

“In elementary school, elements of Informatics are taught within the core subjects ‘Mathematics’ and ‘Technology,” the Perm University research paper notes. “Furthermore, each elementary school has the right to make [the] subject “Informatics” part of its curriculum.”

The core components of the FES informatics curriculum for Russian middle schools are the following:

1. Theoretical foundations
2. Principles of computer’s functioning
3. Information technologies
4. Network technologies
5. Algorithmization
6. Languages and methods of programming
7. Modeling
8. Informatics and Society

SECONDARY SCHOOL

There also are stark differences in how computer science/informatics is taught in the two countries, as well as the level of mastery that exam-takers are expected to demonstrate in their respective exams.

Again, drawing from the Perm study on the objectives in Russia’s informatics exam, here’s a rundown of what that exam seeks to test:

Block 1: “Mathematical foundations of Informatics”,
Block 2: “Algorithmization and programming”, and
Block 3: “Information and computer technology.”

The testing materials consist of three parts.

Part 1 is a multiple-choice test with four given options, and it covers all the blocks. Relatively little time is set aside to complete this part.

Part 2 contains a set of tasks of basic, intermediate and advanced levels of complexity. These require brief answers such as a number or a sequence of characteristics.

Part 3 contains a set of tasks of an even higher level of complexity than advanced. These tasks usually involve writing a detailed answer in free form.

According to the Perm study, “in 2012, part 1 contained 13 tasks; Part 2, 15 tasks; and Part 3, 4 tasks. The examination covers the key topics from the Informatics school syllabus. The tasks with detailed answers are the most labor intensive. These include tasks on the analysis of algorithms, drawing up computer programs, among other types. The answers are checked by the experts of regional examination boards based on standard assessment criteria.”

Image: Perm State National Research University, Russia.

In the U.S., the content of the AP computer science exam is spelled out in this College Board document (PDF).

US Test Content Areas:

Computational Thinking Practices (P)

P1: Connecting Computing
P2: Creating Computational Artifacts
P3: Abstracting
P4: Analyzing Problems and Artifacts
P5: Communicating
P6: Collaborating

The Concept Outline:

Big Idea 1: Creativity
Big Idea 2: Abstraction
Big Idea 3: Data and Information
Big Idea 4: Algorithms
Big Idea 5: Programming
Big Idea 6: The Internet
Big Idea 7: Global Impact

ADMIRING THE PROBLEM

How do these two tests compare? Alan Paller, director of research for the SANS Institute — an information security education and training organization — says topics 2, 3, 4 and 6 in the Russian informatics curriculum above are the “basics” on which cybersecurity skills can be built, and they are present beginning in middle school for all Russian students.

“Very few middle schools teach this in the United States,” Paller said. “We don’t teach these topics in general and we definitely don’t test them. The Russians do and they’ve been doing this for the past 30 years. Which country will produce the most skilled cybersecurity people?”

Paller said the Russian curriculum virtually ensures kids have far more hands-on experience with computer programming and problem solving. For example, in the American AP test no programming language is specified and the learning objectives are:

“How are programs developed to help people and organizations?”
“How are programs used for creative expression?”
“How do computer programs implement algorithms?”
“How does abstraction make the development of computer programs possible?”
“How do people develop and test computer programs?”
“Which mathematical and logical concepts are fundamental to programming?”

“Notice there is almost no need to learn to program — I think they have to write one program (in collaboration with other students),” Paller wrote in an email to KrebsOnSecurity. “It’s like they’re teaching kids to admire it without learning to do it. The main reason that cyber education fails is that much of the time the students come out of school with almost no usable skills.”

THE WAY FORWARD

On the bright side, there are signs that computer science is becoming a more popular focus for U.S. high school students. According to the latest AP Test report (PDF) from the College Board, almost 58,000 Americans took the AP exam in computer science last year — up from 49,000 in 2015.

However, computer science still is far less popular than most other AP test subjects in the United States. More than half a million students opted for the AP English Language exam in 2016; 405,000 took AP English Literature; almost 283,000 took AP Government; and some 159,000 went for an AP test called “Human Geography.”

A breakdown of subject specialization in the 2016 v. 2015 AP tests in the United States. Source: The College Board.

This is not particularly good news given the dearth of qualified cybersecurity professionals available to employers. ISACA, a non-profit information security advocacy group, estimates there will be a global shortage of two million cybersecurity professionals by 2019. A report from Frost & Sullivan and (ISC)2 prognosticates there will be more than 1.5 million cybersecurity jobs unfilled by 2020.

The IT recruitment problem is especially acute for companies in the United States. Unable to find enough qualified cybersecurity professionals to hire here in the U.S., companies increasingly are counting on hiring foreigners who have the skills they’re seeking. However, the Trump administration in April ordered a full review of the country’s high-skilled immigration visa program, a step that many believe could produce new rules to clamp down on companies that hire foreigners instead of Americans.

Some of Silicon Valley’s biggest players are urging policymakers to adopt a more forward-looking strategy to solving the skills gap crisis domestically. In its National Talent Strategy report (PDF), Microsoft said it spends 83 percent of its worldwide R&D budget in the United States.

“But companies across our industry cannot continue to focus R&D jobs in this country if we cannot fill them here,” reads the Microsoft report. “Unless the situation changes, there is a growing probability that unfilled jobs will migrate over time to countries that graduate larger numbers of individuals with the STEM backgrounds that the global economy so clearly needs.”

Microsoft is urging U.S. policymakers to adopt a nationwide program to strengthen K-12 STEM education by recruiting and training more teachers to teach it. The software giant also says states should be given more funding to broaden access to computer science in high school, and that computer science learning needs to start much earlier for U.S. students.

“In the short-term this represents an unrealized opportunity for American job growth,” Microsoft warned. “In the longer term this may spur the development of economic competition in a field that the United States pioneered.”

CryptogramNSA Insider Security Post-Snowden

According to a recently declassified report obtained under FOIA, the NSA's attempts to protect itself against insider attacks aren't going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department's inspector general completed in 2016.

[...]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of "privileged" users, who have greater power to access the N.S.A.'s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative -- called "Secure the Net" by the N.S.A. -- had some successes, it "did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data."

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 "Secure the Net" initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA's computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD's inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), "NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users." The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, "NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach."

There seem to be two possible explanations for the fact that the NSA couldn't track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA's crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

Worse Than FailureI Need More Space

Shawn W. was a newbie support tech at a small company. Just as he was beginning to familiarize himself with its operational quirks, he got a call from Jim: The Big Boss.

Dread seized Shawn. Aside from a handshake on Interview Day, the only "interaction" he'd had with Jim thus far was overhearing him tear into a different support rep about having to deal with "complicated computer crap" like changing passwords. No doubt, this call was bound to be a clinic in saintly patience.

"Tech Support," Shawn greeted. "How may—?"

"I'm out of space and I need more!" Jim barked over the line.

"Oh." Shawn mentally geared up for a memory or hard drive problem. "Did you get a warning or error mes—?"

"Just get up here and bring some more space with you!" Jim hung up.

"Oh, boy," Shawn muttered to himself.

Deciding that he was better off diagnosing the problem firsthand, Shawn trudged upstairs to Jim's office. To his pleasant surprise, he found it empty. He sank into the cushy executive-level chair. Jim hadn't been away long enough for any screensavers or lock screens to pop up, so Shawn had free rein to examine the machine.

There wasn't much to find. The only program running was a web browser, with a couple of tabs open to ESPN.com and an investment portfolio. The hardware itself was fairly new. CPU, memory, hard drive all looked fine.

"See, I'm out of space. Did you bring me more?"

Shawn glanced up to find Jim barreling toward him, steaming mug of coffee in hand. He braced himself as though facing down an oncoming freight train. "I'm not sure I see the problem yet. Can you show me what you were doing when you noticed you needed more space?"

Jim elbowed his way over to the mouse, closed the browser, then pointed to the monitor. "There! Can't you see I'm out of space?"

Indeed, Jim's desktop was full. So many shortcuts, documents, widgets, and other icons crowded the screen that the tropical desktop background was barely recognizable as such.

While staring at what resembled the aftermath of a Category 5 hurricane, Shawn debated his response. "OK, I see what you mean. Let's see if we can—"

"Can't you just get me more screen?" Jim pressed.

More screen? "You mean another monitor?" Shawn asked. "Well, yes, I could add a second monitor if you want one, but we could also organize your desktop a little and—"

"Good, get me one of those! Don't touch my icons!" Jim shooed Shawn away like so much lint. "Get out of my chair so I can get back to work."

A short while later, Shawn hooked up a second monitor to Jim's computer. This prompted a huge and unexpected grin from the boss. "I like you, you get things done. Those other guys would've taken a week to get me more space!"

Shawn nodded while stifling a snort. "Let me know if you need anything else."

Once Jim had left for the day, Shawn swung past the boss' office out of morbid curiosity. Jim had already scattered a few dozen shortcuts across his new real estate. Another lovely vacation destination was about to endure a serious littering problem.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

CryptogramCeramic Knife Used in Israel Stabbing

I have no comment on the politics of this stabbing attack, and only note that the attacker used a ceramic knife -- one that will go through metal detectors.

I have used a ceramic knife in the kitchen. It's sharp.

EDITED TO ADD (6/22): It looks like the knife had nothing to do with the attack discussed in the article.

Don MartiStuff I'm thankful for

I'm thankful that the sewing machine was invented a long time ago, not today. If the sewing machine were invented today, most sewing tutorials would be twice as long, because all the thread would come in proprietary cartridges, and you would usually have to hack the cartridge to get the type of thread you need in a cartridge that works with your machine.

Don Marti1. Write open source. 2. ??? 3. PROFIT

Studies keep showing that open source developers get paid more than people who develop software but do not contribute to open source.

Good recent piece: Tabs, spaces and your salary - how is it really? by Evelina Gabasova.

But why?

Is open source participation a way to signal that you have skills and are capable of cooperation with others?

Is open source a way to build connections and social capital so that you have more awareness of new job openings and can more easily move to a higher-paid position?

Does open source participation just increase your skills so that you do better work and get paid more for it?

Are open source codebases a complementary good to open source maintenance programming, so that a lower price for access to the codebase tends to drive up the price for maintenance programming labor?

Is "we hire open source people" just an excuse for bias, since the open source scene at least in the USA is less diverse than the general pool of programming job applicants?

,

CryptogramIs Continuing to Patch Windows XP a Mistake?

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it's not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing -- stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP's support life cycle for another few years and waited for attrition to shrink Windows XP's userbase to irrelevant levels. Or it could have claimed that this case is somehow "special," releasing a patch while still claiming that Windows XP isn't supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft's assertion that Windows XP isn't supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It's hard to say how that was possibly worth it.

This is a hard trade-off, and it's going to get much worse with the Internet of Things. Here's me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

Sociological ImagesOn “voluntary conformism,” or how we use our freedom to fit in

Originally posted at Montclair Socioblog.

“Freedom of opinion does not exist in America,” said de Tocqueville nearly two centuries ago. He might have held the same view today.

But how could a society that so values freedom and individualism be so demanding of conformity?  I had blogged about this in 2010 with references to old sitcoms, but for my class this semester I needed something more recent. Besides, Cosby now carries too much other baggage. ABC’s “black-ish”* came to the rescue.

The idea I was offering in class was, first, that our most cherished American values can conflict with one another. For example, our desire for family-like community can clash with our value on independence and freedom. Second, the American solution to this conflict between individual and group is often what Claude Fischer calls “voluntarism.” We have freedom – you can voluntarily choose which groups to belong to. But once you choose to be a member, you have to conform. The book I had assigned my class (My Freshman Year by Rebekah Nathan) uses the phrase “voluntary conformism.”

In a recent episode of “black-ish,” the oldest daughter, Zoey, must choose which college to go to. She has been accepted at NYU, Miami, Vanderbilt, and Southern Cal. She leans heavily towards NYU, but her family, especially her father Dre, want her to stay close to home. The conflict is between Family – family togetherness, community – and Independence. If Zoey goes to NYU, she’ll be off on her own; if she stays in LA, she’ll be just a short drive from her family. New York also suggests values on Achievement, Success, even Risk-taking (“If I can make it there” etc.)

Zoey decides on NYU, and her father immediately tries to undermine that choice, reminding her of how cold and dangerous it will be. It’s typical sitcom-dad buffoonery, and his childishness tips us off that this position, imposing his will, is the wrong one. Zoey, acting more mature, simply goes out and buys a bright red winter coat.

The argument for Independence, Individual Choice, and Success is most clearly expressed by Pops (Dre’s father, who lives with them), and it’s the turning point in the show. Dre and his wife are complaining about the kids growing up too fast. Pops says, “Isn’t this what you wanted? Isn’t this why you both worked so hard — movin’ to this White-ass neighborhood, sendin’ her to that White-ass school so she could have all these White-ass opportunities? Let. Her. Go.”

That should be the end of it. The final scene should be the family bidding a tearful goodbye to Zoey at LAX. But a few moments later, we see Zoey talking to her two younger siblings (8-year-old twins, Jack and Diane). They remind her of how much family fun they have at holidays. Zoey has to tell them that New York is far, so she won’t be coming back till Christmas – no Thanksgiving, no Halloween.

Jack reminds her about the baby that will arrive soon. “He won’t even know you.”

In the next scene, Zoey walks into her parents room carrying the red winter coat. “I need to return this.”

“Wrong size?” asks her father.

“Wrong state.”

She’s going to stay in LA and go to USC.

Over a half-century ago, David McClelland wrote that a basic but unstated tenet of American culture is: “I want to freely choose to do what others expect me to do.” Zoey has chosen to do what others want her to do – but she has made that individual choice independently. It’s “voluntary conformism,” and it’s the perfect American solution (or at least the perfect American sitcom solution).

* For those totally unfamiliar with the show, the premise is this: Dre Johnson, a Black man who grew up in a working-class Black neighborhood of LA, has become a well-off advertising man, married a doctor (her name is Rainbow, or usually Bow), and moved to a big house in an upscale neighborhood. They have four children, and the wife is pregnant with a fifth.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramThe Dangers of Secret Law

Last week, the Department of Justice released 18 new FISC opinions related to Section 702 as part of an EFF FOIA lawsuit. (Of course, they don't mention EFF or the lawsuit. They make it sound as if it was their idea.)

There's probably a lot in these opinions. In one Kafkaesque ruling, a defendant was denied access to the previous court rulings that were used by the court to decide against it:

...in 2014, the Foreign Intelligence Surveillance Court (FISC) rejected a service provider's request to obtain other FISC opinions that government attorneys had cited and relied on in court filings seeking to compel the provider's cooperation.

[...]

The provider's request came up amid legal briefing by both it and the DOJ concerning its challenge to a 702 order. After the DOJ cited two earlier FISC opinions that were not public at the time -- one from 2014 and another from 2008 -- the provider asked the court for access to those rulings.

The provider argued that without being able to review the previous FISC rulings, it could not fully understand the court's earlier decisions, much less effectively respond to DOJ's argument. The provider also argued that because attorneys with Top Secret security clearances represented it, they could review the rulings without posing a risk to national security.

The court disagreed in several respects. It found that the court's rules and Section 702 prohibited the documents' release. It also rejected the provider's claim that the Constitution's Due Process Clause entitled it to the documents.

This kind of government secrecy is toxic to democracy. National security is important, but we will not survive if we become a country of secret court orders based on secret interpretations of secret law.

Worse Than FailureCodeSOD: A Lazy Cat

The innermost circle of Hell, as we all know, is trying to resolve printer driver issues for all eternity. Ben doesn’t work with the printers that we mere mortals deal with on a regular basis, though. He runs a printing press, three stories of spinning steel and plates and ink and rolls of paper that could crush a man.

Like most things, the press runs Linux: a highly customized, modified version of Linux. It’s a system that needs to be carefully configured, as “disaster recovery” has a slightly different meaning on this kind of heavy equipment. The documentation, while thorough and mostly clear, was obviously prepared by someone who speaks English as a second language. Thus, Ben wanted to check the shell scripts to better understand what they did.

The first thing he caught was that each script started with variable declarations like this:

GREP="/bin/grep"
CAT="/bin/cat"

In some cases, there were hundreds of such variable declarations because, presumably, someone doesn’t trust the PATH variable.

Now, it’s funny we bring up cat, as a common need in these scripts is to send a file to STDOUT. You’d think that cat is just the tool for the job, but you’d be mistaken. You need a shell function called cat_file:

# function send an file to STDOUT
#
# Usage: cat_file <Filename>
#

function cat_file ()
{
        local temp
        local error
        error=0
        if [ $# -ne 1 ]; then
                temp=""
                error=1
        else
                if [ -e ${1} ]; then
                        temp="`${CAT} ${1}`"
                else
                        temp=""
                        error=1
                fi
        fi
        echo "${temp}"
        return $((error))
}

This ‘belt and suspenders’ around cat ensures that you called it with parameters, that the parameters exist, and failing that, it… well… fails. Much like cat would, naturally. This gives you the great advantage, however, that instead of writing code like this:

dev="`cat /proc/net/dev | grep eth`"

You can instead write code like this:

dev="`cat_file /proc/net/dev | ${GREP} eth`"

Much better, yes?
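
For contrast, here is a minimal sketch of the conventional approach, on the assumption that the goal really is just “send a file to STDOUT.” The only case plain cat doesn’t cover is the missing argument (it would fall back to reading stdin), and that check is a one-liner; a nonexistent file already produces a non-zero exit status and a message on stderr.

# Hypothetical replacement for cat_file, minus the empty-echo indirection:
function print_file ()
{
        [ $# -eq 1 ] || return 1        # plain cat would read stdin here
        cat "$1"                        # exits non-zero itself if the file is missing
}

And in a pipeline there is no need for cat at all, since grep reads files directly:

dev="`grep eth /proc/net/dev`"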

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

Don MartiCatching up to Safari?

Earlier this month, Apple Safari pulled ahead of other mainstream browsers in tracking protection. Tracking protection in the browser is no longer a question of should the browser do it, but which browser best protects its users. But Apple's early lead doesn't mean that another browser can't catch up.

Tracking protection is still hard. You have to provide good protection from third-party tracking, which users generally don't want, without breaking legit third-party services such as content delivery networks, single sign-on systems, and shopping carts. Protection is a balance, similar to the problem of filtering spam while delivering legit mail. Just as spam filtering helps enable legit email marketing, tracking protection tends to enable legit advertising that supports journalism and cultural works.

In the long run, just as we have seen with spam filters, it will be more important to make protection hard to predict than to run the perfect protection out of the box. A spam filter, or browser, that always does the same thing will be analyzed and worked around. A mail service that changes policies to respond to current spam runs, or an unpredictable ecosystem of tracking protection add-ons that browser users can install in unpredictable combinations, is likely to be harder.

But most users aren't in the habit of installing add-ons, so browsers will probably have to give them a nudge, like Microsoft Windows does when it nags the user to pick an antivirus package (or did, last time I checked). So the decentralized way to catch up to Apple could end up being something like:

  • When new tracking protection methods show up in the privacy literature, quietly build the needed browser add-on APIs to make it possible for new add-ons to implement them.

  • Do user research to guide the content and timing of nudges. (Some atypical users prefer to be tracked, and should be offered a chance to silence the warnings by affirmatively choosing a do-nothing protection option.)

  • Help users share information about the pros and cons of different tools. If a tool saves lots of bandwidth and battery life but breaks some site's comment form, help the user make the right choice.

  • Sponsor innovation challenges to incentivize development, testing, and promotion of diverse tracking protection tools.

Any surveillance marketer can install and test a copy of Safari, but working around an explosion of tracking protection tools would be harder. How to set priorities when they don't know which tools will get popular?

What about adfraud?

Tracking protection strategies have to take adfraud into account. Marketers have two choices for how to deal with adfraud:

  • flight to quality

  • extra surveillance

Flight to quality is better in the long run. But it's a problem from the point of view of adtech intermediaries because it moves more ad money to high-reputation sites, and the whole point of adtech is to reach big-money eyeballs on cheap sites. Adtech firms would rather see surveillance-heavy responses to adfraud. One way to help shift marketing budgets away from surveillance, and toward flight to quality, is to make the returns on surveillance investments less predictable.

This is possible to do without making value judgments about certain kinds of sites. If you like a site enough to let it see your personal info, you should be able to do it, even if in my humble opinion it's a crappy site. But you can have this option without extending to all crappy sites the confidence that they'll be able to live on leaked data from unaware users.

,

TEDListen in on couples therapy with Esther Perel, Tabby’s star dims again, and more

Behold, your recap of TED-related news:

The truth about couples. Ever wonder what goes on in couples therapy? You may want to tune in to Esther Perel’s new podcast “Where Should We Begin?” Each episode invites the listener to sit in on a real session with a real couple working out real issues, from a Christian couple bored with their sex life to a couple dealing with the aftermath of an affair, learning how to cope and communicate, express and excite. Perel hopes her audience will walk away with a sense of “truth” surrounding relationships — and maybe take away something for their own relationships. As she says: “You very quickly realize that you are standing in front of the mirror, and that the people that you are listening to are going to give you the words and the language for the conversations you want to have.” The first four episodes of “Where Should We Begin?” are available on Audible, with new episodes added every Friday. (Watch Perel’s TED Talk)

Three TEDsters join the Media Lab. MIT’s Media Lab has chosen its Director’s Fellows for 2017, inviting nine extraordinary people to spend two years working with each other, MIT faculty and students to move their work forward. Two of the new Fellows are TED speakers — Adam Foss and Jamila Raqib — and a third is a TED Fellow, activist Esra’a Al Shafei. In a press release, Media Lab Director (and fellow TED speaker) Joi Ito said the new crop of fellows “aligns with our mission to create a better future for all,” with an emphasis on “civic engagement, social change, education, and creative disruption.” (Watch Foss’ TED Talk and Raqib’s TED Talk)

The mystery of KIC 8462852 deepens. Tabby’s Star, notorious for “dipping,” is making headlines again with a dimming event that started in May. Astronomer Tabetha Boyajian, the star’s namesake, has been trying to crack the mystery since the flickering was noticed in 2011. The dimming is erratic—the star sometimes loses up to 20 percent of its brightness—and has prompted a variety of potential explanations. Some say it’s space debris, others say it’s asteroids. Many blame aliens. Nobody knows for sure, still, but you can follow Boyajian on Twitter for updates. (Watch Boyajian’s TED talk)

AI: friend or foe? The big fear with AI is that humanity will be replaced or overrun, but Nicholas Christakis has been entertaining an alternative view: how can AI complement human beings? In a new study conducted at Yale, Christakis experimented with human and AI interaction. Subjects worked with anonymous AI bots in a collaborative color-coordination game, and the bots were programmed with varying behavioral randomness — in other words, they made mistakes. Christakis’ findings showed that even when paired with error-prone AI, human performance still improved. Groups solved problems 55.6% faster when paired with bots—particularly when faced with difficult problems. “The bots can help humans to help themselves,” Christakis said. (Watch Christakis’ TED Talk)

A bundle of news from TED architects. Alejandro Aravena’s Chile-based design team, Elemental, won the competition to design the Art Mill, a new museum in Doha, Qatar. The museum site is now occupied by Qatar Flour Mills, and Elemental’s design pays homage to the large grain silos it will replace. Meanwhile, The Shed, a new building in New York City designed by Liz Diller and David Rockwell, recently underwent testing. The building is designed in two parts: an eight-level tower and a teflon-based outer shell that can slide on runners over an adjacent plaza. The large shell moves using a gantry crane, and only requires the horsepower of a Toyota Prius.  When covering the plaza, the shell can house art exhibits and performances. Finally, Joshua Prince-Ramus’ architecture firm, REX, got the nod to design a new performing arts center at Brown University. It will feature a large performance hall, smaller rehearsal spaces, and rooms for practice and instruction—as well as a lobby and cafe. (Watch Aravena’s TED Talk, Diller’s TED Talk, Rockwell’s TED Talk, and Prince-Ramus’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this round-up.


TEDA noninvasive method for deep brain stimulation, a new class of Emerging Explorers, and much more

As usual, the TED community has lots of news to share this week. Below, some highlights.

Surface-level brain stimulation. The delivery of an electric current to the part of the brain involved in movement control, known as deep brain stimulation, is sometimes used to treat people with Parkinson’s disease, depression, epilepsy and obsessive compulsive disorder. However, the process isn’t risk-free — and there are few people who possess the skill set to open a skull and implant electrodes in the brain. A new study, of which MIT’s Ed Boyden was the senior author, has found a noninvasive method: placing electrodes on the scalp rather than in the skull. This may make deep brain stimulation available to more patients and allow the technique to be more easily adapted to treat other disorders. (Watch Boyden’s TED Talk)

Rooms for refugees. Airbnb unveiled a new platform, Welcome, which provides housing to refugees and evacuees free of charge. Using its extensive network, Airbnb is partnering with global and local organizations that will have access to Welcome in order to pair refugees with available lodging. The company aims to provide temporary housing for 100,000 displaced persons over the next five years. Airbnb co-founder, Joe Gebbia, urges anybody with a spare room to “play a small role in tackling this global challenge”; so far, 6,000 people have answered his call. (Watch Gebbia’s TED Talk)

A TEDster joins The Shed. Kevin Slavin has been named Chief Science and Technology Officer of The Shed. Set to open in 2019, The Shed is a uniquely-designed space in New York City that will bring together leading thinkers in the arts, the humanities and the sciences to create innovative art. Slavin’s multidisciplinary—or, as he puts it, anti-disciplinary—mindset seems a perfect fit for The Shed’s mission of “experimentation, innovation, and collaboration.” Slavin, who was behind the popular game Drop 7, has run a research lab at MIT’s Media Lab, and has showcased his work in MoMA, among other museums. The Shed was designed by TEDsters Liz Diller and David Rockwell. (Watch Slavin’s TED Talk, Diller’s TED Talk and Rockwell’s TED Talk)

Playing with politics. Designing a video game to feel as close to real life as possible often means intricate graphics and astutely crafted scripts. For game development studio Klang, it also means replicating politics. That’s why Klang has brought on Lawrence Lessig to build the political framework for their new game, Seed. Described as “a boundless journey for human survival, fuelled by discovery, collaboration and genuine emotion,” Seed is a vast multiplayer game whose simulation continues even after a player has logged off. Players are promised “endless exploration of a living, breathing exoplanet” and can traverse this new planet forming colonies, developing relationships, and collaborating with other players. Thanks to Lessig, they can also choose their form of government and appointed officials. While the game will not center on politics, Lessig’s contributions will help the game evolve to more realistically resemble real life. (Watch Lessig’s TED Talk)

A new class of explorers. National Geographic has announced this year’s Emerging Explorers. TED Speaker Anand Varma and TED Fellows Keolu Fox and Danielle N. Lee are among them. Varma is a photographer who uses the medium to turn science into stories, as he did in his TED talk about threats faced by bees. Fox’s work connects the human genome to disease; he advocates for more diversity in the field of genetics. He believes that indigenous peoples should be included in genome sequencing not only for the sake of social justice, but for science. Studying Inuit genetics, for example, may provide insight into how they keep a traditionally fat-rich diet but have low rates of heart disease. Danielle N. Lee studies practical applications for rodents—like the African giant pouched rats trained to locate landmines. The rats are highly trainable and low-maintenance, and Lee’s research aims to tap into this unlikely resource. (Watch Varma’s TED Talk, Fox’s TED Talk and Lee’s TED Talk)

Collaborative fellowship awarded to former head of DARPA. Joining the ranks of past fellows Ruth Bader Ginsburg, Deborah Tannen and Amos Tversky is Arati Prabhakar, who has been selected for the 2017-18 fellowship at Stanford’s Center for Advanced Study in the Behavioral Sciences (CASBS). While Prabhakar’s field of expertise is in electrical engineering and applied physics, she is one of 37 fellows of various backgrounds ranging from architecture to law, and religion to statistics, to join the program. CASBS seeks to solve societal problems through interdisciplinary collaborative projects and research. At the heart of this mission is their fellowship program, says associate director Sally Schroeder. “Fellows represent all that is great about this place. It’s imperative that we continue to attract the highest quality, innovative thinkers, and we’re confident we’ve reached that standard of excellence once again with the 2017-18 class.” (Watch Prabhakar’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Worse Than FailureThe CMS From Hell

Contracting can be really hit or miss. Sometimes, you're given a desk and equipment and treated just like an employee, except better paid and exempt from team-building exercises. Sometimes, however, you're isolated in your home office, never speaking to anyone, working on tedious, boring crap they can't convince their normal staff to do.

Eric was contracted to perform basic website updating tasks for a government agency. Most of the work consisted of receiving documents, uploading them to the server, and adding them to a page. There were 4 document categories, each one organized by year. Dull as dishwater, but easy.

The site was hosted by a third party in a shared hosting environment. It ran on a CMS produced by another party. WTFCMS was used in many high-profile sites, so the agency figured it had to be good. Eric was given login credentials and—in the way of techies given boring tasks everywhere—immediately began automating the task at hand.

Step 1 of this automation was to get a list of articles with their IDs. Eric was pleased to discover that the browser-based interface for the CMS used a JSON request to get the list of pages. With the help of good old jq, he soon had that running in a BASH shell script. To get the list of children for an article, he passed the article's ID to the getChildren endpoint.
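
To make the setup concrete, here is a minimal sketch of what that first step might have looked like. The base URL, cookie handling, and JSON field names are guesses for illustration; the story only names the getChildren endpoint, not the rest of the WTFCMS API.

#!/bin/bash
# Hypothetical reconstruction: print "id<TAB>title" for the children
# of a given article, using the CMS's JSON endpoint and jq.
BASE="https://cms.example.gov/api"

function list_children ()
{
        curl -s -b cookies.txt "${BASE}/getChildren?id=$1" |
                jq -r '.[] | "\(.id)\t\(.title)"'
}

list_children 42        # list the children of article 42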

Usually, in a hierarchy like this, there's some magic number that means "root element." Eric tried sending a series of likely candidates, like 0, -1, MAX_INT, and MIN_INT. It turned out to be -1 ... but he also got a valid list when he passed in 0.

Curious, he thought to himself. This appears to be a list of articles ... and hey, here's the ones I got for this site. These others ...? No way.

Sure enough, passing in a parent ID of 0 had gotten Eric some sort of super-root: every article across every site in the entire CMS system. Vulnerability number 1.

Step 2 was to take the ID list and get the article data so he could associate the new file with it. This wasn't nearly as simple. There was no good way to get the text of the article from the JSON interface; the CMS populated the articles server-side.

Eric was in too deep to stop now, though. He wrote a scraper for the edit page, using an XML parser to handle the HTML form that held the article text. Once he had the text, he compared it by hand to the POST request sent from his Firefox instance to ensure he had the right data.

And he did ... mostly. Turns out, the form was manipulated by on-page Javascript before being submitted: fields were disabled or enabled, date/time formats were tweaked, and the like. Eric threw together some more scripting to get the job done, but now he wasn't sure if he would hit an edge case or somehow break the database if he ran it. Still, he soldiered on.

Step 3 was to upload the files so they could be linked to the article. With Firebug open, Eric went about adding an upload.

Now, WTFCMS seemed to offer the usual flow: enter a name, select a file, and click Upload to both upload the file and save it as the given name. When he got to step 2, however, the file was uploaded immediately—but he still had to click the Upload button to "save" it.

What happens if I click Cancel? Eric wondered. No, never mind, I don't want to know. What does the POST look like?

It was a mess of garbage. Eric was able to find the file he uploaded, and the name he'd given it ... and also a bunch of server-side information the user shouldn't be privy to, let alone be able to tamper with. Things like, say, the directory on the server where the file should be saved. Vulnerability number 2.

The response to the POST contained, unexpectedly, HTML. That HTML contained an iframe. The iframe contained an iframe. iframe2 contained iframe3; iframe3 contained a form. In that form were two fields: a submit button, reading "Upload", and a hidden form field containing the path of the uploaded file. In theory, he could change that path to point at anything on the server. Now he had both read and write access to any arbitrary destination in the CMS, maybe even on the server itself. Vulnerability number 3.

It was at this point that Eric gave up on his script altogether. This is the kind of task that Selenium IDE is perfect for. He just kept his head down, hoped that the server had some kind of validation to prevent curious techies like himself from actually exploiting any of these potential vulnerabilities, and served out the rest of his contract.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

LongNowThe Industrial Sublime: Edward Burtynsky Takes the Long View

“Oil Bunkering #1, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

The New Yorker recently profiled photographer, former SALT speaker, and 02016 sponsor of the Conversations at the Interval livestream Edward Burtynsky and his quest to document a changing planet in the anthropocene age.

“What I am interested in is how to describe large-scale human systems that impress themselves upon the land,” Burtynsky told New Yorker staff writer Raffi Khatchadourian as they surveyed the decimated, oil-covered landscapes of Lagos, Nigeria from a helicopter.

“Saw Mills #1, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

For over three decades, Edward Burtynsky has been taking large-format photographs of industrial landscapes which include mining locations around the globe and the building of Three Gorges Dam in China. His work has been noted for beautiful images which are often at odds with their subject’s negative environmental impacts.

Photograph by Benedicte Kurzen / Noor for The New Yorker

“This is the sublime of our time,” said Burtynsky in his 02008 SALT Talk, which included a formal proposal for a permanent art gallery in the chamber that encloses the 10,000-year Clock, as well as the results of his research into methods of capturing images that might have the best chance to survive in the long-term.

“Oil Bunkering #4, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

As Khatchadourian notes, Burtynsky’s approach has at times attracted controversy:

Over the years, greater skepticism has been voiced about […] Burtynsky’s inclination to depict toxic landscapes in visually arresting terms. A critic responding to “Oil” wondered whether the fusing of beauty with monumentalism, of extreme photographic detachment with extreme ecological damage, could trigger only apathy as a response. [Curator] Paul Roth had a different view: “Maybe these people are a bit immune to the sublime—being terribly anxious while also being attracted to the beauty of an image.”

“Oil Bunkering #2, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

Burtynsky does not seek to be heavy-handed or pedantic in his work, but neither does he seek to be amoral. The environmental and human rights issues are directly shown, rather than explicitly proclaimed.

“Oil Bunkering #5, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

In recent years Burtynsky’s work has focused on water, including oil spills around the world, like the ones he was documenting in Lagos, a city he calls a “hyper crucible of globalism.”

As the global consequences of human activity have become unmistakably pressing, Burtynsky has connected his photography more directly with environmentalism. “There has been a discussion for a long time about climate change, but we don’t seem to be ceasing anything,” he says. “That has begun to bring a sense of urgency to me.”

Burtynsky is currently working on the film Anthropocene, which documents unprecedented human impact on the natural world.

Read The New Yorker profile of Burtynsky in full.

,

Sociological ImagesAre Millennials having less sex? Or more? And what’s coming next?

Analyses of General Social Survey data, a well-designed and respected source of data about American life, show that members of the Millennial generation are acquiring about the same number of sexual partners as the Baby Boomers. This data suggests that the big generational leap was between the Boomers and the generation before them, not the Boomers and everyone that came after. And rising behavioral permissiveness definitely didn’t start with the Millennials. Sexually speaking, Millennials look a lot like their parents at the same age and are perhaps even less sexually active than Generation X.

Is it true?

It doesn’t seem like it should be true. In terms of attitudes, American society is much more sexually permissive than it was for Boomers, and Millennials are especially more permissive. Boomers had to personally take America through the sexual revolution at a time when sexual permissiveness was still radical, while Generation X had to contend with a previously unknown fatal sexually transmitted pandemic. In comparison, the Millennials have it so easy. Why aren’t they having sex with more people?

A new study using data from the National Survey of Family Growth (NSFG) (hat tip Paula England) contrasts with previous studies and reports an increase. It finds that nine out of ten Millennial women had non-marital sex by the time they were 25 years old, compared to eight out of ten Baby Boomers. And, among those, Millennials reported two additional total sexual partners (6.5 vs. 4.6).

Nonmarital Sex by Age 25, Paul Hemez

Are Millennials acquiring more sexual partners after all?

I’m not sure. The NSFG report used “early” Millennials (only ones born between 1981 and 1990). In a not-yet-released book, the psychologist Jean Twenge uses another survey — the Youth Risk Behavior Surveillance System — to argue that the next generation (born between 1995 and 2002), which she calls the “iGen,” are even less likely to be sexually active than Millennial. According to her analysis, 37% of 9th graders in 1995 (born in 1981, arguably the first Millennial year) had lost their virginity, compared to 34% in 2005, and 24% in 2015.

Percentage of high school students who have ever had sex, by grade. Youth Risk Behavior Surveillance System, 1991-2015.

iGen, Jean Twenge

If Twenge is right, then we’re seeing a decline in the rate of sexual initiation and possibly partner acquisition that starts somewhere near the transition between Gen X and Millennial, proceeds apace throughout the Millennial years, and is continuing — Twenge argues accelerating — among the iGens. So, if the new NSFG report finds an increase in sexual partners between the Millennials and the Boomers, it might be because they sampled on “early” Millennials, those closer to Gen Xers, on the top side of the decline.

Honestly, I don’t know. It’s interesting though. And it’s curious why the big changes in sexually permissive attitudes haven’t translated into equally sexually permissive behaviors, or why they have actually accompanied a decrease in sexual behavior. It depends a lot on how you chop up the data, too. Generations, after all, are artificial categories. And variables like “nonmarital sex by age 25” are specific and may get us different findings than other measures. Sociological questions have lots of moving parts and it looks as if we’re still figuring this one out.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

CryptogramNew Technique to Hijack Social Media Accounts

Access Now has documented it being used against a Twitter user, but it also works against other social media accounts:

With the Doubleswitch attack, a hijacker takes control of a victim's account through one of several attack vectors. People who have not enabled an app-based form of multifactor authentication for their accounts are especially vulnerable. For instance, an attacker could trick you into revealing your password through phishing. If you don't have multifactor authentication, you lack a secondary line of defense. Once in control, the hijacker can then send messages and also subtly change your account information, including your username. The original username for your account is now available, allowing the hijacker to register for an account using that original username, while providing different login credentials.

Three news stories.

Worse Than FailureRepresentative Line: Highly Functional

For a brief period of time, say, about 3–4 years ago, if you wanted to sound really smart, you’d bring up “functional programming”. Name-dropping LISP or even better, Haskell during an interview marked you as a cut above the hoi polloi. Even I, surly and too smart for this, fell into the trap of calling JavaScript “LISP with curly braces”, just because it had closures.

Still, functional programming features have percolated through other languages because they work. They’re another tool for the job, and like any tool, when used by the inexpert, someone might lose a finger. Or perhaps someone should lose a finger, if only as a warning to others.

For example, what if you wanted to execute a loop 100 times in JavaScript? You could use a crummy old for loop, but that’s not functional. The functional solution comes from an anonymous submitter:

Array.apply(null, {length: 99}).map(Number.call, Number).forEach(function (element, index) {
// do some more crazy stuff
});

This is actually an amazing abuse of JavaScript’s faculties, and I thought I’d seen the worst depredations one could visit on JavaScript while working with Angular code. When I first read this line, my initial reaction was, “oh, that’s not so bad.” Then I tried to trace through its logic. Then I realized, no, this is actually really bad. Not just extraneous arrays bad, but full-on abuse of JavaScript bad. Like call Language Protective Services bad. This is easier to explain if you look at it backwards.

forEach applies a function to each element in the array, supplying the element and the index of that element.

Number.call invokes the Number function, used to convert things into numbers (shocking, I know), but it allows you to supply the this against which the function is executed. map takes a callback function, and supplies an array item for the currentValue, the index, and the whole array as parameters. map also allows you to specify what this is for the callback itself, which they set to be Number, the function they’re calling.

So, remember, map expects a callback in the form f(currentValue, index, array). We’re supplying a function: call(thisValue, numberToConvert). So, the end result of map in this function is that we’re going to emit an array with each element equal to its own index, which makes the forEach look a bit silly.

Finally, at the front, we call Array.apply, which is mostly the same as Array.call, with a difference in how arguments get passed. This allows the developer to deftly avoid writing new Array(99), which would produce an array of the same length (though a sparse one, whose holes map would skip), but would look offensively object-oriented.
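
For reference, here is a sketch of the same iteration without the contortions. (Note, too, that the submitted snippet’s {length: 99} yields indices 0 through 98, so the opening question’s “100 times” is off by one.)

// The boring, readable version:
for (var i = 0; i < 99; i++) {
    // do some more crazy stuff with i
}

// Or, keeping the functional flavor, ES6's Array.from builds the
// dense [0, 1, ..., 98] index array in a single call:
Array.from({length: 99}, function (_, i) { return i; }).forEach(function (index) {
    // do some more crazy stuff with index
});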

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

,

Planet Linux AustraliaOpenSTEM: HASS Additional Activities

OK, so you’ve got the core work covered for the term and now you have all those reports to write and admin to catch up on. Well, the OpenSTEM™ Understanding Our World® HASS plus Science material has heaps of activities which help students to practise core curricular skills and can keep students occupied. Here are some ideas:

 Aunt Madge’s Suitcase Activity

Aunt Madge is a perennial favourite with students of all ages. In this activity, students use clues to follow Aunt Madge around the world trying to return her forgotten suitcase. There’s a wide range of locations to choose from on every continent – both natural and constructed places. This activity can be tailored for group work or the whole class, and by varying the number of locations to be found, the teacher can fit it to the available time, anywhere from 10-15 minutes to a whole lesson. Younger students enjoy matching the pictures of locations and trying to find the countries on the map. Older students can find out further information about the locations on the information sheets. Teachers can even choose a theme for the locations (such as “Ancient History” or “Aboriginal Places”) and see if students can guess what it is.

 Ancient Sailing Ships Activity

Sailing Ships (History + Science)

Students in Years 3 to 6 have undertaken the Ancient Sailing Ships activity this term, however, there is a vast scope for additional aspects to this activity. Have students compared the performance of square-rigged versus lateen sails? How about varying the number of masts? Have students raced the vessels against each other? (a water trough and a fan is all that’s needed for some exciting races) Teachers can encourage the students to examine the effects of other changes to ship design, such as adding a keel or any other innovations students can come up with, which can be tested. Perhaps classes or grades can even race their ships against each other.

Trade and Barter Activity

Students in years 5 and 6 in particular enjoy the Trade and Barter activity, which teaches them the basics of Economics without them even realising it! This activity covers so many different aspects of the curriculum that it is always a good one to revisit, even though it was not in this term’s units. Students enjoy the challenge and will find the activity different each time. It is a particularly good choice for a large chunk of time, or for smaller groups; perhaps a more experienced group can coach other students. The section of the activity which has students developing their own system of writing is one that lends itself to extension and can even be spun off as a separate activity.

Games from the Past

Kids Playing Tag

Students of all ages enjoy many of the games listed in the resource Games From The Past. Several of these games are best done whilst running around outside, so if that is an option, then choose from the Aboriginal, Chinese or Zulu games. Many of these games can be played by large groups. Older students might like to try recreating some of the rules for some of the games of Ancient Egypt or the Aztecs. If this resource wasn’t part of the resources for your particular unit, it can be downloaded from the OpenSTEM™ site directly.

 

Class Discussions

The b) and c) sections of the Teacher Handbooks contain suggestions for topics of discussion – such as Women Explorers or global citizenship, or ideas for drawings that the students can do. These can also be undertaken as additional activities. Teachers could divide students into groups to research and explore particular aspects of these topics, or stage debates, allowing students to practise persuasive writing skills as well.

OpenSTEM A0 world map: Country Outlines and Ice Age Coastline

Adding events to a timeline, or to the class calendar, is another good way to practise core skills.

The OpenSTEM™ Our World map is the perfect complement to many of the Understanding Our World® units. The map comes blank, and country names are added to it during the activities, so the end of term is a good chance for students to continue adding names. These can be cut out of the resource World Countries, which supplies the names in a suitable font size. Students can use the resource World Maps to match the country names to their locations.

We hope you find these suggestions useful!

Enjoy the winter holidays – not too long now to a nice, cosy break!

Rondam RamblingsTrumpcare and the TPP: Republicans have learned nothing from history

As long as I'm ranting about Republican hypocrisy, I feel I should say a word about the secretive and thoroughly undemocratic process being employed by them to pass the Trumpcare bill.  If history is any guide, this will come back to bite them badly.  But Republicans don't seem to learn from history.  (Neither do Democrats, actually, but they aren't the ones trying to take my health insurance

Rondam RamblingsAnd the Oscar for Most Extreme Hypocrisy by a Republican goes to...

Newt Gingrich! For saying that “the president ‘technically’ can’t even obstruct justice” after leading the charge to impeach Bill Clinton for obstructing justice. Congratulations, Mr. Gingrich! Being the most hypocritical Republican is quite an achievement in this day and age.

Debian Administration Debian Stretch Released

Today the Debian project is pleased to announce the next stable release of Debian GNU/Linux, code-named Stretch.

,

Krebs on SecurityCredit Card Breach at Buckle Stores

The Buckle Inc., a clothier that operates more than 450 stores in 44 U.S. states, disclosed Friday that its retail locations were hit by malicious software designed to steal customer credit card data. The disclosure came hours after KrebsOnSecurity contacted the company regarding reports from sources in the financial sector about a possible breach at the retailer.


On Friday morning, KrebsOnSecurity contacted The Buckle after receiving multiple tips from sources in the financial industry about a pattern of fraud on customer credit and debit cards which suggested a breach of point-of-sale systems at Buckle stores across the country.

Later Friday evening, The Buckle Inc. released a statement saying that point-of-sale malware was indeed found installed on cash registers at Buckle retail stores, and that the company believes the malware was stealing customer credit card data between Oct. 28, 2016 and April 14, 2017. The Buckle said purchases made on its online store were not affected.

As with the recent POS-malware-based breach at Kmart, The Buckle said all of its stores are equipped with EMV-capable card terminals, meaning the point-of-sale machines can accommodate newer, more secure chip-based credit and debit cards. The malware copies account data stored on the card’s magnetic stripe. Armed with that information, thieves can clone the cards and use them to buy high-priced merchandise from electronics stores and big box retailers.
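For readers wondering what the account data on a magnetic stripe actually looks like, here is a minimal sketch in Python of the Track 2 layout defined by ISO/IEC 7813. This is standard background rather than anything from The Buckle’s disclosure; the sample string and the parse_track2 helper are made up for illustration.

    import re

    # Illustrative only: Track 2 (ISO/IEC 7813) is the account data a
    # card's magnetic stripe carries, and what memory-scraping POS
    # malware typically harvests. The sample value below is fabricated.
    SAMPLE_TRACK2 = ";4111111111111111=2512201123456789?"

    def parse_track2(raw):
        """Split a Track 2 string into its standard fields (hypothetical helper)."""
        m = re.fullmatch(r";(\d{1,19})=(\d{2})(\d{2})(\d{3})(\d*)\?", raw)
        if m is None:
            raise ValueError("not a well-formed Track 2 string")
        pan, yy, mm, service_code, discretionary = m.groups()
        return {
            "pan": pan,                      # primary account number
            "expiry": mm + "/20" + yy,       # stored as YYMM on the stripe
            "service_code": service_code,    # 2xx = international, chip preferred
            "discretionary": discretionary,  # issuer data, including CVV1
        }

    print(parse_track2(SAMPLE_TRACK2))

Every field a counterfeiter needs to encode onto a blank stripe sits there in cleartext, which is why a memory-scraping infection at the register is enough to clone cards. A chip transaction, by contrast, authenticates with a cryptogram that changes on every purchase, so data scraped from a chip transaction cannot simply be written onto a counterfeit card.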

The trouble is that not all banks have issued chip-enabled cards, which are far more expensive and difficult for thieves to counterfeit. Customers who shopped at compromised Buckle stores using a chip-based card would not be in danger of having their cards cloned and used elsewhere, but the stolen card data could still be used for e-commerce fraud.

Visa said in March 2017 there were more than 421 million Visa chip cards in the country, representing 58 percent of Visa cards. According to Visa, counterfeit fraud has been declining month over month — down 58 percent at chip-enabled merchants in December 2016 when compared to the previous year.

The United States is the last of the G20 nations to make the shift to chip-based cards. Visa has said it typically took about three years after the liability shifts in other countries before 90% of payment card transactions were “chip-on-chip,” or generated by a chip card used at a chip-based terminal.

Virtually every other country that has made the jump to chip-based cards saw fraud trends shifting from card-present to card-not-present (online, phone) fraud as it became more difficult for thieves to counterfeit physical credit cards. Data collected by consumer credit bureau Experian suggests that e-commerce fraud increased 33 percent last year over 2015.

TED5 TED Radio Hour episodes that explore what it’s like to be human

TED Radio Hour started in 2013, and while I’ve only been working on the show for about a year, it’s one of my favorite parts of my job. We work with an incredibly creative team over at NPR, and helping them weave different ideas into a narrative each week adds a whole new dimension to the talks.

On Friday, the podcast published its 100th episode. The theme is A Better You, and in the hour we explore the many ways we as humans try to improve ourselves. We look at the role of our own minds when it comes to self-improvement, and the tension in play between the internal and the external in this struggle.

New to the show, or looking to dip back into the archive? Below are five of my favorite episodes so far that explore what it means to be human.

The Hero’s Journey

What makes a hero? Why are we so drawn to stories of lone figures, battling against the odds? We talk about space and galaxies far, far away a lot at TED, but in this episode we went one step further and explored how the concept of the Hero’s Journey relates to the Star Wars universe – and the ideas of TED speakers. Dame Ellen MacArthur shares the transformative impact of her solo sailing trip around the world. Jarrett J. Krosoczka pays homage to the surprising figures that formed his path in life. George Takei tells his powerful story of being held in a Japanese-American internment camp during WWII, and how he managed to forgive, and even love, the country that treated him this way. We finish up the hour with Ismael Nazario’s story of spending 300 days in solitary confinement before he was even convicted of a crime, and how this ultimately set him on a journey to help others.

Anthropocene

In this episode, four speakers make the case that we are now living in a new geological age called the Anthropocene, where the main force impacting the earth – is us. Kenneth Lacovara opens the show by taking us on a tour of the earth’s ages so far. Next Emma Marris calls us to connect with nature in a new way so we’ll actually want to protect it. Then, Peter Ward looks at what past extinctions can tell us about the earth – and ourselves. Finally Cary Fowler takes us deep within a vault in Svalbard, where a group of scientists are storing seeds in an attempt to ultimately preserve our species. While the subject could easily be a ‘doom and gloom’ look at the state of our planet, ultimately it left me hopeful and optimistic for our ability to solve some of these monumental problems. If you haven’t yet heard of the Anthropocene, I promise that after this episode you’ll start coming across it everywhere.

The Power of Design

Doing an episode on design seemed like an obvious choice, and we were excited about the challenge of creating an episode about such a visual discipline for radio. We looked at the ways good or bad design affects us, and the ways we can make things more elegant and beautiful. Tony Fadell starts out the episode by bringing us back to basics, calling out the importance of noticing design flaws in the world around us in order to solve problems. Marc Kushner predicts how architectural design is going to be increasingly shaped by public perception and social media. Airbnb co-founder Joe Gebbia takes us inside the design process that helped people establish enough trust to open up their homes to complete strangers. Next we take an insightful design history lesson with Alice Rawsthorn to pay homage to bold and innovative design thinkers of the past, and their impact on the present. We often think of humans as having a monopoly on design, but our final speaker in this episode, Janine Benyus, examines the incredible design lessons we can take from the natural world.

Beyond Tolerance

We throw around the word ‘tolerance’ a lot – especially in the last year as politics has grown even more polarized. But how can we push past mere tolerance to true understanding and empathy? I remember when we first started talking about this episode Guy said he wanted it to be a deep dive into things you wouldn’t talk about at the dinner table, and we did just that: from race, to politics, to abortion, all the way to Israeli-Palestinian relations. Arthur Brooks tackles the question of how liberals and conservatives can work together – and why it’s so crucial. Diversity advocate Vernā Myers gives some powerful advice on how to conquer our unconscious biases. In the fraught and often painful debate around abortion, Aspen Baker emphasizes the need to listen: to be pro-voice, rather than pro-life or pro-choice. Finally Aziz Abu Sarah describes the tours he leads which bring Jews, Muslims and Christians across borders to break bread and forge new cultural ties.

Headspace

What I really love about this episode is that it takes a dense and difficult subject – mental health – and approaches it with this very human optimism, ultimately celebrating the resilience and power of our minds. The show opens up with Andrew Solomon, one of my favorite TED speakers, who shares what he has learned from his battle with depression, including how he forged meaning and identity from his experience with the illness. He has some fascinating and beautiful ideas around mental health and personality, which still resonate so strongly with me. Next, Alix Generous explains some of the misconceptions around Asperger’s Syndrome; she beautifully articulates the gap between her “complex inner life” and how she communicates with the world. David Anderson looks at the biology of emotion and how our brains function, painting a picture of how new research could revolutionize the way we understand and care for our mental health. Our fourth speaker, psychologist Guy Winch, gives some strong takeaways on how we can incorporate caring for our ‘emotional health’ in our daily lives.

Happy listening! To find out more about the show, follow us on Facebook and Twitter.


,

CryptogramFriday Squid Blogging: Squids from Space Video Game

An early preview.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramNSA Links WannaCry to North Korea

There's evidence:

Though the assessment is not conclusive, the preponderance of the evidence points to Pyongyang. It includes the range of computer Internet protocol addresses in China historically used by the RGB, and the assessment is consistent with intelligence gathered recently by other Western spy agencies. It states that the hackers behind WannaCry are also called "the Lazarus Group," a name used by private-sector researchers.

One of the agencies reported that a prototype of WannaCry ransomware was found this spring in a non-Western bank. That data point was a "building block" for the North Korea assessment, the individual said.

Honestly, I don't know what to think. I am skeptical, but I am willing to be convinced. (Here's the grugq, also trying to figure it out.) What I would like to see is the NSA evidence in more detail than they're probably comfortable releasing.

More commentary. Slashdot thread.

Sociological ImagesLonely Hearts: Estranged Fathers on Father’s Day

I work with one of the most heartbroken groups of people in the world: fathers whose adult children want nothing to do with them. While every day has its challenges, Father’s Day—with its parade of families and feel-good ads—makes it especially difficult for these Dads to avoid the feelings of shame, guilt and regret always lurking just beyond the reach of that well-practiced compartmentalization. Like birthdays, and other holidays, Father’s Day creates the wish, hope, or prayer that maybe today, please today, let me hear something, anything from my kid.

Many of these men are not only fathers but grandfathers who were once an intimate part of their grandchildren’s lives. Or, more tragically, they discover they have become grandfathers through a Facebook page, if they haven’t yet been blocked. Or they learn from an unwitting relative bearing excited congratulations, now surprised by the look of grief and shock that greets the newly announced grandfather. Hmm, what did I do with those cigars I put aside for this occasion?

And it’s not just being involved as a grandfather that gets denied. The estrangement may foreclose the opportunity to celebrate other developmental milestones he always assumed he’d attend, such as college graduations, engagement parties, or weddings. Maybe he was invited to the wedding but told he wouldn’t get to walk his daughter down the aisle, because that privilege was being reserved for her father-in-law, who she’s decided is a much better father than he ever was.

Most people assume that a Dad would have to do something pretty terrible to make an adult child not want to have contact. My clinical experience working with estranged parents doesn’t bear this out. While those cases clearly exist, many parents get cut off as a result of the child needing to feel more independent and less enmeshed with the parent or parents. A not insignificant number of estrangements are influenced by a troubled or compelling son-in-law or daughter-in-law. Sometimes a parent’s divorce creates the opportunity for one parent to negatively influence the child against the other parent, or to introduce people who compete for the parent’s love, attention or resources. In a highly individualistic culture such as ours, divorce may cause the child to view a parent more as an individual with relative strengths and weaknesses than as part of a family unit to which the child belongs.

Little binds adult children to their parents today beyond whether or not the adult child wants that relationship. And a not insignificant number decide that they don’t.

While my clinical work hasn’t shown fathers to be more vulnerable to estrangement than mothers, they do seem to be more at risk of a lower level of investment from their adult children. A recent Pew survey found that women more commonly say their grown children turn to them for emotional support while men more commonly say this “hardly ever” or “never” occurs. This same study reported that half of adults say they are closer with their mothers, while only 15 percent say they are closer with their fathers.

So, yes, let’s take a moment to celebrate fathers everywhere. And another to feel empathy for those Dads who won’t have any contact with their child on Father’s Day.

Or any other day.

Josh Coleman is Co-Chair, Council on Contemporary Families, and author most recently of When Parents Hurt. Originally posted at Families as They Really Are.

(View original at https://thesocietypages.org/socimages)