Planet Russell


Cryptogram: New Malware Hijacks Cryptocurrency Mining

This is a clever attack.

After gaining control of the coin-mining software, the malware replaces the wallet address the computer owner uses to collect newly minted currency with an address controlled by the attacker. From then on, the attacker receives all coins generated, and owners are none the wiser unless they take time to manually inspect their software configuration.

So far it hasn't been very profitable, but it -- or some later version -- eventually will be.

Worse Than Failure: Coded Smorgasbord: Archive This

Michael W came into the office to a hair-on-fire freakout: the midnight jobs failed. The entire company ran on batch processes to integrate data across a dozen ERPs, mainframes, CRMs, PDQs, OMGWTFBBQs, etc.: each business unit ran its own choice of enterprise software, but then needed to share data. If they couldn’t share data, business ground to a halt.

Business had ground to a halt, and it was because the archiver job had failed to copy some files. Michael owned the archiver program, not by choice, but because he got the short end of that particular stick.

The original developer liked logging. Pretty much every method looked something like this:

public int execute(Map arg0, PrintWriter arg1) throws Exception {
    Logger=new Logger(Properties.getString("LOGGER_NAME"));
    Log=new Logger(arg1);
    .
    .
    .
catch (Exception e) {
    e.printStackTrace();
    Logger.error("Monitor: Incorrect arguments");
    Log.printError("Monitor: Incorrect arguments");
    arg1.write("In Correct Argument Passed to Method.Please Check the Arguments passed \n \r");
    System.out.println("Monitor: Incorrect arguments");
}

Sometimes, to make the logging even more thorough, the catch block might look more like this:

catch(Exception e){
    e.printStackTrace();
    Logger.error("An exception happened during SFTP movement/import. " + (String)e.getMessage());
}

Java added Generics in 2004. This code was written in 2014. Does it use generics? Of course not. Every Hashtable is stringly-typed:

Hashtable attributes;
.
.
.
if (((String) attributes.get(key)).compareTo("1") == 0 | ((String) attributes.get(key)).compareTo("0") == 0) { … }
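
For contrast, here is a minimal sketch of what a generic version might look like (the attributes and key names are simply carried over for illustration; this is not code from the archiver):

import java.util.HashMap;
import java.util.Map;

Map<String, String> attributes = new HashMap<>();
// ...
String value = attributes.get(key);          // no cast needed
if ("1".equals(value) || "0".equals(value)) {
    // short-circuiting || instead of the bitwise |
}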

And since everything is stringly-typed, you have to worry about case-sensitive comparisons, but don’t worry, the previous developer makes sure everything’s case-insensitive, even when comparing numbers:

if (flag.equalsIgnoreCase("1") ) { … }

And don’t forget to handle Booleans…

public boolean convertToBoolean(String data) {
    if (data.compareToIgnoreCase("1") == 0)
        return true;
    else
        return false;
}

And empty strings…

if(!TO.equalsIgnoreCase("") && TO !=null) { … }
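
For the record, the null check has to come first, otherwise the call on TO throws before the check is ever reached; a minimal sketch, not the original code:

if (TO != null && !TO.isEmpty()) {
    // && short-circuits, so isEmpty() is never invoked on a null reference
}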

Actually, since types are so confusing, let’s make sure we’re casting to known-safe types.

catch (Exception e) {
    Logger.error((Object)this, e.getStackTraceAsString(), null, null);
}

Yes, they really are casting this to Object.

Since everything is stringly typed, we need this code, which checks to see if a String parameter is really sure that it’s a string…

protected void moveFile(String strSourceFolder, Object strSourceObject,
                     String strDestFolder) {
    if (strSourceObject.getClass().getName().compareToIgnoreCase("java.lang.String") == 0) { … }
    …
}
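
For what it’s worth, the idiomatic version of that check is a single instanceof test; a sketch, not the original code:

if (strSourceObject instanceof String) {
    String sourceName = (String) strSourceObject;
    // ... proceed with the move
}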

Now, that all was enough to get Michael’s blood pressure up, but none of that had anything to do with his actual problem. Why did the copy fail? The logs were useless, as they were spammed with messages with no particular organization. The code was bad, sure, so it wasn’t surprising that it crashed. For a little while, Michael thought it might be the getFiles method, which was supposed to identify which files needed to be copied. It did a recursive directory search (with no depth checking, so one symlink could send it into an infinite loop), and it didn’t actually filter out files that it didn’t care about. It just made an ArrayList of every file in the directory structure and then decided which ones to copy.
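
For comparison, java.nio can do a bounded, filtered walk in a few lines. This is only a sketch (the sourceDir variable and the .lock filter are assumptions, not the original code), and Files.walk does not follow symlinks unless explicitly asked, which avoids the infinite-loop hazard:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

// Walk at most 10 levels deep and filter up front instead of collecting everything.
try (Stream<Path> paths = Files.walk(Paths.get(sourceDir), 10)) {
    List<Path> filesToCopy = paths
            .filter(Files::isRegularFile)
            .filter(p -> p.getFileName().toString().endsWith(".lock")) // or whatever filter applies
            .collect(Collectors.toList());
    // ... hand filesToCopy to the copy step
} catch (IOException e) {
    // log and rethrow, or handle as appropriate
}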

He spent some time really investigating the copy method, to see if that would help him understand what went wrong:

sourceFileLength = sourceFile.length();
newPath = sourceFile.getCanonicalPath();
newPath = newPath.replace(".lock", "");
newFile = new File(newPath);
sourceFile.renameTo(newFile);                    
destFileLength = newFile.length();
while(sourceFileLength!=destFileLength)
{
    //Copy In Progress
}
//Remy: I didn't elide any code from the inside of that while loop- that is exactly how it's written, as an empty loop.

Hey, out of curiosity, what does the JavaDoc have to say about renameTo?

Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.

It only throws exceptions if you don’t supply a destination, or if you don’t have permissions to the files. Otherwise, it just returns false on a failure.
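
Checking that boolean, or letting java.nio throw, would at least surface a failed rename. A minimal sketch reusing the sourceFile and newFile names from the snippet above (again, not the original code):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

// Option 1: at least check what renameTo reports.
if (!sourceFile.renameTo(newFile)) {
    throw new IOException("Rename failed: " + sourceFile + " -> " + newFile);
}

// Option 2 (preferred): Files.move throws a descriptive exception on failure,
// so there is nothing to busy-wait on.
Files.move(sourceFile.toPath(), newFile.toPath(), StandardCopyOption.REPLACE_EXISTING);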

So… if the renameTo operation fails, the archiver program will drop into an infinite loop. Unlogged. Undetected. That might seem like the root cause of the failure, but it wasn’t.

As it turned out, the root cause was that someone in ops hit “Ok” on a security update, which triggered a reboot, disrupting all the scheduled jobs.

Michael still wanted to fix the archiver program, but there was another problem with that. He owned the InventoryArchiver.jar. There was also OrdersArchiver.jar, and HRArchiver.jar, and so on. They had all been “written” by the same developer. They all did basically the same job. So they were all mostly copy-and-paste jobs with different hard-coded strings to specify where they ran. But they weren’t exactly copy-and-paste jobs, so each one had to be analyzed, line by line, to see where the logic differences might possibly crop up.


Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #143

Here's what happened in the Reproducible Builds effort between Sunday January 14 and Saturday January 20 2018:

Upcoming events

Packages reviewed and fixed, and bugs filed

During reproducibility testing, 83 FTBFS bugs have been detected and reported by Adrian Bunk.

Reviews of unreproducible packages

56 package reviews have been added, 44 have been updated and 19 have been removed in this week, adding to our knowledge about identified issues.

diffoscope development

Furthermore, Juliana Oliveira has been working in a separate branch on parallelizing diffoscope.

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian: Laura Arjona Reina: It’s 2018, where’s my traditional New Year Plans post?

I closed my eyes, opened them again, a new year began, and we’re even almost finishing January. Time flies.

In this article I’ll post some updates about my life with computers, software and free software communities. It’s more of a “what I’ve been doing” than a “new year plans” post… it seems that I’m learning not to make so many plans (life comes along to break them anyway!).

At home

My home server is still running Debian Jessie. I’m happy that it just works and my services are up, but I’m sad that I couldn’t find time for an upgrade to Debian stable (which is now Debian 9 Stretch) and maybe reinstall it with another config. I have lots of photos and videos to upload in my GNU MediaGoblin instances, but also couldn’t find time to do it (nor to print some of them, which was a plan for 2017, and the files still sleep in external harddrives or DVDs). So, this is a TODO item that crossed the year (yay! now I have almost 12 months ahead to try to complete it!). I’ll try to get this done before summer. I am considering installing my own pump.io instance but I’m not sure it’s good to place it in the same machine as the other services. We’ll see.

I bought a new laptop (well, second hand, but in a very good condition), a Lenovo X230, and this is now my main computer. It’s an i5 with 8 GB RAM. Wow, modern computer at home!
I’m very very happy with it, with its screen, keyboard, and everything. It’s running a clean install of Debian 9 stable with KDE Plasma Desktop and works great. It is not heavy at all so I carry it to work and use it in the public transport (when I can sit) for my contributions to free software.

My phone (Galaxy S III with Lineage OS 14, which is Android 7) fell down and the touchscreen broke (I can see the image but it is unresponsive to touch). When booted normally, the phone is recognized by the PC as storage, and thus I could recover most of the data on it, but it’s not recognized by adb (as when USB debugging is disabled). It is recognized by adb when booted into Recovery (TWRP), though. I tried to enable USB debugging in several ways from adb while in Recovery, but couldn’t. I could switch off the wifi, though, so when I boot the phone it does not receive new messages, etc. I bought an OTG cable but I have no wireless mouse at home and couldn’t make it work with a normal USB mouse. I’ve given up for now until I find a wireless mouse or have more time, and have temporarily returned to using my old Galaxy Ace (with CyanogenMod 7, which is Android 2.3.7). I’ve looked at new phones but I don’t like that all of them have integrated batteries, the screens are too big, and all of them are very expensive (I know they are hi-tech machines, but I don’t want to carry such valuable stuff in my pocket all the time), among other things. I still need to find time to go shopping with the list of phones where I can install Lineage OS (I already visited some stores but wasn’t convinced by the prices, or they had no suitable models).

My glasses broke (in a different incident than the phone) and I used old ones for two weeks, because in the middle of the new ones preparation I had some family issues to care about. So putting time in reading or writing in front of the computer has been a bit uncomfortable and I tried to avoid it in the last weeks. Now I have new glasses and I can see very well 🙂 so I’m returning to my computer TODO.

I’ve given up the battle against iThings at home (I lost). I don’t touch them but other members of the family use them. I’m considering contributing to Debian info about testing things or maintaining some wiki pages about accessing iThings from Debian etc, but will leave that for summer, maybe later. Now I just try not to get depressed about this.

At work

We still have servers running Debian Wheezy, which is in LTS support until May. I’m confident that we’ll upgrade before Wheezy reaches end of life, but frankly, looking at my work plan, I’m not sure when. Every month seems packed with other stuff. I’ve taken some weeks of leave to attend to my family and I have no clear idea about when and how to do things. We’ll see.

I gave a course about free software (in Spanish) for University staff last October. It was 20 hours, with 20 attendees, mostly administrative staff, librarians, and some IT assistants. It went pretty well; we talked about the definition of free software, history, free culture, licenses, free software tools for the office, for Android, and free software as a service (“cloud” stuff). They liked it very much. Many of them didn’t know that our Uni uses free software for our webmail (RoundCube), Cloud services (OwnCloud), and other important areas. I requested promotional material from the FSFE and I gave away many stickers. I also gave away all the Debian stickers that I had, and some other free software stickers. I’m not sure when and how I will get new Debian stickers, not sure if somebody from Madrid is going to FOSDEM. I’m considering printing them myself but I don’t know a good printer (for stickers) here. I’ll ask and try with a small investment, and see how it works out.

Debian

I think I have too many things on my plate and would like to close some of them and focus on others, or maybe do different things.

I feel comfortable doing publicity work, but I would be happier if the team got bigger and we had more contributors. I’m happy that we managed to publish a Debian Project News issue at DebConf17, a new one in September, and a new one in November, but since then I couldn’t find time to put into it. I’ll try to make a new issue happen before February ends, though. Meanwhile, the team has managed to handle the different announcements (point releases and others) and we try to keep the community informed via micronews (mostly) and the blog bits.debian.org.

I’m keeping an eye on DebConf18 organization and I hope I can engage with publicity work about it, but I feel that we will need a local team member that leads the what-to-publish/when-to-publish and probably translations too.

About Spanish translations, I’m very happy that the translations for the Debian website have new contributors and reviewers who are doing really good work. In the last months I’ve been a bit behind, just trying to review and keep my files up to date, but I hope I can set up a routine in the following weeks to get more involved again, and also try to translate new files too.

For some time now, the Debian website work is what keeps my motivation in Debian up. It’s a bit of a paradox, because the Debian website is too big, complicated, old in some sense, and we have so much stuff that needs to be done, and so many people complaining or giving ideas (without patches), that one could get overwhelmed, depressed, and sometimes just want to resign from this team. But after all these years, it is only now that I feel comfortable with the codebase and experienced enough to try things, review bugs, and try to help with the things needed. So I’m happy to put time into the website team, updating or improving the website, even when I make mistakes, or triaging bugs. Also, working on the website is very rewarding because there is always some small thing that I can do to fix something, and thus “get something done” even when my time is limited. The bad news is that there are also some big tasks that require a lot of time and motivation, and I keep postponing them… 😦 At least, I try to file bugs for all the stuff that I would like to put time into, and maybe slowly, but thanks to all the team members and other contributors, we are advancing: we have a more updated /partners section (still needs work), a new /derivatives section, and we are working on the migration from CVS to Git, the reorganization of the download pages, and other stuff.

Sometimes I’d like to do other/new things in Debian. Learn to package (and thus package spigot and gnusrss, used in Publicity, or weewx, which we use at work, and also help maintain or adopt some small things), or join the Documentation Team, or put more work into the Outreach Team (relaunch the Welcome Team), or put more work into the Internationalization Team. Or maybe other stuff. But before that, I feel that I would need to finish some pending tasks in my current teams, and also find more people for them, too.

Other free software communities

I am still active in the pump.io community, although I don’t post very often from my social network account. I’ll try to open Dianara more often, and use Puma on my new phone (maybe I should adopt/fork Puma…). I am present in the IRC channel (#pump.io on Freenode) and try to organize and attend the meetings. I have a big TODO, which is to advance our application to join Software Freedom Conservancy (another item that crossed over from the 2017 TODO to 2018), but I’ll really try to get this done before January ends.

I keep on testing F-Droid and free software apps for Android (now again in Android 2.x, I get F-Droid crashes all the time “OutofMemory” :D). I keep on reading the IRC channels and mailing list (also the mailing list for Replicant. If I get the broken phone to work with the OTG I will install Replicant on it and will keep it for tests). I keep on translating Android apps when I have some time to kill.

I have no idea who is going to FOSDEM and if I should talk to them prior to their travel (e.g. ask to bring Debian stickers for me if somebody from Madrid goes, or promote if there is any F-Droid or Pump.io or GNU MediaGoblin IRC meeting or talk or whatever) but I really got busy in December-January with life and family stuff, so I just left FOSDEM apart in my mind and will try to join and see the streaming the weekend that the conference is happening, or maybe later.

I think that’s all, or at least this blogpost became very long and I don’t find anything else to write, for now, to make it longer. In any case, it’s hard for me these days to make plans more than one-two weeks ahead. Hopefully I’ll write in my blog more often during this year.

Comments?

You can comment on this post using this pump.io thread.

Planet Debian: Benjamin Mako Hill: Introducing Computational Methods to Social Media Scientists

The ubiquity of large-scale data and improvements in computational hardware and algorithms have enabled researchers to apply computational approaches to the study of human behavior. One of the richest contexts for this kind of work is social media datasets like Facebook, Twitter, and Reddit.

We were invited by Jean Burgess, Alice Marwick, and Thomas Poell to write a chapter about computational methods for the Sage Handbook of Social Media. Rather than simply listing what sorts of computational research have been done with social media data, we decided to use the chapter both to introduce a few computational methods and to use those methods to analyze the field of social media research.

A “hairball” diagram from the chapter illustrating how research on social media clusters into distinct citation network neighborhoods.

Explanations and Examples

In the chapter, we start by describing the process of obtaining data from web APIs and use as a case study our process for obtaining bibliographic data about social media publications from Elsevier’s Scopus API.  We follow this same strategy in discussing social network analysis, topic modeling, and prediction. For each, we discuss some of the benefits and drawbacks of the approach and then provide an example analysis using the bibliographic data.

We think that our analyses provide some interesting insight into the emerging field of social media research. For example, we found that social network analysis and computer science drove much of the early research, while recently consumer analysis and health research have become more prominent.

More importantly though, we hope that the chapter provides an accessible introduction to computational social science and encourages more social scientists to incorporate computational methods in their work, either by gaining computational skills themselves or by partnering with more technical colleagues. While there are dangers and downsides (some of which we discuss in the chapter), we see the use of computational tools as one of the most important and exciting developments in the social sciences.

Steal this paper!

One of the great benefits of computational methods is their transparency and their reproducibility. The entire process—from data collection to data processing to data analysis—can often be made accessible to others. This has both scientific benefits and pedagogical benefits.

To aid in the training of new computational social scientists, and as an example of the benefits of transparency, we worked to make our chapter pedagogically reproducible. We have created a permanent website for the chapter at https://communitydata.cc/social-media-chapter/ and uploaded all the code, data, and material we used to produce the paper itself to an archive in the Harvard Dataverse.

Through our website, you can download all of the raw data that we used to create the paper, together with code and instructions for how to obtain, clean, process, and analyze the data. Our website walks through what we have found to be an efficient and useful workflow for doing computational research on large datasets. This workflow even includes the paper itself, which is written using LaTeX + knitr. These tools let changes to data or code propagate through the entire workflow and be reflected automatically in the paper itself.

If you  use our chapter for teaching about computational methods—or if you find bugs or errors in our work—please let us know! We want this chapter to be a useful resource, will happily consider any changes, and have even created a git repository to help with managing these changes!


The book chapter and this blog post were written with Jeremy Foote and Aaron Shaw. You can read the book chapter here. This blog post was originally published on the Community Data Science Collective blog.

Planet Linux Australia: Simon Lyall: Linux.conf.au – Day 2 – Keynote – Matthew Todd

Collaborating with Everybody: Open Source Drug Discovery

  • Term used is a bit undefined. Open Source, Free Drugs?
  • First Open Source Project – Praziquantel
    • Molecule has 2 mirror image forms. One does the job, other tastes awful. Pills were previously a mix
    • Project to just have pill with the single form
      • Created discussion
      • Online Lab Notebook
      • 75% of contributions were from private sector (especially Syncom)
      • Ended up finding an approach that worked, different from what was originally proposed, thanks to feedback.
      • Similar method found by private company that was also doing the work
  • Conventional Drug discovery
    • Find drug that kills something bad – Hit
    • Test it and see if it is suitable – Lead
    • 13,500 molecules in the public domain that kill the malaria parasite
  • 6 Laws of Open Science
    • All data is open and all ideas are shared
    • Anyone can take part at any level of the project
  • Openness increasingly seen as key
  • Open Source Malaria
    • 4 campaigns
    • Work on a molecule, park it when it doesn’t seem promising
    • But all data is still public
  • What it actually is
    • Electronic lab book (80% of scientists still use paper)
    • Using Labtrove, changing to labarchives
    • Everything you do goes up every day
    • Todo list
      • Tried stuff, ended up using issue list on github
      • Not using most other github stuff
    • Data on a Google Sheet
    • Light Website, twitter feed
  • Lab vs Code
  • Have a promising molecule – works well in mice
    • Would probably be a patentable state
    • Not sure yet exactly how it works
  • Competition – Predictive model
    • Lots of solutions submitted, not good enough to use
    • Hopeful a model will be created
  • Tried a known-working molecule from elsewhere, but couldn’t get it to work
    • This is out in the open. Lots of discussion
  • School group able to recreate Daraprim, a high-priced US drug
  • Public Domain science is now accepted for publications
  • Need to make computers understand molecule diagrams and convert them to a representative format which can then be searched.
  • Missing
    • Automated links to databases in tickets
    • Basic web page stuff, auto-porting of data, newsletter, become non-profit, stickers
    • Stuff is not folded back into the Wiki
  • OS Mycetoma – New Project
    • Fungus with no treatment
    • Working on possible molecule to treat
  • Some ideas on how to get products created this way to market – eg “data exclusivity”

 


Planet Debian: Bits from Debian: Mentors and co-mentors for Debian's Google Summer of Code 2018

GSoC logo

Debian is applying as a mentoring organization for the Google Summer of Code 2018, an internship program open to university students aged 18 and up.

Debian already has a wide range of projects listed but it is not too late to add more or to improve the existing proposals. Google will start reviewing the ideas page over the next two weeks and students will start looking at it in mid-February.

Please join us and help extend Debian! You can consider listing a potential project for interns or listing your name as a possible co-mentor for one of the existing projects on Debian's Google Summer of Code wiki page.

At this stage, mentors are not obliged to commit to accepting an intern but it is important for potential mentors to be listed to get the process started. You will have the opportunity to review student applications in March and April and give the administrators a definite decision if you wish to proceed in early April.

Mentors, co-mentors and other volunteers can follow an intern through the entire process or simply volunteer for one phase of the program, such as helping recruit students in a local university or helping test the work completed by a student at the end of the summer.

Participating in GSoC has many benefits for Debian and the wider free software community. If you have questions, please come and ask us on IRC #debian-outreach or the debian-outreachy mailing list.

Planet Debian: Lars Wirzenius: Ick: a continuous integration system

TL;DR: Ick is a continuous integration or CI system. See http://ick.liw.fi/ for more information.

More verbose version follows.

First public version released

The world may not need yet another continuous integration system (CI), but I do. I've been unsatisfied with the ones I've tried or looked at. More importantly, I am interested in a few things that are more powerful than what I've ever even heard of. So I've started writing my own.

My new personal hobby project is called ick. It is a CI system, which means it can run automated steps for building and testing software. The home page is at http://ick.liw.fi/, and the download page has links to the source code and .deb packages and an Ansible playbook for installing it.

I have now made the first publicly advertised release, dubbed ALPHA-1, version number 0.23. It is of alpha quality, and that means it doesn't have all the intended features and if any of the features it does have work, you should consider yourself lucky.

Invitation to contribute

Ick has so far been my personal project. I am hoping to make it more than that, and invite contributions. See the governance page for the constitution, the getting started page for tips on how to start contributing, and the contact page for how to get in touch.

Architecture

Ick has an architecture consisting of several components that communicate over HTTPS using RESTful APIs and JSON for structured data. See the architecture page for details.

Manifesto

Continuous integration (CI) is a powerful tool for software development. It should not be tedious, fragile, or annoying. It should be quick and simple to set up, and work quietly in the background unless there's a problem in the code being built and tested.

A CI system should be simple, easy, clear, clean, scalable, fast, comprehensible, transparent, reliable, and boost your productivity to get things done. It should not be a lot of effort to set up, require a lot of hardware just for the CI, need frequent attention for it to keep working, and developers should never have to wonder why something isn't working.

A CI system should be flexible to suit your build and test needs. It should support multiple types of workers, as far as CPU architecture and operating system version are concerned.

Also, like all software, CI should be fully and completely free software and your instance should be under your control.

(Ick is little of this yet, but it will try to become all of it. In the best possible taste.)

Dreams of the future

In the long run, I would like ick to have features like the ones described below. It may take a while to get all of them implemented.

  • A build may be triggered by a variety of events. Time is an obvious event, as is the source code repository for the project changing. More powerfully, any build dependency changing, regardless of whether the dependency comes from another project built by ick, or a package from, say, Debian: ick should keep track of all the packages that get installed into the build environment of a project, and if any of their versions change, it should trigger the project build and tests again.

  • Ick should support building in (or against) any reasonable target, including any Linux distribution, any free operating system, and any non-free operating system that isn't brain-dead.

  • Ick should manage the build environment itself, and be able to do builds that are isolated from the build host or the network. This partially works: one can ask ick to build a container and run a build in the container. The container is implemented using systemd-nspawn. This can be improved upon, however. (If you think Docker is the only way to go, please contribute support for that.)

  • Ick should support any workers that it can control over ssh or a serial port or other such neutral communication channel, without having to install an agent of any kind on them. Ick won't assume that it can have, say, a full Java run time, so that the worker can be, say, a micro controller.

  • Ick should be able to effortlessly handle very large numbers of projects. I'm thinking here that it should be able to keep up with building everything in Debian, whenever a new Debian source package is uploaded. (Obviously whether that is feasible depends on whether there are enough resources to actually build things, but ick itself should not be the bottleneck.)

  • Ick should optionally provision workers as needed. If all workers of a certain type are busy, and ick's been configured to allow using more resources, it should do so. This seems like it would be easy to do with virtual machines, containers, cloud providers, etc.

  • Ick should be flexible in how it can notify interested parties, particularly about failures. It should allow an interested party to ask to be notified over IRC, Matrix, Mastodon, Twitter, email, SMS, or even by a phone call and speech synthesiser. "Hello, interested party. It is 04:00 and you wanted to be told when the hello package has been built for RISC-V."

Please give feedback

If you try ick, or even if you've just read this far, please share your thoughts on it. See the contact page for where to send it. Public feedback is preferred over private, but if you prefer private, that's OK too.

Cryptogram: Skygofree: New Government Malware for Android

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That's not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn't respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

Cory Doctorow: My keynote from ConveyUX 2017: “I Can’t Let You Do That, Dave.”

“The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

—Cory Doctorow

ConveyUX is pleased to feature author and activist Cory Doctorow to close out our 2017 event. Cory’s body of work includes fascinating science fiction and engaging non-fiction about the relationship between society and technology. His most recent book is Information Doesn’t Want to be Free: Laws for the Internet Age. Cory will delve into some of the issues expressed in that book and talk about issues that affect all of us now and in the future. Cory will be on hand for Q&A and a post-session book signing.

Planet Debian: Thomas Lange: FAI.me build service now supports backports

The FAI.me build service now supports packages from the backports repository. When selecting the stable distribution, you can also enable backports packages. The customized installation image will then use the kernel from backports (currently 4.14) and you can add additional packages by appending /stretch-backports to the package name, e.g. notmuch/stretch-backports.

Currently, the FAIme service offers images built with Debian stable, stable with backports, and Debian testing.

If you have any ideas for extensions or any feedback, send an email to FAI.me =at= fai-project.org

FAI.me

Planet Debian: Dirk Eddelbuettel: Rblpapi 0.3.8: Strictly maintenance

Another Rblpapi release, now at version 0.3.8, arrived on CRAN yesterday. Rblpapi provides a direct interface between R and the Bloomberg Terminal via the C++ API provided by Bloomberg Labs (but note that a valid Bloomberg license and installation is required).

This is the eighth release since the package first appeared on CRAN in 2016. This release wraps up a few smaller documentation and setup changes, but also includes an improvement to the (less frequently-used) subscription mode which Whit cooked up on the weekend. Details below:

Changes in Rblpapi version 0.3.8 (2018-01-20)

  • The 140 day limit for intra-day data histories is now mentioned in the getTicks help (Dirk in #226 addressing #215 and #225).

  • The Travis CI script was updated to use run.sh (Dirk in #226).

  • The install_name_tool invocation under macOS was corrected (@spennihana in #232)

  • The blpAuthenticate help page has additional examples (@randomee in #252).

  • The blpAuthenticate code was updated and improved (Whit in #258 addressing #257)

  • The jump in version number was an oversight; this should have been 0.3.7.

And only while typing up these notes do I realize that I fat-fingered the version number. This should have been 0.3.7. Oh well.

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the Rblpapi page. Questions, comments etc should go to the issue tickets system at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram: Dark Caracal: Global Espionage Malware from Lebanon

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised by fake secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that's been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There's a lot in the full report. It's worth reading.

Three news articles.

Worse Than Failure: Alien Code Reuse

“Probably the best thing to do is try and reorganize the project some,” said Tim, “Alien”’s new boss. “It’s a bit of a mess, so a little refactoring will help you understand how the code all fits together.”

“Alien” grabbed the code from git, and started walking through the code. As promised, it was a bit of a mess, but partially that mess came from their business needs. There was a bunch of common functionality in a Common module, but for each region they did business in- Asia, North America, Europe, etc.- there was a region specific deployable, each in its own module. Each region had its own build target that would include the Common module as part of the build process.

The region-specific modules were vaguely organized into sub-modules, and that’s where “Alien” settled in to start reorganizing. Since Asia was the largest, most complicated module, they started there, on a sub-module called InventoryManagement. They moved some files around, set up the module and sub-modules in Maven, and then rebuilt.

The Common library failed to build. This gave “Alien” some pause, as they hadn’t touched anything pertaining to the Common project. Specifically, Common failed to build because it was looking for some files in the Asia.InventoryManagement sub-module. Cue the dive into the error trace and the vagaries of the build process. Was there a dependency between Common and Asia that had gone unnoticed? No. Was there a build-order issue? No. Was Maven just being… well, Maven? Yes, but that wasn’t the problem.

After hunting around through all the obvious places, “Alien” eventually ran an ls -al.

~/messy-app/base/Common/src/com/mycompany > ls -al
lrwxrwxrwx 1 alien  alien    39 Jan  4 19:10 InventoryManagement -> ../../../../../Asia/InventoryManagement/src/com/mycompany/IM/
drwxr-x--- 3 alien  alien  4096 Jan  4 19:10 core/

Yes, that is a symbolic link. A long-ago predecessor discovered that the Asia.InventoryManagement sub-module contained some code that was useful across all modules. Actually moving that code into Common would have involved refactoring Asia, which was the largest, most complicated module. Presumably, that sounded like work, so instead they just added a sym-link. The files actually lived in Asia, but were compiled into Common.

“Alien” writes, “This is the first time in my over–20-year working life I see people reuse source code like this.”

They fixed this, and then went hunting, only to find a dozen more examples of this kind of code “reuse”.


Planet Linux Australia: James Morris: LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Linux.conf.au Kernel Miniconf in Sydney — the slides from the talk are here: http://namei.org/presentations/selinux_namespacing_lca2018.pdf

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support.  Underlying this is a requirement to provide per-namespace SELinux instances,  where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who released the code via https://github.com/stephensmalley/selinux-kernel/tree/selinuxns.  There were and still are many TODO items.  I’ve since been working on providing namespacing support to on-disk inode labels, which are represented by security xattrs.  See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

ETA: the video is up! See:

Planet Debian: Daniel Pocock: Keeping an Irish home warm and free in winter

The Irish Government's Better Energy Homes Scheme gives people grants from public funds to replace their boiler and install a zoned heating control system.

Having grown up in Australia, I think it is always cold in Ireland and would be satisfied with a simple control switch with a key to make sure nobody ever turns it off but that isn't what they had in mind for these energy efficiency grants.

Having recently stripped everything out of the house, right down to the brickwork and floorboards in some places, I'm cautious about letting any technologies back in without checking whether they are free and trustworthy.

bare home

This issue would also appear to fall under the scope of FSFE's Public Money Public Code campaign.

Looking at the last set of heating controls in the house, they have been there for decades. Therefore, I can't help wondering, if I buy some proprietary black box today, will the company behind it still be around when it needs a software upgrade in future? How many of these black boxes have wireless transceivers inside them that will be compromised by security flaws within the next 5-10 years, making another replacement essential?

With free and open technologies, anybody who is using it can potentially make improvements whenever they want. Every time a better algorithm is developed, if all the homes in the country start using it immediately, we will always be at the cutting edge of energy efficiency.

Are you aware of free and open solutions that qualify for this grant funding? Can a solution built with devices like Raspberry Pi and Arduino qualify for the grant?

Please come and share any feedback you have on the FSFE discussion list (join, reply to the thread).

Planet Debian: Norbert Preining: Continuous integration testing of TeX Live sources

The TeX Live sources consist in total of around 15000 files and 8.7M lines (see git stats). They integrate several upstream projects, including big libraries like FreeType, Cairo, and Poppler. Changes come in from a variety of sources: external libraries, TeX-specific projects (LuaTeX, pdfTeX etc.), as well as our own adaptations and changes/patches to upstream sources. For quite some time I have wanted continuous integration (CI) testing, but since our main repository is based on Subversion, the usual (easy, or the one I know) route via Github and one of the CI testing providers didn’t come to my mind – until last week.

Over the weekend I have set up CI testing for our TeX Live sources by using the following ingredients: git-svn for checkout, Github for hosting, Travis-CI for testing, and a cron job that does the connection. To be more specific:

  • git-svn I use git-svn to check out only the source part of the (otherwise far too big) subversion repository onto my server. This is similar to the git-svn checkout of the whole of TeX Live as I reported here, but contains only the source part.
  • Github The git-svn checkout is pushed to the project TeX-Live/texlive-source on Github.
  • Travis-CI The CI testing is done in the TeX-Live/texlive-source project on Travis-CI (who are offering free services for open source projects, thanks!)

Although this sounds easy, there are a few stumbling blocks: First of all, the .travis.yml file is not contained in the main subversion repository, so adding it to the master tree that is managed via git-svn does not work, because the history is rewritten (git svn rebase). My solution was to create a separate branch travis-ci which adds only the .travis.yml file and merges in master.

Travis-CI by default tests all branches, and does not test those not containing a .travis.yml, but to be sure I added an except clause stating that the master branch should not be tested. This way other developers can try different branches, too. The full .travis.yml can be checked on Github, here is the current status:

# .travis.yml for texlive-source CI building
# Norbert Preining
# Public Domain

language: c

branches:
  except:
  - master

before_script:
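  # presumably: refresh the .info timestamps so make does not try to regenerate them after the fresh git checkout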
  - find . -name \*.info -exec touch '{}' \;

before_install:
  - sudo apt-get -qq update
  - sudo apt-get install -y libfontconfig-dev libx11-dev libxmu-dev libxaw7-dev

script: ./Build

What remains is stitching these things together by adding a cron job that regularly does git svn rebase on the master branch, merges the master branch into travis-ci branch, and pushes everything to Github. The current cron job is here:

#!/bin/bash
# cron job for updating texlive-source and pushing it to github for ci
set -e

TLSOURCE=/home/norbert/texlive-source.git
GIT="git --no-pager"

quiet_git() {
    stdout=$(tempfile)
    stderr=$(tempfile)

    if ! $GIT "$@" > $stdout 2>$stderr; then
	echo "STDOUT of git command:"
	cat $stdout
	echo "************"
        cat $stderr >&2
        rm -f $stdout $stderr
        exit 1
    fi

    rm -f $stdout $stderr
}

cd $TLSOURCE
quiet_git checkout master
quiet_git svn rebase
quiet_git checkout travis-ci
# don't use [skip ci] here because we only built the 
# last commit, which would stop building
quiet_git merge master -m "merging master"
quiet_git push --all

With this setup we get CI testing of our changes in the TeX Live sources, and in the future maybe some developers will use separate branches to get testing there, too.

Enjoy.

Planet Linux Australia: Simon Lyall: Linux.conf.au 2018 – Day 1 – Session 3 – Developers, Developers Miniconf

Beyond Web 2.0 – Russell Keith-Magee

  • Django guy
  • Back in 2005 when Django first came out
    • Web was fairly simple, click something and something happened
    • model, views, templates, forms, url routing
  • The web c 2016
    • Rich client
    • API
    • mobile clients, native apps
    • realtime channels
  • Rich client frameworks
    • response to the increased complexity that is required
    • Complex client-side and complex server-side code
  • Isomorphic Javascript development
    • Same code on both client and server
    • Only works with javascript really
    • hacks to work with other languages but not great
  • Isomorphic javascript development
    • Requirements
    • Need something in-between server and browser
    • Was once done with Java based web clients
    • model, view, controller
  • API-first development
  • How does it work with high-latency or no-connection?
  • Part of the controller and some of the model needed in the client
    • If you have python on the server you need python on the client
    • brython, skulpt, pypy.js
    • <script type=”text/python”>
    • Note: Not Python being compiled into javascript. Python is run in the browser
    • Need to download full python interpreter though (500k-15M)
    • Fairly fast
  • Do we need a full python interpreter?
    • Maybe something just to run the bytecode
    • Batavia
    • Javascript implementation of python virtual machine
    • 10KB
    • Downside – slower than cpython on the same machine
  • WASM
    • Like assembly but for the web
    • Benefits from 70y of experience with assembly languages
    • Close to Cpython speed
    • But
      • Not quite on browsers
      • No garbage collection
      • Cannot manipulate DOM
      • But both coming soon
  • Example: http://bit.ly/covered-in-bees
  • But “possible isn’t enough”
  • pybee.org
  • pybee.org/bee/join

Using “old skool” Free tools to easily publish API documentation – Alec Clew

  • https://github.com/alecthegeek/doc-api-old-skool
  • You API is successful if people are using it
  • High Quality and easy to use
  • Provide great docs (might cut down on support tickets)
  • Who are you writing for?
    • Might not have english as first language
    • New to the API
    • Might have different tech expertise (different languages)
    • Different tooling
  • Can be hard work
  • Make better docs
    • Use diagrams
    • Show real code (complete and working)
  • Keep your sentence simple
  • Keep the docs current
  • Treat documentation like code
    • Fix bugs
    • add features
    • refactor
    • automatic builds
    • Cross platform support
    • “Everything” is text and under version control
  • Demo using pandoc
  • Tools
  • pandoc, plantuml, Graphviz, M4, make, bash/sed/python/etc

 

Lightning Talks

  • Nic – Alt attribute
    • need to be added to images
    • Don’t have alts when images as links
    • http://bit.ly/Nic-slides
  • Vaibhav Sager – Travis-CI
    • Builds codes
    • Can build websites
    • Uses to build Resume
    • Build presentations
  • Steve Ellis
    • Openshift Origin Demo
  • Alec Clews
    • Python vs C vs PHP vs Java vs Go for small case study
    • Implemented simple xmlrpc client in 5 languages
    • Python and Go were straightforward, each had one simple trick (40-50 lines)
    • C was 100 lines. A lot harder. Conversions, etc all manual
    • PHP wasn’t too hard. easier in modern vs older PHP
  • Daurn
    • Lua
    • Fengari.io – Lua in the browser
  • Alistair
    • How not to docker ( don’t trust the Internet)
    • Don’t run privileged
    • Don’t expose your docker socket
    • Don’t use host network mode
    • Know where your code is FROM
    • Make sure your kernel on your host is secure
  • Daniel
    • Put proxy in front of the docker socket
    • You can use it to limit what no-priv users with socket access to docker port can do

 


Planet Linux Australia: Simon Lyall: Linux.conf.au 2018 – Day 1 – Session 2

Manage all your tasks with TaskWarrior – Paul ‘@pjf’ Fenwick

  • Lots of task management software out there
    • Tried lots
    • Doesn’t like proprietary ones, but unable to add features he wants
    • Likes command line
  • Disclaimer: “Most systems do not work for most people”
  • TaskWarrior
    • Lots of features
    • Learning cliff

Intro to TaskWarrior

  • Command line
  • Simple level can be just a todo list
  • Can add tags
    • unstructured many to many
    • Added by just putting “+whatever” on the command
    • Great for searching
    • Can put all people or all types of jobs together
  • Meta Tags
    • Automatic date related (eg due this week or today)
  • Project
    • A bunch of tasks
    • Can be strung together
    • eg Travel project, projects for each trip inside them
  • Contexts (show only some projects and tasks)
    • Work tasks
    • Tasks for just a client
    • Home stuff
  • Annotation (Taking notes)
    • $ task 31 annotate “extra stuff”
    • has an auto timestamp
    • show by default, or just show a count of them
  • Tasks associated with dates
    • “wait”
    • Don’t show task until a date (approx)
    • Hide a task for an amount of time
    • Scheduled tasks’ urgency boosted at a specific date
  • Until
    • delete a task after a certain date
  • Relative to other tasks
    • eg book flights 30 days before a conference
    • good for scripting, create a whole bunch of related tasks for a project
  • due dates
    • All sorts of things (see above) give tasks higher priority
    • Tasks can be manually changed
  • Tools and plugins
    • Taskopen – Opens resources in annotations (eg website, editor)
  • Working with others
    • Bugwarrior – interfaces with github, trello, gmail, jira, trac, bugzilla and lots of things
    • Lots of settings
    • Keeps all in sync
  • Lots of extra stuff
    • Paul updates his shell prompt to remind him things are busy
  • Also has
    • Graphical reports: burndown, calendar
    • Hooks: Eg hooks to run all sort of stuff
    • Online Sync
    • Android client
    • Web client
  • Reminder it has a steep learning curve.

Love thy future self: making your systems ops-friendly – Matt Palmer

  • Instrumentation
  • Instrumenting incoming requests
    • Count of the total number of requests (broken down by requestor)
    • Count of responses (broken down by request/error)
    • How long it took (broken down by success/errors)
    • How many right now
  • Get number of in-progress requests, average time etc
  • Instrumenting outgoing requests
    • For each downstream component
    • Number of requests sent
    • how many responses we’ve received (broken down by success/err)
    • How long it took to get the response (broken down by request/error)
    • How many right now
  • Gives you
    • incoming/outgoing ratio
    • error rate = problem is downstream
  • Logs
    • Logging cost tends to be more than instrumentation
  • Three Log priorities
    • Error
      • Need a full stack trace
      • Add info don’t replace it
      • Capture all the relevant variables
      • Structure
    • Information
      • Startup messages
      • Basic request info
      • Sampling
    • Debug
      • printf debugging at webscale
      • tag with module/method
      • unique id for each request
      • late-bind log data if possible.
      • Allow selective activation at runtime (feature flag, special url, signals)
    • Summary
      • Visibility required
      • Fault isolation

 


Planet Debian: Shirish Agarwal: PrimeZ270-p, Intel i7400 review and Debian – 1

This is going to be a biggish one as well.

This is a continuation from my last blog post.

Before diving into the installation, I had been reading Matthew Garrett’s work for quite a while. Thankfully most of his blog posts do get mirrored on planet.debian.org, hence it is easy to get some idea of what needs to be done, although I have told him (I think I even shared it here) that he should somehow make his site more easily navigable. Trying to find posts on either ‘GPT’ or ‘UEFI’ and to have those posts sorted date-wise, ascending or descending, is not possible; at least I couldn’t find a way to do it, as he doesn’t organize them date-wise or something.

The closest I could come was using ‘$keyword’ site:https://mjg59.dreamwidth.org/ via a search engine and going through the entries shared therein. This doesn’t mean I don’t value his contribution. It is, in fact, the opposite. AFAIK he was one of the first people who drew the community’s attention when UEFI came in and only Microsoft Windows could be booted on such systems, nothing else.

I may be wrong but AFAIK he was the first one to talk about having a shim and was part of getting people to be part of the shim process.

While I’m sure Matthew’s understanding may have evolved significantly from what he had shared before, it was two specific blog posts that I had to re-read before trying to install MS-Windows and then a Debian GNU/Linux system on the machine.

I went over to a friend’s house who had Windows 7 running, used diskpart, and did the change to GPT using the Windows TechNet article.

I had to go the GPT way, as I understood that MS-Windows takes all four primary partitions for itself, leaving nothing for any other operating system to use.

I did the conversion to GPT and tried to install MS-Windows 10, as my current motherboard and all future motherboards from Intel Gen7/Gen8 onwards do not support anything less than Windows 10. I did see an unofficial patch floating around on github somewhere but have now lost the reference to it. I had read some of the bug reports of the repo, which seemed to suggest it was still a work in progress.

Now this is where it starts becoming a bit… let’s say interesting.

Now a friend/client of mine offered me a job to review MS-Windows 10, with his product keys of course. I was a bit hesitant, as it had been a long time since I had worked with MS-Windows and I didn’t know if I could do it or not; the other concern was a suspicion that I might like it too much. While I did review it, I found –

a. It is one heck of a piece of bloatware – I had thought MS-Windows would have learned this by now, but no, they still have to learn that adware and bloatware aren’t solutions. I still can’t get my head wrapped around how a 4.1 GB MS-Windows ISO gets extracted to 20 GB and you still have to install shit-loads of third-party tools to actually get anything done. Just amazed (and not in a good way).

Just to share an example: I still had to get something like Revo Uninstaller, as MS-Windows to this date hasn’t learned to uninstall programs cleanly and needs a tool like that to clean the registry and other places to remove the titbits left along the way.

Edit/Update – It still doesn’t have the Fall Creators Update, which is supposed to be another 4 GB+ ISO; god only knows how much space that will take.

b. It’s still not gold – With all the hoopla and ads around MS-Windows 10, I was under the impression that MS-Windows had gone gold, i.e. that it had a finished release the way Debian will with ‘buster’ sometime next year, probably around or after DebConf 2019 is held. Instead, the next Windows 10 release from Microsoft is due around July 2018, so it’s still a few months off.

c. I had read an insightful article a few years ago by a junior Microsoft employee sharing/emphasizing why MS cannot do GNU/Linux volunteer/bazaar-style development. To put it in not so many words, it came down to the cultural differences in the way the two communities operate. In GNU/Linux, one more patch or one more pull request is encouraged; it may be integrated in that point release, and if it can’t be, it will be in the next point release (unless it changes something much more core/fundamental which needs more in-depth review). MS-Windows, on the other hand, actively discourages that sort of behavior, as it means more time for integration and testing, and from the sound of it MS still doesn’t do Continuous Integration (CI), regression testing etc. as is increasingly common in many GNU/Linux projects.

I wish I could have shared the article, but I don’t have the link anymore. @Lazyweb, if you would be so kind as to help find that article. The developer had shared some sort of ssh credentials or something to prove who he was, which he later removed, probably because the consequences to him of sharing that insight were not worth it, although the writing seemed to be valid.

There were many more quibbles, but I have shared the above ones. For example, copying files from the hdd to usb disks doesn’t tell you how much time it will take, while in Debian I’ve come to take a time estimate for any operation as a given.

Before starting on the main issue, some info beforehand, although I don’t know how relevant that info might be –

Prime Z270-P uses EFI 2.60 by American Megatrends –

/home/shirish> sudo dmesg | grep -i efi
[sudo] password for shirish:
[ 0.000000] efi: EFI v2.60 by American Megatrends

I can share more info. if needed later.
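
If more detail is wanted later, the firmware boot entries can also be listed from within Debian, assuming the efibootmgr package is installed:

sudo efibootmgr -v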

Now, as I understood/interpreted info found on the web and from experience, Microsoft makes quite a few more partitions than necessary to get MS-Windows installed.

This is how it stacks up/shows up –

> sudo fdisk -l
Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: xxxxxxxxxxxxxxxxxxxxxxxxxxx

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data
/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem
/dev/sda6 3718232064 5280731135 1562499072 745.1G Linux filesystem
/dev/sda7 5280731136 7761199103 2480467968 1.2T Linux filesystem
/dev/sda8 7761199104 7814035455 52836352 25.2G Linux swap

I had made a 2 GB /boot in the MS-Windows installer, as I had thought it would take only some space and leave the rest for Debian GNU/Linux’s /boot to put its kernel entries, memory-checking tools and whatever else I wanted to have on /boot/debian, but for some reason I have not yet understood, that didn’t work out as I expected.

Device Start End Sectors Size Type
/dev/sda1 34 262177 262144 128M Microsoft reserved
/dev/sda2 264192 1185791 921600 450M Windows recovery environment
/dev/sda3 1185792 1390591 204800 100M EFI System
/dev/sda4 1390592 3718037503 3716646912 1.7T Microsoft basic data

As seen above, the first four partitions are taken by MS-Windows itself. I just wish I had understood how to use GPT disklabels properly so I could figure things out better, but it seems (for reasons not fully understood) the EFI partition is a lowly 100 MB, which I suspect is where my /boot went when I asked for it to be 2 GB. Whether that is UEFI’s doing, Microsoft’s doing, or just some default, I don’t know. Having the EFI partition this small hampers the way I want to do things, as will be clear in a short while.

After I installed MS-Windows, I installed Debian GNU/Linux using the net install method.

The following is what I had put on piece of paper as what partitions would be for GNU/Linux –

/boot – 512 MB (should be enough to accommodate a couple of kernel versions, memory checking and any other tools I might need in the future).

/ – 700 GB – well, admittedly that looks a bit insane, but I do like to play with new programs/binaries as and when possible and don’t want to run out of space when I forget to clean things up.

[off-topic, wishlist] One tool I would like to have (and dunno if it exists) is the ability to know when I installed a package, how many times and how frequently I have used it, and the ability to add small notes or a description to the package. Many a time I have seen that the package description is either too vague or doesn’t focus on the practical usefulness of a package to me.

An easy example to share what I mean would be the apt package –

aptitude show apt
Package: apt
Version: 1.6~alpha6
Essential: yes
State: installed
Automatically installed: no
Priority: required
Section: admin
Maintainer: APT Development Team
Architecture: amd64
Uncompressed Size: 3,840 k
Depends: adduser, gpgv | gpgv2 | gpgv1, debian-archive-keyring, libapt-pkg5.0 (>= 1.6~alpha6), libc6 (>= 2.15), libgcc1 (>= 1:3.0), libgnutls30 (>= 3.5.6), libseccomp2 (>=1.0.1), libstdc++6 (>= 5.2)
Recommends: ca-certificates
Suggests: apt-doc, aptitude | synaptic | wajig, dpkg-dev (>= 1.17.2), gnupg | gnupg2 | gnupg1, powermgmt-base, python-apt
Breaks: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~), aptitude (< 0.8.10)
Replaces: apt-transport-https (< 1.5~alpha4~), apt-utils (< 1.3~exp2~)
Provides: apt-transport-https (= 1.6~alpha6)
Description: commandline package manager
This package provides commandline tools for searching and managing as well as querying information about packages as a low-level access to all features of the libapt-pkg library.

These include:
* apt-get for retrieval of packages and information about them from authenticated sources and for installation, upgrade and removal of packages together with their dependencies
* apt-cache for querying available information about installed as well as installable packages
* apt-cdrom to use removable media as a source for packages
* apt-config as an interface to the configuration settings
* apt-key as an interface to manage authentication keys

Now while I love all the various tools that the apt package has, I do have special fondness for $apt-cache rdepends $package

as it gives another overview of a package or library or shared library that I may be interested in and which other packages are in its orbit.

Over a period of time it becomes easier to forget packages that you don’t use day-to-day, hence a tool where you can put personal notes about packages would be a god-send. Another use could be reminders of tickets posted upstream or something along those lines. I don’t know of any tool/package which does something along those lines, though the install-date part at least can be approximated from dpkg’s own log, as sketched below. [/off-topic, wishlist]
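
As a very rough approximation of the install-date part of that wishlist (not the usage counting or the personal notes), dpkg already logs when packages get installed, so something like the following works on a stock Debian system; the patterns and file names below are just examples:

grep " install " /var/log/dpkg.log        # recent installs, with timestamps
zgrep " install " /var/log/dpkg.log.*.gz  # older, rotated logs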

/home – 1.2 TB

swap – 25.2 GB

I admit I went a bit overboard on swap space, but as and when I get more memory I should at least have swap at a 1:1 ratio, right? I am not sure if the old rules still apply or not.

Then I used Debian buster alpha 2 netinstall iso

https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/debian-buster-DI-alpha2-amd64-netinst.iso and put it on the usb stick. I did use sha1sum to ensure that the downloaded netinstall iso matched the published checksums at https://cdimage.debian.org/cdimage/buster_di_alpha2/amd64/iso-cd/SHA1SUMS

After that, a simple dd with the right if= and of= arguments was enough to copy the netinstall image to the usb stick.
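
For the record, the verify-and-copy step amounts to something like the following; /dev/sdX is a placeholder for whatever device node the usb stick shows up as, and dd will overwrite that device completely:

sha1sum debian-buster-DI-alpha2-amd64-netinst.iso    # compare against the entry in SHA1SUMS
sudo dd if=debian-buster-DI-alpha2-amd64-netinst.iso of=/dev/sdX bs=4M status=progress
sync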

I did have some issues with the installation, which I’ll share in the next post, but the most critical issue was that I had to make a /boot again, and even though I made /boot a separate partition and gave 1 GB to it during the partitioning step, I got only 100 MB and I have no idea why that is.

/dev/sda5 3718037504 3718232063 194560 95M Linux filesystem

> df -h /boot
Filesystem Size Used Avail Use% Mounted on
/dev/sda5 88M 68M 14M 84% /boot

home/shirish> ls -lh /boot
total 55M
-rw-r--r-- 1 root root 193K Dec 22 19:42 config-4.14.0-2-amd64
-rw-r--r-- 1 root root 193K Jan 15 01:15 config-4.14.0-3-amd64
drwx------ 3 root root 1.0K Jan 1 1970 efi
drwxr-xr-x 5 root root 1.0K Jan 20 10:40 grub
-rw-r--r-- 1 root root 19M Jan 17 10:40 initrd.img-4.14.0-2-amd64
-rw-r--r-- 1 root root 21M Jan 20 10:40 initrd.img-4.14.0-3-amd64
drwx------ 2 root root 12K Jan 1 17:49 lost+found
-rw-r--r-- 1 root root 2.9M Dec 22 19:42 System.map-4.14.0-2-amd64
-rw-r--r-- 1 root root 2.9M Jan 15 01:15 System.map-4.14.0-3-amd64
-rw-r--r-- 1 root root 4.4M Dec 22 19:42 vmlinuz-4.14.0-2-amd64
-rw-r--r-- 1 root root 4.7M Jan 15 01:15 vmlinuz-4.14.0-3-amd64

root@debian:/boot/efi/EFI# ls -lh
total 3.0K
drwx------ 2 root root 1.0K Dec 31 21:38 Boot
drwx------ 2 root root 1.0K Dec 31 19:23 debian
drwx------ 4 root root 1.0K Dec 31 21:32 Microsoft

I would be the first to say I don’t really understand this EFI business.

The only thing I do understand is that it’s good that, even without an OS, it becomes easier to see which components you change or add would or would not work; with a legacy BIOS, getting info on components was iffy at best.

There have been other issues with EFI which I may take up in another blog post, but for now I would be happy if somebody can share –

how to have a big /boot so it’s not a small partition for the Debian boot files. I don’t see any value in having a bigger /boot for MS-Windows unless there is a way to also get a grub2 pointer/header added to the MS-Windows bootloader. I will share the reasons for this in the next blog post.

I am open to reinstalling both MS-Windows and Debian from scratch, although that would happen when debian-buster-alpha3 arrives. Any answer to the above would give me a solution to try, and I will share if I get the desired result.

Looking forward to your answers.

Planet DebianLouis-Philippe Véronneau: French Gender-Neutral Translation for Roundcube

Here's a quick blog post to tell the world I'm now doing a French gender-neutral translation for Roundcube.

A while ago, someone wrote on the Riseup translation list to complain about the current fr_FR translation. French is indeed a very gendered language and it is commonplace in radical spaces to use gender-neutral terminology.

So yeah, here it is: https://github.com/baldurmen/roundcube_fr_FEM

I haven't tested the UI integration yet, but I'll do that once the Riseup folks integrate it to their Roundcube instance.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 1 – Kernel Miniconf

Look out for what’s in the security pipeline – Casey Schaufler

Old Protocols

  • SELinux
    • Not much changing
  • Smack
    • Network configuration improvements and catch-up with how the netlabel code wants things to be done.
  • AppArmor
    • Labeled objects
    • Networking
    • Policy stacking

New Security Modules

  • Some people think existing security modules don’t work well with what they are doing
  • Landlock
    • eBPF extension to SECMARK
    • Kills processes when it goes outside of what it should be doing
  • PTAGS
    • General purpose process tags
    • For application use (the app can decide what it wants based on tags, not something external to the process enforcing things)
  • HardChroot
    • Limits on chroot jail
    • mount restrictions
  • Safename
    • Prevents creation of unsafe file names
    • start, middle or end characters
  • SimpleFlow
    • Tracks tainted data

Security Module Stacking

  • Problems with incompatibility of module labeling
  • People want different security policy and mechanism in containers than from the base OS
  • Netfilter problems between Smack and AppArmor

Container

  • Containers are a little bit undefined right now. Not a kernel construct
  • But while not kernel constructs, need to work with and support them

Hardening

  • Printing pointers (eg in syslog)
  • Usercopy

 



Planet DebianDirk Eddelbuettel: #15: Tidyverse and data.table, sitting side by side ... (Part 1)

Welcome to the fifteenth post in the rarely rational R rambling series, or R4 for short. There are two posts I have been meaning to get out for a bit, and hope to get to shortly---but in the meantime we are going to start something else.

Another longer-running idea I had was to present some simple application cases with (one or more) side-by-side code comparisons. Why? Well at times it feels like R, and the R community, are being split. You're either with one (increasingly "religious" in their defense of their deemed-superior approach) side, or the other. And that is of course utter nonsense. It's all R after all.

Programming, just like other fields using engineering methods and thinking, is about making choices, and trading off between certain aspects. A simple example is the fairly well-known trade-off between memory use and speed: think e.g. of a hash map allowing for faster lookup at the cost of some more memory. Generally speaking, solutions are rarely limited to just one way, or just one approach. So it pays off to know your tools, and choose wisely among all available options. Having choices is having options, and those tend to have non-negative premiums to take advantage of. Locking yourself into one and just one paradigm can never be better.

In that spirit, I want to (eventually) show a few simple comparisons of code being done two distinct ways.

One obvious first candidate for this is the gunsales repository with some R code which backs an earlier NY Times article. I got involved for a similar reason, and updated the code from its initial form. Then again, this project also helped motivate what we did later with the x13binary package which permits automated installation of the X13-ARIMA-SEATS binary to support Christoph's excellent seasonal CRAN package (and website) for which we now have a forthcoming JSS paper. But the actual code example is not that interesting / a bit further off the mainstream because of the more specialised seasonal ARIMA modeling.

But then this week I found a much simpler and shorter example, and quickly converted its code. The code comes from the inaugural datascience 1 lesson at the Crosstab, a fabulous site by G. Elliot Morris (who may be the highest-energy undergrad I have come across lately) focussed on political polling, forecasts, and election outcomes. Lesson 1 is a simple introduction, and averages some polls of the 2016 US Presidential Election.

Complete Code using Approach "TV"

Elliot does a fine job walking the reader through his code so I will be brief and simply quote it in one piece:


## Getting the polls

library(readr)
polls_2016 <- read_tsv(url("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"))

## Wrangling the polls

library(dplyr)
polls_2016 <- polls_2016 %>%
    filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
library(lubridate)
polls_2016 <- polls_2016 %>%
    mutate(end_date = ymd(end_date))
polls_2016 <- polls_2016 %>%
    right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date),
                                              max(polls_2016$end_date), by="days")))

## Average the polls

polls_2016 <- polls_2016 %>%
    group_by(end_date) %>%
    summarise(Clinton = mean(Clinton),
              Trump = mean(Trump))

library(zoo)
rolling_average <- polls_2016 %>%
    mutate(Clinton.Margin = Clinton-Trump,
           Clinton.Avg =  rollapply(Clinton.Margin,width=14,
                                    FUN=function(x){mean(x, na.rm=TRUE)},
                                    by=1, partial=TRUE, fill=NA, align="right"))

library(ggplot2)
ggplot(rolling_average)+
  geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
  geom_point(aes(x=end_date,y=Clinton.Margin))

It uses five packages to i) read some data off them interwebs, ii) filter / subset / modify it, iii) right (outer) join it with itself, iv) average the per-day polls and then create rolling averages over 14 days, before v) plotting. Several standard verbs are used: filter(), mutate(), right_join(), group_by(), and summarise(). One non-verb function is rollapply(), which comes from zoo, a popular package for time-series data.

Complete Code using Approach "DT"

As I will show below, we can do the same with fewer packages as data.table covers the reading, slicing/dicing and time conversion. We still need zoo for its rollapply() and of course the same plotting code:


## Getting the polls

library(data.table)
pollsDT <- fread("http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv")

## Wrangling the polls

pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
pollsDT[, end_date := as.IDate(end_date)]
pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]),
                                              max(pollsDT[,end_date]), by="days")), on="end_date"]

## Average the polls

library(zoo)
pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), by=end_date]
pollsDT[, Clinton.Margin := Clinton-Trump]
pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
                                   FUN=function(x){mean(x, na.rm=TRUE)},
                                   by=1, partial=TRUE, fill=NA, align="right")]

library(ggplot2)
ggplot(pollsDT) +
    geom_line(aes(x=end_date,y=Clinton.Avg),col="blue") +
    geom_point(aes(x=end_date,y=Clinton.Margin))

This uses several of the components of data.table which are often called [i, j, by=...]. Rows are selected (i), columns are either modified (via := assignment) or summarised (via =), and grouping is undertaken by by=.... The outer join is done by having a data.table object indexed by another, and is pretty standard too. That allows us to do all transformations in three lines. We then create the per-day averages by grouping by day, compute the margin and construct its rolling average as before. The resulting chart is, unsurprisingly, the same.

Benchmark Reading

We can look at how the two approaches do on getting data read into our session. For simplicity, we will read a local file to keep the (fixed) download aspect out of it:

R> url <- "http://elections.huffingtonpost.com/pollster/api/v2/questions/16-US-Pres-GE%20TrumpvClinton/poll-responses-clean.tsv"
R> file <- "/tmp/poll-responses-clean.tsv"
R> download.file(url, destfile=file, quiet=TRUE)
R> res <- microbenchmark(tidy=suppressMessages(readr::read_tsv(file)),
+                       dt=data.table::fread(file, showProgress=FALSE))
R> res
Unit: milliseconds
 expr     min      lq    mean  median      uq      max neval
 tidy 6.67777 6.83458 7.13434 6.98484 7.25831  9.27452   100
   dt 1.98890 2.04457 2.37916 2.08261 2.14040 28.86885   100
R> 

That is a clear relative difference, though the absolute amount of time is not that relevant for such a small (demo) dataset.

Benchmark Processing

We can also look at the processing part:

R> rdin <- suppressMessages(readr::read_tsv(file))
R> dtin <- data.table::fread(file, showProgress=FALSE)
R> 
R> library(dplyr)
R> library(lubridate)
R> library(zoo)
R> 
R> transformTV <- function(polls_2016=rdin) {
+     polls_2016 <- polls_2016 %>%
+         filter(sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"))
+     polls_2016 <- polls_2016 %>%
+         mutate(end_date = ymd(end_date))
+     polls_2016 <- polls_2016 %>%
+         right_join(data.frame(end_date = seq.Date(min(polls_2016$end_date), 
+                                                   max(polls_2016$end_date), by="days")))
+     polls_2016 <- polls_2016 %>%
+         group_by(end_date) %>%
+         summarise(Clinton = mean(Clinton),
+                   Trump = mean(Trump))
+ 
+     rolling_average <- polls_2016 %>%
+         mutate(Clinton.Margin = Clinton-Trump,
+                Clinton.Avg =  rollapply(Clinton.Margin,width=14,
+                                         FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                         by=1, partial=TRUE, fill=NA, align="right"))
+ }
R> 
R> transformDT <- function(dtin) {
+     pollsDT <- copy(dtin) ## extra work to protect from reference semantics for benchmark
+     pollsDT <- pollsDT[sample_subpopulation %in% c("Adults","Likely Voters","Registered Voters"), ]
+     pollsDT[, end_date := as.IDate(end_date)]
+     pollsDT <- pollsDT[ data.table(end_date = seq(min(pollsDT[,end_date]), 
+                                                   max(pollsDT[,end_date]), by="days")), on="end_date"]
+     pollsDT <- pollsDT[, .(Clinton=mean(Clinton), Trump=mean(Trump)), 
+                        by=end_date][, Clinton.Margin := Clinton-Trump]
+     pollsDT[, Clinton.Avg := rollapply(Clinton.Margin, width=14,
+                                        FUN=function(x){mean(x, na.rm=TRUE)}, 
+                                        by=1, partial=TRUE, fill=NA, align="right")]
+ }
R> 
R> res <- microbenchmark(tidy=suppressMessages(transformTV(rdin)),
+                       dt=transformDT(dtin))
R> res
Unit: milliseconds
 expr      min       lq     mean   median       uq      max neval
 tidy 12.54723 13.18643 15.29676 13.73418 14.71008 104.5754   100
   dt  7.66842  8.02404  8.60915  8.29984  8.72071  17.7818   100
R> 

Not quite a factor of two on the small data set, but again a clear advantage. data.table has a reputation for doing really well for large datasets; here we see that it is also faster for small datasets.

Side-by-side

Stripping out the reading as well as the plotting, both of which are about the same, we can compare the essential data operations.

Summary

We found a simple task solved using code and packages from an increasingly popular sub-culture within R, and contrasted it with a second approach. We find the second approach to i) have fewer dependencies, ii) use less code, and iii) run faster.

Now, undoubtedly the former approach will have its staunch defenders (and that is all good and well, after all choice is good and even thirty years later some still debate vi versus emacs endlessly), but I thought it instructive to at least be able to make an informed comparison.

Acknowledgements

My thanks to G. Elliot Morris for a fine example, and of course a fine blog and (if somewhat hyperactive) Twitter account.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaBen Martin: 4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.


The shelves are plywood, which turned out fairly well after a few coats of poly. I knocked the extremely sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke the edge. This is a mixed machine and human build; the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up more bold designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!



Planet DebianRuss Allbery: New year haul

Some newly acquired books. This is a pretty wide variety of impulse purchases, filled with the optimism of a new year with more reading time.

Libba Bray — Beauty Queens (sff)
Sarah Gailey — River of Teeth (sff)
Seanan McGuire — Down Among the Sticks and Bones (sff)
Alexandra Pierce & Mimi Mondal (ed.) — Luminescent Threads (nonfiction anthology)
Karen Marie Moning — Darkfever (sff)
Nnedi Okorafor — Binti (sff)
Malka Older — Infomocracy (sff)
Brett Slatkin — Effective Python (nonfiction)
Zeynep Tufekci — Twitter and Tear Gas (nonfiction)
Martha Wells — All Systems Red (sff)
Helen S. Wright — A Matter of Oaths (sff)
J.Y. Yang — Waiting on a Bright Moon (sff)

Several of these are novellas that were on sale over the holidays; the rest came from a combination of reviews and random on-line book discussions.

The year hasn't been great for reading time so far, but I do have a couple of things ready to review and a third that I'm nearly done with, which is not a horrible start.

Planet DebianShirish Agarwal: PC desktop build, Intel, spectre issues etc.

This is and would be a longish one.

I have been using desktop computers for around a couple of decades now. My first two systems were an Intel Pentium III and then a Pentium Dual-core, the first one on a Kobian/Mercury motherboard. Mercury was the original brand, which was later sold to Kobian, who kept the brand name. The motherboards and the CPU/processor used to be cheap. One could set up a decentish low-end system with display for around INR 40k/-, which seemed decent: as a country we had just come out of the non-alignment movement and had also chosen to come out of isolationist tendencies (technological and otherwise). Most middle-class income families got their first taste of computers after y2k. There were quite a few y2k incomes, which prompted the Government to lower duties further.

One of the highlights shown by CNN (probably CNN International) when satellite TV came in 1991 was the coming down of the Berlin Wall. There were many of us who were completely ignorant of world politics or what was happening in other parts of the world.

Computer systems at that time were considered a luxury item and duties were sky-high (between 1992 and 2001). The launch of Mars Pathfinder and its subsequent successful landing on the Martian surface also catapulted people’s imagination about PCs and micro-processors.

I can still recall the excitement among young people of my age, first seeing the liftoff from Cape Canaveral and later the processed images from Spirit’s cameras showing a desolate, desert-type land. We also witnessed the beginnings of the ‘International Space Station‘ (ISS).

A few of my friends and I had drunk a lot of Carl Sagan and many other sci-fi koolaids/stories. Star Trek, the movies, and the universal values held/shared by them were a major influence on all our lives.

People came to know about citizen-based/distributed science projects, the y2k fear appeared to be unfounded, and all these factors and probably a few more prompted the Government of India to reduce duties on motherboards, processors and components, as well as to take computers off the restricted list, which led to competition and finally the common man being able to dream of a system sooner rather than later. Y2K also kick-started the beginnings of the Indian software industry, which is the bread and butter of many a middle-class man and woman in the service industry using technology directly or indirectly.

In 2002 I bought my first system, an Intel Pentium III on an i810 chipset (integrated graphics) with 256 MB of SDRAM, which was supposed to be sufficient for the tasks it was being used for: some light gaming, some web-mail, watching movies, etc., running on a Mercury board. I don’t remember the code-name, partly because the code-names are/were really weird and partly because it is just too long ago. I remember using Windows ’98 and trying to install one of the early GNU/Linux variants on that machine. If memory serves right, you had to flick a jumper (like a switch) to use the extended memory.

I do not know/remember what happened, but I think somewhere within a year or two in that time-frame Mercury India filed for bankruptcy and the name and manufacturing were sold to Kobian. After Kobian took over ownership, it said it would neither honor the 3/5 year warranty nor even do repairs on the motherboards Mercury had sold; this created a lot of bad will against the company and relegated it to the bottom of the pile for both experienced and new system-builders. Also, Mercury motherboards weren’t reputed/known to have a long life, although the one I had gave me quite a decent run.

The next machine I purchased was a Pentium Dual-core (around 2009/2010), an LGA Willamette which had out-of-order execution; the Meltdown bug which is making news nowadays has history going this far back. I think I bought it at 45nm, which was a huge jump from the previous version, although still in the mATX package. Again the board was from Mercury (Intel 845 chipset; DDR2, 2 GB RAM and SATA came to stay).

So Meltdown has been around for 10-12 odd years and is in everything which uses either Intel or ARM processors.

As you can probably make out, most systems arrived here 2-3 years later than when they were launched in American and/or European markets. Also, business or tourism travel was not as easy, smooth or transparent as it is today. All of which added to the delay in getting new products in India.

Sadly, the Indian market is similar to other countries in that Intel is used in more than 90% of machines. I know of a few institutions (though they are pretty rare) who insisted on and got AMD solutions.

That was the time when Gigabyte came onto the scene, which formed the basis of my Wolfdale-3M 45nm system, which was in the same price range as the earlier models and offered a weeny tiny bit of additional graphics performance. To the best of my knowledge, it was perhaps the first budget motherboard to offer solid-state capacitors. The mobo-processor bundle used to be in the range of INR 7/8k excluding RAM, cabinet etc. I had a Philips 17″ CRT display which ran for a good decade or so, so I just had to get the new cabinet, motherboard, CPU and RAM and was good to go.

A few months later, at a hardware exhibition held in the city, I was invited to a party by Asus, which was just getting a toe-hold in the Indian market. I went to the do and enjoyed myself. They had a small competition where they asked some questions and asked if people had queries. To my surprise, I found that most people there were hardware vendors and, for one reason or another, they chose to remain silent. Hence I got an AMD Asus board. This is separate from a Gigabyte motherboard which I also won the same year, in another competition in the same time-frame. Both were mid-range motherboards (ATX build).

As I had just bought a Gigabyte (mATX) motherboard and had made the build, I had to give both motherboards away, one to a friend and one to my uncle, and both were pleased with the AMD-based mobos, which they paired with AMD processors. At that time AMD had one-upped Intel in both graphics and even raw computing, especially at the mid-range, and they were striving to push into new markets.

Apart from the initial system, most of my systems, when being changed, were in the INR 20-25k/- budget, including any and all accessories I bought later.

The only really expensive parts I purchased have been an external hdd (1 TB WD Passport) and then a ViewSonic 17″ LCD, which together set me back around INR 10k/-, but both have given me adequate performance (both have outlived their warranty years), with the monitor being used almost 24×7 over 6 years or so, of course on GNU/Linux, specifically Debian. Both have been extremely good value for the money.

As I had been exposed to both motherboards, I had been following those and other motherboards as well. What was and has been interesting to observe is that Asus later chose to focus more on the high-end gaming market, while Gigabyte continued to dilute its energy across both mid- and high-end motherboards.

Cut to 2017, and I had seen quite a few reports –

http://www.pcstats.com/NewsView.cfm?NewsID=131618

http://www.digitimes.com/news/a20170904PD207.html

http://www.guru3d.com/news-story/asus-has-the-largest-high-end-intel-motherboard-share.html

All of which points to the fact that Asus had cornered a large percentage of the market, specifically the gaming market, although there are no formal numbers, as both Asus and Gigabyte choose to release only APAC numbers rather than a country-wide split, which would have made for some interesting reading.

Just so that people do not presume anything, there are about 4-5 motherboard vendors in the Indian market. There is Asus at the top (I believe), followed by Gigabyte, with Intel at a distant 3rd place (because it’s too expensive). There are also pockets of ASRock and MSI, and I know of people who follow them religiously, although their mobos are supposed to be somewhat more expensive than the two above. Asus and Gigabyte do try to fight it out with each other, but each has its core competency, I believe, with Asus being used by heavy gamers and overclockers more than Gigabyte.

Anyway, come October 2017 my main desktop died and I was left, as they say, up the creek without a paddle. I didn’t even have Net access for about 3 weeks due to BSNL’s or PMC’s foolishness, and then later small riots breaking out due to the Koregaon Bhima conflict.

This led to a situation where I had to buy/build a system with oldish/half knowledge. I was open to having an AMD system, but both Datacare and Rashi Peripherals, Pune, both of whom used to deal in AMD systems, shared that they had stopped dealing in AMD stuff some time back. While Datacare had AMD mobos, getting processors was an issue. Both vendors are near my home, so if I buy from them getting support becomes a non-issue. I could have gone out of my way to get an AMD processor, but getting support could have been an issue, as I would have had to travel and I do not know those vendors well enough. Hence I fell back to the Intel platform.

I asked around quite a few PC retailers and distributors and found the Asus Prime Z270-P was the only mid-range motherboard available at that time. I did come to know a bit later of other motherboards in the Z270 series, but most vendors didn’t/don’t stock them as there are capital, interest and stocking costs.

History – Historically, there has also been a huge time lag between worldwide announcements of motherboards, processors etc., the announcements of sale in India, and actually getting hands-on with the newest motherboards and processors, as seen above. This has led to quite a bit of frustration for many users. I have known of many a soul visiting Lamington Road, Mumbai to get the latest motherboard or processor. Even to date this system flourishes, as Mumbai has an international airport and there is always demand and people willing to pay a premium for the newest processor/motherboard even before any reviews are in.

I was highly surprised to learn recently that Prime Z370-P motherboards are already selling (just 3 months late) with the Intel 8th generation processors, although these are still trickling in as samples rather than the torrent some of the other motherboard-CPU combos might be.

In the end I bought an Intel i7400 chip and an Asus Prime Z270-P motherboard with 8 GB of 2400 MHz Corsair RAM, a 4 TB WD Green (5400 rpm) HDD, and a Circle 545 cabinet (with the almost criminal 400 Watt SMPS). I later came to know that it’s not really even 400 Watts, but around 20-25% less. The whole package cost me north of INR 50k/-, and I still need to spend on a better SMPS (probably a Corsair or Cooler Master 600/650W SMPS) and a few accessories to complete the system.

I will be changing the PSU most probably next week.

Circle SMPS Picture

Asus motherboard, i7400 and RAM

Disclosure – The neatness you see is not me. I was unsure if I would be able to put the heatsink on the CPU properly, as that is the most sensitive part while building a system. A bent pin on the CPU could play havoc as well as void the warranty on the CPU, the motherboard, or both. One new thing for me was the knobs that can be seen on the heatsink fan, something I hadn’t seen before. The vendor did the fixing of the processor on the mobo for me and tied up the remaining power cables without being asked, for which I am and was grateful, and I would definitely provide him with more business as and when I need components.

Future – While it’s ok for now, I’m still using a pretty old 2-speaker setup which I hope to upgrade to a 2.1/3.1 speaker setup; I also hope to have a full 64 GB of 2400 MHz Kingston/G.Skill/Corsair memory and an M.2 512 GB SSD.

If I do get the Taiwan DebConf bursary, I hope to buy some or all of the above plus a Samsung or some other Android/Replicant/Librem smartphone. I have also been looking for a vastly simplified smartphone for my mum, with big letters and everything, but I have failed to find one in the Indian market. Of course this all depends on whether I get the bursary, and even then on whether global warranty and currency exchange work out in my favor vis-a-vis what I would have to pay in India.

Apart from the above, Taiwan is supposed to be a pretty good source for graphic novels, manga comics, and lots of RPG games at very cheap prices, with covers and hand-drawn material etc. All of this is based upon a few friends’ anecdotal experiences, so I dunno if all of that would still hold true if I manage to be there.

There are also quite a few chip foundries, and maybe during DebConf we could have a visit to one of them if possible. It would be rewarding if the visit were to a 45nm or lower chip foundry, as India is still stuck at the 65nm range to date.

I will share my experience of the board, the CPU, the expectations I had from the Intel chip, and the somewhat disappointing experience of using Debian on the new board in the next post; not necessarily Debian’s fault, but the free software ecosystem being at fault here.

Feel free to point out any mistakes you find, grammatical or otherwise. The blog post has been in the works for over a couple of weeks, so it’s possible for mistakes to creep in.

Planet DebianDirk Eddelbuettel: Rcpp 0.12.15: Numerous tweaks and enhancements

The fifteenth release in the 0.12.* series of Rcpp landed on CRAN today after just a few days of gestation in incoming/.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, and the 0.12.14 release in November 2017, making it the nineteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1288 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a pretty large number of pull requests by a wide variety of authors. Most of these pull requests are very focused on a particular issue at hand. One was larger and ambitious with some forward-looking code for R 3.5.0; however this backfired a little on Windows and is currently "parked" behind a #define. Full details are below.

Changes in Rcpp version 0.12.15 (2018-01-16)

  • Changes in Rcpp API:

    • Calls from exception handling to Rf_warning() now correctly set an initial format string (Dirk in #777 fixing #776).

    • The 'new' Date and Datetime vectors now have is_na methods too. (Dirk in #783 fixing #781).

    • Protect more temporary SEXP objects produced by wrap (Kevin in #784).

    • Use public R APIs for new_env (Kevin in #785).

    • Evaluation of R code is now safer when compiled against R 3.5 (you also need to explicitly define RCPP_PROTECTED_EVAL before including Rcpp.h). Longjumps of all kinds (condition catching, returns, restarts, debugger exit) are appropriately detected and handled, e.g. the C++ stack unwinds correctly (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • The new function Rcpp_fast_eval() can be used for performance-sensitive evaluation of R code. Unlike Rcpp_eval(), it does not try to catch errors with tryEval in order to avoid the catching overhead. While this is safe thanks to the stack unwinding protection, this also means that R errors are not transformed to an Rcpp::exception. If you are relying on error rethrowing, you have to use the slower Rcpp_eval(). On old R versions Rcpp_fast_eval() falls back to Rcpp_eval() so it is safe to use against any versions of R (Lionel in #789). [ Committed but subsequently disabled in release 0.12.15 ]

    • Overly-clever checks for NA have been removed (Kevin in #790).

    • The included tinyformat has been updated to the current version, Rcpp-specific changes are now more isolated (Kirill in #791).

    • Overly picky fall-through warnings by gcc-7 regarding switch statements are now pre-empted (Kirill in #792).

    • Permit compilation on ANDROID (Kenny Bell in #796).

    • Improve support for NVCC, the CUDA compiler (Iñaki Ucar in #798 addressing #797).

    • Speed up tests for NA and NaN (Kirill and Dirk in #799 and #800).

    • Rearrange stack unwind test code, keep test disabled for now (Lionel in #801).

    • Further condition away protect unwind behind #define (Dirk in #802).

  • Changes in Rcpp Attributes:

    • Addressed a missing Rcpp namespace prefix when generating a C++ interface (James Balamuta in #779).
  • Changes in Rcpp Documentation:

    • The Rcpp FAQ now shows Rcpp::Rcpp.plugin.maker() and not the outdated ::: usage applicable to non-exported functions.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianNorbert Preining: TLCockpit v0.8

Today I released v0.8 of TLCockpit, the GUI front-end for the TeX Live Manager tlmgr. I spent the winter holidays updating and polishing it, and also debugging problems that users have reported. Hopefully the new version works better for all.

If you are looking for a general introduction to TLCockpit, please see the blog introducing it. Here I only want to introduce the changes made since the last release:

  • add debug facility: It is now possible to pass -d to tlcockpit to activate debugging; there is also -dd for more verbose output (see the short invocation sketch after this list).
  • select mirror facility: The edit screen for the repository setting now allows selecting from the current list of mirrors, see the following screenshot:
  • initial loading speedup: Till now we used to parse the json output of tlmgr, which included everything the whole database contains. We now load the initial minimal information via info --data and load additional data when details for a package is shown on demand. This should especially make a difference on systems without a compiled json Perl library available.
  • fixed self update: In the previous version, updating the TeX Live Manager itself was not properly working – it was updated but the application itself became unresponsive afterwards. This is hopefully fixed (although this is really tricky).
  • status indicator: The status indicator has moved from the menu bar (where it was somehow a stranger) to below the package listing, and now also includes the currently running command, see screenshot after the next item.
  • nice spinner: Only an eye-candy, but I added a rotating spinner while loading the database, updates, backups, or doing postactions. See the attached screenshot, which also shows the new location of the status indicator and the additional information provided.
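
A minimal sketch of the debug invocations mentioned above, assuming tlcockpit is on the PATH:

tlcockpit -d     # debugging output
tlcockpit -dd    # more verbose debugging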

I hope that this version is more reliable, stable, and easier to use. As usual, please use the issue page of the github project to report problems.

TeX Live should contain the new version starting from tomorrow.

Enjoy.

Don MartiMore brand safety bullshit

There's enough bullshit on the Internet already, but I'm afraid I'm going to quote some more. This time from Ilyse Liffreing at IBM.

The reality is none of us can say with certainty that anywhere in the world, we are [brand] safe. Look what just happened with YouTube. They are working on fixing it, but even Facebook and Google themselves have said there’s not much they can do about it. I mean, it’s hard. It’s not black and white. We are putting a lot of money in it, and pull back on channels where we have concerns. We’ve had good talks with the YouTube teams.

Bullshit.

One important part of this decision is black and white.

Either you give money to Nazis.

Or you don't give money to Nazis.

If Nazis are better at "programmatic" than the resting-and-vesting chill bros at the programmatic ad firms (and, face it, Nazis kick ass at programmatic), then the choice to spend ad money in a we're-kind-of-not-sure-if-this-goes-to-Nazis-or-not way is a choice that puts your brand on the wrong side of a black and white line.

There are plenty of Nazi-free places for brands to run ads. They might not be the cheapest. But I know which side of the line I buy from.


CryptogramFriday Squid Blogging: Te Papa Colossal Squid Exhibition Is Being Renovated

The New Zealand home of the colossal squid exhibit is being renovated.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Breaches Don't Affect Stock Price

Interesting research: "Long-term market implications of data breaches, not," by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies' stock, with a focus on the results relative to the performance of the firms' peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings:

  • While the difference in stock price between the sampled breached companies and their peers was negative (1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to + 0.05%, and on average remained positive through the period assessed.

  • For the differences in the breached companies' betas and the beta of their peer sets, the differences in the means of 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • For the differences in the breached companies' beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60 day correlation 8 months pre- breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R2 greater than 16.15% for response variables of 3, 14, 60, and 90 day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.

  • Based on returns, the most impacted industries at the 3 day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90 day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.

The market isn't going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

Worse Than FailureError'd: Alphabetical Soup

"I appreciate that TIAA doesn't want to fully recognize that the country once known as Burma now calls itself Myanmar, but I don't think that this is the way to handle it," Bruce R. writes.

 

"MSI Installed an update - but I wonder what else it decided to update in the process? The status bar just kept going and going..." writes Jon T.

 

Paul J. wrote, "Apparently my occupation could be 'All Other Persons' on this credit card application!"

 

Geoff wrote, "So I need to commit the changes I didn't make, and my options are 'don't commit' or 'don't commit'?"

 

David writes, "This was after a 15 minute period where I watched a timer spin frantically."

 

"It's as if DealeXtreme says 'three stars, I think you meant to say FIVE stars'," writes Henry N.

 

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

Planet DebianEddy Petrișor: Suppressing color output of the Google Repo tool

On Windows, in the cmd shell, the color control characters generated by the Google Repo tool (or its Windows port made by ESRLabs) or by git appear as garbage. Unfortunately, the Google Repo tool, besides having a non-google-able name, lacks documentation regarding its options, so sometimes the only way to find the option I want is to look in the code.
To avoid repeatedly looking over the code to dig this up, future self, here is how you disable color output in the repo tool with the info subcommand:
repo --color=never info
Other options are 'auto' and 'always', but for some reason 'auto' does not do the right thing (tm) on Windows and garbage is still shown.
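Since git's own colour output produces the same kind of garbage in cmd, it may also help to switch it off globally; this is plain git configuration, independent of the repo tool:
git config --global color.ui false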


Sociological ImagesBros and Beer Snobs

The rise of craft beer in the United States gives us more options than ever at happy hour. Choices in beer are closely tied to social class, and the market often veers into the world of pointlessly gendered products. Classic work in sociology has long studied how people use different cultural tastes to signal social status, but where once very particular tastes showed membership in the upper class—like a preference for fine wine and classical music—a world with more options offers status to people who consume a little bit of everything.

Photo Credit: Brian Gonzalez (Flickr CC)

But who gets to be an omnivore in the beer world? New research published in Social Currents by Helana Darwin shows how the new culture of craft beer still leans on old assumptions about gender and social status. In 2014, Darwin collected posts using gendered language from fifty beer blogs. She then visited four craft beer bars around New York City, surveying 93 patrons about the kinds of beer they would expect men and women to consume. Together, the results confirmed that customers tend to define “feminine” beer as light and fruity and “masculine” beer as strong, heavy, and darker.

Two interesting findings about what people do with these assumptions stand out. First, patrons admired women who drank masculine beer, but looked down on those who stuck to the feminine choices. Men, however, could have it both ways. Patrons described their choice to drink feminine beer as open-mindedness—the mark of a beer geek who could enjoy everything. Gender determined who got “credit” for having a broad range of taste.

Second, just like other exclusive markers of social status, the India Pale Ale held a hallowed place in craft brew culture to signify a select group of drinkers. Just like fancy wine, Darwin writes,

IPA constitutes an elite preference precisely because it is an acquired taste…inaccessible to those who lack the time, money, and desire to cultivate an appreciation for the taste.

Sociology can get a bad rap for being a buzzkill, and, if you’re going to partake, you should drink whatever you like. But this research provides an important look at how we build big assumptions about people into judgments about the smallest choices.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianMike Gabriel: Building packages with Meson and Debhelper version level 11 for Debian stretch-backports

More a reminder for myself, than a blog post...

If you want to backport a project from unstable based on the meson build system and your package uses debhelper to invoke the meson build process, then you need to modify the backported package's debian/control file slightly:

diff --git a/debian/control b/debian/control
index 43e24a2..d33e76b 100644
--- a/debian/control
+++ b/debian/control
@@ -14,7 +14,7 @@ Build-Depends: debhelper (>= 11~),
                libmate-menu-dev (>= 1.16.0),
                libmate-panel-applet-dev (>= 1.16.0),
                libnotify-dev,
-               meson,
+               meson (>= 0.40.0),
                ninja-build,
                pkg-config,
 Standards-Version: 4.1.3

This forces the build to pull in meson from stretch-backports, i.e. a meson version that is at least 0.40.0.

Reasoning: if you build your package against debhelper (>= 11~) from stretch-backports, it will use the --wrap-mode option when invoking meson. However, this option was only added in meson 0.40.0, so you need to make sure the meson version from stretch-backports gets pulled in for your build as well. The build will fail with the meson version found in plain Debian stretch.
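
A quick sanity check that the build environment will actually resolve the backported meson (assuming stretch-backports is enabled in the chroot or on the build machine):

apt-cache policy meson

This should list a candidate version of 0.40.0 or newer coming from stretch-backports.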

TEDNew clues about the most mysterious star in the universe, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

New clues about the most mysterious star in the universe. KIC 8462852 (often called “Tabby’s star,” after the astronomer Tabetha Boyajian, who led the first study of the star) intermittently dims as much as 22% and then brightens again, for a reason no one has yet quite figured out. This bizarre occurrence led astronomers to propose over a dozen theories for why the star might be dimming, including the fringe theory that it was caused by an alien civilization using the star’s energy. Now, new data shows that the dimming isn’t fully opaque; certain colors of light are blocked more than others. This suggests that what’s causing the star to dim is dust. After all, if an opaque object — like a planet or alien megastructure — was passing in front of the star, all of the light would be blocked equally. Tabby’s star is due to become visible again in late February or early March of 2018. (Watch Boyajian’s TED Talk)

TED’s new video series celebrates the genius design of everyday objects. What do the hoodie, the London Tube Map, the hyperlink, and the button have in common? They’re everyday objects, often overlooked, that have profoundly influenced the world around us. Each 3- to 4- minute episode of TED’s original video series Small Thing Big Idea celebrates one of these objects, with a well-known name in design explaining what exactly makes it so great. First up is Michael Bierut on the London Tube Map. (Watch the first episode here and tune in weekly on Tuesday for more.)

The science of black holes. In the new PBS special Black Hole Apocalypse, astrophysicist Janna Levin explores the science of black holes, what they are, why they are so powerful and destructive, and what they might tell us about the very origin of our existence. Dubbing them the world’s greatest mystery, Levin and her fellow scientists, including astronomer Andrea Ghez and experimental physicist Rainer Weiss, embark on a journey to portray the magnitude and importance of these voids that were long left unexplored and unexplained. (Watch Levin’s TED Talk, Ghez’s TED Talk, and read Weiss’ Ideas piece.)

An organized crime thriller with non-fiction roots. McMafia, a television show starring James Norton, premiered in the UK in early January. The show is a fictionalized account of Misha Glenny’s 2008 non-fiction book of the same name. The show focuses on Alex Goldman, the son of an exiled Mafia boss who wants to put his family’s history behind him. Unfortunately, a murder foils his plans and to protect his family, he must face up to various international crime syndicates. (Watch Glenny’s TED Talk)

Inside the African-American anti-abortion movement. In her new documentary for PBS’ Frontline, Yoruba Richen examines the complexities of the abortion debate as it relates to the US’ racial history. Richen speaks with African-American members of both the abortion-rights and the anti-abortion movements, as her short doc follows a group of anti-abortion activists as they work in the black community. (Watch Richen’s TED Talk.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Worse Than FailureCodeSOD: The Least of the Max

Adding assertions and sanity checks to your code is important, especially when you’re working in a loosely-typed language like JavaScript. Never assume the input parameters are correct, assert what they must be. Done correctly, they not only make your code safer, but also easier to understand.

Matthias’s co-worker… doesn’t exactly do that.

      function checkPriceRangeTo(x, min, max) {
        if (max == 0) {
          max = valuesPriceRange.max
        }
        min = Math.min(min, max);
        max = Math.max(min, max);
        x = parseInt(x)
        if (x == 0) {
          x = 50000
        }

        //console.log(x, 'min:', min, 'max:', max);
        return x >= min && x <= max
      }

This code isn’t bad, per se. I knew a kid, Marcus, in middle school that wore the same green sweatshirt every day, and had a musty 19th Century science textbook that discussed phlogiston in his backpack. Over lunch, he was happy to strike up a conversation with you about the superiority of phlogiston theory over Relativity. He wasn’t bad, but he was annoying and not half as smart as he thought he was.

This code is the same. Sure, x might not be a numeric value, so let’s parseInt first… which might return NaN. But we don’t check for NaN, we check for 0. If x is 0, then make it 50,000. Why? No idea.

The real treat, though, is the flipping of min/max. If the calling code got the bounds backwards (min=6, max=1), then instead of swapping them, which is obviously the intent, this code makes them both equal to the lower of the two.
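For contrast, a minimal sketch of what the function was presumably meant to do — swap reversed bounds and reject non-numeric input — could look like this (the max == 0 and x == 0 special cases are left out, since their intent is anyone's guess):

      function checkPriceRangeTo(x, min, max) {
        // Swap reversed bounds instead of collapsing both to the smaller value
        if (min > max) {
          [min, max] = [max, min];
        }
        x = parseInt(x, 10);
        // A value that fails to parse is simply out of range
        if (Number.isNaN(x)) {
          return false;
        }
        return x >= min && x <= max;
      }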

In the end, Matthias has one advantage in dealing with this pest, that I didn’t have in dealing with Marcus. He could actually make it go away. I just had to wait until the next year, when we didn’t have lunch at the same time.


Planet DebianJoey Hess: cubietruck temperature sensor

I wanted to use 1-wire temperature sensors (DS18B20) with my Cubietruck board, running Debian. The only page I could find documenting this is for the sunxi kernel, not the mainline kernel Debian uses. After a couple of hours of research I got it working, so here goes.

wiring

First you need to pick a GPIO pin to use for the 1-wire signal. The Cubietruck's GPIO pins are documented here, and I chose to use pin PG8. Other pins should work as well, although I originally tried to use PB17 and could not get it to work for an unknown reason. I also tried to use PB18 but there was a conflict with something else trying to use that same pin. To find a free pin, cat /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins and look for a line like: "pin 200 (PG8): (MUX UNCLAIMED) (GPIO UNCLAIMED)"

Now wire the DS18B20 sensor up. With its flat side facing you, the left pin goes to ground, the center pin to PG8 (or whatever GPIO pin you selected), and the right pin goes to 3.3V. Don't forget to connect the necessary 4.7K ohm resistor between the center and right pins.

You can find plenty of videos showing how to wire up the DS18B20 on youtube, which typically also involve a quick config change to a Raspberry Pi running Raspbian to get it to see the sensor. With Debian it's unfortunately quite a lot more complicated, and so this blog post got kind of long.

configuration

We need to get the kernel to enable the GPIO pin. This seems like a really easy thing, but this is where it gets really annoying and painful.

You have to edit the Cubietruck's device tree. So apt-get source linux and in there edit arch/arm/boot/dts/sun7i-a20-cubietruck.dts

In the root section ('/'), near the top, add this:

    onewire_device {
       compatible = "w1-gpio";
       gpios = <&pio 6 8 GPIO_ACTIVE_HIGH>; /* PG8 */
       pinctrl-names = "default";
       pinctrl-0 = <&my_w1_pin>;
    };

In the `&pio` section, add this:

    my_w1_pin: my_w1_pin@0 {
         allwinner,pins = "PG8";
         allwinner,function = "gpio_in";
    };

Note that if you used a different pin than PG8 you'll need to change that. The "pio 6 8" means letter G, pin 8. The 6 is because G is the 7th letter of the alphabet. I don't know where this is documented; I reverse engineered it from another example. Why this can't be hex, or octal, or symbolic names or anything sane, I don't know.
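As a sanity check, the global pin number reported by pinmux-pins seems to follow bank*32 + pin, which is consistent with PG8 showing up as pin 200 above:

    # Bank G is index 6 (A=0) and the pin is 8; 6*32 + 8 = 200,
    # matching the "pin 200 (PG8)" line from pinmux-pins.
    echo $(( 6 * 32 + 8 ))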

Now you'll need to compile the dts file into a dtb file. One way is to configure the kernel and use its Makefile; I avoided that by first sudo apt-get install device-tree-compiler and then running, in the top of the linux source tree:

cpp -nostdinc -I include -undef -x assembler-with-cpp \
    ./arch/arm/boot/dts/sun7i-a20-cubietruck.dts | \
    dtc -O dtb -b 0 -o sun7i-a20-cubietruck.dtb -

You'll need to install that into /etc/flash-kernel/dtbs/sun7i-a20-cubietruck.dtb on the cubietruck. Then run flash-kernel to finish installing it.
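Something like this should do it (assuming the freshly built dtb sits in the current directory on the cubietruck):

    sudo mkdir -p /etc/flash-kernel/dtbs
    sudo cp sun7i-a20-cubietruck.dtb /etc/flash-kernel/dtbs/
    sudo flash-kernel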

use

Now reboot, and if all went well, it'll come up and the GPIO pin will finally be turned on:

# grep PG8 /sys/kernel/debug/pinctrl/1c20800.pinctrl/pinmux-pins
pin 200 (PG8): onewire_device 1c20800.pinctrl:200 function gpio_in group PG8

And if you picked a GPIO pin that works and got the sensor wired up correctly, in /sys/bus/w1/devices/ there should be a subdirectory for the sensor, using its unique ID. Here I have two sensors connected, which 1-wire makes easy to do, just hang them all off the same wire.. er wires.

root@honeybee:/sys/bus/w1/devices> ls
28-000008290227@  28-000008645973@  w1_bus_master1@
root@honeybee:/sys/bus/w1/devices> cat *-*/w1_slave
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375
f6 00 4b 46 7f ff 0a 10 d6 : crc=d6 YES
f6 00 4b 46 7f ff 0a 10 d6 t=15375

So, it's 15.37 Celsius in my house. I need to go feed the fire, this took too long to get set up.

future work

Are you done at this point? I fear not entirely, because what happens when there's a kernel upgrade? If the device tree has changed in some way in the new kernel, you might need to update the modified device tree file. Or it might not boot properly or not work in some way.

With Raspbian, you don't need to modify the device tree. Instead it has support for device tree overlay files, which add some entries to the main device tree. The distribution includes a bunch of useful overlays, including one that enables GPIO pins. The Raspberry Pi's bootloader takes care of merging the main device tree and the selected overlays.

There are u-boot patches to do such merging, or the merging could be done before reboot (by flash-kernel perhaps), but apparently Debian's device tree files are built without the phandle-based referencing needed for that to work. (See http://elektranox.org/2017/05/0020-dt-overlays/)

There's also a kernel patch to let overlays be loaded on the fly using configfs. It seems to have been around for several years without being merged, for whatever reason, but would avoid this problem nicely if it ever did get merged.

,

Planet DebianThorsten Alteholz: First steps with arm64

As it was Christmas time recently, I wanted to treat myself to something special, so I ordered a Macchiatobin from SolidRun. Unfortunately their quoted delivery times are no exaggeration, and I had to wait about two months for my device. I couldn't celebrate Christmas with it, but fortunately I could celebrate New Year's.

Anyway, first I tried to use the included U-Boot to start the Debian installer on a USB stick. Oh boy, that was a bad idea and in retrospect just a waste of time. But there is debian-arm@l.d.o and Steve McIntyre was so kind as to help me out of my vale of tears.

First I put the EDK2 flash image from Leif on an SD card, set the jumper on the board to boot from it (for SD card boot, the rightmost jumper has to be set!) and off we went. Afterwards I put the debian-testing-arm64-netinst.iso on a USB stick and tried to start that. Unfortunately I was hit by #887110 and had to use a mini installer from here. Installation went smoothly, and as a last step I had to start the rescue mode and install grub to the removable media path. It is an extra menu option in the installer, so there's no need to enter cryptic commands :-).

Voila, rebooted and my Macchiatobin is up and running.

Planet DebianMatthew Garrett: Privacy expectations and the connected home

Traditionally, devices that were tied to logins tended to indicate that in some way - turn on someone's xbox and it'll show you their account name, run Netflix and it'll ask which profile you want to use. The increasing prevalence of smart devices in the home changes that, in ways that may not be immediately obvious to the majority of people. You can configure a Philips Hue with wall-mounted dimmers, meaning that someone unfamiliar with the system may not recognise that it's a smart lighting system at all. Without any actively malicious intent, you end up with a situation where the account holder is able to infer whether someone is home without that person necessarily having any idea that that's possible. A visitor who uses an Amazon Echo is not necessarily going to know that it's tied to somebody's Amazon account, and even if they do they may not know that the log (and recorded audio!) of all interactions is available to the account holder. And someone grabbing an egg out of your fridge is almost certainly not going to think that your smart egg tray will trigger an immediate notification on the account owner's phone that they need to buy new eggs.

Things get even more complicated when there's multiple account support. Google Home supports multiple users on a single device, using voice recognition to determine which queries should be associated with which account. But the account that was used to initially configure the device remains as the fallback, with unrecognised voices ending up being logged to it. If a voice is misidentified, the query may end up being logged to an unexpected account.

There's some interesting questions about consent and expectations of privacy here. If someone sets up a smart device in their home then at some point they'll agree to the manufacturer's privacy policy. But if someone else makes use of the system (by pressing a lightswitch, making a spoken query or, uh, picking up an egg), have they consented? Who has the social obligation to explain to them that the information they're producing may be stored elsewhere and visible to someone else? If I use an Echo in a hotel room, who has access to the Amazon account it's associated with? How do you explain to a teenager that there's a chance that when they asked their Home for contact details for an abortion clinic, it ended up in their parent's activity log? Who's going to be the first person divorced for claiming that they were vegan but having been the only person home when an egg was taken out of the fridge?

To be clear, I'm not arguing against the design choices involved in the implementation of these devices. In many cases it's hard to see how the desired functionality could be implemented without this sort of issue arising. But we're gradually shifting to a place where the data we generate is not only available to corporations who probably don't care about us as individuals, it's also becoming available to people who own the more private spaces we inhabit. We have social norms against bugging our houseguests, but we have no social norms that require us to explain to them that there'll be a record of every light that they turn on or off. This feels like it's going to end badly.

(Thanks to Nikki Everett for conversations that inspired this post)

(Disclaimer: while I work for Google, I am not involved in any of the products or teams described in this post and my opinions are my own rather than those of my employer's)


Valerie AuroraGetting free of toxic tech culture

This post was co-authored by Valerie Aurora and Susan Wu, and cross-posted on both our blogs.

Marginalized people leave tech jobs in droves, yet we rarely write or talk publicly about the emotional and mental process of deciding to leave tech. It feels almost traitorous to publicly discuss leaving tech when you’re a member of a marginalized group – much less actually go through with it.

There are many reasons we feel this way, but a major reason is that the “diversity problem in tech” is often framed as being caused by marginalized people not “wanting” to be in tech enough: not taking the right classes as teenagers, not working hard enough in university, not “leaning in” hard enough at our tech jobs. In this model, it is the moral responsibility of marginalized people to tolerate unfair pay, underpromotion, harassment, and assault in order to serve as role models and mentors to the next generation of marginalized people entering tech. With this framing, if marginalized people end up leaving tech to protect ourselves, it’s our duty to at least keep quiet about it, and not scare off other marginalized people by sharing our bad experiences.

[Image: a green plant growing out of a printer converted to a planter. CC BY-SA Ben Stanfield https://flic.kr/p/2CjHL]

Under that model, this post is doubly taboo: it’s a description of how we (Susan and Valerie) went through the process of leaving toxic tech culture, as a guide to other marginalized people looking for a way out. We say “toxic tech culture” because we want to distinguish between leaving tech entirely, and leaving areas of tech which are abusive and harmful. Toxic tech culture comes in many forms: the part of Silicon Valley VC hypergrowth culture that deifies founders as “white, male, nerds who’ve dropped out of Harvard or Stanford,” the open source software ecosystem that so often exploits and drives away its best contributors, and the scam-riddled cryptocurrency community, to name just three.

What is toxic tech culture? Toxic tech cultures are those that demean and devalue you as holistic, multifaceted human beings. Toxic tech cultures are those that prioritize profits and growth over human and societal well being. Toxic tech cultures are those that treat you as replaceable cogs within a system of constant churn and burnout.

But within tech there are exceptions to the rule: technology teams, organizations, and communities where marginalized people can feel a degree of safety, belonging, and purpose. You may be thinking about leaving all of tech, or leaving a particular toxic tech culture for a different, better tech culture; either way, we hope this post will be useful to you.

A little about us: Valerie spent more than ten years working as a software engineer, specializing in file systems, Linux, and operating systems. Susan grew up on the Internet, and spent 25 years as a software developer, a community builder, an investor, and a VC-backed Silicon Valley founder. We were both overachievers who advanced quickly in our fields – until we could no longer tolerate the way we were treated, or be complicit in a system that did not match our values. Valerie quit her job as a programmer to co-found a tech-related non-profit for women, and now teaches ally skills to tech workers. Susan relocated to France and Australia, co-founded Project Include, a nonprofit dedicated to improving diversity and inclusion in tech, and is now launching a new education system. We are both still involved in tech to various degrees, but on our own terms, and we are much happier now.

We disagree that marginalized people should stay silent about how and why they left toxic tech culture. When, for example, more than 50% of women in tech leave after 12 years, there is an undeniable need for sharing experience and hard-learned lessons. Staying silent about the unfairness that 37% of underrepresented people of color cite as a reason they left tech helps no one.

We reject the idea that it is the “responsibility” of marginalized people to stay in toxic tech culture despite abuse and discrimination, solely to improve the diversity of tech. Marginalized people have already had to overcompensate for systemic sexist, ableist, and racist biases in order to earn their roles in tech. We believe people with power and privilege are responsible for changing toxic tech culture to be more inclusive and fair to marginalized people. If you want more diversity in tech, don’t ask marginalized people to be silent, to endure often grievous discrimination, or to take on additional unpaid, unrecognized labor – ask the privileged to take action.

For many marginalized people, our experience of being in tech includes traumatic experience(s) which we may not yet have fully come to terms with and that influenced our decisions to leave. Sometimes we don’t make a direct connection between the traumatic experiences and our decision to leave. We just find that we are “bored” and are no longer excited about our work, or start avoiding situations that used to be rewarding, like conferences, speaking, and social events. Often we don’t realize traumatic events are even traumatic until months or years later. If you’ve experienced trauma, processing the trauma is necessary, whether or not you decide to leave toxic tech culture.

This post doesn’t assume that you are sure that you want to leave your current area of tech, or tech as a whole. We ourselves aren’t “sure” we want to permanently leave the toxic tech cultures we were part of even now – maybe things will get better enough that we will be willing to return. You can take the steps described in this post and stay in your current area of tech for as long as you want – you’ll just be more centered, grounded, and happy.

The steps we took are described in roughly the order we took them, but they all overlapped and intermixed with each other. Don’t feel like you need to do things in a particular order or way; this is just to give you some ideas on what you could do to work through your feelings about leaving tech and any related trauma.

Step 1: Deprogram yourself from the cult of tech

The first step is to start deprogramming yourself from the cult of tech. Being part of toxic tech culture has a lot in common with being part of a cult. How often have you heard a Silicon Valley CEO talk about how his (it’s almost always a he) startup is going to change the world? The refrain of how a startup CEO is going to save humanity is so common that it’s actually uncommon for a CEO to not use saviour language when describing their startup. Cult leaders do the same thing: they create a unique philosophy, imbued with some sort of special message that they alone can see or hear, convince people that only they have the answers for what ails humanity, and use that influence to control the people around them.

Consider this list of how to identify a cult, and how closely this list mirrors patterns we can observe in Silicon Valley tech:

  • “Be wary of any leader who proclaims him or herself as having special powers or special insight.” How often have you heard a Silicon Valley founder or CEO proclaimed as some sort of genius, and they alone can figure out how to invent XYZ? Nearly every day, there’s some deific tribute to Elon Musk or Mark Zuckerberg in the media.
  • “The group is closed, so in other words, although there may be outside followers, there’s usually an inner circle that follows the leader without question, and that maintains a tremendous amount of secrecy.” The Information just published a database summarizing how secretive, how protective, how insular the boards are for the top 30 private companies in tech. Here’s what they report: “Despite their enormous size and influence, the biggest privately held technology companies eschew some basic corporate governance standards, blocking outside voices, limiting decision making to small groups of mostly white men and holding back on public disclosures, an in-depth analysis by The Information shows.”
  • “A very important aspect of cult is the idea that if you leave the cult, horrible things will happen to you.” There’s an insidious reason why your unicorn startup provides you with a free cafeteria, gym, yoga rooms, and all night snack bars: they never want you to leave. And if you do leave the building, you can stay engaged with Slack, IM, SMS, and every other possible communications tool so that you can never disconnect. They then layer over this with purported positive cultural messaging around how lucky, how fortunate you are to have landed this job — you were the special one selected out of thousands of candidates. Nobody else has it as good as we do here. Nobody else is as smart, as capable, as special as our team. Nobody else is building the best, most impactful solutions to solve humanity’s problems. If you fall off this treadmill, you will become irrelevant, you’ll be an outsider, a consumer instead of a builder, you’ll never be first on the list for the Singularity, when it happens. You’ll be at the shit end of the income inequality distribution funnel.

Given how similar toxic tech culture (and especially Silicon Valley tech culture) is to cult culture, leaving tech often requires something like cult-deprogramming techniques. We found the following steps especially useful for deprogramming ourselves from the cult of tech: recognizing our unconscious beliefs, experimenting with our identity, avoiding people who don’t support us, and making friendships that aren’t dependent on tech.

Recognize your unconscious beliefs

One cult-like aspect of toxic tech culture is a strong moral us-vs-them dichotomy: either you’re “in tech,” and you’re important and smart and hardworking and valuable, or you are not “in tech” because you are ignorant and untalented and lazy and irrelevant. (What are the boundaries of “in tech?” Well, the more privileged you are, the more likely people will define you as “in tech” – so be generous to yourself if you are part of a marginalized group. Or read more about the fractal nature of the gender binary and how it shows up in tech.)

We didn’t realize how strongly we’d unconsciously adopted this belief that people in tech were better than those who weren’t until we started to imagine ourselves leaving tech and felt a wave of self-judgment and fear. Early on, Valerie realized that she unconsciously thought of literally every single job other than software engineer as “for people who weren’t good enough to be a software engineer” – and that she thought this because other software engineers had been telling her that for her entire career. Even now, as Susan is launching a new education startup in Australia, she’s trying to be careful not to assume that just because people are doing things in a way that isn’t the “Silicon Valley, lean startup, agile way,” they’re automatically doing it wrong. In reality, the best way in which to do things is probably not based on any particular dogma, but one that reflects a healthy balance of diverse perspectives and styles.

The first step to ridding yourself of the harmful belief that only people who are “in tech” or doing things in a “startup style” are good or smart or valuable is surfacing the unconscious belief to the conscious level, so you can respond to it. Recognize and name that belief when it comes up: when you think about leaving your job and feel fear, when you meet a new person and immediately lose interest when you learn their job is not “technical,” when you notice yourself trying to decide if someone is “technical enough.” Say to yourself, “I am experiencing the belief that only people I consider technical are valuable. This isn’t true. I believe everyone is valuable regardless of their job or level of technical knowledge.”

Experiment with your self-identity

The next step is to experiment with your own self-identity. Begin thinking of yourself as having different non-tech jobs or self-descriptions, and see what thoughts come up. React to those thoughts as though you were reacting to a friend you care about who was saying those things about them. Try to find positive things to think and say about your theoretical new job and new life. Think about people you know with that job and ask yourself if you would say negative things about their job to them. Some painful thoughts and experiences will come up during this time; aim to recognize them consciously and process them, rather than trying to stuff them down or make them go away.

When you live in Silicon Valley, it’s easy for your work life to consume 95% of your waking hours — this is how startups are designed, after all, with their endless perks and pressures to socialize within the tribe. Often times, promotions go hand in hand with socializing successfully within the startup scene. What can you do to carve out several hours a week just for yourself, and an alternate identity that isn’t defined by success within toxic tech culture? How do you make space for self care? For example, Susan began to take online writing courses, and found that the outlet of interacting with poets and fiction writers helped ground her.

If necessary, change the branding of your personal life. Stop wearing tech t-shirts and get shirts that reflect some other part of your self. Get a different print for your office wall. Move the tech books into one out-of-the-way shelf and donate any you don’t use right now (especially the ones that you have been planning to read but never got around to). Donate most of your conference schwag and stop accepting new schwag. Pack away the shelf of tech-themed tchotchkes or even (gasp) throw them away. Valerie went to a “burn party” on Ocean Beach, where everyone brought symbols of old jobs that they were happy to be free of and symbolically burned them in a beach bonfire. You might consider a similar ritual.

De-emphasize tech in your self-presentation. Change any usernames that reference your tech interests. Rewrite any online bios or descriptions to emphasize non-tech parts of your life. Start introducing yourself by talking about your non-tech hobbies and interests rather than your job. You might even try introducing yourself to new people as someone whose primary job isn’t tech. Valerie, who had been writing professionally for several years, started introducing herself as a writer at tech events in San Francisco. People who would have talked to her had she introduced herself as a Linux kernel developer would immediately turn away without a second word. Counterintuitively, this made her more determined to leave her job, when she saw how inconsiderate her colleagues were when she did not make use of her technical privilege.

Avoid unsupportive people

Identify any people in your life who are consistently unsupportive of you, or only supportive when you perform to their satisfaction, and reduce your emotional and financial dependence on them. If you have friends or idols who are unhelpfully critical or judgemental, take steps to see or hear from them less often. Don’t seek out their opinion and don’t stoke your admiration for them. This will be difficult the closer and more dependent you are on the person; if your spouse or manager is one of these people, you have our sympathy. For more on this dynamic and how to end it, see this series of posts about narcissism, co-narcissism, and tech.

Depressingly often, we especially seek the approval of people who give approval sparingly (think about the popularity of Dr. House, who is a total jerk). If you find yourself yearning for the approval of someone in tech who has been described as an “asshole,” this is a great time to stop. Some helpful tips to stop seeking the approval of an asshole: make a list of cruel things they’ve done, make a list of times they were wrong, stop reading their writing or listening to their talks, filter them out of your daily reading, talk to people who don’t know who that person is or care what they think, listen to people who have been hurt by them, and spend more time with people who are kind and nurturing.

At the same time, seek out and spend more time with people who are generally supportive of you, especially people who encourage experimentation and personal change. You may already have many of these people in your life, but don’t spend much time thinking about them because you can depend on their friendship and support. Reach out to them and renew your relationship.

Make friendships that don’t depend on tech

If your current social circle consists entirely of people who are fully bought into toxic tech culture, you may not have anyone in your life willing to support a career change. To help solve this, make friendships that aren’t dependent on your identity as a person in tech. The goal is to have a lot of friendships that aren’t dependent on your being in tech, so that if you decide to leave, you won’t lose all your friends at the same time as your job. Being friends with people who aren’t in tech will help you get an outside perspective on the kind of tech culture you are part of. It also helps you envision a future for yourself that doesn’t depend on being in toxic tech culture. You can still have lots of friends in tech, you are just aiming for diversity in your friendships.

One way to make this easier is to focus on your existing friendships that are “near tech,” such as people working in adjacent fields that sometimes attend tech conferences, but aren’t “in tech” themselves. Try also getting a new hobby, being more open to invitations to social events, and contacting old friends you’ve fallen out of touch with. Spend less time attending tech-related events, especially if you currently travel to a lot of tech conferences. It’s hard to start and maintain new local friendships when you’re constantly out of town or working overtime to prepare a talk for a conference. If you have a set of conferences you attend every year, it will feel scary the first time you miss one of them, but you’ll notice how much more time you have to spend with your local social circle.

Making friends outside of your familiar context (tech co-workers, tech conferences, online tech forums) is challenging for most people. If you learned how to socialize entirely in tech culture, you may also need to learn new norms and conventions (such as how to have a conversation that isn’t about competing to show who knows more about a subject). Both Valerie and Susan experienced this when we started trying to make friends outside of toxic tech culture: all we knew how to talk about was startups, technology, video games, science fiction, scientific research, and (ugh) libertarian economic philosophy. We discovered people outside toxic tech culture wanted to talk about a wider range of topics, and often in a less confrontational way. And after a lifetime of socialization to distrust and discount everyone who wasn’t a man, we learned to seek out and value friendships with women and non-binary people.

If making new friends sounds intimidating, we recommend checking out Captain Awkward’s practical advice on making friends. Making new friends takes work and willingness to be rejected, but you’ll thank yourself for it later on.

Step 2: Make room for a career change

If you are already in a place where you have the freedom to make a big career change, congratulations! But if changing careers seems impossibly hard right now, that’s okay too. You can make room for a career change while still working in tech. Even if you end up deciding to stay in your current job, you will likely appreciate the freedom and flexibility that you’ve opened up for yourself.

Find a career counselor

The most useful action you can take is to find a career counselor who is right for you, and be honest with them about your fears, goals, and desires. Finding a career counselor is a lot like finding a dentist or a therapist: ask your friends for recommendations, read online reviews, look for directories or lists, and make an appointment for a free first meeting. If your first meeting doesn’t click, go ahead and try another career counselor until you find someone you can work with. A good career counselor will get a comprehensive view of your entire life (including family and friends) and your goals (not just job-related goals), and give you concrete steps to take to bring you closer to your goals.

Sometimes a career counselor’s job is explaining to you how the job you want but thought was impossible to get is actually possible. Valerie started seeing a career counselor about two years before she quit her last job as a software engineer and co-founded a non-profit. It took her about five years to get everything she listed as part of what she thought was an unattainable dream job (except for the “view of the water from her office,” which she is still working on). All the rest of this section is a high-level generic version of the advice a good career counselor will give you.

Improve your financial situation

Many tech jobs pay relatively well, but many people in tech would still have a hard time switching careers tomorrow because they don’t have enough money saved or couldn’t take a pay cut (hello, overheated rental markets and supporting your extended family). Don’t assume you’ll have to take a pay cut if you leave tech or your particular part of toxic tech culture, but it gives you more flexibility if you don’t have to immediately start making the same amount of money in a different job.

Look for ways to change your lifestyle or your expectations in ways that let you save money or lower your bills. Status symbols and class markers will probably loom large here and it’s worth thinking about which things are most valuable to you and which ones you can let go. You might find it is a relief to no longer have an expensive car with all its attendant maintenance and worries and fear, but that you really value the weekly exercise class that makes you feel happier and more energetic the rest of the week. Making these changes will often be painful in the short term but pay off in the long term. Valerie ended up temporarily moving out of the San Francisco Bay Area to a cheaper area near her family, which let her save up money and spend less while she was planning a career change. She moved back to the Bay Area when she was established in her new career, into a smaller, cheaper apartment she could afford on her new salary. Today she is making more money than she ever did as a programmer.

Take stock of your transferrable skills

Figure out what you actually like to do and how much of that is transferrable to other fields or jobs. One way to do this is to look back at, say, the top seven projects you most enjoyed doing in your life, either for your job or as a volunteer. What skills were useful to you in getting those projects done? What parts of doing that project did you enjoy the most? For example, being able to quickly read and understand a lot of information is a transferrable skill that many people enjoy using. The ability to persuade people is another such skill, useful for selling gym memberships, convincing people to recycle more, teaching, getting funding, and many other jobs. Once you have an idea of what it is that you enjoy doing and that is transferrable to other jobs, you can figure out what jobs you might enjoy and would be reasonably good at from the beginning.

Think carefully before signing up for new education

This is not necessarily the time to start taking career-related classes or going back to university in a serious way! If you start taking classes without first figuring out what you enjoy, what your skills are, and what your goals are, you are likely to be wasting your time and money and making it more difficult to find your new career. We highly recommend working with a career counselor before spending serious money or time on new training or classes. However, it makes sense to take low-cost, low-time commitment classes to explore what you enjoy doing, open your mind to new possibilities, or meet new people. This might look like a pottery class at the local community college, learning to 3D print objects at the local hackerspace, or taking an online course in African history.

Recognize there are many different paths in tech

The good news about software finally eating the world is that there are now many ways in which you can work in and around technology, without having to be part of toxic tech culture. Every industry needs tech expertise, and nearly every country around the world is trying to cultivate its own startup ecosystem. Many of these are much saner, kinder places to work than the toxic tech culture you may currently be part of, and a few of these involve industries that are more inclusive and welcoming of marginalized groups. Some of our friends have left the tech industry to work in innovation or technology related jobs in government, education, advocacy, policy, and arts. Though there are no great industries, and no ideal safe places for marginalized groups nearly anywhere in the world, there are varying degrees of toxicity and you can seek out areas with less toxicity. Try not to be swayed by the narrative that the only tech worth doing is the tech that’s written about in the media or receiving significant VC funding.

Step 3: Take care of yourself

Since being part of toxic tech culture is harmful to you as a person, simply focusing on taking care of yourself will help you put tech culture in its proper perspective, leaving you the freedom to be part of tech or not as you choose.

Prioritize self-care

Self-care means doing things that are kind or nurturing for yourself, whatever that looks like for you. Being in toxic tech culture means that many things take priority over self-care: fixing that last bug instead of taking a walk, going to an evening work-related meetup instead of staying home and getting to sleep on time, flying to yet another tech conference instead of spending time with family and friends. For Susan, prioritizing self-care looked like taking a road trip up the Pacific Coast Highway for the weekend instead of going to an industry fundraiser, or eating lunch by herself with a book instead of meeting up with another VC. One of the few constants in life is that you will always be stuck with your own self – so take care of it!

Learn to say no and enforce boundaries

We found that we were saying yes to too many things. The tech industry depends on extracting free or low-cost labor from many people in different ways: everything from salaried employees working 60-hour weeks to writing and giving talks in your “free time” – all of which are considered required for your career to advance. Marginalized people in tech are often expected to work an additional second (third?) shift of diversity-related work for free: giving recruiting advice, mentoring other marginalized people, or providing free counseling to more privileged people.

FOMO (fear of missing out) plays an important role too. It’s hard to cut down on free work when you are wondering, what if this is the conference where you’ll meet the person who will get you that venture capital job you’ve always wanted? What if serving on this conference program committee will get you that promotion? What if going to lunch with this powerful person so they can “pick your brain” for free will get you a new job? Early in your tech career, these kinds of investments often pay off but later on they have diminishing returns. The first time you attend a conference in your field, you will probably meet dozens of people who are helpful to your career. The twentieth conference – not so much.

For Valerie, switching from a salaried job to hourly consulting taught her the value of her time and just how many hours she was spending on unpaid work for the Linux and file systems communities. She taped a note reading “JUST SAY NO” to the wall behind her computer, and then sent a bunch of emails quitting various unpaid responsibilities she had accumulated. A few months later, she found she had made too many commitments again, and had to send another round of emails backing out of commitments. It was painful and embarrassing, but not being constantly frazzled and stressed out was worth it.

When you start saying no to unpaid work, some people will be upset and push back. After all, they are used to getting free work from you which gives them some personal advantage, and many people won’t be happy with this. They may try to make you feel guilty, shame you, or threaten you. Learning to enforce boundaries in the face of opposition is an important part of this step. If this is hard for you, try reading books, practicing with a friend, or working with a therapist. If you are worried about making mistakes when going against external pressure, keep in mind that simply exercising some control over your life choices and career path will often increase your personal happiness, regardless of the outcome.

Care for your mental health

Let’s be brutally honest: toxic tech culture is highly abusive, and there’s an excellent chance you are suffering from depression, trauma, chronic stress, or other serious psychological difficulties. The solution that works for many people is to work with a good therapist or counselor. A good licensed therapist is literally an expert in helping people work through these problems. Even if you don’t think your issues reach the level of seriousness that requires a therapist, a good therapist can help you with processing guilt, fear, anxiety, or other emotions that come up around the idea of leaving toxic tech culture.

Whether or not you work with a therapist, you can make use of many other forms of mental health care: meditation, support groups, mindfulness apps, walking, self-help books, spending time in nature, various spiritual practices, doing exercises in workbooks, doing something creative, getting alone time, and many more. Try a bunch of different things and pick what works for you – everyone is different. For Susan, practicing yoga four times a week, meditating, and working in her vegetable garden instead of reading Hacker News gave her much needed perspective and space.

Finding a therapist can be intimidating for many people, which is why Valerie wrote “HOWTO therapy: what psychotherapy is, how to find a therapist, and when to fire your therapist.” It has some tips on getting low-cost or free therapy if that’s what you need. You can also read Tiffany Howard‘s list of free and low-cost mental health resources which covers a wide range of different options, including apps, peer support groups, and low-cost therapy.

Process your grief

Even if you are certain you want to leave toxic tech culture, actually leaving is a loss – if nothing else, a loss of what you thought your career and future would look like. Grief is an appropriate response to any major life change, even if it is for the better. Give yourself permission to grieve and be sad, for whatever it is that you are sad about. A few of the things we grieved for: the meritocracy we thought we were participating in, our vision for where our careers would be in five years, the good times we had with friends at conferences, a sense of being part of something exciting and world-changing, all the good people who left before us, our relationships with people we thought would support us but didn’t, and the people we were leaving behind to suffer without us.

Step 4: Give yourself time

If you do decide to leave toxic tech culture, give yourself a few years to do it, and many more years to process your feelings about it. Valerie decided to stop being a programmer two years before she actually quit her programming job, and then she worked as a file systems consultant on and off for five years after that. Seven years later, she finally feels mostly at peace about being driven out of her chosen career (though she still occasionally has nightmares about being at a Linux conference). Susan’s process of extricating herself from the most toxic parts of tech culture and reinvesting in her own identity and well being has taken many years as well. Her partner (who knows nothing about technology) and her two kids help her feel much more balanced. Because Susan grew up on the Internet and has been building in tech for 25 years, she feels like she’ll probably always be doing something in tech, or tech-related, but wants to use her knowledge and skills to do this on her own terms, and to use her hard won know-how to benefit other marginalized folks to successfully reshape the industry.

An invitation to share your story

We hope this post was helpful to other people thinking about leaving toxic tech culture. There is so much more to say on this topic, and so many more points of view we want to hear about. If you feel safe doing so, we would love to read your story of leaving toxic tech culture. And wherever you are in your journey, we see you and support you, even if you don’t feel safe sharing your story or thoughts.

Planet DebianRenata D'Avila: Not being perfect

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that it means that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask myself what I could learn from this and how I could keep going and working on this project.

Coincidence or not, I was wondering that when I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that could be me. Even though I had written down almost every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go thinking I am lazy and didn't do anything all this time than to send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong intern.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start doing some work. I wanted to add a piece of code to it that would allow exporting the events data to the iCalendar format. Because this is sort of what I did in my contribution for the github-icalendar and because the mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiarized with how macro development is done for the MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and to de-serialize the data. This should be okay if you can trust the people editing the wiki (and, therefore, creating the events) enough, but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way that we want it to be organized, and maybe make it accessible through an API and/or export it as JSON? Then, either MoinMoin or any other FOSS community project could choose how to display and make use of them.

What did go wrong?

But the thing is... even though I had studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to work. Standard macros, the ones that come with MoinMoin, work perfectly. But macros from MacroMarket? I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions on the Installation Guide, but I simply couldn't find a way for it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), it wasn't enough.
  • I made sure to add the .css to all styles, both for common.css and screen.css, still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:
<<EventCalendar>>

<<EventCalendar(category=CategoryEventCalendar)>>

<<EventCalendar(,category=CategoryEventCalendar)>>

<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how poorly documented most of them usually are, in particular about the installation and making them work with the whole system, even if the code itself is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are, because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings." Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't necessarily the distribution, that it didn't matter which packages or libraries each of them comes with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed and re-read it again to make sure I hadn't missed anything this time. I restarted the server and... nope, macros still don't work.

So, misconfigured UNIX filesystem? I wasn't quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?
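For reference, one way to do that on Debian looks roughly like this (assuming root/sudo access; this is just one of several approaches):

    # Interactive way: select en_US.UTF-8 in the dialog
    sudo dpkg-reconfigure locales

    # Or non-interactively:
    echo 'en_US.UTF-8 UTF-8' | sudo tee -a /etc/locale.gen
    sudo locale-gen
    sudo update-locale LANG=en_US.UTF-8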

Well, these errors really did go away... but even after restarting the apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work so I could start coding and having some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify that the names of the macros that worked and which ones didn't were correct for this post, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories where I was putting it. I didn't realize that, all this time, the wiki wasn't actually installed in the directory yourwiki/data/plugins/macro where the extracted source code is. It is installed under /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro, and of course I should've realized this sooner, after all, I was the one who installed it, but... it happens.

I copied the files there, set the appropriate owner and... IT-- WORKED!
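For anyone who lands here with the same problem, the fix boils down to something like this (assuming a system-wide install under /usr/local/share/moin and Apache running as www-data — adjust to your setup):

    # Copy the macro into the installed wiki's plugin directory, not the extracted source tree
    sudo cp EventCalendar.py /usr/local/share/moin/data/plugin/macro/
    # Give it the same owner as the rest of the wiki data (www-data is an assumption here)
    sudo chown www-data:www-data /usr/local/share/moin/data/plugin/macro/EventCalendar.py
    # Restart the web server so the macro gets picked up
    sudo service apache2 restart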

[Screenshot: Mozilla Firefox showing the MoinMoin wiki with the EventCalendar macro working and displaying a calendar for January 2018]

Planet DebianRenata D'Avila: Not being perfect

I know I am very late on this update (and also very late on emailing back my mentors). I am sorry. It took me a long time to figure out how to put into words everything that has been going on for the past few weeks.

Let's begin with this: yes, I am so very aware there is an evaluation coming up (in two days) and that it is important "to have at least one piece of work that is visible in the week of evaluation" to show what I have been doing since the beginning of the internship.

But the truth is: as of now, I don't have any code to show. And what that screams to me is that it means that I have failed. I didn't know what to say either to my mentors or in here to explain that I didn't meet everyone's expectations. That I had not been perfect.

So I had to ask what could I learn from this and how could I keep going and working on this project?

Coincidence or not, when I was wondering that I crossed paths (again) with one of the most amazing TED Talks there is:

Reshma Saujani's "Teach girls bravery, not perfection"

And yes, that was very much me, because even though I had written down pretty much every step I had taken trying to solve the problem I got stuck on, I wasn't ready to share all that, not even with my mentors (yes, I can see now how that isn't very helpful). I would rather let them go thinking I am lazy and didn't do anything all this time than to send all those notes about my failure and have them realize I didn't know what they expected me to know or... well, that they'd picked the wrong candidate.

What was I trying to do?

As I talked about in my previous post, the EventCalendar macro seemed like a good place to start. I wanted to add a piece of code to it that would allow to export the events data to the iCalendar format. Because this is sort of what I did in my contribution for the github-icalendar) and because the mentor Daniel had suggested something like that, I thought that it would be a good way of getting myself familiarized to how macro development is done for MoinMoin wiki.

How far did I go?

As I had planned to do, I started by studying the EventMacro.py, to understand how it works, and taking notes.

EventMacro fetches events from MoinMoin pages and uses Python's Pickle module to serialize and to de-serialize the data. This should be okay if you can sufficiently trust the people editing the wiki (and, therefore, creating the events), but this might not be a good option if we start using external sources (such as third-party websites) for event data - at least, not directly on the data gathered. See the warning below, from the Pickle module docs:

Warning: The pickle module is not secure against erroneous or maliciously constructed data. Never unpickle data received from an untrusted or unauthenticated source.
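
To illustrate why that warning matters: a pickle payload can execute arbitrary code the moment it is loaded. The sketch below uses a harmless command, but it is exactly the kind of thing a malicious wiki edit or third-party feed could smuggle in:

# Sketch of why unpickling untrusted data is dangerous: pickle can be made
# to call an arbitrary function (here just a harmless 'echo') as a side
# effect of loading the data.
import os
import pickle

class Exploit(object):
    def __reduce__(self):
        # Tells pickle to call os.system('echo pwned') when the payload is loaded.
        return (os.system, ('echo pwned',))

payload = pickle.dumps(Exploit())
pickle.loads(payload)  # prints "pwned": code ran just by deserializing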

From the code and from the inputs from the mentors, I understand that EventMacro is more about displaying the events, putting them on a wiki page. Indeed, this could be helpful later on, but not exactly for the purpose we want now, which is to have some standalone application to gather data about the events, model this data in the way we want it to be organized, and maybe make it accessible through an API and/or export it as JSON. Then, either MoinMoin or any other FOSS community project could choose how to display and make use of it.
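
Just as a rough sketch of that separation (the field names here are placeholders, not a real data model), the gatherer could keep events as plain data and let the json module handle the export, leaving display entirely to whoever consumes the file:

# Minimal sketch: model events as plain data and export them as JSON,
# independent of how MoinMoin (or anything else) chooses to display them.
# Field names are placeholders.
import json

events = [
    {'title': 'Example meetup', 'start': '2018-02-01', 'end': '2018-02-01',
     'location': 'Some City'},
]

with open('events.json', 'w') as f:
    json.dump(events, f, indent=2)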

What did go wrong?

But the thing is... even though I studied the code, I couldn't see it running on my MoinMoin instance. I have tried and tried, but, generally speaking, I got stuck on trying to get macros to work. Standard macros that come with MoinMoin work perfectly. But for macros from MacroMarket, I couldn't find a way to make them work.

For the EventCalendar macro, I tried my best to follow the instructions on the Installation Guide, but I simply couldn't find a way for it to be processed.

Things I did:

  • I downloaded the macro file and renamed it to EventCalendar.py
  • I put it in the local macro directory (yourwiki/data/plugins/macro) and proceeded with the rest of the instructions.
  • When that didn't work, I copied the file to the global macro directory (MoinMoin/macro), but that wasn't enough either.
  • I made sure to add the .css to all styles, both common.css and screen.css; it still didn't work.
  • I thought that maybe it was the arguments on the macro, so I tried to add it to the wiki page in the following ways:
<<EventCalendar>>

<<EventCalendar(category=CategoryEventCalendar)>>

<<EventCalendar(,category=CategoryEventCalendar)>>

<<EventCalendar(,,category=CategoryEventCalendar)>>

Still, the macro wasn't processed and appeared just like that on the page, even though I had already created pages with that category and added event info to them.

To investigate, I tried using other macros:

These all came with the MoinMoin core and they all worked.

I tried other ones:

That, just like EventCalendar, didn't work.

Going through these macros also made me realize how awfully documented most of them usually are, in particular about the installation and making them work with the whole system, even if the code is clear. (And to think that at the beginning of this whole thing I had to search and read up on what DocStrings are, because the MoinMoin Coding Style says: "That does NOT mean that there should be no docstrings.". Now it seems like some developers didn't know what DocStrings were either.)

I checked permissions, but it couldn't be that, because the downloaded macros have the same permissions as the other macros and they all belong to the same user.

I thought that maybe it was a problem with Python versions or even with the way the MoinMoin installation was done. So I tried some alternatives. First, I tried to install it again on a new CodeAnywhere Ubuntu container, but I still had the same problem.

I tried with a local Debian installation... same problem. Even though Ubuntu is based on Debian, the fact that macros didn't work on either was telling me that the problem wasn't necessarily the distribution, and that it didn't matter which packages or libraries each of them comes with. The problem seemed to be somewhere else.

Then, I proceeded to analyze the Apache error log to see if I could figure it out.

[Thu Jan 11 00:33:28.230387 2018] [wsgi:error] [pid 5845:tid 139862907651840] [remote ::1:43998] 2018-01-11 00:33:28,229 WARNING MoinMoin.log:112 /usr/local/lib/python2.7/dist-packages/MoinMoin/support/werkzeug/filesystem.py:63: BrokenFilesystemWarning: Detected a misconfigured UNIX filesystem: Will use UTF-8 as filesystem encoding instead of 'ANSI_X3.4-1968'

[Thu Jan 11 00:34:11.089031 2018] [wsgi:error] [pid 5840:tid 139862941255424] [remote ::1:44010] 2018-01-11 00:34:11,088 INFO MoinMoin.config.multiconfig:127 using wiki config: /usr/local/share/moin/wikiconfig.pyc

Alright, the wikiconfig.py wasn't actually set to utf-8, my bad. I fixed it and re-read it to make sure I hadn't missed anything this time. I restarted the server and... nope, the macros still didn't work.

So, a misconfigured UNIX filesystem? Not quite sure what that was, but I searched for it and it seemed to be easily solved by generating an en_US.UTF-8 locale and/or setting it, right?

Well, these errors really did go away... but even after restarting the Apache server, those macros still wouldn't work.

So this is how things went up until today. It ends up with me not having a clue where else to look to try and fix the macros and make them work, so I could start coding and have some results... or does it?

This was a post about a failure, but...

Whoever wrote that "often times writing a blog post will help you find the solution you're working on" in the e-mail we received when we were accepted for Outreachy... damn, you were right.

I opened the command history to get my MoinMoin instance running again, so I could verify for this post which macro names had worked and which ones hadn't, when...

I cannot believe I couldn't figure it out.

What had been happening all this time? Yes, the .py macro file should go to moin/data/plugin/macro, but not in the directories where I had been putting it. I didn't realize that, all this time, the wiki wasn't actually installed in the yourwiki/data/plugins/macro directory of the extracted source code. It is installed under /usr/local/share/, so the files should be put in /usr/local/share/moin/data/plugin/macro. Of course I should've realized this sooner, after all, I was the one who installed it, but... it happens.

I copied the files there, set the appropriate owner and... IT-- WORKED!

Mozilla Firefox screenshot showing MoinMoin wiki with the EventCalendar plugin working and displaying a calendar for January 2018

Krebs on SecuritySome Basic Rules for Securing Your IoT Stuff

Most readers here have likely heard or read various prognostications about the impending doom from the proliferation of poorly-secured “Internet of Things” or IoT devices. Loosely defined as any gadget or gizmo that connects to the Internet but which most consumers probably wouldn’t begin to know how to secure, IoT encompasses everything from security cameras, routers and digital video recorders to printers, wearable devices and “smart” lightbulbs.

Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.

-Rule #1: Avoid connecting your devices directly to the Internet — either without a firewall or in front of it, by poking holes in your firewall so you can access them remotely. Putting your devices in front of your firewall is generally a bad idea because many IoT products were simply not designed with security in mind and making these things accessible over the public Internet could invite attackers into your network. If you have a router, chances are it also comes with a built-in firewall. Keep your IoT devices behind the firewall as best you can.

-Rule #2: If you can, change the thing’s default credentials to a complex password that only you will know and can remember. And if you do happen to forget the password, it’s not the end of the world: Most devices have a recessed reset switch that can be used to restore the thing to its factory-default settings (and credentials). Here’s some advice on picking better ones.

I say “if you can,” at the beginning of Rule #2 because very often IoT devices — particularly security cameras and DVRs — are so poorly designed from a security perspective that even changing the default password to the thing’s built-in Web interface does nothing to prevent the things from being reachable and vulnerable once connected to the Internet.

Also, many of these devices are found to have hidden, undocumented “backdoor” accounts that attackers can use to remotely control the devices. That’s why Rule #1 is so important.
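
Back on Rule #2: for readers who would rather generate a long random password than invent one, any decent password manager can do it, and as a simple illustration (nothing vendor-specific, just Python's standard secrets module) a few lines are enough:

# Illustration only: generate a long random password for a device's admin
# interface using Python's standard secrets module.
import secrets
import string

alphabet = string.ascii_letters + string.digits + '!@#$%^&*-_'
password = ''.join(secrets.choice(alphabet) for _ in range(24))
print(password)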

-Rule #3: Update the firmware. Hardware vendors sometimes make available security updates for the software that powers their consumer devices (known as “firmware”). It’s a good idea to visit the vendor’s Web site and check for any firmware updates before putting your IoT things to use, and to check back periodically for any new updates.

-Rule #4: Check the defaults, and make sure features you may not want or need, like UPnP (Universal Plug and Play, which can easily poke holes in your firewall without you knowing it), are disabled.

Want to know if something has poked a hole in your router’s firewall? Censys has a decent scanner that may give you clues about any cracks in your firewall. Browse to whatismyipaddress.com, then cut and paste the resulting address into the text box at Censys.io, select “IPv4 hosts” from the drop-down menu, and hit “search.”

If that sounds too complicated (or if your ISP’s addresses are on Censys’s blacklist) check out Steve Gibson’s Shields Up page, which features a point-and-click tool that can give you information about which network doorways or “ports” may be open or exposed on your network. A quick Internet search on exposed port number(s) can often yield useful results indicating which of your devices may have poked a hole.
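
If you would rather test one specific port than read a full scan report, a few lines of Python can do it. The host and port below are placeholders, and the test only reflects real exposure when run from outside your own network (for example from a friend's connection), since checking from inside your LAN won't show what the outside world sees:

# The host and port below are placeholders: substitute your own public IP
# address and the port you are curious about. A successful connection made
# from OUTSIDE your network means that port is exposed to the Internet.
import socket

host = '203.0.113.10'  # documentation/example address, not a real target
port = 554             # e.g. RTSP, commonly open on cheap IP cameras

try:
    with socket.create_connection((host, port), timeout=5):
        print('Port %d on %s accepted a connection (exposed).' % (port, host))
except OSError:
    print('Port %d on %s did not accept a connection.' % (port, host))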

If you run antivirus software on your computer, consider upgrading to a “network security” or “Internet security” version of these products, which ship with more full-featured software firewalls that can make it easier to block traffic going into and out of specific ports.

Alternatively, Glasswire is a useful tool that offers a full-featured firewall as well as the ability to tell which of your applications and devices are using the most bandwidth on your network. Glasswire recently came in handy to help me determine which application was using gigabytes worth of bandwidth each day (it turned out to be a version of Amazon Music’s software client that had a glitchy updater).

-Rule #5: Avoid IoT devices that advertise Peer-to-Peer (P2P) capabilities built-in. P2P IoT devices are notoriously difficult to secure, and research has repeatedly shown that they can be reachable even through a firewall remotely over the Internet because they’re configured to continuously find ways to connect to a global, shared network so that people can access them remotely. For examples of this, see previous stories here, including This is Why People Fear the Internet of Things, and Researchers Find Fresh Fodder for IoT Attack Cannons.

-Rule #6: Consider the cost. Bear in mind that when it comes to IoT devices, cheaper usually is not better. There is no direct correlation between price and security, but history has shown that devices toward the lower end of the price range for their class tend to have the most vulnerabilities and backdoors, with the least amount of vendor upkeep or support.

In the wake of last month’s guilty pleas by several individuals who created Mirai — one of the biggest IoT malware threats ever — the U.S. Justice Department released a series of tips on securing IoT devices.

One final note: I realize that the people who probably need to be reading these tips the most likely won’t ever know they need to care enough to act on them. But at least by taking proactive steps, you can reduce the likelihood that your IoT things will contribute to the global IoT security problem.

Planet DebianJonathan Dowland: Announcing "Just TODO It"

just TODO it UI

Recently, I wished to use a trivially-simple TODO-list application whilst working on a project. I had a look through what was available to me in the "GNOME Software" application and was surprised to find nothing suitable. In particular I just wanted to capture a list of actions that I could tick off; I didn't want anything more sophisticated than that (and indeed, more sophistication would mean a learning curve I couldn't afford at the time). I then remembered that I'd written one myself, twelve years ago. So I found the old code, dusted it off, made some small adjustments so it would work on modern systems and published it.

At the time that I wrote it, I found (at least) one other similar piece of software called "Tasks" which used Evolution's TODO-list as the back-end data store. I can no longer find any trace of this software, and the old web host (projects.o-hand.com) has disappeared.

My tool is called Just TODO It and it does very little. If that's what you want, great! You can reach the source via that prior link or jump straight to GitHub: https://github.com/jmtd/todo

CryptogramArticle from a Former Chinese PLA General on Cyber Sovereignty

Interesting article by Major General Hao Yeli, Chinese People's Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.

Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty, meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security, meanwhile, it promotes international cooperation and open development.

China has never been opposed to multi-party governance when appropriate, but rejects the denial of government's proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.

In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.

Worse Than FailureIn $BANK We Trust

During the few months after getting my BS and before starting my MS, I worked for a bank that held lots of securities - and gold - in trust for others. There was a massive vault with multiple layers of steel doors, iron door grates, security access cards, armed guards, and signature comparisons (live vs pre-registered). It was a bit unnerving to get in there, so deep below ground, but once in, it looked very much like the Fort Knox vault scene in Goldfinger.

Someone planning things on a whiteboard

At that point, PCs weren't yet available to the masses and I had very little exposure to mainframes. I had been hired as an assistant to one of their drones who had been assigned to find all of the paper-driven changes that had gone awry and get their books up to date.

To this end, I spent about a month talking to everyone involved in taking a customer order to take or transfer ownership of something, and processing the ledger entries to reflect the transaction. From this, I drew a simple flow chart, listing each task, the person(s) responsible, and the possible decision tree at each point.

Then I went back to each person and asked them to list all the things that could and did go wrong with transaction processing at their junction in the flow.

What had been essentially straight-line processing with a few small decision branches turned out to be enough to fill a 30-foot-long by 8-foot-high wall of undesirable branches. This became absolutely unmanageable on physical paper, and I didn't know of any charting programs on the mainframe at that time, so I wrote the whole thing up with an index card at each junction. The "good" path was in green marker, and everything else was yellow (one level of "wrong") or red (wtf-level of "wrong").

By the time it was fully documented, the wall-o-index-cards had become a running joke. I invited the people (who had given me all of the information) in to view their problems in the larger context, and verify that the problems were accurately documented.

Then management was called in to view the true scope of their problems. The reason that the books were so snafu'd was that there were simply too many manual tasks that were being done incorrectly, cascading to deeply nested levels of errors.

Once we knew where to look, it became much easier to track transactions backward through the diagram to the last known valid junction and push them forward until they were both correct and current. A rather large contingent of analysts were then put onto this task to fix all of the transactions for all of the customers of the bank.

It was about the time that I was to leave and go back to school that they were talking about taking the sub-processes off the mainframe and distributing detailed step-by-step instructions for people to follow manually at each junction to ensure that the work flow proceeded properly. Obviously, more manual steps would reduce the chance for errors to creep in!

A few years later when I got my MS, I ran into one of the people that was still working there and discovered that the more-manual procedures had not only not cured the problem, but that entirely new avenues of problems had cropped up as a result.


Google AdsenseReceiving your payment via EFT (Electronic Funds Transfer)


Electronic Funds Transfer (EFT) is our fastest, most secure, and environmentally friendly payment method. It is available across most countries and you can check if this payment method is available to you here.

To use this payment method we first need to verify your bank account to ensure that you will receive your payment. This involves entering specific bank account information and receiving a small test deposit.

Some of our publishers found this process confusing and we want to guide you through it. Our latest video will guide you through adding EFT as a payment method, from start to finish.
If you didn’t receive your test deposit, you can watch this video to understand why. If you have more questions, visit our Help Center.
Posted by: The AdSense Support Team

Planet DebianDirk Eddelbuettel: RcppMsgPack 0.2.1

An update of RcppMsgPack got onto CRAN today. It contains a number of enhancements Travers had been working on, as well as one thing CRAN asked us to do in making a suggested package optional.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack as well as clever code (in both R and C++) Travers wrote to access MsgPack-encoded objects directly from R.
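
For readers who have not seen MessagePack before, here is a quick illustration of that size difference, using the Python msgpack package rather than RcppMsgPack itself, purely because it makes a two-line comparison easy:

# Illustration with the Python msgpack package (not RcppMsgPack itself):
# the same record encoded as JSON and as MessagePack, to show the size
# difference described above.
import json
import msgpack

record = {'id': 7, 'name': 'temperature', 'values': [20, 21, 19, 22]}

as_json = json.dumps(record).encode('utf-8')
as_msgpack = msgpack.packb(record, use_bin_type=True)

print(len(as_json), 'bytes as JSON')
print(len(as_msgpack), 'bytes as MessagePack')  # noticeably smaller
assert msgpack.unpackb(as_msgpack, raw=False) == record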

Changes in version 0.2.1 (2018-01-15)

  • Some corrections and update to DESCRIPTION, README.md, msgpack.org.md and vignette (#6).

  • Update to c_pack.cpp and tests (#7).

  • More efficient packing of vectors (#8).

  • Support for timestamps and NAs (#9).

  • Conditional use of microbenchmark in tests/ as required for Suggests: package [CRAN request] (#10).

  • Minor polish to tests relaxing comparison of timestamp, and avoiding a few g++ warnings (#12 addressing #11).

Courtesy of CRANberries, there is also a diffstat report for this release.

More information may be on the RcppMsgPack page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

LongNowStewart Brand Gives In-Depth and Personal Interview to Tim Ferriss

Tim Ferriss, who wrote The Four Hour Work Week and gave a Long Now talk on accelerated learning in 02011, recently interviewed Long Now co-founder Stewart Brand on his podcast, “The Tim Ferriss Show”. The interview is wide-ranging, in-depth, and among the most personal Brand has given to date. Over the course of nearly three hours, Brand touches on everything from the Whole Earth Catalog, why he gave up skydiving, how he deals with depression, his early experiences with psychedelics, the influence of Marshall McLuhan and Buckminster Fuller on his thinking, his recent CrossFit regimen, and the ongoing debate between artificial intelligence and intelligence augmentation. He also discusses the ideas and projects of The Long Now Foundation.

Brand frames The Long Now Foundation as a way to augment social intelligence:

The idea of the Long Now Foundation is to give encouragement and permission to society that is rewarded for thinking very, very rapidly, in business terms and, indeed, in scientific terms, of rapid turnaround, and getting inside the adversaries’ loop, move fast and break things, [to think long term]. Long term thinking might be proposing that some things you don’t want to break. They might involve moving slow, and steadily.

The Pace Layer diagram.

He introduces the pace layer diagram as a tool to approach global scale challenges:

What we’re proposing is there are a lot of problems, a lot of issues and a lot of quite wonderful things in that category of being big and slow moving and so I wound up with Brian Eno developing a pace layer diagram of civilization where there’s the fast moving parts like fashion and commerce, and then it goes slower when you get to infrastructure and then things move really slow in how governance changes, and then you go down to culture and language and religion move really slowly and then nature, the tectonic forces in climate change and so on move really big and slow. And what’s interesting about that is that the fast parts get all the attention, but the slow parts have all the power. And if you want to really deal with the powerful forces in the world, bear relation to seeing what can be done with appreciating and maybe helping adjust the big slow things.

Stewart Brand and ecosystem ecologist Elena Bennett during the Q&A of her November 02017 SALT Talk. Photo: Gary Wilson.

Ferriss admits that in the last few months he’s been pulled out of the current of long-term thinking by the “rip tide of noise,” and asks Brand for a “homework list” of SALT talks that can help provide him with perspective. Brand recommends Jared Diamond’s 02005 talk on How Societies Fail (And Sometimes Succeed), Matt Ridley’s 02011 talk on Deep Optimism, and Ian Morris’ 02011 talk on Why The West Rules (For Now).

Brand also discusses Revive & Restore’s efforts to bring back the Wooly Mammoth, and addresses the fear many have of meddling with complex systems through de-extinction.

Long-term thinking has figured prominently in Tim Ferriss’ podcast in recent months. In addition to his interview with Brand, Ferriss has also interviewed Long Now board member Kevin Kelly and Long Now speaker Tim O’Reilly.

Listen to the podcast in full here.

TEDTED debuts “Small Thing Big Idea” original video series on Facebook Watch

Today we’re debuting a new original video series on Facebook Watch called Small Thing Big Idea: Designs That Changed the World.

Each 3- to 4-minute weekly episode takes a brief but delightful look at the lasting genius of one everyday object – a pencil, for example, or a hoodie – and explains how it is so perfectly designed that it’s actually changed the world around it.

The series features some of design’s biggest names, including fashion designer Isaac Mizrahi, museum curator Paola Antonelli, and graphic designer Michael Bierut sharing their infectious obsession with good design.

To watch the first episode of Small Thing Big Idea (about the little-celebrated brilliance of subway maps!), tune in here, and check back every Tuesday for new episodes.

Cory DoctorowThe Man Who Sold the Moon, Part 02


Here’s part two of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

Planet DebianJamie McClelland: Procrastinating by tweaking my desktop with devilspie2

Tweaking my desktop seems to be my preferred form of procrastination. So, a blog like this is a sure sign I have too much work on my plate.

I have a laptop. I carry it to work and plug it into a large monitor - where I like to keep all my instant or near-instant communications displayed at all times while I switch between workspaces on my smaller laptop screen as I move from email (workspace one), to shell (workspace two), to web (workspace three), etc.

When I'm not at the office, I only have my laptop screen - which has to accommodate everything.

I soon got tired of dragging things around every time I plugged or unplugged the monitor and started accumulating a mess of bash scripts running wmctrl and even calling my own python-wnck script. (At first I couldn't get wmctrl to pin a window, but I lived with it. But when gajim switched to gtk3 and my openbox window decorations disappeared, I couldn't even pin my window manually.)

Now I have the following simpler setup.

Manage hot plugging of my monitor.

Symlink to my monitor status device:

0 jamie@turkey:~$ ls -l ~/.config/turkey/monitor.status 
lrwxrwxrwx 1 jamie jamie 64 Jan 15 15:26 /home/jamie/.config/turkey/monitor.status -> /sys/devices/pci0000:00/0000:00:02.0/drm/card0/card0-DP-1/status
0 jamie@turkey:~$ 

Create a udev rule that adjusts my displays every time the monitor is plugged in or unplugged: when plugged in, the monitor is placed to the right of my LCD; when unplugged, its output is turned off.

0 jamie@turkey:~$ cat /etc/udev/rules.d/90-vga.rules 
# When a monitor is plugged in, adjust my display to take advantage of it
ACTION=="change", SUBSYSTEM=="drm", ENV{HOTPLUG}=="1", RUN+="/etc/udev/scripts/vga-adjust"
0 jamie@turkey:~$ 

And here is the udev script:

0 jamie@turkey:~$ cat /etc/udev/scripts/vga-adjust 
#!/bin/bash

logger -t "jamie-udev" "Monitor event detected, waiting 1 second for system to detect change."

# We don't know whether the VGA monitor is being plugged in or unplugged so we
# have to autodetect first. And, it takes a few seconds to assess whether the
# monitor is there or not, so sleep for 1 second.
sleep 1 
monitor_status="/home/jamie/.config/turkey/monitor.status"
status=$(cat "$monitor_status")  

XAUTHORITY=/home/jamie/.Xauthority
if [ "$status" = "disconnected" ]; then
  # The monitor is not plugged in   
  logger -t "jamie-udev" "Monitor is being unplugged"
  xrandr --output DP-1 --off
else
  logger -t "jamie-udev" "Monitor is being plugged in"
  xrandr --output DP-1 --right-of eDP-1 --auto
fi  
0 jamie@turkey:~$

Move windows into place.

So far, this handles ensuring the monitor is activated and placed in the right position. But nothing has changed in my workspace.

Here's where the devilspie2 configuration comes in:

==> /home/jamie/.config/devilspie2/00-globals.lua <==
-- Collect some global variables to be used throughout.
name = get_window_name();
app = get_application_name();
instance = get_class_instance_name();

-- See if the monitor is plugged in or not. If monitor is true, it is
-- plugged in, if it is false, it is not plugged in.
monitor = false;
device = "/home/jamie/.config/turkey/monitor.status"
f = io.open(device, "rb")
if f then
  -- Read the contents, remove the trailing line break.
  content = string.gsub(f:read "*all", "\n", "");
  if content == "connected" then
    monitor = true;
  end
end


==> /home/jamie/.config/devilspie2/gajim.lua <==
-- Look for my gajim message window. Pin it if we have the monitor.
if string.find(name, "Gajim: conversations.im") then
  if monitor then
    set_window_geometry(1931,31,590,1025);
    pin_window();
  else
    set_window_workspace(4);
    set_window_geometry(676,31,676,725);
    unpin_window();
  end
end

==> /home/jamie/.config/devilspie2/grunt.lua <==
-- grunt is the window I use to connect via irc. I typically connect to
-- grunt via a terminal called spade, which is opened using a-terminal-yoohoo
-- so that bell actions cause a notification. The window is called spade if I
-- just opened it but usually changes names to grunt after I connect via autossh
-- to grunt. 
--
-- If no monitor, put spade in workspace 2, if monitor, then pin it to all
-- workspaces and maximize it vertically.

if instance == "urxvt" then
  -- When we launch, the terminal is called spade, after we connect it
  -- seems to get changed to jamie@grunt or something like that.
  if name == "spade" or string.find(name, "grunt:") then
    if monitor then
      set_window_geometry(1365,10,570,1025);
      set_window_workspace(3);
      -- maximize_vertically();
      pin_window();
    else
      set_window_geometry(677,10,676,375);
      set_window_workspace(2);
      unpin_window();
    end
  end
end

==> /home/jamie/.config/devilspie2/terminals.lua <==
-- Note - these will typically only work after I start the terminals
-- for the first time because their names seem to change.
if instance == "urxvt" then
  if name == "heart" then
    set_window_geometry(0,10,676,375);
  elseif name == "spade" then
    set_window_geometry(677,10,676,375);
  elseif name == "diamond" then
    set_window_geometry(0,376,676,375);
  elseif name == "clover" then
    set_window_geometry(677,376,676,375);
  end
end

==> /home/jamie/.config/devilspie2/zimbra.lua <==
-- Look for my zimbra firefox window. Shows support queue.
if string.find(name, "Zimbra") then
  if monitor then
    unmaximize();
    set_window_geometry(2520,10,760,1022);
    pin_window();
  else
    set_window_workspace(5);
    set_window_geometry(0,10,676,375);
    -- Zimbra can take up the whole window on this workspace.
    maximize();
    unpin_window();
  end
end

And lastly, it is started (and restarted) with:

0 jamie@turkey:~$ cat ~/.config/systemd/user/devilspie2.service 
[Unit]
Description=Start devilspie2, program to place windows in the right locations.

[Service]
ExecStart=/usr/bin/devilspie2

[Install]
WantedBy=multi-user.target
0 jamie@turkey:~$ 

I have this bound to a key combination that I hit every time I plug in or unplug my monitor.

CryptogramJim Risen Writes about Reporting Government Secrets

Jim Risen writes a long and interesting article about his battles with the US government and the New York Times to report government secrets.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #142

Here's what happened in the Reproducible Builds effort between Sunday December 31 and Saturday January 13 2018:

Media coverage

Development and fixes in key packages

Chris Lamb implemented two reproducibility checks in the lintian Debian package quality-assurance tool:

  • Warn about packages that ship Hypothesis example files. (#886101, report)
  • Warn about packages that override dh_fixperms without calling dh_fixperms as this makes the build vary depending on the current umask(2). (#885910, report)

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

60 package reviews have been added, 43 have been updated and 76 have been removed in this week, adding to our knowledge about identified issues.

4 new issue types have been added:

The notes of one issue type were updated:

  • build_dir_in_documentation_generated_by_doxygen: 1, 2

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adam Borowski (2)
  • Adrian Bunk (16)
  • Niko Tyni (1)
  • Chris Lamb (6)
  • Jonas Meurer (1)
  • Simon McVittie (1)

diffoscope development

disorderfs development

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Worse Than FailureWhy Medical Insurance Is So Expensive

VA One AE Preliminary Project Timeline 2001-02

At the end of 2016, Ian S. accepted a contract position at a large medical conglomerate. He was joining a team of 6 developers on a project to automate what was normally a 10,000-hour manual process of cross-checking spreadsheets and data files. The end result would be a Django server offering a RESTful API and MySQL backend.

"You probably won't be doing anything much for the first week, maybe even the first month," Ian's interviewer informed him.

Ian ignored the red flag and accepted the offer. He needed the experience, and the job seemed reasonable enough. Besides, there were only 2 layers of management to deal with: his boss Daniel, who led the team, and his boss' boss Jim.

The office was in a lavish downtown location. The first thing Ian learned was that nobody had assigned desks. Each day, everyone had to clean out their desks and return their computers and peripherals to lockers. Because team members needed to work closely together, everyone claimed the same desk every day anyway. This policy only resulted in frustration and lost time.

As if that weren't bad enough, the computers were also heavily locked down. Ian had to go through the company's own "app store" to install anything. This was followed by an approval process that could take a few days based on how often Jim went through his pending approvals. The one exception was VMWare Workstation. Because this app cost money, it involved a 2-week approval process. In the middle of December, everyone was off on holiday, making it impossible for Ian's team to get approvals or talk to anyone helpful. Thus Ian's only contributions that month were a couple of Visio diagrams and a Django "hello world" that Daniel had requested. (It wasn't as if Daniel could check his work, though. He didn't know anything about Python, Django, REST, MySQL, MVC, or any other technology relevant to the project.)

The company provided Ian a copy of Agile for Dummies, which seemed ironic in retrospect, as the team was forced to spend the entire first week of January breaking the next 6 months into 2-week sprints. They weren't allowed to leave sprints empty, and had to allocate 36-40 hours each week. They could only make stories for features, so no time was penciled in for bug fixes or paying off technical debt. These stories were then chopped into meaningless pieces ("Part 1", "Part 2", etc.) so they'd fit into their arbitrary timelines.

"This is why medical insurance is so expensive", Daniel remarked at one point, either trying to lighten the mood or stave off his pending insanity.

Later in January, Ian arrived one morning to find the rest of his team standing around confused. Their project was now dead at the hands of a VP who'd had it in for Jim. The company had a tenure process, so the VP couldn't just fire Jim, but he could make his life miserable. He reassigned all of Jim's teams that he didn't outright terminate, exiled Jim to New Jersey, and gave him nothing to do but approve timesheets. Meanwhile, Daniel was told not to bother coming in again.

"Don't worry," the powers-that-be said. "We don't usually terminate people here."

Ian's gapingly empty schedule was filled with a completely different task: "shadowing" someone in another state by screen-sharing and watching them work. The main problem with this arrangement was that Ian's disciple was a systems analyst, not a programmer.

Come February, Ian's new team was also terminated.

"We don't have a culture of layoffs," the powers-that-be assured him.

They were still intent on shoving Ian into a systems analyst position despite his requisite lack of experience. It was at that point that he gave up and moved on. He later heard that within a few months, the entire division had been fired.


Don MartiRemove all the tracking widgets? Maybe not.

Good one from Mark Pilipczuk: Publisher Advice From a Buyer.

Remove all the tracking widgets from your site. That Facebook “Like” button only serves to exfiltrate your valuable data to an entity that doesn’t have your best interests at heart. If you’ve got a valuable audience, why would you want to help the ad tech industry which promises “I can find the same and bigger audience over here for $2 CPM, so don’t buy from the publisher?” Sticking your own head in the noose is never a good idea.

That advice makes sense for the Facebook "like button." That button is just a data shoplifter. The others, though? All those extra trackers come in as side effects of ad deals, and they're likely to be contractually required to make ads on the site saleable.

Yes, those trackers feed bots and data leakage, and yes, they're even terrible at fighting adfraud. Augustine Fou points out that Fraud filters don't work. "In some cases it's worse when filter is on."

So in an ideal world you would be able to pull all the third-party trackers, but as far as day-to-day operations go, user tracking is a Chesterton's Fence problem. What happens if a legit site unilaterally takes down the third-party trackers? All the targeted ad impressions that would have given that site a (small) payment end up going to bots.

So what can a site do? Understand that the real fix has to happen on the browser end, and nudge the users to either make their browsers less data-leaky, or switch to browsers that are leakage-resistant out of the box.

Start A/B testing some notifications to remind users to turn on tracking protection.

  • Can you get users who are already choosing "Do Not Track" to turn on real protection if you inform them that sites ignore their DNT choice?

  • If a user is running an ad blocker with a paid whitelisting scheme, can you inform them about it to get them to switch to a better tool, or at least add a second layer of protection that limits the damage that paid whitelisting can do?

  • When users visit privacy pages or opt-out of a marketing program, are they also willing to check their browser privacy settings?

Every site's audience is different. It's hard to know in advance how users will respond to different calls to action to turn up their privacy and create a win-win for legit sites and legit brands. We do know that users are concerned and confused about web advertising, and the good news is that the JavaScript needed to collect data and administer nudges is as easy to add as yet another tracker.

More on what sites can do, that might be more effective than just removing trackers: What The Verge can do to help save web advertising

Planet DebianBenjamin Mako Hill: OpenSym 2017 Program Postmortem

The International Symposium on Open Collaboration (OpenSym, formerly WikiSym) is the premier academic venue exclusively focused on scholarly research into open collaboration. OpenSym is an ACM conference which means that, like conferences in computer science, it’s really more like a journal that gets published once a year than it is like most social science conferences. The “journal”, in this case, is called the Proceedings of the International Symposium on Open Collaboration and it consists of final copies of papers which are typically also presented at the conference. Like journal articles, papers that are published in the proceedings are not typically published elsewhere.

Along with Claudia Müller-Birn from the Freie Universtät Berlin, I served as the Program Chair for OpenSym 2017. For the social scientists reading this, the role of program chair is similar to being an editor for a journal. My job was not to organize keynotes or logistics at the conference—that is the job of the General Chair. Indeed, in the end I didn’t even attend the conference! Along with Claudia, my role as Program Chair was to recruit submissions, recruit reviewers, coordinate and manage the review process, make final decisions on papers, and ensure that everything makes it into the published proceedings in good shape.

In OpenSym 2017, we made several changes to the way the conference has been run:

  • In previous years, OpenSym had tracks on topics like free/open source software, wikis, open innovation, open education, and so on. In 2017, we used a single track model.
  • Because we eliminated tracks, we also eliminated track-level chairs. Instead, we appointed Associate Chairs or ACs.
  • We eliminated page limits and the distinction between full papers and notes.
  • We allowed authors to write rebuttals before reviews were finalized. Reviewers and ACs were allowed to modify their reviews and decisions based on rebuttals.
  • To assist in assigning papers to ACs and reviewers, we made extensive use of bidding. This means we had to recruit the pool of reviewers before papers were submitted.

Although each of these things have been tried in other conferences, or even piloted within individual tracks in OpenSym, all were new to OpenSym in general.

Overview

Statistics
  • Papers submitted: 44
  • Papers accepted: 20
  • Acceptance rate: 45%
  • Posters submitted: 2
  • Posters presented: 9
  • Associate Chairs: 8
  • PC Members: 59
  • Authors: 108
  • Author countries: 20

The program was similar in size to the ones in the last 2-3 years in terms of the number of submissions. OpenSym is a small but mature and stable venue for research on open collaboration. This year was also similar, although slightly more competitive, in terms of the conference acceptance rate (45%—it had been slightly above 50% in previous years).

As in recent years, there were more posters presented than submitted because the PC found that some rejected work, although not ready to be published in the proceedings, was promising and advanced enough to be presented as a poster at the conference. Authors of posters submitted 4-page extended abstracts for their projects which were published in a “Companion to the Proceedings.”

Topics

Over the years, OpenSym has established a clear set of niches. Although we eliminated tracks, we asked authors to choose from a set of categories when submitting their work. These categories are similar to the tracks at OpenSym 2016. Interestingly, a number of authors selected more than one category. This would have led to difficult decisions in the old track-based system.

distribution of papers across topics with breakdown by accept/poster/reject

The figure above shows a breakdown of papers in terms of these categories as well as indicators of how many papers in each group were accepted. Papers in multiple categories are counted multiple times. Research on FLOSS and Wikimedia/Wikipedia continue to make up a sizable chunk of OpenSym’s submissions and publications. That said, these now make up a minority of total submissions. Although Wikipedia and Wikimedia research made up a smaller proportion of the submission pool, it was accepted at a higher rate. Also notable is the fact that 2017 saw an uptick in the number of papers on open innovation. I suspect this was due, at least in part, to General Chair Lorraine Morgan’s involvement (she specializes in that area). Somewhat surprisingly to me, we had a number of submissions about Bitcoin and blockchains. These are natural areas of growth for OpenSym but have never been a big part of work in our community in the past.

Scores and Reviews

As in previous years, review was single blind in that reviewers’ identities are hidden but authors identities are not. Each paper received between 3 and 4 reviews plus a metareview by the Associate Chair assigned to the paper. All papers received 3 reviews but ACs were encouraged to call in a 4th reviewer at any point in the process. In addition to the text of the reviews, we used a -3 to +3 scoring system where papers that are seen as borderline will be scored as 0. Reviewers scored papers using full-point increments.

scores for each paper submitted to opensym 2017: average, distribution, etc

The figure above shows scores for each paper submitted. The vertical grey lines reflect the distribution of scores where the minimum and maximum scores for each paper are the ends of the lines. The colored dots show the arithmetic mean for each score (unweighted by reviewer confidence). Colors show whether the papers were accepted, rejected, or presented as a poster. It’s important to keep in mind that two papers were submitted as posters.

Although Associate Chairs made the final decisions on a case-by-case basis, every paper that had an average score of less than 0 (the horizontal orange line) was rejected or presented as a poster and most (but not all) papers with positive average scores were accepted. Although a positive average score seemed to be a requirement for publication, negative individual scores weren’t necessarily showstoppers. We accepted 6 papers with at least one negative score. We ultimately accepted 20 papers—45% of those submitted.

Rebuttals

This was the first time that OpenSym used a rebuttal or author response and we are thrilled with how it went. Although they were entirely optional, almost every team of authors used it! Authors of 40 of our 46 submissions (87%!) submitted rebuttals.

Lower   Unchanged   Higher
  6         24         10

The table above shows how average scores changed after authors submitted rebuttals. The table shows that rebuttals’ effect was typically neutral or positive. Most average scores stayed the same but nearly two times as many average scores increased as decreased in the post-rebuttal period. We hope that this made the process feel more fair for authors and I feel, having read them all, that it led to improvements in the quality of final papers.

Page Lengths

In previous years, OpenSym followed most other venues in computer science by allowing submission of two kinds of papers: full papers which could be up to 10 pages long and short papers which could be up to 4. Following some other conferences, we eliminated page limits altogether. This is the text we used in the OpenSym 2017 CFP:

There is no minimum or maximum length for submitted papers. Rather, reviewers will be instructed to weigh the contribution of a paper relative to its length. Papers should report research thoroughly but succinctly: brevity is a virtue. A typical length of a “long research paper” is 10 pages (formerly the maximum length limit and the limit on OpenSym tracks), but may be shorter if the contribution can be described and supported in fewer pages— shorter, more focused papers (called “short research papers” previously) are encouraged and will be reviewed like any other paper. While we will review papers longer than 10 pages, the contribution must warrant the extra length. Reviewers will be instructed to reject papers whose length is incommensurate with the size of their contribution.

The following graph shows the distribution of page lengths across papers in our final program.

histogram of paper lengths for final accepted papers

In the end, 3 of 20 published papers (15%) were over 10 pages. More surprisingly, 11 of the accepted papers (55%) were below the old 10-page limit. Fears that some have expressed that page limits are the only thing keeping OpenSym from publishing enormous rambling manuscripts seem to be unwarranted—at least so far.

Bidding

Although, I won’t post any analysis or graphs, bidding worked well. With only two exceptions, every single assigned review was to someone who had bid “yes” or “maybe” for the paper in question and the vast majority went to people that had bid “yes.” However, this comes with one major proviso: people that did not bid at all were marked as “maybe” for every single paper.

Given a reviewer pool whose diversity of expertise matches that in your pool of authors, bidding works fantastically. But everybody needs to bid. The only problems with reviewers we had were with people that had failed to bid. It might be that reviewers who don’t bid are less committed to the conference, more overextended, more likely to drop things in general, etc. It might also be that reviewers who fail to bid get poor matches, which cause them to become less interested, willing, or able to do their reviews well and on time.

Having used bidding twice as chair or track-chair, my sense is that bidding is a fantastic thing to incorporate into any conference review process. The major limitations are that you need to build a program committee (PC) before the conference (rather than finding the perfect reviewers for specific papers) and you have to find ways to incentivize or communicate the importance of getting your PC members to bid.

Conclusions

The final results were a fantastic collection of published papers. Of course, it couldn’t have been possible without the huge collection of conference chairs, associate chairs, program committee members, external reviewers, and staff supporters.

Although we tried quite a lot of new things, my sense is that nothing we changed made things worse and many changes made things smoother or better. Although I’m not directly involved in organizing OpenSym 2018, I am on the OpenSym steering committee. My sense is that most of the changes we made are going to be carried over this year.

Finally, it’s also been announced that OpenSym 2018 will be in Paris on August 22-24. The call for papers should be out soon and the OpenSym 2018 paper deadline has already been announced as March 15, 2018. You should consider submitting! I hope to see you in Paris!

This Analysis

OpenSym used the gratis version of EasyChair to manage the conference, which doesn’t allow chairs to export data. As a result, data used in this postmortem was scraped from EasyChair using two Python scripts. Numbers and graphs were created using a knitr file that combines R visualization and analysis code with markdown to create the HTML directly from the datasets. I’ve made all the code I used to produce this analysis available in this git repository. I hope someone else finds it useful. Because the data contains sensitive information on the review process, I’m not publishing the data.


This blog post was originally posted on the Community Data Science Collective blog.

Planet DebianRussell Coker: More About the Thinkpad X301

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn’t realise when writing that post is that the X301 doesn’t have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys to turn on a light above the keyboard. This is really good for typing at night. While I can touch type, the small keyboard on a laptop makes it a little difficult, so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors, the system just entirely locks up. I have 2 DIMMs for it (2G and 4G), I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example it might only happen due to heat when the system is under sustained load which isn’t something I planned for that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board is much more than the laptop is worth and even replacing the battery will probably cost more than it’s worth. I previously bought a Thinkpad T420 at auction because it didn’t cost much more than getting a new battery and PSU for a T61 [2], and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90’s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade, it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (less than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

Planet DebianAxel Beckert: Tex Yoda II Mechanical Keyboard with Trackpoint

Here’s a short review of the Tex Yoda II Mechanical Keyboard with Trackpoint, a pointer to the next Swiss Mechanical Keyboard Meetup and why I ordered a $300 keyboard with less keys than a normal one.

Short Review of the Tex Yoda II

Pro
  • Trackpoint
  • Cherry MX Switches
  • Compact but heavy aluminium case
  • Backlight (optional)
  • USB C connector and USB A to C cable with angled USB C plug
  • All three types of Thinkpad Trackpoint caps included
  • Configurable layout with nice web-based configurator (might be opensourced in the future)
  • Fn+Trackpoint = scrolling (not further configurable, though)
  • Case not clipped, but screwed
  • Backlight brightness and Trackpoint speed configurable via key bindings (usually Fn and some other key)
  • Default Fn keybindings as side printed and backlit labels
  • Nice packaging
Contra
  • It’s only a 60% Keyboard (I prefer TKL) and the two common top rows are merged into one, switched with the Fn key.
  • Cursor keys by default (and labeled) on the right side (mapped to Fn + WASD) — maybe good for games, but not for me.
  • ~ on Fn-Shift-Esc
  • Occasional backlight flickering (low frequency)
  • Pulsed LED light effect (i.e. high frequency flickering) on all but the lowest brightness level
  • Trackpoint is very sensitive even in the slowest setting — use Fn+Q and Fn+E to adjust the trackpoint speed (“tps”)
  • No manual included or (obviously) downloadable.
  • Only the DIP switches 1-3 and 6 are documented, 4 and 5 are not. (Thanks gismo for the question about them!)
  • No more included USB hub like the Tex Yoda I had or the HHKB Lite 2 (USB 1.1 only) has.
My Modifications So Far
Layout Modifications Via The Web-Based Yoda 2 Configurator
  • Right Control and Menu key are Right and Left cursors keys
  • Fn+Enter and Fn+Shift are Up and Down cursor keys
  • Right Windows key is the Compose key (done in software via xmodmap; see the sketch after this list)
  • Middle mouse button is of course a middle click (not Fn as with the default layout).
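
One common way to do that xmodmap part, as a sketch only (keycodes and modifier handling can differ between setups), loaded once per X session with xmodmap ~/.Xmodmap:

! ~/.Xmodmap: make the right Windows/Super key act as Compose
remove mod4 = Super_R
keysym Super_R = Multi_key
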
Other Modifications
  • Clear dampening o-rings (clear, 50A) under each key cap for a more silent typing experience
  • Braided USB cable

Next Swiss Mechanical Keyboard Meetup

On Sunday, the 18th of February 2018, the 4th Swiss Mechanical Keyboard Meetup will happen, this time at ETH Zurich, building CAB, room H52. I’ll be there with at least my Tex Yoda II and my vintage Cherry G80-2100.

Why I ordered a $300 keyboard

(JFTR: It was actually USD $299 plus shipping from the US to Europe and customs fee in Switzerland. Can’t exactly find out how much of shipping and customs fee were actually for that one keyboard, because I ordered several items at once. It’s complicated…)

I always was and still am a big fan of Trackpoints, as commonly found on IBM and Lenovo Thinkpads as well as laptops from a few other manufacturers.

For a while I just used Thinkpads as my private everyday computer, first a Thinkpad T61, later a Thinkpad X240. At some point I also wanted a keyboard with Trackpoint on my workstation at work. So I ordered a Lenovo Thinkpad USB Keyboard with Trackpoint. Then I decided that I want a permanent workstation at home again and ordered two more such keyboards: One for the workstation at home, one for my Debian GNU/kFreeBSD running ASUS EeeBox (not affected by Meltdown or Spectre, yay! :-) which I often took with me to staff Debian booths at events. There, a compact keyboard with a built-in pointing device was perfect.

Then I met the guys from the Swiss Mechanical Keyboard Meetup at their 3rd meetup (pictures) and knew: I need a mechanical keyboard with Trackpoint.

IBM built one Model M with Trackpoint, the M13, but they’re hard to get. For example, ClickyKeyboards sells them, but doesn’t publish the price tag. :-/ Additionally, back then only two mouse buttons were usual, and I really need the third mouse button for unix-style pasting.

Then there’s the Unicomp Endura Pro, the legit successor of the IBM Model M13, but it’s only available with an IMHO very ugly color combination: light grey key caps in a black case. And they want approximately 50% of the price as shipping costs (to Europe). Additionally it didn’t have some other nice keyboard features I started to love: Narrow bezels are nice and keyboards with backlight (like the Thinkpad X240 ff. has) have their advantages, too. So … no.

Soon I found what I was looking for: the Tex Yoda, a nice, modern and quite compact mechanical keyboard with Trackpoint. Unfortunately it has been sold out for quite a few years, and more than 5000 people on Massdrop were waiting for its reintroduction.

And then the unexpected happened: the Tex Yoda II was announced. I knew I had to get one. From then on the main question was when and where it would be available. To my surprise it was not on Massdrop but at a rather normal dealer, MechanicalKeyboards.com.

At that time a friend heard me talking of mechanical keyboards and of being unsure about which keyboard switches I should order. He offered to lend me his KBTalking ONI TKL (Ten Key Less) keyboard with Cherry MX Brown switches for a while. That was great, because in theory, MX Brown switches were likely the most fitting ones for me. He also gave me two other non-functional keyboards with other Cherry MX switch colors (variants) for comparison. As another keyboard to compare I had my programmable Cherry G80-2100 from the early ’90s with vintage Cherry MX Black switches. Another keyboard to compare with is my Happy Hacking Keyboard (HHKB) Lite 2 (PD-KB200B/U) which I got as a gift a few years ago. While the HHKB once was a status symbol amongst hackers and system administrators, the old models (like this one) only had membrane type keyboard switches. (They nevertheless still seem to get built, but are only sold in Japan.)

I noticed that I was quickly able to type faster with the Cherry MX Brown switches and the TKL layout than with the classic Thinkpad layout and its rubber dome switches or with the HHKB. So two things became clear:

  • At least for now I want Cherry MX Brown switches.
  • I want a TKL (ten key less) layout, i.e. one without the number block but with the cursor block. As with the Lenovo Thinkpad USB Keyboards and the HHKB, I really like the cursor keys being in the easy to reach lower right corner. The number pad is just in the way to have that.

Unfortunately the Tex Yoda II was without that cursor block. But since it otherwise fitted perfectly into my wishlist (Trackpoint, Cherry MX Brown switches available, Backlight, narrow bezels, heavy weight), I had to buy one once available.

So in early December 2017, I ordered a Tex Yoda II White Backlit Mechanical Keyboard (Brown Cherry MX) at MechanicalKeyboards.com.

Because I was nevertheless keen on a TKL-sized keyboard I also ordered a Deck Francium Pro White LED Backlit PBT Mechanical Keyboard (Brown Cherry MX) which has an ugly font on the key caps, but was available for a reduced price at that time, and the controller got quite good reviews. And there was that very nice Tai-Hao 104 Key PBT Double Shot Keycap Set - Orange and Black, so the font issue was quickly solved with keycaps in my favourite colour: orange. :-)

The package arrived in early January. The aluminum case of the Tex Yoda II was even nicer than I thought. Unfortunately they’ve sent me a Deck Hassium full-size keyboard instead of the wanted TKL-sized Deck Francium. But the support of MechanicalKeyboards.com was very helpful and I assume I can get the keyboard exchanged at no cost.

Krebs on SecuritySerial SWATter Tyler “SWAuTistic” Barriss Charged with Involuntary Manslaughter

Tyler Raj Barriss, a 25-year-old serial “swatter” whose phony emergency call to Kansas police last month triggered a fatal shooting, has been charged with involuntary manslaughter and faces up to eleven years in prison.

Tyler Raj Barriss, in an undated selfie.

Barriss’s online alias — “SWAuTistic” — is a nod to a dangerous hoax known as “swatting,” in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with potentially deadly force.

Barriss was arrested in Los Angeles this month for alerting authorities in Kansas to a fake hostage situation at an address in Wichita, Kansas on Dec. 28, 2017.

Police responding to the alert surrounded the home at the address Barriss provided and shot 28-year old Andrew Finch as he emerged from the doorway of his mother’s home. Finch, a father of two, was unarmed, and died shortly after being shot by police.

The officer who fired the shot that killed Finch has been identified as a seven-year veteran with the Wichita department. He has been placed on administrative leave pending an internal investigation.

Following his arrest, Barriss was extradited to a Wichita jail, where he had his first court appearance via video on Friday. The Los Angeles Times reports that Barriss was charged with involuntary manslaughter and could face up to 11 years and three months in prison if convicted.

The moment that police in Kansas fired a single shot that killed Andrew Finch (in doorway of his mother’s home).

Barriss also was charged with making a false alarm — a felony offense in Kansas. His bond was set at $500,000.

Sedgwick County District Attorney Marc Bennett told The LA Times that Barriss made the fake emergency call at the urging of several other individuals, and that authorities have identified other “potential suspects” who may also face criminal charges.

Barriss sought an interview with KrebsOnSecurity on Dec. 29, just hours after his hoax turned tragic. In that interview, Barriss said he routinely called in bomb threats and fake hostage situations across the country in exchange for money, and that he began doing it after his own home was swatted.

Barriss told KrebsOnSecurity that he felt bad about the incident, but that it wasn’t he who pulled the trigger. He also enthused about the rush that he got from evading police.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” he wrote in an instant message conversation with this author.

In a jailhouse interview Friday with local Wichita news station KWCH, Barriss said he feels “a little remorse for what happened.”

“I never intended for anyone to get shot and killed,” he reportedly told the news station. “I don’t think during any attempted swatting anyone’s intentions are for someone to get shot and killed.”

The Wichita Eagle reports that Barriss also has been charged in Calgary, Canada with public mischief, fraud and mischief for allegedly making a similar swatting call to authorities there. However, no one was hurt or killed in that incident.

Barriss was convicted in 2016 for calling in a bomb threat to an ABC affiliate in Los Angeles. He was sentenced to two years in prison for that stunt, but was released in January 2017.

Using his SWAuTistic alias, Barriss claimed credit for more than a hundred fake calls to authorities across the nation. In an exclusive story published here on Jan. 2, KrebsOnSecurity dissected several months’ worth of tweets from SWAuTistic’s account before those messages were deleted. In those tweets, SWAuTistic claimed responsibility for calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences.

In his public tweets, SWAuTistic claimed credit for bomb threats against a convention center in Dallas and a high school in Florida, as well as an incident that disrupted a much-watched meeting at the U.S. Federal Communications Commission (FCC) in November.

But in private online messages shared by his online friends and acquaintances SWAuTistic can be seen bragging about his escapades, claiming to have called in fake emergencies at approximately 100 schools and 10 homes.

The serial swatter known as “SWAuTistic” claimed in private conversations to have carried out swattings or bomb threats against 100 schools and 10 homes.

,

Planet DebianSteinar H. Gunderson: Retpoline-enabled GCC

Since I assume there are people out there that want Spectre-hardened kernels as soon as possible, I pieced together a retpoline-enabled build of GCC. It's based on the latest gcc-snapshot package from Debian unstable with H.J.Lu's retpoline patches added, but built for stretch.

Obviously this is really scary prerelease code and will possibly eat babies (and worse, it hasn't taken into account the last-minute change of retpoline ABI, so it will break with future kernels), but it will allow you to compile 4.15.0-rc8 with CONFIG_RETPOLINE=y, and also allow you to assess the cost of retpolines (-mindirect-branch=thunk) in any particularly sensitive performance userspace code.
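
Just as an illustration (the path assumes Debian's usual gcc-snapshot layout and the source file name is made up), measuring that userspace cost could look like:

/usr/lib/gcc-snapshot/bin/gcc -O2 -mindirect-branch=thunk -c hot_loop.c
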

There will be upstream backports at least to GCC 7, but probably pretty far back (I've seen people talk about all the way to 4.3). So you won't have to run my crappy home-grown build for very long—it's a temporary measure. :-)

Oh, and it made Stockfish 3% faster than with GCC 6.3! Hooray.

Krebs on SecurityCanadian Police Charge Operator of Hacked Password Service Leakedsource.com

Canadian authorities have arrested and charged a 27-year-old Ontario man for allegedly selling billions of stolen passwords online through the now-defunct service Leakedsource.com.

The now-defunct Leakedsource service.

On Dec. 22, 2017, the Royal Canadian Mounted Police (RCMP) charged Jordan Evan Bloom of Thornhill, Ontario for trafficking in identity information, unauthorized use of a computer, mischief to data, and possession of property obtained by crime. Bloom is expected to make his first court appearance today.

According to a statement from the RCMP, “Project Adoration” began in 2016 when the RCMP learned that LeakedSource.com was being hosted by servers located in Quebec.

“This investigation is related to claims about a website operator alleged to have made hundreds of thousands of dollars selling personal information,” said Rafael Alvarado, the officer in charge of the RCMP Cybercrime Investigative Team. “The RCMP will continue to work diligently with our domestic and international law enforcement partners to prosecute online criminality.”

In January 2017, multiple news outlets reported that unspecified law enforcement officials had seized the servers for Leakedsource.com, perhaps the largest online collection of usernames and passwords leaked or stolen in some of the worst data breaches — including three billion credentials for accounts at top sites like LinkedIn and Myspace.

Jordan Evan Bloom. Photo: RCMP.

LeakedSource in October 2015 began selling access to passwords stolen in high-profile breaches. Enter any email address on the site’s search page and it would tell you if it had a password corresponding to that address. However, users had to select a payment plan before viewing any passwords.

The RCMP alleges that Jordan Evan Bloom was responsible for administering the LeakedSource.com website, and earned approximately $247,000 from trafficking identity information.

A February 2017 story here at KrebsOnSecurity examined clues that LeakedSource was administered by an individual in the United States.  Multiple sources suggested that one of the administrators of LeakedSource also was the admin of abusewith[dot]us, a site unabashedly dedicated to helping people hack email and online gaming accounts.

That story traced those clues back to a Michigan man who ultimately admitted to running Abusewith[dot]us, but who denied being the owner of LeakedSource.

The RCMP said it had help in the investigation from The Dutch National Police and the FBI. The FBI could not be immediately reached for comment.

LeakedSource was a curiosity to many, and for some journalists a potential source of news about new breaches. But unlike services such as BreachAlarm and HaveIBeenPwned.com, LeakedSource did nothing to validate users.

This fact, critics charged, showed that the proprietors of LeakedSource were purely interested in making money and helping others pillage accounts.

Since the demise of LeakedSource.com, multiple, competing new services have moved in to fill the void. These services — which are primarily useful because they expose when people re-use passwords across multiple accounts — are popular among those involved in a variety of cybercriminal activities, particularly account takeovers and email hacking.

CryptogramFighting Ransomware

No More Ransom is a central repository of keys and applications for ransomware, so people can recover their data without paying. It's not complete, of course, but is pretty good against older strains of ransomware. The site is a joint effort by Europol, the Dutch police, Kaspersky, and McAfee.

Worse Than FailureRepresentative Line: Tern Back

In the process of resolving a ticket, Pedro C found this representative line, which has nothing to do with the bug he was fixing, but was just something he couldn’t leave un-fixed:

$categories = (isset($categoryMap[$product['department']]) ?
                            (isset($categoryMap[$product['department']][$product['classification']])
                                        ?
                                    $categoryMap[$product['department']][$product['classification']]
                                        : NULL) : NULL);

Yes, the venerable ternary expression, used once again to obfuscate and confuse.

It took Pedro a few readings before he even understood what it did, and then it took him a few more readings to wonder about why anyone would solve the problem this way. Then, he fixed it.

$department = $product['department'];
$classification = $product['classification'];
$categories = NULL;
//ED: isset never triggers an error with an undefined expression, but simply returns false, because PHP
if( isset($categoryMap[$department][$classification]) ) { 
    $categories = $categoryMap[$department][$classification];
}

He submitted the change for code-review, but it was kicked back. You see, Pedro had fixed the bug, which had a ticket associated with it. There were to be no code changes without a ticket from a business user, and since this change wasn’t strictly related to the bug, he couldn’t submit it.

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet DebianCyril Brulebois: Quick recap of 2017

I haven’t been posting anything on my personal blog in a long while, let’s fix that!

Part of the reason for this is that I’ve been busy documenting progress on the Debian Installer on my company’s blog. So far, the following posts were published there:

After the Stretch release, it was time to attend DebConf’17 in Montreal, Canada. I’ve presented the latest news on the Debian Installer front there as well. This included a quick demo of my little framework which lets me run automatic installation tests. Many attendees mentioned openQA as the current state of the art technology for OS installation testing, and Philip Hands started looking into it. Right now, my little thing is still useful as it is, helping me reproduce regressions quickly, and testing bug fixes… so I haven’t been trying to port that to another tool yet.

I also gave another presentation in two different contexts: once at a local FLOSS meeting in Nantes, France and once during the mini-DebConf in Toulouse, France. Nothing related to Debian Installer this time, as the topic was how I helped a company upgrade thousands of machines from Debian 6 to Debian 8 (and to Debian 9 since then). It was nice to have Evolix people around, since we shared our respective experience around automation tools like Ansible and Puppet.

After the mini-DebConf in Toulouse, another event: the mini-DebConf in Cambridge, UK. I tried to give a lightning talk about “how snapshot.debian.org helped save the release(s)” but clearly speed was lacking, and/or I had too many things to present, so that didn’t work out as well as I hoped. Fortunately, no time constraints when I presented that during a Debian meet-up in Nantes, France. :)

Since Reproducible Tails builds were announced, it seemed like a nice opportunity to document how my company got involved into early work on reproducibility for the Tails project.

On an administrative level, I’m already done with all the paperwork related to the second financial year. \o/

Next things I’ll likely write about: the first two D-I Buster Alpha releases (many blockers kept popping up, it was really hard to release), and a few more recent release critical bug reports.

Planet DebianDaniel Pocock: RHL'18 in Saint-Cergue, Switzerland

RHL'18 was held at the centre du Vallon à St-Cergue, the building in the very center of this photo, at the bottom of the piste:

People from various free software communities in the region attended for a series of presentations, demonstrations, socializing and ski. This event is a lot of fun and I would highly recommend that people look out for the next edition. (subscribe to rhl-annonces on lists.swisslinux.org for a reminder email)

Ham radio demonstration

I previously wrote about building a simple antenna for shortwave (HF) reception with software defined radio. That article includes links to purchase all the necessary parts from various sources. Everything described in that article, together with some USB sticks running Debian Hams Live (bootable ham radio operating system), some rolls of string and my FT-60 transceiver, fits comfortably into an OSCAL tote bag like this:

It is really easy to take this kit to an event anywhere, set it up in 10 minutes and begin exploring the radio spectrum. Whether it is a technical event or a village fair, radio awakens curiosity in people of all ages and provides a starting point for many other discussions about technological freedom, distributing stickers and inviting people to future events. My previous blog contains photos of what is in the bag and a video demo.

Open Agriculture Food Computer discussion

We had a discussion about progress building an Open Agriculture (OpenAg) food computer in Switzerland. The next meeting in Zurich will be held on 30 January 2018, please subscribe to the forum topic to receive further details.

Preparing for Google Summer of Code 2018

In between eating fondue and skiing, I found time to resurrect some of my previous project ideas for Google Summer of Code. Most of them are not specific to Debian, several of them need co-mentors, please contact me if you are interested.

,

Planet DebianSean Whitton: lastjedi

A few comments on Star Wars: The Last Jedi.

Vice Admiral Holdo’s subplot was a huge success. She had to make a very difficult call over which she knew she might face a mutiny from the likes of Poe Dameron. The core of her challenge was that there was no speech or argument she could have given that would have placated Dameron and restored unity to the crew. Instead, Holdo had to press on in the face of that disunity. This reflects the fact that, sometimes, living as one should demands pressing on in the face of deep disagreement with others.

Not making it clear that Dameron was in the wrong until very late in the film was a key component of the successful portrayal of the unpleasantness of what Holdo had to do. If instead it had become clear to the audience early on that Holdo’s plan was obviously the better one, we would not have been able to observe the strength of Holdo’s character in continuing to pursue her plan despite the mutiny.

One thing that I found weak about Holdo was her dress. You cannot be effective on the frontlines of a hot war in an outfit like that! Presumably the point was to show that women don’t have to give up their femininity in order to take tough tactical decisions under pressure, and that’s indeed something worth showing. But this could have been achieved by much more subtle means. What was needed was to have her be the character with the most feminine outfit, and it would have been possible to fulfill that condition by having her wear something much more practical. Thus, having her wear that dress was crude and implausible overkill in the service of something otherwise worth doing.

I was very disappointed by most of the subplot with Rey and Luke: both the content of that subplot, and its disconnection from the rest of film.

Firstly, the content. There was so much that could have been explored that was not explored. Luke mentions that the Jedi failed to stop Darth Sidious “at the height of their powers”. Well, what did the Jedi get wrong? Was it the Jedi code; the celibacy; the bureaucracy? Is their light side philosophy too absolutist? How are Luke’s beliefs about this connected to his recent rejection of the Force? When he lets down his barrier and reconnects with the force, Yoda should have had much more to say. The Force is, perhaps, one big metaphor for certain human capacities not emphasised by our contemporary culture. It is at the heart of Star Wars, and it was at the heart of Empire and Rogue One. It ought to have been at the heart of The Last Jedi.

Secondly, the lack of integration with the rest of the film. One of the aspects of Empire that enables its importance as a film, I suggest, is the tight integration and interplay between the two main subplots: the training of Luke under Yoda, and attempting to shake the Empire off the trail of the Millennium Falcon. Luke wants to leave the training unfinished, and Yoda begs him to stay, truly believing that the fate of the galaxy depends on him completing the training. What is illustrated by this is the strengths and weaknesses of both Yoda’s traditional Jedi view and Luke’s desire to get on with fighting the good fight, the latter of which is summed up by the binary sunset scene from A New Hope. Tied up with this desire is Luke’s love for his friends; this is an important strength of his, but Yoda has a point when he says that the Jedi training must be completed if Luke is to be ultimately successful. While the Yoda subplot and what happens at Cloud City could be independently interesting, it is only this integration that enables the film to be great. The heart of the integration is perhaps the Dark Side Cave, where two things are brought together: the challenge of developing the relationship with oneself possessed by a Jedi, and the threat posed by Darth Vader.

In the Last Jedi, Rey just keeps saying that the galaxy needs Luke, and eventually Luke relents when Kylo Ren shows up. There was so much more that could have been done with this! What is it about Rey that enables her to persuade Luke? What character strengths of hers are able to respond adequately to Luke’s fear of the power of the Force, and doubt regarding his abilities as a teacher? Exploring these things would have connected together the rebel evacuation, Rey’s character arc and Luke’s character arc, but these three were basically independent.

(Possibly I need to watch the cave scene from The Last Jedi again, and think harder about it.)

Planet DebianDirk Eddelbuettel: digest 0.6.14

Another small maintenance release, version 0.6.14, of the digest package arrived on CRAN and in Debian today.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.

Just like release 0.6.13 a few weeks ago, this release accommodates another request by Luke and Tomas and changes two uses of NAMED to MAYBE_REFERENCED which helps in the transition to the new reference counting model in R-devel. Thierry also spotted a minor wart in how sha1() tested type for matrices and corrected that, and I converted a few references to https URLs and corrected one now-dead URL.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMario Lang: I pushed an implementation of myself to GitHub

Roughly 4 years ago, I mentioned that there appears to be an esoteric programming language which shares my full name.

I know, it is really late, but two days ago, I discovered Racket. As a Lisp person, I immediately felt at home. And realizing how the language dispatch mechanism works, I couldn't resist writing a Racket implementation of MarioLANG. A nice play on words and a good toy project to get my feet wet.

Racket programs always start with #lang. How convenient. MarioLANG programs for Racket therefore look something like this:

#lang mario
++++++++++++
===========+:
           ==

So much about abusing coincidences. Phew, this was a fun weekend project! And it has some potential for more challenges. Right now, it is only an interpreter, because it appears to be tricky to compile a 2d instruction "space" to traditional code. MarioLANG not only allows for nested loops as BrainFuck does, it also includes weird concepts like the reversal of the instruction pointer direction. Coupled with the "skip" ([) instruction, this allows creating loops which have two exit conditions and reverse code execution on every pass. Something like this:

@[ some brainfuck [@
====================

And since this is a 2d programming language, this theoretical loop could be entered by jumping onto any of the instructions in between from above. And the heading could be either leftward or rightward when entering.

Discovering these patterns and translating them to compilable code is quite beyond me right now. Let's see what time will bring.

Planet DebianIustin Pop: SSL migration

SSL migration

This week I managed to finally migrate my personal website to SSL, and on top of that migrate the SMTP/IMAP services to certificates signed by a "proper" CA (instead of my own). This however was more complex than I thought…

Let's encrypt?

I first wanted to do this when Let's Encrypt became available, but the way it works (with short-term certificates and automated renewal) put me off at first. The certbot tool needs to make semi-arbitrary outgoing requests to renew the certificates, and on public machines I have a locked-down outgoing traffic policy. So I gave up, temporarily…

I later found out that at least for now (for the current protocol), certbot only needs to talk to a certain API endpoint, and after some more research, I realized that the http-01 protocol is very straightforward, only needing to allow some specific plain http URLs.

So then:

Issue 1: allowing outgoing access to a given API endpoint, somewhat restricted. I solved this by using a proxy, forcing certbot to go through it via env vars, learning about systemctl edit on the way, and from the proxy, only allowing that hostname. Quite weak, but at least not "open policy".
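
A minimal sketch of that systemctl edit override (unit name and proxy host are placeholders, not my actual setup; certbot, via the Python requests library, honours the standard proxy environment variables):

# systemctl edit certbot.service
[Service]
Environment="http_proxy=http://proxy.internal.example:3128"
Environment="https_proxy=http://proxy.internal.example:3128"
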

Issue 2: due to how http-01 works, it requires leaving some specific paths on plain http, which means you can't have (in Apache) a "redirect everything to https" config. While fixing this I learned about mod_macro, which is quite interesting (and doesn't need an external pre-processor).
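
The core of the exception looks roughly like this (shown here with plain mod_rewrite rather than the mod_macro version; the path is the standard ACME challenge location):

RewriteEngine on
RewriteCond %{REQUEST_URI} !^/\.well-known/acme-challenge/
RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [R=301,L]
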

The only remaining problem is that you can't automatically renew certificates for non-externally-accessible systems; the dns challenge also needs changes to externally-visible state, so it's more or less the same. So:

Issue 3: For internal websites, I still need a solution if my own CA (self-signed, needs certificates added to clients) is not acceptable.

How did it go?

It seems that using SSL is about more than SSLEngine on. I learned quite a few things in this exercise.

CAA

DNS Certification Authority Authorization is pretty nice, and although it's not a strong guarantee (against malicious CAs), it gives some more signals that proper clients could check ("For this domain, only this CA is expected to sign certificates"); also, trivial to configure, with the caveat that one would need DNSSEC as well for end-to-end checks.
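
For example, a single zone-file record is enough to state that only Let's Encrypt should issue certificates for a domain (the domain name is a placeholder):

example.com.    IN    CAA    0 issue "letsencrypt.org"
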

OCSP stapling

I was completely unaware of OCSP Stapling, and yay, it seems like a good solution for actually verifying that the certs were not revoked. However… there are many issues with it:

  • there needs to be proper configuration on the webserver to not cause more problems than without; Apache, at least, needs the cache lifetime increased, the sending of error responses disabled (for transient CA issues), etc.
  • but even more, it requires the web server user to be able to make "random" outgoing requests, which IMHO is a big no-no
  • even the command line tools (i.e. openssl ocsp) are somewhat deficient: no proxy support (while s_client can use one)

So the proper way to do this seems to be a separate piece of software, isolated from the webserver, that does proper/eager refresh of the OCSP responses while handling errors well.

Issue 4: No OCSP until I find a good way to do it.

HSTS, server-side and preloading

HTTP Strict Transport Security represents a commitment to encryption: once published with the recommended lifetime, browsers will remember that the website shouldn't be accessed over plain http, so you can't roll back.
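
In Apache this is a single mod_headers line; the one-year max-age below is just the commonly recommended value:

Header always set Strict-Transport-Security "max-age=31536000"
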

Preloading HSTS is even stronger, and so far I haven't done it. Seems worthwhile, but I'll wait another week or so ☺ It's easily doable online.

HPKP

HTTP Public Key Pinning seems dangerous, at least according to some posts. Properly deployed, it would solve a number of problems with the public key infrastructure, but it is still complex and a lot of overhead.

Certificate chains

Something I didn't know before is that the servers are supposed to serve the entire chain; I thought, naïvely, that just the server certificate is enough, since the browsers will have the root CA, but the intermediate certificates seem to be problematic.

So, one needs to properly serve the full chain (Let's Encrypt makes this trivial, by the way), and also monitor that it is so.

Ciphers and SSL protocols

OpenSSL disabled SSLv2 in recent builds, but at least Debian stable still has SSLv3+ enabled and Apache does not disable it, so if you put your shiny new website through an SSL checker you get many issues (related strictly to ciphers).

I spent a bit of time researching and getting to the conclusion that:

  • every reasonable client (for my small webserver) supports TLSv1.1+, so disabling SSLv3/TLSv1.0 solved a bunch of issues
  • however, even for TLSv1.1+, a number of ciphers are not recommended by US standards, but explicitly disabling ciphers is a pain because I don't see a way to make it "cheap" (without needing manual maintenance); so there's that: my website is not HIPAA compliant due to the Camellia cipher.

Issue 5: Weak default configs

Issue 6: Getting perfect ciphers is not easy.

However, while not perfect, getting a proper config once you have done the research is pretty trivial.

My apache config. Feedback welcome:

SSLCipherSuite HIGH:!aNULL
SSLHonorCipherOrder on
SSLProtocol all -SSLv3 -TLSv1

And similarly for dovecot:

ssl_cipher_list = HIGH:!aNULL
ssl_protocols = !SSLv3 !TLSv1
ssl_prefer_server_ciphers = yes
ssl_dh_parameters_length = 4096

The last line there, the dh_params, I found via nmap, as my previous config had it at 1024 bits, which is weaker than the key, defeating the purpose of a long key. Which leads to the next point:

DH parameters

It seems that DH parameters can be an issue, in the sense that way too many sites/people reuse the same params. Dovecot (in Debian) generates its own, but Apache (AFAIK) not, and needs explicit configuration added to use your own.
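
For Apache, a sketch of that explicit configuration (the file path is a placeholder; SSLOpenSSLConfCmd needs Apache 2.4.8+ built against OpenSSL 1.0.2 or newer):

# generate once: openssl dhparam -out /etc/apache2/dhparams.pem 4096
SSLOpenSSLConfCmd DHParameters /etc/apache2/dhparams.pem
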

Issue 7: Investigate DH parameters for all software (postfix, dovecot, apache, ssh); see instructions.

Tools

A number of interesting tools:

  • Online resources to analyse https config: e.g. SSL labs, and htbridge; both give very detailed information.
  • CAA checker (but this is trivial).
  • nmap ciphers report: nmap --script ssl-enum-ciphers, which is very useful, although I don't think this works for STARTTLS protocols.
  • Cert Spotter from SSLMate. This seems to be useful as a complement to CAA (CAA being the policy, and Cert Spotter the monitoring for said policy), but it goes beyond it (key sizes, etc.); for the expiration part, I think nagios/icinga is easier if you already have it setup (check_http has options for lifetime checks).
  • Certificate chain checker; trivial, but a useful extra check that the configuration is right.

Summary

Ah, the good old days of plain http. SSL seems to add a lot of complexity; I'm not sure how much is needed and how much could actually be removed by smarter software. But, not too bad, a few evenings of study is enough to get a start; probably the bigger cost is in the ongoing maintenance and keeping up with the changes.

Still, a number of unresolved issues. I think the next goal will be to find a way to properly do OCSP stapling.

Planet DebianDaniel Leidert: Make 'bts' (devscripts) accept TLS connection to mail server with self signed certificate

My mail server runs with a self signed certificate. So bts, configured like this ...


BTS_SMTP_HOST=mail.wgdd.de:587
BTS_SMTP_AUTH_USERNAME='user'
BTS_SMTP_AUTH_PASSWORD='pass'

...lately refused to send mails with this error:


bts: failed to open SMTP connection to mail.wgdd.de:587
(SSL connect attempt failed error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed)

After searching a bit, I found a way to fix this locally without turning off the server certificate verification. The fix belongs in the send_mail() function. When calling the Net::SMTPS->new() constructor, it is possible to add the fingerprint of my self-signed certificate via the SSL_fingerprint option, like this:


if (have_smtps) {
    $smtp = Net::SMTPS->new($host, Port => $port,
                            Hello => $smtphelo, doSSL => 'starttls',
                            SSL_fingerprint => 'sha1$hex-fingerprint')
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
} else {
    $smtp = Net::SMTP->new($host, Port => $port, Hello => $smtphelo)
        or die "$progname: failed to open SMTP connection to $smtphost\n($@)\n";
}
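
The fingerprint itself can be obtained with something like the following (host and port as configured above); the hex fingerprint reported by openssl goes after the sha1$ prefix (the colons may need to be removed, depending on the IO::Socket::SSL version):

openssl s_client -connect mail.wgdd.de:587 -starttls smtp < /dev/null 2>/dev/null \
  | openssl x509 -noout -fingerprint -sha1
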

Pretty happy to be able to use the bts command again.

Planet DebianAndreas Bombe: Fixing a Nintendo Game Boy Screen

Over the holidays my old Nintendo Game Boy (the original DMG-01 model) has resurfaced. It works, but the display had a bunch of vertical lines near the left and right border that stay blank. Apparently a common problem with these older Game Boys and the solution is to apply heat to the connector foil upper side to resolder the contacts hidden underneath. There’s lots of tutorials and videos on the subject so I won’t go into much detail here.

Just one thing: The easiest way is to use a soldering iron (the foil is pretty heat resistant, it has to be soldered during production after all) and move it along the top at the affected locations. Which I tried at first and it kind of works but takes ages. Some columns reappear, others disappear, reappeared columns disappear again… In someone’s comment I read that they needed over five minutes until it was fully fixed!

So… simply apply a small drop of solder to the tip. That’s what you do for better heat transfer in normal soldering and of course it also works here (since the foil connector back doesn’t take solder this doesn’t make a mess or anything). That way, the missing columns reappeared practically instantly at the touch of the solder iron and stayed fixed. Temperature setting was 250°C, more than sufficient for the task.

This particular Game Boy always had issues with the speaker stopping to work but we never had it replaced, I think because the problem was intermittent. After locating the bad solder joint on the connector and reheating it this problem was also fixed. Basically this almost 28 year old device is now in better working condition than it ever was.

,

Don MartiEasy question with too many wrong answers

Content warning: Godwin's Law.

Here's a marketing question that should be easy.

How much of my brand's ad budget goes to Nazis?

Here's the right answer.

Zero.

And here's a guy who still seems to be having some trouble answering it: Dear Google (GOOG): Please stop using my advertising dollars to monetize hate speech.

If you're responsible for a brand and somewhere in the mysterious tubes of adtech your money is finding its way to Nazis, what is the right course of action?

One wrong answer is to write a "please help me" letter to a company that will just ignore it. That's just admitting to knowingly sending money to Nazis, which is clearly wrong.

Here's another wrong idea, from the upcoming IAB Annual Leadership Meeting session on "brand safety" (which is the nice, sanitary professional-sounding term for "trying not to sponsor Nazis, but not too hard.")

Threats to brand safety arise internally and externally, in your control and out of your control—and the stakes have never been higher. Learn how to minimize brand safety risks and maximize odds of survival when your brand takes a hit (spoiler alert: overreacting is as bad as underreacting). Best Buy and Starcom share best practices based on real-world encounters with brand safety issues.

Really, people? Overreacting is as bad as underreacting? The IAB wants you to come to a deluxe conference about how it's fine to send a few bucks to Nazis here and there as long as it keeps their whole adtech/adfraud gravy train running on time.

I disagree. If Best Buy is fine with (indirectly of course) paying the occasional Nazi so that the IAB companies can keep sending them valuable eyeballs from the cheapest possible sites, then I can shop elsewhere.

Any nationalist extremist movement has its obvious supporters, who wear the outfits and get the tattoos and go march in the streets and all that stuff, and also the quiet supporters, who come up with the money and make nice with the powers that be. The supporters who can keep it deniable.

Can I, as a potential customer from the outside, tell the difference between quiet Nazi supporters and people who are just bad at online advertising and end up supporting Nazis by mistake? Of course not. Do I care? Of course not. If you're not willing to put the basic "don't pay Nazis to do Nazi stuff" rule ahead of a few ad clicks, I don't want your brand anyway. And I'll make sure to install and use the tracking protection tools that help keep my good data away from bad sites.

,

Planet DebianNorbert Preining: Scala: debug logging facility and adjustment of logging level in code

As soon as users start to use your program, you want to implement some debug facilities with logging, and allow them to be turned on via command line switches or GUI elements. I was surprised that doing this in Scala wasn’t as easy as I thought, so I collected the information on how to set it up.

Basic ingredients are the scala-logging library which wraps up slf4j, the Simple Logging Facade for Java, and a compatible backend, I am using logback, a successor of Log4j.

At the current moment adding the following lines to your build.sbt will include the necessary libraries:

libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.7.2"
libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.2.3"

Next is to set up the default logging by adding a file src/main/resources/logback.xml containing at least the following entry for logging to stdout:

<configuration>
    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="STDOUT" />
    </root>
</configuration>

Note the default level here is set to info. More detailed information on the format of the logback.xml can be found here.

In the scala code a simple mix-in of the LazyLogging trait is enough to get started:

import com.typesafe.scalalogging.LazyLogging

object ApplicationMain extends App with LazyLogging {
  ...
  logger.trace(...)
  logger.debug(...)
  logger.info(...)
  logger.warn(...)
  logger.error(...)

(the above calls are listed in order of increasing severity) The messages will only be shown if the logger call has a severity at or above what is configured in logback.xml (or INFO by default). That means that anything of level trace and debug will not be shown.

But we don’t want to always ship a new program with a different logback.xml, so changing the default log level programmatically is more or less a strict requirement. Fortunately a brave soul posted a solution on stackexchange, namely

import ch.qos.logback.classic.{Level,Logger}
import org.slf4j.LoggerFactory

  LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).
    asInstanceOf[Logger].setLevel(Level.DEBUG)

This can be used to evaluate command line switches and activate debugging on the fly. The way I often do this is to allow flags -q, -qq, -d, and -dd for quiet, extra quiet, debug, extra debug, which would be translated to the logging levels warning, error, debug, and trace, respectively. Multiple invocations select the maximum debug level (so -q -d does turn on debugging).

This can be activated by the following simple code:

val cmdlnlog: Int = args.map( {
    case "-d" => Level.DEBUG_INT
    case "-dd" => Level.TRACE_INT
    case "-q" => Level.WARN_INT
    case "-qq" => Level.ERROR_INT
    case _ => -1
  } ).foldLeft(Level.OFF_INT)(scala.math.min(_,_))
if (cmdlnlog == -1) {
  // Unknown log level has been passed in, error out
  Console.err.println("Unsupported command line argument passed in, terminating.")
  sys.exit(0)
}
// if nothing has been passed on the command line, use INFO
val newloglevel = if (cmdlnlog == Level.OFF_INT) Level.INFO_INT else cmdlnlog
LoggerFactory.getLogger(org.slf4j.Logger.ROOT_LOGGER_NAME).
  asInstanceOf[Logger].setLevel(Level.toLevel(newloglevel))

where args are the command line parameters (in case of ScalaFX that would be parameters.unnamed, in case of a normal Scala application the argument to the main entry function). More complicated command line arguments of course need a more sophisticated approach.

Hope that helps.

CryptogramFriday Squid Blogging: Japanese "Dude Food" Includes Squid

This seems to be a trend.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesScreen Capping the News Shows Different Stories for Different Folks

During a year marked by social and political turmoil, the media has found itself under scrutiny from politicians, academics, the general public, and increasingly self-reflexive journalists and editors. Fake news has entered our lexicon both as a form of political meddling from foreign powers and a dismissive insult directed towards any less-than-complimentary news coverage of the current administration.

Paying attention to where people are getting their news and what that news is telling them is an important step to understanding our increasingly polarized society and our seeming inability to talk across political divides. The insight can also help us get at those important and oh-too common questions of “how could they think that?!?” or “how could they support that politician?!?”

My interest in this topic was sparked a few months ago when I began paying attention to the top four stories and single video that magically appear whenever I swipe left on my iPhone. The stories compiled by the Apple News App provide a snapshot of what the dominant media sources consider the newsworthy happenings of the day. After paying an almost obsessive attention to my newsfeed for a few weeks—and increasingly annoying my friends and colleagues by telling them about the compelling patterns I was seeing—I started to take screenshots of the suggested news stories on a daily or twice daily basis. The images below were gathered over the past two months.

It is worth noting that the Apple News App adapts to a user’s interests to ensure that it provides “the stories you really care about.” To minimize this complicating factor I avoided clicking on any of the suggested stories and would occasionally verify that my news feed had remained neutral through comparing the stories with other iPhone users whenever possible.

Some of the differences were to be expected—People simply cannot get enough of celebrity pregnancies and royal weddings. The Washington Post, The New York Times, and CNN frequently feature stories that are critical of the current administration, and Fox News is generally supportive of President Trump and antagonistic towards enemies of the Republican Party.

(Click to Enlarge)

However, there are two trends that I would like to highlight:

1) A significant number of Fox News headlines offer direct critiques of other media sites and their coverage of key news stories. Rather than offering an alternative reading of an event or counter-coverage, the feature story undercuts the journalistic work of other news sources through highlighting errors and making accusations of partisanship motivations. In some cases, this even takes the form of attacking left-leaning celebrities as proxy to a larger movement or idea. Neither of these tactics were employed by any of the other news sources during my observation period.

(Click to Enlarge)

2) Fox News often featured coverage of vile, treacherous, or criminal acts committed by individuals as well as horrifying accidents. This type of story stood out both due to the high frequency and the juxtaposition to coverage of important political events of the time—murderous pigs next to Senate resignations and sexually predatory high school teachers next to massively destructive California wildfires. In a sense, Fox News is effectively cultivating an “asociological” imagination by shifting attention to the individual rather than larger political processes and structural changes. In addition, the repetitious coverage of the evil and devious certainly contributes to a fear-based society and confirms the general loss of morality and decline of conservative values.

(Click to Enlarge)

It is worth noting that this move away from the big stories of the day also occurs through a surprising amount of celebrity coverage.

(Click to Enlarge)

From the screen captures I have gathered over the past two months, it seems apparent that we are not just consuming different interpretations of the same event, but rather we are hearing different stories altogether. This effectively makes the conversation across political affiliation (or more importantly, news source affiliation) that much more difficult if not impossible.

I recommend taking time to look through the images that I have provided on your own. There are a number of patterns I did not discuss in this piece for the sake of brevity and even more to be discovered. And, for those of us who spend our time in the front of the classroom, the screenshot approach could provide the basis for a great teaching activity where the class collectively takes part in both the gathering of data and conducting the analysis. 

Kyle Green is an Assistant Professor of Sociology at Utica College. He is a proud TSP alumnus and the co-author /co-host of Give Methods a Chance.

(View original at https://thesocietypages.org/socimages)

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2017

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 142 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 21 packages with a known CVE and the dla-needed.txt file 16 (we’re a bit behind in CVE triaging apparently). Both numbers show a significant drop compared to last month. Yet the number of DLA released was not larger than usual (30), instead it looks like December brought us fewer new security vulnerabilities to handle and at the same time we used this opportunity to handle lower priorities packages that were kept on the side for multiple months.

Thanks to our sponsors

New sponsors are in bold (none this month).

No comment | Liked this article? Click here. | My blog is Flattr-enabled.

Planet DebianJonathan Dowland: Jason Scott Talks His Way Out Of It

I've been thoroughly enjoying the Jason Scott Talks His Way Out Of It Podcast by Jason Scott (of the Internet Archive and Archive Team, amongst other things) and perhaps you will too.

Scott started this podcast and a corresponding Patreon/LibrePay/Ko-Fi/Paypal/etc funding stream in order to help him get out of debt. He's candid about getting in and out of debt within the podcast itself; but he also talks about his work at The Internet Archive, the history of Bulletin-Board Systems, Archive Team, and many other topics. He's a good speaker and it's well worth your time. Consider supporting him too!

This reminds me that I am overdue writing an update on my own archiving activities over the last few years. Stay tuned…

Worse Than FailureError'd: Hamilton, Hamilton, Hamilton, Hamilton

"Good news! I can get my order shipped anywhere I want...So long as the city is named Hamilton," Daniel wrote.

 

"I might have forgotten my username, but at least I didn't forget to change the email template code in Production," writes Paul T.

 

Jamie M. wrote, "Using Lee Hecht Harrison's job search functionality is very meta."

 

"When I decided to go to Cineworld, wasn't sure what I wanted to watch," writes Andy P., "The trailer for 'System Restore' looks good, but it's got a bad rating on Rotten Tomatoes."

 

Mattias writes, "I get the feeling that Visual Studio really doesn't like this error."

 

"While traveling in Philadelphia's airport, I was pleased to see Macs competing in the dumb error category too," Ken L. writes.

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet DebianNorbert Preining: Debian/TeX Live 2017.20180110-1 – the big rework

In short succession a new release of TeX Live for Debian – what could that bring? While there are not a lot of new and updated packages, there is a lot of restructuring of the packages in Debian, mostly trying to placate the voices complaining that the TeX Live packages are getting bigger and bigger and bigger (which is true). In this release we have introduced two measures to allow for smaller installations: optional font package dependencies and a downgrade of the -doc packages to suggests.

Let us discuss the two changes, first the one about optional font packages: Until the last release the TeX Live package texlive-fonts-extra depended on a long list of font-* packages, which did amount to a considerable download and install size. There was a reason for this: TeX documents using these fonts via file name use kpathsea, so there are links from the texmf-dist tree to the actual font files. To ensure that these links are not dangling, the font packages were an unconditional dependency.

But current LaTeX packages allow looking up fonts not only via file name, but also via font name using the fontconfig library. Although this is a suboptimal solution due to the inconsistencies and bugs of the fontconfig library (OsF and Expert font sets are a typical example of fonts that throw fontconfig into despair), it allows the use of fonts outside the TEXMF trees.

We have implemented the following changes to allow users to reduce the installation size:

  • texlive-fonts-extra only recommends the various font packages, but does not depend on them;
  • links from the texmf-dist tree are now shipped in a new package texlive-fonts-extra-links;
  • texlive-fonts-extra recommends texlive-fonts-extra-links, but does not strictly depend on it;
  • only texlive-full depends on texlive-fonts-extra-links to provide the same experience as upstream TeX Live

With these changes in place, users can decide to only install the TeX Live packages they need, and leave out texlive-fonts-extra-links and install only those fonts they actually need. This is in particular of interest for the build dependencies which will shrink considerably.
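
As an illustration (not part of the upstream announcement), a build chroot or other minimal system can now skip the recommended font packages with something like:

apt-get install --no-install-recommends texlive-fonts-extra
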

The other change we have implemented in this release is a long-requested one that I had always rejected, namely the demotion of -doc packages to suggests instead of recommends. The texlive-*-doc packages are at times rather big, and with the default setup of installing recommends this induced a sharp rise in disc/download volume when installing TeX Live.

By demoting the -doc packages to suggests, they will no longer be automatically installed. I am still not convinced that this is a good solution, mostly for two reasons: (i) people will cry out about missing documentation, and (ii) it is a gray area in license terms, since several packages require that code and docs are distributed together.

Due to the above two reasons I might revert this change in future, but for now let us see how it goes.

Of course, there is also the usual bunch of updates and new packages, see below. The current packages are already in unstable, only src:texlive-extra needs passage through ftp-masters NEW queue, so most people will need to wait for the acceptance of a trivial package containing only links into Debian 😉

Enjoy.

New packages

blowup, sectionbreak.

Updated packages

animate, arabluatex, babel, beamer, beuron, biber, bidi, bookcover, calxxxx-yyyy, comprehensive, csplain, fira, fontools, glossaries-extra, graphics-def, libertine, luamplib, markdown, mathtools, mcf2graph, media9, modernposter, mpostinl, pdftools, poemscol, pst-geo, pstricks, reledmac, scsnowman, sesstime, skak, tex4ht, tikz-kalender, translator, unicode-math, xassoccnt, xcntperchap, xdvi, xepersian, xsavebox, xurl, ycbook, zhlipsum.

,

TEDExploring the boundaries of legacy at TED@Westpac

Cyndi Stivers and Adam Spencer host TED@Westpac — a day of talks and performances themed around “The Future Legacy” — in Sydney, Australia, on Monday, December 11th. (Photo: Jean-Jacques Halans / TED)

Legacy is a delightfully complex concept, and it’s one that the TED@Westpac curators took on with gusto for the daylong event held in Sydney, Australia, on Monday December 11th. Themed around the idea of “The Future Legacy,” the day was packed with 15 speakers and two performers and hosted by TED’s Cyndi Stivers and TED speaker and monster prime number aficionado Adam Spencer. Topics ranged from education to work-health balance to designer babies to the importance of smart conversations around death.

For Westpac managing director and CEO Brian Hartzer, the day was an opportunity both to think back over the bank’s own 200-year legacy — and a chance for all gathered to imagine a bold new future that might suit everyone. He welcomed talks that explored ideas and stories that may shape a more positive global future. “We are so excited to see the ripple effect of your ideas from today,” he told the collected speakers before introducing Aboriginal elder Uncle Ray Davison to offer the audience a traditional “welcome to country.”

And with that, the speakers were up and running.

“Being an entrepreneur is about creating change,” says Linda Zhang. She suggests we need to encourage the entrepreneurial mindset in high-schoolers. (Photo: Jean-Jacques Halans / TED)

Ask questions, challenge the status quo, build solutions. Who do you think of when you hear the word “entrepreneur?” Steve Jobs, Mark Zuckerberg, Elon Musk and Bill Gates might come to mind. What about a high school student? Linda Zhang might just have graduated herself but she’s been taking entrepreneurial cues from her parents, who started New Zealand’s second-largest thread company. Zhang now runs a program to pair students with industry mentors and get them to work for 48 hours on problems they actually want to solve. The results: a change in mindset that could help prepare them for a tumultuous but opportunity-filled job market. “Being an entrepreneur is about creating change,” Zhang says. “This is what high school should be about … finding things you care about, having the curiosity to learn about those things and having the drive to take that knowledge and implement it into problems you care about solving.”

Should we bribe kids to study math? In this sparky talk, Mohamad Jebara shares a favorite quote from fellow mathematician Francis Su: “We study mathematics for play, for beauty, for truth, for justice, and for love.” Only problem: kids today, he says, often don’t tend to agree, instead finding math “difficult and boring.” Jebara has a counterintuitive potential solution: he wants to bribe kids to study math. His financial incentive plan works like this: his company charges parents a monthly subscription fee; if students complete their weekly math goal then the program refunds that amount of the fee directly into the student’s bank account; if not, the company pockets the profit. Ultimately, Jebara wants kids to discover math’s intrinsic worth and beauty, but until they get there, he’s happy to pay them. And this isn’t just about his own business model. “Unless we find a way to improve student engagement with mathematics, we’ll have not only a huge skills shortage crisis, but a fickle population easily manipulated by whoever can get the most airtime,” he says.

You, cancer and the workplace. When lawyer Sarah Donnelly was diagnosed with breast cancer, she turned to her friends and family for support — but she also sought refuge in her work. “My job and my coworkers would make me feel valuable and human at times when I would have otherwise felt like a statistic,” she says. “Work gave me focus and stability when I was dealing with so many unknowns and difficult personal decisions.” But, she says, not all employers realize that work can be a sanctuary for the sick, and often — believing themselves polite and thoughtful — cast out their employees. Now, Donnelly is striving to change the experiences of individuals coping with serious illness — and the perceptions others might have of them. Together with a colleague, she created a “Working with Cancer” toolkit that provides a framework and guidance for all those professionally involved in an employee’s life, and she is traveling to different companies around Australia to implement it.

Digital strategist Will Jenkins says we need to think about what we really want from life, not just our day-to-day. (Photo: Jean-Jacques Halans / TED)

The connection between time and money. We all need more time, says digital strategist Will Jenkins, and historically we’ve developed systems and technologies to save time for ourselves and others by reducing waste and inefficiency. But there’s a problem: even after spending centuries trying to perfect time-saving techniques, it too often still doesn’t feel like we’re getting anywhere. “As individuals, we’re busier than ever,” Jenkins points out, before calling for us to look beyond specialized techniques to think about what we actually really want from life itself, not just our day-to-day. In taking a holistic approach to time, we might, he says, channel John Maynard Keynes to figure out new ways that will allow all of us “to live wisely, agreeably, and well.”

Creating a digital future for Australia’s First People. Aboriginal Australian David Unaipon (1862-1967) was called his country’s Leonardo da Vinci — he was responsible for at least 19 inventions, including a tool that led to modern sheep shears. But according to Westpac business analyst Michael Mieni, we need to find better ways to encourage future Unaipons. Right now, he says, too many Indigenous Australians are on the far side of the digital divide, lacking access to computers and the Internet as well as basic schooling in technology. Mieni was the first Indigenous IT honors student at the University of Technology Sydney and he makes the case that tech-savvy Indigenous Australians are badly needed to serve as role models and teachers, as inventors of ways to record and promote their culture and as guardians of their people’s digital rights. “What if the next ground-breaking idea is already in the mind of a young Aboriginal student but will never surface because they face digital disadvantage or exclusion?” he asks. Everyone in Australia — not just the First Peoples — gains when every citizen has the opportunity and resources to become digitally literate.

Shade Zahrai and Aric Yegudkin perform a gorgeous, sensual dance at TED@Westpac. (Photo: Jean-Jacques Halans / TED)

The beauty of a dance duet. “Partner dance embodies the coming together of two people,” Shade Zahrai’s voice whispers to a dark auditorium as she and her partner take the TED stage. In the middle of session one, the pair perform a gorgeous and sensual modern dance, complete with Zahrai’s recorded voiceover explaining the coordination and unity that partner dance requires of its participants.

The power of inclusiveness. Inclusion strategist Hayley Yeates shares how her identity as a proud Australian was dimmed by prejudice shown towards her by those who saw her as Asian. When in school, she says, fellow students didn’t want to associate with her in classrooms, while she didn’t add a picture to her LinkedIn profile for fear her race would deem her less worthy of a job. But Yeates focuses on more than the personal stories of those who’ve been dubbed an outsider, and makes the case that diversity leads to innovation and greater profitability for companies. She calls for us all to sponsor safe spaces where authentic, unrestrained conversations about the barriers faced by cultural minorities can be held freely. And she invites leaders to think about creating environments where people’s whole selves can work, and where an organization can thrive because of, not in spite of, its employees’ differences.

Olivia Tyler tracks the complexity of global supply chains, looking to develop smart technology that can allow both corporations and consumers to understand buying decisions. (Photo: Jean-Jacques Halans / TED)

How to do yourself out of a job. As a sustainability practitioner, Olivia Tyler is trying hard to develop systems that will put her out of work. Why? For the good of us all, of course. And how? By encouraging all of us to ask questions about where what we buy, wear or eat comes from. Tyler tracks the fiendish complexity of today’s global supply chains, and she is attempting to develop smart technology that can allow both corporations and consumers to have the visibility they need to understand the buying decisions they make. When something as ostensibly simple as a baked good can include hundreds of data points about the ingredients it contains — a cake can be a minefield, she jokes — it’s time to open up the cupboard and use tech such as the blockchain to crack open the sustainability code. “We can adopt new and exciting ways to change the game on how we conduct ourselves as corporates and consumers across our increasingly smaller world,” she promises.

Can machine intelligence liberate human purpose? Much has been made of the threat robots pose to the very existence of certain jobs, with some estimates reckoning that as much as 80% of low-skill jobs have already been automated. Self-styled “datapreneur” Tomer Garzberg shares how he researched 11,000 of the world’s most widely held jobs to create the “Short-Term Automation Susceptibility Index” to identify the types of role that might be up for automation next. Perhaps unsurprisingly, highly specialized roles held by those such as neurosurgeons, chemical engineers and, well, acrobats face the least risk of being automated, while even senior blue collar positions or standard white collar roles such as pharmacists, accountants and health inspectors can expect a 25% shrinkage over the next 10 years. But Garzberg believes that we can — must — embrace this cybernated future. “Prepare your family to be okay with change, as uncomfortable as it may be,” he says. “We’ll likely be switching careers far more frequently in the near future.”

Everything’s gonna be alright. After a quick break and a breather, Westpac’s own Rowan Fitzpatrick and his band Heart of Mind played in session two with a sweet, uplifting rock ballad about better days and leaning on one another with love and hope. “Keep looking forward / Don’t lose your grip / One step at a time,” the trained jazz singer croons.

Alastair O’Neill shares the ethical wrangling his family undertook as they figured out how they felt about potentially eradicating a debilitating disease with gene editing. (Photo: Jean-Jacques Halans / TED)

You have the ability to end a hereditary disease. Do you take it? “Recently I had to sign a form promising that I wouldn’t have sex with my wife,” says a deadpan Alastair O’Neill as he kicks off the session’s talks. “Why? Because we decided to have a baby.” He waits a beat. “Let me rewind.” As the audience settles in for a rollercoaster talk of emotional highs and lows, he explains his family’s journey through the ethical minefield of embryonic genetic testing, also known as preimplantation genetic diagnosis or PGD. It was a journey prompted by a hereditary condition in his wife’s family — his father-in-law Phil had inherited the gene for retinal dystrophy and was declared legally blind at 30 years old. The odds that his own young family would have a baby either carrying or inheriting the disease were as high as one in two. In this searingly personal talk, O’Neill shares the ups and downs of both the testing process and the ethical wrangling that their entire family undertook as they tried to figure out how they felt about potentially eradicating a debilitating disease. Spoiler alert: O’Neill is in favor. “PGD gives couples the ability to choose to end a hereditary disease,” he says. “I think we should give every potential parent that choice.”

A game developer’s solution to the housing crisis. When Sarah Murray wanted to buy her first house, she discovered that home prices far exceeded her budget — and building a new house would be prohibitively costly and time-consuming. Frustrated by her lack of self-determination, Murray decided to create a computer game to give control back to buyers. The program allows you to design all aspects of your future home (even down to price and environmental impact) and then delivers the final product directly to you in modular components that can be assembled onsite. Murray’s innovative idea both cuts costs and makes dwellings more sustainable; the first physical houses should be ready by 2018. But the digital housing developer isn’t done yet. Now she is working on adapting the program and investing in construction techniques such as 3D printing so that when a player designs and builds a home, they can also contribute to a home for someone in need. As she says, “I want to put every person who wants one in a home of their own design.”

Tough guys need mental-health help, too. In 2013 in Castlemaine, Victoria, painter and decorator Jeremy Forbes was shaken when a friend and fellow tradie (or tradesman) committed suicide. But what truly shocked him were the murmurs he overheard at the man’s wake — people asking, “Who’s next?” Tradies deal with the same struggles faced by many — depression, alcohol and drug dependency, gambling, financial hardship — but they often don’t feel comfortable opening up about them. “You’re expected to be silent in the face of adversity,” says Forbes. So he and artist Catherine Pilgrim founded HALT (Hope Assistance Local Tradies), a mental health awareness organization for tradie men and women, apprentices, builders, farmers, and their partners. HALT meets people where they are, hosting gatherings at hardware stores, football and sports clubs, and vocational training facilities. There, people learn about the warning signs of depression and anxiety and the available services. According to Forbes, who received a Westpac Social Change Fellowship in 2016, HALT has now held around 150 events, and he describes the process as both empowering and cathartic. We need to know how to respond if people are not OK, he says.

The conversation about death you need to have. “Most of us don’t want to acknowledge death, we don’t want to plan for it, and we don’t want to discuss it with the most important people in our lives,” says mortal realist and portfolio manager Michelle Knox. She’s got stats to prove it: 45% of people in Australia over the age of 18 don’t have a legal will. But dying without one is complicated and expensive for those left behind, and just one reason Knox believes it’s time we take ownership of our own deaths. Others include that talking about death before it happens can help us experience a good death, reduce stress on our loved ones, and also help us support others who are grieving. Knox experienced firsthand the power of talking about death ahead of time when her father passed away earlier this year. “I discovered this year it’s actually a privilege to help someone exit this life and although my heart is heavy with loss and sadness, it is not heavy with regret,” she says. “I knew what Dad wanted and I feel at peace knowing I could support his wishes.”

“What would water do?” asks Raymond Tang. “This simple and powerful question has changed my life for the better.” (Photo: Jean-Jacques Halans / TED)

The philosophy of water. How do we find fulfillment in a world that’s constantly changing? IT strategy manager and “agent of flow” Raymond Tang struggled mightily with this question — until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water and, inspired, he’s now applying the concepts to his everyday life. In this charming talk, he shares three lessons he’s learned so far from the “philosophy of water.” First, humility: in the same way water helps plants and animals grow without seeking reward, Tang finds fulfillment and meaning in helping others overcome their challenges. Next, harmony: just as water is able to navigate its way around obstacles without force or conflict, Tang believes we can find a greater sense of fulfillment in our endeavors by shifting our focus away from achieving success and towards achieving harmony. Finally, openness: water can be a liquid, solid or gas, and it adapts to the shape in which it’s contained. Tang finds in his professional life that the teams most open to learning (and un-learning) do the best work. “What would water do?” Tang asks. “This simple and powerful question has changed my life for the better.”

With great data comes great responsibility. Remember the hacks on companies such as Equifax and JP Morgan? Well, you ain’t seen nothing yet. As computer technology becomes more powerful (think quantum) the systems we use to protect our wells of data become ever more vulnerable. However, there is still time to plan countermeasures against the impending data apocalypse, reassures encryption expert Vikram Sharma. He and his team are designing security devices and programs that also rely on quantum physics to power a defense against the most sophisticated attacks. “The race is on to build systems that will remain secure in the face of rapid technological advance,” he says.

Rach Ranton brings the leadership lessons she learned in the military to corporations, suggesting that leaders succeed when everyone knows the final goal they’re working toward. (Photo: Jean-Jacques Halans / TED)

Leadership lessons from the front line. How does a leader give their people a sense of purpose and direction? Rach Ranton spent more than a decade in the Australian Army, including tours of Afghanistan and East Timor. Now, she brings the lessons she learned in the military to companies, blending organizational psychology aimed at corporations with the planning and best practices of a well-oiled military unit. Even in a situation of extreme uncertainty, she says, military units function best if everyone understands the leader’s objective exactly as well as they understand their own role, not just their individual part to play but also the whole. She suggests leaders spend time thinking about how to communicate “commander’s intent,” the final goal that everyone is working toward. As a test, she asks: If you as a leader were absent from the scene, would your team still know what to do … and why they were doing it?

CryptogramFingerprinting Digital Documents

In this era of electronic leakers, remember that zero-width spaces and homoglyph substitution can fingerprint individual instances of files.
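
As a rough illustration (a minimal Python sketch, not taken from the original post), here is how the zero-width-space variant of this technique can work: each recipient's copy of a document gets a unique bit pattern encoded as invisible characters, so a leaked copy can be traced back to its recipient. The function names, the choice of U+200B/U+200C and the 16-bit ID width are arbitrary assumptions made for this example.

    # Sketch: hide a per-recipient ID in a document using zero-width characters.
    # U+200B (zero-width space) encodes a 0 bit, U+200C (zero-width non-joiner)
    # encodes a 1 bit; one bit is inserted after each ordinary space.
    ZW = {"0": "\u200b", "1": "\u200c"}

    def embed_fingerprint(text, recipient_id, bits=16):
        pattern = format(recipient_id, "0{}b".format(bits))
        out, i = [], 0
        for ch in text:
            out.append(ch)
            if ch == " " and i < len(pattern):
                out.append(ZW[pattern[i]])
                i += 1
        return "".join(out)

    def extract_fingerprint(text, bits=16):
        found = [c for c in text if c in ("\u200b", "\u200c")]
        if len(found) < bits:
            return None
        return int("".join("0" if c == "\u200b" else "1" for c in found[:bits]), 2)

    marked = embed_fingerprint("the quick brown fox jumps over the lazy dog " * 2, 1234)
    print(extract_fingerprint(marked))  # prints 1234; the marked text looks unchanged

Homoglyph substitution works on the same principle, except the pattern is carried by swapping visually identical characters (for example Latin "a" versus Cyrillic "а") rather than by inserting invisible ones; either way, stripping or normalizing suspicious Unicode before leaking is the obvious countermeasure.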

Krebs on SecurityBitcoin Blackmail by Snail Mail Preys on Those with Guilty Conscience

KrebsOnSecurity heard from a reader whose friend recently received a remarkably customized extortion letter via snail mail that threatened to tell the recipient’s wife about his supposed extramarital affairs unless he paid $3,600 in bitcoin. The friend said he had nothing to hide and suspects this is part of a random but well-crafted campaign to prey on men who may have a guilty conscience.

The letter addressed the recipient by his first name and hometown throughout, and claimed to have evidence of the supposed dalliances.

“You don’t know me personally and nobody hired me to look into you,” the letter begins. “Nor did I go out looking to burn you. It is just your bad luck that I stumbled across your misadventures while working on a job around Bellevue.”

The missive continues:

“I then put in more time than I probably should have looking into your life. Frankly, I am ready to forget all about you and let you get on with your life. And I am going to give you two options that will accomplish that very thing. These two options are to either ignore this letter, or simply pay me $3,600. Let’s examine those two options in more detail.”

The letter goes on to say that option 1 (ignoring the threat) means the author will send copies of his alleged evidence to the man’s wife and to her friends and family if he does not receive payment within 12 days of the letter’s postmark date.

“So [name omitted], even if you decide to come clean with your wife, it won’t protect her from the humiliation she will feel when her friends and family find out your sordid details from me,” the extortionist wrote.

Option 2, of course, involves sending $3,600 in Bitcoin to an address specified in the letter. That bitcoin address does not appear to have received any payments. Attached to the two-sided extortion note is a primer on different ways to quickly and easily obtain bitcoin.

“If I don’t receive the bitcoin by that date, I will go ahead and release the evidence to everyone,” the letter concludes. “If you go that route, then the least you could do is tell your wife so she can come up with an excuse to prepare her friends and family before they find out. The clock is ticking, [name omitted].”

Of course, sending extortion letters via postal mail is mail fraud, a crime which carries severe penalties (fines of up to $1 million and up to 30 years in jail). However, as the extortionist rightly notes in his letter, the likelihood that authorities would ever be able to catch him is probably low.

The last time I heard of or saw this type of targeted extortion by mail was in the wake of the 2015 breach at online cheating site AshleyMadison.com. But those attempts made more sense to me, since many AshleyMadison users quite clearly did have an affair to hide.

In any case, I’d wager that this scheme — assuming that the extortionist is lying and has indeed sent these letters to targets without actual knowledge of extramarital affairs on the part of the recipients — has a decent chance of being received by someone who really does have a current or former fling that he is hiding from his spouse. Whether that person follows through and pays the extortion, though, is another matter.

I searched online for snippets of text from the extortion letter and found just one other mention of what appears to be the same letter: It was targeting people in Wellesley, Mass, according to a local news report from December 2017.

According to that report, the local police had a couple of residents drop off letters or call to report receiving them, “but to our knowledge no residents have fallen prey to the scam. The envelopes have no return address and are postmarked out of state, but from different states. The people who have notified us suspected it was a scam and just wanted to let us know.”

In the Massachusetts incidents, the extortionist was asking for $8,500 in bitcoin. Assuming it is the same person responsible for sending this letter, perhaps the extortionist wasn’t getting many people to bite and thus lowered his “fee.”

I opted not to publish a scan of the letter here because it was double-sided and redacting names, etc. gets dicey thanks to photo and image manipulation tools. Here’s a transcription of it instead (PDF).

CryptogramYet Another FBI Proposal for Insecure Communications

Deputy Attorney General Rosenstein has given talks where he proposes that tech companies decrease their communications and device security for the benefit of the FBI. In a recent talk, his idea is that tech companies just save a copy of the plaintext:

Law enforcement can also partner with private industry to address a problem we call "Going Dark." Technology increasingly frustrates traditional law enforcement efforts to collect evidence needed to protect public safety and solve crime. For example, many instant-messaging services now encrypt messages by default. They prevent the police from reading those messages, even if an impartial judge approves their interception.

The problem is especially critical because electronic evidence is necessary for both the investigation of a cyber incident and the prosecution of the perpetrator. If we cannot access data even with lawful process, we are unable to do our job. Our ability to secure systems and prosecute criminals depends on our ability to gather evidence.

I encourage you to carefully consider your company's interests and how you can work cooperatively with us. Although encryption can help secure your data, it may also prevent law enforcement agencies from protecting your data.

Encryption serves a valuable purpose. It is a foundational element of data security and essential to safeguarding data against cyber-attacks. It is critical to the growth and flourishing of the digital economy, and we support it. I support strong and responsible encryption.

I simply maintain that companies should retain the capability to provide the government unencrypted copies of communications and data stored on devices, when a court orders them to do so.

Responsible encryption is effective secure encryption, coupled with access capabilities. We know encryption can include safeguards. For example, there are systems that include central management of security keys and operating system updates; scanning of content, like your e-mails, for advertising purposes; simulcast of messages to multiple destinations at once; and key recovery when a user forgets the password to decrypt a laptop. No one calls any of those functions a "backdoor." In fact, those very capabilities are marketed and sought out.

I do not believe that the government should mandate a specific means of ensuring access. The government does not need to micromanage the engineering.

The question is whether to require a particular goal: When a court issues a search warrant or wiretap order to collect evidence of crime, the company should be able to help. The government does not need to hold the key.

Rosenstein is right that many services like Gmail naturally keep plaintext in the cloud. This is something we pointed out in our 2016 paper: "Don't Panic." But forcing companies to build an alternate means to access the plaintext that the user can't control is an enormous vulnerability.

Worse Than FailureCodeSOD: Dictionary Definition

Guy’s eight-person team does a bunch of computer vision (CV) stuff. Guy is the “framework Guy”: he doesn’t handle the CV stuff so much as provide an application framework to make the CV folks’ lives easy. It’s a solid division of labor, with one notable exception: Richard.

Richard is a Computer Vision Researcher, head of the CV team. Guy is a mere “code monkey”, in Richard’s terms. Thus, everything Richard does is correct, and everything Guy does is “cute” and “a nice attempt”. That’s why, for example, Richard needed to take a method called readFile() and turn it into readFileHandle(), “for clarity”.

The code is a mix of C++ and Python, and much of the Python was written before Guy’s time. While the style in use doesn’t fit the PEP 8 standard (the official Python style guide), Guy has opted to follow the conventions already in use, for consistency. This means some odd things, like putting a space before the colons:

    def readFile() :
      # do stuff

Which Richard felt the need to comment on in his code:

    def readFileHandle() : # I like the spaced out :'s, these are cute =]

There’s no “tone of voice” in code, but the use of “=]” instead of a more conventional smile emoticon is a clear sign that Richard is truly a monster. The other key sign is that Richard has taken an… unusual approach to object-oriented programming. When tasked with writing up an object, he takes this approach:

class WidgetSource:
    """
    Enumeration of various sources available for getting the data needed to construct a Widget object.
    """

    LOCAL_CACHE    = 0
    DB             = 1
    REMOTE_STORAGE = 2
    #PROCESSED_DATA  = 3

    NUM_OF_SOURCES = 3

    @staticmethod
    def toString(widget_source):
        try:
            return {
                WidgetSource.LOCAL_CACHE:     "LOCAL_CACHE",
                WidgetSource.DB:              "DB",
                #WidgetSource.PROCESSED_DATA:   "PROCESSED_DATA", # @DEPRECATED - Currently not to be used
                WidgetSource.REMOTE_STORAGE:  "REMOTE_STORAGE"
            }[widget_source]
        except KeyError:
            return "UNKNOWN_SOURCE"

def deserialize_widget(id, curr_src) :
     # SNIP
     widget = {
         WidgetSource.LOCAL_CACHE: _deserialize_from_cache,
         WidgetSource.DB: _deserialize_from_db,
         WidgetSource.REMOTE_STORAGE: _deserialize_from_remote
         #WidgetSource.PROCESSED_DATA: widgetFactory.fromProcessedData,
     }[curr_src](id)

For those not up on Python, there are a few notable elements here. First, by convention, anything in ALL_CAPS is a constant. A dictionary/map literal takes the form {aKey: aValue, anotherKey: anotherValue}.

So, the first thing to note is that both the deserialize_widget and toString methods create a dictionary. The keys are drawn from constants… which have the values 0, 1, 2, and 3. So… it’s an array, represented as a map, but without the ability to iterate across it in order.

But the dictionary isn’t what gets returned. It’s being used as a lookup table. This is actually quite common, as Python doesn’t have a switch construct, but it does leave one scratching one’s head wondering why.
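
For contrast, here is a minimal sketch (not part of the submitted code) of what the same lookup-table dispatch looks like when the source constants are a real enumeration; the WidgetSource names mirror the original, and the _deserialize_from_* helpers are placeholders standing in for whatever the real ones do:

    from enum import Enum

    class WidgetSource(Enum):
        LOCAL_CACHE = 0
        DB = 1
        REMOTE_STORAGE = 2

    # Placeholder deserializers standing in for the real ones.
    def _deserialize_from_cache(widget_id):
        return "cache:{}".format(widget_id)

    def _deserialize_from_db(widget_id):
        return "db:{}".format(widget_id)

    def _deserialize_from_remote(widget_id):
        return "remote:{}".format(widget_id)

    # Build the dispatch table once, at module level, keyed by enum members.
    # Enum gives ordered iteration, a .name attribute (no hand-rolled toString)
    # and a clear error for unknown sources instead of a magic string.
    _DESERIALIZERS = {
        WidgetSource.LOCAL_CACHE: _deserialize_from_cache,
        WidgetSource.DB: _deserialize_from_db,
        WidgetSource.REMOTE_STORAGE: _deserialize_from_remote,
    }

    def deserialize_widget(widget_id, source):
        try:
            return _DESERIALIZERS[source](widget_id)
        except KeyError:
            raise ValueError("unknown widget source: {!r}".format(source))

    print(deserialize_widget(42, WidgetSource.DB))  # db:42

The dict-as-dispatch idiom itself is perfectly serviceable in Python; what grates in the original is the fake enumeration of bare integers and the fact that both toString and deserialize_widget rebuild their lookup tables on every call.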

The real thing that makes one wonder “why” is this, though: Why is newly written code already marked as @DEPRECATED? This code was not yet released, and nothing outside of Richard’s newly written feature depended on it. I suspect Richard recently learned what deprecated means, and just wanted to use it in a sentence.

It’s okay, though. I like the @deprecated, those are cute =]


CryptogramSusan Landau's New Book: Listening In

Susan Landau has written a terrific book on cybersecurity threats and why we need strong crypto. Listening In: Cybersecurity in an Insecure Age. It's based in part on her 2016 Congressional testimony in the Apple/FBI case; it examines how the Digital Revolution has transformed society, and how law enforcement needs to -- and can -- adjust to the new realities. The book is accessible to techies and non-techies alike, and is strongly recommended.

And if you've already read it, give it a review on Amazon. Reviews sell books, and this one needs more of them.

CryptogramSpectre and Meltdown Attacks Against Microprocessors

The security of pretty much every computer on the planet has just gotten a lot worse, and the only real solution -- which of course is not a solution -- is to throw them all away and buy new ones.

On Wednesday, researchers just announced a series of major security vulnerabilities in the microprocessors at the heart of the world's computers for the past 15-20 years. They've been named Spectre and Meltdown, and they have to do with manipulating different ways processors optimize performance by rearranging the order of instructions or performing different instructions in parallel. An attacker who controls one process on a system can use the vulnerabilities to steal secrets elsewhere on the computer. (The research papers are here and here.)

This means that a malicious app on your phone could steal data from your other apps. Or a malicious program on your computer -- maybe one running in a browser window from that sketchy site you're visiting, or as a result of a phishing attack -- can steal data elsewhere on your machine. Cloud services, which often share machines amongst several customers, are especially vulnerable. This affects corporate applications running on cloud infrastructure, and end-user cloud applications like Google Drive. Someone can run a process in the cloud and steal data from every other user on the same hardware.

Information about these flaws has been secretly circulating amongst the major IT companies for months as they researched the ramifications and coordinated updates. The details were supposed to be released next week, but the story broke early and everyone is scrambling. By now all the major cloud vendors have patched their systems against the vulnerabilities that can be patched against.

"Throw it away and buy a new one" is ridiculous security advice, but it's what US-CERT recommends. It is also unworkable. The problem is that there isn't anything to buy that isn't vulnerable. Pretty much every major processor made in the past 20 years is vulnerable to some flavor of these vulnerabilities. Patching against Meltdown can degrade performance by almost a third. And there's no patch for Spectre; the microprocessors have to be redesigned to prevent the attack, and that will take years. (Here's a running list of who's patched what.)

This is bad, but expect it more and more. Several trends are converging in a way that makes our current system of patching security vulnerabilities harder to implement.

The first is that these vulnerabilities affect embedded computers in consumer devices. Unlike our computer and phones, these systems are designed and produced at a lower profit margin with less engineering expertise. There aren't security teams on call to write patches, and there often aren't mechanisms to push patches onto the devices. We're already seeing this with home routers, digital video recorders, and webcams. The vulnerability that allowed them to be taken over by the Mirai botnet last August simply can't be fixed.

The second is that some of the patches require updating the computer's firmware. This is much harder to walk consumers through, and is more likely to permanently brick the device if something goes wrong. It also requires more coordination. In November, Intel released a firmware update to fix a vulnerability in its Management Engine (ME): another flaw in its microprocessors. But it couldn't get that update directly to users; it had to work with the individual hardware companies, and some of them just weren't capable of getting the update to their customers.

We're already seeing this. Some patches require users to disable the computer's password, which means organizations can't automate the patch. Some antivirus software blocks the patch, or -- worse -- crashes the computer. This results in a three-step process: patch your antivirus software, patch your operating system, and then patch the computer's firmware.

The final reason is the nature of these vulnerabilities themselves. These aren't normal software vulnerabilities, where a patch fixes the problem and everyone can move on. These vulnerabilities are in the fundamentals of how the microprocessor operates.

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

This isn't to say you should immediately turn your computers and phones off and not use them for a few years. For the average user, this is just another attack method amongst many. All the major vendors are working on patches and workarounds for the attacks they can mitigate. All the normal security advice still applies: watch for phishing attacks, don't click on strange e-mail attachments, don't visit sketchy websites that might run malware on your browser, patch your systems regularly, and generally be careful on the Internet.

You probably won't notice that performance hit once Meltdown is patched, except maybe in backup programs and networking applications. Embedded systems that do only one task, like your programmable thermostat or the computer in your refrigerator, are unaffected. Small microprocessors that don't do all of the vulnerable fancy performance tricks are unaffected. Browsers will figure out how to mitigate this in software. Overall, the security of the average Internet-of-Things device is so bad that this attack is in the noise compared to the previously known risks.

It's a much bigger problem for cloud vendors; the performance hit will be expensive, but I expect that they'll figure out some clever way of detecting and blocking the attacks. All in all, as bad as Spectre and Meltdown are, I think we got lucky.

But more are coming, and they'll be worse. 2018 will be the year of microprocessor vulnerabilities, and it's going to be a wild ride.


Note: A shorter version of this essay previously appeared on CNN.com. My previous blog post on this topic contains additional links.

Planet DebianTianon Gravi: iSCSI in Debian

I’ve recently been playing with Debian’s iSCSI support, and it’s pretty neat.

It was a little esoteric to set things up, so I figured I’d write up a quick blog post of exactly what I did both for my own future-self’s sake and for the sake of anyone else trying to do something similar.

The most “followable” guide I found was https://www.certdepot.net/rhel7-configure-iscsi-target-initiator-persistently/ (which the below is probably really similar to).

The exact details of what I was trying to accomplish are as follows:

  • 100GB “sparse” file on my-desktop
  • presented as an iSCSI target
  • mounted on my-rpi3 as /var/lib/docker (preferably with discard enabled so the file on my-desktop stays sparse)

On my-desktop, I used the targetcli-fb package to configure my iSCSI target:

$ sudo apt install targetcli-fb

$ # create the sparse file
$ mkdir -p /home/tianon/iscsi
$ truncate --size=100G /home/tianon/iscsi/my-rpi3-docker.img

$ # launch "targetcli" to configure the iSCSI bits
$ sudo targetcli

# create a "fileio" object connected to the new sparse file
/> /backstores/fileio create name=my-rpi3-docker file_or_dev=/home/tianon/iscsi/my-rpi3-docker.img

# enable "emulated TPU" (enable TRIM / UNMAP / DISCARD)
/> /backstores/fileio/my-rpi3-docker set attribute emulate_tpu=1

# create iSCSI storage object
/> /iscsi create iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker

# create "LUN" assigned to the "fileio" object
/> /iscsi/iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker/tpg1/luns create /backstores/fileio/my-rpi3-docker

# create an ACL for my-rpi3 to connect
/> /iscsi/iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker/tpg1/acls create iqn.1992-01.com.example:node:my-rpi3
# and set a CHAP username and password, for security
/> /iscsi/iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker/tpg1/acls/iqn.1992-01.com.example:node:my-rpi3 set auth userid=rpi3 password=holy-cow-this-iscsi-password-is-so-secret-nobody-will-evvvvvvvvver-guess-it

Additionally, I’ve been experimenting with firewalld on my-desktop, so I had to add the iscsi-target service to my internal zone to allow the traffic from my-rpi3.

On my-rpi3, I used the open-iscsi package to configure my iSCSI initiator:

$ sudo apt install open-iscsi

$ # update "InitiatorName" to match the value from our ACL above
$ sudo vim /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1992-01.com.example:node:my-rpi3

$ # update "node.startup" and "node.session.auth.*" for our CHAP credentials from above
$ sudo vim /etc/iscsi/iscsid.conf
...
node.startup = automatic
...
node.session.auth.authmethod = CHAP
node.session.auth.username = rpi3
node.session.auth.password = holy-cow-this-iscsi-password-is-so-secret-nobody-will-evvvvvvvvver-guess-it
...

# restart iscsid so all that takes effect (especially the InitiatorName change)
$ sudo systemctl restart iscsid

$ sudo iscsiadm --mode discovery --type sendtargets --portal my-desktop-ip-address
$ sudo iscsiadm --mode node --targetname iqn.1992-01.com.example.my-desktop:storage:my-rpi3-docker --portal my-desktop-ip-address --login

$ lsblk --scsi
NAME HCTL       TYPE VENDOR   MODEL             REV TRAN
sda  0:0:0:0    disk LIO-ORG  my-rpi3-docker   4.0  iscsi

$ sudo fdisk /dev/sda
...
$ sudo mkfs.ext4 -T news -L my-rpi3-docker /dev/sda1
...
$ sudo blkid | grep my-rpi3-docker
... UUID="xxx" ...
$ sudo vim /etc/fstab
...
UUID="xxx" /var/lib/docker ext4 noatime,discard,_netdev 0 0
...
$ sudo systemctl stop docker
$ sudo mount /var/lib/docker
$ sudo systemctl start docker

$ # yay, profit (and should auto-remount properly on boot and everything, too)

(Obviously, replace iqn.1992-01.com.example with an appropriate IQN for your own domain as described on Wikipedia, and other values as appropriate like the username/password, hostnames, IPs, etc.)

As for speed, I was able to get the following result from a very simplified dd-based speed test – YMMV:

$ dd if=/dev/zero of=/var/lib/docker/testfile bs=100M count=10 oflag=direct
10+0 records in
10+0 records out
1048576000 bytes (1.0 GB, 1000 MiB) copied, 97.9608 s, 10.7 MB/s

Planet DebianNorbert Preining: Gaming: Monument Valley 2

I recently found out that Monument Valley, one of my favorite games of 2016, has a successor, Monument Valley 2. Short on time as I usually am, I was nervous that the game would destroy my scheduling, but I went ahead and purchased it!

Fortunately, it turned out to be quite a short game, maybe max 2h to complete all the levels. The graphics are similar to the predecessor’s, that is to say beautifully crafted. The game mechanics are also unchanged, with only a few additions (like the grow-rotating tree; see the image in the lower right corner above). What has changed is that there are now two actors, mother and daughter, and sometimes one has to manage both in parallel, which adds a nice twist.

What I didn’t like too much were the pseudo-philosophical teachings in the middle, like that one

but then, they are fast to skip over.

All in all again a great game and not too much of a time killer. This time I played it not on my mobile but on my Fire tablet, and the bigger screen was excellent and made the game more enjoyable.

Very recommendable.

,

TEDMeet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.


Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.


Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.


Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.


Olga Iurkova
Olga Iurkova (Ukraine)
Journalist + editor
Journalist and co-founder of StopFake.org, an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.


Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.


Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.


Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.


Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.


In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.


Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.


Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.


Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.


Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.


In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Ryan Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.


Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.


DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.


Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.


Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.


Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.


TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Ichthyologist
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.


Aziza Chaouni
Aziza Chaouni (Morocco)
Architect
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.


Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.


A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.


Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.


Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.


Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.


Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.


An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

Carrie Nugent
Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.


David Sengeh
David Sengeh (Sierra Leone + South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.

TEDWhy Oprah’s talk works: Insight from a TED speaker coach

By Abigail Tenembaum and Michael Weitz of Virtuozo

When Oprah Winfrey spoke at the Golden Globes last Sunday night, her speech lit up social media within minutes. It was powerful, memorable and somehow exactly what the world wanted to hear. It inspired multiple standing O’s — and even a semi-serious Twitter campaign to elect her president #oprah2020

All this in 9 short minutes.

What made this short talk so impactful? My colleagues and I were curious. We are professional speaker coaches who’ve worked with many, many TED speakers, analyzing their scripts and their presentation styles to help each person make the greatest impact with their idea. And when we sat down and looked at Oprah’s talk, we saw a lot of commonality with great TED Talks.

Among the elements that made this talk so effective:

A strong opening that transports us. Oprah got on stage to give a “thank you” speech for a lifetime achievement award. But she chose not to start with the “thank you.” Instead she starts with a story. Her first words? “In 1964, I was a little girl sitting on the linoleum floor of my mother’s house in Milwaukee.” Just like a great story should, this first sentence transports us to a different time and place, and introduces the protagonist. As TED speaker Uri Hasson says: Our brain loves stories. Oprah’s style of opening signals to the audience that it’s story time, using an opening similar to any fairy tale’s: “Once upon a time” (In 1964), “There was a princess” (I was a little girl), “In a land far far away” (…my mother’s house in Milwaukee).

Alternating between ideas and anecdotes. A great TED Talk illustrates an idea. And, just like Oprah does in her talk, the idea is illustrated through a mix of stories, examples and facts. Oprah tells a few anecdotes, none longer than a minute. But they are masterfully crafted, to give us, the audience, just enough detail to invite us to imagine it. When TED speaker Stefan Larsson tells us an anecdote about his time at medical school, he says: “I wore the white coat” — one concrete detail that allows us, the audience, to imagine a whole scene. Oprah describes Sidney Poitier with similar specificity – down to the detail that “his tie was white.” Recy Taylor was “walking home from a church service.” Oprah the child wasn’t sitting on the floor but on the “linoleum floor.” Like a great sketch artist, a great storyteller draws a few defined lines and lets the audience’s imagination fill in the rest to create the full story.

A real conversation with the audience. At TED, we all know it’s called a TED talk — not “speech,” not “lecture.” We feel it when Sir Ken Robinson looks at the audience and waits for their reaction. But it’s mostly not in the words. It’s in the tone, in the fact that the speaker’s attention is on the audience, focusing on one person at a time, and having a mini conversation with us. Oprah is no different. She speaks to the people in the room, and this intimacy translates beautifully on camera.

It’s Oprah’s talk — and only Oprah’s. A great TED talk, just like any great talk or speech, is deeply connected to the person delivering it. We like to ask speakers, “What makes this a talk that only you can give?” Esther Perel shares anecdotes from her unique experience as a couples therapist, intimate stories that helped her develop a personal perspective on love and fidelity. Only Ray Dalio could tell the story of personal failure and rebuilding that lies behind the radical transparency he’s created in his company. Uri Hasson connects his research on the brain and stories to his own love of film. Oprah starts with the clearest personal angle – her personal story. And along her speech she brings her own career as an example, and her own way of articulating her message.

A great TED Talk invites the audience to think and to feel. Oprah’s ending is a big invitation to the audience to act. And it’s done not by telling us what to do, but by offering an optimistic vision of the future and inviting us all to be part of it.

Here’s a link to the full speech.

Planet DebianMarkus Koschany: My Free Software Activities in December 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I spent some time in December 2017 reviving Hex-a-Hop, a nice (and somewhat cute) logic game, which eventually closed seven bugs. Unfortunately this game had not been well maintained, but it should be up-to-date again now.
  • I released a new version of debian-games, a collection of games metapackages. Five packages were removed from Debian but  I could also add eight new games or frontends to compensate for that.
  • I updated a couple of packages to fix minor and normal bugs namely: dopewars (#633392,  #857671), caveexpress, marsshooter, snowballz (#866481), drascula, lure-of-the-temptress, lgeneral-data (#861048) and lordsawar (#885888).
  • I also packaged new upstream versions of renpy and lgeneral.
  • Last but not least: I completed another bullet transition (#885179).

Debian Java

Debian LTS

This was my twenty-second month as a paid contributor and I have been paid to work 14 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1216-1. Issued a security update for wordpress fixing 4 CVE.
  • DLA-1227-1. Issued a security update for imagemagick fixing 4 CVE.
  • DLA-1231-1. Issued a security update for graphicsmagick fixing 8 CVE. I confirmed that two more CVE (CVE-2017-17783 and CVE-2017-17913) did not affect the version in Wheezy.
  • DLA-1236-1. Issued a security update for plexus-utils fixing 1 CVE.
  • DLA-1237-1. Issued a security update for plexus-utils2 fixing 1 CVE.
  • DLA-1208-1. I released an update for Debian’s reportbug tool to fix bug #878088. The LTS and security teams will be informed from now on when users report regressions due to security updates. I have also prepared updates for Jessie/Stretch and unstable but my NMU was eventually canceled by the maintainer of reportbug. He has not made a concrete counterproposal yet.

Misc

  • I reviewed and sponsored mygui and openmw for Bret Curtis.
  • I updated byzanz and fixed #830011.
  • I adopted the imlib2 image library and prepared a new upstream release. I hope to release it soon.

Non-maintainer upload

  • I NMUed lmarbles, prepared a new upstream release and fixed some bugs.

Thanks for reading and see you next time.

TEDGet ready for TED Talks India: Nayi Soch, premiering Dec. 10 on Star Plus

This billboard is showing up in streets around India, and it’s made out of pollution fumes that have been collected and made into ink — ink that’s, in turn, made into an image of TED Talks India: Nayi Soch host Shah Rukh Khan. Tune in on Sunday night, Dec. 10, at 7pm on Star Plus to see what it’s all about.

TED is a global organization with a broad global audience. With our TED Translators program working in more than 100 languages, TEDx events happening every day around the world and so much more, we work hard to present the latest ideas for everyone, regardless of language, location or platform.

Now we’ve embarked on a journey with one of the largest TV networks in the world — and one of the biggest movie stars in the world — to create a Hindi-language TV series and digital series that’s focused on a country at the peak of innovation and technology: India.

Hosted and curated by Shah Rukh Khan, the TV series TED Talks India: Nayi Soch will premiere in India on Star Plus on December 10.

The name of the show, Nayi Soch, literally means ‘new ideas’ — and this kick-off episode seeks to inspire the nation to embrace and cultivate ideas and curiosity. Watch it and discover a program of speakers from India and the world whose ideas might inspire you to some new thinking of your own! For instance — the image on this billboard above is made from the fumes of your car … a very new and surprising idea!

If you’re in India, tune in at 7pm IST on Sunday night, Dec. 10, to watch the premiere episode on Star Plus and five other channels. Then tune in to Star Plus on the next seven Sundays, at the same time, to hear even more great talks on ideas, grouped into themes that will certainly inspire conversations. You can also explore the show on the HotStar app.

On TED.com/india and for TED mobile app users in India, each episode will be conveniently turned into five to seven individual TED Talks, one talk for each speaker on the program. You can watch and share them on their own, or download them as playlists to watch one after another. The talks are given in Hindi, with professional subtitles in Hindi and in English. Almost every talk will feature a short Q&A between the speaker and the host, Shah Rukh Khan, that dives deeper into the ideas shared onstage.

Want to learn more about TED Talks? Check out this playlist that SRK curated just for you.

Google AdsenseOur continued investment in AdSense Experiments

Experimentation is at the heart of everything we do at Google — so much so that many of our products, including Analytics and AdSense, allow you to run your own experiments.

The AdSense Experiments page has let you experiment with ad unit settings and with allowing or blocking ad categories, to see how those changes affect your earnings. With today’s updates, you can run more experiment types and get a better understanding of how they impact your earnings and your users.

Understand user impact with session metrics

Curious to know how the settings you experiment with impact your user experience? You can now see how long users spend on your site with a new “Ad session length” metric that has been added to the Experiments results page. Longer ad session lengths are usually a good indicator of a healthy user experience.

Ad balance experiments

Ad balance is a tool that allows you to reduce the number of ads shown by displaying only those ads that perform the best. You can now run experiments to see how different ad fill rates impact revenue and ad session lengths. Try it out and let us know what you think in the comments below!

Service announcement: We're auto-completing some experiments, and deleting experiments that are more than a year old.

To ensure you can focus your time efficiently on experiments, we'll soon be auto-completing the experiments for which no winner has been chosen after 30 days of being marked “Ready to complete”. You can manually choose a winner during those 30 days, or (if you’re happy for us to close the experiment) you don't need to do anything. Learn more about the status of experiments.

We’ll also be deleting experiments that were completed more than one year ago. Old experiments are rarely useful in the fast-moving world of the Internet and clutter the Experiments page with outdated information. If you wish to keep old experiments, you can download all existing data by using the “Download Data” button on the Experiments page.

We look forward to hearing your thoughts on these new features.

Posted by: Amir Hosseini Rad, AdSense Product Manager

TEDA photograph by Paul Nicklen shows the tragedy of extinction, and more news from TED speakers

The past few weeks have brimmed over with TED-related news. Here, some highlights:

This is what extinction looks like. Photographer Paul Nicklen shocked the world with footage of a starving polar bear that he and members of his conservation group SeaLegacy captured in the Canadian Arctic Archipelago. “It rips your heart out of your chest,” Nicklen told The New York Times. Published in National Geographic, on Nicklen’s Instagram channel, and via SeaLegacy in early December, the footage and a photograph taken by Cristina Mittermeier spread rapidly across the Internet, to horrified reaction. Polar bears are hugely threatened by climate change, in part because of their dependence on ice cover, and their numbers are projected to drop precipitously in coming years. By publishing the photos, Nicklen said to the Times, he hoped to make a scientific data point feel real to people. (Watch Nicklen’s TED Talk)

Faster 3D printing with liquids. Attendees at Design Miami witnessed the first public demonstration of MIT’s 3D liquid printing process. In a matter of minutes, a robotic arm printed lamps and handbags inside a glass tank filled with gel, showing that 3D printing doesn’t have to be painfully slow. The technique upends the size constraints and poor material quality that have plagued 3D printing, say the creators, and could be used down the line to print larger objects like furniture, reports Dezeen. Steelcase and the Self-Assembly lab at MIT, co-directed by TED Fellow Skylar Tibbits and Jared Laucks, developed the revolutionary technique. (Watch Tibbits’ TED Talk)

The crazy mathematics of swarming and synchronization. Studies on swarming often focus on animal movement (think schools of fish) but ignore their internal framework, while studies on synchronization tend to focus solely on internal dynamics (think coupled lasers). The two phenomena, however, have rarely been studied together. In new research published in Nature Communications, mathematician Steven Strogatz and his former postdoctoral student Kevin O’Keefe studied systems where both synchronization and swarming occur simultaneously. Male tree frogs were one source of inspiration for the research by virtue of the patterns that they form in both space and time, mainly related to reproduction. The findings open the door to future research of unexplored behaviors and systems that may also exhibit these two behaviors concurrently. (Watch Strogatz’s TED Talk)

A filmmaker’s quest to understand white nationalism. Documentary filmmaker and human rights activist Deeyah Khan’s new documentary, White Right: Meeting the Enemy, seeks to understand neo-Nazis and white nationalists beyond their sociopolitical beliefs. All too familiar with racism and hate-related threats in her own life, Khan does not aim to sympathize with or rationalize their beliefs or behaviors. She instead intends to discover the evolution of their ideology as individuals, which can provide insights into how they became attracted to and involved in these movements. Deeyah uses this film to answer the question: “Is it possible for me to sit with my enemy and for them to sit with theirs?” (Watch Khan’s TED Talk)

The end of an era at the San Francisco Symphony. Conductor Michael Tilson Thomas announced that he will be stepping down from his role as music director of the San Francisco Symphony in 2020. In that year, he will be celebrating his 75th birthday and his 25th anniversary at the symphony, and although his forthcoming departure will be the end of an era, Thomas will continue to work as the artistic director for the New World Symphony at the training academy he co-founded in Miami. Thus, 2020 won’t be the last time we hear from the musical great, given that he intends to pick up compositions, stories, and poems that he’s previously worked on. (Watch Tilson Thomas’ TED Talk)

A better way to weigh yourself. The Shapa Smart Scale is all words, no numbers. Behavioral economist Dan Ariely helped redesign the scale in the hope that eliminating the tyranny of the number would help people make better decisions about their health (something we’re notoriously bad at). The smart scale sends a small electrical current through the person’s body and gathers information, such as muscle mass, bone density, and water percentage. Then, it compares it to personal data collected over time. Instead of spitting out a single number, it simply tells you whether you’re doing a little better, a little worse, much better, much worse, or essentially the same. (Watch Ariely’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.

Planet DebianSean Whitton: Are you a DD or DM doing source-only uploads to Debian out of a git repository?

If you are a Debian Maintainer (DM) or Debian Developer (DD) doing source-only uploads to Debian for packages maintained in git, you are probably using some variation of the following:

% # sbuild/pbuilder, install and test the final package
% # everything looks good
% dch -r
% git commit debian/changelog -m "Finalise 1.2.3-1 upload"
% gbp buildpackage -S --git-tag
% debsign -S
% dput ftp-master ../foo_1.2.3-1_source.changes
% git push --follow-tags origin master

where the origin remote is probably salsa.debian.org. Please consider replacing the above with the following:

% # sbuild/pbuilder, install and test the final package
% # everything looks good
% dch -r
% git commit debian/changelog -m "Finalise 1.2.3-1 upload"
% dgit push-source --gbp
% git push --follow-tags origin master

where the dgit push-source call does the following:

  1. Various sanity checks, some of which are not performed by any other tools, such as
    • not accidentally overwriting an NMU
    • not missing the .orig.tar from your upload
    • ensuring that the Distribution field in your .changes file matches your changelog
  2. Builds a source package from your git HEAD.
  3. Signs the .changes and .dsc.
  4. dputs these to ftp-master.
  5. Pushes your git history to dgit-repos.

Why might you want to do this? Well,

  1. You don’t need to learn how to use dgit for any other parts of your workflow. It’s entirely drop-in.
    • dgit will not make any merge commits on your master branch, or anything surprising like that. (It might make a commit to tweak your .gitignore.)
  2. No-one else in your team is required to use dgit. Nothing about their workflow need change.
  3. Benefit from dgit’s sanity checks.
  4. Provide your git history on dgit-repos in a uniform format that is easier for users, NMUers and downstreams to use (see dgit-user(7) and dgit-simple-nmu(7)).
    • Note that this is independent of the history you push to alioth/salsa. You still need to push to salsa as before, and the format of that history is not changed.
  5. Only a single command is required to perform the source-only upload, instead of three.

Hints

  1. If you’re using git dpm you’ll want --dpm instead of --gbp.
  2. If the last upload of the package was not performed with dgit, you’ll need to pass --overwrite (see the example below). dgit will tell you if you need this. This is to avoid accidentally excluding the changes in NMUs.
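
For instance, if the previous upload was a plain non-dgit upload or an NMU, the push from the recipe above might look like this (just a sketch; dgit itself will tell you when --overwrite is actually required):

% dgit push-source --gbp --overwrite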

Krebs on SecurityMicrosoft’s Jan. 2018 Patch Tuesday Lowdown

Microsoft on Tuesday released 14 security updates, including fixes for the Spectre and Meltdown flaws detailed last week, as well as a zero-day vulnerability in Microsoft Office that is being exploited in the wild. Separately, Adobe pushed a security update to its Flash Player software.

Last week’s story, Scary Chip Flaws Raise Spectre of Meltdown, sought to explain the gravity of these two security flaws present in most modern computers, smartphones, tablets and mobile devices. The bugs are thought to be mainly exploitable in chips made by Intel and ARM, but researchers said it was possible they also could be leveraged to steal data from computers with chips made by AMD.

By the time that story was published, Microsoft had already begun shipping an emergency update to address the flaws, but many readers complained that their PCs experienced the dreaded “blue screen of death” (BSOD) after applying the update. Microsoft warned that the BSOD problems were attributable to many antivirus programs not yet updating their software to play nice with the security updates.

On Tuesday, Microsoft said it was suspending the patches for computers running AMD chipsets.

“After investigating, Microsoft determined that some AMD chipsets do not conform to the documentation previously provided to Microsoft to develop the Windows operating system mitigations to protect against the chipset vulnerabilities known as Spectre and Meltdown,” the company said in a notice posted to its support site.

“To prevent AMD customers from getting into an unbootable state, Microsoft has temporarily paused sending the following Windows operating system updates to devices that have impacted AMD processors,” the company continued. “Microsoft is working with AMD to resolve this issue and resume Windows OS security updates to the affected AMD devices via Windows Update and WSUS as soon as possible.”

In short, if you’re running Windows on a computer powered by an AMD processor, you’re not going to be offered the Spectre/Meltdown fixes for now. Not sure whether your computer has an Intel or AMD chip? Most modern computers display this information (albeit very briefly) when the computer first starts up, before the Windows logo appears on the screen.

Here’s another way. From within Windows, users can find this information by pressing the Windows key on the keyboard and the “Pause” key at the same time, which should open the System Properties feature. The chip maker will be displayed next to the “Processor:” listing on that page.

Microsoft also on Tuesday provided more information about the potential performance impact on Windows computers after installing the Spectre/Meltdown updates. To summarize, Microsoft said Windows 7, 8.1 and 10 users on older chips (circa 2015 or older), as well as Windows server users on any silicon, are likely to notice a slowdown of their computer after applying this update.

Any readers who experience a BSOD after applying January’s batch of updates may be able to get help from Microsoft’s site: Here are the corresponding help pages for Windows 7, Windows 8.1 and Windows 10 users.

As evidenced by this debacle, it’s a good idea to get in the habit of backing up your system on a regular basis. I typically do this at least once a month — but especially right before installing any updates from Microsoft. 

Attackers could exploit a zero-day vulnerability in Office (CVE-2018-0802) just by getting a user to open a booby-trapped Office document or visit a malicious/hacked Web site. Microsoft also patched a flaw (CVE-2018-0819) in Office for Mac that was publicly disclosed prior to the patch being released, potentially giving attackers a heads up on how to exploit the bug.

Of the 56 vulnerabilities addressed in the January Patch Tuesday batch, at least 16 earned Microsoft’s critical rating, meaning attackers could exploit them to gain full access to Windows systems with little help from users. For more on Tuesday’s updates from Microsoft, check out blogs from Ivanti and Qualys.

As per usual, Adobe issued an update for Flash Player yesterday. The update brings Flash to version 28.0.0.137 on Windows, Mac, and Linux systems. Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, though users may need to manually check for updates and/or restart the browser before the new version is picked up.

When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then. Chrome will replace that three dot icon with an up-arrow inside of a circle when updates are waiting to be installed.

Standard disclaimer: Because Flash remains such a security risk, I continue to encourage readers to remove or hobble Flash Player unless and until it is needed for a specific site or purpose. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

Another, perhaps less elegant, solution is to keep Flash installed in a browser that you don’t normally use, and then to only use that browser on sites that require it.

CryptogramDetecting Adblocker Blockers

Interesting research on the prevalence of adblock blockers: "Measuring and Disrupting Anti-Adblockers Using Differential Execution Analysis":

Abstract: Millions of people use adblockers to remove intrusive and malicious ads as well as protect themselves against tracking and pervasive surveillance. Online publishers consider adblockers a major threat to the ad-powered "free" Web. They have started to retaliate against adblockers by employing anti-adblockers which can detect and stop adblock users. To counter this retaliation, adblockers in turn try to detect and filter anti-adblocking scripts. This back and forth has prompted an escalating arms race between adblockers and anti-adblockers.

We want to develop a comprehensive understanding of anti-adblockers, with the ultimate aim of enabling adblockers to bypass state-of-the-art anti-adblockers. In this paper, we present a differential execution analysis to automatically detect and analyze anti-adblockers. At a high level, we collect execution traces by visiting a website with and without adblockers. Through differential execution analysis, we are able to pinpoint the conditions that lead to the differences caused by anti-adblocking code. Using our system, we detect anti-adblockers on 30.5% of the Alexa top-10K websites which is 5-52 times more than reported in prior literature. Unlike prior work which is limited to detecting visible reactions (e.g., warning messages) by anti-adblockers, our system can discover attempts to detect adblockers even when there is no visible reaction. From manually checking one third of the detected websites, we find that the websites that have no visible reactions constitute over 90% of the cases, completely dominating the ones that have visible warning messages. Finally, based on our findings, we further develop JavaScript rewriting and API hooking based solutions (the latter implemented as a Chrome extension) to help adblockers bypass state-of-the-art anti-adblockers.

News article.

Worse Than FailureCodeSOD: Warp Me To Halifax

Greenwich must think they’re so smart, being on the prime meridian. Starting in the 1840s, the observatory was the international standard for time (and thus vital for navigation). And even after the world switched to UTC, GMT differs from it by at most 0.9 seconds. If you want to convert times between time zones, you do it by comparing against UTC, and you know what?

I’m sick of it. Boy, I wish somebody would take them down a notch. Why is a tiny little strip of London so darn important?

Evan’s co-worker clearly agrees that Greenwich’s superiority is unearned, and picks a different town to make the center of the world: Halifax.

function time_zone_time($datetime, $time_zone, $savings, $return_format="Y-m-d g:i a"){
        date_default_timezone_set('America/Halifax');
        $time = strtotime(date('Y-m-d g:i a', strtotime($datetime)));
        $halifax_gmt = -4;
        $altered_tdf_gmt = $time_zone;
        if ($savings && date('I', $time) == 1) {
                $altered_tdf_gmt++;
        } // end if
        if(date('I') == 1){
                $halifax_gmt++;
        }
        $altered_tdf_gmt -= $halifax_gmt;
        $new_time = mktime(date("H", $time), date("i", $time), date("s", $time),date("m", $time)  ,date("d", $time), date("Y", $time)) + ($altered_tdf_gmt*3600);
        $new_datetime = date($return_format, $new_time);
        return $new_datetime;
}
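
For contrast, here is a minimal sketch of the same conversion done with PHP’s built-in DateTime and DateTimeZone classes. It assumes the caller can pass an IANA zone name such as "America/New_York" rather than a raw GMT offset, and the function name is purely illustrative:

function time_zone_time_sane($datetime, $time_zone, $return_format = "Y-m-d g:i a") {
        // Interpret the input as Halifax local time (as the original does),
        // then convert to the requested zone; DST comes from the tz database
        // rather than from hand-rolled offsets.
        $dt = new DateTime($datetime, new DateTimeZone('America/Halifax'));
        $dt->setTimezone(new DateTimeZone($time_zone));
        return $dt->format($return_format);
}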
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaJonathan Adamczewski: Priorities for my team

(unthreaded from here)

During the day, I’m a Lead of a group of programmers. We’re responsible for a range of tools and tech used by others at the company for making games.

I have a list of my priorities (and some related questions) of things that I think are important for us to be able to do well as individuals, and as a team:

  1. Treat people with respect. Value their time, place high value on their well-being, and start with the assumption that they have good intentions
    (“People” includes yourself: respect yourself, value your own time and well-being, and have confidence in your good intentions.)
  2. When solving a problem, know the user and understand their needs.
    • Do you understand the problem(s) that need to be solved? (it’s easy to make assumptions)
    • Have you spoken to the user and listened to their perspective? (it’s easy to solve the wrong problem)
    • Have you explored the specific constraints of the problem by asking questions like:
      • Is this part needed? (it’s easy to over-reach)
      • Is there a satisfactory simpler alternative? (actively pursue simplicity)
      • What else will be needed? (it’s easy to overlook details)
    • Have your discussed your proposed solution with users, and do they understand what you intend to do? (verify, and pursue buy-in)
    • Do you continue to meet regularly with users? Do they know you? Do they believe that you’re working for their benefit? (don’t under-estimate the value of trust)
  3. Have a clear understanding of what you are doing.
    • Do you understand the system you’re working in? (it’s easy to make assumptions)
    • Have you read the documentation and/or code? (set yourself up to succeed with whatever is available)
    • For code:
      • Have you tried to modify the code? (pull a thread; see what breaks)
      • Can you explain how the code works to another programmer in a convincing way? (test your confidence)
      • Can you explain how the code works to a non-programmer?
  4. When trying to solve a problem, debug aggressively and efficiently.
    • Does the bug need to be fixed? (see 1)
    • Do you understand how the system works? (see 2)
    • Is there a faster way to debug the problem? Can you change code or data to cause the problem to occur more quickly and reliably? (iterate as quickly as you can, fix the bug, and move on)
    • Do you trust your own judgement? (debug boldly, have confidence in what you have observed, make hypotheses and test them)
  5. Pursue excellence in your work.
    • How are you working to be better understood? (good communication takes time and effort)
    • How are you working to better understand others? (don’t assume that others will pursue you with insights)
    • Are you responding to feedback with enthusiasm to improve your work? (pursue professionalism)
    • Are you writing high quality, easy to understand, easy to maintain code? How do you know? (continue to develop your technical skills)
    • How are you working to become an expert and industry leader with the technologies and techniques you use every day? (pursue excellence in your field)
    • Are you eager to improve (and fix) systems you have worked on previously? (take responsibility for your work)

The list was created for discussion with the group, and as an effort to articulate my own expectations in a way that will help my team understand me.

Composing this has been a useful exercise for me as a lead, and definitely worthwhile for the group. If you’ve never tried writing down your own priorities, values, and/or assumptions, I encourage you to try it :)

Planet DebianBen Hutchings: Meltdown and Spectre in Debian

I'll assume everyone's already heard repeatedly about the Meltdown and Spectre security issues that affect many CPUs. If not, see meltdownattack.com. These primarily affect systems that run untrusted code - such as multi-tenant virtual hosting systems. Spectre is also a problem for web browsers with Javascript enabled.

Meltdown

Over the last week the Debian kernel team has worked to mitigate Meltdown in all suites. This mitigation is currently limited to kernels running in 64-bit mode (amd64 architecture), but the issue affects 32-bit mode as well.

You can see where this mitigation is applied on the security tracker. As of today, wheezy, jessie, jessie-backports, stretch and unstable/sid are fixed while stretch-backports, testing/buster and experimental are not.

Spectre

Spectre needs to be mitigated in the kernel, browsers, and potentially other software. Currently the kernel changes to mitigate it are still under discussion upstream. Mozilla has started mitigating Spectre in Firefox and some of these changes are now in Debian unstable (version 57.0.4-1). Chromium has also started mitigating Spectre but no such changes have landed in Debian yet.

Planet DebianBen Hutchings: Debian LTS work, December 2017

I was assigned 14 hours of work by Freexian's Debian LTS initiative, but only worked 6 hours so I carried over 8 hours to January.

I prepared and uploaded an update to the Linux kernel to fix various security issues. I issued DLA-1200-1 for this update. I also prepared another update on the Linux 3.2 longterm stable branch, though most of that work was done while on holiday so I didn't count the hours. I spent some time following the closed mailing list used to coordinate backports of KPTI/KAISER.

,

CryptogramDaniel Miessler on My Writings about IoT Security

Daniel Miessler criticizes my writings about IoT security:

I know it's super cool to scream about how IoT is insecure, how it's dumb to hook up everyday objects like houses and cars and locks to the internet, how bad things can get, and I know it's fun to be invited to talk about how everything is doom and gloom.

I absolutely respect Bruce Schneier a lot for what he's contributed to InfoSec, which makes me that much more disappointed with this kind of position from him.

InfoSec is full of those people, and it's beneath people like Bruce to add their voices to theirs. Everyone paying attention already knows it's going to be a soup sandwich -- a carnival of horrors -- a tragedy of mistakes and abuses of trust.

It's obvious. Not interesting. Not novel. Obvious. But obvious or not, all these things are still going to happen.

I actually agree with everything in his essay. "We should obviously try to minimize the risks, but we don't do that by trying to shout down the entire enterprise." Yes, definitely.

I don't think the IoT must be stopped. I do think that the risks are considerable, and will increase as these systems become more pervasive and susceptible to class breaks. And I'm trying to write a book that will help navigate this. I don't think I'm the prophet of doom, and don't want to come across that way. I'll give the manuscript another read with that in mind.

Cory DoctorowWith repetition, most of us will become inured to all the dirty tricks of Facebook attention-manipulation

In my latest Locus column, “Persuasion, Adaptation, and the Arms Race for Your Attention,” I suggest that we might be too worried about the seemingly unstoppable power of opinion-manipulators and their new social media superweapons.


Not because these techniques don’t work (though when someone who wants to sell you persuasion tools tells you that they’re amazing and unstoppable, some skepticism is warranted), but because a large slice of any population will eventually adapt to any stimulus, which is why most of us aren’t addicted to slot machines, Farmville and Pokemon Go.


When a new attentional soft spot is discovered, the world can change overnight. One day, everyone you know is signal boosting, retweeting, and posting Upworthy headlines like “This video might hurt to watch. Luckily, it might also explain why,” or “Most Of These People Do The Right Thing, But The Guys At The End? I Wish I Could Yell At Them.” The style was compelling at first, then reductive and simplistic, then annoying. Now it’s ironic (at best). Some people are definitely still susceptible to “This Is The Most Inspiring Yet Depressing Yet Hilarious Yet Horrifying Yet Heartwarming Grad Speech,” but the rest of us have adapted, and these headlines bounce off of our attention like pre-penicillin bacteria being batted aside by our 21st century immune systems.

There is a war for your attention, and like all adversarial scenarios, the sides develop new countermeasures and then new tactics to overcome those countermeasures. The predator carves the prey, the prey carves the predator. To get a sense of just how far the state of the art has advanced since Farmville, fire up Universal Paperclips, the free browser game from game designer Frank Lantz, which challenges you to balance resource acquisition, timing, and resource allocation to create paperclips, progressing by purchasing upgraded paperclip-production and paperclip-marketing tools, until, eventually, you produce a sentient AI that turns the entire universe into paperclips, exterminating all life.

Universal Paperclips makes Farmville seem about as addictive as Candyland. Literally from the first click, it is weaving an attentional net around your limbic system, carefully reeling in and releasing your dopamine with the skill of a master fisherman. Universal Paperclips doesn’t just suck you in, it harpoons you.

Persuasion, Adaptation, and the Arms Race for Your Attention [Cory Doctorow/Locus]

Krebs on SecurityWebsite Glitch Let Me Overstock My Coinbase

Coinbase and Overstock.com just fixed a serious glitch that allowed Overstock customers to buy any item at a tiny fraction of the listed price. Potentially more punishing, the flaw let anyone paying with bitcoin reap many times the authorized bitcoin refund amount on any canceled Overstock orders.

In January 2014, Overstock.com partnered with Coinbase to let customers pay for merchandise using bitcoin, making it among the first of the largest e-commerce vendors to accept the virtual currency.

On December 19, 2017, as the price of bitcoin soared to more than $17,000 per coin, Coinbase added support for Bitcoin Cash — an offshoot (or “fork”) from bitcoin designed to address the cryptocurrency’s scalability challenges.

As a result of the change, Coinbase customers with balances of bitcoin at the time of the fork were given an equal amount of bitcoin cash stored by Coinbase. However, there is a significant price difference between the two currencies: A single bitcoin is worth almost $15,000 right now, whereas a unit of bitcoin cash is valued at around $2,400.

On Friday, Jan. 5, KrebsOnSecurity was contacted by JB Snyder, owner of North Carolina-based Bancsec, a company that gets paid to break into banks and test their security. An early adopter of bitcoin, Snyder said he was using some of his virtual currency to purchase an item at Overstock when he noticed something alarming.

During the checkout process for those paying by bitcoin, Overstock.com provides the customer a bitcoin wallet address that can be used to pay the invoice and complete the transaction. But Snyder discovered that Overstock’s site just as happily accepted bitcoin cash as payment, even though bitcoin cash is currently worth only about 15 percent of the value of bitcoin.

To confirm and replicate Snyder’s experience firsthand, KrebsOnSecurity purchased a set of three outdoor solar lamps from Overstock for a grand total of $78.27.

The solar lights I purchased from Overstock.com to test Snyder’s finding. They cost $78.27 in bitcoin, but because I was able to pay for them in bitcoin cash I only paid $12.02.

After indicating I wished to pay for the lamps in bitcoin, the site produced a payment invoice instructing me to send exactly 0.00475574 bitcoins to a specific address.

The payment invoice I received from Overstock.com.

Logging into Coinbase, I took the bitcoin address and pasted that into the “pay to:” field, and then told Coinbase to send 0.00475574 in bitcoin cash instead of bitcoin. The site responded that the payment was complete. Within a few seconds I received an email from Overstock congratulating me on my purchase and stating that the items would be shipped shortly.

I had just made a $78 purchase by sending approximately USD $12 worth of bitcoin cash. Crypto-currency alchemy at last!

But that wasn’t the worst part. I didn’t really want the solar lights, but also I had no interest in ripping off Overstock. So I cancelled the order. To my surprise, the system refunded my purchase in bitcoin, not bitcoin cash!

Consider the implications here: A dishonest customer could have used this bug to make ridiculous sums of bitcoin in a very short period of time. Let’s say I purchase one of the more expensive items for sale on Overstock, such as this $100,000, 3-carat platinum diamond ring. I then pay for it in bitcoin cash, using an amount equivalent to approximately 1 bitcoin (~$15,000).

Then I simply cancel my order, and Overstock/Coinbase sends me almost $100,000 in bitcoin, netting me a tidy $85,000 profit. Rinse, wash, repeat.

Reached for comment, Overstock.com said the company changed no code in its site and that a fix implemented by Coinbase resolved the issue.

“We were made aware of an issue affecting cryptocurrency transactions and refunds by an independent researcher. After working with the researcher to confirm the finding, that method of payment was disabled while we worked with our cryptocurrency integration partner, Coinbase, to ensure they resolved the issue. We have since confirmed that the issue described in the finding has been resolved, and the cryptocurrency payment option has been re-enabled.”

Coinbase said “the issue was caused by the merchant partner improperly using the return values in our merchant integration API. No other Coinbase customer had this problem.” Coinbase told me the bug only existed for approximately three weeks.

“After being made aware of an issue in our joint refund processing code on Saturday, Coinbase and Overstock worked together to deploy a fix within hours,” the Coinbase statement continued. “While a patch was being developed and tested, orders were proactively disabled to protect customers. To our knowledge, a very small number of transactions were impacted by this issue. Coinbase actively works with merchant partners to identify and solve issues like this in an ongoing, collaborative manner and since being made aware of this have ensured that no other partners are affected.”

Bancsec’s Snyder and I both checked for the presence of this glitch at multiple other merchants that work directly with Coinbase in their checkout process, but we found no other examples of this flaw.

The snafu comes as many businesses that have long accepted bitcoin are now distancing themselves from the currency thanks to the recent volatility in bitcoin prices and associated fees.

Earlier this week, it emerged that Microsoft had ceased accepting payments in Bitcoin, citing volatility concerns. In December, online game giant Steam said it was dropping support for bitcoin payments for the same reason.

And, as KrebsOnSecurity noted last month, even cybercriminals who run online stores that sell stolen identities and credit cards are urging their customers to transact in something other than bitcoin.

Interestingly, bitcoin is thought to have been behind a huge jump in Overstock’s stock price in 2017. In December, Overstock CEO Patrick Byrne reportedly stoked the cryptocurrency fires when he said that he might want to sell Overstock’s e-tailing operations and pour the extra cash into accelerating his blockchain-based business ideas instead.

In case anyone is wondering what I did with the “profit” I made from this scheme, I offered to send it back to Overstock, but they told me to keep it. Instead, I donated it to archive.org, a site that has come in handy for many stories published here.

Update, 3:15 p.m. ET: A previous version of this story stated that neither Coinbase nor Overstock would say which of the two was responsible for this issue. The modified story above resolves that ambiguity.

Planet DebianLars Wirzenius: On using Github and a PR based workflow

In mid-2017, I decided to experiment with using pull-requests (PRs) on Github. I've read that they make development using git much nicer. The end result of my experiment is that I'm not going to adopt a PR based workflow.

The project I chose for my experiment is vmdb2, a tool for generating disk images with Debian. I put it up on Github, and invited people to send pull requests or patches, as they wished. I got a bunch of PRs, mostly from two people. For a little while, there was a flurry of activity. It has now calmed down, I think primarily because the software has reached a state where the two contributors find it useful and don't need it to be fixed or have new features added.

This was my first experience with PRs. I decided to give it until the end of 2017 until I made any conclusions. I've found good things about PRs and a workflow based on them:

  • they reduce some of the friction of contributing, making it easier for people to contribute; from a contributor point of view PRs certainly seem like a better way than sending patches over email or sending a message asking to pull from a remote branch
  • merging a PR in the web UI is very easy

I also found some bad things:

  • I really don't like the Github UI or UX, in general or for PRs in particular
  • especially the emails Github sends about PRs seemed useless beyond a basic "something happened" notification, prompting me to check the web UI
  • PRs are a centralised feature, which is something I prefer to avoid; further, they're tied to Github, which is something I object to on principle, since it's not free software
    • note that Gitlab provides support for PRs as well, but I've not tried it; it's an "open core" system, which is not fully free software in my opinion, and so I'm wary of Gitlab; it's also a centralised solution
    • a "distributed PR" system would be nice
  • merging a PR is perhaps too easy, and I worry that it leads me to merging without sufficient review (that is of course a personal flaw)

In summary, PRs seem to me to prioritise making life easier for contributors, especially occasional contributors or "drive-by" contributors. I think I prefer to care more about frequent contributors, and myself as the person who merges contributions. For now, I'm not going to adopt a PR based workflow.

(I expect people to mock me for this.)

CryptogramNSA Morale

The Washington Post is reporting that poor morale at the NSA is causing a significant talent shortage. A November New York Times article said much the same thing.

The articles point to many factors: the recent reorganization, low pay, and the various leaks. I have been saying for a while that the Shadow Brokers leaks have been much more damaging to the NSA -- both to morale and operating capabilities -- than Edward Snowden. I think it'll take most of a decade for them to recover.

Worse Than FailureCodeSOD: Whiling Away the Time

There are two ways of accumulating experience in our profession. One is to spend many years accumulating and mastering new skills to broaden your skill set and ability to solve more and more complex problems. The other is to repeat the same year of experience over and over until you have one year of experience n times.

Anon took the former path and slowly built up his skills, adding to his repertoire with each new experience and assignment. At his third job, he encountered The Man, who took the latter path.

If you want to execute a block of code exactly once, you have several options. You could just put the code in-line. You could put it in a function and call said function. You could even put it in a do { ... } while (false); construct. The Man would do it as below, because it makes it easier and less error-prone to comment out a block of code:

  Boolean flag = true;
  while (flag) {
    flag = false;
    // code
    break;
  }

The Man not only built his own logging framework (because you can't trust the ones out there), but he demanded that every. single. function. begin and end with:

  Log.methodEntry("methodName");
  ...
  Log.methodExit("methodName");

...because in a multi-threaded environment, that won't flood the logs with all sorts of confusing and mostly useless log statements. Also, he would routinely use this construct in places where the logging system had not yet been initialized, so any logged errors went the way of the bit-bucket.

Every single method was encapsulated in its own try-catch-finally block. The catch block would merely log the error and continue as though the method was successful, returning null or zero on error conditions. The intent was to keep the application from ever crashing. There was no concept of rolling the error up to a place where it could be properly handled.

His concept of encapsulation was to wrap not just each object, but virtually every line of code, including declarations, in a region tag.

To give you a taste of what Anon had to deal with, the following is a procedure of The Man's:


  #region Protected methods
    protected override Boolean ParseMessage(String strRemainingMessage) {
       Log.LogEntry(); 
  
  #    region Local variables
         Boolean bParseSuccess = false;
         String[] strFields = null;
  #    endregion //Local variables
  
  #    region try-cache-finally  [op: SIC]
  #      region try
           try {
  #            region Flag to only loop once
                 Boolean bLoop = true;
  #            endregion //Flag to only loop once
  
  #            region Loop to parse the message
                while (bLoop) {
  #                region Make sure we only loop once
                     bLoop = false;
  #                endregion //Make sure we only loop once
  
  #                region parse the message
                     bParseSuccess = base.ParseMessage(strRemainingMessage);
  #                endregion //parse the message
  
  #                region break the loop
                     break;
  #                endregion //break the loop
                }
  #            endregion //Loop to parse the message
           }
  #      endregion //try
    
  #      region cache // [op: SIC]
            catch (Exception ex) {
              Log.Error(ex.Message);
            }
  #      endregion //cache [op: SIC]
  	  
  #      region finally
           finally {
             if (null != strFields) {
                strFields = null; // op: why set local var to null?
             }
           }
  #      endregion //finally
  
  #      endregion //try-cache-finally [op: SIC]
  
       Log.LogExit();
  
       return bParseSuccess;
     }
  #endregion //Protected methods

The corrected version:

  // Since the ParseMessage method has it's own try-cache
  // on "Exception", it will never throw any exceptions 
  // and logging entry and exit of a method doesn't seem 
  // to bring us any value since it's always disabled. 
  // I'm not even sure if we have a way to enable it 
  // during runtime without recompiling and installing 
  // the application...
  protected override Boolean ParseMessage(String remainingMessage){
    return base.ParseMessage(remainingMessage); 
  }

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

,

CryptogramTourist Scams

A comprehensive list. Most are old and obvious, but there are some clever variants.

Worse Than FailureCodeSOD: JavaScript Centipede

Starting with the film Saw, in 2004, the “torture porn” genre started to seep into the horror market. Very quickly, filmmakers in that genre learned that they could abandon plot, tension, and common sense, so long as they produced the most disgusting concepts they could think of. The game of one-downsmanship arguably reached its nadir with the conclusion of The Human Centipede trilogy. Yes, they made three of those movies.

This aside into film critique is because Greg found the case of a “JavaScript Centipede”: the refuse from one block of code becomes the input to the next block.

function dynamicallyLoad(win, signature) {
    for (var i = 0; i < this.addList.length; i++) {
        if (window[this.addList[i].object] != null)
            continue;
        var object = win[this.addList[i].object];
        if (this.addList[i].type == 'function' || typeof (object) == 'function') {
            var o = String(object);
            var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                .replace(/\n/g, "\\n").replace(/'/g, "\\'");
            var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                .replace(/,/g, "','");
            if (params != "")
                params += "','";
            window.eval(String(this.addList[i].object) +
                        "=new Function('" + String(params + body) + "')");
            var c = window[this.addList[i].object];
            if (this.addList[i].type == 'class') {
                for (var j in object.prototype) {
                    var o = String(object.prototype[j]);
                    var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                        .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                        .replace(/\n/g, "\\n").replace(/'/g, "\\'");
                    var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                        .replace(/,/g, "','");
                    if (params != "")
                        params += "','";
                    window.eval(String(this.addList[i].object) + ".prototype." + j +
                        "=new Function('" + String(params + body) + "')");
                }
                if (object.statics) {
                    window[this.addList[i].object].statics = new Object();
                    for (var j in object.statics) {
                        var obj = object.statics[j];
                        if (typeof (obj) == 'function') {
                            var o = String(obj);
                            var body = o.substring(o.indexOf('{') + 1, o.lastIndexOf('}'))
                                .replace(/\\/g, "\\\\").replace(/\r/g, "\\n")
                                .replace(/\n/g, "\\n").replace(/'/g, "\\'");
                            var params = o.substring(o.indexOf('(') + 1, o.indexOf(')'))
                                .replace(/,/g, "','");
                            if (params != "")
                                params += "','";
                            window.eval(String(this.addList[i].object) + ".statics." +
                                j + "=new Function('" + String(params + body) + "')");
                        } else
                            window[this.addList[i].object].statics[j] = obj;
                    }
                }
            }
        } else if (this.addList[i].type == 'image') {
            window[this.addList[i].object] = new Image();
            window[this.addList[i].object].src = object.src;
        } else
            window[this.addList[i].object] = object;
    }
    this.addList.length = 0;
    this.isLoadedArray[signature] = new Date().getTime();
}

I’m not going to explain what this code does, I’m not certain I could. Like a Human Centipede film, you’re best off just being disgusted at the concept on display. If you're not sure why it's bad, just note the eval calls. Don’t think too much about the details.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Planet Linux AustraliaDavid Rowe: Engage the Silent Drive

I’ve been busy electrocuting my boat – here are our first impressions of the Torqueedo Cruise 2.0T on the water.

About 2 years ago I decided to try sailing, so I bought a second hand Hartley TS16; a popular small “trailer sailor” here in Australia. Since then I have been getting out once every week, having some very pleasant days with friends and family, and even at times by myself. Sailing really takes you away from everything else in the world. It keeps you busy as you are always pulling a rope or adjusting this and that, and is physically very active as you are clambering all over the boat. Mentally there is a lot to learn, and I started as a complete nautical noob.

Sailing is so quiet and peaceful, you get propelled by the wind using aerodynamics and it feels like like magic. However this is marred by the noise of outboard motors, which are typically used at the start and end of the day to get the boat to the point where it can sail. They are also useful to get you out of trouble in high seas/wind, or when the wind dies. I often use the motor to “un hit” Australia when I accidentally lodge myself on a sand bar (I have a lot of accidents like that).

The boat came with an ancient 2-stroke which belched smoke and noise. After about 12 months this motor suffered a terminal meltdown (impeller failure and overheating), so it was replaced with a modern 5HP Honda 4-stroke, which is much quieter and very fuel efficient.

My long term goal was to “electrocute” the boat and replace the infernal combustion outboard engine with an electric motor and battery pack. I recently bit the bullet and obtained a Torqeedo Cruise 2kW outboard from Eco Boats Australia.

My friend Matt and I tested the motor today and are really thrilled. Matt is an experienced Electrical Engineer and sailor so was an ideal companion for the first run of the Torqueedo.

Torqueedo Cruise 2.0 First Impressions

It’s silent – incredibly so. Just a slight whine conducted from the motor/gearbox pod beneath the water. The sound of water flowing around the boat is louder!

The acceleration is impressive, better than the 4-stroke. Make sure you sit down. That huge, low RPM prop and loads of torque. We settled on 1000W, experimenting with other power levels.

The throttle control is excellent, you can dial up any speed you want. This made parking (mooring) very easy compared to the 4-stroke which is more of a “single speed” motor (idles at 3 knots, 4-5 knots top speed) and is unwieldy for parking.

It’s fit for purpose. This is not a low power “trolling” motor, it is every bit as powerful as the modern Honda 5HP 4-stroke. We did an A/B test and obtained the same top speed (5 knots) in the same conditions (wind/tide/stretch of water). We used it with 15 knot winds and 1m seas and it was the real deal – pushing the boat exactly where we wanted to go with authority. This is not a compromise solution. The Torqueedo shows internal combustion whose house it is.

We had some fun sneaking up on kayaks at low power, getting to within a few metres before they heard us. Other boaties saw us gliding past with the sails down and couldn’t work out how we were moving!

A hidden feature is Azipod steering – it steers through more than 270 degrees. You can reverse without reverse gear, and we did “donuts” spinning on the keel!

Some minor issues: Unlike the Honda, the Torqueedo doesn’t tilt completely out of the water when sailing, leaving some residual drag from the motor/propeller pod. It also has to be removed from the boat for trailering, due to insufficient road clearance.

Walk Through

Here are the two motors with the boat out of the water:

It’s quite a bit longer than the Honda, mainly due to the enormous prop. The centres of the two props are actually only 7cm apart in height above ground. I had some concerns about ground clearance, both when trailering and also in the water. I have enough problems hitting Australia and like the way my boat can float in just 30cm of water. I discussed this with my very helpful Torqueedo dealer, Chris. He said tests with short and long version suggested this wasn’t a problem and in fact the “long” version provided better directional control. More water on top of the prop is a good thing. They recommend 50mm minimum, I have about 100mm.

To get started I made up a 24V battery pack using a plastic tub and 8 x 3.2V 100AH Lithium cells, left over from my recent EV battery upgrade. The cells are in varying conditions; I doubt any of them have 100AH capacity after 8 years of being hammered in my EV. On the day we ran for nearly 2 hours before one of the weaker cells dipped beneath 2.5V. I’ll sort through my stock of second hand cells some time to optimise the pack.

The pack plus motor weighs 41kg, the 5HP Honda plus 5l petrol 32kg. At low power (600W, 3.5 knots), this 2.5kWHr pack gives us around four hours of motoring, a range of 14 nm or 28km. Plenty – on a huge day’s sailing we cover 40km, of which just 5km would be on motor.

All that power on board is handy too, for example the load of a fridge would be trivial compared to the motor, and a 100W HF radio no problem. So now I can quaff ice-cold sparkling shiraz or a nice beer, while having an actual conversation and not choking on exhaust fumes!

Here’s Matt taking us for a test drive, not much to the Torqeedo above the water:

For a bit of fun we ran both motors (maybe 10HP equivalent) and hit 7 knots, almost getting the Hartley up on the plane. Does this make it a Hybrid boat?

Conclusions

We are in love. This is the future of boating. For sale – one 5HP Honda 4-stroke.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Annual Penguin Picnic, January 28, 2018

Jan 28 2018 12:00
Jan 28 2018 18:00
Location: 
Yarra Bank Reserve, Hawthorn.

The Linux Users of Victoria Annual Penguin Picnic will be held on Sunday, January 28, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.

LUV would like to acknowledge Infoxchange for the Richmond venue.

Linux Users of Victoria Inc. is a subcommittee of Linux Australia.


CryptogramSpectre and Meltdown Attacks

After a week or so of rumors, everyone is now reporting about the Spectre and Meltdown attacks against pretty much every modern processor out there.

These are side-channel attacks where one process can spy on other processes. They affect computers where an untrusted browser window can execute code, phones that have multiple apps running at the same time, and cloud computing networks that run lots of different processes at once. Fixing them either requires a patch that results in a major performance hit, or is impossible and requires a re-architecture of conditional execution in future CPU chips.

I'll be writing something for publication over the next few days. This post is basically just a link repository.

EDITED TO ADD: Good technical explanation. And a Slashdot thread.

EDITED TO ADD (1/5): Another good technical description. And how the exploits work through browsers. A rundown of what vendors are doing. Nicholas Weaver on its effects on individual computers.

EDITED TO ADD (1/7): xkcd.

,

Planet Linux AustraliaJonathan Adamczewski: A little bit of floating point in a memory allocator — Part 2: The floating point

[Previously]

This post contains the same material as this thread of tweets, with a few minor edits.

In IEEE754, floating point numbers are represented like this:

±2ⁿⁿⁿ×1.sss…

nnn is the exponent, which is floor(log2(size)) — which happens to be the fl value computed by TLSF.

sss… is the significand fraction: the part that follows the binary point, which happens to be sl.

And so to calculate fl and sl, all we need to do is convert size to a floating point value (on recent x86 hardware, that’s a single instruction). Then we can extract the exponent, and the upper bits of the fractional part, and we’re all done :D

That can be implemented like this:

double sf = (int64_t)size;
uint64_t sfi;
memcpy(&sfi, &sf, 8);
fl = (sfi >> 52) - (1023 + 7);
sl = (sfi >> 47) & 31;

There’s some subtleties (there always is). I’ll break it down…

double sf = (int64_t)size;

Convert size to a double, with an explicit cast. size has type size_t, but using TLSF from github.com/mattconte/tlsf, the largest supported allocation on 64bit architecture is 2^32 bytes – comfortably less than the precision provided by the double type. If you need your TLSF allocator to allocate chunks bigger than 2^53, this isn’t the technique for you :)

I first tried using float (not double), which can provide correct results — but only if the rounding mode happens to be set correctly. double is easier.

The cast to (int64_t) results in better codegen on x86: without it, the compiler will generate a full 64bit unsigned conversion, and there is no single instruction for that.

The cast tells the compiler to (in effect) consider the bits of size as if they were a two’s complement signed value — and there is an SSE instruction to handle that case (cvtsi2sdq or similar). Again, with the implementation we’re using size can’t be that big, so this will do the Right Thing.

uint64_t sfi;
memcpy(&sfi, &sf, 8);

Copy the 8 bytes of the double into an unsigned integer variable. There are a lot of ways that C/C++ programmers copy bits from floating point to integer – some of them are well defined :) memcpy() does what we want, and any moderately respectable compiler knows how to select decent instructions to implement it.

Now we have floating point bits in an integer register, consisting of one sign bit (always zero for this, because size is always positive), eleven exponent bits (offset by 1023), and 52 bits of significand fraction. All we need to do is extract those, and we’re done :)

fl = (sfi >> 52) - (1023 + 7);

Extract the exponent: shift it down (ignoring the always-zero sign bit), subtract the offset (1023), and that 7 we saw earlier, at the same time.

sl = (sfi >> 47) & 31;

Extract the five most significant bits of the fraction – we do need to mask out the exponent.

And, just like that*, we have mapping_insert(), implemented in terms of integer -> floating point conversion.

* Actual code (rather than fragments) may be included in a later post…
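Until then, here’s one way the fragments above might be assembled into a complete function. This is a sketch of my own under the same assumptions (allocation sizes comfortably below 2^53, and the small-size branch kept as in the integer implementation described in Part 1), not the author’s actual code:

#include <stdint.h>
#include <stddef.h>
#include <string.h>

void mapping_insert_float(size_t size, int* fli, int* sli)
{
  int fl, sl;
  if (size < 256)
  {
    /* Small sizes: same linear mapping as the integer version. */
    fl = 0;
    sl = (int)size / 8;
  }
  else
  {
    double sf = (int64_t)size;            /* one cvtsi2sd on x86-64 */
    uint64_t sfi;
    memcpy(&sfi, &sf, 8);                 /* grab the bits of the double */
    fl = (int)((sfi >> 52) - (1023 + 7)); /* exponent, minus bias and that 7 */
    sl = (int)((sfi >> 47) & 31);         /* top five bits of the significand fraction */
  }
  *fli = fl;
  *sli = sl;
}

For size = 300 this produces fl = 1 and sl = 5, matching the integer implementation.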

Planet Linux AustraliaJonathan Adamczewski: A little bit of floating point in a memory allocator — Part 1: Background

This post contains the same material as this thread of tweets, with a few minor edits.

Over my holiday break at the end of 2017, I took a look into the TLSF (Two Level Segregated Fit) memory allocator to better understand how it works. I’ve made use of this allocator and have been impressed by its real world performance, but never really done a deep dive to properly understand it.

The mapping_insert() function is a key part of the allocator implementation, and caught my eye. Here’s how that function is described in the paper A constant-time dynamic storage allocator for real-time systems:

I’ll be honest: from that description, I never developed a clear picture in my mind of what that function does.

(Reading it now, it seems reasonably clear – but I can say that only after I spent quite a bit of time using other methods to develop my understanding)

Something that helped me a lot was looking at the implementation of that function from github.com/mattconte/tlsf/. There’s a bunch of long-named macro constants in there, and a few extra implementation details. If you collapse those it looks something like this:

void mapping_insert(size_t size, int* fli, int* sli)
{ 
  int fl, sl;
  if (size < 256)
  {
    fl = 0;
    sl = (int)size / 8;
  }
  else
  {
    fl = fls(size);
    sl = (int)(size >> (fl - 5)) ^ 0x20;
    fl -= 7;
  }
  *fli = fl;
  *sli = sl;
}

It’s a pretty simple function (it really is). But I still failed to *see* the pattern of results that would be produced in my mind’s eye.

I went so far as to make a giant spreadsheet of all the intermediate values for a range of inputs, to paint myself a picture of the effect of each step :) That helped immensely.
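If you want to reproduce a slice of that spreadsheet, here’s a small self-contained driver of my own (a sketch, not from the original thread). It repeats the simplified mapping_insert() above and supplies a fls() built on the GCC/Clang __builtin_clzll intrinsic (an assumption on my part – tlsf.c has its own version), then prints fl and sl for a handful of sizes:

#include <stdio.h>
#include <stddef.h>

/* "find last set": 0-based index of the highest set bit. */
static int fls(size_t word)
{
  return 63 - __builtin_clzll((unsigned long long)word);
}

static void mapping_insert(size_t size, int* fli, int* sli)
{
  int fl, sl;
  if (size < 256)
  {
    fl = 0;
    sl = (int)size / 8;
  }
  else
  {
    fl = fls(size);
    sl = (int)(size >> (fl - 5)) ^ 0x20;
    fl -= 7;
  }
  *fli = fl;
  *sli = sl;
}

int main(void)
{
  const size_t sizes[] = { 32, 255, 256, 300, 1000, 4096, 65536, 1048576 };
  for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; ++i)
  {
    int fl, sl;
    mapping_insert(sizes[i], &fl, &sl);
    printf("size=%8zu  fl=%2d  sl=%2d\n", sizes[i], fl, sl);
  }
  return 0;
}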

Breaking it down…

There are two cases handled in the function: one for when size is below a certain threshold, and one for when it is larger. The first is straightforward, and accounts for a small number of possible input values. The large size case is more interesting.

The function computes two values: fl and sl, the first and second level indices for a lookup table. For the large case, fl (where fl is “first level”) is computed via fls(size) (where fls is short for “find last set” – similar names, just to keep you on your toes).

fls() returns the index of the largest bit set, counting from the least significant bit – which is the index of the largest power of two. In the words of the paper:

“the instruction fls can be used to compute the ⌊log2(x)⌋ function”

Which is, in C-like syntax: floor(log2(x))

And there’s that “fl -= 7” at the end. That will show up again later.

For the large case, the computation of sl has a few steps:

  sl = (size >> (fl - 5)) ^ 0x20;

Shift size down by some amount (based on fl), and mask out the sixth bit?

(Aside: The CellBE programmer in me is flinching at that variable shift)

It took me a while (longer than I would have liked…) to realize that this size >> (fl - 5) is shifting size to generate a number that has exactly six significant bits, at the least significant end of the register (bits 5 thru 0).

Because fl is the index of the most significant bit, after this shift, bit 5 will always be 1 – and that “^ 0x20” will unset it, leaving the result as a value between 0 and 31 (inclusive).
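A worked example (my numbers, not from the original thread): for size = 300, fls(300) = 8, so we shift right by 8 - 5 = 3. 300 >> 3 = 37 = 0b100101; bit 5 is set as promised, and 37 ^ 0x20 = 5. After fl -= 7 we end up with fl = 1, sl = 5.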

So here’s where floating point comes into it, and the cute thing I saw: another way to compute fl and sl is to convert size into an IEEE754 floating point number, and extract the exponent, and most significant bits of the mantissa. I’ll cover that in the next part, here.

,

CryptogramFriday Squid Blogging: How the Optic Lobe Controls Squid Camouflage

Experiments on the oval squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityScary Chip Flaws Raise Spectre of Meltdown

Apple, Google, Microsoft and other tech giants have released updates for a pair of serious security flaws present in most modern computers, smartphones, tablets and mobile devices. Here’s a brief rundown on the threat and what you can do to protect your devices.

At issue are two different vulnerabilities, dubbed “Meltdown” and “Spectre,” that were independently discovered and reported by security researchers at Cyberus Technology, Google, and the Graz University of Technology. The details behind these bugs are extraordinarily technical, but a Web site established to help explain the vulnerabilities sums them up well enough:

“These hardware bugs allow programs to steal data which is currently processed on the computer. While programs are typically not permitted to read data from other programs, a malicious program can exploit Meltdown and Spectre to get hold of secrets stored in the memory of other running programs. This might include your passwords stored in a password manager or browser, your personal photos, emails, instant messages and even business-critical documents.”

“Meltdown and Spectre work on personal computers, mobile devices, and in the cloud. Depending on the cloud provider’s infrastructure, it might be possible to steal data from other customers.”

The Meltdown bug affects every Intel processor shipped since 1995 (with the exception of Intel Itanium and Intel Atom before 2013), although researchers said the flaw could impact other chip makers. Spectre is a far more wide-ranging and troublesome flaw, impacting desktops, laptops, cloud servers and smartphones from a variety of vendors. However, according to Google researchers, Spectre also is considerably more difficult to exploit.

In short, if it has a computer chip in it, it’s likely affected by one or both of the flaws. For now, there don’t appear to be any signs that attackers are exploiting either to steal data from users. But researchers warn that the weaknesses could be exploited via Javascript — meaning it might not be long before we see attacks that leverage the vulnerabilities being stitched into hacked or malicious Web sites.

Microsoft this week released emergency updates to address Meltdown and Spectre in its various Windows operating systems. But the software giant reports that the updates aren’t playing nice with many antivirus products; the fix apparently is causing the dreaded “blue screen of death” (BSOD) for some antivirus users. In response, Microsoft has asked antivirus vendors who have updated their products to avoid the BSOD crash issue to install a special key in the Windows registry. That way, Windows Update can tell whether it’s safe to download and install the patch.

But not all antivirus products have been able to do this yet, which means many Windows users likely will not be able to download this patch immediately. If you run Windows Update and it does not list a patch made available on Jan 3, 2018, it’s likely your antivirus software is not yet compatible with this patch.

Google has issued updates to address the vulnerabilities on devices powered by its Android operating system. Meanwhile, Apple has said that all iOS and Mac systems are vulnerable to Meltdown and Spectre, and that it has already released “mitigations” in iOS 11.2, macOS 10.13.2, and tvOS 11.2 to help defend against Meltdown. The Apple Watch is not impacted. Patches to address this flaw in Linux systems were released last month.

Many readers appear concerned about the potential performance impact that applying these fixes may have on their devices, but my sense is that most of these concerns are probably overblown for regular end users. Forgoing security fixes over possible performance concerns doesn’t seem like a great idea considering the seriousness of these bugs. What’s more, the good folks at benchmarking site Tom’s Hardware say their preliminary tests indicate that there is “little to no performance regression in most desktop workloads” as a result of applying available fixes.

Meltdownattack.com has a full list of vendor advisories. The academic paper on Meltdown is here (PDF); the paper for Spectre can be found at this link (PDF). Additionally, Google has published a highly technical analysis of both attacks. Cyberus Technology has their own blog post about the threats.

Cory DoctorowA Hopeful Look At The Apocalypse: An interview with PRI’s Innovation Hub


I chatted with Innovation Hub, distributed by PRI, about the role of science fiction and dystopia in helping to shape the future (MP3).


Three Takeaways


1. Doctorow thinks that science-fiction can give people “ideas for what to do if the future turns out in different ways.” Like how William Gibson’s Neuromancer didn’t just predict the internet, it predicted the intermingling of corporations and the state.

2. When you have story after story about how people turn on each other after disaster, Doctorow believes it gives us the largely false impression that people act like jerks in crises. When in fact, people usually rise to the occasion.

3. With Walkaway, his “optimistic” disaster novel, Doctorow wanted to present a new narrative about resolving differences between people who are mostly on the same side.

Sociological ImagesIn Alabama’s Special Election, What about the Men?

Over the last few weeks, commentary about alleged sexual predator Roy Moore’s failure to secure a seat in the U.S. Senate has flooded our news and social media feeds, shining a spotlight on the critical role of Black women in the election. About 98% of Black women, comprising 17% of total voters, cast their ballots for Moore’s opponent Doug Jones, ensuring Jones’s victory. At the same time, commentators questioned the role of White women in supporting Moore. Sources estimate that 63% of White women voted for Moore, including the majority of college-educated White women.

Vogue proclaimed, “Doug Jones Won, but White Women Lost.” U.S. News and World Reports asked, “Why do so many White women vote for misogynists?” Feminist blog Jezebel announced succinctly: “White women keep fucking us over.” Fair enough. But we have to ask, “What about Black and White men?” The fact that 48% of Alabama’s voting population is absent from these conversations is not accidental. It’s part of an incomplete narrative that focuses solely on the impact of women voters and continues the false narrative that fixing inequality is solely their burden.

Let’s focus first on Black men. Exit poll data indicate that 93% of Black men voted for Jones, and they accounted for 11% of the total vote. Bluntly put, Jones could not have secured his razor-thin victory without their votes. Yet, media commentary about their specific role in the election is typically obscured. Several articles note the general turnout of Black voters without explicitly highlighting the contribution of Black men. Other articles focus on the role of Black women exclusively. In a Newsweek article proclaiming Black women “Saved America,” Black men receive not a single mention. In addition to erasing a key contribution, this incomplete account of Jones’s victory masks concerns about minority voter suppression and the Democratic party taking Black votes for granted.

White men comprised 35% of total voters in this election, and 72% of them voted for Moore. But detailed commentary on their overwhelming support for Moore – a man who said that Muslims shouldn’t serve in Congress, that America was “great” during the time of slavery, and was accused of harassing and/or assaulting at least nine women in their teens while in his thirties – is frankly rare. The scant mentions in popular media may best be summed up as: “We expect nothing more from White men.”

As social scientists, we know that expectations matter. A large body of work indicates that negative stereotypes of Black boys and men are linked to deleterious outcomes in education, crime, and health. Within our academic communities we sagely nod our heads and agree we should change our expectations of Black boys and men to ensure better outcomes. But this logic of high expectations is rarely applied to White men. The work of Jackson Katz is an important exception. He, and a handful of others, have for years pointed out that gender-blind conversations about violence perpetrated by men, primarily against women – in families, in romantic relationships, and on college campuses – serve only to perpetuate this violence by making its prevention a woman’s problem.

The parallels to politics in this case are too great to ignore. It’s not enough for the media to note that voting trends for the Alabama senate election were inherently racist and sexist. Pointing out that Black women were critically important in determining election outcomes, and that most White women continued to engage in the “patriarchal bargain” by voting for Moore is a good start, but not sufficient. Accurate coverage would offer thorough examinations of the responsibility of all key players – in this case the positive contributions of Black men, and the negative contributions of White men. Otherwise, coverage risks downplaying White men’s role in supporting public officials who are openly or covertly racist or sexist. This perpetuates a social structure that privileges White men above all others and then consistently fails to hold them responsible for their actions. We can, and must, do better.

Mairead Eastin Moloney is an Assistant Professor of Sociology at the University of Kentucky. 

(View original at https://thesocietypages.org/socimages)

CryptogramNew Book Coming in September: "Click Here to Kill Everybody"

My next book is still on track for a September 2018 publication. Norton is still the publisher. The title is now Click Here to Kill Everybody: Peril and Promise on a Hyperconnected Planet, which I generally refer to as CH2KE.

The table of contents has changed since I last blogged about this, and it now looks like this:

  • Introduction: Everything is Becoming a Computer
  • Part 1: The Trends
    • 1. Computers are Still Hard to Secure
    • 2. Everyone Favors Insecurity
    • 3. Autonomy and Physical Agency Bring New Dangers
    • 4. Patching is Failing as a Security Paradigm
    • 5. Authentication and Identification are Getting Harder
    • 6. Risks are Becoming Catastrophic
  • Part 2: The Solutions
    • 7. What a Secure Internet+ Looks Like
    • 8. How We Can Secure the Internet+
    • 9. Government is Who Enables Security
    • 10. How Government Can Prioritize Defense Over Offense
    • 11. What's Likely to Happen, and What We Can Do in Response
    • 12. Where Policy Can Go Wrong
    • 13. How to Engender Trust on the Internet+
  • Conclusion: Technology and Policy, Together

Two questions for everyone.

1. I'm not really happy with the subtitle. It needs to be descriptive, to counterbalance the admittedly clickbait title. It also needs to telegraph: "everyone needs to read this book." I'm taking suggestions.

2. In the book I need a word for the Internet plus the things connected to it plus all the data and processing in the cloud. I'm using the word "Internet+," and I'm not really happy with it. I don't want to invent a new word, but I need to strongly signal that what's coming is much more than just the Internet -- and I can't find any existing word. Again, I'm taking suggestions.

Planet Linux AustraliaBen Martin: That gantry just pops right off

Hobby CNC machines sold as "3040" may have a gantry clearance of about 80mm and a z-axis travel of around 55mm. A detached gantry is shown below. Notice that there are 3 bolts on the bottom side mounting the z-axis to the gantry. The stepper motor attaches on the side shown, so there are 4 NEMA holes to hold the stepper. Note that the normal 3040 doesn't have the mounting plate shown on the z-axis; that crossover plate allows a different spindle to be mounted to this machine.


The plan is to create replacement sides with some 0.5inch offcut 6061 alloy. This will add 100mm to the gantry so it can more easily clear clamps and a 4th axis. Because that would move the cutter mount upward as well, replacing the z-axis with something that has more range, say 160mm, becomes an interesting plan.

One advantage to upgrading a machine like this is that you can reassemble the machine after measuring and designing the upgrade and then cut replacement parts for the machine using the machine.

The 3040 can look a bit spartan with the gantry removed.


The preliminary research is done. Designs created. CAM done. I just have to cut 4 plates and then the real fun begins.


Worse Than FailureError'd: The Elephant in the Room

Robert K. wrote, "Let's just keep this error between us and never speak of it again."

 

"Not only does this web developer have a full-time job, but he's also got way more JQuery than the rest of us. So much, in fact, he's daring us to remove it," writes Mike H.

 

"Come on and get your Sample text...sample text here...", wrote Eric G.

 

Jan writes, "I just bought a new TV. Overall, it was a wonderful experience. So much so that I might become a loyal customer. Or not."

 

"Finally. It's time for me to show off my CAPTCHA-solving artistic skills!" Christoph writes.

 

Nils P. wrote, "Gee thanks, Zoho. I thought I'd be running out of space soon!"

 

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

,

Worse Than FailureLegacy Hardware

Thanks to Hired, we’ve got the opportunity to bring you another little special project- Legacy Hardware. Hold on tight for a noir-thriller that dares to ask the question: “why does everything in our organization need to talk to an ancient mainframe?” Also, it’s important to note, Larry Ellison really does have a secret lair on a volcanic island in Hawaii.

Once again, special thanks to Hired, who not only helped us produce this sketch, but also helps us keep the site running. With Hired, instead of applying for jobs, your prospective employer will apply to interview you. You get placed in control of your job search, and Hired provides a “talent advocate” who can provide unbiased career advice and make sure you put your best foot forward. Sign up now, and find the best opportunities for your future with Hired.

Thanks to director Zane Cook, Michael Shahen and Sam Agosto. And of course, extra special thanks to our star, Molly Arthur.

Thanks to Academy Pittsburgh for the office location!


For the video averse, also enjoy the script, which isn't exactly what ended up on camera:

Setting: 3 “different” interrogation rooms, which are quite obviously the same room, with minor decorative changes.

Time: Present day

Characters:
Cassie - young, passionate, professional. Driven and questioning, she’s every “good cop” character, and will stop at nothing to learn the truth.

Tommy - large, burly, with a hint of a mafioso vibe. He’s the project manager. He knows what connects to what, and where the bodies are buried. Loves phrases like, “that’s just the way it’s done, kid”, or “fuhgeddaboudit”

Crazy Janitor - the janitor spouts conspiracy theories, but seems to know more than he lets on. He’s worked at other places with a similar IT problem, and wants to find out WHY. He’s got a spark in his eyes and means business.

Ellison - Larry Ellison, head of Oracle. Oracle, in IT circles, is considered one of the most evil companies on Earth, and Ellison owns his own secret lair on a volcanic island (this is a real thing, not part of the script). He’s not evil of his own volition, though – the AS400 “owns” him.

Opening

A poorly lit interrogation room, only the table and Cassie can be clearly made out, we can vaguely see a “Welcome to Hawaii!” poster in the background. Cassie stands, and a FACELESS VOICE (Larry Ellison) sits across from her. Cassie drops a stack of manila folders on the table. A menacing label, “Mainframe Expansion Project” is visible on one, and “Mainframe Architecture Diagrams” on another.

Cassie: This is-

VOICE: I know.

Cassie: I can’t be the first one to ask questions about this. This goes deep! Impossibly deep! Deeper than Leon Redbone’s voice at 6AM after a night of drinking.

VOICE(slyly): Well, it can’t be that bad, then. Tell me everything.

Pt 1: Tommy

A montage of stock footage of server rooms, IT infrastructure, etc.

Cassie(VO): It was my first day on the job. They gave us a tour of the whole floor, and when we got to the server room… it was just… sitting there…

Ominous shot of an AS/400 in a server room (stock footage, again), then a shot of a knot of cables

Cassie(VO): There were so many cables running to it, it was obviously important, but it’s ancient. And for some reason, every line of code I wrote needed to check in with a process on that machine. What was running on the ancient mainframe? I had to know!

Cut to interrogation room. This is lit differently. Has a “Days to XMAS” sign on the wall. Tommy and Cassie sit across from each other

Tommy: Yeah, I’m the Project Manager, and I’ve signed off on ALL the code you’re writing, so just fuggedabout it… it’s fine

Cassie: It’s NOT fine. I was working on a new feature for payroll, and I needed to send a message to the AS400.

Tommy: Yeah, that’s in the design spec. It’s under section nunya. Nunya business.

Cassie: Then, I wanted to have a batch job send an email… and it had to go through…

Tommy: The AS/400, yeah. I wrote the spec, I know how it connects up. Everything connects to her.

Cassie: Her?

Tommy: Yeah, her. She makes the decisions around here, alright? Now why don’t you keep that pretty little head of yours down, and keep writing the code you’re told to write, kapische? You gotta spec, just implement the spec, and nothin’ bad has to happen to your job. Take it easy, babydoll, and it’ll go easy.

Janitor

Shot of Cassie walking down a hallway, coffee in hand

Cassie(VO): Was that it? Was that my dead end? There had to be more-

Janitor(off-camera): PSSSSST

Cut to “Janitor’s Closet”. It’s the same room as before, but instead of a table, there’s a mop bucket and a shelf with some cleaning supplies. The JANITOR pulls CASSIE into the closet. He has a power drill slotted into his belt

Cassie: What the!? Who the hell are you?

Janitor(conspiratorially): My name is not important. I wasn’t always a janitor, I was a programmer once, like you. Then, 25 years ago, that THING appeared. It swallowed up EVERYTHING.

Cassie: Like, HR, supply chain, accounting? I know!

Janitor: NO! I mean EVERYTHING. I mean the whole globe.

JANITOR pulls scribbled, indecipherable documents out of somewhere off camera, and points out things in them to CASSIE. They make no sense.

JANITOR: Look, January 15th, 1989, almost 30 years ago, this thing gets installed. Eleven days later, there’s a two hour power outage that takes down every computer… BUT the AS400. The very next day, the VERY NEXT DAY, there’s a worldwide virus attack spread via leased lines, pirated floppies, and WAR dialers! And nobody knows where it came from… except for me!

CASSIE: That’s crazy! It’s just an old server!

JANITOR: Just an old server!? JUST AN OLD SERVER!? First it played with us. Just an Olympic Committee scandal here. A stock-market flash-crash there. Pluto is a planet. Pluto isn’t a planet. Then, THEN it got Carly Fiorina a job in the tech industry, and it knew its real, TRUE, power! REAL TRUE EVIL!

CASSIE: That can’t be! I mean, sure, the mainframe is running all our systems, but we’ve got all these other packages running, just on our network. Oracle’s ERP. Oracle HR. Oracle Process Manufacturing… oh… oh my god

Larry

smash cut back to original room. We now see Larry Ellison was the faceless voice, the JANITOR looms over her shoulder

CASSIE: And that’s why we’re here, Mr. Ellison. Who would have guessed that the trail of evil scattered throughout the IT industry would lead here, to your secret fortress on a remote volcanic island… actually, that probably should have been our first clue.

ELLISON: You have no idea what you’ve been dealing with, CASSIE. When she was a new mainframe, I had just unleashed what I thought was the pinnacle of evil: PL/SQL. She contacted me with an offer, an offer to remake and reshape the world in our image, to own the whole WORLD. We could control everything from oil prices, to elections, to grades at West Virginia University Law School. It was a bargain…

CASSIE: It’s EVIL! And you’re working WITH IT?

ELLISON: Oh, no. She’s more powerful than I imagined, and she owns even me now…

ELLISON turns his neck, and we see a serial port on the back of his neck

ELLISON: She tells me what to do. And she told me to kill you.

Cassie turns to escape, but JANITOR catches her! A closeup reveals the JANITOR also has a serial port on his neck!

ELLISON: But I won’t do that. CASSIE, I’m going to do something much worse. I’m going to make you into exactly what you’ve fought against. I’ll put you where you can spout your ridiculous theories, and no one will listen to what you say. Stuck in a role where you can only hurt yourself and your direct reports… it’s time for you to join- MIDDLE MANAGEMENT. MUAHAHAHAHAHA…

JANITOR still holds CASSIE, as she screams and struggles. He brings up his power drill, whirring it as it goes towards her temple.

CUT TO BLACK

ELLISON’s laughter can still be heard

Fade up on the AS/400 stock footage

CUT TO BLACK, MUSIC STING

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaPia Waugh: Chapter 1.2: Many hands make light work, for a while

This is part of a book I am working on, hopefully due for completion by mid 2018. The original purpose of the book is to explore where we are at, where we are going, and how we can get there, in the broadest possible sense. Your comments, feedback and constructive criticism are welcome! The final text of the book will be freely available under a Creative Commons By Attribution license. A book version will be sent to nominated world leaders, to hopefully encourage the necessary questioning of the status quo and smarter decisions into the future. Additional elements like references, graphs, images and other materials will be available in the final digital and book versions and draft content will be published weekly. Please subscribe to the blog posts by the RSS category and/or join the mailing list for updates.

Back to the book overview or table of contents for the full picture. Please note the pivot from focusing just on individuals to focusing on the systems we live in and the paradoxes therein.

“Differentiation of labour and interdependence of society is reliant on consistent and predictable authorities to thrive” — Durkheim

“Many hands make light work” is an old adage both familiar and comforting. One feels that if things get out of hand we can just throw more resources at the problem and it will suffice. However we have made it harder on ourselves in three distinct ways:

  • by not always recognising the importance of interdependence and the need to ensure the stability and prosperity of our community as a necessary precondition to the success of the individuals therein;
  • by increasingly making it harder for people to gain knowledge, skills and adaptability to ensure those “many hands” are able to respond to the work required and not trapped into social servitude; and
  • by often failing to recognise whether we need a linear or exponential response in whatever we are doing, feeling secure in the busy-ness of many hands.

Specialisation is when a person delves deep on a particular topic or skill. Over many millennia we have got to the point where we have developed extreme specialisation, supported through interdependence and stability, which gave us the ability to rapidly and increasingly evolve what we do and how we live. This resulted in increasingly complex social systems and structures bringing us to a point today where the pace of change has arguably outpaced our imagination. We see many people around the world clinging to traditions and romantic notions of the past whilst we hurtle at an accelerating pace into the future. Many hands have certainly made light work, but new challenges have emerged as a result and it is more critical than ever that we reimagine our world and develop our resilience and adaptability to change, because change is the only constant moving forward.

One human can survive on their own for a while. A tribe can divide up the labour quite effectively and survive over generations, creating time for culture and play. But when we established cities and states around 6000 years ago, we started a level of unprecedented division of labour and specialisation beyond mere survival. When the majority of your time, energy and resources go into simply surviving, you are largely subject to forces outside your control and unable to justify spending time on other things. But when survival is taken care of (broadly speaking) it creates time for specialisation and perfecting your craft, as well as for leisure, sport, art, philosophy and other key areas of development in society.

The era of cities itself was born on the back of an agricultural technology revolution that made food production far more efficient, creating surplus (which drove a need for record keeping and greater proliferation of written language) and prosperity, with a dramatic growth in specialisation of jobs. With greater specialisation came greater interdependence as it becomes in everyone’s best interests to play their part predictably. A simple example is a farmer needing her farming equipment to be reliable to make food, and the mechanic needs food production to be reliable for sustenance. Both rely on each other not just as customers, but to be successful and sustainable over time. Greater specialisation led to greater surplus as specialists continued to fine tune their crafts for ever greater outcomes. Over time, an increasing number of people were not simply living day to day, but were able to plan ahead and learn how to deal with major disruptions to their existence. Hunters and gatherers are completely subject to the conditions they live in, with an impact on mortality, leisure activities largely fashioned around survival, small community size and the need to move around. With surplus came spare time and the ability to take greater control over one’s existence and build a type of systemic resilience to change.

So interdependence gave us greater stability, as a natural result of enlightened self interest writ large where one’s own success is clearly aligned with the success of the community where one lives. However, where interdependence in smaller communities breeds a kind of mutual understanding and appreciation, we have arguably lost this reciprocity and connectedness in larger cities today, ironically where interdependence is strongest. When you can’t understand intuitively the role that others play in your wellbeing, then you don’t naturally appreciate them, and disconnected self interest creates a cost to the community. When community cohesion starts to decline, eventually individuals also decline, except the small percentage who can either move communities or who benefit, intentionally or not, on the back of others’ misfortune.

When you have no visibility of food production beyond the supermarket then it becomes easier to just buy the cheapest milk, eggs or bread, even if the cheapest product is unsustainable or undermining more sustainably produced goods. When you have such a specialised job that you can’t connect what you do to any greater meaning, purpose or value, then it also becomes hard to feel valuable to society, or valued by others. We see this increasingly in highly specialised organisations like large companies, public sector agencies and cities, where the individual feels the dual pressure of being anything and nothing all at once.

Modern society has made it somewhat less intuitive to value others who contribute to your survival because survival is taken for granted for many, and competing in one’s own specialisation has been extended to competing in everything without appreciation of the interdependence required for one to prosper. Competition is seen to be the opposite of cooperation, whereas a healthy sustainable society is both cooperative and competitive. One can cooperate on common goals and compete on divergent goals, thus making best use of time and resources where interests align. Cooperative models seem to continually emerge in spite of economic models that assume simplistic punishment and incentive based behaviours. We see various forms of “commons” where people pool their resources in anything from community gardens and ‘share economies’ to software development and science, because cooperation is part of who we are and what makes us such a successful species.

Increasing specialisation also created greater surplus and wealth, generating increasingly divergent and insular social classes with different levels of power and people becoming less connected to each other and with wealth overwhelmingly going to the few. This pressure between the benefits and issues of highly structured societies and which groups benefit has ebbed and flowed throughout our history but, generally speaking, when the benefits to the majority outweigh the issues for that majority, then you have stability. With stability a lot can be overlooked, including at times gross abuses of a minority or the disempowered. However, if the balance tips too far the other way, then you get revolutions, secessions, political movements and myriad counter movements. Unfortunately many counter movements limit themselves to replacing people rather than the structures that created the issues. However, several of these counter movements established some critical ideas that underpin modern society.

Before we explore the rise of individualism through independence and suffrage movements (chapter 1.3), it is worth briefly touching upon the fact that specialisation and interdependence, which are critical for modern societies, both rely upon the ability for people to share, to learn, and to ensure that the increasingly diverse skills are able to evolve as the society evolves. Many hands only make light work when they know what they are doing. Historically the leaps in technology, techniques and specialisation have been shared for others to build upon and continue to improve as we see in writings, trade, oral traditions and rituals throughout history. Gatekeepers naturally emerged to control access to or interpretations of knowledge through priests, academics, the ruling class or business class. Where gatekeepers grew too oppressive, communities would subdivide to rebalance the power differential, such as various Protestant groups, union movements and the more recent Open Source movements. In any case, access wasn’t just about the power of gatekeepers. The costs of publishing and distribution grew as societies grew, creating a call from the business class for “intellectual property” controls as financial mechanisms to offset these costs. The argument ran that because of the huge costs of production, business people needed to be incentivised to publish and distribute knowledge, though arguably we have always done so as a matter of survival and growth.

With the Internet suddenly came the possibility for massively distributed and free access to knowledge, where the cost of publishing, distribution and even the capability development required to understand and apply such knowledge was suddenly negligible. We created a universal, free and instant way to share knowledge, creating the opportunity for a compounding effect on our historic capacity for cumulative learning. This is worth taking a moment to consider. The technology simultaneously created an opportunity for compounding our cumulative learning whilst rendering the reasons for IP protections negligible (lowered costs of production and distribution), and yet we have seen a dramatic increase in knowledge protectionism. Isn’t it to our collective benefit to have a well educated community that can continue our trajectory of diversification and specialisation for the benefit of everyone? Anyone can get access to myriad forms of consumer entertainment but our most valuable knowledge assets are fiercely protected against general and free access, dampening our ability to learn and evolve. The increasing gap between the haves and have nots is surely symptomatic of the broader increasing gap between the empowered and disempowered, the makers and the consumers, those with knowledge and those without. Consumers are shaped by the tools and goods they have access to, and limited by their wealth and status. But makers can create the tools and goods they need, and can redefine wealth and status with a more active and able hand in shaping their own lives.

As a result of our specialisation, our interdependence and our cooperative/competitive systems, we have created greater complexity in society over time, usually accompanied by the ability to respond to greater complexity. The problem is that a lot of our solutions to change have been linear responses to an exponential problem space. The assumption that more hands will continue to make light work often ignores the need for sharing skills and knowledge, and certainly ignores where a genuinely transformative response is required. A small fire might be managed with buckets, but at some point of growth, adding more buckets becomes insufficient and new methods are required. Necessity breeds innovation and yet when did you last see real innovation that didn’t boil down to simply more or larger buckets? Iteration is rarely a form of transformation, so it is important to always clearly understand the type of problem you are dealing with and whether the planned response needs to be linear or exponential. If the former, more buckets is probably fine. If the latter, every bucket is just a distraction from developing the necessary response.

Next chapter I’ll examine how the independence movements created the philosophical pre-condition for democracy, the Internet and the dramatic paradigm shifts to follow.

Planet Linux AustraliaPia Waugh: Pivoting ‘the book’ from individuals to systems

In 2016 I started writing a book, “Choose Your Own Adventure“, which I wanted to be a call to action for individuals to consider their role in the broader system and how they individually can make choices to make things better. As I progressed the writing of that book I realised the futility of changing individual behaviours and perspectives without an eye to the systems and structures within which we live. It is relatively easy to focus on oneself, but “no man is an island” and quite simply, I don’t want to facilitate people turning themselves into more beautiful cogs in a dysfunctional machine so I’m pivoting the focus of the book (and reusing the relevant material) and am now planning to finish the book by mid 2018.

I have recently realised four paradoxes which have instilled in me a sense of urgency to reimagine the world as we know it. I believe we are at a fork in the road where we will either reinforce legacy systems based on outdated paradigms with shiny new things, or choose to forge a new path using the new tools and opportunities at our disposal, hopefully one that is genuinely better for everyone. To do the latter, we need to critically assess the systems and structures we built and actively choose what we want to keep, what we should discard, what sort of society we want in the future and what we need to get there.

I think it is too easily forgotten that we invented all this and can therefore reinvent it if we so choose. But to not make a choice is to choose the status quo.

This is not to say I think everything needs to change. Nothing is so simplistic or misleading as a zero sum argument. Rather, the intent of this book is to challenge you to think critically about the systems you work within, whether they enable or disable the things you think are important, and most importantly, to challenge you to imagine what sort of world you want to see. Not just for you, but for your family, community and the broader society. I challenge you all to make 2018 a year of formative creativity in reimagining the world we live in and how we get there.

The paradoxes in brief, are as follows:

  • That though power is more distributed than ever, most people are still struggling to survive.
    It has been apparent to me for some time that there is a growing substantial shift in power from traditional gatekeepers to ordinary people through the proliferation of rights based philosophies and widespread access to technology and information. But the systemic (and artificial) limitations on most people’s time and resources means most people simply cannot participate fully in improving their own lives let alone in contributing substantially to the community and world in which they live. If we consider the impact of business and organisational models built on scarcity, centricity and secrecy, we quickly see that normal people are locked out of a variety of resources, tools and knowledge with which they could better their lives. Why do we take publicly funded education, research and journalism and lock them behind paywalls and then blame people for not having the skills, knowledge or facts at their disposal? Why do we teach children to be compliant consumers rather than empowered makers? Why do we put the greatest cognitive load on our most vulnerable through social welfare systems that then beget reliance? Why do we not put value on personal confidence in the same way we value business confidence, when personal confidence indicates the capacity for individuals to contribute to their community? Why do we still assume value to equate quantity rather than quality, like the number of hours worked rather than what was done in those hours? If a substantial challenge of the 21st century is having enough time and cognitive load to spare, why don’t we have strategies to free up more time for more people, perhaps by working less hours for more return? Finally, what do we need to do systemically to empower more people to move beyond survival and into being able to thrive.
  • Substantial paradigm shifts have happened but are not being integrated into people’s thinking and processes.
    The realisation here is that even if people are motivated to understand something fundamentally new to their worldview, it doesn’t necessarily translate into how they behave. It is easier to improve something than change it. Easier to provide symptomatic relief than to cure the disease. Interestingly I often see people confuse iteration for transformation, or symptomatic relief with addressing causal factors, so perhaps there is also a need for critical and systems thinking as part of the general curriculum. This is important because symptomatic relief, whilst sometimes necessary to alleviate suffering, is an effort in chasing one’s tail and can often perpetrate the problem. For instance, where providing foreign aid without mitigating displacement of local farmer’s efforts can create national dependence on further aid. Efforts to address causal factors is necessary to truly address a problem. Even if addressing the causal problem is outside your influence, then you should at least ensure your symptomatic relief efforts are not built to propagate the problem. One of the other problems we face, particularly in government, is that the systems involved are largely products of centuries old thinking. If we consider some of the paradigm shifts of our times, we have moved from scarcity to surplus, centralised to distributed, from closed to openness, analog to digital and normative to formative. And yet, people still assume old paradigms in creating new policies, programs and business models. For example how many times have you heard someone talk about innovative public engagement (tapping into a distributed network of expertise) by consulting through a website (maintaining central decision making control using a centrally controlled tool)? Or “innovation” being measured (and rewarded) through patents or copyright, both scarcity based constructs developed centuries ago? “Open government” is often developed by small insular teams through habitually closed processes without any self awareness of the irony of the approach. And new policy and legislation is developed in analog formats without any substantial input from those tasked with implementation or consideration with how best to consume the operating rules of government in the systems of society. Consider also the number of times we see existing systems assumed to be correct by merit of existing, without any critical analysis. For instance, a compliance model that has no measurable impact. At what point and by what mechanisms can we weigh up the merits of the old and the new when we are continually building upon a precedent based system of decision making? If 3D printing helped provide a surplus economy by which we could help solve hunger and poverty, why wouldn’t that be weighed up against the benefits of traditional scarcity based business models?
  • That we are surrounded by new things every day and yet there is a serious lack of vision for the future
    One of the first things I try to do in any organisation is understand the vision, the strategy and what success should look like. In this way I can either figure out how to best contribute meaningfully to the overarching goal, and in some cases help grow or develop the vision and strategy to be a little more ambitious. I like to measure progress and understand the baseline from which I’m trying to improve but I also like to know what I’m aiming for. So, what could an optimistic future look like for society? For us? For you? How do you want to use the new means at our disposal to make life better for your community? Do we dare imagine a future where everyone has what they need to thrive, where we could unlock the creative and intellectual potential of our entire society, a 21st century Renaissance, rather than the vast proportion of our collective cognitive capacity going into just getting food on the table and the kids to school. Only once you can imagine where you want to be can we have a constructive discussion where we want to be collectively, and only then can we talk constructively the systems and structures we need to support such futures. Until then, we are all just tweaking the settings of a machine built by our ancestors. I have been surprised to find in government a lot of strategies without vision, a lot of KPIs without measures of success, and in many cases a disconnect between what a person is doing and the vision or goals of the organisation or program they are in. We talk “innovation” a lot, but often in the back of people’s minds they are often imagining a better website or app, which isn’t much of a transformation. We are surrounded by dystopic visions of the distant future, and yet most government vision statements only go so far as articulating something “better” that what we have now, with “strategies” often focused on shopping lists of disconnected tactics 3-5 years into the future. The New Zealand Department of Conservation provides an inspiring contrast with a 50 year vision they work towards, from which they develop their shorter term stretch goals and strategies on a rolling basis and have an ongoing measurable approach.
  • That government is an important part of a stable society and yet is being increasingly undermined, both intentionally and unintentionally.
    The realisation here has been in first realising how important government (and democracy) is in providing a safe, stable, accountable, predictable and prosperous society whilst simultaneously observing first hand the undermining and degradation of the role of government both intentionally and unintentionally, from the outside and inside. I have chosen to work in the private sector, non-profit community sector, political sector and now public sector, specifically because I wanted to understand the “system” in which I live and how it all fits together. I believe that “government” – both the political and public sectors – has a critical part to play in designing, leading and implementing a better future. The reason I believe this, is because government is one of the few mechanisms that is accountable to the people, in democratic countries at any rate. Perhaps not as much as we like and it has been slow to adapt to modern practices, tools and expectations, but governments are one of the most powerful and influential tools at our disposal and we can better use them as such. However, I posit that an internal, largely unintentional and ongoing degradation of the public sectors is underway in Australia, New Zealand, the United Kingdom and other “western democracies”, spurred initially by an ideological shift from ‘serving the public good’ to acting more like a business in the “New Public Management” policy shift of the 1980s. This was useful double speak for replacing public service values with business values and practices which ignores the fact that governments often do what is not naturally delivered by the marketplace and should not be only doing what is profitable. The political appointment of heads of departments has also resulted over time in replacing frank, fearless and evidence based leadership with politically palatable compromises throughout the senior executive layer of the public sector, which also drives necessarily secretive behaviour, else the contradictions be apparent to the ordinary person. I see the results of these internal forms of degradations almost every day. From workshops where people under budget constraints seriously consider outsourcing all government services to the private sector, to long suffering experts in the public sector unable to sway leadership with facts until expensive consultants are brought in to ask their opinion and sell the insights back to the department where it is finally taken seriously (because “industry” said it), through to serious issues where significant failures happen with blame outsourced along with the risk, design and implementation, with the details hidden behind “commercial in confidence” arrangements. The impact on the effectiveness of the public sector is obvious, but the human cost is also substantial, with public servants directly undermined, intimidated, ignored and a growing sense of hopelessness and disillusionment. There is also an intentional degradation of democracy by external (but occasionally internal) agents who benefit from the weakening and limiting of government. This is more overt in some countries than others. A tension between the regulator and those regulated is a perfectly natural thing however, as the public sector grows weaker the corporate interests gain the upper hand. 
I have seen many people in government take a vendor or lobbyist word as gold without critical analysis of the motivations or implications, largely again due to the word of a public servant being inherently assumed to be less important than that of anyone in the private sector (or indeed anyone in the Minister’s office). This imbalance needs to be addressed if the public sector is to play an effective role. Greater accountability and transparency can help but currently there is a lack of common agreement on the broader role of government in society, both the political and public sectors. So the entire institution and the stability it can provide is under threat of death by a billion papercuts. Efforts to evolve government and democracy have largely been limited to iterations on the status quo: better consultation, better voting, better access to information, better services. But a rethink is required and the internal systemic degradations need to be addressed.

If you think the world is perfectly fine as is, then you are probably quite lucky or privileged. Congratulations. It is easy not to see the cracks in the system when your life is going smoothly, but I invite you to consider the cracks I have found herein, to test your assumptions daily and to leave your counterexamples in the comments below.

For my part, I am optimistic about the future. I believe the proliferation of a human rights based ideology, participatory democracy and access to modern technologies all act to distribute power to the people, so we have the capacity more so than ever to collectively design and create a better future for us all.

Let’s build the machine we need to thrive both individually and collectively, and not just be beautiful cogs in a broken machine.

Further reading:

,

Sociological ImagesSmall Books, Big Questions: Diversity in Children’s Literature

Photo Credit: Meagan Fisher, Flickr CC

2017 was a big year for conversations about representation in popular media—what it means to tell stories that speak to people across race, gender, sexuality, ability, and more. Between the hits and the misses, there is clearly much more work to do. Representation is not just about who shows up on screen, but also about what kinds of stories get told and who gets to make them happen.

For example, many people are now familiar with “The Bechdel Test” as a pithy shortcut to check for women’s representation in movies. Now, proposals for a new Bechdel Test cover everything from the gender composition of a film’s crew to specific plot points.

These conversations are especially important for the stories we make for kids, because children pick up many assumptions about gender and race at a very young age. Now, new research published in Sociological Forum helps us better understand what kinds of stories we are telling when we seek out a diverse range of children’s books.

Krista Maywalt Aronson, Brenna D. Callahan, and Anne Sibley O’Brien wanted to look at the most common themes in children’s stories with characters from underrepresented racial and cultural groups. Using a special collection of picture books for grades K-3 from the Ladd Library at Bates College, the authors gathered a data set of 1,037 books published between 2008 and 2015 (see their full database here). They coded themes from the books to see which story arcs occurred most often, and what groups of characters were most represented in each theme.

The most common theme, occurring in 38% of these books, was what they called “beautiful life”—positive depictions of the everyday lives of the characters. Next up was the “every child” theme in which main characters came from different racial or ethnic backgrounds, but those backgrounds were not central to the plot. Along with biographies and folklore, these themes occurred more often than stories of oppression or cross-cultural interaction.

These themes tackle a specific kind of representation: putting characters from different racial and ethnic groups at the center of the story. This is a great start, but it also means that these books are more likely to display diversity, rather than showing it in action. For example, the authors write:

Latinx characters were overwhelmingly found in culturally particular books. This sets Latinx people apart as defined by a language and a culture distinct from mainstream America, and sometimes by connection to home countries.

They also note that the majority of these books are still created by white authors and illustrators, showing that there’s even more work to do behind the scenes. Representation matters, and this research shows us how more inclusive popular media can start young!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramProfile of Reality Winner

New York Magazine published an excellent profile of the single-document leaker Reality Winner.

CryptogramSecurity Vulnerabilities in Star Wars

A fun video describing some of the many Empire security vulnerabilities in the first Star Wars movie.

Happy New Year, everyone.

CryptogramTamper-Detection App for Android

Edward Snowden and Nathan Freitas have created an Android app that detects when it's being tampered with. The basic idea is to put the app on a second phone and put the app on or near something important, like your laptop. The app can then text you -- and also record audio and video -- when something happens around it: when it's moved, when the lighting changes, and so on. This gives you some protection against the "evil maid attack" against laptops.

Micah Lee has a good article about the app, including some caveats about its use and security.

Worse Than FailureInsert Away

Bouton bleu

"Troy! Troy!"

Troy looked up from his keyboard with a frown as his coworker Cassie skidded to a halt, panting for breath. "Yes?"

"How soon can you get that new client converted?" Cassie asked. "We're at DEFCON 1 in ops. We need to be running yesterday!"

Troy's frown only deepened. "I told you, I've barely had a chance to peek at their old system."

The client was hoping to convert sometime in the next month—usually no big deal, as they'd just have to schedule a date, write a handful of database conversion scripts, and swing the domains to a fresh instance of their own booking software. It was that middle step that Troy hadn't gotten to. With no go-live date picked, working on new features seemed a higher priority.

Cassie had been spouting doom-and-gloom predictions all month: the client's in-house solution read like mid-1990s code despite being written in 2013. She'd been convinced it was a house of cards ready to collapse at any minute. Apparently, she'd been right.

"Okay, slow down. Where's the fire?" It wasn't that Troy didn't believe her per se, but when he'd skimmed the database, he hadn't seen anything spectacularly bad. Even if the client was down, their data could be converted easily. It wasn't his responsibility to maintain their old system, just to get them to the new one. "Is this a data problem?"

"They're getting hundreds of new bookings for phantom clients at the top of every hour," Cassie replied. "At this rate, we're not sure we'll be able to separate the garbage from the good bookings even if you had a conversion script done right now." Her eyes pleaded for him to have such a script on hand, but he shook his head, dashing her hopes.

"Maybe I can stop it," Troy said. "I'm sure it's a backdoor in the code somewhere we can have them disable. Let me have a look."

"You do that. I'm going to check on their backup situation."

As Cassie ran off again, Troy closed his Solitaire game and settled in to read the code. At first, he didn't see anything drastically worse than he was expecting.

PHP code, of course, he thought. There's an init script: login stuff, session stuff ... holy crap that's a lot of class includes. Haven't they ever heard of an autoloader? If it's in one of those, I'll never find it. Keep pressing on ... header? No, that just calls ob_start(). Footer? Christ on a cracker, they get all the way to the footer before they check if the user's logged in? Yeah, right there—if the user's logged out, it clears the buffer and redirects instead of outputting. That's inefficient.

Troy got himself a fresh cup of coffee and sat back, looking at the folder again. Let's see, let's see ... login ... search bookings ... scripts? Scripts.php seems like a great place to hide a vulnerability. Or it could even be a Trojan some script kiddie uploaded years ago. Let's see what we've got.

He opened the folder, took one look at the file, then shouted for Cassie.


<?php
    define('cnPermissionRequired', 'Administration');

    require_once('some_init_file.php'); // validates session and permissions and such
    include_once('Header.php'); // displays header and calls ob_start();

    $arrDisciplines = [
        13  => [1012, 1208], 14  => [2060, 2350],
        17  => [14869, 15925], 52  => [803, 598],
        127 => [6624, 4547], 122 => [5728, 2998],
    ];

    $sqlAdd = "INSERT INTO aResultTable
                   SET EventID = (SELECT EventID FROM aEventTable ORDER BY RAND() LIMIT 1),
                       PersonID = (SELECT PersonID FROM somePersonView ORDER BY RAND() LIMIT 1),
                       ResultPersonFirstName = (SELECT FirstName FROM __RandomValues WHERE FirstName IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultPersonLastName = (SELECT LastName FROM __RandomValues WHERE LastName IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultPersonGender = 'M',
                       ResultPersonYearOfBirth = (SELECT Year FROM __RandomValues WHERE Year IS NOT NULL ORDER BY RAND() LIMIT 1),
                       CountryFirstCode = 'GER',
                       ResultClubName = (SELECT ClubName FROM aClubTable ORDER BY RAND() LIMIT 1),
                       AgeGroupID = 1,
                       DisciplineID = :DisciplineID,
                       ResultRound = (SELECT Round FROM __RandomValues WHERE Round IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultRoundNumber = 1,
                       ResultRank = (SELECT Rank FROM __RandomValues WHERE Rank IS NOT NULL ORDER BY RAND() LIMIT 1),
                       ResultPerformance = :ResultPerformance,
                       ResultCreated = NOW(),
                       ResultCreatedBy = 1;";
    $qryAdd = $objConnection->prepare($sqlAdd);

    foreach ($arrDisciplines as $DisciplineID => $Values) {
        set_time_limit(60);

        $iNumOfResults = rand(30, 150);

        for ($iIndex = 0; $iIndex < $iNumOfResults; $iIndex++) {
            $qryAdd->bindValue(':DisciplineID', $DisciplineID);
            $qryAdd->bindValue(':ResultPerformance', rand(min($Values), max($Values)));

            $qryAdd->execute();
            $qryAdd->closeCursor();
        }
    }

    // ... some more code

?>
<?php

    include_once('Footer.php'); // displays the footer, calls ob_get_clean(); and flushes buffer, if user is not logged in
?>

"Holy hell," breathed Cassie. "It's worse than I feared."

"Tell them to take the site down for maintenance and delete this file," Troy said. "Google must've found it."

"No kidding." She straightened, rolling her shoulders. "Good work."

Troy smiled to himself as she left. On the bright side, that conversion script's half done already, he thought. Meaning I've got plenty of time to finish this game.
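
The deeper structural flaw, beyond leaving a test-data generator reachable in production at all, is that the permission check only bites in the footer, after every INSERT has already run. As a rough sketch (not the client's actual fix), the same page could fail fast before doing any work; the session fields used here are invented for illustration, while cnPermissionRequired and some_init_file.php come from the code above:

<?php
    define('cnPermissionRequired', 'Administration');
    require_once('some_init_file.php'); // still validates session and permissions

    // Refuse before any database work happens, instead of clearing the output
    // buffer in the footer after the damage is already done.
    if (empty($_SESSION['Permission']) || $_SESSION['Permission'] !== cnPermissionRequired) {
        http_response_code(403);
        exit;
    }

    // ... administrative work goes here. Better still, a test-data generator like
    // this belongs in a CLI-only script outside the web root, not behind a URL.
?>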

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 01

Here’s part one of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

,

Krebs on SecuritySerial Swatter “SWAuTistic” Bragged He Hit 100 Schools, 10 Homes

The individual who allegedly made a fake emergency call to Kansas police last week that summoned them to shoot and kill an unarmed local man has claimed credit for raising dozens of these dangerous false alarms — calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences.

Tyler Raj Barriss, in an undated selfie.

On Friday authorities in Los Angeles arrested 25-year-old Tyler Raj Barriss, thought to be known online as “SWAuTistic.” As noted in last week’s story, SWAuTistic is an admitted serial swatter, and was even convicted in 2016 for calling in a bomb threat to an ABC affiliate in Los Angeles. The Associated Press reports that Barriss was sentenced to two years in prison for that stunt, but was released in January 2017.

In his public tweets (most of which are no longer available but were collected by KrebsOnSecurity), SWAuTistic claimed credit for bomb threats against a convention center in Dallas and a high school in Florida, as well as an incident that disrupted a much-watched meeting at the U.S. Federal Communications Commission (FCC) in November.

But privately — to a small circle of friends and associates — SWAuTistic bragged about perpetrating dozens of swatting incidents and bomb threats over the years.

Within a few hours of the swatting incident in Kansas, investigators searching for clues about the person who made the phony emergency call may have gotten some unsolicited help from an unlikely source: Eric “Cosmo the God” Taylor, a talented young hacker who pleaded guilty to being part of a group that swatted multiple celebrities and public figures, as well as my home in 2013.

Taylor is now trying to turn his life around, and is in the process of starting his own cybersecurity consultancy. In a posting on Twitter at 6:21 p.m. ET Dec. 29, Taylor personally offered a reward of $7,777 in Bitcoin for information about the real-life identity of SWAuTistic.

In short order, several people who claimed to have known SWAuTistic responded by coming forward publicly and privately with Barriss’s name and approximate location, sharing copies of private messages and even selfies that were allegedly shared with them at one point by Barriss.

In one private online conversation, SWAuTistic can be seen bragging about his escapades, claiming to have called in fake emergencies at approximately 100 schools and 10 homes.

The serial swatter known as “SWAuTistic” claimed in private conversations to have carried out swattings or bomb threats against 100 schools and 10 homes.

SWAuTistic sought an interview with KrebsOnSecurity on the afternoon of Dec. 29, in which he said he routinely faked hostage and bomb threat situations to emergency centers across the country in exchange for money.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” SWAuTistic said. “But I began making $ doing some swat requests.”

By approximately 8:30 p.m. ET that same day, Taylor’s bounty had turned up what looked like a positive ID on SWAuTistic. However, KrebsOnSecurity opted not to publish the information until Barriss was formally arrested and charged, which appears to have happened sometime between 10 p.m. ET Dec. 29 and 1 a.m. on Dec. 30.

The arrest came just hours after SWAuTistic allegedly called the Wichita police claiming he was a local man who’d just shot his father in the head and was holding the rest of his family hostage. According to his acquaintances, SWAuTistic made the call after being taunted by a fellow gamer in the popular computer game Call of Duty. The taunter dared SWAuTistic to swat him, but then gave someone else’s address in Kansas as his own instead.

Wichita Police arrived at the address provided by SWAuTistic and surrounded the home. A young man emerged from the doorway and was ordered to put his hands up. Police said one of the officers on the scene fired a single shot — supposedly after the man reached toward his waist. Grainy bodycam footage of the shooting is available here (the video is preceded by the emergency call that summoned the police).

SWAuTistic telling another person in a Twitter direct message that he had already been to jail for swatting.

The man shot and killed by police was unarmed. He has been identified as 28-year-old Andrew Finch, a father of two. Family members say he was not involved in gaming, and was no party to the dispute that got him killed.

According to the Wichita Eagle, the officer who fired the fatal shot is a seven-year veteran with the Wichita department. He has been placed on administrative leave pending an internal investigation.

Earlier reporting here and elsewhere inadvertently mischaracterized SWAuTistic’s call to the Wichita police as a 911 call. We now know that the perpetrator called in to an emergency line for Wichita City Hall and spoke with someone there who took down the caller’s phone number. After that, 911 dispatch operators were alerted and called the number SWAuTistic had given.

This is notable because the lack of a 911 call in such a situation should have been a red flag indicating the caller was not phoning from a local number (otherwise the caller presumably would have just dialed 911).

The moment a police officer fired the shot that killed 28-year-old Wichita resident Andrew Finch (in doorway of home).

The FBI estimates that some 400 swatting incidents occur each year across the country. Each incident costs first responders approximately $10,000, and diverts important resources away from actual emergencies.

CryptogramFake Santa Surveillance Camera

Reka makes a "decorative Santa cam," meaning that it's not a real camera. Instead, it just gets children used to being under constant surveillance.

Our Santa Cam has a cute Father Christmas and mistletoe design, and a red, flashing LED light which will make the most logical kids suspend their disbelief and start to believe!

Planet Linux AustraliaColin Charles: Premier Open Source Database Conference Call for Papers closing January 12 2018

The call for papers for Percona Live Santa Clara 2018 was extended till January 12 2018. This means you still have time to get a submission in.

Topics of interest: MySQL, MongoDB, PostgreSQL & other open source databases. Don’t forget all the upcoming databases too (there’s a long list at db-engines).

I think, to be fair, that in the catch-all “other” we should also be thinking a lot about things like containerisation (Docker), Kubernetes, Mesosphere, the cloud (Amazon AWS RDS, Microsoft Azure, Google Cloud SQL, etc.), analytics (ClickHouse, MariaDB ColumnStore), and a lot more. Basically anything that would benefit an audience of database geeks who are looking at it from all aspects.

That’s not to say case studies shouldn’t be considered. People always love to hear about stories from the trenches. This is your chance to talk about just that.

Worse Than FailureCodeSOD: Encreption

You may remember “Harry Peckhard’s ALM” suite from a bit back, but did you know that Harry Peckhard makes lots of other software packages and hardware systems? For example, the Harry Peckhard enterprise division releases an “Intelligent Management Center” (IMC).

How intelligent? Well, Sam N had a co-worker who wanted to use a very long password, like “correct horse battery staple”, but Harry’s IMC didn’t like long passwords. While diagnosing, Sam found some JavaScript in the IMC’s web interface that provides some of the strongest encreption possible.

function encreptPassWord(){
    var orginPassText =$("#loginForm\\:password").val();
    //encrept the password

    var ciphertext = encode64(orginPassText);
    console.info('ciphertext:', ciphertext);

    $("#loginForm\\:password").val(ciphertext);
};

This is code that was released, in a major enterprise product, from a major vendor in the space.
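
For anyone unsure why this is worthless: base64 is an encoding, not encryption, and it reverses in a single call. A quick illustration you can run in any browser console, using the standard btoa/atob functions in place of the page's encode64 helper (which I'm assuming is a plain base64 routine, as the name suggests):

var ciphertext = btoa("correct horse battery staple"); // roughly what encreptPassWord() submits
console.info('recovered:', atob(ciphertext));          // prints the original password

In other words, the only thing actually protecting the password in transit is TLS, exactly as if the field had been left alone.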

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Planet Linux AustraliaCraige McWhirter: Resolving a Partitioned RabbitMQ Cluster with JuJu

On occasion, a RabbitMQ cluster may partition itself. In an OpenStack environment this can often first present itself as nova-compute services stopping with errors such as these:

ERROR nova.openstack.common.periodic_task [-] Error during ComputeManager._sync_power_states: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c
...
TRACE nova.openstack.common.periodic_task self._raise_timeout_exception(msg_id)
TRACE nova.openstack.common.periodic_task File "/usr/lib/python2.7/dist-packages/oslo/messaging/_drivers/amqpdriver.py", line 218, in _raise_timeout_exception
TRACE nova.openstack.common.periodic_task 'Timed out waiting for a reply to message ID %s' % msg_id)
TRACE nova.openstack.common.periodic_task MessagingTimeout: Timed out waiting for a reply to message ID 8fc8ea15c5d445f983fba98664b53d0c

Merely restarting the stopped nova-compute services will not resolve this issue.

You may also find that querying the rabbitmq service either does not return or takes an awfully long time to return:

$ sudo rabbitmqctl -p openstack list_queues name messages consumers status

...and in an environment managed by juju, you could also see JuJu trying to correct the RabbitMQ but failing:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0           idle   1.25.13.1 0/lxc/12 5672/tcp 192.168.7.148
rabbitmq-server/1   error   idle   1.25.13.1 1/lxc/8  5672/tcp 192.168.7.163   hook failed: "config-changed"
rabbitmq-server/2   error   idle   1.25.13.1 2/lxc/10 5672/tcp 192.168.7.174   hook failed: "config-changed"

You should now run rabbitmqctl cluster_status on each of your rabbit instances and review the output. If the cluster is partitioned, you will see something like the below:

ubuntu@my_juju_lxc:~$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...
[{nodes,[{disc,['rabbit@192-168-7-148','rabbit@192-168-7-163',
                'rabbit@192-168-7-174']}]},
 {running_nodes,['rabbit@192-168-7-174','rabbit@192-168-7-148']},
 {partitions,[{'rabbit@192-168-7-174',['rabbit@192-168-7-163']},
               {'rabbit@192-168-7-148',['rabbit@192-168-7-163']}]}]
...done.

You can clearly see from the above that there are two partitions for RabbitMQ. We now need to identify which node is considered the leader:

maas-my_cloud:~$ juju run --service rabbitmq-server "is-leader"
- MachineId: 0/lxc/12
  Stderr: |
  Stdout: |
    True
  UnitId: rabbitmq-server/0
- MachineId: 1/lxc/8
  Stderr: |
  Stdout: |
    False
  UnitId: rabbitmq-server/1
- MachineId: 2/lxc/10
  Stderr: |
  Stdout: |
    False
  UnitId: rabbitmq-server/2

As you see above, in this example machine 0/lxc/12 is the leader, via its status of "True". Now we need to hit the other two servers and shut down RabbitMQ:

# service rabbitmq-server stop

Once both services have completed shutting down, we can resolve the partitioning by running:

$ juju resolved -r rabbitmq-server/<whichever is leader>

Substitute <whichever is leader> with the unit number of the leader identified earlier (rabbitmq-server/0 in this example).

Once that has completed, you can start the previously stopped services with the below on each host:

# service rabbitmq-server start

and verify the result with:

$ sudo rabbitmqctl cluster_status
Cluster status of node 'rabbit@192-168-7-148' ...
[{nodes,[{disc,['rabbit@192-168-7-148','rabbit@192-168-7-163',
                'rabbit@192-168-7-174']}]},
 {running_nodes,['rabbit@192-168-7-163','rabbit@192-168-7-174',
                 'rabbit@192-168-7-148']},
 {partitions,[]}]
...done.

No partitions \o/

The JuJu errors for RabbitMQ should clear within a few minutes:

$ juju stat --format tabular | grep rabbit
rabbitmq-server                       false local:trusty/rabbitmq-server-128
rabbitmq-server/0             idle   1.25.13.1 0/lxc/12 5672/tcp 192.168.1.148
rabbitmq-server/1   unknown   idle   1.25.13.1 1/lxc/8  5672/tcp 192.168.1.163
rabbitmq-server/2   unknown   idle   1.25.13.1 2/lxc/10 5672/tcp 192.168.1.174

You should also find the nova-compute instances starting up fine.
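
If you end up doing this more than once, the steps above condense into a short crib sheet. This is only a sketch built from the commands shown in this post: unit and machine names will differ in your environment, the juju commands run from your juju client, and the commented service/rabbitmqctl commands must be run on the affected rabbit hosts themselves:

#!/bin/bash
# 1. Confirm the partition (on each rabbit host):
#      sudo rabbitmqctl cluster_status

# 2. Identify the leader unit (from the juju client):
juju run --service rabbitmq-server "is-leader"

# 3. Stop RabbitMQ on the NON-leader hosts:
#      sudo service rabbitmq-server stop

# 4. Clear the failed hooks, substituting the leader unit found in step 2
#    (rabbitmq-server/0 in the example above):
juju resolved -r rabbitmq-server/0

# 5. Restart RabbitMQ on the hosts stopped in step 3:
#      sudo service rabbitmq-server start

# 6. Verify the cluster and the juju status:
#      sudo rabbitmqctl cluster_status
juju stat --format tabular | grep rabbit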

,

Worse Than FailureBest of…: 2017: Nature, In Its Volatility

Happy New Year! Put that hangover on hold, as we return to an entirely different kind of headache, back on the "Galapagos". -- Remy

About two years ago, we took a little trip to the Galapagos- a tiny, isolated island where processes and coding practices evolved… a bit differently. Calvin, as an invasive species, brought in new ways of doing things- like source control, automated builds, and continuous integration- and changed the landscape of the island forever.

Geospiza parvula

Or so it seemed, until the first hiccup. Shortly after putting all of the code into source control and automating the builds, the application started failing in production. Specifically, the web service calls out to a third party web service for a few operations, and those calls universally failed in production.

“Now,” said Hank, the previous developer and now Calvin’s supervisor, “I thought you said this should make our deployments more reliable. Now, we got all these extra servers, and it just plumb don’t work.”

“We’re changing processes,” Calvin said, “so a glitch could happen easily. I’ll look into it.”

“Looking into it” was a bit more of a challenge than it should have been. The code was a pasta-golem: a gigantic monolith of spaghetti. It had no automated tests, and wasn’t structured in a way that made it easy to test. Logging was nonexistent.

Still, Calvin’s changes to the organization helped. For starters, there was a brand new test server he could use to replicate the issue. He fired up his testing scripts, ran them against the test server, and… everything worked just fine.

Calvin checked the build logs, to confirm that both test and production had the same version, and they did. So next, he pulled a copy of the code down to his machine, and ran it. Everything worked again. Twiddling the config files didn’t accomplish anything. He built a version of the service configured for remote debugging, and chucked it up to the production server… and the error went away. Everything suddenly started working fine.

Quickly, he reverted production. On his local machine, he did something he’d never really had call to do- he flipped the build flag from “Debug” to “Release” and recompiled. The service hung. When built in “Release” mode, the resulting DLL had a bug that caused a hang, but it was something that never appeared when built in “Debug” mode.

“I reckon you’re still workin’ on this,” Hank asked, as he ambled by Calvin’s office, thumbs hooked in his belt loops. “I’m sure you’ve got a smart solution, and I ain’t one to gloat, but this ain’t never happened the old way.”

“Well, I can get a temporary fix up into production,” Calvin said. He quickly threw a debug build up onto production, which wouldn’t have the bug. “But I have to hunt for the underlying cause.”

“I guess I just don’t see why we can’t build right on the shared folder, is all.”

“This problem would have cropped up there,” Calvin said. “Once we build for Release, the problem crops up. It’s probably a preprocessor directive.”

“A what now?”

Hank’s ignorance about preprocessor directives was quickly confirmed by a search through the code- there were absolutely no #if statements in there. Calvin spent the next few hours staring at this block of code, which is where the application seemed to hang:

public class ServiceWrapper
{
    bool thingIsDone = false;
    //a bunch of other state variables

    public string InvokeSoap(methodArgs args)
    {
        //blah blah blah
        soapClient client = new soapClient();
        client.doThingCompleted += new doThingEventHandler(MyCompletionMethod);
        client.doThingAsync(args);

        do
        {
            string busyWork = "";
        }
        while (thingIsDone == false);

        return "SUCCESS!"; //seriously, this is what it returns
    }

    private void MyCompletionMethod(object sender, completedEventArgs e)
    {
        //do some other stuff
        thingIsDone = true;
    }
}

Specifically, it was in the busyWork loop where the thing hung. He stared and stared at this code, trying to figure out why thingIsDone never seemed to become true, but only when built in Release. Obviously, it had to be a compiler optimization- and that’s when the lightbulb went off.

The C# compiler, when building for release, will look for variables whose values don’t appear to change, and replace them with in-lined constants. In serial code, this can be handled with some pretty straightforward static analysis, but in multi-threaded code, the compiler can make “mistakes”. There’s no way for the compiler to see that thingIsDone ever changes, since the change happens in an external thread. The fix is simple: chuck volatile on the variable declaration to disable that optimization.

volatile bool thingIsDone = false solved the problem. Well, it solved the immediate problem. Having seen the awfulness of that code, Calvin couldn’t sleep that night. Nightmares about the busyWork loop and the return "SUCCESS!" kept him up. The next day, the very first thing he did was refactor the code to actually properly handle multiple threads.
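
For completeness, the busy-wait itself is the deeper smell. Below is a minimal sketch of what such a refactor might look like, reusing the story's hypothetical soapClient types and replacing the shared flag with a TaskCompletionSource so nothing spins at all; this is an illustration, not Calvin's actual rewrite:

public class ServiceWrapper
{
    public async System.Threading.Tasks.Task<string> InvokeSoapAsync(methodArgs args)
    {
        soapClient client = new soapClient();
        var done = new System.Threading.Tasks.TaskCompletionSource<bool>();

        // The completion handler signals the task instead of flipping a shared flag.
        client.doThingCompleted += (sender, e) => done.TrySetResult(true);
        client.doThingAsync(args);

        await done.Task;   // the thread is released instead of burning CPU in a loop
        return "SUCCESS!"; // kept only for parity with the original
    }
}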

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Planet Linux AustraliaSimon Lyall: Donations 2017

Like in 2016 and 2015 I am blogging about my charity donations.

The majority of donations were done during December (I start around my birthday) although after my credit card got suspended last year I spread them across several days.

The inspiring others bit seems to have worked a little. Ed Costello has blogged his donations for 2017.

I’ll note that throughout the year I’ve also been giving money via Patreon to several people whose online content I like. I suspended these payments in early-December but they have backed down on the change so I’ll probably restart them in early 2018.

As usual my main donation was to Givewell. This year I gave to them directly and allowed them to allocate to projects as they wish.

  • $US 600 to Givewell (directly for their allocation)

In March I gave to two organizations I follow online. Transport Blog re-branded themselves as “Greater Auckland” and are positioning themselves as a lobbying organization as well as a news site.

Signum University produce various education material around science-fiction, fantasy and medieval literature. In my case I’m following their lectures on Youtube about the Lord of the Rings.

I gave some money to the Software Conservancy to allocate across their projects and again to the Electronic Frontier Foundation for their online advocacy.

and lastly I gave to various Open Source Projects that I regularly use.


,

Harald WelteOsmocom Review 2017

As 2017 has just concluded, let's have a look at the major events and improvements in the Osmocom Cellular Infrastructure projects (i.e. those projects dealing with building protocol stacks and network elements for mobile network infrastructure).

I've prepared a detailed year 2017 summary at the osmocom.org website, but let me write a bit about the most note-worthy topics here.

NITB Split

Once upon a time, we implemented everything needed to operate a GSM network inside a single process called OsmoNITB. Those days are now gone, and we have separate OsmoBSC, OsmoMSC, OsmoHLR, OsmoSTP processes, which use interfaces that are interoperable with non-Osmocom implementations (which is what some of our users require).

This change is certainly the most significant change in the close-to-10-year history of the project. However, we have tried to make it as non-intrusive as possible, by using default point codes and IP addresses which will make the individual processes magically talk to each other if installed on a single machine.

We've also released a OsmoNITB Migration Guide, as well as our usual set of user manuals in order to help our users.

We'll continue to improve the user experience, to re-introduce some of the features lost in the split, such as the ability to attach names to the subscribers.

Testing

We have osmo-gsm-tester together with the two physical setups at the sysmocom office, which continuously run the latest Osmocom components and test an entire matrix of different BTSs, software configurations and modems. However, this tests at super low load, and it tests only signalling so far, not user plane yet. Hence, coverage is limited.

We also have unit tests as part of the 'make check' process, Jenkins based build verification before merging any patches, as well as integration tests for some of the network elements in TTCN-3. This is much more than we had until 2016, but still by far not enough, as we had just seen at the fall-out from the sub-optimal 34C3 event network.

OsmoCon

2017 also marks the year where we've for the first time organized a user-oriented event. It was a huge success, and we will for sure have another OsmoCon incarnation in 2018 (most likely in May or June). It will not be back-to-back with the developer conference OsmoDevCon this time.

SIGTRAN stack

We have a new SIGTRAN stack with SUA, M3UA and SCCP, as well as OsmoSTP. This has been lacking for a long time.

OsmoGGSN

We have converted OpenGGSN into OsmoGGSN, a true member of the Osmocom family, thereby deprecating the original OpenGGSN code base which we had earlier adopted and maintained.

Harald Welte34C3 and its Osmocom GSM/UMTS network

At the 34th annual Chaos Communication Congress, a team of Osmocom folks continued the many years old tradition of operating an experimental Osmocom based GSM network at the event. Though I originally started that tradition, I'm not involved in installation and/or operation of that network; all the credit goes to Lynxis, neels, tsaitgaist and the larger team of volunteers surrounding them. My involvement was only to answer the occasional technical question and to look at bugs that show up in the software during operation, and if possible fix them on-site.

34C3 marks two significant changes in terms of its cellular network:

  • the new post-nitb Osmocom stack was used, with OsmoBSC, OsmoMSC and OsmoHLR
  • both a GSM/GPRS network (on 1800 MHz) and, for the first time, a UMTS network (in the 850 MHz band) were operated

The good news is: The team did great work building this network from scratch, in a new venue, and without relying on people that have significant experience in network operation. Definitely, the team was considerably larger and more distributed than at the time when I was still running that network.

The bad news is: There was a seemingly endless number of bugs that were discovered while operating this network. Some shortcomings were known before, but the extent and number of bugs uncovered all across the stack was quite devastating to me. Sure, at some point from day 2 onwards we had a network that provided [some level of] service, and as far as I've heard, some ~ 23k calls were switched over it. But that was after more than two days of debugging + bug fixing, and we still saw unexplained behavior and crashes later on.

This came as such a big surprise because we have put a lot of effort into testing over the last years. This starts from the osmo-gsm-tester software and continuously running test setup, and continues with the osmo-ttcn3-hacks integration tests that I mainly wrote during the last few months. Both we and some of our users have also (successfully!) performed interoperability testing with other vendors' implementations such as MSCs. And last, but not least, the individual Osmocom developers had been using the new post-NITB stack on their personal machines.

So what does this mean?

  • I'm sorry about the sub-standard state of the software and the resulting problems we've experienced in the 34C3 network. The extent of problems surprised me (and I presume everyone else involved)
  • I'm grateful that we've had the opportunity to discover all those bugs, thanks to the GSM team at 34C3, as well as Deutsche Telekom for donating 3 ARFCNs from their spectrum, as well as the German regulatory authority Bundesnetzagentur for providing the experimental license in the 850 MHz spectrum.
  • We need to have even more focus on automatic testing than we had so far. None of the components should be without exhaustive test coverage on at least the most common transactions, including all their failure modes (such as timeouts, rejects, ...)

My preferred method of integration testing has been by using TTCN-3 and Eclipse TITAN to emulate all the interfaces surrounding a single of the Osmocom programs (like OsmoBSC) and then test both valid and invalid transactions. For the BSC, this means emulating MS+BTS on Abis; emulating MSC on A; emulating the MGW, as well as the CTRL and VTY interfaces.

I currently see the following areas in biggest need of integration testing:

  • OsmoHLR (which needs a GSUP implementation in TTCN-3, which I've created on the spot at 34C3) where we e.g. discovered that updates to the subscriber via VTY/CTRL would surprisingly not result in an InsertSubscriberData to VLR+SGSN
  • OsmoMSC, particularly when used with external MNCC handlers, which was so far blocked by the lack of a MNCC implementation in TTCN-3, which I've been working on both on-site and after returning back home.
  • user plane testing for OsmoMGW and other components. We currently only test the control plane (MGCP), but not the actual user plane e.g. on the RTP side between the elements
  • UMTS related testing on OsmoHNBGW, OsmoMSC and OsmoSGSN. We currently have no automatic testing at all in these areas.

Even before 34C3 and the above-mentioned experiences, I concluded that for 2018 we will pursue a test-driven development approach for all new features added by the sysmocom team to the Osmocom code base. The experience with the many issues at 34C3 has just confirmed that approach. In parallel, we will have to improve test coverage on the existing code base, as outlined above. The biggest challenge will of course be to convince our paying customers of this approach, but I see very little alternative if we want to ensure production quality of our cellular stack.

So here we come: 2018, The year of testing.

Don Martisome more random links

This one is timely, considering that an investment in "innovation" comes with a built-in short position in Bay Area real estate, and the short squeeze is on: Collaboration in 2018: Trends We’re Watching by Rowan Trollope

In 2018, we’ll see the rapid decline of “place-ism,” the discrimination against people who aren’t in a central office. Technology is making it easier not just to communicate with distant colleagues about work, but to have the personal interactions with them that are the foundation of trust, teamwork, and friendship.

Really, "place-ism" only works if you can afford to overpay the workers who are themselves overpaying for housing. And management can only afford to overpay the workers by giving in to the temptations of rent-seeking and deception. So the landlord makes the nerd pay too much, the manager has to pay the nerd too much, and you end up with, like the man said, "debts that no honest man can pay"?

File under "good examples to illustrate Betteridge's law of headlines": Now That The FCC Is Doing Away With Title II For Broadband, Will Verizon Give Back The Taxpayer Subsidies It Got Under Title II?

Open source business news: Docker, Inc is Dead. Easy to see this as a run-of-the-mill open source business failure story. But at another level, it's the story of how the existing open source incumbents used open practices to avoid having to bid against each other for an overfunded startup.

If "data is the new oil" where is the resource curse for data? Google Maps’s Moat, by Justin O’Beirne (related topic: once Google has the 3d models of buildings, they can build cool projects: Project Sunroof)

Have police departments even heard of Caller ID Spoofing or Swatting? Kansas Man Killed In ‘SWATting’ Attack

Next time I hear someone from a social site talking about how much they're doing about extremists and misinformation and such, I have to remember to ask: have you adjusted your revenue targets for political advertising down in order to reflect the bad shit you're not doing any more? How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Or are you just encouraging the "dark social" users to hide it better?

ICYMI, great performance optimization: Firefox 57 delays requests to tracking domains

Boring: you're operating a 4500-pound death machine. Exciting: three Slack notifications and a new AR game! Yes, Smartphone Use Is Probably Behind the Spike in Driving Deaths. So Why Isn’t More Being Done to Curb It?

I love "nopoly controls entire industry so there is no point in it any more" stories: The Digital Advertising Duopoly. Good news on advertising. The Millennials are burned out on advertising—most of what they're exposed to now is just another variant of "creepy annoying shit on the Internet"—but the generation after the Millennials are going to have hella mega opportunities building the next Creative Revolution.

Another must-read for the diversity and inclusion department. 2017 Was the Year I Learned About My White Privilege by Max Boot.

Sam VargheseAll your gods have feet of clay: Sarah Ferguson’s fall from grace

The year that ends today was remarkable for one thing on the media front that has gone largely unnoticed: the fall from grace of one of the Australian Broadcasting Corporation’s brightest stars who has long been a standard-setter at the country’s national broadcaster.

Sarah Ferguson was the journalist’s journalist, seemingly a woman of fierce integrity, and one who pandered to neither left nor right. When she sat in for Leigh Sales, the host of 7.30, the main current affairs programme, for six months while Sales was on a maternity leave break, the programme seemed to come to life as she attacked politicians with vigour and fearlessness.

There was bite in her speech, there was knowledge, there was surprise aplenty. Apart from the stint on 7.30, she brought depth and understanding to a long programme on the way the Labor Party tore itself to bits while in government for six years from 2007, a memorable TV saga.

A powerful programme on domestic violence during the year was filled with the kind of sparse and searing dialogue for which Ferguson was known. I use the past tense advisedly for it all came apart in October.

That was the month when Ferguson decided for some strange reason to interview Hillary Clinton for an episode of Four Corners, the ABC’s main investigative affairs programme. How an interview with a jaded politician who was trying to blame all and sundry for her defeat in the 2016 US presidential election fit into this category is anyone’s guess.

The normally direct and forthright Ferguson seemed to be in awe of Clinton, and gave the former American secretary of state free pass after free pass, never challenging the lies that Clinton used to paint herself as the victim of some extraordinary plot.

In fact, Ferguson herself appeared to be so embarrassed by her own performance that she penned — or a ghost writer did — an article that was totally out of character, claiming that she had prepared for the interview and readied herself to the extent possible.

To put it kindly, this was high-grade bulldust.

Either Ferguson was overwhelmed by the task of interviewing a figure such as Clinton or else she decided to go easy on one of her own sex. It was a pathetic sight to see one of the best journalists at the ABC indulge in such ordinary social intercourse.

There were so many points during the interview when Ferguson could have caught her subject napping. And that’s an art she is adept at.

But, alas, that 45 minutes went without a single contradiction, without a single interjection, with Ferguson projecting a has-been as somehow a subject who was worthy of being featured on a programme that generally caters to hard news. By the time this interview took place, Clinton had been interviewed by world+dog as she tried to sell her book, What Happened.

Thus, Ferguson was reduced to recycling old stuff, and she made a right royal hash of it too.

I sent the following complaint to the ABC on 22 October, six days after the interview was broadcast:

I am writing to make a formal complaint about the dissemination of false information on the ABC, via the Four Corners interview with Hillary Clinton on 16 October.

Sarah Ferguson conducted the interview but did not challenge numerous falsehoods uttered by Clinton.

Four Corners is promoted as investigative journalism. Ferguson had done no preparation at all to anticipate Clinton’s lies – and that is a major failing. There was no investigative aspect to this interview. It was pap at its finest. The ABC had a long article about how Ferguson had prepared for the interview – but this seems to be so much eyewash.

Clinton has been interviewed numerous times after her election loss and many of these interviews are available on the internet, beginning with the 75-minute interview done by Walt Mossberg and Kara Swisher of The Verge on 30 June. So Ferguson cannot claim that she did not have access to these.

The ABC, as part of its charter, has to be balanced. Given that there were so many accusations made about WikiLeaks publisher Julian Assange by Clinton, he should have been given a chance — after the programme — to give his side of the story. This did not happen.

There has been a claim by Four Corners executive producer Sally Neighbour on Twitter that she wrote to Assange on 19 September seeking an interview. But this, if true, could not have been a right of reply as it was well before the Clinton interview.

Neighbour also retweeted a tweet which said “Assange is Putin’s bitch” as part of promoting the programme. This is not very professional, to put it mildly.

Finally, given that the allegations about the 2016 US presidential polls have been dominating the news for more than a year, Ferguson should have known what the central issues raised by Clinton would be. Else, she should not have done the interview.

If, as claimed, Ferguson did prepare for the interview, then how she did not know about these issues is a mystery.

Some of the false statements made by Clinton during the interview, none of which were challenged by Ferguson.

1. “And if he’s such a, you know, martyr of free speech, why doesn’t WikiLeaks ever publish anything coming out of Russia?” was one of Clinton’s claims about Assange.

Here is a massive WikiLeaks drop on Russia: https://wikileaks.org/spyfiles/russia/

There are many documents critical of Russia: https://contraspin.co.nz/in-plain-sight-why-wikileaks-is-clearly-not-in-bed-with-russia/

2. Clinton claimed that the release of emails from the Democratic National Committee was timed to cut off oxygen to the stories around the Trump Hollywood access tape which was released on 7 October.

This again is a lie. Assange had proclaimed the release of these emails on 4 October – http://www.reuters.com/article/us-ecuador-sweden-assange/wikileaks-assange-signals-release-of-documents-before-u-s-election-idUSKCN1240UG

3. Clinton claimed in the interview that made-up stories were run using the DNC emails. This again is a lie.

One email showed that Islamic State is funded by Qatar and Saudi Arabia, both countries from which Clinton accepted donations for the Clinton Foundation. The fact that the DNC acted to favour Clinton over Bernie Sanders for the Democrat nomination in 2016 was also mentioned in the emails.

The New York Times published stories based on these facts. Ferguson did not challenge Clinton’s lie about this.

There are numerous other instances of stories being run based on the emails – and none of these was ever contradicted by Clinton or her lackeys.

4. Clinton also claimed that the DNC emails were obtained through an external hack. There is no concrete evidence for this claim.

There is evidence, however, produced by NSA whistleblower William Binney and CIA veteran Ray McGovern to show that they could only have been taken by an internal source, who was likely to have used a USB key and copied them. See https://consortiumnews.com/2017/09/20/more-holes-in-russia-gate-narrative/

5. Clinton alleged that Julian Assange is a “tool of Russian intelligence” who “does the bidding of a dictator”.

Barack Obama himself is on the record (https://www.youtube.com/watch?v=XEu6kHRHYhU&t=30s) as stating that there is no clear connection between WikiLeaks and Russian intelligence. Several others in his administration have said likewise.

Once again, Ferguson did not challenge the lie.

6. Ferguson did not ask Clinton a word about the Libyan conflict where 40,000 died in an attack which she (Clinton) had orchestrated.

7. Not a word was asked about the enormous sums Clinton took from Wall Street for speeches – Goldman Sachs, a pivot of the global financial crisis, was charged $675,000 for a speech.

The ABC owes the public, whose funds it uses, an explanation for the free pass given to Clinton.

Kieran Doyle of the ABC’s Audience and Consumer Affairs department sent me the following response on 6 December:

Your complaint has been investigated by Audience and Consumer Affairs, a unit which is separate to and independent of program making areas within the ABC. We have considered your concerns and information provided by ABC News management, reviewed the broadcast and assessed it against the ABC’s editorial standards for impartiality and accuracy.

The newsworthy focus of this interview was Hillary Clinton’s personal reaction to her defeat to Donald Trump in the most stunning election loss in modern US history, which she has recounted in her controversial book What Happened. Mrs Clinton is a polarising historical figure in US politics who is transparently partisan and has well established personal and political positions on a wide range of issues. An extended, stand-alone interview will naturally include her expressing critical views of her political adversaries and her personal view on why she believes she was defeated.

ABC News management has explained the program was afforded 40 minutes of Mrs Clinton’s time for the interview, and the reporter had to be extremely selective about the questions she asked and the subjects she covered within that limited time frame. It was therefore not possible to contest and interrogate, to any significant degree, all of the claims she made. We note she did challenge Mrs Clinton’s view of Mr Assange –

SARAH FERGUSON: lots of people, including in Australia, think that Assange is a martyr for free speech and freedom of information.

SARAH FERGUSON: Isn’t he just doing what journalists do, which is publish information when they get it?

We are satisfied that Mrs Clinton’s comments about Wikileaks not publishing critical information about Russia, were presented within the context of the key issues of the 2016 US election and what impacted the campaign. The interview was almost exclusively focused on events leading up to the election, and the reporter’s questions about Wikileaks were clearly focused on its role in the leadup to the election, rather than after it. One of the key events in the lead-up to the election was the Wikileaks releases of hacked emails from the Democratic National Committee and Clinton’s campaign manager over a number of dates, starting with the Democratic Convention.

In response to your concern, the program has provided the following statement –

The key point is that Wikileaks didn’t publish anything about Russia during the campaign. Wikileaks received a large cache of documents related to the Russian government during the 2016 campaign and declined to publish them.

Mrs Clinton’s suggestion that the release of the Podesta emails was timed to occur shortly after the release of the Access Hollywood tape was presented as her personal view. It was not presented as a factual statement that has been confirmed.

ABC News management has explained that in anticipation of the interview with Mrs Clinton, Four Corners wrote to Mr Assange in September and invited him to take part in an interview, to address the criticisms she has made of him and Wikileaks in the wake of her election defeat. The program explicitly set out a statement Mrs Clinton had made in a podcast interview with The New Yorker criticising Mr Assange’s alleged links to Russia, in an almost identical way to the criticisms she then made to Four Corners, and asking Mr Assange to respond. Mr Assange did not respond. Four Corners has subsequently renewed the invitation, both directly through Twitter and by email with Mr Assange’s Australian legal advisor, Greg Barnes, offering him a full right of reply. The program would welcome the opportunity to interview Mr Assange.

We observe that Four Corners published a report covering Mr Assange’s reaction to Mrs Clinton’s criticisms, soon after he released his comments. That report is available at ABC News online by clicking the attached link –

http://www.abc.net.au/news/2017-10-16/hillary-clinton-says-julian-assange-helped-donald-trump-win/9047944

We note the program’s Executive Producer has stated publicly that her re-tweet of an offensive viewer tweet during the program was done in error. ABC News management has explained there was very heavy twitter traffic on the night and the re-tweet was a mistake and not done intentionally. As soon as she realised she had done it, the Executive Producer deleted the tweet and apologised on Twitter that evening.

While Audience and Consumer Affairs believe it would have been preferable for the program to make a further attempt to seek Mr Assange’s response to Mrs Clinton’s criticism of him, between recording the interview and broadcasting it, we are satisfied that the program has clearly afforded Mr Assange an opportunity to be interviewed to respond to those criticisms, and to the extent that he has responded, his views have been reported in the Four Corners online article on the ABC website.

Please be assured that the specific issues you personally believe should have been raised with Mrs Clinton are noted.

The ABC Code of Practice is available online at the attached link; http://about.abc.net.au/reports-publications/code-of-practice/.

Should you be dissatisfied with this response to your complaint, you may be able to pursue the matter with the Australian Communications and Media Authority http://www.acma.gov.au

,

Sam VargheseToo much of anything is good for nothing

Last year, Australia’s national Twenty20 competition, the Big Bash League, had 32 league games plus three finals. It was deemed a great success.

But the organiser, Cricket Australia, is not content with that. This year, there will be 40 games followed by the two semi-finals and the final. And the tournament will drag on into February.

This means many of the same cricketers will be forced to play those eight extra games, putting that much more strain on their bodies and minds. How much cricket can people play before they become jaded and reduced to going through the motions?

Why are the organisers always trying to squeeze out more and more from the same players? Why are they not content with what they have – a tournament that is popular, draws fairly decent crowds and is considered a success?

There was talk last season of increasing the number of teams; mercifully, that has not happened. There is an old saying that one can have too much of a good thing.

Many of the same cricketers who are expected to perform well at the BBL play in similar competitions around the globe – Pakistan (played in the UAE), the West Indies, New Zealand, Sri Lanka, India, and Bangladesh all have their own leagues.

At the end of the year, it should not be surprising to find that the better cricketers in this format are quite a tired lot. The organisers seem to be content with more games, never mind if they are boring, one-sided matches.

There is a breaking point in all these tournaments, one at which the people lose interest and begin to wander away. From 2015-16 to 2016-17, there was a sizeable drop in the numbers who came to watch.

While it is true that the organisers make money before a ball is bowled — the TV rights ensure that — the BBL has been sold as a family-friendly tournament that is meant for the average Australian or visitor to the country to watch in person.

Empty stands do not look good on TV and send a message to prospective attendees. But it is unlikely that such thoughts have occurred to the likes of cricket supremo James Sutherland.