Planet Russell


Worse Than Failure: CodeSOD: The Strangelet Solution

Chris M works for a “solutions provider”. Mostly, this means taking an off-the-shelf product from Microsoft or Oracle or SAP and customizing it to fit a client’s specific needs. Since many of these clients have in-house developers, the handover usually involves training those developers up on the care and maintenance of the system.

Then, a year or two later, the client comes back, complaining about the system. “It’s broken,” or “performance is terrible,” or “we need a new feature”. Chris then goes back out to their office, and starts taking a look at what has happened to the code in his absence.

It’s things like this:

    var getAdjustType = Xbp.Page.getAttribute("cw_adjustmenttype").getText;

    var reasonCodeControl = Xbp.Page.getControl("cw_reasoncode");
    if (getAdjustType === "Short-pay/Applying Credit" || getAdjustType === "Refund/Return (Credit)") {
        var i;
        var options = (Xbp.Page.getAttribute("cw_reasoncode").getOptions());
        reasonCodeControl

        for (i = 0; i < options.length; i++) {
            if (i <= 4) {
                reasonCodeControl.removeOption(options[i].value);

            }
            if (i >= 5) {
                reasonCodeControl.clearOptions();

            }
            if (i >= 5) {
                reasonCodeControl.addOption(options[5]);
                reasonCodeControl.addOption(options[6]);
                reasonCodeControl.addOption(options[7]);
                reasonCodeControl.addOption(options[8]);
                reasonCodeControl.addOption(options[9]);
                reasonCodeControl.addOption(options[10]);
                reasonCodeControl.addOption(options[11]);
                reasonCodeControl.addOption(options[12]);
                reasonCodeControl.addOption(options[13]);
                reasonCodeControl.addOption(options[14]);
                reasonCodeControl.addOption(options[15]);
                reasonCodeControl.addOption(options[16]);
                reasonCodeControl.addOption(options[17]);
                reasonCodeControl.addOption(options[18]);
                reasonCodeControl.addOption(options[19]);
                reasonCodeControl.addOption(options[20]);
                reasonCodeControl.addOption(options[21]);


            }
        }
    }
    else {
        var options = (Xbp.Page.getAttribute("cw_reasoncode").getOptions());
        for (var i = 0; i < options.length; i++) {
            if (i >= 4) {
                reasonCodeControl.removeOption(options[i].value);

            }
            if (i <= 4) {
                reasonCodeControl.clearOptions();

            }
            if (i <= 4) {
                reasonCodeControl.addOption(options[0]);
                reasonCodeControl.addOption(options[1]);
                reasonCodeControl.addOption(options[2]);
                reasonCodeControl.addOption(options[3]);
                reasonCodeControl.addOption(options[4]);

            }
        }
    }

There are patterns and there are anti-patterns, like there is matter and anti-matter. An anti-pattern would be the “switch loop”, where you have different conditional branches that execute depending on how many times the loop has run. And then there’s this, which is superficially similar to the “switch loop” anti-pattern, but confused. Twisted, with conditional branches that execute on the same condition. It may have once been an anti-pattern, but now it’s turned into a strange pattern, and like strange matter threatens to turn everything it touches into more of itself.
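For contrast, here is one guess at what the code is actually trying to do: show the later reason codes for credit-type adjustments, and the first five for everything else. This is only a sketch; it assumes the Xbp control API behaves the way the snippet uses it (getOptions, clearOptions, addOption), that getText was meant to be called rather than merely referenced, and that the option list has the shape the original indexes imply:

    var adjustType = Xbp.Page.getAttribute("cw_adjustmenttype").getText();
    var reasonCodeControl = Xbp.Page.getControl("cw_reasoncode");
    var options = Xbp.Page.getAttribute("cw_reasoncode").getOptions();

    var isCredit = adjustType === "Short-pay/Applying Credit" ||
                   adjustType === "Refund/Return (Credit)";

    // Keep options 5 and up for credit adjustments, options 0 through 4 otherwise.
    var keep = isCredit ? options.slice(5) : options.slice(0, 5);

    reasonCodeControl.clearOptions();
    keep.forEach(function (option) {
        reasonCodeControl.addOption(option);
    });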


Valerie Aurora: Repealing Obamacare will repeal my small business

I emailed this to the U.S. Senate Finance Committee today in response to the weekly Wall-of-Us email call-to-action, and thought it would fit on my blog as well.

Hello,

I am a small business owner with a pre-existing condition who can’t go without health insurance for even one month. The Affordable Care Act made my small business possible. If ACA is repealed or replaced, I will be forced to go out of business.

Two years ago, I started my own business, Frame Shift Consulting, teaching technology companies how to improve diversity and inclusion. I also have a genetic disease called Ehlers-Danlos Syndrome. If I take about ten prescription drugs every day, see several medical professionals regularly, and exercise carefully, I can live a semi-normal life and even work full-time if I don’t have to go to an office every day. Without access to prescription drugs and medical care, I would be unable to work full-time or even care for myself, and would have to go on disability, SSDI.

Before the Affordable Care Act, no health insurance company would sell me a policy on the individual market. My only option was to get a salaried job at a company large enough to offer health insurance to their employees. If I lost my job, I could buy one or two coverage options under COBRA or HIPAA, but I was always just one missed payment away from losing my access to health insurance at any price. (I once tried to apply for health insurance on the open market; after two questions about my medical history they told me I’d never get approved.) The ACA let me quit my job and start my own small business free from fear of losing my health insurance and becoming unable to work.

At my new small business, I am doing far more innovative and valuable work than I ever did for a big company. I love being my own boss, and the flexibility I have makes it far easier to cope with the bad days of Ehlers-Danlos Syndrome. I love how high impact my work is, and that I am training other people to do the same work. I could never have done work that changed so many people’s lives for the better while working at any other company.

Every time I hear about a new bill to repeal or replace the ACA, I study it to see whether I would still be able to afford health insurance under the new system. So far, the answer has been a resounding no. Without the individual mandate, coverage for pre-existing conditions, price controls, and minimum coverage requirements that states can’t waive, no health insurance company would offer me an individual policy at a price I can afford.

I’m one of the luckier ones; if the ACA is repealed or replaced and I lose my health insurance, I can probably get a salaried job at a big company with health insurance benefits. I don’t expect anyone to care about my personal satisfaction in doing work I love, or having the flexibility to stay home when my Ehlers-Danlos is acting up. But I do expect my elected representatives to care that a cutting edge, high-impact small business would go out of business if they passed Graham-Cassidy or any other repeal or replace bill. The ACA is good for business, good for innovation, and good for people. Instead of replacing it with an inferior system that would cover fewer people for more money, let’s work on improving the ACA and filling in the many gaps in its coverage.

Thank you for your time,

Valerie Aurora
Proud small business owner




Planet Linux Australia: OpenSTEM: What Makes Humans Different From Most Other Mammals?

Well, there are several things that make us different from other mammals – although perhaps fewer than one might think. We are not unique in using tools; in fact, we discover more animals that use tools all the time – even fish! We pride ourselves on being a “moral animal”, however fairness, reciprocity, empathy and […]

Planet Debian: Enrico Zini: Systemd service units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.service units

Describe how to start and stop system services, like daemons.

Services are described in a [Service] section. The Type= configuration describes how the service wants to be brought up:

  • Type=simple: a program is started and runs forever, providing the service. systemd takes care of daemonizing it properly in the background, creating a pidfile, stopping it and so on. The service is considered active as soon as it has started.
  • Type=forking: a traditional daemon that forks itself, creates a pidfile and so on. The service is considered active as soon as the parent process ends.
  • Type=oneshot: a program is run once, and the service is considered started after the program ends. This can be used, for example, to implement a service to do one-off configuration, like checking a file system.
  • Type=dbus: like simple but for D-Bus services: the service is considered active as soon as it appears on the D-Bus bus.
  • Type=notify: like simple, but the service tells systemd when it has finished initialization and is ready. Notification can happen via the sd_notify C function, or the systemd-notify command.
  • Type=idle: like simple, but it is run after all other services have been started in a transaction. You can use this, for example, to start a shell on a terminal after the boot, so that the prompt doesn't get flooded with boot messages, or to play a happy trumpet sound after the system has finished booting. (A minimal example unit follows this list.)
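To make this concrete, here is a minimal unit file for a Type=simple service; the unit name, description, binary path and options are invented for the example:

    [Unit]
    Description=Example echo server
    After=network.target

    [Service]
    Type=simple
    ExecStart=/usr/local/bin/echo-server --port 7777
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target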

There are a lot more configuration options to fine-tune how the program should be managed, to limit its resource access or capabilities to harden system security, to run setup/cleanup scripts before it starts and after it stops, to control what signals to send to ask for reload or quit, and quite a lot more.

See: man systemd.service, man systemd.exec, man systemd.resource-control, and man systemd.kill.

See systemctl --all -t service for examples.

Planet Debian: Julian Andres Klode: APT 1.5 is out

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months after the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was the release series for both stretch and zesty, and we waited for the latter of these releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty.

This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for autodetectproxy scripts that return http, https, and socks5h proxies for both http and https.

Unattended updates and upgrades now work better: The dependency on network-online was removed and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for the network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether the update ran before or after the network was back up again). This also improves a boot performance regression for systems with rc.local files:

The rc.local.service unit specified After=network-online.target, and login stuff was After=rc.local.service, and apt-daily.timer was Wants=network-online.target, causing network-online.target to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot.

An earlier, less intrusive variant of that fix is in 1.4.8: It just moves the network-online.target Want/After from apt-daily.timer to apt-daily.service so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team.

Balint Reczey also provided a patch to increase the timeout before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I’d have thought the machine had hung and force-rebooted it after 5 seconds already. (This patch is also in 1.4.8.)

We also made sure that unreadable config files no longer cause an error, but only a warning, as that was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred way to place sensitive data like passwords (and can make their sources.list files world-readable again).

We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices which was accidentally broken for 4 years, as it was trying to load libudev.so.0 at runtime, but that library had an SONAME change to libudev.so.1 – we now link against it normally.

Furthermore, if certain information in Release files change, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable.

Paul Wise contributed patches to allow configuring the apt-daily intervals more easily – apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in seconds, minutes, hours, and day units, or specify “always” to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms).
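As a sketch, the resulting configuration might look like this; the APT::Periodic option names are the long-standing ones, but treat the exact value spellings here as an assumption rather than something taken from the release notes:

    // /etc/apt/apt.conf.d/10periodic (illustrative)
    APT::Periodic::Update-Package-Lists "1d";       // refresh package lists once a day
    APT::Periodic::Unattended-Upgrade "12h";        // attempt upgrades every twelve hours
    APT::Periodic::Download-Upgradeable-Packages "always";  // run on every invocation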

Development for the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small clean-ups in there, but I don’t expect any life-changing changes for now.

I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: These labels basically just indicate feature-completeness, and not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs.

Also, we now have 3 active stable series: The 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial), 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky being shared by stretch and zesty right now (but zesty is history soon, so …).



Planet Debian: Dirk Eddelbuettel: RcppGSL 0.3.3

A maintenance update, RcppGSL 0.3.3, is now on CRAN. It switched the vignette to our new pinp package and its two-column pdf default.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.3 (2017-09-24)

  • We also check for gsl-config at package load.

  • The vignette now uses the pinp package in two-column mode.

  • Minor other fixes to package and testing infrastructure.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on Security: Equifax or Equiphish?

More than a week after it said most people would be eligible to enroll in a free year of its TrustedID identity theft monitoring service, big three consumer credit bureau Equifax has begun sending out email notifications to people who were able to take the company up on its offer. But in yet another security stumble, the company appears to be training recipients to fall for phishing scams.

Some people who signed up for the service after Equifax announced Sept. 7 that it had lost control over Social Security numbers, dates of birth and other sensitive data on 143 million Americans are still waiting for the promised notice from Equifax. But as I recently noted on Twitter, other folks have received emails from Equifax over the past few days, and the messages do not exactly come across as having emanated from a company that cares much about trying to regain the public’s trust.

Here’s a redacted example of an email Equifax sent out to one recipient recently:

[Image: redacted copy of the Equifax notification email]

As we can see, the email purports to have been sent from trustedid.com, a domain that Equifax has owned for almost four years. However, Equifax apparently decided it was time for a new — and perhaps snazzier — name: trustedidpremier.com.

The above-pictured message says it was sent from one domain, and then asks the recipient to respond by clicking on a link to a completely different (but confusingly similar) domain.

My guess is the reason Equifax registered trustedidpremier.com was to help people concerned about the breach to see whether they were one of the 143 million people affected (for more on how that worked out for them, see Equifax Breach Response Turns Dumpster Fire). I’d further surmise that Equifax was expecting (and received) so much interest in the service as a result of the breach that all the traffic from the wannabe customers might swamp the trustedid.com site and ruin things for the people who were already signed up for the service before Equifax announced the breach on Sept. 7.

The problem with this dual-domain approach is that the domain trustedidpremier.com is only a few weeks old, so it had very little time to establish itself as a legitimate domain. As a result, in the first few hours after Equifax disclosed the breach the domain was actually flagged as a phishing site by multiple browsers because it was brand new and looked about as professionally designed as a phishing site.

What’s more, there is nothing tying the domain registration records for trustedidpremier.com to Equifax: The domain is registered to a WHOIS privacy service, which masks information about who really owns the domain (again, not exactly something you might expect from an identity monitoring site). Anyone looking for assurances that the site perhaps was hosted on Internet address space controlled by and assigned to Equifax would also be disappointed: The site is hosted at Amazon.

While there’s nothing wrong with that exactly, one might reasonably ask: Why didn’t Equifax just send the email from Equifax.com and host the ID theft monitoring service there as well? Wouldn’t that have considerably lessened any suspicion that this missive might be a phishing attempt?

Perhaps, but you see while TrustedID is technically owned by Equifax Inc., its services are separate from Equifax and its terms of service are different from those provided by Equifax (almost certainly to separate Equifax from any consumer liability associated with its monitoring service).

THE BACKSTORY

What’s super-interesting about trustedid.com is that it didn’t always belong to Equifax. According to the site’s Wikipedia page, TrustedID Inc. was purchased by Equifax in 2013, but it was founded in 2004 as an identity protection company which offered a service that let consumers automatically “freeze” their credit file at the major bureaus. A freeze prevents Equifax and the other major credit bureaus from selling an individual’s credit data without first getting consumer consent.

By 2006, some 17 states offered consumers the ability to freeze their credit files, and the credit bureaus were starting to see the freeze as an existential threat to their businesses (in which they make slightly more than a dollar each time a potential creditor — or ID thief — asks to peek at your credit file).

Other identity monitoring firms — such as LifeLock — were by then offering services that automated the placement of identity fraud controls — such as the “fraud alert,” a free service that consumers can request to block creditors from viewing their credit files.

[Author’s note: Fraud alerts only last for 90 days, although you can renew them as often as you like. More importantly, while lenders and service providers are supposed to seek and obtain your approval before granting credit in your name if you have a fraud alert on your file, they are not legally required to do this — and very often don’t.]

Anyway, the era of identity monitoring services automating things like fraud alerts and freezes on behalf of consumers effectively died after a landmark lawsuit filed by big-three bureau Experian (which has its own storied history of data breaches). In 2008, Experian sued LifeLock, arguing its practice of automating fraud alerts violated the Fair Credit Reporting Act.

In 2009, a court found in favor of Experian, and that decision effectively killed such services — mainly because none of the banks wanted to distribute them and sell them as a service anymore.

WHAT SHOULD YOU DO

These days, consumers in all states have a right to freeze their credit files, and I would strongly encourage all readers to do this. Yes, it can be a pain, and the bureaus certainly seem to be doing everything they can at the moment to make this process extremely difficult and frustrating for consumers. As detailed in the analysis section of last week’s story — Equifax Breach: Setting the Record Straight — many of the freeze sites are timing out, crashing or telling consumers just to mail in copies of identity documents and printed-out forms.

Other bureaus, like TransUnion and Experian, are trying mightily to steer consumers away from a freeze and toward their confusingly named “credit lock” services — which claim to be the same thing as freezes only better. The truth is these lock services do not prevent the bureaus from selling your credit reports to anyone who comes asking for them (including ID thieves); and consumers who opt for them over freezes must agree to receive a flood of marketing offers from a myriad of credit bureau industry partners.

While it won’t stop all forms of identity theft (such as tax refund fraud or education loan fraud), a freeze is the option that puts you the consumer in the strongest position to control who gets to monkey with your credit file. In contrast, while credit monitoring services might alert you when someone steals your identity, they’re not designed to prevent crooks from doing so.

That’s not to say credit monitoring services aren’t useful: They can be helpful in recovering from identity theft, which often involves a tedious, lengthy and expensive process for straightening out the phony activity with the bureaus.

The thing is, it’s almost impossible to sign up for credit monitoring services while a freeze is active on your credit file, so if you’re interested in signing up for them it’s best to do so before freezing your credit. But there’s no need to pay for these services: Hundreds of companies — many of which you have probably transacted with at some point in the last year — have disclosed data breaches and are offering free monitoring. California maintains one of the most comprehensive lists of companies that disclosed a breach, and most of those are offering free monitoring.

There’s a small catch with the freezes: Depending on the state in which you live, the bureaus may each be able to charge you for freezing your file (the fee ranges from $5 to $20); they may also be able to charge you for lifting or temporarily thawing your file in the event you need access to credit. Consumers Union has a decent rundown of the freeze fees by state.

In short, sign up for whatever free monitoring is available if that’s of interest, and then freeze your file at the four major bureaus. You can do this online, by phone, or through the mail. Given how unreliable the credit bureau Web sites have been for placing freezes these past few weeks, it may be easiest to do this over the phone. Here are the freeze Web sites and freeze phone numbers for each bureau (note the phone procedures can and likely will change as the bureaus get wise to more consumers learning how to quickly step through their automated voice response systems):

Equifax: 866-349-5191; choose option 3 for a “Security Freeze”

Experian: 888-397-3742;
–Press 2 “To learn about fraud or ADD A SECURITY FREEZE”
–Press 2 “for security freeze options”
–Press 1 “to place a security freeze”
–Press 2 “…for all others”
–enter your info when prompted

Innovis: 800-540-2505;
–Press 1 for English
–Press 3 “to place or manage an active duty alert or a SECURITY FREEZE”
–Press 2 “to place or manage a SECURITY FREEZE”
–enter your info when prompted

Transunion: 888-909-8872, choose option 3

If you still have questions about freezes, fraud alerts, credit monitoring or anything else related to any of the above, check out the lengthy primer/Q&A I published here on Sept. 11, The Equifax Breach: What You Should Know.

Planet Linux Australia: Dave Hall: Drupal Puppies

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved on to using installation profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions: starter kits and targeted products.

Targeted products are easier to deal with. Increasingly, monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distros demonstrates a commitment to the business model. Often the secret sauce isn't in the code, it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

Planet Debian: Petter Reinholdtsen: Easier recipe to observe the cell phones around you

A little more than a month ago I wrote how to observe the SIM card ID (aka IMSI number) of mobile phones talking to nearby mobile phone base stations using Debian GNU/Linux and a cheap USB software defined radio, and thus being able to pinpoint the location of people and equipment (like cars and trains) with an accuracy of a few kilometers. Since then we have worked to make the procedure even simpler, and it is now possible to do this without any manual frequency tuning and without building your own packages.

The gr-gsm package is now included in Debian testing and unstable, and the IMSI-catcher code no longer requires root access to fetch and decode the GSM data collected using gr-gsm.

Here is an updated recipe, using packages built by Debian and a git clone of two python scripts:

  1. Start with a Debian machine running the Buster version (aka testing).
  2. Run 'apt install gr-gsm python-numpy python-scipy python-scapy' as root to install required packages.
  3. Fetch the code decoding GSM packets using 'git clone https://github.com/Oros42/IMSI-catcher.git'.
  4. Insert USB software defined radio supported by GNU Radio.
  5. Enter the IMSI-catcher directory and run 'python scan-and-livemon' to locate the frequency of nearby base stations and start listening for GSM packets on one of them.
  6. Enter the IMSI-catcher directory and run 'python simple_IMSI-catcher.py' to display the collected information.

Note, due to a bug somewhere the scan-and-livemon program (actually its underlying program grgsm_scanner) does not work with the HackRF radio. It does work with RTL2832-based and other similar USB radio receivers you can get very cheaply (for example from ebay), so for now the solution is to scan using the RTL radio and only use HackRF for fetching GSM data.

As far as I can tell, a cell phone only shows up on one of the frequencies at a time, so if you are going to track and count every cell phone around you, you need to listen to all the frequencies used. To listen to several frequencies, use the --numrecv argument to scan-and-livemon to use several receivers. Further, I am not sure if phones using 3G or 4G will show up as talking GSM to base stations, so this approach might not see all phones around you. I typically see 0-400 IMSI numbers an hour when looking around where I live.
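For example, to listen with two receivers (the flag is described above; the exact invocation is my assumption):

    python scan-and-livemon --numrecv 2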

I've tried to run the scanner on a Raspberry Pi 2 and 3 running Debian Buster, but the grgsm_livemon_headless process seems to be too CPU intensive to keep up. When GNU Radio prints 'O' to stdout, I am told it is caused by a buffer overflow between the radio and GNU Radio, caused by the program being unable to read the GSM data fast enough. If you see a stream of 'O's from the terminal where you started scan-and-livemon, you need to give the process more CPU power. Perhaps someone is able to optimize the code to a point where it becomes possible to set up RPi3-based GSM sniffers? I tried using Raspbian instead of Debian, but there seems to be something wrong with GNU Radio on Raspbian, causing glibc to abort().

Planet Debian: Iain R. Learmonth: Onion Services

In the summer 2017 edition of 2600 magazine there is a brilliant article on running onion services as part of a series on censorship resistant services. Onion services provide privacy and security for readers above that which is possible through the use of HTTPS.

Since moving my website to Netlify, my onion service died as Netlify doesn’t provide automatic onion services (although they do offer automated Let’s Encrypt certificate provisioning). If anyone from Netlify is reading this, please consider adding a one-click onion service button next to the Let’s Encrypt button.

For now though, I have my onion service hosted elsewhere. I’ve got a regular onion service (version 2) and also now a next generation onion service (version 3). My setup works like this:

  • A cronjob polls my website’s git repository that contains a Hugo static site
  • Two versions of the site are built with different base URLs set in the Hugo configuration, one for the regular onion service domain and one for the next generation onion service domain
  • Apache is configured for two virtual hosts, one for each domain name
  • tor from the Debian archives is configured for the regular onion service
  • tor from git (to have next generation onion service support) is configured for the next generation onion service (a rough torrc sketch for both daemons follows this list)
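The torrc side of that setup might look roughly like the following fragments, one per daemon; the directories are placeholders, and the HiddenServicePort lines assume Apache listens on 127.0.0.1:80:

    # torrc for the Debian-packaged tor (version 2 onion service)
    HiddenServiceDir /var/lib/tor/onion-v2/
    HiddenServicePort 80 127.0.0.1:80

    # torrc for the tor built from git (version 3 onion service)
    HiddenServiceDir /var/lib/tor/onion-v3/
    HiddenServiceVersion 3
    HiddenServicePort 80 127.0.0.1:80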

The main piece of advice I have for anyone that would like to have an onion service version of their static website is to make sure that your static site generator is handling URLs for you and that your sources have relative URLs as far as possible. Hugo is great at this and most themes should be using the baseURL configuration parameter where appropriate.

There may be some room for improvement here in the polling process, perhaps this could be triggered by a webhook instead.

I’m not using HTTPS on these services as the HTTPS private key for the domain isn’t even controlled by me, it’s controlled by Netlify, so wouldn’t really be a great method of authentication and Tor already provides strong encryption and its own authentication through the URL of the onion service.

Of course, this means you need a secure way to get the URL, so here’s a PGP signed couple of URLs:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

As of 2017-09-23, the website at iain.learmonth.me is mirrored by me at
the following onion addresses:

w6d6vblb6vhuqxt6.onion
tvin5bvfwew3ldttg5t6ynlif4t53y3mbmb7sgbyud7h5q6gblrpsnyd.onion

This declaration was written and signed for publication in my blog.
-----BEGIN PGP SIGNATURE-----

iQEzBAEBCgAdFiEEfGEElJRPyB2mSFaW0hedW4oe0BEFAlnG1FMACgkQ0hedW4oe
0BGtTwgAp9PK6x1X9lnPLaeOOEALxn2BkDK5Q6PBt7OfnTh+f53oRrrxf0fmfNMH
Qz/IDY+tULX3TZYbjDsuu+aDpk6YIdOnOzFpIYW9Qhm6jAsX4RDfn1cZoHg1IeM7
bCvrYHA5u753U3Mm+CsLbGihpYZE/FBdc/nE5S6LxYH83QZWLIW19EPeiBpBp3Hu
VB6hUrDz3XU23dXn2U5/7faK7GKbC6TrBG/Z6dUtaXB62xgDIrPEMorwfsAZnWv4
3mAEsYJv9rnIyLbWamXDas8fJG04DOT+2C1NYmZ5CNJ4C7PKZuIYkaoVAp+pzLGJ
6BEBYaRvYIjd5g8xdVC3kmje6IM9cg==
=lUvh
-----END PGP SIGNATURE-----

Note: For the next generation onion service, I do currently have some logging enabled in the tor daemon as I’m running this service as an experiment to uncover any bugs that appear. There is no logging beyond the default for the version 2 hidden service’s tor daemon.

Another note: Current stable releases of Tor Browser do not support next generation onion services, you’ll have to grab an experimental build to try them out.

[Image: viewing my next generation onion service in Tor Browser]

Planet Debian: Iain R. Learmonth: Free Software Efforts (2017W38)

Here’s my weekly report for week 38 of 2017. This week has not been a great week as I saw my primary development machine die in a spectacular reboot loop. Thanks to the wonderful community around Debian and free software (that if you’re reading this, you’re probably part of), I should be back up to speed soon. A replacement workstation is currently moving towards me and I’ve received a number of smaller donations that will go towards video converters and upgrades to get me back to full productivity.

Debian

I’ve prepared and tested backports for 3 packages in the tasktools packaging team: tasksh, bugwarrior and powerline-taskwarrior. Unfortunately I am not currently in the backports ACLs and so I can’t upload these but I’m hoping this to be resolved soon. Once these are uploaded, the latest upstream release for all packages in the tasktools team will be available either in the stable suite or in the stable backports suite.

In preparation for the shutdown of Alioth mailing lists, I’ve set up a new mailing list for the tasktools team and have already updated the maintainer fields for all the team’s packages in git. I’ve subscribed the old mailing list’s user to the new mailing list in DDPO so there will still be a comprehensive view there during the migration. I am currently in the process of reaching out to the admins of git.tasktools.org with a view to moving our git repositories there.

I’ve also continued to review the scapy package and have closed a couple more bugs that were already fixed in the latest upstream release but had been missed in the changelog.

Bugs closed (fixed/wontfix): #774962, #850570

Tor Project

I’ve deployed a small fix to an update from last week where the platform field on Atlas had been pulled across to the left column. It has now been returned to the right hand column and is not pushed down the page by long family lists.

I’ve been thinking about the merge of Compass functionality into a future Atlas and this is being tracked in #23517.

Tor Project has approved expenses (flights and hotel) for me to attend an in-person meeting of the Metrics Team. This meeting will occur in Berlin on the 28th September and I will write up a report detailing outcomes relevant to my work after the meeting. I have spent some time this week preparing for this meeting.

Bugs closed (fixed/wontfix): #22146, #22297, #23511

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The loss of my primary development machine was a setback, however, I have been donated a new workstation which should hopefully arrive soon. The hard drives in my NAS can now also be replaced as I have budget available for this now. I do not see any hardware failures being imminent at this time, however should they occur I would not have budget to replace hardware, I only have funds to replace the hardware that has already failed.


Planet Debian: Enrico Zini: Systemd unit files

These are the notes of a training course on systemd I gave as part of my work with Truelite.

Writing .unit files

For reference, the global index with all .unit file directives is at man systemd.directives.

All unit files have a [Unit] section with documentation and dependencies. See man systemd.unit for documentation.

It is worth having a look at existing units to see what they are like. Use systemctl --all -t unittype for a list, and systemctl cat unitname to see its content wherever it is installed.

For example: systemctl cat graphical.target. Note that systemctl cat adds a line of comment at the top so one can see where the unit file is installed.

Most unit files also have an [Install] section (also documented in man systemd.unit) that controls what happens when enabling or disabling the unit.


.target units

.target units only contain [Unit] and [Install] sections, and can be used to give a name to a given set of dependencies.

For example, one could create a remote-maintenance.target unit, that when brought up activates, via dependencies, a set of services, mounts, network sockets, and so on.
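A sketch of such a target; the service names here are invented for the example:

    [Unit]
    Description=Remote maintenance mode
    Requires=ssh.service
    Wants=rsync.service

    [Install]
    WantedBy=multi-user.target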

See man systemd.target

See systemctl --all -t target for examples.

special units

man systemd.special has a list of unit names that have a standard use associated with them.

For example, ctrl-alt-del.target is a unit that is started whenever Control+Alt+Del is pressed on the console. By default it is symlinked to reboot.target, and you can provide your own version in /etc/systemd/system/ to perform another action when Control+Alt+Del is pressed.
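For example, to power the machine off instead of rebooting, the override can be a plain symlink (the poweroff choice here is just an illustration):

    ln -sf /lib/systemd/system/poweroff.target /etc/systemd/system/ctrl-alt-del.target
    systemctl daemon-reload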

User units

systemd can also be used to manage services on a user session, starting them at login and stopping them at logout.

Add --user to the normal systemd commands to have them work with the current user's session instead of the general system.
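For example, to inspect your session's units and enable one at login (the unit name is invented for the example):

    systemctl --user status
    systemctl --user enable --now mpd.service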

See systemd/User in the Arch Wiki for a good description of what it can do.

Planet Debian: Dirk Eddelbuettel: RcppCNPy 0.2.7

A new version of the RcppCNPy package arrived on CRAN yesterday.

RcppCNPy provides R with read and write access to NumPy files thanks to the cnpy library by Carl Rogers.
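A minimal round trip looks something like this; the file names are placeholders, and npyLoad/npySave are the package's read and write entry points:

    library(RcppCNPy)
    m <- npyLoad("data.npy")    # read a NumPy array into an R matrix
    npySave("out.npy", m * 2)   # write an R matrix back out as .npy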

This version updates internals for function registration, but otherwise mostly switches the vignette over to the shiny new pinp two-column template and package.

Changes in version 0.2.7 (2017-09-22)

  • Vignette updated to Rmd and use of pinp package

  • File src/init.c added for dynamic registration

CRANberries also provides a diffstat report for the latest release. As always, feedback is welcome and the best place to start a discussion may be the GitHub issue tickets page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Dirk Eddelbuettel: RcppClassic 0.9.8

RcppClassic 0.9.8 is a bug-fix release for the very recent 0.9.7, fixing a build issue on macOS that was introduced in 0.9.7. No other changes.

Courtesy of CRANberries, there is a summary of changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux Australia: Tim Serong: On Equal Rights

This is probably old news now, but I only saw it this morning, so here we go:

In case that embedded tweet doesn’t show up properly, that’s an editorial in the NT News which says:

Voting papers have started to drop through Territory mailboxes for the marriage equality postal vote and I wanted to share with you a list of why I’ll be voting yes.

1. I’m not an arsehole.

This resulted in predictable comments along the lines of “oh, so if I don’t share your views, I’m an arsehole?”

I suppose it’s unlikely that anyone who actually needs to read and understand what I’m about to say will do so, but just in case, I’ll lay this out as simply as I can:

  • A personal belief that marriage is a thing that can only happen between a man and a woman does not make you an arsehole (it might make you on the wrong side of history, or a lot of other things, but it does not necessarily make you an arsehole).
  • Voting “no” to marriage equality is what makes you an arsehole.

The survey says “Should the law be changed to allow same-sex couples to marry?” What this actually means is, “Should same-sex couples have the same rights under law as everyone else?”

If you believe everyone should have the same rights under law, you need to vote yes regardless of what you, personally, believe the word “marriage” actually means – this is to make sure things like “next of kin” work the way the people involved in a relationship want them to.

If you believe that there are minorities that should not have the same rights under law as everyone else, then I’m sorry, but you’re an arsehole.

(Personally I think the Marriage Act should be ditched entirely in favour of a Civil Unions Act – that way the word “marriage” could go back to simply meaning whatever it means to the individuals being married, and to their god(s) if they have any – but this should in no way detract from the above. Also, this vote shouldn’t have happened in the first place; our elected representatives should have done their bloody jobs and fixed the legislation already.)

TED: Future visions: The talks of TEDGlobal>NYC

A night of TED Talks at The Town Hall theater in Manhattan covered topics ranging from climate change and fake news to the threat AI poses for democracy. Photo: Ryan Lash / TED

The advance toward a more connected, united, compassionate world is in peril. Some voices are demanding a retreat, back to a world where insular nations battle for their own interests. But most of the big problems we face are collective in nature and global in scope. What can we do, together, about it?

In a night of talks curated and hosted by TED International Curator Bruno Giussani and TED Curator Chris Anderson at The Town Hall in Manhattan, eight speakers covered topics ranging from climate change and fake news to the threat AI poses for democracy and the future of markets, imagining what a globally connected world could and should look like.

What stake do we have in common? Naoko Ishii is all about building bridges between people and the environment (her organization is one of the main partners in a Herculean effort to restore the Amazon). As the CEO and chair of the Global Environment Facility, it’s her job to get everyone on board with protecting and respecting the global commons (water, air, forests, biodiversity, the oceans), if only for the simple fact that the world’s economy is intimately linked to the wellness of Earth. Ishii opened TEDGlobal>NYC with a necessary reminder: that despite their size, these global commons have been neglected for too long, and the price is too high not to make fundamental changes in our collective behavior to save them from collapse. This current generation, she says, is the last generation that can preserve what’s left of our natural resources. If we change how we eat, reduce our waste and make determined strides toward sustainable cities, there’s a chance that all hope is not lost.

Climate psychologist Per Espen Stoknes explains a new way of talking about climate change at TEDGlobal>NYC, September 20, 2017, The Town Hall, NY. Photo: Ryan Lash / TED

What we think about when we try not to think about global warming. From “scientese” to visions of the apocalypse, climate-change advocates have struggled with communicating the realities of our warming planet in a way that actually gets people to do something. “Climate psychologist” Per Espen Stoknes wondered why so many climate-change messages leave us feeling helpless and in denial instead of inspired to seek solutions. He shares with us his findings for “a more brain-friendly climate communication” — one that feels personal, doable and empowering. By scaling actions and examples down to local and more relatable levels, we can begin to feel more in control, and start to feel like our actions will have impact, Stoknes suggests. Stepping away from the doomsday narratives and instead reframing green behavior in terms of its positive additions to our lives, such as job growth and better health, can also limit our fear and increase our desire to engage in these important conversations. Our planet may be in trouble, but telling new stories could just save us.

Building resilient cities. With fantastic new maps that provide interactive and visual representations of large data sets, Robert Muggah articulates an ancient but resurging idea: that cities should be not only the center of economic life but also the foundation of our political lives. Cities bear a significant burden of the world’s problems and have been catalysts for catastrophe, Muggah says — as an example, he shows how, in the run-up to the civil war in Syria, fragile cities like Homs and Aleppo could not bear the weight of internally displaced refugees running away from drought and famine. While this should alarm us, Muggah also sees opportunity and a chance to ride the chaotic waves of the 21st century. Looking around the world, he lays out six principles for building the resilient city. For instance, he highlights integrated and multi-use solutions like Seoul’s expanding public transportation system, where cars once dominated how people move. The current model of the nation-state that emerged in the 17th century is no longer what it once was; nation-states cannot face global crises decisively and efficiently. But the work of urban leaders and coalitions of cities like the C-40 can guide us to a healthier, more peaceful planet.

Christiane Amanpour speaks about the era of fake news at TEDGlobal>NYC, September 20, 2017, The Town Hall, NY, NY. Photo: Ryan Lash / TED

Seeking the truth. Known worldwide for her courage and clarity, Christiane Amanpour has spent the past three decades interviewing business, cultural and political leaders who have shaped history. This time she’s the one being interviewed, by TED curator Chris Anderson, in a comprehensive conversation covering fake news, objectivity in journalism, the leadership vacuum in global politics and much more. Amanpour opens with her experience reporting the Srebrenica genocide in the 1990s, and connects it to the state of journalism today, making a strong case for refusing to be an accomplice to fake news. “We’ve never faced such a massive amount of information which is not curated by those whose profession leads them to abide by the truth,” she says. “Objectivity means giving all sides an equal hearing but not creating a forced moral equivalence.” Facebook and other outlets need to step up and combat fake news, she continues, calling for a moral code of conduct and algorithms to “filter out the crap” that populates our news feeds. Amanpour — fresh from her interview with French president Emmanuel Macron, his first with an international journalist — leaves us with some wisdom: “Be careful where you get information from. Unless we are all engaged as global citizens who appreciate the truth, who understand science, empirical evidence and facts, then we are going to be wandering around — to a potential catastrophe.”

Though he had a cold and could not sing for us, Yusuf Islam (Cat Stevens) takes a moment onstage to discuss faith and music with TED’s own Chris Anderson, at TEDGlobal NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

A cat’s attic. Yusuf Islam (Cat Stevens)‘s music has been embraced by generations of fans as anthems of peace and unity. In conversation with TED curator Chris Anderson, Yusuf discusses the influence of his music, the arc of his career and his Muslim faith. “I discovered something beyond the facade of what we are taught to believe about others,” Yusuf says of his embrace of Islam in the late ’70s. “There are ways of looking at this world other than the material … Islam brought together all the strands of religion I could ever wish for.” Connecting his return to music after 9/11 to his current work and new album, The Laughing Apple, Yusuf sees his mission as spreading messages of peace and hope. “Be careful about exclusion,” he says. “In the [education] curriculum, we’ve got to start looking towards a globalized curriculum … We should know a bit more about the other to avoid the build up of antagonization.”

“Wherever I look, I see nuances withering away.” In a personal talk, author and political commentator Elif Shafak cautions against the dangers of a dualist worldview. A native of Turkey, she has experienced the devastation that a loss of diversity can bring firsthand, and she knows the revolutionary power of plurality in response to authoritarianism. She reminds us that there are no binaries, whether between developed and developing nations, politics and emotions, or even our own identities. By embracing our countries and societies as mosaics, we push back against tribalism and reach across borders. “One should never ever remain silent for fear of complexity,” Shafak says.

We know what we are saying “no” to, but what are we saying “yes” to? In her classic book The Shock Doctrine — and her new book No Is Not Enough — writer and activist Naomi Klein examines how governments use large-scale shocks like natural disasters, financial crises and terrorist attacks to exploit the public and push through radical pro-corporate measures. At TEDGlobal>NYC, Klein explains that resistance to policies that attack the public is not enough; we also must have a concrete plan for how we want to reorganize society. A few years ago, Klein and a consortium of indigenous leaders, urban hipsters, climate change activists, oil and gas workers, faith leaders, anarchists, migrant rights organizers and leading feminists decided to lock themselves in a room to discuss their utopian vision for the future. They emerged two days later with a manifesto known as The Leap Manifesto, which is all about caring for the earth and one another. Klein shares a few propositions from the platform, including a call for a 100 percent renewable economy, new investment in the low-carbon workforce, comprehensive programs to retrain workers who are losing their jobs in extractive and industrial sectors, and a demand that those who profit from pollution pay for it. “We live in a time where every alarm in our house is going off,” she concludes. “It’s time to listen. It’s time — together — to leap.”

Could a Facebook algorithm tell us how to vote? Zeynep Tufekci asks why algorithms are controlling more and more of our behavior, like it or not. She speaks at TEDGlobal>NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

There’s nothing left to fear from AI but the humans behind it. Technosociologist Zeynep Tufekci isn’t worried about AI — it’s the intention behind the technology that’s truly concerning. Data about you is being collected and sold daily, says Tufekci, and the prodigious potential of machine learning comes with potentially catastrophic risks. Companies like Facebook and Google haven’t thoroughly factored in the ethical dilemmas that come with automated systems that are programmed to exploit human weakness in order to place ads in front of exactly the people most likely to buy. If not checked, the ads and recommendations that follow you around well after you’ve stopped searching can snowball from well-meaning to insidious. It’s not to say that social media and the internet are all bad — in fact, Tufekci has written at length about the benefits and power it has bestowed upon many — but her talk is a strong reminder to be aware of the negative potential of AI as well as the positives, and to fight for our collective digital future.

Competition is only fair, says the EU’s Commissioner for Competition, Margrethe Vestager at TEDGlobal NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

The fight for fairness. This June, the EU levied a record $2.7 billion fine against Google for breaching antitrust rules by unfairly favoring its comparison shopping service in search. More than double the previous largest penalty in this type of antitrust case, the penalty confirmed Margrethe Vestager, European Commissioner for Competition, as one of the world’s most powerful trustbusters. In the closing talk of TEDGlobal>NYC, Vestager makes the connection between how fairness in the markets — and corrective action to ensure it exists — can establish trust in society and each other. Competition in markets gives us the power to demand a fair deal, Vestager says; when it’s removed, either by colluding businesses or biased governments, trust disappears too. “Lack of trust in the market can rub off on society, so we lose trust in society as well,” she says. “Without trust, everything becomes harder.” But competition rules — and those that enforce them — can reestablish the balance between individuals and powerful, seemingly invulnerable multinational corporations. “Trust cannot be imposed, it has to be earned,” Vestager says. “Competition makes the market work for everyone. And that’s why I’m convinced that real and fair competition has a vital role to play in building the trust we need to get the best out of society. And that starts with enforcing our rules.”

Shine as bright as you can. Electro-soul duo Ibeyi closed out TEDGlobal>NYC with a minimalistic, deeply transportive lyrical set. A harmony of voices, piano and cajon drum filled the venue as the pair sang in a mixture of Yoruba, English and French. “Look at the star,” they sing. “I know she’s proud of who you’ve been and who you are.”

TEDGlobal>NYC was made possible by support from Ford Foundation, The Skoll Foundation, United Nations Foundation and Global Citizen.


TED: How the ‘Battle of the Sexes’ influenced a generation of men: Billie Jean King’s TEDWomen update

Billie Jean King: “Bobby Riggs — he was the former number one player, he wasn’t just some hacker. He was one of my heroes and I admired him. And that’s the reason I beat him, actually, because I respected him.” She spoke with Pat Mitchell at TEDWomen2015. Photo: Marla Aufmuth/TED

Forty-three years ago this week, the number one tennis star in the world, 29-year-old Billie Jean King, agreed to take on 55-year-old Bobby Riggs, in a match dubbed the “Battle of the Sexes.” The prize was $100,000 — which compared with today’s million-dollar-winning pots wasn’t much — but it was the first time that women and men were offered the same amount of prize money for victory.

The exhibition match, which admittedly was more notable at the time for its spectacle and outrageousness — Billie Jean King entered the Houston Astrodome on a feathery litter carried by shirtless men, for instance — was the most watched tennis match ever, with an estimated worldwide television audience of 90 million people. If you are old enough to remember it, you probably watched it.

Billie Jean King won in straight sets: 6-4, 6-3, 6-3.

This weekend, a new movie based on the true story starring Emma Stone as Billie Jean King and Steve Carell as Bobby Riggs hits theaters. With the election of Donald Trump — and all the sexism and misogyny that the 2016 election entailed just behind us — the story is sadly relevant today. As Lynn Sherr wrote in her review of the movie today at BillMoyers.com, “It’s all frustratingly familiar, but this time, the over-the-hill clown won.”

I interviewed Billie Jean King at TEDWomen in 2015 about her tennis career and lifelong fight for gender parity in sports and in the workplace. She talked about the match with Riggs and the intense pressure she felt on every stroke to win for women. She recalled, “I thought, ‘If I lose, it’s going to put women back 50 years, at least.’”

After she won, many women told her that her victory empowered them to finally get up the nerve to ask for a raise at work. “Some women had waited 10, 15 years to ask. I said, ‘More importantly, did you get it?’” (They did.)

As for men, the reaction was delayed. Many years later, she came to realize that the match had made an impact on the generation of men who were children at the time – an impact that they themselves didn’t realize until they were older. She told me, “Most times, the men are the ones who have tears in their eyes, it’s very interesting. They say, ‘Billie, I was very young when I saw that match, and now I have a daughter. And I am so happy I saw that as a young man.’”

One of those young men was President Obama.

He said: “You don’t realize it, but I saw that match at 12. And now I have two daughters, and it has made a difference in how I raise them.”

Watch my interview with Billie Jean King if you haven’t seen it:

A common refrain of those working to improve diversity and representation in media is that if you can’t see it, you can’t be it. And that’s true in sports, government and in the workplace as well. If leaders don’t represent the diversity of our globalizing world, fresh ideas, diverse talent and an inclusive society can’t flourish. Through the Billie Jean King Leadership Initiative, King works to level the playing field for all people of all backgrounds so that everyone can “achieve their maximum potential and contribute to building a better society for all.” (Full disclosure: I am a member of the BJKLI advisory council.)

Emma Stone told USA Today earlier this month that she’s proud to play a part in showing some of King’s story to a younger audience. “The nice thing about doing a film like this,” she said, “is that there’s a whole generation of people who weren’t born before the Battle of the Sexes who are going to learn about this incredible period in history and all the things that have come since, so I’m grateful for that.”

“It wasn’t about tennis,” says King. “It was about history and social change.”

TEDWomen 2017 happens November 1–3 in New Orleans, and you’re invited. Learn more!

Billie Jean King: “I started thinking about my sport and how everybody who played wore white shoes, white clothes, played with white balls — everybody who played was white. And I said to myself, at 12 years old, ‘Where is everyone else?’ And that just kept sticking in my brain. And that moment, I promised myself I’d fight for equal rights and opportunities for boys and girls, men and women, the rest of my life.” Photo: Marla Aufmuth/TED


Planet DebianIain R. Learmonth: VM on bhyve not booting

Last night I installed updates on my FreeNAS box and rebooted it. As expected my network died, but then it never came back, which I hadn’t expected.

My FreeNAS box provides backup storage space, a local Debian mirror and a mirror of talks from recent conferences. It also runs a couple of virtual machines and one of these provides my local DNS resolver.

I hooked up the VNC console to the virtual machine and the problem looked to be that it was booting from the Debian installer CD. I removed the CD from the VM and rebooted, thinking that would be the end of it, but nope:

The EFI shell presented where GRUB should have been

I put the installer CD back and booted in “Rescue Mode”. For some reason, the bootloader installation wasn’t working, so I planned to reinstall it. The autopartition layout for Debian with EFI seems to use /dev/sda2 for the root partition. When you choose this it will see that you have an EFI partition and offer to mount it for you too.

When I went to install the bootloader, I saw another option that I didn’t know about: “Force GRUB installation in removable media path”. In the work I did on live-wrapper I had only ever dealt with this method of booting, I didn’t realise that there were other methods. The reasoning behind this option can be found in detail in Debian bug #746662. I also found mjg59’s blog post from 2011 useful in understanding this.

Suffice it to say that this fixed the booting issue for me in this case. I haven’t investigated this much further so I can’t be certain of any reproducible steps to this problem, but I did also stumble across this forum post which essentially gives the manual steps that are taken by that Rescue Mode option in order to fix the problem. I think the only reason I hadn’t run into this before now is that the VMs hadn’t been rebooted since their installation.
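For reference, the manual equivalent of that Rescue Mode option is roughly the following (a sketch assuming a default Debian amd64 EFI layout with the ESP mounted at /boot/efi; adjust paths for your system):

# Copy GRUB's EFI binary into the fallback "removable media" path,
# which the firmware will boot when it has no valid boot entry.
mkdir -p /boot/efi/EFI/boot
cp /boot/efi/EFI/debian/grubx64.efi /boot/efi/EFI/boot/bootx64.efi

On Debian, grub-install’s --force-extra-removable option (the flag behind that Rescue Mode choice and the matching debconf question) should achieve the same result.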

Planet DebianRussell Coker: Converting Mbox to Maildir

MBox is the original and ancient format for storing mail on Unix systems. It consists of a single file per user under /var/spool/mail that has messages concatenated, so performance is obviously very poor when deleting messages from a large mail store, as the entire file has to be rewritten. Maildir was invented for Qmail by Dan Bernstein and has a single message per file, giving fast deletes among other performance benefits. An ongoing issue over the last 20 years has been converting Mbox systems to Maildir. The various ways of getting IMAP to work with Mbox only made this more complex.

The Dovecot Wiki has a good page about converting Mbox to Maildir [1]. If you want to keep the same message UIDs and the same path separation characters then it will be a complex task. But if you just want to copy a small number of Mbox accounts to an existing server then it’s a bit simpler.

Dovecot has a mb2md.pl script to convert folders [2].

cd /var/spool/mail
mkdir -p /mailstore/example.com
for U in * ; do
  ~/mb2md.pl -s $(pwd)/$U -d /mailstore/example.com/$U
done

Shell code like the above is needed to convert the inboxes. If the users don’t have IMAP folders (e.g. they are just POP users or use local Unix MUAs) then that’s all you need to do.

cd /home
for DIR in */mail ; do
  U=$(echo $DIR | cut -f1 -d/)
  cd /home/$DIR
  for FOLDER in * ; do
    ~/mb2md.pl -s $(pwd)/$FOLDER -d /mailstore/example.com/$U/.$FOLDER
  done
  cp .subscriptions /mailstore/example.com/$U/subscriptions
done

Some shell code like the above will convert the IMAP folders to Maildir format. The end result is that the users will have to download all the mail again, as their MUA will think that every message has been deleted and replaced. But as all servers with significant amounts of mail or important mail were probably converted to Maildir a decade ago, this shouldn’t be a problem.
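One thing the scripts above don’t cover is ownership: run as root, mb2md.pl will typically leave the new Maildirs owned by root. Assuming the directory names under /mailstore/example.com match the Unix usernames, a loop like this fixes that up:

cd /mailstore/example.com
for U in * ; do
  chown -R "$U" "$U"
done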

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV October 2017 Workshop

Oct 21 2017 12:30
Oct 21 2017 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

There will also be the usual casual hands-on workshop: Linux installation, configuration, assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks, from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main October 2017 Meeting

Oct 3 2017 18:30
Oct 3 2017 20:30
Location: 
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

PLEASE NOTE NEW LOCATION

Tuesday, October 3, 2017
6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000

Speakers:

  • TBA


Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

,

Planet DebianEnrico Zini: Systemd on the command line

These are the notes of a training course on systemd I gave as part of my work with Truelite.

Exploring the state of a system

  • systemctl status [unitname [unitname..]] shows the status of one or more units, or of the whole system. Glob patterns also work: systemctl status "systemd-fsck@*"
  • systemctl list-units (or just systemctl) shows a table with all units, their status and their description
  • systemctl list-sockets lists listening sockets managed by systemd and what they activate
  • systemctl list-timers lists timer-activated units, with information about when they last ran and when they will run again
  • systemctl is-active [pattern] checks if one or more units are in active state
  • systemctl is-enabled [pattern] checks if one or more units are enabled
  • systemctl is-failed [pattern] checks if one or more units are in failed state
  • systemctl list-dependencies [unitname] lists the dependencies of a unit, or a system-wide dependency tree
  • systemctl is-system-running checks if the system is running correctly, or if some unit is in a failed state
  • systemd-cgtop like top but processes are aggregated by unit
  • systemd-analyze produces reports on boot time, per-unit boot time charts, dependency graphs, and more
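As a quick illustration of how these fit together (networking.service here is just an example unit name), a basic health check might look like this:

# is anything in a failed state?
systemctl is-system-running
# list the failed units, if any
systemctl list-units --state=failed
# drill into a specific one
systemctl status networking.service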

Start and stop services

Similar to the System V service command, systemctl provides commands to start/stop/restart/reload units or services:

  • start: starts a unit if it is not already started
  • stop: stops a unit
  • restart: starts or restarts a unit
  • reload: tell a unit to reload its configuration (if it supports it)
  • try-restart: restarts a unit only if it is already active, otherwise do nothing, to prevent accidentally starting a service
  • reload-or-restart: tell a unit to reload its configuration if supported, otherwise restart it
  • try-reload-or-restart: tell a unit to reload its configuration if supported, otherwise restart it. If the unit is not already active, do nothing to prevent accidentally starting a service.
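For example, to apply a configuration change to a running service with as little downtime as possible (cron.service is just an example unit):

# reload the configuration if the unit supports it, restart otherwise
systemctl reload-or-restart cron.service
# confirm the unit is still up
systemctl is-active cron.service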

Changing global system state

systemctl has halt, poweroff, reboot, suspend, hibernate, and hybrid-sleep commands to tell systemd to reboot, power off, suspend and so on. kexec and switch-root also work.

The rescue and emergency commands switch the system to rescue and emergency mode (see man systemd.special). systemctl default switches to the default mode, which also happens when exiting the rescue or emergency shell.

Run services at boot

systemd does not implement runlevels, and services start at boot based on their dependencies.

To start a service at boot, you add to its .service file a WantedBy= dependency on a well-known .target unit.

At boot, systemd brings up the whole chain of dependencies starting from a default unit, and that will eventually activate your service as well.

See systemctl get-default for what unit is currently the default in your system. You can change it via the systemd.unit= kernel command line, so you can configure multiple entries in the boot loader that boot the system running different services. For example systemd.unit=rescue.target for a rescue mode, systemd.unit=multi-user.target for a non-graphical mode, or add your own .target file to implement new system modes.

See systemctl list-units -t target --all for a list of all currently available targets in your system.

  • systemctl enable unitname enables the unit to start at boot, by creating symlinks to it in the .wants directory of the units listed in its WantedBy= configuration
  • systemctl disable unitname removes the symlinks created by enable
  • systemctl reenable unitname removes and re-adds the symlinks, for when you have changed WantedBy=
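Putting this together, a minimal sketch of installing and enabling a new service at boot (the unit name myapp.service and its ExecStart path are invented for illustration):

cat > /etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=Example daemon

[Service]
ExecStart=/usr/local/bin/myapp

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload         # pick up the new unit file
systemctl enable myapp.service  # create the symlink in multi-user.target.wants/
systemctl start myapp.service   # also start it right now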

Notes:

  • systemctl start activates a unit right now, but does not automatically enable it at boot
  • systemctl enable enables a unit at boot, but does not automatically start it right now
  • a disabled unit can still be activated if another unit depends on it

To disable a unit so that it will never get started even if another unit depends on it, use systemctl mask unitname. Use systemctl unmask unitname to undo the masking.

Reloading / restarting systemd

systemctl daemon-reload tells systemd to reload its configuration.

systemctl daemon-reexec tells systemd to restart itself.

TEDCassini’s final dive, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

Farewell to Cassini — and here’s to the continuing search for life beyond Earth. In mid-August, PBS released a digital short featuring Carolyn Porco, a planetary scientist and the leader of the imaging team for the Cassini mission to Saturn. In the short, Porco discusses what is required for life to exist on a planet, and how Saturn’s moon Enceladus seems a promising place to look for life outside Earth. This coincides with Cassini’s final dive on September 15, 2017. After 20 years in space, the Cassini spacecraft ended its 13-year observation of Saturn by diving into its atmosphere, where it burned and disintegrated. (Watch Porco’s TED Talk)

How old is zero really? The Bakhshali manuscript is a 70-page birch bark manuscript thought to have been used by merchants in India to practice arithmetic. Notably, it contains the number zero, represented by a small dot. After carbon-dating the manuscript, scientists from the University of Oxford, including mathematics professor Marcus du Sautoy, determined that the manuscript likely dates from 200–400 A.D., much earlier than previously thought. If the carbon dating is correct, Bakhshali may be the first known usage of zero as a symbol for nothing. (Watch du Sautoy’s TED Talk)

The power of taking time off. In 2009, Stefan Sagmeister took the TED stage by storm as he shared his vision of time off. In his talk, he explains that every seven years, he embarks on a sabbatical year to recharge, be creative, and feel inspired. Fast forward to 2017, and Neil Pasricha teamed up with the CEO of SimpliFlying, a global aviation strategy firm, to test Sagmeister’s approach within the company. Instead of every seven years, employees took vacation every seven weeks. Despite a few pain points, workers’ creativity, productivity and happiness increased, and the firm’s economic performance improved, Pasricha reports in the Harvard Business Review. It seems as though it pays to relax. (Watch Sagmeister’s TED Talk and Neil Pasricha’s TED Talk)

What’s wrong with US democracy — and how to fix it. In this time of divisive politics, Michael Porter and colleague Katherine Gehl released new research describing the causes of the U.S. political system’s failure to serve the public interest. Their detailed report explains how the system changed over the years to benefit political parties and industry allies, and offers strategies for how we can reinvigorate our democracy. (Watch Michael Porter’s TED Talk)

The worst flag in North America gets a reboot. In Roman Mars’ TED Talk on awful city flag designs, he calls Pocatello, Idaho’s flag the worst in North America. The city’s residents didn’t stand for that; they called on local officials to create a new flag. In 2016, a flag design committee was formed, discussions were open to the public, and 709 submissions poured in. Mars even traveled to Pocatello to consult on the design process. Now, Pocatello’s flag has been transformed from what the North American Vexillological Association rated as the worst flag in North America into a flag that attempts to capture the beauty and history of Pocatello. (Watch Roman Mars’ TED Talk)  

Community Health Academy: Phase one. The news may be regularly alarming, but around the world, things are on an upward trajectory. At Goalkeepers, held September 19 and 20 in New York City, the Bill & Melinda Gates Foundation set out to celebrate the “quiet progress” being made toward the UN’s Sustainable Development Goals. Amid a speaker lineup that included Malala Yousafzai, Justin Trudeau and Barack Obama, 2017 TED Prize winner Raj Panjabi stepped up to share his vision for bringing health care to the billion people who lack it by empowering community health workers. He shared the latest on his TED Prize wish: the Community Health Academy. The project now has 15 partners and phase one, launching next year, will be a free, open-education platform for policy makers and nonprofit leaders interested in community health models. “We cannot achieve the Global Goals without investing in hiring, training and equipping community health workers,” said Panjabi. “We’re working to make sure community health workers are no longer an informal, unrecognized group but become a renowned, empowered profession like nurses and doctors.” (Watch Panjabi’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.

Featured Image Credit: NASA.



CryptogramFriday Squid Blogging: Using Squid Ink to Detect Gum Disease

A new dental imaging method, using squid ink, light, and ultrasound.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux Australiasthbrx - a POWER technical blog: Stupid Solutions to Stupid Problems: Hardcoding Your SSH Key in the Kernel

The "problem"

I'm currently working on firmware and kernel support for OpenCAPI on POWER9.

I've recently been allocated a machine in the lab for development purposes. We use an internal IBM tool running on a secondary machine that triggers hardware initialisation procedures, then loads a specified skiboot firmware image, a kernel image, and a root file system directly into RAM. This allows us to get skiboot and Linux running without requiring the usual hostboot initialisation and gives us a lot of options for easier tinkering, so it's super-useful for our developers working on bringup.

When I got access to my machine, I figured out the necessary scripts, developed a workflow, and started fixing my code... so far, so good.

One day, I was trying to debug something and get logs off the machine using ssh and scp, when I got frustrated with having to repeatedly type in our ultra-secret, ultra-secure root password, abc123. So, I ran ssh-copy-id to copy over my public key, and all was good.

Until I rebooted the machine, when strangely, my key stopped working. It took me longer than it should have to realise that this is an obvious consequence of running entirely from an initrd that's reloaded every boot...

The "solution"

I mentioned something about this to Jono, my housemate/partner-in-stupid-ideas, one evening a few weeks ago. We decided that clearly, the best way to solve this problem was to hardcode my SSH public key in the kernel.

This would definitely be the easiest and most sensible way to solve the problem, as opposed to, say, just keeping my own copy of the root filesystem image. Or asking Mikey, whose desk is three metres away from mine, whether he could use his write access to add my key to the image. Or just writing a wrapper around sshpass...

One Tuesday afternoon, I was feeling bored...

The approach

The SSH daemon looks for authorised public keys in ~/.ssh/authorized_keys, so we need to have a read of /root/.ssh/authorized_keys return a specified hard-coded string.

I did a bit of investigation. My first thought was to put some kind of hook inside whatever filesystem driver was being used for the root. After some digging, I found out that the filesystem type rootfs, as seen in mount, is actually backed by the tmpfs filesystem. I took a look around the tmpfs code for a while, but didn't see any way to hook in a fake file without a lot of effort - the tmpfs code wasn't exactly designed with this in mind.

I thought about it some more - what would be the easiest way to create a file such that it just returns a string?

Then I remembered sysfs, the filesystem normally mounted at /sys, which is used by various kernel subsystems to expose configuration and debugging information to userspace in the form of files. The sysfs API allows you to define a file and specify callbacks to handle reads and writes to the file.

That got me thinking - could I create a file in /sys, and then use a bind mount to have that file appear where I need it in /root/.ssh/authorized_keys? This approach seemed fairly straightforward, so I decided to give it a try.

First up, creating a pseudo-file. It had been a while since the last time I'd used the sysfs API...

sysfs

The sysfs pseudo file system was first introduced in Linux 2.6, and is generally used for exposing system and device information.

Per the sysfs documentation, sysfs is tied in very closely with the kobject infrastructure. sysfs exposes kobjects as directories, containing "attributes" represented as files. The kobject infrastructure provides a way to define kobjects representing entities (e.g. devices) and ksets which define collections of kobjects (e.g. devices of a particular type).

Using kobjects you can do lots of fancy things such as sending events to userspace when devices are hotplugged - but that's all out of the scope of this post. It turns out there are some fairly straightforward wrapper functions if all you want to do is create a kobject just to have a simple directory in sysfs.

#include <linux/kobject.h>

static int __init ssh_key_init(void)
{
        struct kobject *ssh_kobj;
        ssh_kobj = kobject_create_and_add("ssh", NULL);
        if (!ssh_kobj) {
                pr_err("SSH: kobject creation failed!\n");
                return -ENOMEM;
        }
        return 0;
}
late_initcall(ssh_key_init);

This creates and adds a kobject called ssh. And just like that, we've got a directory in /sys/ssh/!

The next thing we have to do is define a sysfs attribute for our authorized_keys file. sysfs provides a framework for subsystems to define their own custom types of attributes with their own metadata - but for our purposes, we'll use the generic bin_attribute attribute type.

#include <linux/sysfs.h>

const char key[] = "PUBLIC KEY HERE...";

static ssize_t show_key(struct file *file, struct kobject *kobj,
                        struct bin_attribute *bin_attr, char *to,
                        loff_t pos, size_t count)
{
        return memory_read_from_buffer(to, count, &pos, key, bin_attr->size);
}

static const struct bin_attribute authorized_keys_attr = {
        .attr = { .name = "authorized_keys", .mode = 0444 },
        .read = show_key,
        .size = sizeof(key)
};

We provide a simple callback, show_key(), that copies the key string into the file's buffer, and we put it in a bin_attribute with the appropriate name, size and permissions.

To actually add the attribute, we put the following in ssh_key_init():

int rc;
rc = sysfs_create_bin_file(ssh_kobj, &authorized_keys_attr);
if (rc) {
        pr_err("SSH: sysfs creation failed, rc %d\n", rc);
        return rc;
}

Woo, we've now got /sys/ssh/authorized_keys! Time to move on to the bind mount.
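On a system where sysfs is already mounted at /sys (more on that shortly), you can sanity-check the new attribute from userspace:

cat /sys/ssh/authorized_keys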

Mounting

Now that we've got a directory with the key file in it, it's time to figure out the bind mount.

Because I had no idea how any of the file system code works, I started off by running strace on mount --bind ~/tmp1 ~/tmp2 just to see how the userspace mount tool uses the mount syscall to request the bind mount.

execve("/bin/mount", ["mount", "--bind", "/home/ajd/tmp1", "/home/ajd/tmp2"], [/* 18 vars */]) = 0

...

mount("/home/ajd/tmp1", "/home/ajd/tmp2", 0x18b78bf00, MS_MGC_VAL|MS_BIND, NULL) = 0

The first and second arguments are the source and target paths respectively. The third argument, looking at the signature of the mount syscall, is a pointer to a string with the file system type. Because this is a bind mount, the type is irrelevant (upon further digging, it turns out that this particular pointer is to the string "none").

The fourth argument is where we specify the flags bitfield. MS_MGC_VAL is a magic value that was required before Linux 2.4 and can now be safely ignored. MS_BIND, as you can probably guess, signals that we want a bind mount.

(The final argument is used to pass file system specific data - as you can see it's ignored here.)

Now, how is the syscall actually handled on the kernel side? The answer is found in fs/namespace.c.

SYSCALL_DEFINE5(mount, char __user *, dev_name, char __user *, dir_name,
                char __user *, type, unsigned long, flags, void __user *, data)
{
        int ret;

        /* ... copy parameters from userspace memory ... */

        ret = do_mount(kernel_dev, dir_name, kernel_type, flags, options);

        /* ... cleanup ... */
}

So in order to achieve the same thing from within the kernel, we just call do_mount() with exactly the same parameters as the syscall uses:

rc = do_mount("/sys/ssh", "/root/.ssh", "sysfs", MS_BIND, NULL);
if (rc) {
        pr_err("SSH: bind mount failed, rc %d\n", rc);
        return rc;
}

...and we're done, right? Not so fast:

SSH: bind mount failed, rc -2

-2 is ENOENT - no such file or directory. For some reason, we can't find /sys/ssh... of course, that would be because even though we've created the sysfs entry, we haven't actually mounted sysfs on /sys.

rc = do_mount("sysfs", "/sys", "sysfs",
              MS_NOSUID | MS_NOEXEC | MS_NODEV, NULL);

At this point, my key worked!

Note that this requires that your root file system has an empty directory created at /sys to be the mount point. Additionally, in a typical Linux distribution environment (as opposed to my hardware bringup environment), your initial root file system will contain an init script that mounts your real root file system somewhere and calls pivot_root() to switch to the new root file system. At that point, the bind mount won't be visible from children processes using the new root - I think this could be worked around but would require some effort.

Kconfig

The final piece of the puzzle is building our new code into the kernel image.

To allow us to switch this important functionality on and off, I added a config option to fs/Kconfig:

config SSH_KEY
        bool "Andrew's dumb SSH key hack"
        default y
        help
          Hardcode an SSH key for /root/.ssh/authorized_keys.

          This is a stupid idea. If unsure, say N.

This will show up in make menuconfig under the File systems menu.

And in fs/Makefile:

obj-$(CONFIG_SSH_KEY)           += ssh_key.o

If CONFIG_SSH_KEY is set to y, obj-$(CONFIG_SSH_KEY) evaluates to obj-y and thus ssh_key.o gets compiled. Conversely, obj-n is completely ignored by the build system.

I thought I was all done... then Andrew suggested I make the contents of the key configurable, and I had to oblige. Conveniently, Kconfig options can also be strings:

config SSH_KEY_VALUE
        string "Value for SSH key"
        depends on SSH_KEY
        help
          Enter in the content for /root/.ssh/authorized_keys.

Including the string in the C file is as simple as:

const char key[] = CONFIG_SSH_KEY_VALUE;

And there we have it, a nicely configurable albeit highly limited kernel SSH backdoor!

Conclusion

I've put the full code up on GitHub for perusal. Please don't use it, I will be extremely disappointed in you if you do.

Thanks to Jono for giving me stupid ideas, and the rest of OzLabs for being very angry when they saw the disgusting things I was doing.

Comments and further stupid suggestions welcome!

TEDMeet the Fall 2017 class of TED Residents


The goal of the TED Residency is to incubate breakthrough projects of all kinds. Our Residents come from many areas of expertise, backgrounds and regions — and when they meet each other, new ideas spark. Here, two new Residents, “chief reading inspirer” Alvin Irby and filmmaker Karen Palmer, meet at the TED office on September 11, 2017, in New York. Photo: Dian Lofton / TED

On September 11, TED welcomed its latest class to the TED Residency program, an in-house incubator for breakthrough ideas. Residents spend four months in TED’s New York headquarters with other exceptional people from all over the map — including the Netherlands, the UK, Tennessee and Georgia.

The new Residents include:

  • A filmmaker creating a movie experience that progresses using your reaction
  • An entrepreneur bringing reading spaces to unlikely places
  • A journalist advocating for better support for women after they’ve given birth
  • An artist looking to bring more humanity to citizens of North Korea

Tobacco Brown is an artist whose medium is plants and gardens. In her public art installations, she comments on sociopolitical realities by bringing nature to underinvested urban environments. During her Residency, she is turning her lifetime of experiences into a book.

A former foreign-aid worker and White House staffer, Stan Byers is an expert on emerging markets, geopolitical stability and security. His current project is applying AI to the Fragile States Index to identify more innovative and effective responses to state instability. He is working to incorporate more real-time data sources and, long-term, to help design more equitable, creative and resilient social and market structures.

William Frey is a qualitative researcher and digital ethnographer at Columbia University who is using machine learning to detect patterns in social media posts and police reports to map the genesis of violence. His goal is to spot imminent violence before it erupts and then alert communities to intervene.


Inside the TED office theater, TED Residency program manager Katrina Conanan and director Cyndi Stivers welcome the new class of Residents and alumni on September 11, 2017, in New York. Photo: Dian Lofton / TED

Alvin Irby is the founder and “chief reading inspirer” at Barbershop Books, which creates child-friendly reading spaces in barbershops across America to encourage young Black boys to read for fun. He is developing an education podcast to share insights about helping children of color realize their full potential.

London-based filmmaker Karen Palmer uses AI interactive stories to inspire and enlighten her audience. Her current project, RIOT, is a live-action film with 3D sound that helps viewers navigate through a dangerous riot. She uses facial recognition and machine-learning technology to give viewers real-time feedback about their own visceral reactions.

Web designer Derrius Quarles is the cofounder and CTO of BREAUX Capital, a financial wellness startup devoted to Black millennials. Using a combination of technology, education, and behavioral economics, he hopes to break down the systemic barriers to financial health that people of color have long faced.


From left, TED Residency alum Liz Jackson, a fashion designer and activist from our very first class, chats with new Residents Anouk Wipprecht, a fashion designer and technologist, and animator Eiji Han Shimizu, during our meetup on September 11, 2017, in New York. Photo: Dian Lofton / TED

Michael Rain is the creator of Enodi, a digital gallery that highlights the stories of first-generation Black immigrants of African, Caribbean and Latinx descent. He is also cofounder of ZNews Africa, which makes mobile, web and email products for the global Pan-African community.

Kifah Shah is cofounder of SuKi Se, an ethical fashion brand produced by artisans in Pakistan. Her company strives to offer access to technologies that ensure high production standards and inclusive supply chains. Kifah is also a digital campaign strategist for MPower Change.

How do organizations hire better employees? That is a question Jason Shen has been thinking about through his company Headlight, a platform for tech employers to manage assignments, and The Talent Playbook, an open-source repository of best practices for hiring.

Eiji Han Shimizu is a creative activist from Japan who uses animation and graphic novels to galvanize his audiences. His current project is an animated film depicting the stories of North Korean political prisoners and ordinary people whose lives are hidden behind the headlines.

Bob Stein has long been in the vanguard: Immersed in radical politics as a young man, he grew into one of the founding fathers of new media (Criterion, Voyager, Institute for the Future of the Book). He’s wondering what sorts of new rituals and traditions might emerge as society expands to include increasing numbers of people in their eighties and nineties.


Kifah Shah, cofounder of SuKi Se, chats during our residents meetup on September 11, 2017, in New York. Photo: Dian Lofton / TED

Malika Whitley is the Atlanta-based CEO and founder of ChopArt, an organization for homeless teens focused on mentorship, dignity and opportunity through the arts. ChopArt partners with local shelters and homeless organizations to provide multidisciplinary arts programming in Atlanta, New Orleans, Hyderabad and Accra.

Anouk Wipprecht is a Dutch designer and engineer whose work combines fashion and technology in what she calls “technical couture.” Her garments augment everyday interactions, using sensors, machine learning and animatronics; her designs move, breathe and react to the environment around them.

Ali Yarrow is a journalist and documentary producer examining how women recover from childbirth during what’s known as the Fourth Trimester. Particularly in the US, Ali argues, society and healthcare tend to focus on the health of babies, while the well-being of mothers is overlooked.

If you would like to be a part of the Spring 2018 TED Residency (which runs March 12 to June 15, 2018), applications open on November 1, 2017. For more information on requirements, and an advance peek at the application form, please see ted.com/residency.


Sociological ImagesPunk Rock Resisting Islamophobia

Originally posted at Discoveries

Punk rock has a long history of anti-racism, and now a new wave of punk bands are turning it up to eleven to combat Islamophobia. For a recent research article, sociologist Amy D. McDowell immersed herself in the “Taqwacore” scene — a genre of punk rock that derives its name from the Arabic word “Taqwa.” While inspired by the Muslim faith, this genre of punk is not strictly religious — Taqwacore captures the experience of the “brown kids,” Muslims and non-Muslims alike who experience racism and prejudice in the post-9/11 era. This music calls out racism and challenges stereotypes.

Through a combination of interviews and many hours of participant observation at Taqwacore events, McDowell brings together testimony from musicians and fans, describes the scene, and analyzes materials from Taqwacore forums and websites. Many participants, Muslim and non-Muslim alike, describe processes of discrimination where anti-Muslim sentiments and stereotypes have affected them. Her research shows how Taqwacore is a multicultural musical form for a collective, panethnic “brown” identity that spans multiple nationalities and backgrounds. Pushing back against the idea that Islam and punk music are incompatible, Taqwacore artists draw on the essence of punk to create music that empowers marginalized youth.

Neeraj Rajasekar is a Ph.D. student in sociology at the University of Minnesota.

(View original at https://thesocietypages.org/socimages)

CryptogramBoston Red Sox Caught Using Technology to Steal Signs

The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.

Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.

In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents' signs, including the catcher's signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.

But such information has to be rushed to the dugout on foot so it can be relayed to players on the field -- a runner on second, the batter at the plate -- while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.

This is ridiculous. The rules about what sorts of sign stealing are allowed and what sorts are not are arbitrary and unenforceable. My guess is that the only reason there aren't more complaints is because everyone does it.

The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.

Boston's mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.

Worse Than FailureError'd: Choose Wisely

"I'm not sure how I can give feedback on this course, unless, figuring out this matrix is actually a final exam," wrote Mads.


Brian W. writes, "Sorry that you're not happy with our spam, but before you go...just one more."


"I was looking forward to getting this Gerber Dime, but I guess I'll have to wait till they port it to OS X," wrote Peter G.


"Deleting 7 MB frees up 6.66 GB? I smell a possible unholy alliance," Mike W. writes.


Bill W. wrote, "I wonder if they're wanting to know to what degree I'm 'not at all likely' to recommend Best Buy to friends and family?"


"So, is this a new way for the folks at WebEx to make sure that you don't get bad answers?" writes Andy B.



Planet DebianIain R. Learmonth: It Died: An Update

Update: I’ve had an offer of a used workstation that I’m following up. I would still appreciate any donations to go towards costs for cables/converters/upgrades needed with the new system but the hard part should hopefully be out the way now. (:

Thanks for all the responses I’ve received about the death of my desktop PC. As I updated in my previous post, I find it unlikely that I will have to orphan any of my packages as I believe that I should be able to get a new workstation soon.

The responses I’ve had so far have been extremely uplifting for me. It’s very easy to feel that no one cares or appreciates your work when your hardware is dying and everything feels like it’s working against you.

I’ve already received two donations towards a new workstation. If you feel you can help then please contact me. I’m happy to accept donations by PayPal or you can contact me for BACS/SWIFT/IBAN information.

I’m currently looking at an HP Z240 Tower Workstation starting with 8GB RAM and then perhaps upgrading the RAM later. I’ll be transplanting my 3TB hybrid HDD into the new workstation as that cache is great for speeding up pbuilder builds. I’m hoping for this to work for me for the next 10 years, just as the Sun had been going for the last 10 years.

Somebody buy this guy a computer. But take the Sun case in exchange. That sucker's cool: It Died @iainlearmonth http://ow.ly/oLEI30fk0yN
-- @BrideOfLinux - 11:00 PM - 21 Sep 2017

For the right donation, I would be willing to consider shipping the rebooty Sun if you like cool looking paperweights (send me an email if you like). It’s pretty heavy though, just weighed it at 15kg. (:

Planet Linux Australiasthbrx - a POWER technical blog: NCSI - Nice Network You've Got There

A neat piece of kernel code dropped into my lap recently, and as a way of processing having to inject an entire network stack into my brain in less-than-ideal time I thought we'd have a look at it here: NCSI!

NCSI - Not the TV Show

NCSI stands for Network Controller Sideband Interface, and put most simply it is a way for a management controller (eg. a BMC like those found on our OpenPOWER machines) to share a single physical network interface with a host machine. Instead of two distinct network interfaces you plug in a single cable and both the host and the BMC have network connectivity.

NCSI-capable network controllers achieve this by filtering network traffic as it arrives and determining if it is host- or BMC-bound. To know how to do this the BMC needs to tell the network controller what to look out for, and from a Linux driver perspective this is the focus of the NCSI protocol.

NCSI Overview

Hi My Name Is 70:e2:84:14:24:a1

The major components of what NCSI helps facilitate are:

  • Network Controllers, known as 'Packages' in this context. There may be multiple separate packages which contain one or more Channels.
  • Channels, most easily thought of as the individual physical network interfaces. If a package is the network card, channels are the individual network jacks. (Somewhere a pedant's head is spinning in circles).
  • Management Controllers, or our BMC, with their own network interfaces. Hypothetically there can be multiple management controllers in a single NCSI system, but I've not come across such a setup yet.

NCSI is the medium and protocol via which these components communicate.

NCSI Packages

The interface between Management Controller and one or more Packages carries both general network traffic to/from the Management Controller as well as NCSI traffic between the Management Controller and the Packages & Channels. Management traffic is differentiated from regular traffic via the inclusion of a special NCSI tag inserted in the Ethernet frame header. These management commands are used to discover and configure the state of the NCSI packages and channels.

If a BMC's network interface is configured to use NCSI, as soon as the interface is brought up NCSI gets to work finding and configuring a usable channel. The NCSI driver at first glance is an intimidating combination of state machines and packet handlers, but with enough coffee it can be represented like this:

NCSI State Diagram

Without getting into the nitty gritty details the overall process for configuring a channel enough to get packets flowing is fairly straightforward:

  • Find available packages.
  • Find each package's available channels.
  • (At least in the Linux driver) select a channel with link.
  • Put this channel into the Initial Config State. The Initial Config State is where all the useful configuration occurs. Here we find out what the selected channel is capable of and its current configuration, and set it up to recognise the traffic we're interested in. The first and most basic way of doing this is configuring the channel to filter traffic based on our MAC address.
  • Enable the channel and let the packets flow.

At this point NCSI takes a back seat to normal network traffic, transmitting a "Get Link Status" packet at regular intervals to monitor the channel.

AEN Packets

Changes can occur from the package side too; the NCSI package communicates these back to the BMC with Asynchronous Event Notification (AEN) packets. As the name suggests these can occur at any time and the driver needs to catch and handle these. There are different types but they essentially boil down to changes in link state, telling the BMC the channel needs to be reconfigured, or to select a different channel. These are only transmitted once and no effort is made to recover lost AEN packets - another good reason for the NCSI driver to periodically monitor the channel.

Filtering

Each channel can be configured to filter traffic based on MAC address, broadcast traffic, multicast traffic, and VLAN tagging. Associated with each of these filters is a filter table which can hold a finite number of entries. In the case of the VLAN filter each channel could match against 15 different VLAN IDs for example, but in practice the physical device will likely support fewer. Indeed the popular BCM5718 controller supports only two!

This is where I dived into NCSI. The driver had a lot of the pieces for configuring VLAN filters, but none of it was actually hooked up in the configure state, and the driver didn't have a way of knowing which VLAN IDs were meant to be configured on the interface. The bulk of that work appears in this commit, where we take advantage of some useful network stack callbacks to get the VLAN configuration and set the IDs during the configuration state. Getting to the configuration state at some arbitrary time and then managing to assign multiple IDs was the trickiest bit, and is something I'll be looking at simplifying in the future.


NCSI! A neat way to give physically separate users access to a single network controller, and if it works right you won't notice it at all. I'll surely be spending more time here (fleshing out the driver's features, better error handling, and making the state machine a touch more readable to start, and I haven't even mentioned HWA), so watch this space!

,

Planet DebianClint Adams: PTT

“Hello,” said Adrian, but Adrian was lying.

“My name is Adrian,” said Adrian, but Adrian was lying.

“Today I took a pic of myself pulling a train,” announced Adrian.

Spaniard pulling a train

Posted on 2017-09-21
Tags: bgs

LongNowCassini Ends, but the Search for Life in the Solar System Continues

On September 15 02017, the Cassini-Huygens probe, which spent the last 13 years of a 20-year space mission studying Saturn, plummeted as planned into the ringed planet’s atmosphere, catching fire and becoming a meteor.

Cassini’s final moments, dubbed “The Grand Finale” by NASA, elicited reactions of wonder around the world. The stunning photographs Cassini captured of Saturn over the course of its mission were shared widely on social media. While the images understandably received most of the attention, the discoveries the probe made in its search for life in the solar system, especially on the Saturnian moons of Enceladus and Titan, will perhaps be its enduring legacy.

The atmosphere of Titan, a moon of Saturn. NASA/JPL-Caltech/Space Science Institute

Planetary scientist Carolyn Porco, who led the imaging team for the Cassini mission, spoke at Long Now in July 02017. In the Q&A, Stewart Brand asked Porco what the impact of finding life in the solar system would be.

As the Cassini mission came to an end, Porco shared her reflections on the mission in a final captain’s log:

Captain’s Log

September 15, 2017

The end is now upon us. Within hours of the posting of this entry, Cassini will have burned up in the atmosphere of Saturn … a kiloton explosion, spread out against the sky in a meteoric display of light and fire, a dazzling flash to signal the dying essence of a lone emissary from another world. As if the myths of old had foretold the future, the great patriarch will consume his child. At that point, that golden machine, so dutiful and strong, will enter the realm of history, and the toils and triumphs of this long march will be done.

For those of us appointed long ago to embark on this journey, it has been a taxing 3 decades, requiring a level of dedication that I could not have predicted, and breathless times when we sprinted for the duration of a marathon. But in return, we were blessed to spend our lives working and playing in that promised land beyond the Sun.

My imaging team members and I were especially blessed to serve as the documentarians of this historic epoch and return a stirring visual record of our travels around Saturn and the glories we found there. This is our gift to the citizens of planet Earth.

So, it is with both wistful, sentimental reflection and a boundless sense of pride, in a commitment met and a job well done, that I now turn to face this looming, abrupt finality.

It is doubtful we will soon see a mission as richly suited as Cassini return to this ringed world and shoulder a task as colossal as we have borne over the last 27 years.

To have served on this mission has been to live the rewarding life of an explorer of our time, a surveyor of distant worlds. We wrote our names across the sky. We could not have asked for more.

I sign off now, grateful in knowing that Cassini’s legacy, and ours, will include our mutual roles as authors of a tale that humanity will tell for a very long time to come.

Carolyn Porco
Cassini Imaging Team Leader
Director, CICLOPS
Boulder, CO
cpcomments@ciclops.org

A few hours before its mission came to an end, Cassini took a final photograph of the planet it spent the last thirteen years exploring.

NASA/JPL-Caltech/Space Science Institute


The topic of space invites long-term thinking.

Planet DebianEnrico Zini: Systemd Truelite course

These are the notes of a training course on systemd I gave as part of my work with Truelite.

There is quite a lot of material, so I split them into a series of posts, running once a day for the next 9 days.

Units

Everything managed by systemd is called a unit (see man systemd.unit), and each unit is described by a configuration in ini-style format.

For example, this unit continuously plays an alarm sound when the system is in emergency or rescue mode:

[Unit]
Description=Beeps when in emergency or rescue mode
DefaultDependencies=false
StopWhenUnneeded=true

[Install]
WantedBy=emergency.target rescue.target

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

Units can be described by configuration files, which have different extensions based on what kind of thing they describe:

  • .service: daemons
  • .socket: communication sockets
  • .device: hardware devices
  • .mount: mount points
  • .automount: automounting
  • .swap: swap files or partitions
  • .target: only dependencies, like Debian metapackages
  • .path: inotify monitoring of paths
  • .timer: cron-like activation
  • .slice: group processes for common resource management
  • .scope: group processes for common resource management

System unit files can be installed in:

  • /lib/systemd/system/: for units provided by packaged software
  • /run/systemd/system/: runtime-generated units
  • /etc/systemd/system/: for units provided by the system administrator

Unit files in /etc/ override unit files in /lib/. Note that while Debian uses /lib/, other distributions may use /usr/lib/ instead.

If there is a directory with the same name as the unit file plus a .d suffix, any file *.conf it contains is parsed after the unit, and can be used to add or override configuration options.

For example:

  • /lib/systemd/system/beep.service.d/foo.conf can be used to tweak the contents of /lib/systemd/system/beep.service, so it is possible for a package to distribute a tweak to the configuration of another package.
  • /etc/systemd/system/beep.service.d/foo.conf can be used to tweak the contents of /lib/systemd/system/beep.service, so it is possible for a system administrator to extend a packaged unit without needing to replace it entirely.
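As a concrete sketch, a sysadmin drop-in slowing down the beep of the unit shown earlier could look like this (the file name slower.conf is arbitrary; note that a list-type option like ExecStart must be cleared with an empty assignment before being replaced):

mkdir -p /etc/systemd/system/beep.service.d
cat > /etc/systemd/system/beep.service.d/slower.conf <<'EOF'
[Service]
# clear the original ExecStart, then provide the replacement
ExecStart=
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 10; done'
EOF
systemctl daemon-reload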

Similarly, a unitname.wants/ or unitname.requires/ directory can be used to extend Wants= and Requires= dependencies on other units, by placing symlinks to other units in them.


Cory DoctorowBoring, complex and important: the deadly mix that blew up the open web

On Monday, the World Wide Web Consortium published EME, a standard for locking up video on the web with DRM, allowing large corporate members to proceed without taking any steps to protect accessibility work, security research, archiving or innovation.


I spent years working to get people to pay attention to the ramifications of the effort, but was stymied by the deadly combination of an issue that was super-technical and complicated, as well as kind of boring (standards-making is a slow-moving, legalistic process).

This is really the worst kind of problem, an issue that matters but that requires a lot of technical knowledge and sustained attention to engage with. I wrote up a postmortem on the effort for Wired.


The W3C is a multistakeholder body based on consensus, and that means that members are expected to compromise to find common ground. So we returned with a much milder proposal: we’d stand down on objecting to EME, provided that the consortium promised only to invoke laws such as the DMCA in tandem with some other complaint, like copyright infringement. That meant studios and their technology partners could always sue when someone infringed copyright, or stole trade secrets, or interfered with contractual arrangements, but they would not be able to abuse the W3C process to claim the right to sue over otherwise legal activities, such as automatically analysing videos to prevent strobe effects from triggering seizures in people with photosensitive epilepsy.

This proposal was a way to get at the leadership’s objection: if the law was making the mischief, then let us take the law off the table (EFF is also suing the US government to get the law overturned, but that could take years, far too long in web-time). More importantly, if EME’s advocates refused to negotiate on this point, it would suggest that they planned on using the law to enforce “rights” that they really shouldn’t have, such as the right to decide who could adapt video for people with disabilities, or whether national archives could exercise their statutory rights to make deposit copies of copyrighted works.

But EME’s proponents – a collection of browser vendors, entertainment industry trade bodies, and companies selling products based on EME – refused to negotiate. After 90 days of desultory participation, the W3C leaders allowed the process to die. Despite this intransigence, the W3C executive renewed the EME working group’s charter and allowed it to continue its work, even as the cracks among the W3C’s membership on the standard’s fate deepened.

By the time EME was ready to publish, those cracks had deepened further. The poll results on EME showed the W3C was more divided on this matter than on any in its history. Again, the W3C leadership put its thumbs on the scales for the entertainment industry’s wish-lists over the open web’s core requirements, and overrode every single objection raised by the members.

Boring, complex and important: a recipe for the web’s dire future
[Cory Doctorow/Wired]

Krebs on SecurityExperian Site Can Give Anyone Your Credit Freeze PIN

An alert reader recently pointed my attention to a free online service offered by big-three credit bureau Experian that allows anyone to request the personal identification number (PIN) needed to unlock a consumer credit file that was previously frozen at Experian.


Experian’s page for retrieving someone’s credit freeze PIN requires little more information than has already been leaked by big-three bureau Equifax and a myriad other breaches.

The first hurdle for instantly revealing anyone’s freeze PIN is to provide the person’s name, address, date of birth and Social Security number (all data that has been jeopardized in breaches 100 times over — including in the recent Equifax breach — and that is broadly for sale in the cybercrime underground).

After that, one just needs to input an email address to receive the PIN and swear that the information is true and belongs to the submitter. I’m certain this warning would deter all but the bravest of identity thieves!

The final authorization check is that Experian asks you to answer four so-called “knowledge-based authentication” or KBA questions. As I have noted in countless stories published here previously, the problem with relying on KBA questions to authenticate consumers online is that so much of the information needed to successfully guess the answers to those multiple-choice questions is now indexed or exposed by search engines, social networks and third-party services online — both criminal and commercial.

What’s more, many of the companies that provide and resell these types of KBA challenge/response questions have been hacked in the past by criminals that run their own identity theft services.

“Whenever I’m faced with KBA-type questions I find that database tools like Spokeo, Zillow, etc are my friend because they are more likely to know the answers for me than I am,” said Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI).

The above quote from Mr. Weaver came in a story from May 2017 which looked at how identity thieves were able to steal financial and personal data for over a year from TALX, an Equifax subsidiary that provides online payroll, HR and tax services. Equifax says crooks were able to reset the 4-digit PIN given to customer employees as a password and then steal W-2 tax data after successfully answering KBA questions about those employees.

In short: Crooks and identity thieves broadly have access to the data needed to reliably answer KBA questions on most consumers. That is why this offering from Experian completely undermines the entire point of placing a freeze. 

After discovering this portal at Experian, I tried to get my PIN, but the system failed and told me to submit the request via mail. That’s fine and as far as I’m concerned the way it should be. However, I also asked my followers on Twitter who have freezes in place at Experian to test it themselves. More than a dozen readers responded in just a few minutes, and most of them reported success at retrieving their PINs on the site and via email after answering the KBA questions.

Here’s a sample of the KBA questions the site asked one reader:

1. Please select the city that you have previously resided in.

2. According to our records, you previously lived on (XXTH). Please choose the city from the following list where this street is located.

3. Which of the following people live or previously lived with you at the address you provided?

4. Please select the model year of the vehicle you purchased or leased prior to July 2017.

Experian will display the freeze PIN on its site, and offer to send it to an email address of your choice. Image: Rob Jacques.

I understand that people who place freezes on their credit files are prone to misplacing the PIN provided by the bureaus that is needed to unlock or thaw a freeze. This is human nature, and the bureaus should absolutely have a reliable process to recover this PIN. However, the information should be sent via snail mail to the address on the credit record, not via email to any old email address.

This is yet another example of how someone or some entity other than the credit bureaus needs to be put in charge of rethinking and rebuilding the process by which consumers apply for and manage credit freezes. I addressed some of these issues — as well as other abuses by the credit reporting bureaus — in the second half of a long story published Wednesday evening.

Experian has not yet responded to requests for comment.

While this service is disappointing, I stand by my recommendation that everyone should place a freeze on their credit files. I published a detailed Q&A a few days ago about why this is so important and how you can do it. For those wondering about whether it’s possible and advisable to do this for their kids or dependents, check out The Lowdown on Freezing Your Kid’s Credit.

CryptogramISO Rejects NSA Encryption Algorithms

The ISO has decided not to approve two NSA-designed block encryption algorithms: Speck and Simon. It's because the NSA is not trusted to put security ahead of surveillance:

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to "insert vulnerabilities into commercial encryption systems."

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a "back door" into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

"I don't trust the designers," Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden's papers. "There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards."

I don't trust the NSA, either.

Worse Than FailureTales from the Interview: The In-House Developer

James was getting anxious to land a job that would put his newly-minted Computer Science degree to use. Six months had come to pass since he graduated and being a barista barely paid the bills. Living in a small town didn't afford him many local opportunities, so when he saw a developer job posting for an upstart telecom company, he decided to give it a shot.

"We do everything in-house!" the posting for CallCom emphasized, piquing James' interest. He hoped that meant there would be a small in-house development team that built their systems from the ground up. Surely he could learn the ropes from them before becoming a key contributor. He filled out the online application and happily clicked Submit.

Not 15 minutes later, his phone rang with a number he didn't recognize. Usually he just ignored those calls but he decided to answer. "Hi, is James available?" a nasally female voice asked, almost sounding disinterested. "This is Janine with CallCom, you applied for the developer position."

Caught off guard by the suddenness of their response, James wasn't quite ready for a phone screening. "Oh, yeah, of course I did! Just now. I am very interested."

"Great. Louis, the owner, would like to meet with you," Janine informed him.

"Ok, sure. I'm pretty open, I usually work in the evenings so I can make most days work," he replied, checking his calendar.

"Can you be here in an hour?" she asked. James managed to hide the fact he was freaking out about how to make it in time while assuring her he could be.

He arrived at the address Janine provided after a dangerous mid-drive shave. He felt unprepared but eager to rock the interview. The front door of their suite gave way to a lobby that seemed more like a walk-in closet. Janine was sitting behind a small desk reading a trashy tabloid and barely looked up to greet him. "Louis will see you now," she motioned toward a door behind the desk and went back to reading barely plausible celebrity rumors.

James stepped through the door into what could have been a walk-in closet for the first walk-in closet. A portly, sweaty man presumed to be Louis jumped up to greet him. "John! Glad you could make it on short notice. Have a seat!"

"Actually, it's James..." he corrected Louis, while also forgiving the mixup. "Nice to meet you. I was eager to get here to learn about this opportunity."

"Well James, you were right to apply! We are a fast growing company here at CallCom and I need eager young talent like you to really drive it home!" Louis was clearly excited about his company, growing sweatier by the minute.

"That sounds good to me! I may not have any real-world experience yet, but I assure you that I am eager to learn from your more senior members," James replied, trying to sell his potential.

Louis let out a hefty chuckle at James' mention of senior members. "Oh you mean stubborn old developers who are set in their ways? You won't be finding those around here! I believe in fresh young minds like yours, unmolded and ready to take the world by storm."

"I see..." James said, growing uneasy. "I suppose then I could at least learn how your code is structured from your junior developers? The ones who do your in-house development?"

Louis wiped his glistening brow with his suit coat before making the big revelation. "There are no other developers, James. It would just be you, building our fantastic new computer system from scratch! I have all the confidence in the world that you are the man for the job!"

James sat for a moment and pondered what he had just heard. "I'm sorry but I don't feel comfortable with that arrangement, Louis. I thought that by saying you do everything in-house, that implied there was already a development team."

"What? Oh, heavens no! In-house development means we let you work from home. Surely you can tell we don't have much office space here. So that's what it means. In. House. Got it?

James quickly thanked Louis for his time and left the interconnected series of closets. In a way, James was glad for the experience. It motivated him to move out of his one-horse town to a bigger city where he eventually found employment with a real in-house dev team.


Planet DebianIain R. Learmonth: It Died

On Sunday, in my weekly report on my free software activities, I wrote about how sustainable my current level of activities is. I had identified the risk that the computer that I use for almost all of my free software work was slowly dying. Last night it entered an endless reboot loop and subsequent efforts to save it have failed.

I cannot afford to replace this machine and my next best machine has half the cores, half the RAM and less than half of the screen real estate. As this is going to be a serious hit to my productivity, I need to seriously consider if I am able to continue to maintain the number of packages I currently do in Debian.

Update: Thank you for all the responses I’ve received on this post. While I have not yet resolved the situation, the level of response has me very confident that I will not have to orphan any packages and I should be back to work soon.

The Sun Ultra 24

Krebs on SecurityEquifax Breach: Setting the Record Straight

Bloomberg published a story this week citing three unnamed sources who told the publication that Equifax experienced a breach earlier this year which predated the intrusion that the big-three credit bureau announced on Sept. 7. To be clear, this earlier breach at Equifax is not a new finding and has been a matter of public record for months. Furthermore, it was first reported on this Web site in May 2017.

In my initial Sept. 7 story about the Equifax breach affecting more than 140 million Americans, I noted that this was hardly the first time Equifax or another major credit bureau has experienced a breach impacting a significant number of Americans.

On May 17, KrebsOnSecurity reported that fraudsters exploited lax security at Equifax’s TALX payroll division, which provides online payroll, HR and tax services.

That story was about how Equifax’s TALX division let customers who use the firm’s payroll management services authenticate to the service with little more than a 4-digit personal identification number (PIN).

Identity thieves who specialize in perpetrating tax refund fraud figured out that they could reset the PINs of payroll managers at various companies just by answering some multiple-guess questions — known as “knowledge-based authentication” or KBA questions — such as previous addresses and dates that past home or car loans were granted.

On Tuesday, Sept. 18, Bloomberg ran a piece with reporting from no fewer than five journalists there who relied on information provided by three anonymous sources. Those sources reportedly spoke in broad terms about an earlier breach at Equifax, and told the publication that these two incidents were thought to have been perpetrated by the same group of hackers.

The Bloomberg story did not name TALX. Only post-publication did Bloomberg reporters update the piece to include a statement from Equifax saying the breach was unrelated to the hack announced on Sept. 7, and that it had to do with a security incident involving a payroll-related service during the 2016 tax year.

I have thus far seen zero evidence that these two incidents are related. Equifax has said the unauthorized access to customers’ employee tax records (we’ll call this “the March breach” from here on) happened between April 17, 2016 and March 29, 2017.

The criminals responsible for unauthorized activity in the March breach were participating in an insidious but common form of cybercrime known as tax refund fraud, which involves filing phony tax refund requests with the IRS and state tax authorities using the personal information from identity theft victims.

My original report on the March breach was based on public breach disclosures that Equifax was required by law to file with several state attorneys general.

Because the TALX incident exposed the tax and payroll records of its customers’ employees, the victim customers were in turn required to notify their employees as well. That story referenced public breach disclosures from five companies that used TALX, including defense contractor giant Northrop Grumman; staffing firm Allegis Group; Saint-Gobain Corp.; Erickson Living; and the University of Louisville.

When asked Tuesday about previous media coverage of the March breach, Equifax pointed National Public Radio (NPR) to coverage in KrebsOnSecurity.

One more thing before I move on to the analysis. For more information on why KBA is a woefully ineffective method of stopping fraudsters, see this story from 2013 about how some of the biggest vendors of these KBA questions were all hacked by criminals running an identity theft service online.

Or, check out these stories about how tax refund fraudsters used weak KBA questions to steal personal data on hundreds of thousands of taxpayers directly from the Internal Revenue Service‘s own Web site. It’s probably worth mentioning that Equifax provided those KBA questions as well.

ANALYSIS

Over the past two weeks, KrebsOnSecurity has received an unusually large number of inquiries from reporters at major publications who were seeking background interviews so that they could get up to speed on Equifax’s spotty security history (sadly, Bloomberg was not among them).

These informational interviews — in which I agree to provide context and am asked to speak mainly on background — are not unusual; I sometimes field two or three of these requests a month, and very often more when time permits. And for the most part I am always happy to help fellow journalists make sure they get the facts straight before publishing them.

But I do find it slightly disturbing that there appear to be so many reporters on the tech and security beats who apparently lack basic knowledge about what these companies do and their roles in perpetuating — not fighting — identity theft.

It seems to me that some of the world’s most influential publications have for too long given Equifax and the rest of the credit reporting industry a free pass — perhaps because of the complexities involved in succinctly explaining the issues to consumers. Indeed, I would argue the mainstream media has largely failed to hold these companies’ feet to the fire over a pattern of lax security and a complete disregard for securing the very sensitive consumer data that drives their core businesses.

To be sure, Equifax has dug themselves into a giant public relations hole, and they just keep right on digging. On Sept. 8, I published a story equating Equifax’s breach response to a dumpster fire, noting that it could hardly have been more haphazard and ill-conceived.

But I couldn’t have been more wrong. Since then, Equifax’s response to this incident has been even more astonishingly poor.

EQUIPHISH

On Tuesday, the official Equifax account on Twitter replied to a tweet requesting the Web address of the site that the company set up to give away its free one-year of credit monitoring service. That site is https://www.equifaxsecurity2017.com, but the company’s Twitter account told users to instead visit securityequifax2017[dot]com, which is currently blocked by multiple browsers as a phishing site.

FREEZING UP

Under intense public pressure from federal lawmakers and regulators, Equifax said that for 30 days it would waive the fee it charges for placing a security freeze on one’s credit file (for more on what a security freeze entails and why you and your family should be freezing your files, please see The Equifax Breach: What You Should Know).

Unfortunately, the free freeze offer from Equifax doesn’t mean much if consumers can’t actually request one via the company’s freeze page; I have lost count of how many comments have been left here by readers over the past week complaining of being unable to load the site, let alone successfully obtain a freeze. Instead, consumers have been told to submit the requests and freeze fees in writing and to include copies of identity documents to validate the requests.

Sen. Elizabeth Warren (D-Mass) recently introduced a measure that would force the bureaus to eliminate the freeze fees and to streamline the entire process. To my mind, that bill could not get passed soon enough.

Understand that each credit bureau has a legal right to charge up to $20 in some states to freeze a credit file, and in many states they are allowed to charge additional fees if consumers later wish to lift or temporarily thaw a freeze. This is especially rich given that credit bureaus earn roughly $1 every time a potential creditor (or identity thief) inquires about your creditworthiness, according to Avivah Litan, a fraud analyst with Gartner Inc.

In light of this, it’s difficult to view these freeze fees as anything other than a bid to discourage consumers from filing them.

The Web sites where consumers can go to file freezes at the other major bureaus — including TransUnion and Experian — have hardly fared any better since Equifax announced the breach on Sept. 7. Currently, if you attempt to freeze your credit file at TransUnion, the company’s site is relentless in trying to steer you away from a freeze and toward the company’s free “credit lock” service.

That service, called TrueIdentity, claims to allow consumers to lock or unlock their credit files for free as often as they like with the touch of a button. But readers who take the bait probably won’t notice or read the terms of service for TrueIdentity, which has the consumer agree to a class action waiver, a mandatory arbitration clause, and something called ‘targeted marketing’ from TransUnion and their myriad partners.

The agreement also states TransUnion may share the data with other companies:

“If you indicated to us when you registered, placed an order or updated your account that you were interested in receiving information about products and services provided by TransUnion Interactive and its marketing partners, or if you opted for the free membership option, your name and email address may be shared with a third party in order to present these offers to you. These entities are only allowed to use shared information for the intended purpose only and will be monitored in accordance with our security and confidentiality policies. In the event you indicate that you want to receive offers from TransUnion Interactive and its marketing partners, your information may be used to serve relevant ads to you when you visit the site and to send you targeted offers. For the avoidance of doubt, you understand that in order to receive the free membership, you must agree to receive targeted offers.”

TransUnion then encourages consumers who are persuaded to use the “free” service to subscribe to “premium” services for a monthly fee with a perpetual auto-renewal.

In short, TransUnion’s credit lock service (and a similarly named service from Experian) doesn’t prevent potential creditors from accessing your files, and these dubious services allow the credit bureaus to keep selling your credit history to lenders (or identity thieves) as they see fit.

As I wrote in a Sept. 11 Q&A about the Equifax breach, I take strong exception to the credit bureaus’ increasing use of the term “credit lock” to divert people away from freezes. Their motives for saddling consumers with even more confusing terminology are suspect, and I would not count on a credit lock to take the place of a credit freeze, regardless of what these companies claim (consider the source).

Experian’s freeze Web site has performed little better since Sept. 7. Several readers pinged KrebsOnSecurity via email and Twitter to complain that while Experian’s freeze site repeatedly returned error messages stating that the freeze did not go through, these readers’ credit cards were nonetheless charged $15 freeze fees multiple times.

If the above facts are not enough to make your blood boil, consider that Equifax and other bureaus have been lobbying lawmakers in Congress to pass legislation that would dramatically limit the ability of consumers to sue credit bureaus for sloppy security, and cap damages in related class action lawsuits to $500,000.

If ever there was an industry that deserved obsolescence or at least more regulation, it is the credit bureaus. If either of those outcomes is to become reality, it is going to take much more attentive and relentless coverage on the part of the world’s top news publications. That’s because there’s a lot at stake here for an industry that lobbies heavily (and successfully) against any new laws that may restrict their businesses.

Here’s hoping the media can get up to speed quickly on this vitally important topic, and help lead the debate over legal and regulatory changes that are sorely needed.

,

Planet DebianSteve Kemp: Retiring the Debian-Administration.org site

So previously I've documented the setup of the Debian-Administration website, and now that I'm going to retire it, I'm planning how that will work.

There are currently 12 servers powering the site:

  • web1
  • web2
  • web3
  • web4
    • These perform the obvious role, serving content over HTTPS.
  • public
    • This is a HAProxy host which routes traffic to one of the four back-ends.
  • database
    • This stores the site-content.
  • events
    • There was a simple UDP-based protocol which sent notices here, from various parts of the code.
    • e.g. "Failed login for bob from 1.2.3.4" (a sketch of sending such a notice appears after this list).
  • mailer
    • Sends out emails. ("You have a new reply", "You forgot your password..", etc)
  • redis
    • This stored session-data, and short-term cached content.
  • backup
    • This contains backups of each host, via Obnam.
  • beta
    • A test-install of the codebase
  • planet
    • The blog-aggregation site

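A notice like that is a single fire-and-forget datagram, so the sending side needs nothing more than netcat. A minimal sketch (the host name events and port 5514 are invented stand-ins, as the real protocol details aren't documented here):

    # Send one UDP datagram and exit; host and port are hypothetical.
    echo "Failed login for bob from 1.2.3.4" | nc -u -w1 events 5514
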
I've made a bunch of commits recently to drop the event-sending, since no more dynamic actions will be possible. So events can be retired immediately. redis will go when I turn off logins, as there will be no need for sessions/cookies. beta is only used for development, so I'll kill that too. Once logins are gone, and anonymous content is disabled there will be no need to send out emails, so mailer can be shutdown.

That leaves a bunch of hosts left:

  • database
    • I'll export the database and kill this host.
    • I will install mariadb on each web-node, and each host will be configured to talk to localhost only (a rough sketch follows this list)
    • I don't need to worry about four databases receiving diverging content as updates will be disabled.
  • backup
  • planet
    • This will become orphaned, so I think I'll just move the content to the web-nodes.

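As flagged in the database item above, the per-node change is small. A rough sketch, assuming Debian-default paths and invented database/dump names:

    # Sketch only: give a web-node its own local MariaDB.
    apt-get install -y mariadb-server

    # Bind the daemon to loopback so only localhost can connect
    # (50-server.cnf is the usual Debian location; verify on the host).
    sed -i 's/^bind-address.*/bind-address = 127.0.0.1/' \
        /etc/mysql/mariadb.conf.d/50-server.cnf
    systemctl restart mariadb

    # Recreate and import the exported site content (names are hypothetical).
    mysqladmin create debadmin
    mysql debadmin < site-dump.sql
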
All in all I think we'll just have five hosts left:

  • public to do the routing
  • web1-web4 to do the serving.

I think that's sane for the moment. I'm still pondering whether to export the code to static HTML; there's a lot of appeal, as the load would drop a lot, but equally I have a hell of a lot of mod_rewrite redirections in place, and reworking all of them would be a pain. I suspect this is something that will be done in the future, maybe next year.
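
To give a flavour of what that rework would involve, here is a hypothetical example of the kind of rule that would need revisiting if the dynamic pages became static files. The URL layout here is invented, not taken from the real site:

    # Hypothetical: map an old dynamic article URL onto a pre-rendered file.
    RewriteEngine On
    RewriteRule ^article/([0-9]+)(/.*)?$ /static/article/$1.html [R=301,L]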

TEDHurricanes, monsoons and the human rights of climate change: TEDWomen chats with Mary Robinson

Mary Robinson speaks at TEDWomen 2015 at the Monterey Conference Center. Photo: Marla Aufmuth/TED

Two years ago, former president of Ireland Mary Robinson graced the TEDWomen stage with a moving talk about why climate change is not only a threat to our environment, but also a threat to the human rights of many poor and marginalized people around the world.

Mary is an incredible person who inspires me greatly. Besides being the first woman president of Ireland, she also served as the UN High Commissioner for Human Rights from 1997 to 2002. She now leads a foundation devoted to climate justice. She received the Presidential Medal of Freedom from President Obama, is a member of the Elders, a former Chair of the Council of Women World Leaders and a member of the Club of Madrid.

“I came to [be concerned about] climate change not as a scientist or an environmental lawyer,” she told the TEDWomen crowd in California. “It was because of the impact on people, and the impact on their rights — their rights to food and safe water, health, education and shelter.”

She told stories of the people she met in her work with the United Nations and later on in her foundation work. When explaining the challenges they faced, she said they kept repeating the same pervasive sentence: “Oh, but things are so much worse now, things are so much worse.” She came to realize that they were talking about the same phenomenon — climate shocks and changes in the weather that were threatening their crops, their livelihood and their survival.

In the wake of Hurricanes Harvey and Irma in the United States, and extreme monsoons in South Asia, I reached out to Mary to get an update on her work and where things stand now in terms of climate justice and the global fight to curb climate change. Despite a busy week attending this week’s United Nations General Assembly and other events, she took the time to answer my questions via email.

Horrific hurricanes like Harvey, Irma and now Maria are bringing the issue of climate change to the doorsteps of a country that recently dropped out of the Paris Climate agreement. What would you say to Americans about climate change and the actions of their government in 2017?

Mary Robinson: In the past few weeks alone, we have seen the physical, social and economic devastation wrought on some American cities and vulnerable communities across the Caribbean by Hurricanes Harvey and Irma, and the death and destruction caused by monsoons across South Asia. The American people know from previous experience, such as Hurricane Katrina in 2005, that some people affected will be displaced from their homes forever. Many of these displaced people are drawn to cities, but the capacity to integrate these new arrivals in a manner consistent with their human rights and dignity is often woefully inadequate — reflecting an equally inadequate response from political leaders.

The profound injustice of climate change is that those who are most vulnerable in society, no matter the level of development of the country in question, will suffer most. People who are marginalised or poor, women, and indigenous communities are being disproportionately affected by climate impacts.

And yet, in the US the debate as to whether climate change is real or not continues in mainstream discourse. Throughout the world, baseless climate denial has largely disappeared into the fringes of public debate as the focus has shifted to how countries should act to avoid the potentially disastrous consequences of unchecked climate change. For many years, the US has positioned itself as a global leader in science and technology and yet in seeking to leave or renegotiate the Paris Agreement, the current administration is taking a giant leap backwards, both in terms of science-based policy making and in terms of international solidarity and cooperation.

However, while the national government is going backwards, we are seeing citizens and leaders across the country picking up the slack. I see many American people who remain determined to ensure the US plays its role in the fight against climate change. For Americans who are rightly concerned about the administration’s direction on climate change, I would say that there are still many reasons to be optimistic. The “We’re Still In” initiative offers a tangible demonstration of that desire on the part of concerned citizens to ensure that the US emerges as a leader on climate action, regardless of the approach of the current administration. States, cities, universities and businesses are committing to ambitious action to tackle climate change, to ensure clean and efficient energy services and uphold US commitments under the Paris Agreement.

As you pointed out in your TED Talk, the people who are suffering the most from climate change are those who don’t have the means to escape catastrophic events or rebuild after they have occurred. Can you talk a bit about efforts your organization and others are involved in to help those who are the most affected by climate change, but often are the least responsible for the human actions that have caused it?

As with many of the most severe storms to impact communities in recent years – including in the US with Katrina, Sandy and Ike – it is the poorest people who have suffered the worst impacts from Harvey and Irma. The people who the climate justice movement is for are the people who have the least capacity to protect themselves, their families, their homes and their incomes from the impacts of climate change, and indeed climate action policies that are not grounded in human rights. These are also the people who have the hardest time rebuilding their lives in the wake of these more frequent and intense disasters as they do not have adequate access to insurance, savings or other livelihood options necessary to provide resilience. In many cases, families lose everything.

If we then consider the devastation wrought by Irma in the Caribbean, where poverty rates are much higher than the US, we begin to understand the great injustice of climate change. People living around the world, in communities which have never seen the benefits of industrialization or even electrification, face the harshest impacts of climate change and have the most limited capacity to recover.

In seeking to advance climate justice, my foundation and other organizations which share our concerns, seek to ensure that the voices of these communities are heard and understood by those crafting the global and national response to the climate crisis to ensure that decisions are participatory, transparent and respond to the needs of the most vulnerable people in our communities. We must enable all people to realize their right to development and to benefit from the global transition to a sustainable, cleaner and more equitable future. Solutions to the climate crisis that are good for the planet but cause further suffering for people living in poverty must not be implemented.

What is the number one issue involving climate change that we should all be focused on right now as regards human rights and climate justice in the world?

There are many pressing issues which must be addressed to advance climate justice. For instance, over one billion people today live in energy poverty. The global community must ensure that appropriate financing and renewable technologies are available to allow all people to enjoy the benefits of electrification sustainably. Similarly, a compendium of evidence-based climate solutions published this summer highlighted that the most effective approach to reducing greenhouse gas emissions is through educating girls and providing for family planning*. Climate change impacts women differently to men and exacerbates existing inequalities. Empowering women and girls in the global response to climate change will result in a fairer world and better climate outcomes. This must begin by ensuring women are enabled to meaningfully participate in decision-making processes related to climate action throughout the world.

Given the recent storms and resulting devastation, one of the most pressing issues to be addressed regarding the rights of those most vulnerable to climate change is the need to ensure the necessary protections are in place for people displaced by worsening climate impacts. There can be no doubt that climate change is a driver of migration and migration owing to climate impacts will increase in the coming years. Increasingly severe and frequent catastrophic storms or slow onset events like recurrent drought, sea level rise or ocean acidification, will result in people’s livelihoods collapsing, forcing them to seek better futures elsewhere. The scale of potential future migration as a result of climate change must not be underestimated. In order to ensure that the global community is prepared to protect the wellbeing and dignity of people displaced by climate change, concrete steps must be taken now. It would be very important that the Global Compact on Migration and Refugees, currently being negotiated at the UN, recognizes the challenge of addressing displacement resulting from climate change.

In a speech earlier this month, you talked about some of the innovative ideas that are being broached around the world to address climate change and you said, “The existential threat of climate change confronts us with our global interdependence. It cannot be seen as anything other than a global problem, and each nation must play an appropriate part to tackle it.” What do you think is the most important thing the US must do to address the problem?

The US must continue to support international action on climate change. No country alone can protect its citizens from the impacts of climate change – it will only be through unprecedented international solidarity, backed up by financial and technological support, that some of the most vulnerable countries will be able to chart a sustainable development pathway for their country. It is in the interests of the US to provide this support.

Without it, developing countries are faced with a choice between prohibitively expensive sustainable development and readily accessible fossil fuel based development. They will choose the latter and who would blame them – they need to lift large numbers of their people out of poverty and provide essential services like health care, education and fresh water – without international support, they will have no choice but to use fossil fuels. This would result in even more intense Atlantic hurricanes, longer and more severe drought across the western US and the inundation of coastal cities from sea level rise. In order to protect American citizens, the US must play their role as a global citizen. Solidarity and interdependence are not new ideas, but in the current climate of rising nationalism, they are innovative and potentially transformative.

What are some of the innovative solutions that you are seeing around the world that we should know about? 

When we think about innovation, we usually focus on technology. However, most of the technologies we need to avert the climate crisis are already available to us. What is lacking is the political will to enact the necessary global transition to a safer and fairer future for all. Perhaps we should be more focused on innovation in terms of global governance.

For instance, in some countries like Wales and Hungary there is an office that represents the interests of future generations in national decision making. When viewed through an intergenerational lens, the urgent need to ensure sustainable development for all people and stabilize the climate becomes clear. Decisions taken today that undermine the wellbeing of future generations become inexcusable. Intergenerational equity can help to inform decision making at the international level as well, and provide a unifying focus for international negotiations. It is a universal principle that informs constitutions, international treaties, economies, religious beliefs, traditions and customs. Putting this principle into action and allowing it to inform how we negotiate and govern would be a very innovative change.

What can regular people do to fight climate change and work for environmental justice?

I believe the most important thing a person can do is to appreciate their role as a global citizen. Ultimately, the fight against climate change will not be won by a technological silver bullet or a mass recycling campaign, but rather by an appreciation among all people that we have to live sustainably with the Earth and with each other. We need empathy for those communities on the front lines of climate change, and for those seeking to realise their right to development in the midst of a changing climate, and this empathy must help to guide how we act, how we consume and how we vote.

Watch Mary’s TED Talk and visit her website to find out more about her work and how you can get involved.

I also want to mention that registration for TEDWomen 2017 is open, so if you haven’t registered yet, please click this link and apply today — space is limited and I don’t want you to miss out. This year, TEDWomen will be held November 1–3 in New Orleans. The theme is Bridges: We build them, we cross them, and sometimes we even burn them. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!

– Pat

* Hawken, P. (2017) Drawdown: The most comprehensive plan ever proposed to reverse global warming


TEDStanding for art and truth: A chat with Sethembile Msezane

Standing for four hours on a platform in the scorching sun, Sethembile Msezane embodied the bird spirit of Chapungu, raising and lowering her wings, as a statue of Cecil Rhodes was lifted by crane off its own platform behind her. The work is based in her research and scholarship, while the imagery of Chapungu first came to her in a dream. “By the time I came down, I was shaking and experiencing sunstroke. But I also felt a burst of life inside.”

Sethembile Msezane’s sculptures are not made of clay, granite or marble. She is the sculpture, as you will see in her talk — which you can watch right now before you read this Q&A. We’ll wait.

The fragility of the medium combined with the power of her messages makes for performances that literally stop people in their tracks and elicit strong reactions. I ask Msezane about what goes into her productions, and about the practical realities of physically embodying artwork that offers a powerful and often uncomfortable commentary on the reality of being a black woman in post-apartheid South Africa.

That was a great and moving talk — congratulations! How do you feel?

Thank you! It’s been a positively overwhelming experience. To have an idea, allow it to manifest through various experiments and for other people to identify with it even if it’s years later after its inception is encouraging.

The crowd at TED conferences is a fairly progressive one, but how would you describe the broader reception of your art, both on the site of your performance and off it?

Well, there’s always different responses to my work. Sometimes people focus on only scraping the surface of my practice by focusing on the female body, choosing to exoticise, sexualise or even moralise it. But then something interesting begins to happen when they start to ‘see’ the person inside the body in relation to symbols in the landscape and in dress. At times, their own insecurities become revealed to them. They start to comment on the society we live in and the effects of symbols such as statues living among us.

Putting your body out there as vessel for your messages is incredibly brave. Have you ever felt like you were in physical danger during any of your performances?

Yes, there’s always an anxiety just being a regular woman walking down the street. So when my body is standing on a plinth in public spaces, this is not a foreign feeling. Sometimes I’m surrounded by crowds, and there’s movement that could cause me to fall off. At times people touch my body, which of course is not welcome. This speaks to how we, particularly men, have been socialised to think they are entitled to women’s bodies.

I remember one time, however, when I was more scared for a colleague and friend of mine who was filming my performance The Charter. A man was passing by and noticed the performance. He started spewing out all kinds of hatred in relation to my body and the symbolic gestures being performed in that space. His hatred grew and he started displaying his prejudice and homophobia by insulting my friend. He didn’t physically harm us, but he used his words as a weapon, and that cut deep.

An image from The Charter (2016).

Could you describe what goes into each performance? Conceptualisation? Writing? Research? Staking out the location? Help with pictures and video?

My process is never constant; various circumstances come into play in formulating the performance.

I guess in the beginning I’d get fixated on an idea and start doing more research about it…online, books, films, magazines, music etc. Concurrently, I begin to source materials and costumes to construct wearable sculptures in my studio. In between sourcing materials, I make site visits, interview people and write my observations to formulate a solid concept.

I think now I realise not all of it was based solely on research — some of it was intuitive or came about in my dreams. I’d try to connect with the figure I’d be embodying on the day of the performance. This happened at home in front of the mirror. This process would be carried out from the beginning of thinking about ‘her’ towards the very end on the day of the performance.

Which was the most difficult performance to enact?

I’d have to say it’s between Untitled (Youth Day) 2014 and Chapungu: The Day Rhodes Fell (2015). Untitled (Youth Day) 2014 was just over an hour, but the books stacked on my head were compressing my vertebrae, which really hurt, and I couldn’t take breaks in between.

Chapungu: The Day Rhodes Fell (2015), on the other hand, was longer: nearly 4 hours. Standing on 6-inch stilettos that long can’t be healthy. My toes were blue; they didn’t feel like my own. The plinth I was standing on was placed on a set of stairs, and people were standing around the plinth. The positioning was quite precarious.

It was scorching that day (I think it was 32 degrees Celsius), and a lot of my body was exposed. I kept my arms outstretched for about 10 minutes at a time and rested for about 5 minutes. I went between many states of consciousness being Chapungu; but also being myself, Sethembile, I was deeply in pain, fatigued, dehydrated and more. Meditating, remembering why I was there and allowing the spirit of Chapungu to be present kept me going. By the time I came down, I was shaking and experiencing sunstroke. But I also felt a burst of life inside.

For Untitled (Heritage Day) in 2014, Sethembile created a character based on her own Zulu traditions, and posed silently in front of a statue of Louis Botha, creating a rich dialogue between South Africa’s colonial, apartheid-era history and her own. For Untitled (Youth Day) 2014, at right, she stood for just over an hour with books stacked on her head, her face masked.

Which performance has affected you the most?

That’s like asking which one of my children is my favourite haha. I can’t really say, they all have contributed to my thinking and where I am in my outlook and career right now. I’ve learned valuable lessons in each performance, because in essence they comment on the societies I’ve found myself in; these spaces and people can be complex. Ultimately, I learned more about being a woman in physical space (both public and private) but also within the spiritual realm, which is very present in my daily life.

What more can we look forward to from Sethembile?

I’m looking forward to the opening of Zeitz Museum of Contemporary Art Africa (MOCAA) this September, where select pieces of my work that are part of their collection will be showing. One of my favorite pieces, Signal Her Return I (2015–2016), a living sound installation with a sea of lit candles, an 18th-century bell and long braid of hair, will also be featuring. After that I’m headed to Finland for the ANTI Festival International Prize for Live Art award ceremony where I’m one of four nominees.

That’s as much as I’m willing to reveal for now. Keep following, you won’t be disappointed …


Sociological ImagesWhat’s Trending? The Crime Drop

Over at Family Inequality, Phil Cohen has a list of demographic facts you should know cold. They include basic figures like the US population (326 million), and how many Americans have a BA or higher (30%). These got me thinking—if we want to have smarter conversations and fight fake news, it is also helpful to know which way things are moving. “What’s Trending?” is a post series at Sociological Images with quick looks at what’s up, what’s down, and what sociologists have to say about it.

The Crime Drop

You may have heard about a recent spike in the murder rate across major U.S. cities last year. It was a key talking point for the Trump campaign on policing policy, but it also may be leveling off. Social scientists can also help put this bounce into context, because violent and property crimes in the U.S. have been going down for the past twenty years.

You can read more on the social sources of this drop in a feature post at The Society Pages. Neighborhood safety is a serious issue, but the data on crime rates doesn’t always support the drama.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianDirk Eddelbuettel: pinp 0.0.2: Onwards

A first update 0.0.2 of the pinp package arrived on CRAN just a few days after the initial release.

We added a new vignette for the package (see below), extended a few nice features, and smoothed a few corners.

The NEWS entry for this release follows.

Changes in pinp version 0.0.2 (2017-09-20)

  • The YAML segment can be used to select font size, one-or-two column mode, one-or-two side mode, linenumbering and watermarks (#21 and #26 addressing #25) (see the sketch after this list)

  • If pinp.cls or jss.bst are not present, they are copied in (#27 addressing #23)

  • Output is now in shaded framed boxen too (#29 addressing #28)

  • Endmatter material is placed in template.tex (#31 addressing #30)

  • Expanded documentation of YAML options in skeleton.Rmd and clarified available one-column option (#32).

  • Section numbering can now be turned on and off (#34)

  • The default bibliography style was changed to jss.bst.

  • A short explanatory vignette was added.

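As the first NEWS bullet notes, those switches live in the document's YAML header. A sketch of a header using them; the option names below are inferred from the NEWS wording rather than copied from the package, so treat the shipped skeleton.Rmd as authoritative:

    # Hypothetical pinp front matter; option names are assumptions.
    output: pinp::pinp
    fontsize: 9pt          # font size
    one_column: true       # one- or two-column mode
    one_sided: true        # one- or two-side mode
    lineno: true           # line numbering
    watermark: true        # draft watermark
    numbersections: true   # section numbering on/off (cf. #34)
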
Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramWhat the NSA Collects via 702

New York Times reporter Charlie Savage writes about some bad statistics we're all using:

Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act ("Section 702") warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).

These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.

Worth reading for the details.

Worse Than FailureCodeSOD: A Dumbain Specific Language

I’ve had to write a few domain-specific-languages in the past. As per Remy’s Law of Requirements Gathering, it’s been mostly because the users needed an Excel-like formula language. The danger of DSLs, of course, is that they’re often YAGNI in the extreme, or at least a sign that you don’t really understand your problem.

XML, coupled with schemas, is a tool for building data-focused DSLs. If you have some complex structure, you can convert each of its features into an XML attribute. For example, if you had a grammar that looked something like this:

The Source specification obeys the following syntax

source = ( Feature1+Feature2+... ":" ) ? steps

Feature1 = "local" | "global"

Feature2 ="real" | "virtual" | "ComponentType.all"

Feature3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

Feature4 = "first" | "last" | "DayAllocation.all"

If features are specified, the order of features as given above has strictly to be followed.

steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

oneOrMoreNameSteps = nameStep ( "." nameStep ) *

zeroOrMoreNameSteps = ( nameStep "." ) *

nameStep = "#" name

name is a string of characters from "A"-"Z", "a"-"z", "0"-"9", "-" and "_". No umlauts allowed, one character is minimum.

componentSteps is a list of valid values, see below.

Valid 'componentSteps' are:

- GlobalValue
- Product
- Product.Brand
- Product.Accommodation
- Product.Accommodation.SellingAccom
- Product.Accommodation.SellingAccom.Board
- Product.Accommodation.SellingAccom.Unit
- Product.Accommodation.SellingAccom.Unit.SellingUnit
- Product.OnewayFlight
- Product.OnewayFlight.BookingClass
- Product.ReturnFlight
- Product.ReturnFlight.BookingClass
- Product.ReturnFlight.Inbound
- Product.ReturnFlight.Outbound
- Product.Addon
- Product.Addon.Service
- Product.Addon.ServiceFeature

In addition to that all subsequent steps from the paths above are permitted, that is 'Board', 
'Accommodation.SellingAccom' or 'SellingAccom.Unit.SellingUnit'.
'Accommodation.Unit' in the contrary is not permitted, as here some intermediate steps are missing.

You could turn that grammar into an XML document by converting syntax elements to attributes and elements. You could do that, but Stella’s predecessor did not do that. That of course, would have been work, and they may have had to put some thought on how to relate their homebrew grammar to XSD rules, so instead they created an XML schema rule for SourceAttributeType that verifies that the data in the field is valid according to the grammar… using regular expressions. 1,310 characters of regular expressions.

<xs:simpleType>
    <xs:restriction base="xs:string">
            <xs:pattern value="(((Scope.)?(global|local|current)\+?)?((((ComponentType.)?(real|virtual))|ComponentType.all)\+?)?((((Hierarchy.)?(self|ancestors|descendants))|Hierarchy.all)\+?)?((((DayAllocation.)?(first|last))|DayAllocation.all)\+?)?:)?(#[A-Za-z0-9\-_]+(\.(#[A-Za-z0-9\-_]+))*|(#[A-Za-z0-9\-_]+\.)*(ThisComponent|GlobalValue|Product|Product\.Brand|Product\.Accommodation|Product\.Accommodation\.SellingAccom|Product\.Accommodation\.SellingAccom\.Board|Product\.Accommodation\.SellingAccom\.Unit|Product\.Accommodation\.SellingAccom\.Unit\.SellingUnit|Product\.OnewayFlight|Product\.OnewayFlight\.BookingClass|Product\.ReturnFlight|Product\.ReturnFlight\.BookingClass|Product\.ReturnFlight\.Inbound|Product\.ReturnFlight\.Outbound|Product\.Addon|Product\.Addon\.Service|Product\.Addon\.ServiceFeature|Brand|Accommodation|Accommodation\.SellingAccom|Accommodation\.SellingAccom\.Board|Accommodation\.SellingAccom\.Unit|Accommodation\.SellingAccom\.Unit\.SellingUnit|OnewayFlight|OnewayFlight\.BookingClass|ReturnFlight|ReturnFlight\.BookingClass|ReturnFlight\.Inbound|ReturnFlight\.Outbound|Addon|Addon\.Service|Addon\.ServiceFeature|SellingAccom|SellingAccom\.Board|SellingAccom\.Unit|SellingAccom\.Unit\.SellingUnit|BookingClass|Inbound|Outbound|Service|ServiceFeature|Board|Unit|Unit\.SellingUnit|SellingUnit))"/>
    </xs:restriction>
</xs:simpleType>

There’s a bug in that regex that Stella needed to fix. As she put it: “Every time you evaluate it a few little kitties die because you shouldn’t use kitties to polish your car. I’m so, so sorry, little kitties…”

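For contrast, here is a rough sketch of the attribute-and-element route mentioned earlier, using XSD enumerations to validate two of the grammar's features. This is a hypothetical illustration with invented names, not the client's actual schema:

    <!-- Each feature becomes its own enumerated attribute instead of a
         slot in one monster regex. -->
    <xs:attribute name="scope">
        <xs:simpleType>
            <xs:restriction base="xs:string">
                <xs:enumeration value="local"/>
                <xs:enumeration value="global"/>
            </xs:restriction>
        </xs:simpleType>
    </xs:attribute>
    <xs:attribute name="dayAllocation">
        <xs:simpleType>
            <xs:restriction base="xs:string">
                <xs:enumeration value="first"/>
                <xs:enumeration value="last"/>
                <xs:enumeration value="DayAllocation.all"/>
            </xs:restriction>
        </xs:simpleType>
    </xs:attribute>

Each new grammar feature would then be a visible schema change the validator can pinpoint, rather than another branch in a 1,310-character pattern.
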
The full, unexcerpted code is below, so… at least it has documentation. In two languages!

<xs:simpleType name="SourceAttributeType">
                <xs:annotation>
                        <xs:documentation xml:lang="de">
                Die Source Angabe folgt folgender Syntax

                        source = ( Eigenschaft1+Eigenschaft2+... ":" ) ? steps

                        Eigenschaft1 = "local" | "global"

                        Eigenschaft2 ="real" | "virtual" | "ComponentType.all"

                        Eigenschaft3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

                        Eigenschaft4 = "first" | "last" | "DayAllocation.all"

                        Falls Eigenschaften angegeben werden muss zwingend die oben angegebene Reihenfolge der Eigenschaften eingehalten werden.

                        steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

                        oneOrMoreNameSteps = nameStep ( "." nameStep ) *

                        zeroOrMoreNameSteps = ( nameStep "." ) *

                        nameStep = "#" name

                        name ist eine Folge von Zeichen aus der Menge "A"-"Z", "a"-"z", "0"-"9", "-" und "_". Keine Umlaute. Mindestens ein Zeichen

                        componentSteps ist eine Liste gültiger Werte, siehe im folgenden

                Gültige 'componentSteps' sind zunächst:

                        - GlobalValue
                        - Product
                        - Product.Brand
                        - Product.Accommodation
                        - Product.Accommodation.SellingAccom
                        - Product.Accommodation.SellingAccom.Board
                        - Product.Accommodation.SellingAccom.Unit
                        - Product.Accommodation.SellingAccom.Unit.SellingUnit
                        - Product.OnewayFlight
                        - Product.OnewayFlight.BookingClass
                        - Product.ReturnFlight
                        - Product.ReturnFlight.BookingClass
                        - Product.ReturnFlight.Inbound
                        - Product.ReturnFlight.Outbound
                        - Product.Addon
                        - Product.Addon.Service
                        - Product.Addon.ServiceFeature

                Desweiteren sind alle Unterschrittfolgen aus obigen Pfaden erlaubt, also 'Board', 'Accommodation.SellingAccom' oder 'SellingAccom.Unit.SellingUnit'.
                'Accommodation.Unit' hingegen ist nicht erlaubt, da in diesem Fall einige Zwischenschritte fehlen.

                                </xs:documentation>
                        <xs:documentation xml:lang="en">
                                The Source specification obeys the following syntax

                                source = ( Feature1+Feature2+... ":" ) ? steps

                                Feature1 = "local" | "global"

                                Feature2 ="real" | "virtual" | "ComponentType.all"

                                Feature3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

                                Feature4 = "first" | "last" | "DayAllocation.all"

                                If features are specified, the order of features as given above has strictly to be followed.

                                steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

                                oneOrMoreNameSteps = nameStep ( "." nameStep ) *

                                zeroOrMoreNameSteps = ( nameStep "." ) *

                                nameStep = "#" name

                                name is a string of characters from "A"-"Z", "a"-"z", "0"-"9", "-" and "_". No umlauts allowed, one character is minimum.

                                componentSteps is a list of valid values, see below.

                                Valid 'componentSteps' are:

                                - GlobalValue
                                - Product
                                - Product.Brand
                                - Product.Accommodation
                                - Product.Accommodation.SellingAccom
                                - Product.Accommodation.SellingAccom.Board
                                - Product.Accommodation.SellingAccom.Unit
                                - Product.Accommodation.SellingAccom.Unit.SellingUnit
                                - Product.OnewayFlight
                                - Product.OnewayFlight.BookingClass
                                - Product.ReturnFlight
                                - Product.ReturnFlight.BookingClass
                                - Product.ReturnFlight.Inbound
                                - Product.ReturnFlight.Outbound
                                - Product.Addon
                                - Product.Addon.Service
                                - Product.Addon.ServiceFeature

                                In addition to that all subsequent steps from the paths above are permitted, that is 'Board', 'Accommodation.SellingAccom' or 'SellingAccom.Unit.SellingUnit'.
                                'Accommodation.Unit' in the contrary is not permitted, as here some intermediate steps are missing.

                        </xs:documentation>
                </xs:annotation>
                <xs:union>
                        <xs:simpleType>
                                <xs:restriction base="xs:string">
                                        <xs:pattern value="(((Scope.)?(global|local|current)\+?)?((((ComponentType.)?(real|virtual))|ComponentType.all)\+?)?((((Hierarchy.)?(self|ancestors|descendants))|Hierarchy.all)\+?)?((((DayAllocation.)?(first|last))|DayAllocation.all)\+?)?:)?(#[A-Za-z0-9\-_]+(\.(#[A-Za-z0-9\-_]+))*|(#[A-Za-z0-9\-_]+\.)*(ThisComponent|GlobalValue|Product|Product\.Brand|Product\.Accommodation|Product\.Accommodation\.SellingAccom|Product\.Accommodation\.SellingAccom\.Board|Product\.Accommodation\.SellingAccom\.Unit|Product\.Accommodation\.SellingAccom\.Unit\.SellingUnit|Product\.OnewayFlight|Product\.OnewayFlight\.BookingClass|Product\.ReturnFlight|Product\.ReturnFlight\.BookingClass|Product\.ReturnFlight\.Inbound|Product\.ReturnFlight\.Outbound|Product\.Addon|Product\.Addon\.Service|Product\.Addon\.ServiceFeature|Brand|Accommodation|Accommodation\.SellingAccom|Accommodation\.SellingAccom\.Board|Accommodation\.SellingAccom\.Unit|Accommodation\.SellingAccom\.Unit\.SellingUnit|OnewayFlight|OnewayFlight\.BookingClass|ReturnFlight|ReturnFlight\.BookingClass|ReturnFlight\.Inbound|ReturnFlight\.Outbound|Addon|Addon\.Service|Addon\.ServiceFeature|SellingAccom|SellingAccom\.Board|SellingAccom\.Unit|SellingAccom\.Unit\.SellingUnit|BookingClass|Inbound|Outbound|Service|ServiceFeature|Board|Unit|Unit\.SellingUnit|SellingUnit))"/>
                                </xs:restriction>
                        </xs:simpleType>
                </xs:union>
</xs:simpleType>
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianIain R. Learmonth: Easy APT Repository

The PATHspider software I maintain as part of my work depends on some features in cURL and in PycURL that have only just been merged or are still awaiting merge. I need to build a Docker container that includes these as Debian packages, so I need to quickly build an APT repository.

A Debian repository can essentially be seen as a static website, and the contents are GPG signed, so it doesn’t necessarily need to be hosted somewhere trusted (unless availability is critical for your application). I host my blog with Netlify, a static website host, and I figured they would be perfect for this use case. They also support open source projects.

There is a CLI tool for netlify which you can install with:

sudo apt install npm
sudo npm install -g netlify-cli

The basic steps for setting up a repository are:

mkdir repository
cp /path/to/*.deb repository/
cd repository
apt-ftparchive packages . > Packages
apt-ftparchive release . > Release
gpg --clearsign -o InRelease Release
netlify deploy

Once you’ve followed these steps, and created a new site on Netlify, you’ll be able to manage this site also through the web interface. A few things you might want to do are set up a custom domain name for your repository, or enable HTTPS with Let’s Encrypt. (Make sure you have apt-transport-https if you’re going to enable HTTPS though.)
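
One hedged note: older apt clients look for a detached Release.gpg signature next to the Release file rather than the inline InRelease. If you need to support them, one extra signing step in the same repository directory covers it (a sketch using the same key as above):

gpg -abs -o Release.gpg Release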

To add this repository to your apt sources:

gpg --export -a YOURKEYID | sudo apt-key add -
echo "deb https://SUBDOMAIN.netlify.com/ /" | sudo tee -a /etc/apt/sources.list
sudo apt update

You’ll now find that those packages are installable. Beware of APT pinning, as you may find that the newer versions in your repository are not actually the preferred versions according to your policy.
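
If pinning does bite you, a preferences entry can raise the priority of your repository. A minimal sketch, assuming the SUBDOMAIN placeholder from above and saved as /etc/apt/preferences.d/my-repo (note that a priority above 1000 even allows downgrades, so handle with care):

Package: *
Pin: origin SUBDOMAIN.netlify.com
Pin-Priority: 1001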

Update: If you’re wanting a solution that would be more suitable for regular use, take a look at reprepro. If you’re wanting to have end-users add your apt repository as a third-party repository to their system, please take a look at this page on the Debian wiki, which contains advice on how to instruct users to use your repository.

Update 2: Another commenter has pointed out aptly, which offers a greater feature set and removes some of the restrictions imposed by reprepro. I’ve never used aptly myself so can’t comment on specifics, but from the website it looks like it might be a nicely polished tool.

,

Planet DebianGunnar Wolf: Call to Mexicans: Open up your wifi #sismo

Hi friends,

~3hr ago, we just had a big earthquake, quite close to Mexico City. Fortunately, we are fine, as are (at least) most of our friends and family. Hopefully, all of them. But there are many (as in, tens of) damaged or destroyed buildings; over 50 people have died, and the numbers will surely rise as the event's full impact is evaluated.

Mainly in these early hours after the quake, many people need to get in touch with their families and friends. There is a little help we can all provide: Provide communication.

Open up your wireless network. Set it up unencrypted, for anybody to use.

Refrain from over-sharing graphical content — your social network groups don't need to see every video and every photo of the shaking moments and of broken buildings. Downloading all those images takes up valuable capacity on the already saturated cellular networks.

This advice might be slow to flow... The important moment to act is two or three hours ago, even now... But we are likely to have aftershocks; we are likely to have panic moments again. Do a little bit to help others in need!

Planet DebianSylvain Beucler: dot-zed extractor

Following last week's .zed format reverse-engineered specification, Loïc Dachary contributed a POC extractor!
It's available at http://www.dachary.org/loic/zed/, it can list non-encrypted metadata without password, and extract files with password (or .pem file).
Leveraging python-olefile and pycrypto, only 500 lines of code (test cases excluded) were enough to implement it :)
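
If you fancy trying it yourself, both dependencies are a pip install away; a minimal sketch (the extractor itself comes from the URL above):

pip install olefile pycrypto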

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #125

Here's what happened in the Reproducible Builds effort between Sunday September 10 and Saturday September 16 2017:

Upcoming events

Reproducibility work in Debian

devscripts/2.17.10 was uploaded to unstable, fixing #872514. This adds a script to report on reproducibility status of installed packages written by Chris Lamb.

#876055 was opened against Debian Policy to decide the precise requirements we should have on a build's environment variables.

Bugs filed:

Non-maintainer uploads:

  • Holger Levsen:

Reproducibility work in other projects

Patches sent upstream:

  • Bernhard M. Wiedemann:

Reviews of unreproducible packages

16 package reviews have been added, 99 have been updated and 92 have been removed this week, adding to our knowledge about identified issues.

1 issue type has been updated:

diffoscope development

  • Juliana Oliveira Rodrigues:
    • Fix comparisons between different container types not comparing inside files. It was caused by falling back to binary comparison for different file types even for unextracted containers.
    • Add many tests for the fixed behaviour.
    • Other code quality improvements.
  • Chris Lamb:
    • Various code quality and style improvements, some of it using Flake8.
  • Mattia Rizzolo:
    • Add a check to prevent installation with python < 3.4

reprotest development

  • Ximin Luo:
    • Split up the very large __init__.py and remove obsolete earlier code.
    • Extend the syntax for the --variations flag to support parameters to certain variations like user_group, and document examples in README.
    • Add a --vary flag for the new syntax and deprecate --dont-vary.
    • Heavily refactor internals to support > 2 builds.
    • Support >2 builds using a new --extra-build flag.
    • Properly sanitize artifact_pattern to avoid arbitrary shell execution.

trydiffoscope development

Version 65 was uploaded to unstable by Chris Lamb including these contributions:

  • Chris Lamb:
    • Packaging maintenance updates.
    • Developer documentation updates.

Reproducible websites development

tests.reproducible-builds.org

  • Vagrant Cascadian and Holger Levsen:
    • Added two armhf boards to the build farm. #874682
  • Holger also:
    • use timeout to limit the diffing of the two build logs to 30min, which greatly reduced jenkins load again.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Daniel Shahaf & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

CryptogramApple's FaceID

This is a good interview with Apple's SVP of Software Engineering about FaceID.

Honestly, I don't know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can't be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you "quick disabled" Face ID in tricky scenarios -- like being stopped by police, or being asked by a thief to hand over your device.

"On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while -- we'll take you to the power down [screen]. But that also has the effect of disabling Face ID," says Federighi. "So, if you were in a case where the thief was asking to hand over your phone -- you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID."

That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the "5 clicks" because it's less obtrusive. When you do this, it defaults back to your passcode.

More:

It's worth noting a few additional details here:

  • If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.

  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode -- it tried to read the people setting the phones up on the podium.)

  • Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

  • You'll also get a passcode request if you haven't unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn't unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you're a researcher or security wonk looking for more, he says it will have "extreme levels of detail" about the security of the system.

Here's more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape -- a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.

It'll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

Worse Than FailurePoor Shoe

"So there's this developer who is the end-all, be-all try-hard of the year. We call him Shoe. He's the kind of over-engineering idiot that should never be allowed near code. And, to boot, he's super controlling."

Sometimes, you'll be talking to a friend, or reading a submission, and they'll launch into a story of some crappy thing that happened to them. You expect to sympathize. You expect to agree, to tell them how much the other guy sucks. But as the tale unfolds, something starts to feel amiss.

They start telling you about the guy's stand-up desk, how it makes him such a loser, such a nerd. And you laugh nervously, recalling the article you read just the other day about the health benefits of stand-up desks. But sure, they're pretty nerdy. Why not?

"But then, get this. So we gave Shoe the task to minify a bunch of JavaScript files, right?"

You start to feel relieved. Surely this is more fertile ground. There's a ton of bad ways to minify and concatenate files on the server-side, to save bandwidth on the way out. Is this a premature optimization story? A story of an idiot writing code that just doesn't work? An over-engineered monstrosity?

"So he fires up gulp.js and gets to work."

Probably over-engineered. Gulp.js lets you write arbitrary JavaScript to do your processing. It has the advantage of being the same language as the code being minified, so you don't have to switch contexts when reading it, but the disadvantage of being JavaScript and thus impossible to read.

"He asks how to concat JavaScript, and the room tells him the right answer: find javascripts/ -name '*.js' -exec cat {} \; > main.js"

Wait, what? You blink. Surely that's not how Gulp.js is meant to work. Just piping out to shell commands? But you've never used it. Maybe that's the right answer; you don't know. So you nod along, making a sympathetic noise.

"Of course, this moron can't just take the advice. Shoe has to understand how it works. So he starts googling on the Internet, and when he doesn't find a better answer, he starts writing a shell script he can commit to the repo for his 'jay es minifications.'"

That nagging feeling is growing stronger. But maybe the punchline is good. There's gotta be a payoff here, right?

"This guy, right? Get this: he discovers that most people install gulp via npm.js. So he starts shrieking, 'This is a dependency of mah script!' and adds node.js and npm installation to the shell script!"

Stronger and stronger the feeling grows, refusing to be shut out. You swallow nervously, looking for an excuse to flee the conversation.

"We told him, just put it in the damn readme and move on! Don't install anything on anyone else's machines! But he doesn't like this solution, either, so he finally just echoes out in the shell script, requires npm. Can you believe it? What a n00b!"

That's it? That's the punchline? That's why your friend has worked himself into a lather, foaming and frothing at the mouth? Try as you might to justify it, the facts are inescapable: your friend is TRWTF.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Planet DebianCarl Chenet: The Github threat

Many voices arise now and then against the risks linked to the use of Github by Free Software projects. Yet the infatuation with the collaborative forge of the Octocat’s Californian start-up doesn’t seem to fade away.

In recent years, Github and its services have taken on an important role in software engineering: they are seen as easy to use and efficient for daily work, with interesting functions for enterprise collaborative workflows as well as for Free Software projects. What are the arguments against using its services, and are they valid? We will list them first, then examine their validity.

1. Critical points

1.1 Centralization

The Github application belongs to a single entity, Github Inc, a US company which manages it alone. A single company under US legislation thus controls access to most Free Software source code, which may become a problem for the groups using it when some source code is no longer available, for political or technical reasons.

The Octocat, the Github mascot

This centralization leads to another problem: as Github has reached critical mass, it becomes more and more difficult not to have a Github account. People who don’t use Github, by choice or not, are becoming a silent minority. It is now fashionable to use Github, and not doing so is seen as being “out of date”. The same phenomenon is a classic, and even the norm, for proprietary social networks (Facebook, Twitter, Instagram).

1.2 A Proprietary Software

When you interact with Github, you are using proprietary software, with no access to its source code, and which may not work the way you think it does. This is a problem on several levels: first ideologically, but foremost in practice. In the Github case, we send them code we can still control outside of their interface, but we also send them personal information (profile, Github interactions). And above all, Github forces any project hosted on the US platform to use a crucial proprietary tool: its bug tracking system.

Windows, the epitome of proprietary software, even if others took the same path

1.3 The Uniformization

Working with the Github interface seems easy and intuitive to most. Lots of companies now use it as a source repository, and many developers leaving a company find the same Github working environment at the next one. This pervasive presence of Github in the free software development world is part of the uniformization of developers’ working environments.

Uniforms always bring the Army to my mind; here, the Clone Army

2 – Critical points cross-examination

2.1 Regarding the centralization

2.1.1 Service availability rate

As said above, Github is nowadays the main repository of Free Software source code. As such it is a favorite target for cyberattacks. DDoS attacks hit it in March and August 2015. On December 15, 2015, an outage made 5% of the repositories inaccessible. The same occurred on November 15. And these are only the incidents reported by Github itself; one can imagine that the platform’s mean outage rate is underestimated.

2.1.2 Chain reaction could block Free Software development

Today many dependency management tools, such as npm for JavaScript, Bundler for Ruby or even pip for Python, can fetch an application’s source code directly from Github. With Free Software projects becoming more and more interlinked and codependent, if one component is down, the whole development process stops.
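
pip makes a handy illustration: a single line fetches and installs code straight from the platform (a sketch; OWNER/REPO is a placeholder):

pip install git+https://github.com/OWNER/REPO.git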

One of the best examples is the npmgate affair. Any company can legally demand that Github take down some source code from its repository, which could create a chain reaction blocking the development of many Free Software projects, as the Node.js community suffered from the decisions of npm, Inc, the company managing npm.

2.2 A historical precedent: SourceForge

Github didn’t appear out of the blue. In its time, its predecessor, SourceForge, was also extremely popular.

Heavily centralized and based on strong interaction with the community, SourceForge is now seen as an aging SaaS (Software as a Service) and sees most of its customers fleeing to Github, which creates lots of hurdles for those who stayed. The Gimp project suffered from spam and terrible advertising, which led to the departure of the VLC project, then from installers bundled with adware in place of the official Gimp installer for Windows. And finally, the Gimp project’s SourceForge account was hijacked by… the SourceForge team itself!

These are very recent examples of what a commercial entity can do when under pressure from its stakeholders. It is vital to really understand what it means to trust such an entity with the centralization of data and exchanges, where it could have tremendous repercussions on the day-to-day life and habits of the Free Software and open source community.

2.3. Regarding proprietary software

2.3.1 One community, several opinions on proprietary software

Mostly based on ideology, this point deals with the definition each member of the community gives to Free Software and open source. It mostly comes down to one thing: is it viral or not? Or, GPL vs MIT/BSD.

Those on the side of viral Free Software will have trouble using proprietary software, as in their view it shouldn’t even exist. It must be assimilated, to quote Star Trek, since it is a connected black box endangering privacy, corrupting our usage for profit, restraining our freedom to use what we own as we please, etc.

Those on the side of complete freedom have no qualms using proprietary software, as its very existence is a consequence of freedom without restriction. They even accept that code they developed may become part of proprietary software, which is quite a common occurrence. This part of the Free Software community has no qualms using Github, which sits well within their ideological parameters. Just take a look at the Janson amphitheater during FOSDEM and count how many Apple laptops running macOS are around.

FreeBSD, the main BSD project under the BSD license

2.3.2 Data loss and data restrictions linked to proprietary software use

Even without ideological considerations, and focusing just on the Github infrastructure, the bug tracking system is a major problem in itself.

Bug reports build the memory of Free Software projects. They are the entry point for new contributors, the place to find bug reports, feature requests, etc. A project’s history can’t be limited to its code alone. It’s very common to land on a bug report when you copy and paste an error message into a search engine. Their historical importance is precious not only for the project itself, but also for its present and future users.

Github does give you the ability to extract bug reports through its API. But what would happen if Github went down, or if the platform no longer supported this feature? In my opinion, not many projects have ever thought about this outcome. How would they move all the data generated inside Github into a new bug tracking system?
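
As an illustration, a project’s issues can be pulled through the REST API with a single request (a sketch; OWNER/REPO is a placeholder, and pagination handling is left out):

curl 'https://api.github.com/repos/OWNER/REPO/issues?state=all&per_page=100'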

One older example is Astrid, a to-do list application bought by Yahoo a few years ago. Very popular, it grew fast until it was closed overnight, with only a few weeks for its users to extract their data. And that was only a to-do list. The same situation with Github would be tremendously difficult for many projects to manage, if they were even able to deal with it at all. Code would still be available and could live somewhere else, but the project’s memory would be lost. A project like Debian today has more than 800,000 bug reports, a data treasure trove about problems solved, feature requests and where development stands on each. The developers of the CPython project anticipated the problem and decided not to use Github’s bug tracking system.

Issues, the Github proprietary bug tracking system

Another thing we could lose if Github suddenly disappeared: all the work currently embodied in the pull requests (aka PRs). This Github feature lets you clone a project’s Github repository, modify it to fit your needs, then offer your modifications back to the original repository. The original repository’s owner then reviews said modifications and, if he or she agrees with them, merges them into the original repository. This is one of the main attractions of Github, since it can all be done easily through its graphical interface.

However, reviewing all the PRs may take quite a while, and most successful projects have several PRs in flight at any time. These PRs, and/or the proprietary bug tracking system, are commonly used as a platform for comments and discussion between developers.

The code itself is not lost if Github goes down (except in one specific situation, as seen below), but the peer review work materialized in the PRs and in the bug tracking system is. Let’s remember that the PR mechanism lets you clone and modify projects and then generate PRs directly from Github’s proprietary web interface, without downloading a single line of code to your computer. In this particular case, if Github goes down, all that code and work in progress is lost.

Some also use Github as a bookmarking service, following their favorite projects’ activity through the Watch function. This kind of technology-watch data would also be lost if Github went down.

Debian, one of the main Free Software projects with at least a thousand official contributors

2.4 Uniformization

The Free Software community is walking a tightrope between the normalization needed for easier interoperability between its products and an attraction to novelty, driven by a strong need to differentiate from what already exists.

Github popularized the use of Git, a great tool now used in various sectors far removed from its original programming field. Step by step, Git has become so prominent that it’s almost impossible to even consider another source control manager, even though awesome alternative solutions, unfortunately not as popular, exist, such as Mercurial.

A new Free Software project is now a Git repository on Github with a README.md added as a quick description. All the other solutions are effectively ostracized: few if any potential contributors would ever notice those projects. It now seems very difficult to ask potential contributors to learn a new source control manager AND a new forge for every project they want to contribute to, even though that was a basic requirement a few years ago.

It’s quite sad, because Github, by offering a single, uniform experience to its users, cuts them off from a whole realm of possibilities. Maybe Github is one of the best web-based version control services. But being the main one leaves no room for a new competitor to grow, and it lets Github initiate development newcomers into a narrow feature set, totally unrelated to the strength of the Git tool itself.

3. Centralization, uniformization, proprietary software… What’s next? Laziness?

The fight against centralization is a core part of the Free Software ideology, as centralization strengthens the power of those who manage it and who, through it, control those who are managed by it. The allergy to uniformization, born in reaction to the major software companies and their wish to impose a closed commercial software world, was for a long time the main fuel for the thirst for innovation and the development of intelligent alternatives. As we said above, part of the Free Software community was built as a reaction to proprietary software and its threat. The other part, without hoping for its disappearance, still chose a development model opposed to proprietary software, at least in the beginning, as there are now more and more bridges between the two.

The Github effect is a morbid one because of its consequences: centralization, uniformization, and the use of proprietary software, at the very least as a bug tracking system. But some years ago the Dear Github buzz revealed one more side effect, one I had never thought about: laziness. For those who don’t know what it was about, this letter was a complaint from spokespersons of several Free Software projects demanding that the Github team finally implement, after years of polite asking, new functions.

Since when do Free Software projects facing a roadblock ask for clemency instead of building the path they need themselves? When Torvalds got caught up in the BitKeeper problem and the Linux kernel development team could no longer use their revision control software, he developed Git. Not being able to use a tool, or finding functions lacking, is the main motivation for seeking alternative solutions and, as such, for the Free Software movement itself. Every Free Software community member able to code should have this reflex. You don’t like what Github offers? Switch to Gitlab. You don’t like Gitlab? Improve it or build your own solution.

The Gitlab logo

Let’s be crystal clear: I’ve never said that every blocked Free Software developer should code his or her own alternative. We all have our own priorities, and some of us even like our beauty sleep, myself included. But seeing that this open letter to Github has 1340 names attached to it, among them spokespersons for major Free Software projects, showed me that the need, the willpower and the strength to code a replacement are all there. Maybe such a replacement will be born from this letter; that would be the best outcome of this buzz.

In the end, Github usage is just another example of the massification of Internet usage. Just as Internet users flock to massively centralized social networks like Facebook or Twitter, developers are following the same path with Github. Even if a large fraction of developers realize the threat posed by this centralized and proprietary organization, the whole community is following the centralization and uniformization trend. The Github service is useful, free or reasonably priced (depending on the functions you need), easy to use, and up most of the time. Why would we try anything else? Maybe because others are using us while we savor the convenience? The Free Software community seems quite sleepy to me.

The lion enjoying the warmth of the hearth

About Me

Carl Chenet, Free Software Indie Hacker, founder of the French-speaking Hacker News-like Journal du hacker.

Follow me on social networks

Translated from French by Stéphanie Chaptal. Original article written in 2015.

Sociological ImagesWhen Bros Hug

In February, CBS Sunday Morning aired a short news segment on the bro hug phenomenon: a supposedly new way heterosexual (white) men (i.e., bros) greet each other. According to this news piece, the advent of the bro hug can be attributed to decreased homophobia and is a sign of social progress.

I’m not so sure.

To begin, bro-ness isn’t really about any given individuals, but invokes a set of cultural norms, statuses, and meanings. A stereotypical bro is a white middle-class, heterosexual male, especially one who frequents strongly masculinized places like fraternities, business schools, and sport events. (The first part of the video, in fact, focused on fraternities and professional sports.) The bro, then, is a particular kind of guy, one that frequents traditionally male spaces with a history of homophobia and misogyny and is invested in maleness and masculinity.

The bro hug reflects this investment in masculinity and, in particular, the masculine performance in heterosexuality. To successfully complete a bro hug, the two men clasp their right hands and firmly pull their bodies towards each other until they are or appear to be touching whilst their left hands swing around to forcefully pat each other on the back. Men’s hips and chests never make full contact. Instead, the clasped hands pull in, but also act as a buffer between the men’s upper bodies, while the legs remain firmly rooted in place, maintaining the hips at a safe distance. A bro hug, in effect, isn’t about physical closeness between men, but about limiting bodily contact.

Bro hugging, moreover, is specifically a way of performing solidarity with heterosexual men. In the CBS program, the bros explain that a man would not bro hug a woman since a bro hug is, by its forcefulness, designed to be masculinity affirming. Similarly, a bro hug is not intended for gay men, lesbians, or queer people. The bro hug performs and reinforces bro identity within an exclusively bro domain. For bros, by bros. As such, the bro hug does little to signal a decrease in homophobia. Instead, it affirms men’s identities as “real” men and their difference from both women and non-heterosexual men.

In this way, the bro-hug functions similarly to the co-masturbation and same-sex sexual practices of heterosexually identified white men, documented by the sociologist Jane Ward in her book, Not Gay. Ward argues that when straight white men have sex with other straight white men they are not necessarily blurring the boundaries between homo- and heterosexuality. Instead, they are shifting the line separating what is considered normal from what is considered queer.  Touching another man’s anus during a fraternity hazing ritual is normal (i.e., straight) while touching another man’s anus in a gay porn is queer.  In other words, the white straight men can have sex with each other because it is not “real” gay sex. 

Similarly, within the context of a bro hug, straight white men can now bro hug each other because they are heterosexual. Bro hugging will not diminish either man’s heterosexual capital. In fact, it might increase it. When two bros hug, they signal to others their unshakable strength of and comfort in their heterosexuality. Even though they are touching other men in public, albeit minimally, the act itself reinforces their heterosexuality and places it beyond reproach.

Hubert Izienicki, PhD, is a professor of sociology at Purdue University Northwest. 

(View original at https://thesocietypages.org/socimages)

CryptogramBluetooth Vulnerabilities

A bunch of Bluetooth vulnerabilities are being reported, some pretty nasty.

BlueBorne concerns us because of the medium by which it operates. Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. This works similarly to the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus. The vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attacker surface than WiFi, almost entirely unexplored by the research community and hence contains far more vulnerabilities.

Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious, and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are "air gapped," meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure.

Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.

Fully patched Windows and iOS systems are protected; Linux coming soon.

Worse Than FailureCodeSOD: Mutex.js

Just last week, I was teaching a group of back-end developers how to use Angular to develop front ends. One question that came up, which did surprise me a bit, was how to deal with race conditions and concurrency in JavaScript.

I’m glad they asked, because it’s a good question that never occurred to me. The JavaScript runtime, of course, is single-threaded. You might use Web Workers to get multiple threads, but they use an Actor model, so there’s no shared state, and thus no need for any sort of locking.

Chris R’s team did have a need for locking. Specifically, their .NET backend needed to run a long-ish bulk operation against their SqlServer. It would be triggered by an HTTP request from the client-side, AJAX-style, but only one user should be able to run it at a time.

Someone, for some reason, decided that they would implement this lock in front-end JavaScript, since that’s where the AJAX calls were coming from.

var myMutex = true; //global (as in page wide, global) variable
function onClickHandler(element) {
    if (myMutex == true) {
        myMutex = false;
        // snip...
        if ($(element).hasClass("addButton") == true) {
            $(element).removeClass("addButton").addClass("removeButton");
            // snip...
            $.get(url).done(function (r) {
                // snip... this code is almost identical to the branch below
                setTimeout("myMutex = true;", 100);
            });
        } else {
            if ($(element).hasClass("removeButton") == true) {
                $(element).removeClass("removeButton").addClass("addButton");
                // snip...
                $.get(url).done(function (r) {
                    // snip... this code is almost identical to the branch above
                    setTimeout("myMutex = true;", 100);
                });
            }
        }
    }
}

You may be shocked to learn that this solution didn’t work, and the developer responsible never actually tested it with multiple users. Obviously, a client side variable isn’t going to work as a back-end lock. Honestly, I’m not certain that’s the worst thing about this code.

First, they reinvented the mutex badly. They seem to be using CSS classes to hold application state. They have (in the snipped code) duplicate branches of code that vary only by a handful of flags. They aren’t handling errors on the request, which, when this code started failing, made it that much harder to figure out why.

But it’s the setTimeout("myMutex = true;", 100); that really gets me. Why? Why the 100ms lag? What purpose does that serve?

Chris threw this code away and put a mutex in the backend service.

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Planet DebianRuss Allbery: Consolidation haul

My parents are less fond than I am of filling every available wall in their house with bookshelves and did a pruning of their books. A lot of them duplicated other things that I had, or didn't sound interesting, but I still ended up with two boxes of books (and now have to decide which of my books to prune, since I'm out of shelf space).

Also included is the regular accumulation of new ebook purchases.

Mitch Albom — Tuesdays with Morrie (nonfiction)
Ilona Andrews — Clean Sweep (sff)
Catherine Asaro — Charmed Sphere (sff)
Isaac Asimov — The Caves of Steel (sff)
Isaac Asimov — The Naked Sun (sff)
Marie Brennan — Dice Tales (nonfiction)
Captain Eric "Winkle" Brown — Wings on My Sleeve (nonfiction)
Brian Christian & Tom Griffiths — Algorithms to Live By (nonfiction)
Tom Clancy — The Cardinal of the Kremlin (thriller)
Tom Clancy — The Hunt for Red October (thriller)
Tom Clancy — Red Storm Rising (thriller)
April Daniels — Sovereign (sff)
Tom Flynn — Galactic Rapture (sff)
Neil Gaiman — American Gods (sff)
Gary J. Hudson — They Had to Go Out (nonfiction)
Catherine Ryan Hyde — Pay It Forward (mainstream)
John Irving — A Prayer for Owen Meany (mainstream)
John Irving — The Cider House Rules (mainstream)
John Irving — The Hotel New Hampshire (mainstream)
Lawrence M. Krauss — Beyond Star Trek (nonfiction)
Lawrence M. Krauss — The Physics of Star Trek (nonfiction)
Ursula K. Le Guin — Four Ways to Forgiveness (sff collection)
Ursula K. Le Guin — Words Are My Matter (nonfiction)
Richard Matheson — Somewhere in Time (sff)
Larry Niven — Limits (sff collection)
Larry Niven — The Long ARM of Gil Hamilton (sff collection)
Larry Niven — The Magic Goes Away (sff)
Larry Niven — Protector (sff)
Larry Niven — World of Ptavvs (sff)
Larry Niven & Jerry Pournelle — The Gripping Hand (sff)
Larry Niven & Jerry Pournelle — Inferno (sff)
Larry Niven & Jerry Pournelle — The Mote in God's Eye (sff)
Flann O'Brien — The Best of Myles (nonfiction)
Jerry Pournelle — Exiles to Glory (sff)
Jerry Pournelle — The Mercenary (sff)
Jerry Pournelle — Prince of Mercenaries (sff)
Jerry Pournelle — West of Honor (sff)
Jerry Pournelle (ed.) — Codominium: Revolt on War World (sff anthology)
Jerry Pournelle & S.M. Stirling — Go Tell the Spartans (sff)
J.D. Salinger — The Catcher in the Rye (mainstream)
Jessica Amanda Salmonson — The Swordswoman (sff)
Stanley Schmidt — Aliens and Alien Societies (nonfiction)
Cecilia Tan (ed.) — Sextopia (sff anthology)
Lavie Tidhar — Central Station (sff)
Catherynne Valente — Chicks Dig Gaming (nonfiction)
J.E. Zimmerman — Dictionary of Classical Mythology (nonfiction)

This is an interesting tour of a lot of stuff I read as a teenager (Asimov, Niven, Clancy, and Pournelle, mostly in combination with Niven but sometimes his solo work).

I suspect I will no longer consider many of these books to be very good, and some of them will probably go back into used bookstores after I've re-read them for memory's sake, or when I run low on space again. But all those mass market SF novels were a big part of my teenage years, and a few (like Mote In God's Eye) I definitely want to read again.

Also included is a random collection of stuff my parents picked up over the years. I don't know what to expect from a lot of it, which makes it fun to anticipate. Fall vacation is coming up, and with it a large amount of uninterrupted reading time.

,

Planet Linux AustraliaOpenSTEM: Those Dirty Peasants!

It is fairly well known that many Europeans in the 17th, 18th and early 19th centuries did not follow the same routines of hygiene as we do today. There are anecdotal and historical accounts of people being dirty, smelly and generally unhealthy. This was particularly true of the poorer sections of society. The epithet “those […]

Planet DebianSean Whitton: Debian Policy call for participation -- September 2017

Here’s a summary of the bugs against the Debian Policy Manual. Please consider getting involved, whether or not you’re an existing contributor.

Consensus has been reached and help is needed to write a patch

#172436 BROWSER and sensible-browser standardization

#273093 document interactions of multiple clashing package diversions

#299007 Transitioning perms of /usr/local

#314808 Web applications should use /usr/share/package, not /usr/share/doc/…

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#476810 Please clarify 12.5, “Copyright information”

#484673 file permissions for files potentially including credential informa…

#491318 init scripts “should” support start/stop/restart/force-reload - why…

#556015 Clarify requirements for linked doc directories

#568313 Suggestion: forbid the use of dpkg-statoverride when uid and gid ar…

#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.

#582109 document triggers where appropriate

#587991 perl-policy: /etc/perl missing from Module Path

#592610 Clarify when Conflicts + Replaces et al are appropriate

#613046 please update example in 4.9.1 (debian/rules and DEB_BUILD_OPTIONS)

#614807 Please document autobuilder-imposed build-dependency alternative re…

#628515 recommending verbose build logs

#664257 document Architecture name definitions

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#685746 debian-policy Consider clarifying the use of recommends

#688251 Built-Using description too aggressive

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#773557 Avoid unsafe RPATH/RUNPATH

#780725 PATH used for building is not specified

#793499 The Installed-Size algorithm is out-of-date

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#823256 Update maintscript arguments with dpkg >= 1.18.5

#833401 virtual packages: dbus-session-bus, dbus-default-session-bus

#835451 Building as root should be discouraged

#838777 Policy 11.8.4 for x-window-manager needs update for freedesktop menus

#845715 Please document that packages are not allowed to write outside thei…

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874019 Note that the '-e' argument to x-terminal-emulator works like '--'

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#515856 remove get-orig-source

#542288 Versions for native packages, NMU’s, and binary only uploads

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#645696 [copyright-format] clearer definitions and more consistent License:…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#683222 say explicitly that debian/changelog is required in source packages

#688251 Built-Using description too aggressive

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#810381 Update wording of 5.6.26 VCS-* fields to recommend encryption

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#850729 Documenting special version number suffixes

#874090 Clarify wording of some passages

#874095 copyright-format: Use the “synopsis” term established in the de…

Merged for the next release

#661928 recipe for determining shlib package name

#679751 please clarify package account and home directory location in policy

#683222 say explicitly that debian/changelog is required in source packages

#870915 [5.6.30] Testsuite: There are much more defined values

#872893 Chapters, sections, appendices and numbering

#872895 Include multi-page HTML in package

#872896 An html.tar.gz has leaked into the .deb?

#872900 Very generic info file name

#872950 Too much indirection in info file menus

#873819 upgrading-checklist.txt: typo pgpsignurlmangle in section 4.11 of V…

#874411 missing line breaks in summary of ways maintainers scripts are call…

Planet DebianRuss Allbery: Free software log (July and August 2017)

I've wanted to start making one of these posts for a few months but have struggled to find the time. But it seems like a good idea, particularly since I get more done when I write down what I do, so you all get a rather belated one. This covers July and August; hopefully the September one will come closer to the end of September.

Debian

August was DebConf, which included a ton of Policy work thanks to Sean Whitton's energy and encouragement. During DebConf, we incorporated work from Hideki Yamane to convert Policy to reStructuredText, which has already made it far easier to maintain. (Thanks also to David Bremner for a lot of proofreading of the result.) We also did a massive bug triage and closed a ton of older bugs on which there had been no forward progress for many years.

After DebConf, as expected, we flushed out various bugs in the reStructuredText conversion and build infrastructure. I fixed a variety of build and packaging issues and started doing some more formatting cleanup, including moving some footnotes to make the resulting document more readable.

During July and August, partly at DebConf and partly not, I also merged wording fixes for seven bugs and proposed wording (not yet finished) for three more, as well as participated in various Policy discussions.

Policy was nearly all of my Debian work over these two months, but I did upload a new version of the webauth package to build with OpenSSL 1.1 and drop transitional packages.

Kerberos

I still haven't decided my long-term strategy with the Kerberos packages I maintain. My personal use of Kerberos is now fairly marginal, but I still care a lot about the software and can't convince myself to give it up.

This month, I started dusting off pam-krb5 in preparation for a new release. There's been an open issue for a while around defer_pwchange support in Heimdal, and I spent some time on that and tracked it down to an upstream bug in Heimdal as well as a few issues in pam-krb5. The pam-krb5 issues are now fixed in Git, but I haven't gotten any response upstream from the Heimdal bug report. I also dusted off three old Heimdal patches and submitted them as upstream merge requests and reported some more deficiencies I found in FAST support. On the pam-krb5 front, I updated the test suite for the current version of Heimdal (which changed some of the prompting) and updated the portability support code, but haven't yet pulled the trigger on a new release.

Other Software

I merged a couple of pull requests in podlators, one to fix various typos (thanks, Jakub Wilk) and one to change the formatting of man page references and function names to match the current Linux manual page standard (thanks, Guillem Jover). I also documented a bad interaction with line-buffered output in the Term::ANSIColor man page. Neither of these have seen a new release yet.

Planet DebianDirk Eddelbuettel: RcppClassic 0.9.7

A rather boring and otherwise uneventful release 0.9.7 of RcppClassic is now at CRAN. This package provides a maintained version of the otherwise deprecated first Rcpp API; no new projects should use it.

Once again no changes in user-facing code. But this makes it the first package to use the very new and shiny pinp package as the backend for its vignette, now converted to Markdown---see here for this new version. We also updated three source files for tabs versus spaces as the current g++ version complained (correctly !!) about misleading indents. Otherwise a file src/init.c was added for dynamic registration, the Travis CI runner script was updated to use run.sh from our r-travis fork, and we now strip the library after it has been built. Again, no user code changes.

And to reiterate: nobody should use this package. Rcpp is so much better in so many ways---this one is simply available as we (quite strongly) believe that APIs are contracts, and as such we hold up our end of the deal.

Courtesy of CRANberries, there are changes relative to the previous release.

Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianIain R. Learmonth: Free Software Efforts (2017W37)

I’d like to start making weekly reports again on my free software efforts. Part of the reason for these reports is for me to see how much time I’m putting into free software. Hopefully I can keep these reports up.

Debian

I have updated txtorcon (a Twisted-based asynchronous Tor control protocol implementation used by ooniprobe, magic-wormhole and tahoe-lafs) to its latest upstream version. I’ve also added two new binary packages that are built by the txtorcon source package: python3-txtorcon and python-txtorcon-doc for Python 3 support and generated HTML documentation respectively.

I have gone through the scapy (Python module for the forging and dissection of network packets) bugs and closed a couple that seem to have been silently fixed by new upstream releases and not been caught in the BTS. I’ve uploaded a minor revision to include a patch that fixes the version number reported by scapy.

I have prepared and uploaded a new package for measurement-kit (a portable C++11 network measurement library) from the Open Observatory of Network Interference, which at time of writing is still in the NEW queue. I have also updated ooniprobe (probe for the Open Observatory of Network Interference) to its latest upstream version.

I have updated the Swedish debconf strings in the xastir (X Amateur Station Tracking and Information Reporting) package, thanks to the translators.

I have updated the direwolf (soundcard terminal node controller for APRS) package to its latest upstream version and fixed the creation of the system user used to run direwolf with systemd, so that it happens at the time the package is installed. Unfortunately, it has been necessary to drop the PDF documentation from the package as I was unable to contact the upstream author and acquire the Microsoft Word sources for this release.

I have reviewed and sponsored the uploads of the new packages comptext (GUI based tool to compare two text streams), comptty (GUI based tool to compare two radio teletype streams) and flnet (amateur radio net control station software) in the hamradio team. Thanks to Ana Custura for preparing those packages, comptext and comptty are now available in unstable.

I have updated the Debian Hamradio Blend metapackages to include cubicsdr (a software defined radio receiver). This build also refreshes the list of packages that can now be included as they had not been packaged at the time of the last build.

I have produced and uploaded an initial package for python-azure-devtools (development tools for Azure SDK and CLI for Python) and have updated python-azure (the Azure SDK for Python) to a recent git snapshot. Due to some issues with python-vcr it is currently not possible to run the test suite as part of the build process and I’m watching the situation. I have also fixed the auto dependency generation for python3-azure, which had previously been broken.

Bugs closed (fixed/wontfix): #873036, #871940, #869566, #873083, #867420, #861753, #855385, #855497, #684727, #683711

Tor Project

I have been working through tickets for Atlas (a tool for looking up details about Tor relays and bridges) and have merged and deployed a number of fixes. Some highlights: bandwidth sorting in search results is now semantically correct (not just an alphanumeric sort ignoring units); the details page now shows when a relay was first seen, along with the host name if a reverse DNS record has been found for the relay’s IP address; and support was added for the NoEdConsensus flag (although, happily, no relays had this flag at the time the support was added).

The metrics team has been working on merging projects into the metrics team website to give a unified view of information about the Tor network. This week I have been working towards a prototype of a port of Atlas to the metrics website’s style and this work has been published in my personal Atlas git repository. If you’d like to have a click around, you can do so.

A relay operators meetup will be happening in Montreal on the 14th of October. I won’t be present, but I have taken this opportunity to ask operators if there’s anything that they would like from Atlas that they are not currently getting. Some feedback has already been received and turned into code and trac tickets.

I also attended the weekly metrics team meeting in #tor-dev.

Bugs closed (fixed/wontfix): #6787, #9814, #21958, #21636, #23296, #23160

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

I continue to be happy to spend my time on this work; however, I find myself in a position where it may not be sustainable when it comes to hardware. My desktop, a Sun Ultra 24, is now 10 years old and I'm starting to see random reboots which so far have not been explained. It is incredibly annoying to have this happen during a long build. Further, the hard drives in my NAS, which are used for local backups and for my local Debian mirror, are starting to show SMART errors. It is not currently within my budget to replace any of this hardware. Please contact me if you believe you can help.

This week's energy was provided by Club Mate

Planet Debian: Uwe Kleine-König: IPv6 in my home network

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what could be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net

Things that don't work

There are a few things, however, that I'd like to have, which are not that easy, it seems:

ULA for both nets

I let the two routers each announce a ULA prefix. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net, but not the other way round, which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like local communication to work independently of global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more; odhcpd is supposed to be used instead, and AFAICT it is unable to send RAs on the wan interface. I could probably install bird, but that seems a bit oversized. I created an entry in the LEDE forum, but without any reply up to now.
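
For concreteness, what I am missing is roughly the following radvd-style stanza; the interface name and ULA prefix below are placeholders, not my actual configuration:

interface eth1
{
    # wan interface towards the home net (placeholder name)
    AdvSendAdvert on;

    # advertise a route to the desk net's ULA prefix (placeholder prefix)
    route fd00:1234:5678::/64
    {
        AdvRouteLifetime 1800;
    };
};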

Alternatively (but less pretty) I could set up an IPv6 route in the FRITZ!Box, but that only works with a newer firmware, and as this router is owned by my provider I cannot update it.

Firewalling

The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface-ID, but that only works for hosts in the home net, not the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net also to the router of that subnet. (Hello AVM, did you hear me? At least a checkbox for that would be nice.)

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, August 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month.

The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file 60. The number of packages with open issues decreased slightly compared to last month, but we’re not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the increased usage of fuzzers.

Thanks to our sponsors

New sponsors are in bold.



Planet Debian: Dirk Eddelbuettel: pinp 0.0.1: pinp is not PNAS

A brand-new and very exciting (to us, at least) package called pinp just arrived on CRAN, following a somewhat unnecessarily long passage out of incoming. It is based on the PNAS LaTeX style offered by the Proceedings of the National Academy of Sciences of the United States of America, or PNAS for short. And there is already a Markdown version in the wonderful rticles package.

But James Balamuta and I thought we could do one better when we were looking to typeset our recent PeerJ Preprint as an attractive looking vignette for use within the Rcpp package.

And so we did, changing a few things (font, color, use of natbib and Chicago.bst for references, removal of a bunch of extra PNAS-specific formalities from the front page) and customizing a number of other things for easier use by vignettes directly from the YAML header (draft-mode watermark, doi or url for packages, easier author naming in the footer, bibtex file and more).

We are quite pleased with the result, which seems ready for the next Rcpp release---see e.g., these two teasers:

[two teaser images of the pinp output]

The pinp package page and the GitHub repo have the full (four double-)pages of what turned a more dull-looking 27-page manuscript into eight crisp two-column pages.

We have a few more things planned (e.g., switching to single-column mode, turning on line numbers at least in one-column mode).

For questions or comments use the issue tracker off the GitHub repo.


Planet Debian: Dirk Eddelbuettel: drat 0.1.3

A new version of drat arrived earlier today on CRAN as another no-human-can-delay-this automatic upgrade directly from the CRAN prechecks. It is mostly a maintenance release ensuring PACKAGES.rds is also updated, plus some other minor edits.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users, because repositories with marked releases are the better way to distribute code.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.3 (2017-09-16)

  • Ensure PACKAGES.rds, if present, is also inserted in repo

  • Updated 'README.md' removing stale example URLs (#63)

  • Use https to fetch Travis CI script from r-travis

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.


TED: 5 reasons to convince your boss to send you to TEDWomen this year

Inspiration, challenge, community — when we listen to great ideas together, great things can happen. Photo: Stacie McChesney / TED

Every year at TEDWomen, we gather to talk about issues that matter, to learn and bond and get energized. This year, we will be reconvening on November 1–3 in New Orleans — and we would love for you, and your amazing perspective and ideas, to join us and become part of this diverse, welcoming group that’s growing every year.

Join us at TEDWomen 2017 >>

However, there’s a challenge we’re hearing from some of you — especially those who’d like to attend in a professional capacity. And it’s this: It’s hard to explain to your boss how this conference can contribute to your professional success and development.

What we know from past attendees is that TEDWomen is an extraordinary professional development event — sending people back to work refreshed, connected and full of ideas. We’d love to encourage more people to attend with professional growth in mind. So, if you’re interested in attending TEDWomen, here are some talking points to support you when you ask for your share of the staff-development budget:

1. At TEDWomen, you’ll learn tools to craft better messages, to listen and connect more deeply, to problem-solve and spark new ideas. What you hear onstage — and from fellow attendees — will spark new thinking that you can bring back to your team. (Many TEDsters, in fact, schedule a team meeting for the week after TED to download what they learned.) As one attendee wrote: “Amazing and inspiring overall. I’m leaving a better person because of it.”

Join an audience of curious and enthusiastic lifelong learners and doers. Photo: Marla Aufmuth / TED

2. TEDWomen is where some of the boldest conversations are happening — which can help you kickstart the conversations your organization needs to have. You’ll hear about new markets and new power structures, learn how people are engaging with diversity internally and externally, and get new ideas for leveraging technology. Because you never know where your company’s next great idea may come from. As one attendee told us: “I am a VP at a Fortune 500 company and this conference was life-changing for me. There are so many execs who have the experience, money and resources to help drive the causes that were discussed.”

3. The TEDWomen community is a powerful network, offering connections across many fields and in many countries. VC Chris Fralic once described the benefit of attending TED in four words: “permission to follow up.” TEDWomen is not a place for high-pitched networking — it’s designed to be a place to connect over conversations that matter, to plant seeds for collaborations and real relationships. As one attendee said: “I connected with so many people with whom I am able to help grow their work and they are going to work with me to grow mine. I think it is terrific that TED provides such meaningful resources for attendees to connect and converse.”

Well, we make no promises that you too will get a selfie with Sandi Toksvig, left, host of the Great British Bake Off, but yes, connections like this happen at TEDWomen all the time. The audience and speakers are all part of the same amazing community. Photo: Stacie McChesney / TED

4. Finally, it’s just a great conference — offering TED’s legendary high quality, brilliant content and attention to detail at every turn, at a more approachable price. Attendees tell us things like: “Single best and most diverse event that I’ve been to” and “It was a truly immersive, brilliant experience that left me feeling mentally refreshed and inspired. This was my first TED, and I can see why people get addicted to coming back year upon year.”

5. You don’t have to wait to be invited. In fact, consider this blog post your invitation to TEDWomen. We truly want to diversify and grow the audience for this conference, to increase the network effect that happens when great people get together. Come join us for what one attendee calls “a truly transformative conference and experience. TED has become a very important part of my CEO/executive life in feeding my soul!”

Apply to attend TEDWomen 2017 — we can’t wait to meet you!

We hope to see you at TEDWomen, where our awesome audience is as vital to the magic as any speaker on stage. Photo: Marla Aufmuth / TED


Planet Linux Australia: Dave Hall: Trying Drupal

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content-as-a-service CMS platform, and contrasted that with the "Try Drupal" experience. Below is the walk-through of both.

Let's start with Contentful. I start off by visiting their website.

Contentful homepage

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

Contentful signup form

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

Contentful installer wait

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Contentful dashboard

Overall this took around a minute and required very little thought. I never once found myself thinking "come on, hurry up".

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

Drupal homepage

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Try Drupal providers

Like with Contentful I'm asked to create an account. Again I have the option of using Google for the sign-up or completing a form. This form has more fields than Contentful's.

Pantheon signup page

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

Pantheon dashboard

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

Pantheon create site form

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Pantheon choose application page

Now my site is deploying. While this happens there is a bunch of items that update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

Pantheon site installer page

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Pantheon site dashboard

Now I'm asked to select a language. OK, so Drupal supports multiple languages, that's nice. Let's select English so I can finally get to try Drupal.

Drupal installer, language selection

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Drupal installer, choose installation profile

Now I need to create an account. About 10 minutes ago I already created an account. Why do I need to create another one? I also named my site earlier in the process.

Drupal installer, configuration form part 1
Drupal installer, configuration form part 2

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

Drupal site homepage

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned the process somewhere along the way. I would love to see the conversion stats for the Try Drupal service. It must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign-up page. How does that let me try Drupal?

Acquia drops you onto a page where you select your role, then you're presented with some marketing stuff and a form to request a demo. That is, unless you're running an ad blocker; then when you select your role you get an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have simplytest.me added to the try Drupal page ultimately failed due to the financial implications. While this is disappointing I don't think simplytest.me is necessarily the answer either.

There need to be some minimum standards for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this, the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.


Cryptogram: Friday Squid Blogging: Another Giant Squid Caught off the Coast of Kerry

The Flannery family have caught four giant squid, two this year.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.


Planet Debian: Ben Hutchings: Debian LTS work, August 2017

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 1 hour from July. I only worked 10 hours, so I will carry over 6 hours to the next month.

I prepared and released an update on the Linux 3.2 longterm stable branch (3.2.92), and started work on the next update. I rebased the Debian linux package on this version, but didn't yet upload it.

Sociological Images: Research Finds Obesity is in the Eye of the Beholder

In an era of body positivity, more people are noting the way American culture stigmatizes obesity and discriminates by weight. One challenge for studying this inequality is that a common measure for obesity—Body Mass Index (BMI), weight divided by the square of height—has been criticized for ignoring important variation in healthy bodies. Plus, the basis for weight discrimination is what other people see as “too fat,” and that’s a standard with a lot of variation.
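
For reference, the arithmetic behind the measure is simple, and the category labels used in the study follow the standard WHO cut-points; a small, purely illustrative Python sketch:

def bmi(weight_kg, height_m):
    # weight in kilograms divided by the square of height in metres
    return weight_kg / height_m ** 2

def category(value):
    # standard WHO cut-points for labels like "Underweight" and "Obese"
    if value < 18.5:
        return "Underweight"
    if value < 25:
        return "Normal"
    if value < 30:
        return "Overweight"
    return "Obese"

value = bmi(85, 1.75)
print(round(value, 1), category(value))   # 27.8 Overweight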

Recent research in Sociological Science from Vida Maralani and Douglas McKee gives us a picture of how the relationship between obesity and inequality changes with social context. Using data from the National Longitudinal Surveys of Youth (NLSY), Maralani and McKee measure BMI in two cohorts, one in 1981 and one in 2003. They then look at social outcomes seven years later, including wages, the probability of a person being married, and total family income.

The figure below shows their findings for BMI and 2010 wages for each group in the study. The dotted lines show the same relationships from 1988 for comparison.

For White and Black men, wages actually go up as their BMI increases from the “Underweight” to “Normal” ranges, then level off and slowly decline as they cross into the “Obese” range. This pattern is fairly similar to 1988, but check out the “White Women” graph in the lower left quadrant. In 1988, the authors find a sharp “obesity penalty” in which women over a BMI of 30 reported a steady decline in wages. By 2010, this had largely leveled off, but wage inequality didn’t go away. Instead, the spike near the beginning of the graph suggests people perceived as skinny started earning more. The authors write:

The results suggest that perceptions of body size may have changed across cohorts differently by race and gender in ways that are consistent with a normalizing of corpulence for black men and women, a reinforcement of thin beauty ideals for white women, and a status quo of a midrange body size that is neither too thin nor too large for white men (pgs. 305-306).

This research brings back an important lesson about what sociologists mean when they say something is “socially constructed”—patterns in inequality can change and adapt over time as people change the way they interpret the world around them.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Another iPhone Change to Frustrate the Police

I recently wrote about the new ability to disable the Touch ID login on iPhones. This is important because of a weirdness in current US law that protects people's passcodes from forced disclosure in ways it does not protect actions: being forced to place a thumb on a fingerprint reader.

There's another, more significant, change: iOS now requires a passcode before the phone will establish trust with another device.

In the current system, when you connect your phone to a computer, you're prompted with the question "Trust this computer?" and you can click yes or no. Now you have to enter your passcode again. That means if the police have an unlocked phone, they can scroll through the phone looking for things, but they can't download all of the contents onto another computer without also knowing the passcode.

More details:

This might be particularly consequential during border searches. The "border search" exception, which allows Customs and Border Protection to search anything going into the country, is a contentious issue when applied to electronics. It is somewhat (but not completely) settled law, but the fact that the U.S. government can, without any cause at all (not even "reasonable articulable suspicion", let alone "probable cause"), copy all the contents of my devices when I reenter the country sows deep discomfort in myself and many others. The only legal limitation appears to be a promise not to use this information to connect to remote services. The new iOS feature means that a Customs officer can browse through a device -- a time-limited exercise -- but not download the full contents.

Worse Than Failure: Error'd: Have it Your Way!

"You can have any graphics you want, as long as it's Intel HD Graphics 515," Mark R. writes.

 

"You know, I'm pretty sure that I've been living there for a while now," writes Derreck.

 

Sven P. wrote, "Usually, I blame production outages on developers who, I swear, have trouble counting to five. After seeing this, I may want to blame the compiler too."

 

"Whenever I hear someone complaining about their device battery life, I show them this picture," wrote Renan.

 

"Prepaying for gas, my credit card was declined," Rand H. writes, "I was worried some thief must've maxed it out, but then I saw how much I was paying in taxes."

 

Brett A. wrote, "Yo Dawg I heard you like zips, so you should zip your zips to send your zips."

 


Planet Debian: Chris Lamb: Which packages on my system are reproducible?

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process.

As part of this project I wrote a script to determine which packages installed on your system are "reproducible" or not:

$ apt install devscripts
[…]

$ reproducible-check
[…]
W: subversion (1.9.7-2) is unreproducible (libsvn-perl, libsvn1, subversion) <https://tests.reproducible-builds.org/debian/subversion>
W: taglib (1.11.1+dfsg.1-0.1) is unreproducible (libtag1v5, libtag1v5-vanilla) <https://tests.reproducible-builds.org/debian/taglib>
W: tcltk-defaults (8.6.0+9) is unreproducible (tcl, tk) <https://tests.reproducible-builds.org/debian/tcltk-defaults>
W: tk8.6 (8.6.7-1) is unreproducible (libtk8.6, tk8.6) <https://tests.reproducible-builds.org/debian/tk8.6>
W: valgrind (1:3.13.0-1) is unreproducible <https://tests.reproducible-builds.org/debian/valgrind>
W: wavpack (5.1.0-2) is unreproducible (libwavpack1) <https://tests.reproducible-builds.org/debian/wavpack>
W: x265 (2.5-2) is unreproducible (libx265-130) <https://tests.reproducible-builds.org/debian/x265>
W: xen (4.8.1-1+deb9u1) is unreproducible (libxen-4.8, libxenstore3.0) <https://tests.reproducible-builds.org/debian/xen>
W: xmlstarlet (1.6.1-2) is unreproducible <https://tests.reproducible-builds.org/debian/xmlstarlet>
W: xorg-server (2:1.19.3-2) is unreproducible (xserver-xephyr, xserver-xorg-core) <https://tests.reproducible-builds.org/debian/xorg-server>
282/4494 (6.28%) of installed binary packages are unreproducible.

Whether a package is "reproducible" or not is determined by querying the Debian Reproducible Builds testing framework.
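
The same data the script uses is available to anyone: the testing framework publishes its results as a JSON export. A rough sketch in Python of querying it directly (the URL is real, but the field names "package" and "status" are assumptions to verify against the current export before relying on them):

import json
import urllib.request

URL = "https://tests.reproducible-builds.org/debian/reproducible.json"

# Download the full set of test results and keep the unreproducible ones
with urllib.request.urlopen(URL) as response:
    results = json.load(response)

unreproducible = sorted(
    entry["package"]
    for entry in results
    if entry.get("status") == "unreproducible"
)
print(len(unreproducible), "source packages currently unreproducible")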



The --raw command-line argument lets you play with the data in more detail. For example, you can see who maintains your unreproducible packages:

$ reproducible-check --raw | dd-list --stdin
Alec Leamas <leamas.alec@gmail.com>
   lirc (U)

Alessandro Ghedini <ghedo@debian.org>
   valgrind

Alessio Treglia <alessio@debian.org>
   fluidsynth (U)
   libsoxr (U)
[…]


reproducible-check is available in devscripts since version 2.17.10, which landed in Debian unstable on 14th September 2017.


Cryptogram: Securing a Raspberry Pi

A Raspberry Pi is a tiny computer designed for makers and all sorts of Internet-of-Things types of projects. Make magazine has an article about securing it. Reading it, I am struck by how much work it is to secure. I fear that this is beyond the capabilities of most tinkerers, and the result will be even more insecure IoT devices.

Krebs on Security: Equifax Hackers Stole 200k Credit Card Accounts in One Fell Swoop

Visa and MasterCard are sending confidential alerts to financial institutions across the United States this week, warning them about more than 200,000 credit cards that were stolen in the epic data breach announced last week at big-three credit bureau Equifax. At first glance, the private notices obtained by KrebsOnSecurity appear to suggest that hackers initially breached Equifax starting in November 2016. But Equifax says the accounts were all stolen at the same time — when hackers accessed the company’s systems in mid-May 2017.


Both Visa and MasterCard frequently send alerts to card-issuing financial institutions with information about specific credit and debit cards that may have been compromised in a recent breach. But it is unusual for these alerts to state from which company the accounts were thought to have been pilfered.

In this case, however, Visa and MasterCard were unambiguous, referring to Equifax specifically as the source of an e-commerce card breach.

In a non-public alert sent this week to sources at multiple banks, Visa said the “window of exposure” for the cards stolen in the Equifax breach was between Nov. 10, 2016 and July 6, 2017. A similar alert from MasterCard included the same date range.

“The investigation is ongoing and this information may be amended as new details arise,” Visa said in its confidential alert, linking to the press release Equifax initially posted about the breach on Sept. 7, 2017.

The card giant said the data elements stolen included card account number, expiration date, and the cardholder’s name. Fraudsters can use this information to conduct e-commerce fraud at online merchants.

It would be tempting to conclude from these alerts that the card breach at Equifax dates back to November 2016, and that perhaps the intruders then managed to install software capable of capturing customer credit card data in real-time as it was entered on one of Equifax’s Web sites.

Indeed, that was my initial hunch in deciding to report out this story. But according to a statement from Equifax, the hacker(s) downloaded the data in one fell swoop in mid-May 2017.

“The attacker accessed a storage table that contained historical credit card transaction related information,” the company said. “The dates that you provided in your e-mail appear to be the transaction dates. We have found no evidence during our investigation to indicate the presence of card harvesting malware, or access to the table before mid-May 2017.”

Equifax did not respond to questions about how it was storing credit card data, or why only card data collected from customers after November 2016 was stolen.

In its initial breach disclosure on Sept. 7, Equifax said it discovered the intrusion on July 29, 2017. The company said the hackers broke in through a vulnerability in the software that powers some of its Web-facing applications.

In an update to its breach disclosure published Wednesday evening, Equifax confirmed reports that the application flaw in question was a weakness disclosed in March 2017 in a popular open-source software package called Apache Struts (CVE-2017-5638).

“Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted,” the company wrote. “We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.”

The Apache flaw was first spotted around March 7, 2017, when security firms began warning that attackers were actively exploiting a “zero-day” vulnerability in Apache Struts. Zero-days refer to software or hardware flaws that hackers find and figure out how to use for commercial or personal gain before the vendor even knows about the bugs.

By March 8, Apache had released new versions of the software to mitigate the vulnerability. But by that time exploit code that would allow anyone to take advantage of the flaw was already published online — making it a race between companies needing to patch their Web servers and hackers trying to exploit the hole before it was closed.

Screen shots apparently taken on March 10, 2017 and later posted to the vulnerability tracking site xss[dot]cx indicate that the Apache Struts vulnerability was present at the time on annualcreditreport.com — the only web site mandated by Congress where all Americans can go to obtain a free copy of their credit reports from each of the three major bureaus annually.

In another screen shot apparently made that same day and uploaded to xss[dot]cx, we can see evidence that the Apache Struts flaw also was present in Experian’s Web properties.

Equifax has said the unauthorized access occurred from mid-May through July 2017, suggesting either that the company’s Web applications were still unpatched in mid-May or that the attackers broke in earlier but did not immediately abuse their access.

It remains unclear when exactly Equifax managed to fully eliminate the Apache Struts flaw from their various Web server applications. But one thing we do know for sure: The hacker(s) got in before Equifax closed the hole, and their presence wasn’t discovered until July 29, 2017.

Update, Sept. 15, 12:31 p.m. ET: Visa has updated their advisory about these 200,000+ credit cards stolen in the Equifax breach. Visa now says it believes the records also included the cardholder’s Social Security number and address, suggesting that (ironically enough) the accounts were stolen from people who were signing up for credit monitoring services through Equifax.

Equifax also clarified the breach timeline to note that it patched the Apache Struts flaw in its Web applications only after taking the hacked system(s) offline on July 30, 2017. Which means Equifax left its systems unpatched for more than four months after a patch (and exploit code to attack the flaw) was publicly available.

Cryptogram: Hacking Robots

Researchers have demonstrated hacks against robots, taking over and controlling their camera, speakers, and movements.

News article.

Worse Than Failure: CodeSOD: string isValidArticle(string article)

Anonymous sends us this little blob of code, which is mildly embarrassing on its own:

    static StringBuilder vsb = new StringBuilder();
    internal static string IsValidUrl(string value)
    {
        if (value == null)
        {
            return "\"\"";
        }

        vsb.Length= 0;
        vsb.Append("@\"");

        for (int i=0; i<value.Length; i++)
        {
            if (value[i] == '\"')
                vsb.Append("\"\"");
            else
                vsb.Append(value[i]);
        }

        vsb.Append("\"");
        return vsb.ToString();
    }

I’m willing to grant that re-using the same static StringBuilder object is a performance tuning thing, but everything else about this is… just plain puzzling.

The method is named IsValidUrl, but it returns a string. It doesn’t do any validation! All it appears to do is take any arbitrary string and return that string wrapped as if it were a valid C# string literal. At best, this method is horridly misnamed, but if its purpose is to truly generate valid C# strings, it has a potential bug: it doesn’t handle new-lines. Now, I’m sure that won’t be a problem that comes back up before the end of this article.

The code, taken on its own, is just bad. But when placed into context, it gets worse. This isn’t just code. It’s part of .NET’s System.Runtime.Remoting package. Still, I know, you’re saying to yourself, ‘In all the millions of lines in .NET, this is really the worst you’ve come up with?’

Well, it comes up because remember that bug with new-lines? Well, guess what. That exact flaw was a zero-day that allowed code execution… in RTF files.

Now, skim through some of the other code in wsdlparser.cs, and you'll see the real horror. This entire file has one key job: generating a class capable of parsing data according to an input WSDL file… by using string concatenation.
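
To make the danger of code-via-string-concatenation concrete, here is a deliberately simplified illustration in Python (not the actual wsdlparser.cs logic): the quote-doubling escaper above never considers newlines, and any raw value echoed into a generated line comment lets a newline terminate the comment and begin a brand-new statement.

def emit_handler(url):
    # naive quoting, analogous to IsValidUrl above: double the quotes,
    # ignore everything else
    escaped = url.replace('"', '""')
    return (
        "// endpoint: " + url + "\n"         # raw value in a comment
        + 'string url = @"' + escaped + '";'
    )

print(emit_handler("http://example.com"))        # harmless
print(emit_handler("http://x\nSystem.Evil();"))  # smuggles in a statement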

The real WTF is the fact that you can embed SOAP links in RTF files and Word will attempt to use them, thus running the WSDL parser against the input data. This is code that’s a little bad, used badly, creating an exploited zero-day.


Planet Debian: Lior Kaplan: Public money? Public Code!

An open letter published today to the EU government says:

Why is software created using taxpayers’ money not released as Free Software?
We want legislation requiring that publicly financed software developed for the public sector be made publicly available under a Free and Open Source Software licence. If it is public money, it should be public code as well.

Code paid by the people should be available to the people!

See https://publiccode.eu/ for the campaign details.

This makes me think of starting an Israeli version…



Don Marti: another 2x2 chart

What to do about different kinds of user data interchange:

  • Good data, collected without permission: Build tools and norms to reduce the amount of reliable data that is available without permission.
  • Good data, collected with permission: Develop and test new tools and norms that enable people to share data that they choose to share.
  • Bad data, collected without permission: Report on and show errors in low-quality data that was collected without permission.
  • Bad data, collected with permission: Offer users incentives and tools that help them choose to share accurate data and correct errors in voluntarily shared data.

Most people who want data about other people still prefer data that's collected without permission, and collaboration is something that they'll settle for. So most voluntary user data sharing efforts will need a defense side as well. Freedom-loving technologists have to help people reduce the amount of data that can be taken from them without permission before the people who want data will listen to them about sharing it.

Planet Debian: James McCoy: devscripts needs YOU!

Over the past 10 years, I've been a member of a dwindling team of people maintaining the devscripts package in Debian.

Nearly two years ago, I sent out a "Request For Help" since it was clear I didn't have adequate time to keep driving the maintenance.

In the meantime, Jonas split licensecheck out into its own project and took over development. Osamu has taken on much of the maintenance for uscan, uupdate, and mk-origtargz.

Although that has helped spread the maintenance costs, there's still a lot that I haven't had time to address.

Since Debian is still fairly early in the development cycle for Buster, I've decided this is as good a time as any for me to officially step down from active involvement in devscripts. I'm willing to keep moderating the mailing list and other related administrivia (which is fairly minimal given the repo is part of collab-maint), but I'll be unsubscribing from all other notifications.

I think devscripts serves as a good funnel for useful scripts to get in front of Debian (and its derivatives) developers, but Jonas may also be onto something by pulling scripts out to stand on their own. One of the troubles with "bucket" packages like devscripts is the lack of visibility into when to retire scripts. Breaking scripts out on their own, and possibly creating multiple binary packages, certainly helps with that. Maybe uscan and friends would be a good next candidate.

At the end of the day, I've certainly enjoyed being able to play my role in helping simplify the life of all the people contributing to Debian. I may come back to it some day, but for now it's time to let someone else pick up the reins.

If you're interested in helping out, you can join #devscripts on OFTC and/or send a mail to <devscripts-devel@lists.alioth.debian.org>.

Planet Debian: Dirk Eddelbuettel: RcppMsgPack 0.2.0

A new and much enhanced version of RcppMsgPack arrived on CRAN a couple of days ago. It came together following this email to the r-package-devel list, which made it apparent that Travers Ching had been working on MessagePack converters for R that required the very headers I already had for use from, inter alia, the RcppRedis package.

So we joined our packages. I updated the headers in RcppMsgPack to the current upstream version 2.1.5 of MessagePack, and Travers added his helper functions that allow direct packing / unpacking of MessagePack objects at the R level, as well as tests and a draft vignette. Very exciting, and great to have a coauthor!

So now RcppMsgPack provides R with both MessagePack header files for use via C++ (or C, if you must) packages such as RcppRedis --- and direct conversion routines at the R prompt.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.
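
A quick way to see that compactness, using the Python msgpack bindings rather than the R interface added here (the wire format is the same either way; this assumes msgpack-python 1.0 or later, where unpackb decodes strings by default):

import msgpack

assert len(msgpack.packb(5)) == 1      # a small integer is a single byte
assert len(msgpack.packb("hi")) == 3   # one header byte plus the two chars
packed = msgpack.packb({"answer": 42})
assert msgpack.unpackb(packed) == {"answer": 42}
print(len(packed), "bytes")            # 9 bytes for the whole map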

Changes in version 0.2.0 (2017-09-07)

  • Added support for building on Windows

  • Upgraded to MsgPack 2.1.5 (#3)

  • New R functions to manipulate MsgPack objects: msgpack_format, msgpack_map, msgpack_pack, msgpack_simplify, msgpack_unpack (#4)

  • New R functions also available as msgpackFormat, msgpackMap, msgpackPack, msgpackSimplify, msgpackUnpack (#4)

  • New vignette (#4)

  • New tests (#4)

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppMsgPack page. Issues and bug reports should go to the GitHub issue tracker.


Planet Debian: Dirk Eddelbuettel: RcppRedis 0.1.8

A new minor release of RcppRedis arrived on CRAN last week, following release 0.2.0 of RcppMsgPack, which brought the MsgPack headers forward to version 2.1.5. This required a minor and rather trivial change in the code. When the optional RcppMsgPack package is used, we now require its version 0.2.0 or later.

We made a few internal updates to the package as well.

Changes in version 0.1.8 (2017-09-08)

  • A new file init.c was added with calls to R_registerRoutines() and R_useDynamicSymbols()

  • Symbol registration is enabled in useDynLib

  • Travis CI was updated to using run.sh

  • The (optional MessagePack) code was updated for MsgPack 2.*

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppRedis page.


Planet Linux Australia: OpenSTEM: New Dates for Human Relative + ‘Explorer Classroom’ Resources

During September, National Geographic is featuring the excavations of Homo naledi at Rising Star Cave in South Africa in their Explorer Classroom, in tune with new discoveries and the publishing of dates for this enigmatic little hominid. A Teacher’s Guide and Resources are available and classes can log in to see live updates from the […]


TED: “World peace will come from sitting around the table”: Chef Pierre Thiam chats with food blogger Ozoz Sokoh

Chef and cookbook author Pierre Thiam, left, sits down with food blogger Ozoz Sokoh to talk about the West African rice dish jollof — beloved in Nigeria, Senegal, Ghana and around the world. But who makes it best? They spoke during TEDGlobal 2017 in Arusha, Tanzania. Photo: Callie Giovanna / TED

Two African cooks walk into a bar; 30 seconds later they are arguing over whose country’s jollof rice is better. Or so the corny joke would go. The truth is, I really had no idea what would happen if we got Senegal-born chef Pierre Thiam (TED Talk: A Forgotten Ancient Grain That Could Help Africa Prosper) and Nigerian jollof promoter Ozoz Sokoh to sit down together for a friendly chat.

Based in New York, Pierre is a world-renowned chef who grew up in Senegal and is known for his exquisite dishes and his passion for spreading African cuisine across the world. He informed me that my interview request was the third jollof-related one he had granted in a week, the previous ones coming from the BBC and Wall Street Journal. It totally makes sense that in the heat of the jollof wars that now erupt every few weeks, mostly on Twitter, usually between Nigerians and Ghanaians, pundits are turning to a Senegalese chef for their take on the dispute. Jollof, after all, is named for the Wolof people, the largest ethnic group in Senegal; the country does have some claim.

Ozoz for her own part is an accomplished cook (she declined to be called a chef because it’s like a professional certification, apparently), food blogger and photographer, and probably one of the biggest promoters of jollof rice in Africa right now, an obsession that has since burst out of her Twitter timeline into a dedicated blog and the well-attended World Jollof Day festival. Was she down to interview Pierre about the jollof controversy? Of course. In fact, Ozoz had come from Lagos armed with homemade Nigerian spices, snacks and a jollof T-shirt for Pierre.

I apologize in advance to everyone who was spoiling for some sort of fiery showdown; this isn’t it. And I will admit to influencing their conversation slightly, by suggesting to them that the jollof question was merely an interesting pretext for a broader and infinitely more useful conversation about African cuisine that both of them were incredibly suited to have. What you are about to read is what happened next.

Ozoz: I think that it’s amazing that we’ve had all these ingredients for centuries but our preference is to default to what isn’t homegrown. You were talking about fonio yesterday, and I think there is an appreciation that we need to develop for homegrown products. Apart from fonio, what other things do to think we should be going crazy about? That are locally grown and could have transformative effects on food security.

Pierre: There are countless, you see. Millet is one of them. Sorghum is another one. The leaves too, especially in Nigeria where there are so many interesting leaf vegetables that are highly recommended for diets, and many cultures don’t know them as much as Nigeria does. So there is an opportunity there to share this knowledge. People talk about moringa, but moringa is just one of them.

Ozoz: One of my concerns is how do we get people in remote, non-urban areas to realise the value of what they have around them.

Pierre: Actually I don’t think it’s people in rural areas who have this problem. It’s people in urban areas who like to mimic the westerners’ way of eating and look down on the rural way of eating. Take fonio, for instance — you find it in Northern Nigeria and the Southern part of Senegal a lot, but in Lagos, Abuja, Dakar, you have to look for it. So the rural areas, they have it because there is a tradition. That’s what they have. And they can’t even afford the food that comes from the west. But us, we prefer to import from the west, and this is terrible for our economy. It’s terrible for our sense of pride, which is affected every day.

“I think there are many rituals that we’ve lost,” Ozoz says, “but sitting around the table with family and friends is one that we need to reintroduce into our way of life.” She’s speaking with Pierre Thiam at TEDGlobal 2017. Photo: Callie Giovanna / TED

Ozoz: I feel like the attitude to homegrown is changing. Nok by Alara for instance, it has an amazing menu that is tribute to homegrown, just an amazing mixture of local flavours and textures. But what other things do you think we can do to grow the whole new Nigerian or West African-style cuisine — in addition to cooking, what other ways beyond the kitchen?

Pierre: It’s a very good question, because it goes beyond the kitchen. It’s not only chefs who can wage that battle. It takes many, many levels. The media is important because information is key. Many people don’t know: We have wonderful ingredients. We have superfoods. If you look at our DNA, our background, our ancestors were strong people and they were eating that food, and because of that they were taken, because of their strength. We today want to say that that food is not good enough, and we import diseases. Many of the diseases that you see today in Nigeria or Dakar are imported. Diabetes, high cholesterol, high blood pressure, hypertension … all of which are directly connected with your diet. We use a lot of cubes now in our diet, and that is directly linked to why there is a lot of hypertension, because there is a lot of sodium in them. It’s a mind shift, we have to get back to what we have.

Ozoz: You are right, the media plays a really important role. So jollof rice. Obviously, everyone says Nigerian Jollof is the best :) what do you think?

Pierre: I hear you. When I’m in Nigeria, I eat Nigerian jollof, that’s for sure. And I enjoy it. When I’m in Ghana, I love Ghanaian jollof too. This is the great thing about jollof, jollof is a dish that’s like all these different cultures and countries just owning it. Jollof means Senegal [ed: the name derives from “Wolof“], but that doesn’t mean we own it. That is the way Africa is, food transcends borders, you know, and jollof has obviously transcended borders in a way that is powerful. This war is beautiful.

Ozoz: So you think Jollof can promote world peace?

Pierre: Absolutely. I think world peace will come from sitting around the table.

Pierre Thiam says: “When I’m in Nigeria, I eat Nigerian jollof, that’s for sure. And I enjoy it. When I’m in Ghana, I love Ghanaian jollof too. That is the way Africa is: food transcends borders.” Photo: Callie Giovanna / TED

Ozoz: I think there are many rituals that we’ve lost, but sitting around the table with family and friends is one that we need to reintroduce into our way of life.

Pierre: It is key. Simple moments like this on a daily basis can make a huge difference. And jollof rice is a symbolic dish; it's great that everyone claims it.

Ozoz: it’s so refreshing to hear you say that — it’s a testament to your open and giving nature

Pierre: That’s what food is about: sharing. In Africa you go to a household and people offer you food. Food is something we don’t keep to ourselves, we have to share it. If you go to a household in Lagos, you will be offered something to drink, zobo, it’s a symbolic thing.

Ozoz: I was really, really fascinated to read Senegal: Modern Senegalese Recipes from the Source to the Bowl. I was really intrigued by the palm oil recipes, particularly the palm oil ice cream. Really, really intrigued; it looks really amazing and it's on my list of things to make once I get back and settle down. I'm gonna get organic palm oil, the best quality that I can find.

Pierre: That’s the best ice cream I’ve ever had.

Ozoz: It looks the part.

Pierre: I want to hear what you have to say when you make it.

Ozoz: Tell me about how you developed this recipe. Were you sleeping? Was it midnight? How did it come to you?

Pierre: At first I wanted to have something vegan, something without dairy — as you can see, there is no dairy in that recipe. But when you eat it, you don’t taste that there is no dairy, it’s got the richness of the palm oil. There’s coconut milk, there is palm oil, and there is lime zest, which really brings the acidity. So you have a perfect balance, which is what you are really looking for. Creating new recipes is like chemistry. Your kitchen is your lab, and you just get creative and have fun with it.

Ozoz: I find myself thinking a lot about my memory bank…my taste bank. There are certain things I eat that transport me to a time, a place…what are some of the things that are in your memory bank, and can you share a bit about why they are there?

Pierre: Well, it usually goes back to childhood. The memories of food are powerful, and it can come from anything. Like a whiff that takes you back to your grandmother’s, the dishes that she would cook for you when you were a kid. So for me, I’m gonna come back to palm oil and okro, those are the ingredients that are very powerful to me and take me back to those moments of innocence. It’s very emotional when I get into that zone. A lot of my creations come from there, and those traditions. And that is why traditions are important. I think that any African chef before looking to the future has to go back into the past and remember what was served to them in their childhood — or do some research into the traditions and get a better grasp of the future.

Ozoz: If you were a spice, what would you be?

Pierre: Probably ginger, because I like the heat of it. Especially Nigerian ginger. I like it because it can bring the sensation of heat without being too overpowering like pepper.

Ozoz: If you were a fruit, what would you be?

Pierre: A fruit, huh? I love papaya, because I can use it as a dessert, or as a tenderiser when I’m cooking meat. I love green papaya that I can put in a salad, with red onions and chili and lime juice, that becomes a snack. It’s very versatile.

Ozoz: I think the future of food in Africa has a lot to do with collaboration. How do we grow this collective of voices around it: writers, food photographers, chefs… In the US, for instance, there are associations and foundations, but I'm not sure those constructs would suit African needs. What should we be thinking about if we are to take the appreciation of our food history and practice of the culture to the next level?

Pierre: I think that this conversation is important to have…like chef’s meetings. It could be around events. For instance, this November I’m inviting chefs to Saint-Louis, in Senegal. And they are coming from across Africa, from Cameroon, Morocco, Cote d’Ivoire, South Africa, and they are coming to this event as part of the Saint-Louis Forum. Each of us will come with our own traditions and approach to food.

Ozoz: You are absolutely right, that coming together, exchange of ideas, discussions …

Bankole in the background: blogging, food festivals…

Ozoz: Yes. We talked about the role of media earlier. Writing, podcasts, videos, how-tos, documentaries, it’s a whole range.

Pierre: And it’s the right time, right now, we have a lot of tools at our disposal. We don’t need big networks to broadcast this, we can do it ourselves and reach millions of people. As Africans, we have a unique opportunity to tell our story. African cuisine is ready to be explored, we’ve got so much to offer from each country and so many different cultures with different flavors.

Surrounded by mounds of fresh ingredients, Pierre Thiam preps fonio sushi rolls to share onstage at TEDGlobal 2017. Photo: Ryan Lash / TED

 

Ozoz: Quick fire round. Zobo or tamarind?

Pierre: Zobo.

Ozoz: What do you always have in your fridge?

Pierre: Oh boy…I don’t have much in my fridge…

Ozoz: What food can’t you live without?

Pierre: Uh? This is going to sound clichéd but I really love my fonio on a regular basis.

Ozoz: I don’t mind that. Foraging or fishing?

Pierre: Fishing.

Ozoz: Cumin or coriander seeds?

Pierre: Cumin.

Ozoz: Rain or sun?

Pierre: Sun.

Ozoz: Pancakes or French toast?

Pierre: French toast.

Ozoz: Food writing or photography?

Pierre: Both. Actually photography is very important, but good food writing can transport you to places in your imagination, which is more difficult to capture with photography.

Ozoz: Cilantro or parsley?

Pierre: Cilantro.

Ozoz: Last one. Nigerian jollof or Ghanaian jollof?

Pierre: Senegalese …

To share with Pierre, Ozoz brought a package of homemade spice mixes from Nigeria, including yaji spice, a peanut-based mixture of smoky and spicy aromatics that’s traditionally used to make suya, a popular street food. Photo: Callie Giovanna / TED


Cryptogram: On the Equifax Data Breach

Last Thursday, Equifax reported a data breach that affects 143 million US customers, about 44% of the population. It's an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver's license numbers -- exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it's happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can't fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn't notice, you're not Equifax's customer. You're its product.

This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It's a data broker. It collects information about all of us, analyzes it all, and then sells those insights.

Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you'd be a profitable customer -- everyone who wants to sell you something, even governments.

It's not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you -- almost all of them companies you've never heard of and have no business relationship with.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You're secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don't have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations -- just in case I ever decide to join.

I also don't have a Gmail account, because I don't want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can't even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if newperson@company.com is hosted at Gmail.

And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can't opt out. Sometimes it's a company like Equifax that doesn't answer to us in any way. Sometimes it's a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it's our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that's not a realistic option for most people living in 21st-century America.

The companies that collect and sell our data don't need to keep it secure in order to maintain their market share. They don't have to answer to us, their products. They know it's more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?

Yes, it's a huge black eye for the company -- this week. Soon, another company will have suffered a massive data breach and few will remember Equifax's problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

This market failure isn't unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.

Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.

By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax's data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of "unfair and deceptive trade practices," there's nothing it can do. Perhaps there will be a class-action lawsuit, but because it's hard to draw a line between any of the many data breaches you're subjected to and a specific harm, courts are not likely to side with you.

If you don't like how careless Equifax was with your data, don't waste your breath complaining to Equifax. Complain to your government.

This essay previously appeared on CNN.com.

EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn't know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don't remember which radio show interviewed me. I kind of hope it didn't air.

TEDThis is how to make Pierre Thiam’s fonio sushi

Pierre Thiam’s fonio sushi recipe wraps chunks of fresh vegetables in a mixture of the ancient fonio grain and sweet potato.  Photo: Ryan Lash / TED

If you’ve seen Pierre Thiam’s TED Talk about fonio, then you saw that part when he actually handed food out to the audience, yes? For those who didn’t know to sit in the front rows to receive that blessing (or couldn’t be there in the first place), and don’t mind rolling up their sleeves in the kitchen, Pierre has shared the recipe and cooking instructions for anyone who would like to re-create his fonio sushi.

No, I haven’t tried it yet, but if you can procure all the ingredients, especially the fonio, obviously, it looks super easy to make! Here we go.

To make Fonio Sweet Potato and Okra Sushi, you are going to need:

1 cup cooked fonio
1 cooked and mashed sweet potato
1 tbsp. rice vinegar
Salt to taste
1 carrot, cut into sticks and blanched
1 cucumber, seeded and cut into sticks
2 cups young okra, trimmed on both ends, blanched and shocked in iced water
1 package nori seaweed sheets, toasted

In a large bowl, combine cooked fonio, sweet potato and rice vinegar. Season with salt. Lay a bamboo sushi mat on a smooth surface, and lay a seaweed sheet out on the mat. Using a paddle or your hands, spread the fonio-sweet potato mixture evenly and thinly, leaving about 2 inches of the seaweed edge farthest from you uncovered.

Lay out the fonio mixture evenly on top of the nori sheet, leaving space at the far end for rolling. Photo: Ryan Lash / TED

Lay cucumber sticks in a row at the edge nearest you. Lay out a row of carrot sticks next, then a row of okra. Moisten the far edge of the nori with fingers dipped in water. Take the edge closest to you and roll the nori sheet as tightly as possible until you have one complete roll.

Lay out a row of cucumber, a row of carrot and a row of okra, then carefully roll everything together, using the bamboo mat for support. Photo: Ryan Lash / TED

Press the moistened edge against the roll to seal, and place the roll seam side down. Run your knife under warm water, to prevent sticking, and carefully slice the roll into 6–8 pieces.

Neaten up the edges, then slice the roll, using a damp knife to prevent sticking. Photo: Ryan Lash / TED

Serve with soy sauce and wasabi, and garnish with spice if you like; when preparing the sushi for the TEDGlobal audience, Pierre used dehydrated dawadawa. This recipe serves four. Enjoy.

Pierre Thiam garnishes his fonio sushi with dehydrated dawadawa for spice and color. Photo: Ryan Lash / TED


Krebs on SecurityAdobe, Microsoft Plug Critical Security Holes

Adobe and Microsoft both on Tuesday released patches to plug critical security vulnerabilities in their products. Microsoft’s patch bundles fix close to 80 separate security problems in various versions of its Windows operating system and related software — including two vulnerabilities that already are being exploited in active attacks. Adobe’s new version of its Flash Player software tackles two flaws that malware or attackers could use to seize remote control over vulnerable computers with no help from users.


Of the two zero-day flaws being fixed this week, the one in Microsoft’s ubiquitous .NET Framework (CVE-2017-8759) is perhaps the most concerning. Despite this flaw being actively exploited, it is somehow labeled by Microsoft as “important” rather than “critical” — the latter being the most dire designation.

More than two dozen flaws Microsoft remedied with this patch batch come with a “critical” warning, which means they could be exploited without any assistance from Windows users — save for perhaps browsing to a hacked or malicious Web site.

Regular readers here probably recall that I’ve often recommended installing .NET updates separately from any remaining Windows updates, mainly because in past instances in which I’ve experienced problems installing Windows updates, a .NET patch was usually involved.

For the most part, Microsoft now bundles all security updates together in one big patch ball for regular home users — no longer letting people choose which patches to install. One exception is patches for the .NET Framework, and I stand by my recommendation to install the patch roll-ups separately, reboot, and then tackle the .NET updates. Your mileage may vary.

Another vulnerability Microsoft fixed addresses “BlueBorne” (CVE-2017-8628), a flaw in the Bluetooth wireless data transmission standard that attackers could use to snarf data from physically nearby devices that have Bluetooth turned on.

For more on this month’s Patch Tuesday from Microsoft, check out Microsoft’s security update guide, as well as this blog from Ivanti (formerly Shavlik).

Adobe’s newest Flash version — v. 27.0.0.130 for Windows, Mac and Linux systems — corrects two critical bugs in Flash. For those of you who still have and want Adobe Flash Player installed in a browser, it’s time to update and/or restart your browser.

Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version when the browser restarts, though users may need to check for updates manually. When in doubt in Chrome, click the vertical three-dot icon to the right of the URL bar, select “Help,” then “About Chrome”: if there is an update available, Chrome should install it then. (Chrome replaces that three-dot icon with an up-arrow inside of a circle when updates are ready to install.)

Better yet, consider removing or at least hobbling Flash Player, which is a perennial target of malware attacks. Most sites have moved away from requiring Flash, and Adobe itself is sunsetting the product (albeit not for another two years).

Windows users can get rid of Flash through the Add/Remove Programs menu, unless they’re using Chrome, which bundles its own version of Flash Player. To get to the Flash settings page, type or cut and paste “chrome://settings/content” into the address bar, and click on the Flash result.

Planet DebianShirish Agarwal: Android, Android marketplace and gaming addiction.

This is going to be a longish piece, so please bear with me and settle in with tea, coffee, beer or anything stronger that you desire while reading below 🙂

I had bought an Android phone, a Samsung J5, just before going to debconf 2016. It was more for being in-trend than for really using it. The linked model is the upgraded (recentish) version; the one I have is the 2 GB variant, for which I paid around double the list price. The only reason I bought this model is that it had a ‘removable battery’ at the price point I was willing to pay. I did see that Samsung has the same ham-handed issues with audio that Nokia devices used to: the speakers and microphone are probably the cheapest you can get on the market. Nokia was the same, at least at the lower end of the market, while Oppo has loud ringtones and loud music, perfect for those who are a bit hard of hearing (as yours truly is).

I had been pleasantly surprised by the quality of photos the Samsung J5 was churning out, even though I’m a less-than-average shooter and have never really been into photography, so it was a sort of wake-up call for how far camera sensor technology has advanced. And of course with newer phones the kind of detail they can capture is mesmerizing to say the least, although wide-angle shots would still take some time to get right, I guess.

If memory serves me right, sometime back Laura Arjona Reina (who handles part of debian-publicity and part of debian-women, among other responsibilities) shared a blog post on p.d.o. about the troubles she had while exporting data from her phone. She shared it a while ago, and I lack the time and the energy to find it now (that specific blog post is really worth bookmarking).

Interestingly, a few years ago I had gone to Bangalore, where there is an organization which I like and admire, CIS, which is great for researchers. Anyway, they had done a project, getting between 10 and 20 phones of Chinese origin from the market (for almost all mobiles sold in India, the fabrication of the CPU, APU etc. is done in China/Taiwan, and they are then assembled here; what is done here is at most assembly, which for all political purposes is called ‘manufacturing’). All the mobiles kept quite a bit of info on the device even after you wiped them clean or put some other ROM on them. The CIS site is more than a bit cluttered, otherwise I would have shared the direct link. I do hope to send an e-mail to CIS; hopefully they will respond with the report, and I will share it here as and when they do. It would be interesting to know whether, after people flash a custom ROM, the status quo stays the same as before. I suspect it does, as flashing ROMs on phones is still a bit of a specialized subject, at least here in India, with even an average phone costing a month or two’s salary or more, and the idea of bricking the phone scares most people (including yours truly).

Anyway, for a long time I was in bed and had the phone. I played 2 games from the Android marketplace which both mum and I enjoy and enjoyed. Those are Real Jigsaw and Jigsaw Puzzle HD. The permissions dialog which Real Jigsaw, among other games, has is horrible, and part of me freaks out that all such apps have unrestricted access to my storage area. Ideally, what Android should have done is either partition the storage or give users the functionality to have a private space for their photos and whatever other media they have, with the rest of the area being like a public park. If anybody has any thoughts on partitioning on an Android phone, I would like to hear them.

One game though which really hooked mumma and me is ‘The Island Experiment‘. It reminded me of my younger days, when gaming addiction was not treated as a disease but thankfully now is. I would call myself somewhat of a ‘functional addict’, as in I do my everyday things, work etc., but do dream about the game and what it will show me next. Part of it is that the game is web-based (which means it needs a constant internet connection) and web access is somewhat pricey, although with Reliance Jio, an upcoming data network operator with bundles of money and promising the moon, network issues, at least for the low-bandwidth game mum and I are playing, hopefully will not be a problem. I haven’t used tshark or any such tool to analyze the traffic, but I guess it probably just sends short messages with the number of clicks in a time period and things like that, and all the rest (I guess) happens on the mobile itself. I know at some point I probably will try to put a custom ROM on it, but which one is the question, as there are so many, and also which is most compatible with my device. It seems I will have to do a lot of homework before I can make any choices.
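On the tshark point: for anyone curious to test that guess, an invocation along the following lines would summarize the game’s TCP conversations over a minute. This is just an illustrative sketch; the interface name, port filter and duration are assumptions, not something from my setup.

[$] tshark -i wlan0 -f "tcp port 443" -a duration:60 -q -z conv,tcp

The -z conv,tcp statistics print how many frames and bytes flowed in each direction, which should be enough to confirm (or refute) the short-status-messages theory without decrypting anything.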

A couple of months back, a friend of mine, Akshat, who has been using Android for a few years, enabled Developer Options, which I didn’t know about till he shared that info with me. I do hope people check out Akshat’s repo, as he has made quite a few useful scripts, especially if you are into digital photography. I shared some gimp scripting with him a few days back, so along with imagemagick you might see him doing some rough scripts in it. Of course, if people use them and give feedback he might clean the scripts up a bit so they give useful error messages and statements like ‘gimp is not installed on your system, please install it or ask for a specific version’, but as it goes in free software, that is somewhat proportional to the number of users and bug reports behind it.

A good example of what I mean is youtube-dl. I filed 873853, where I shared the upstream ticket. Apparently YouTube changed things again a few days back, and while upstream has fixed it, the youtube-dl maintainer probably needs to find time to get the new version up. Apparently the issue lies in –

[$] dpkg -L youtube-dl | grep youtube.py
/usr/lib/python3/dist-packages/youtube_dl/extractor/youtube.py
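
In the meantime, a stopgap I would suggest (my own suggestion, not something from the bug report) is to run upstream’s current release from a user-local pip install and compare it with the packaged one:

[$] youtube-dl --version
[$] pip install --user --upgrade youtube-dl
[$] ~/.local/bin/youtube-dl --version

The user-local copy lands in ~/.local/bin, so the packaged version stays untouched for when the Debian update arrives.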

Hopefully somebody does the needful.

Btw, I find f-droid extremely useful, especially osmand, but sadly neither of them gets shared or talked about much by people 😦

The reason I shared about Developer Options in Android is that a few days back I noticed the phone wonking out with unpredictable behaviour, such as not letting me browse the web or do additions and deletions using the Google Play Store, and the like. Things came to a head when, a few days back, I saw a fibre-optic splicing operation being carried out near my home by workers from the state operator, which elated me, and I wanted to shoot a video of it, but the battery had died/there was no power even though I hadn’t used the phone much. I have deliberately shared the hindi version, which tells how that knowledge is now coming to the masses. I had seen fibre-optic splicing more than a decade and a half back at some net conference, where it was going to be in your neighbourhood soonish; hopefully it will happen soon 🙂

I had my suspicions for quite some time that all the issues with the phone came down to improper charging. During the course of my investigation, I found out that in Developer Options there is an option called USB Configuration, and changing that from the default MTP (Media Transfer Protocol, which is basically used to put or take movies, music or any file between the phone and the computer) resulted in much better behaviour on my Android phone. But this caused an unexpected side-effect: I got pretty aggressive polling of the phone by the computer, even after saying I do not want to share the phone’s details with the computer. This I filed as 874216. The phone, and I am guessing most Samsung phones today, comes with an adaptor with a USB male plug which goes into the phone’s USB port. There is the classical wall socket for electricity, but like most people I rely heavily on USB charging, even for fully charging a completely powered-down phone.

One interesting project which came into Debian some days back is dummydroid. I did file a bug about it. I do hope the maintainer adds some more documentation; I am sure many people would use and add to the resource if the documentation were there. I did take a look at the package, and the profile seems to be an XML key-value kind of database. Having more profiles shouldn’t be hard if we knew what info needs to be added and how to find that info.

Lastly, I am slowly transferring all the above knowledge to my mum as well, although in small doses. She, just like me, has had problems coming from a resistive touchscreen to a capacitive one. You can call me wrong, but resistive touchscreens seemed superior, and not as error-prone or liable to mistakes as capacitive touchscreens can be. There may be a setting to raise or lower the touch threshold, which I have not been able to find as of yet.

Hope somebody finds something useful in there. I do hope that Debian becomes a replacement to be used on such mobiles, but then it would have to duplicate/also have some sort of mainstream content with editors to help people find stuff, something that Debian is not so good at currently. Also, I’m not sure Synaptic is a good fit as a mobile store.


Filed under: Miscellenous Tagged: #Android, #capacitive touchscreen, #custom ROMs, #digital photography, #dummydroid, #f-droid, #fabrication, #flashing, #game addiction, #Google Play Store, #Mainstreaming Debian, #mobile connectivity, #Oppo, #osmand, #planet-debian, #resistive touchscreen, #Samsung Galaxy J5, #scripting, #USB charging, #USB configuration, #youtube-dl, gaming

Sociological ImagesThe Cost of Sexual Harassment

Originally posted at Gender & Society

Last summer, Donald Trump shared how he hoped his daughter Ivanka might respond should she be sexually harassed at work. He said, “I would like to think she would find another career or find another company if that was the case.” President Trump’s advice reflects what many American women feel forced to do when they’re harassed at work: quit their jobs. In our recent Gender & Society article, we examine how sexual harassment, and the job disruption that often accompanies it, affects women’s careers.

How many women quit and why?  Our study shows how sexual harassment affects women at the early stages of their careers. Eighty percent of the women in our survey sample who reported either unwanted touching or a combination of other forms of harassment changed jobs within two years. Among women who were not harassed, only about half changed jobs over the same period. In our statistical models, women who were harassed were 6.5 times more likely than those who were not to change jobs. This was true after accounting for other factors – such as the birth of a child – that sometimes lead to job change. In addition to job change, industry change and reduced work hours were common after harassing experiences.

Percent of Working Women Who Change Jobs (2003–2005)

In interviews with some of these survey participants, we learned more about how sexual harassment affects employees. While some women quit work to avoid their harassers, others quit because of dissatisfaction with how employers responded to their reports of harassment.

Rachel, who worked at a fast food restaurant, told us that she was “just totally disgusted and I quit” after her employer failed to take action until they found out she had consulted an attorney. Many women who were harassed told us that leaving their positions felt like the only way to escape a toxic workplace climate. As advertising agency employee Hannah explained, “It wouldn’t be worth me trying to spend all my energy to change that culture.”

The Implications of Sexual Harassment for Women’s Careers  Critics of Donald Trump’s remarks point out that many women who are harassed cannot afford to quit their jobs. Yet some feel they have no other option. Lisa, a project manager who was harassed at work, told us she decided, “That’s it, I’m outta here. I’ll eat rice and live in the dark if I have to.”

Our survey data show that women who were harassed at work report significantly greater financial stress two years later. The effect of sexual harassment was comparable to the strain caused by other negative life events, such as a serious injury or illness, incarceration, or assault. About 35 percent of this effect could be attributed to the job change that occurred after harassment.

For some of the women we interviewed, sexual harassment had other lasting effects that knocked them off-course during the formative early years of their career. Pam, for example, was less trusting after her harassment, and began a new job, for less pay, where she “wasn’t out in the public eye.” Other women were pushed toward less lucrative careers in fields where they believed sexual harassment and other sexist or discriminatory practices would be less likely to occur.

For those who stayed, challenging toxic workplace cultures also had costs. Even for women who were not harassed directly, standing up against harmful work environments resulted in ostracism and career stagnation. By ignoring women’s concerns and pushing them out, organizational cultures that give rise to harassment remain unchallenged.

Rather than expecting women who are harassed to leave work, employers should consider the costs of maintaining workplace cultures that allow harassment to continue. Retaining good employees will reduce the high cost of turnover and allow all workers to thrive—which benefits employers and workers alike.

Heather McLaughlin is an assistant professor in Sociology at Oklahoma State University. Her research examines how gender norms are constructed and policed within various institutional contexts, including work, sport, and law, with a particular emphasis on adolescence and young adulthood. Christopher Uggen is Regents Professor and Martindale chair in Sociology and Law at the University of Minnesota. He studies crime, law, and social inequality, firm in the belief that good science can light the way to a more just and peaceful world. Amy Blackstone is a professor in Sociology and the Margaret Chase Smith Policy Center at the University of Maine. She studies childlessness and the childfree choice, workplace harassment, and civic engagement. 

(View original at https://thesocietypages.org/socimages)

CryptogramHacking Voice Assistant Systems with Inaudible Voice Commands

Turns out that all the major voice assistants -- Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana and Alexa -- listen at audio frequencies the human ear can't hear. Hackers can hijack those systems with inaudible commands that their owners can't hear.

News articles.

Worse Than FailureCodeSOD: You Absolutely Don't Need It

The progenitor of this story prefers to be called Mr. Syntax, perhaps because of the sins his boss committed in the name of attempting to program a spreadsheet-loader so generic that it could handle any potential spreadsheet with any data arranged in any conceivable format.

The boss had this idea that everything should be dynamic, even things that should be relatively straightforward to do, such as doing a web-originated bulk load of data from a spreadsheet into the database. Although only two such spreadsheet formats were in use, the boss wrote it to handle ANY spreadsheet. As you might imagine, this spawned mountains of uncommented and undocumented code to keep things generic. Sin was tasked with locating and fixing the cause of a NullPointerException that should simply never have occurred. There was no stack dump. There were no logs. It was up to Sin to seek out and destroy the problem.

Just to make it interesting, this process was slow, so the web service would spawn a job that would email the user with the status of the job. Of course, if there was an error, there would inevitably be no email.

It took an entire day of digging through this simple sheet-loader and the mountain of unrelated embedded code to discover that the function convertExcelSheet blindly assumed that every cell would exist in all spreadsheets, regardless of potential format differences.

[OP: in the interest of brevity, I've omitted all of the methods outside the direct call-chain...]

  public class OperationsController extends BaseController {
    private final JobService jobService;

    @Inject
    public OperationsController(final JobService jobService) {
      this.jobService = jobService;
    }

    @RequestMapping(value = ".../bulk", method = RequestMethod.POST)
    public @ResponseBody SaveResponse bulkUpload(@AuthenticationPrincipal final User               activeUser,
                                                 @RequestParam("file")    final MultipartFile      file, 
                                                                          final WebRequest         web, 
                                                                          final HttpServletRequest request){
      SaveResponse response = new SaveResponse();
      try {
          if (getSystemAdmin(activeUser)) {
             final Map<String,Object> customParams = new HashMap<>();
             customParams.put(ThingBulkUpload.KEY_FILE,file.getInputStream());
             customParams.put(ThingBulkUpload.KEY_SERVER_NAME,request.getServerName());
             response = jobService.runJob((CustomUserDetails)activeUser,ThingBulkUpload.JOB_NAME, customParams);
          } else {
             response.setWorked(false);
             response.addError("ACCESS_ERROR","Only Administrators can run bulk upload");
          }
      } catch (final Exception e) {
        logger.error("Unable to process file",e);
      }
      return response;
    }
  }

  @Service("jobService")
  @Transactional
  public class JobServiceImpl implements JobService {
    private static final Logger logger = LoggerFactory.getLogger(OperationsService.class);
    private final JobDAO jobDao;

    @Inject
    public JobServiceImpl(final JobDAO dao){
      this.jobDao = dao;
    }

    public SaveResponse runJob(final @NotNull CustomUserDetails user, 
                               final @NotNull String            jobName, 
                               final Map<String,Object>         customParams) {
      SaveResponse response = new SaveResponse();
      try {
          Job job = (Job) jobDao.findFirstByProperty("Job","name",jobName);
          if (job == null || job.getJobId() == null || job.getJobId() <= 0) {
             response.addError("Unable to find Job for name '"+jobName+"'");
             response.setWorked(false);
          } else {
            JobInstance ji = new JobInstance();
            ji.setCreatedBy(user.getUserId());
            ji.setCreatedDate(Util.getCurrentTimestamp());
            ji.setUpdatedBy(user.getUserId());
            ji.setUpdatedDate(Util.getCurrentTimestamp());
            ji.setJobStatus((JobStatus) jobDao.findFirstByProperty("JobStatus", "jobStatusId", JobStatus.KEY_INITIALZING) );
            ji.setStartTime(Util.getCurrentTimestamp());
            ji.setJob(job);
            Boolean created = jobDao.saveHibernateEntity(ji);
            if (created) {
               String className = job.getJobType().getJavaClass();
               Class<?> c = Class.forName(className);
               Constructor<?> cons = c.getConstructor(JobDAO.class,CustomUserDetails.class,JobInstance.class,Map.class);
               BaseJobImpl baseJob = (BaseJobImpl) cons.newInstance(jobDao,user,ji,customParams);
               baseJob.start();
               ji.setUpdatedDate(Util.getCurrentTimestamp());
               ji.setJobStatus((JobStatus) jobDao.findFirstByProperty("JobStatus", "jobStatusId", JobStatus.KEY_IN_PROCESS) );
               jobDao.updateHibernateEntity(ji);
                                 
               StringBuffer successMessage = new StringBuffer();
               successMessage.append("Job '").append(jobName).append("' has been started. ");
               successMessage.append("An email will be sent to '").append(user.getUsername()).append("' when the job is complete. ");
               String url = baseJob.generateCheckBackURL();
               successMessage.append("You can also check the detailed status here: <a href=\"").append(url).append("\">").append(url).append("</a>");
               response.addInfo(successMessage.toString());
               response.setWorked(true);
            } else {
               response.addError("Unable to create JobInstance for Job name '"+jobName+"'");
               response.setWorked(false);
            }
          }
      } catch (Exception e) {
        String message = "Unable to runJob. Please contact support";
        logger.error(message,e);
        response.addError(message);
        response.setWorked(false);
      }
      return response;
    }
  }

  public class ThingBulkUpload extends BaseJobImpl {
    public static final String JOB_NAME = "Thing Bulk Upload";
    public static final String KEY_FILE = "file";

    public ThingBulkUpload(final JobDAO             jobDAO, 
                           final CustomUserDetails  user, 
                           final JobInstance        jobInstance, 
                           final Map<String,Object> customParams) {
                super(jobDAO,user,jobInstance,customParams);
        }

        @Override
        public void run() {
                SaveResponse response = new SaveResponse();
                response.setWorked(false);
                try {
                        final InputStream inputStream = (InputStream) getCustomParam(KEY_FILE);
                        if(inputStream == null) {
                                response.addError("Unable to run ThingBulkUpload; file is NULL");
                        } else {
                                final AnotherThingImporter cri = new AnotherThingImporter(customParams);
                                cri.changeFileStream(inputStream);
                                response = cri.importThingData(user);
                        }
                } catch (final Exception e) {
                        final String message = "Unable to finish ThingBulkUpload";
                        logger.error(message,e);
                        response.addError(message + ": " + e.getMessage());
                } finally {
                        finalizeJob(response);
                }
        }
}

public class AnotherThingImporter {

        // Op: Instantiated this way, even though the impls are annotated with Spring's @Repository.
        private final LocationDAO locationDAO = new LocationDAOImpl();
        private final ContactDAO contactDAO = new ContactDAOImpl();
        private final EntityDAO entityDAO = new EntityDAOImpl();
        private final BaseHibernateDAO baseDAO = new BaseHibernateDAOImpl();
        // Op: snip a few dozen more DAOs

        private       InputStream         workbookStream = null;
        private final Map<String, Object> customParams;

        public AnotherThingImporter(final Map<String, Object> customParams) {
                this.customParams = customParams;
        }

        public void changeFileStream(final InputStream fileStream) {
                workbookStream = fileStream;
        }

        public SaveResponse importThingData(final CustomUserDetails adminUser) {
                final SaveResponse response = new SaveResponse();
                if (workbookStream == null) {
                        throw new ThreeWonException("MISSING_FILE", "AnotherThingImporter was improperly created. No file found.");
                }
                try {
                        final XSSFWorkbook workbook = new XSSFWorkbook(workbookStream);

                        for (int i = 0; i < workbook.getNumberOfSheets(); i++) {

                                final XSSFSheet sheet = workbook.getSheetAt(i);
                                final String sheetName = sheet.getSheetName();

                                // Op: snip 16 unrelated else ifs...
                                
                                } else if (sheetName.equalsIgnoreCase("History")) {
                                        populateHistory(adminUser, response, sheet);
                                }
                                
                                // Op: snip 3 more unrelated else ifs...
                        }
                } catch (final IOException e) {
                        throw new ThreeWonException("BAD_EXCEL_FILE", "Unable to open excel workbook.");
                }
                if (response.getErrors() == null || response.getErrors().size() <= 0) {
                        response.setWorked(true);
                }
                return response;
        }

        // Op: snip 19 completely unrelated methods
        
        private void populateEducationHistory(final CustomUserDetails adminUser, final SaveResponse response,
                                              final XSSFSheet sheet) {
                final ThingDataConverter converter = new ThingDataConverterImpl(entityDAO, locationDAO,
                                contactDAO);
                converter.convertExcelSheet(adminUser, response, sheet, customParams);
        }
}


public class ThingChildAssocConverter extends ThingDataConverter {
        public void convertExcelSheet(final CustomUserDetails adminUser, final SaveResponse response, final XSSFSheet sheet,
                final Map<String, Object> customParams) {
                initialize(customParams);
                final int rowCount = sheet.getPhysicalNumberOfRows();
                Integer numCreated = 0;

                for (int rowIndex = DEFAULT_HEADER_ROW_COUNT; rowIndex < rowCount; rowIndex++) {

                    final XSSFRow currentRow = sheet.getRow(rowIndex);

                    ...
                        
                    // Op: Null pointer thrown from row.getCell(...)
                    //final String name = df.formatCellValue(currentRow.getCell(COL_INST_NUM)); 
                    final String name = getValue(currentRow, COL_INST_NUM);
						
                    ...
                        
                    // Op: creation of the record here
                }
        }
		
        protected String getValue(final XSSFRow row, final Integer column) {
		// Op: We can not assume that any given cell will exist on all spreadsheets
                try {
                        return df.formatCellValue(row.getCell(column)).trim();
                } catch (final Exception e) {
                        // avoid NullPointers by returning "" instead of null
                        return "";
                }
        }
}
 

As opposed to two simple methods that just retrieved the cells, in order, from each specific spreadsheet format.
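
By way of contrast, here is a minimal sketch of what one of those format-specific loaders might have looked like, using the same Apache POI types as the exhibit above. The column order and the HistoryRecord type are hypothetical stand-ins, not names from the real system:

  import java.util.ArrayList;
  import java.util.List;

  import org.apache.poi.ss.usermodel.DataFormatter;
  import org.apache.poi.xssf.usermodel.XSSFRow;
  import org.apache.poi.xssf.usermodel.XSSFSheet;

  public class HistorySheetLoader {
      private static final int HEADER_ROWS = 1;
      private final DataFormatter df = new DataFormatter();

      // One loader per known format: the "History" sheet has three fixed columns.
      public List<HistoryRecord> load(final XSSFSheet sheet) {
          final List<HistoryRecord> records = new ArrayList<>();
          for (int i = HEADER_ROWS; i <= sheet.getLastRowNum(); i++) {
              final XSSFRow row = sheet.getRow(i);
              if (row == null) {
                  continue; // a blank row is data to skip, not a NullPointerException
              }
              // DataFormatter.formatCellValue(null) returns "", so a missing cell is safe too.
              records.add(new HistoryRecord(
                  df.formatCellValue(row.getCell(0)),   // name
                  df.formatCellValue(row.getCell(1)),   // date
                  df.formatCellValue(row.getCell(2)))); // amount
          }
          return records;
      }

      // Hypothetical value type, for illustration only.
      public static final class HistoryRecord {
          public final String name;
          public final String date;
          public final String amount;

          HistoryRecord(final String name, final String date, final String amount) {
              this.name = name;
              this.date = date;
              this.amount = amount;
          }
      }
  }

No reflection, no generic job pipeline, and nothing left to throw a NullPointerException when a cell is absent.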


Planet DebianVincent Bernat: Route-based IPsec VPN on Linux with strongSwan

A common way to establish an IPsec tunnel on Linux is to use an IKE daemon, like the one from the strongSwan project, with a minimal configuration1:

conn V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64
  authby      = psk
  auto        = route

The same configuration can be used on both sides. Each side will figure out if it is “left” or “right”. The IPsec site-to-site tunnel endpoints are 2001:db8:1::1 and 2001:db8:2::1. The protected subnets are 2001:db8:a1::/64 and 2001:db8:a2::/64. As a result, strongSwan configures the following policies in the kernel:

$ ip xfrm policy
src 2001:db8:a1::/64 dst 2001:db8:a2::/64
        dir out priority 399999 ptype main
        tmpl src 2001:db8:1::1 dst 2001:db8:2::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir fwd priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
src 2001:db8:a2::/64 dst 2001:db8:a1::/64
        dir in priority 399999 ptype main
        tmpl src 2001:db8:2::1 dst 2001:db8:1::1
                proto esp reqid 4 mode tunnel
[…]

This kind of IPsec tunnel is a policy-based VPN: encapsulation and decapsulation are governed by these policies. Each of them contains the following elements:

  • a direction (out, in or fwd2),
  • a selector (source subnet, destination subnet, protocol, ports),
  • a mode (transport or tunnel),
  • an encapsulation protocol (esp or ah), and
  • the endpoint source and destination addresses.

When a matching policy is found, the kernel will look for a corresponding security association (using reqid and the endpoint source and destination addresses):

$ ip xfrm state
src 2001:db8:1::1 dst 2001:db8:2::1
        proto esp spi 0xc1890b6e reqid 4 mode tunnel
        replay-window 0 flag af-unspec
        auth-trunc hmac(sha256) 0x5b68[…]8ba2904 128
        enc cbc(aes) 0x8e0e377ad8fd91e8553648340ff0fa06
        anti-replay context: seq 0x0, oseq 0x0, bitmap 0x00000000
[…]

If no security association is found, the packet is put on hold and the IKE daemon is asked to negotiate an appropriate one. Otherwise, the packet is encapsulated. The receiving end identifies the appropriate security association using the SPI in the header. Two security associations are needed to establish a bidirectional tunnel:

$ tcpdump -pni eth0 -c2 -s0 esp
13:07:30.871150 IP6 2001:db8:1::1 > 2001:db8:2::1: ESP(spi=0xc1890b6e,seq=0x222)
13:07:30.872297 IP6 2001:db8:2::1 > 2001:db8:1::1: ESP(spi=0xcf2426b6,seq=0x204)

All IPsec implementations are compatible with policy-based VPNs. However, some configurations are difficult to implement. For example, consider the following proposition for redundant site-to-site VPNs:

Redundant VPNs between 3 sites

A possible configuration between V1-1 and V2-1 could be:

conn V1-1-to-V2-1
  left        = 2001:db8:1::1
  leftsubnet  = 2001:db8:a1::/64,2001:db8:a6::cc:1/128,2001:db8:a6::cc:5/128
  right       = 2001:db8:2::1
  rightsubnet = 2001:db8:a2::/64,2001:db8:a6::/64,2001:db8:a8::/64
  authby      = psk
  keyexchange = ikev2
  auto        = route

Each time a subnet is modified on one site, the configurations need to be updated on all sites. Moreover, overlapping subnets (2001:db8:a6::/64 on one side and 2001:db8:a6::cc:1/128 on the other) can also be problematic.

The alternative is to use route-based VPNs: any packet traversing a pseudo-interface will be encapsulated using a security policy bound to the interface. This brings two features:

  1. Routing daemons can be used to distribute routes to be protected by the VPN. This decreases the administrative burden when many subnets are present on each side.
  2. Encapsulation and decapsulation can be executed in a different routing instance or namespace. This enables a clean separation between a private routing instance (where VPN users are) and a public routing instance (where VPN endpoints are).

Route-based VPN on Juniper

Before looking at how to achieve that on Linux, let’s have a look at the way it works with a JunOS-based platform (like a Juniper vSRX). This platform has a long-standing history of supporting route-based VPNs (a feature already present in the Netscreen ISG platform).

Let’s assume we want to configure the IPsec VPN from V3-2 to V1-1. First, we need to configure the tunnel interface and bind it to the “private” routing instance containing only internal routes (with IPv4, they would have been RFC 1918 routes):

interfaces {
    st0 {
        unit 1 {
            family inet6 {
                address 2001:db8:ff::7/127;
            }
        }
    }
}
routing-instances {
    private {
        instance-type virtual-router;
        interface st0.1;
    }
}

The second step is to configure the VPN:

security {
    /* Phase 1 configuration */
    ike {
        proposal IKE-P1 {
            authentication-method pre-shared-keys;
            dh-group group20;
            encryption-algorithm aes-256-gcm;
        }
        policy IKE-V1-1 {
            mode main;
            proposals IKE-P1;
            pre-shared-key ascii-text "d8bdRxaY22oH1j89Z2nATeYyrXfP9ga6xC5mi0RG1uc";
        }
        gateway GW-V1-1 {
            ike-policy IKE-V1-1;
            address 2001:db8:1::1;
            external-interface lo0.1;
            general-ikeid;
            version v2-only;
        }
    }
    /* Phase 2 configuration */
    ipsec {
        proposal ESP-P2 {
            protocol esp;
            encryption-algorithm aes-256-gcm;
        }
        policy IPSEC-V1-1 {
            perfect-forward-secrecy keys group20;
            proposals ESP-P2;
        }
        vpn VPN-V1-1 {
            bind-interface st0.1;
            df-bit copy;
            ike {
                gateway GW-V1-1;
                ipsec-policy IPSEC-V1-1;
            }
            establish-tunnels on-traffic;
        }
    }
}

We get a route-based VPN because we bind the st0.1 interface to the VPN-V1-1 VPN. Once the VPN is up, any packet entering st0.1 will be encapsulated and sent to the 2001:db8:1::1 endpoint.

The last step is to configure BGP in the “private” routing instance to exchange routes with the remote site:

routing-instances {
    private {
        routing-options {
            router-id 1.0.3.2;
            maximum-paths 16;
        }
        protocols {
            bgp {
                preference 140;
                log-updown;
                group v4-VPN {
                    type external;
                    local-as 65003;
                    hold-time 6;
                    neighbor 2001:db8:ff::6 peer-as 65001;
                    multipath;
                    export [ NEXT-HOP-SELF OUR-ROUTES NOTHING ];
                }
            }
        }
    }
}

The export filter OUR-ROUTES needs to select the routes to be advertised to the other peers. For example:

policy-options {
    policy-statement OUR-ROUTES {
        term 10 {
            from {
                protocol ospf3;
                route-type internal;
            }
            then {
                metric 0;
                accept;
            }
        }
    }
}

The configuration needs to be repeated for the other peers. The complete version is available on GitHub. Once the BGP sessions are up, we start learning routes from the other sites. For example, here is the route for 2001:db8:a1::/64:

> show route 2001:db8:a1::/64 protocol bgp table private.inet6.0 best-path

private.inet6.0: 15 destinations, 19 routes (15 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

2001:db8:a1::/64   *[BGP/140] 01:12:32, localpref 100, from 2001:db8:ff::6
                      AS path: 65001 I, validation-state: unverified
                      to 2001:db8:ff::6 via st0.1
                    > to 2001:db8:ff::14 via st0.2

It was learnt both from V1-1 (through st0.1) and V1-2 (through st0.2). The route is part of the private routing instance but encapsulated packets are sent/received in the public routing instance. No route-leaking is needed for this configuration. The VPN cannot be used as a gateway from internal hosts to external hosts (or vice-versa). This could also have been done with JunOS’ security policies (stateful firewall rules), but doing the separation with routing instances also ensures routes from different domains are not mixed and that a simple policy misconfiguration won’t lead to a disaster.

Route-based VPN on Linux

Starting from Linux 3.15, a similar configuration is possible with the help of a virtual tunnel interface3. First, we create the “private” namespace:

# ip netns add private
# ip netns exec private sysctl -qw net.ipv6.conf.all.forwarding=1

Any “private” interface needs to be moved to this namespace (no IP is configured as we can use IPv6 link-local addresses):

# ip link set netns private dev eth1
# ip link set netns private dev eth2
# ip netns exec private ip link set up dev eth1
# ip netns exec private ip link set up dev eth2

Then, we create vti6, a tunnel interface (similar to st0.1 in the JunOS example):

# ip tunnel add vti6 \
   mode vti6 \
   local 2001:db8:1::1 \
   remote 2001:db8:3::2 \
   key 6
# ip link set netns private dev vti6
# ip netns exec private ip addr add 2001:db8:ff::6/127 dev vti6
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_policy=1
# ip netns exec private sysctl -qw net.ipv4.conf.vti6.disable_xfrm=1
# ip netns exec private ip link set vti6 mtu 1500
# ip netns exec private ip link set vti6 up

The tunnel interface is created in the initial namespace and moved to the “private” one. It will remember its original namespace where it will process encapsulated packets. Any packet entering the interface will temporarily get a firewall mark of 6 that will be used only to match the appropriate IPsec policy4 below. The kernel sets a low MTU on the interface to handle any possible combination of ciphers and protocols. We set it to 1500 and let PMTUD do its work.
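
As a quick sanity check (a verification step of my own, not part of the original walkthrough), the detailed link view confirms which namespace the interface lives in and shows the tunnel endpoints and key:

# ip netns exec private ip -d link show dev vti6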

We can then configure strongSwan5:

conn V3-2
  left        = 2001:db8:1::1
  leftsubnet  = ::/0
  right       = 2001:db8:3::2
  rightsubnet = ::/0
  authby      = psk
  mark        = 6
  auto        = route
  keyexchange = ikev2
  keyingtries = %forever
  ike         = aes256gcm16-prfsha384-ecp384!
  esp         = aes256gcm16-prfsha384-ecp384!
  mobike      = no

The IKE daemon configures the following policies in the kernel:

$ ip xfrm policy
src ::/0 dst ::/0
        dir out priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:1::1 dst 2001:db8:3::2
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir fwd priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
src ::/0 dst ::/0
        dir in priority 399999 ptype main
        mark 0x6/0xffffffff
        tmpl src 2001:db8:3::2 dst 2001:db8:1::1
                proto esp reqid 1 mode tunnel
[…]

Those policies are used for any source or destination as long as the firewall mark is equal to 6, which matches the mark configured for the tunnel interface.
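Because auto = route only installs trap policies, the IKE exchange happens on the first packet entering the tunnel interface. Assuming the classic stroke-based ipsec frontend shipped with the strongswan package, the negotiated state can then be inspected with:

$ ipsec statusall | grep -A 2 V3-2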

The last step is to configure BGP to exchange routes. We can use BIRD for this:

router id 1.0.1.1;
protocol device {
   scan time 10;
}
protocol kernel {
   persist;
   learn;
   import all;
   export all;
   merge paths yes;
}
protocol bgp IBGP_V3_2 {
   local 2001:db8:ff::6 as 65001;
   neighbor 2001:db8:ff::7 as 65003;
   import all;
   export where ifname ~ "eth*";
   preference 160;
   hold time 6;
}

Once BIRD is started in the “private” namespace, we can check routes are learned correctly:

$ ip netns exec private ip -6 route show 2001:db8:a3::/64
2001:db8:a3::/64 proto bird metric 1024
        nexthop via 2001:db8:ff::5  dev vti5 weight 1
        nexthop via 2001:db8:ff::7  dev vti6 weight 1

The above route was learnt from both V3-1 (through vti5) and V3-2 (through vti6). Like for the JunOS version, there is no route-leaking between the “private” namespace and the initial one. The VPN cannot be used as a gateway between the two namespaces, only for encapsulation. This also prevents a misconfiguration (for example, the IKE daemon not running) from allowing packets to leave the private network.

As a bonus, unencrypted traffic can be observed with tcpdump on the tunnel interface:

$ ip netns exec private tcpdump -pni vti6 icmp6
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on vti6, link-type LINUX_SLL (Linux cooked), capture size 262144 bytes
20:51:15.258708 IP6 2001:db8:a1::1 > 2001:db8:a3::1: ICMP6, echo request, seq 69
20:51:15.260874 IP6 2001:db8:a3::1 > 2001:db8:a1::1: ICMP6, echo reply, seq 69

You can find all the configuration files for this example on GitHub. The documentation of strongSwan also features a page about route-based VPNs.


  1. Everything in this post should work with Libreswan

  2. fwd is for incoming packets on non-local addresses. It only makes sense in transport mode and is a Linux-only particularity. 

  3. Virtual tunnel interfaces (VTI) were introduced in Linux 3.6 (for IPv4) and Linux 3.12 (for IPv6). Appropriate namespace support was added in 3.15. KLIPS, an alternative out-of-tree stack available since Linux 2.2, also features tunnel interfaces. 

  4. The mark is set right before doing a policy lookup and restored after that. Consequently, it doesn’t affect other possible uses (filtering, routing). However, as Netfilter can also set a mark, one should be careful for conflicts. 

  5. The ciphers used here are the strongest ones currently possible while keeping compatibility with JunOS. The documentation for strongSwan contains a complete list of supported algorithms as well as security recommendations to choose them. 

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #124

Here's what happened in the Reproducible Builds effort between Sunday September 3 and Saturday September 9 2017:

Media coverage

GSoC and Outreachy updates

Debian will participate in this year's Outreachy initiative, and the Reproducible Builds project is soliciting mentors and students to join this round.

For more background please see the following mailing list posts: 1, 2 & 3.

Reproducibility work in Debian

In addition, the following NMUs were accepted:

Reproducibility work in other projects

Patches sent upstream:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

3 package reviews have been added, 2 have been updated and 2 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (15)

diffoscope development

Development continued in git, including the following contributions:

Mattia Rizzolo also uploaded version 86, released last week, to stretch-backports.

reprotest development

tests.reproducible-builds.org

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Don MartiTracking protection defaults on trusted and untrusted sites

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Setting tracking protection defaults for a browser is hard. Some activities that the browser might detect as third-party tracking are actually third-party services such as single sign-on—so when the browser sets too high a level of protection it can break something that the user expects to work.

Meanwhile, new research from Pagefair shows that “the very large majority (81%) of respondents said they would not consent to having their behaviour tracked by companies other than the website they are visiting.” A tracking protection policy that leans too far in the other direction will also fail to meet the user's expectations.

So you have to balance two kinds of complaints.

  • "your dumbass browser broke a site that was working before"

  • "your dumbass browser let that stupid site do stupid shit"

Maybe, though, if the browser can figure out which sites the user trusts, you can keep the user happy by taking a moderate tracking protection approach on the trusted sites, and a more cautious approach on less trusted sites.

Apple Intelligent Tracking Prevention allows third-party tracking by domains that the user interacts with.

If the user has not interacted with example.com in the last 30 days, example.com website data and cookies are immediately purged and continue to be purged if new data is added. However, if the user interacts with example.com as the top domain, often referred to as a first-party domain, Intelligent Tracking Prevention considers it a signal that the user is interested in the website and temporarily adjusts its behavior (More...)

But it looks like this could give large companies an advantage—if the same domain has both a service that users will visit and third-party tracking, then the company that owns it can track users even on sites that the users don't trust. Russell Brandom: Apple's new anti-tracking system will make Google and Facebook even more powerful.

It might make more sense to set the trust level, and the browser's tracking protection defaults, based on which site the user is on. Will users want a working "Tweet® this story" button on a news site they like, and a "Log in with Google" feature on a SaaS site they use, but prefer to have third-party stuff blocked on random sites that they happen to click through to?

How should the browser calculate user trust level? Sites with bookmarks would look trusted, or sites where the user submits forms (especially something that looks like an email address). More testing is needed, and setting protection policies is still a hard problem.
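As a toy illustration only (my sketch, nothing shipping in any browser), the signals mentioned above could be combined into a single predicate; every name here is hypothetical:

import java.util.Map;
import java.util.Set;

final class TrustHeuristic {
    // Hypothetical signals: hosts the user has bookmarked, and hosts that have
    // received form submissions (especially something that looks like an email).
    static boolean looksTrusted(final Set<String> bookmarkedHosts,
                                final Map<String, Integer> formSubmitsByHost,
                                final String topLevelHost) {
        return bookmarkedHosts.contains(topLevelHost)
            || formSubmitsByHost.getOrDefault(topLevelHost, 0) > 0;
    }
}

A browser could then apply moderate protection when the predicate is true and the cautious default otherwise; the hard part, as noted above, is validating such signals against what users actually expect.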

Bonus link: Proposed Principles for Content Blocking.

,

Planet DebianMarkus Koschany: My Free Software Activities in August 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

DebConf 17 in Montreal

I traveled to DebConf 17 in Montreal/Canada. I arrived on 04. August and met a lot of different people whom I had only known by name so far. I think this is definitely one of the best aspects of real-life meetings: putting names to faces and getting to know people better. I totally enjoyed my stay and I would like to thank all the people who were involved in organizing this event. You rock! I also gave a talk about “The past, present and future of Debian Games”, listened to numerous other talks and got a nice sunburn which luckily turned into a more brownish color when I returned home on 12. August. The only negative experience I had was with the airline which was supposed to fly me home to Frankfurt. They decided to cancel the flight one hour before check-in for unknown reasons and just gave me a telephone number to sort things out. No support whatsoever. Fortunately (probably not for him) another DebConf attendee suffered the same fate, and together we could find another flight with Royal Air Maroc the same day. And so we made a short trip to Casablanca/Morocco and eventually arrived at our final destination in Frankfurt a few hours later. So which airline should you avoid at all costs (they still haven’t responded to my refund claims)? It’s WoW-Air from Iceland. (just wow)

Debian Games

Debian Java

Debian LTS

This was my eighteenth month as a paid contributor and I have been paid to work 20.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 31. July until 06. August I was in charge of our LTS frontdesk. I triaged bugs in tinyproxy, mantis, sox, timidity, ioquake3, varnish, libao, clamav, binutils, smplayer, libid3tag, mpg123 and shadow.
  • DLA-1064-1. Issued a security update for freeradius fixing 6 CVE.
  • DLA-1068-1. Issued a security update for git fixing 1 CVE.
  • DLA-1077-1. Issued a security update for faad2 fixing 11 CVE.
  • DLA-1083-1. Issued a security update for openexr fixing 3 CVE.
  • DLA-1095-1. Issued a security update for freerdp fixing 5 CVE.

Non-maintainer upload

  • I uploaded a security fix for openexr (#864078) to fix CVE-2017-9110, CVE-2017-9112 and CVE-2017-9116.

Thanks for reading and see you next time.

Krebs on SecurityAyuda! (Help!) Equifax Has My Data!

Equifax last week disclosed a historic breach involving Social Security numbers and other sensitive data on as many as 143 million Americans. The company said the breach also impacted an undisclosed number of people in Canada and the United Kingdom. But the official list of victim countries may not yet be complete: According to information obtained by KrebsOnSecurity, Equifax can safely add Argentina — if not also other Latin American nations where it does business — to the list as well.

Equifax is one of the world’s three largest consumer credit reporting bureaus, and a big part of what it does is maintain records on consumers that businesses can use to learn how risky it might be to loan someone money or to extend them new lines of credit. On the flip side, Equifax is somewhat answerable to those consumers, who have a legal right to dispute any information in their credit report which may be inaccurate.

Earlier today, this author was contacted by Alex Holden, founder of Milwaukee, Wisc.-based Hold Security LLC. Holden’s team of nearly 30 employees includes two native Argentinians who spent some time examining Equifax’s South American operations online after the company disclosed the breach involving its business units in North America.

It took almost no time for them to discover that an online portal designed to let Equifax employees in Argentina manage credit report disputes from consumers in that country was wide open, protected by perhaps the most easy-to-guess password combination ever: “admin/admin.”

We’ll speak about this Equifax Argentina employee portal — known as Veraz or “truthful” in Spanish — in the past tense because the credit bureau took the whole thing offline shortly after being contacted by KrebsOnSecurity this afternoon. The specific Veraz application being described in this post was dubbed Ayuda or “help” in Spanish on internal documentation.

The landing page for the internal administration section of Equifax’s Veraz portal.

Once inside the portal, the researchers found they could view the names of more than 100 Equifax employees in Argentina, as well as their employee ID and email address. The “list of users” page also featured a clickable button that anyone authenticated with the “admin/admin” username and password could use to add, modify or delete user accounts on the system. A search on “Equifax Veraz” at LinkedIn indicates the unit currently has approximately 111 employees in Argentina.

A partial list of active and inactive Equifax employees in Argentina. This page also let anyone add or remove users at will, or modify existing user accounts.

Each employee record included a company username in plain text, and a corresponding password that was obfuscated by a series of dots.

The “edit users” page obscured the Veraz employee’s password, but the same password was exposed by sloppy coding on the Web page.

However, all one needed to do in order to view said password was to right-click on the employee’s profile page and select “view source,” a function that displays the raw HTML code which makes up the Web site. Buried in that HTML code was the employee’s password in plain text.
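
As a rough illustration (a hypothetical reconstruction, not the actual Equifax markup), the anti-pattern looks something like this: the field renders as a row of dots on screen, while the plain-text value sits in the page source for anyone to read.

    <!-- Hypothetical reconstruction of the anti-pattern described above,
         not the real Equifax page. The input renders as dots on screen,
         but the plain-text value is exposed to anyone who views source. -->
    <input type="password" name="user-password" value="martinez" />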

A review of those accounts shows all employee passwords were the same as each user’s username. Worse still, each employee’s username appears to be nothing more than their last name, or a combination of their first initial and last name. In other words, if you knew an Equifax Argentina employee’s last name, you also could work out their password for this credit dispute portal quite easily.

But wait, it gets worse. Accessible from the main page of the Equifax.com.ar employee portal was a listing of some 715 pages worth of complaints and disputes filed by Argentinians who had at one point over the past decade contacted Equifax via fax, phone or email to dispute issues with their credit reports. The site also lists each person’s DNI — the Argentinian equivalent of the Social Security number — again, in plain text. All told, this section of the employee portal included more than 14,000 such records.

750 pages worth of consumer complaints — more than 14,000 in all — complete with the Argentinian equivalent of the SSN (the DNI) in plain text. This page was auto-translated by Google Chrome into English.

Jorge Speranza, manager of information technology at Hold Security, was born in Argentina and lived there for 40 years before moving to the United States. Speranza said he was aghast at seeing the personal data of so many Argentinians protected by virtually non-existent security.

Speranza explained that — unlike the United States — Argentina is traditionally a cash-based society that only recently saw citizens gaining access to credit.

“People there have put a lot of effort into getting a loan, and for them to have a situation like this would be a disaster,” he said. “In a country that has gone through so much — where there once was no credit, no mortgages or whatever — and now having the ability to get loans and lines of credit, this is potentially very damaging.”

Shortly after receiving details about this epic security weakness from Hold Security, I reached out to Equifax and soon after heard from a Washington, D.C.-based law firm that represents the credit bureau.

I briefly described what I’d been shown by Hold Security, and attorneys for Equifax said they’d get back to me after they validated the claims. They later confirmed that the Veraz portal was disabled and that Equifax is investigating how this may have happened. Here’s hoping it will stay offline until it is fortified with even the most basic of security protections.

According to Equifax’s own literature, the company has operations and consumer “customers” in several other South American nations, including Brazil, Chile, Ecuador, Paraguay, Peru and Uruguay. It is unclear whether the complete lack of security at Equifax’s Veraz unit in Argentina was indicative of a larger problem for the company’s online employee portals across the region, but it’s difficult to imagine they could be any worse.

“To me, this is just negligence,” Holden said. “In this case, their approach to security was just abysmal, and it’s hard to believe the rest of their operations are much better.”

I don’t have much advice for Argentinians whose data may have been exposed by sloppy security at Equifax. But I have urged my fellow Americans to assume their SSN and other personal data was compromised in the breach and to act accordingly. On Monday, KrebsOnSecurity published a Q&A about the breach, which includes all the information you need to know about this incident, as well as detailed advice for how to protect your credit file from identity thieves.

[Author’s note: I am listed as an adviser to Hold Security on the company’s Web site. However this is not a role for which I have been compensated in any way now or in the past.]

Planet DebianArturo Borrero González: Google Hangouts in Debian testing (Buster)


Google offers a lot of software components packaged specifically for Debian and Debian-like Linux distributions. Examples are: Chrome, Earth and the Hangouts plugin. Many other Internet services do the same: Spotify, Dropbox, etc. I’m really grateful for them, since this makes our lives easier.

The problem is that our ecosystem is rather complex, with many distributions and many versions out there. I guess it’s not an easy task for them to support such a wide variety of combinations.

In this particular case, it seems Google doesn’t support Debian testing in their .deb packages; right now, testing means Debian Buster. The same happens with the official Spotify client package.

I’ve identified several issues with them, to name a few:

  • the packages depend on lsb-core, which is no longer present in Buster (testing).
  • the packages depend on libpango1.0-0, while testing ships libpango-1.0-0 instead.

I need to use Google Hangouts, so I’ve been forced to solve this situation by editing the .deb package provided by Google.

Simple steps:

  • 1) create a temporary working directory
% user@debian:~ $ mkdir pkg
% user@debian:~ $ cd pkg/
  • 2) get the original .deb package, the Google Hangouts talk plugin.
% user@debian:~/pkg $ wget https://dl.google.com/linux/direct/google-talkplugin_current_amd64.deb
[...]
  • 3) extract the original .deb package
% user@debian:~/pkg $ dpkg-deb -R google-talkplugin_current_amd64.deb google-talkplugin_current_amd64/
  • 4) edit the control file, replacing libpango1.0-0 with libpango-1.0-0 (or automate it; see the sed one-liner after these steps)
% user@debian:~/pkg $ nano google-talkplugin_current_amd64/DEBIAN/control
  • 5) rebuild the package and install it!
% user@debian:~/pkg $ dpkg -b google-talkplugin_current_amd64
% user@debian:~/pkg $ sudo dpkg -i google-talkplugin_current_amd64.deb
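
If you’d rather not edit the control file by hand in step 4, a single sed substitution should do the same job. This is an untested sketch, using the same paths as above (note the escaped dot in the pattern):

% user@debian:~/pkg $ sed -i 's/libpango1\.0-0/libpango-1.0-0/' google-talkplugin_current_amd64/DEBIAN/control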

I have yet to investigate how to work around the lsb-core issue, so I still can’t use Google Earth.

TEDThe big idea: 3 reasons to be kind to educators

Any dedicated educator can tell you: A teaching job extends far beyond the hours of the school day. Molding the minds of future leaders while simultaneously ferrying them across the rapids of childhood and adolescence — and dealing with the economics of the job — is a calling not for the faint of heart. Here are three solid reasons to give teachers the love and support they deserve.

1. Being a teacher is tough (just about everywhere)

Loving teaching and being a teacher are two different (though not mutually exclusive) things when money plays a deciding factor. Teachers around the world struggle with similar financial issues, no matter their longitude or latitude. Through our TED-Ed network, we caught up with 17 public school teachers from Kildare to Kathmandu, Johannesburg to Oslo and beyond, about how their salaries influence their livelihoods.

“I took a pay cut to become a teacher. It is a calling, not a job. Teaching is a privilege that is not for the infirm of purpose or seekers of large pay-stub totals. If I didn’t wake up before my alarm so I can get to school early, I’d be worried. The fact is that I do wake each morning excited for what the day holds for my classroom — the challenges as much as the triumphs — which for some can be as simple as reading a first sentence.”

—  a 6th grade teacher from Markham, Canada


“I am happy but financially strapped. I don’t eat at restaurants; I can’t afford it. I am not a demanding guy, so my income seems sufficient for now, but I can’t sustain my life on it.”

— a computer teacher from Kathmandu, Nepal


“Though I love my job, the stress that comes with it along with the stress of money problems sometimes makes me consider leaving, even though I don’t think I would feel as fulfilled as I do right now. We scrape by, and make the best of what we have, and we are happy for now.”

— an elementary school music teacher from Georgia, United States

Many teach for the love of education and to shape the minds of coming generations, not for the love of money.

2. Educators don’t just teach, they manage a flurry of feelings

As kids age into their late teens, they simultaneously embark on an emotional journey that often plays out during school hours. Heartbreak, arguments with friends, troubled home life, struggles with mental health and schoolwork, never-before-experienced emotions, and numerous other factors typically crop up during and in-between classes. Without a parent or guardian at hand, it’s left to the teachers and school staff to tend to the emotional well-being of students.

Amid administrative duties, endless grading and planning lessons that may forever impact the students they teach, educators must manage a room full of budding young adults who aren’t always ready to sit quietly and be taught. Patience and consideration are tested on a daily basis, no matter how much love a teacher has for their craft and their students. Stress is inevitable in any job, of course. But there’s opportunity for a special, haunting stress to form — one born from the knowledge that the future’s sitting just feet from the chalkboard, in its most formative years. To not acknowledge these demands, within limits, is to not recognize teachers as human beings first.

In addition to all of this, some believe educators should start teaching emotions in grade school. The RULER program, used in over a thousand schools in the US and abroad, is currently one of the most prominent tools for teaching emotions; it breaks the skill down into five steps:

  • Recognizing emotions in oneself and others
  • Understanding the causes and consequences of emotions
  • Labeling emotional experiences with an accurate and diverse vocabulary
  • Expressing emotions, and
  • Regulating emotions in ways that promote growth

Educator Nadia Lopez (TED Talk: Why open a school? To close a prison) has her own tips for dealing with emotions that’ve already begun to bubble over. Lopez opened Mott Hall Bridges Academy in Brooklyn, New York (you may recognize the name from Humans of New York), and she did so with a simple goal: for her school to be a haven and guiding light for young scholars. As principal, she dedicates her life to what she sees in the future of each of her students. Sometimes, that means acting as an emotional bridge or as traffic control while kids learn not just what they should know, but who they are and what they stand for.

Lopez shares some of her favorite ways to dial down conflict with administrators, her scholars and staff — applicable in situations far beyond the classroom — broken down into 6 bite-sized tips.

  • Be vulnerable. Though it may seem counterintuitive, being open and honest with your team during challenging times demonstrates a sense of trust that can develop into mutual respect.
  • Be aware. Stop and ask, “Why isn’t this working?”
  • Center yourself. Being calm is so important that Lopez tries to spend at least 15 minutes each day enjoying uninterrupted silence.
  • Manage mediation. No yelling; wait your turn to speak; respect each person’s turn to explain their side.
  • Listen deeply and actively. In tense discussions, it’s important to acknowledge the feelings of each party involved and use reflective language to show that they’ve been heard.
  • Acknowledge, respect and thank. Repeat. A simple email, text or brief handwritten (ideally, hand-delivered) note has the power to touch people deeply and stave off challenging situations.

3. Yes, teachers help kids, but sometimes they need help too

Teachers often spend hundreds of dollars on school supplies over the course of a school year. There are many options that allow parents and other charitable individuals to support classrooms near and far. Organizations like Donors Choose allow any interested party to choose an inspiring project and donate any amount.

Or, you can always take part in chiseling down fees in your own backyard.

If you’re interested in doing more, here’s a nice list of other ways you can help your educators, if time and/or resources are available.

Let’s be honest, most people have at least one story about their favorite teacher that’s left a lasting impression, shaped a lifelong interest, or helped them get through a tough time. That educator’s compassion and dedication may have even brought you to where you are now. Love is a main ingredient in what makes those memories stick — one that helped principal Linda Cliatt-Wayman (TED Talk: How to fix a broken school? Lead fearlessly, love hard) successfully turn around three schools.

Every day, she shares with her students a mantra that many educators echo with their own kids.

Check out the TED-Ed blog for more education-based love and let’s celebrate educators!



TEDThe big idea: 5 ways to be a more thoughtful traveler

There’s a difference between traveling to a place and vacationing there. Vacationing calls to mind relaxation and minimal effort, whereas traveling evokes thoughts of an adventure where Wi-Fi hotspots are few and far between. Here are some ways to think differently about the places you visit and the people you see before stepping out of that train, plane, automobile or boat.

1. Know some history

History offers context: it explains why buildings look a certain way, how foods became staples, what specific clothing styles and patterns mean, and which locations hold significance.

Generally, it’ll help you feel less lost as you wander through streets and interact with locals.

No one’s expecting you to become an expert overnight, or at all really. However, learning a few key facts about how an area, the people and their culture came to be demonstrates a basic level of respect. Skim through articles online or check out a book at your local library prior to your trip, or explore via Google’s Cultural Institute and Art Project.

Find out what LGBTQ life is like around the world; if you ever visit New York City, you might be interested to know what it looked like before it became a city; or you may even be shocked to discover, before you ride one, that camels aren’t originally from the Middle East or the Horn of Africa at all.

Familiarize yourself with a place’s history, culture, art and science (politics too, if you’re feeling particularly passionate), and watch as your perspective of the world shifts just enough for things to take on a finer, clearer focus.

2. Think about how you’ll document your trip

A good question to ask yourself: Would you even go to this place if you weren’t allowed to take pictures? Try to keep your picture-taking habit in perspective, because focusing on your photos could keep you from truly immersing yourself in the moment and place. Here are a few key tips on being a smarter picture-taker (check out the entire article to collect them all):

  1. Keep your lens clean and your battery charged. Yes, both of these things are obvious, but they’re also very easy to forget. Phones can get especially dirty from riding around in our pockets and getting our fingerprints all over them. So form a habit where every time you go to pick up your camera, you clean off your lens. You can wipe your lens with a lens cloth or a super soft fabric like an old T-shirt. But be careful; using a fabric that’s too rough will scratch.
  2. Light is king. If you remember one thing from this list, choose this one. Lighting is as valuable a tool as your camera itself. Generally, natural light from the sun is the best option. If you’re inside, raise the blinds and open the curtains to let in as much light as possible and, if you can, move your subject near the window.
  3. Use a reflector. Reflectors bounce light from the sun or a lamp onto an object. If you want to get that clean, professional studio look, use a white piece of poster board or foamcore to reflect light onto your subjects.
  4. Think before you shoot. This means taking time to consider what’s in the frame, and coming up with the best composition. Are there any water bottles or random objects that should be moved? Have you cropped off the top of someone’s head? Take some time to consider it.
  5. Mind the lines. Horizon lines should be straight unless you’re making them diagonal for a creative effect. I like to use the grid feature on my phone to make sure I’m not off. I also often use a 9-square grid like the one below that breaks my photo up into thirds. This is called the Rule of Thirds — aim to place the points of interest in your photo along the lines or where the lines cross, and your photos will naturally feel more balanced to the viewer.

3. Read a book* set wherever you’re going

*Fiction and nonfiction, if you can.

Books give you a good sense of a place’s atmosphere and of the people you may encounter … Of course, real people can’t be chalked up to imaginary situations and characteristics; that would just be stereotyping.

In the wise words of writer Chimamanda Ngozi Adichie (TED Talk: The danger of a single story):

“Stories matter. Many stories matter. Stories have been used to dispossess and to malign, but stories can also be used to empower and to humanize. Stories can break the dignity of a people, but stories can also repair that broken dignity.”

It helps to learn about the people and the customs of a place so you don’t go charging in there acting like you’ve just dropped onto a different planet. If you’re looking for a place to start, here are 196 novel recommendations (one from each country in the world).

4. Learn some of the language

It’s always useful to know at least a few words to help you get around. You don’t want to be like a bad friend, dropping into a place only to eat all the good food, find a comfy spot to sleep and leave a few days later with barely a word exchanged.

“Why learn languages? If it isn’t going to change the way you think, what would the other reasons be? There are some,” says linguist John McWhorter. “One of them is that if you want to imbibe a culture, if you want to drink it in, if you want to become part of it, then whether or not the language channels the culture — and that seems doubtful — if you want to imbibe the culture, you have to control to some degree the language that the culture happens to be conducted in. There’s no other way.”

 

Here’s a simplified list from McWhorter’s talk (but watch the whole thing for all the language-lovin’):

  1. They are tickets to being able to participate in the culture of the people who speak them, just by virtue of the fact that it is their code.
  2. It’s been shown that if you speak two languages, dementia is less likely to set in, and that you are probably a better multitasker.
  3. Languages are just an awful lot of fun. Much more fun than we’re often told. They’re playful, if you let them be.
  4. We live in an era when it’s never been easier to teach yourself another language. Today you can lay down — lie on your living room floor, sipping bourbon, and teach yourself any language that you want to with wonderful sets such as Rosetta Stone. I highly recommend the lesser known Glossika as well. You can do it any time, therefore you can do it more and better.

If you need a little more motivation to sign up for a class, download an app, or leaf through a translation dictionary, check out the playlist below for TED Talks that’ll inspire you to learn a new language.

5. Understand where you come from

What does it mean to be from a place? For some, the answer is straightforward and obvious. For others, the question isn’t as simple as it sounds. A thought experiment for yourself, as well as others you encounter while traveling — perhaps over beers or a card game — is to ask, “Where are you a local?” instead of “Where are you from?”

Taiye Selasi suggests an examination of life basics, which she calls the three “R’s”:

  • Rituals. Think of your daily rituals, whatever they may be: making your coffee, driving to work, harvesting your crops, saying your prayers. What kind of rituals are these? Where do they occur? In what city or cities in the world do shopkeepers know your face?
  • Relationships. Think of your relationships, of the people who shape your days. To whom do you speak at least once a week, be it face to face or on FaceTime? Be reasonable in your assessment; I’m not talking about your Facebook friends. I’m speaking of the people who shape your weekly emotional experience.
  • Restrictions. How we experience our locality depends in part on our restrictions. By restrictions, I mean, where are you able to live? What passport do you hold? Are you restricted by, say, racism, from feeling fully at home where you live? By civil war, dysfunctional governance, economic inflation, from living in the locality where you had your rituals as a child? This is the least sexy of the R’s, less lyric than rituals and relationships, but the question takes us past “Where are you now?” to “Why aren’t you there?”

“Take a piece of paper and put those three words on top of three columns, then try to fill those columns as honestly as you can,” Selasi says. “A very different picture of your life in local context, of your identity as a set of experiences, may emerge.”

Need a more in-depth exploration of what it means to be a thoughtful traveler? Or if you’re ready to set off on an adventure, but not quite sure where, check out these TED Talks to watch when you’re in the mood for adventure and this great list of talks to give you wanderlust.


TEDCan cities have compassion? A Q&A with OluTimehin Adegbeye following her blockbuster TED Talk

For 12 spellbinding minutes, OluTimehin Adegbeye gave us a moving, challenging talk on cities and communities — and who gets to belong. She spoke at TEDGlobal 2017 on August 30 in Arusha, Tanzania. Photo: Bret Hartman / TED

Urban gentrification in Lagos is displacing hundreds of thousands of people who do not fit into the administration’s resplendent vision for the future. Their crime? Poverty. In what was one of the most moving talks of TEDGlobal 2017, OluTimehin Adegbeye calls us to consider the human cost of progress, specifically for the former inhabitants of Otodo Gbame, a coastal Lagos fishing community that was forcefully demolished to make way for a prime beachfront development. In 12 minutes of fearless oratory, punctuated with ironic humor and stories, Adegbeye makes the case for why cities must have consciences. We asked for more details about the Otodo Gbame situation, and how to think about creating cities that don’t leave their people behind.

How did you come to be invested in the subject of cities pushing out the poor? Was it before or after Otodo Gbame?

Definitely after Otodo Gbame. I had been vaguely aware of some of the anti-poor policies and actions taken by successive governments in Lagos, but the demolition of Otodo Gbame was the first incident that really woke me up to the injustice and urgency of the situation.

My initial involvement was the result of feelings of helplessness; I didn’t know what I could do, so I volunteered to write about it. But the more stories I heard in trying to write, the clearer it became to me how the structures that allowed anti-poor violence to exist unchallenged were not all that different or separate from those that allowed misogyny, or any other kind of violence really, to thrive. So my involvement became less about a desire to ‘help’ others and more about trying to dismantle systems that hurt me too, whether directly or by allowing me to be complicit in unchecked violence.

You are an activist with many causes. Why did you choose this one to be the subject of your talk?

I chose this topic because of the urgency of the situation. The demolitions and forced, systematic evictions in Lagos are happening with increasing regularity under the current government, so my hope is that the talk will lead to increased scrutiny of the actors who are responsible for these displacements, and eventually the abandonment of a model of “development” which prioritizes profits over people.

You said that these forced evictions are unconstitutional, but they happen anyway. I’m aware that there was a court ruling in favor of the displaced Otodo Gbame residents. Are you close enough to the situation to describe the current legal status of the issue? Will those people see some sort of vindication at some point? Or is justice too much to hope for?

The latest update I have is that the Lagos state government is appealing the ruling in favor of Otodo Gbame and other waterfront communities. I’m not sure what the grounds of the appeal are/will be, but since the people of Otodo Gbame have still been neither compensated nor resettled, it doesn’t seem like the executive is particularly interested in justice.

There are certain agencies within the government that have announced intentions to collaborate with informal settlements and waterfront communities to pursue in-situ upgrading, but very little if any concrete action has come of this.

Do you think the Lagos state government hears, feels this at all? Have you seen any reactions or indications that they do?

The Lagos state government definitely knows there has been widespread resistance and outrage, especially where Otodo Gbame is concerned. A handful of government officials, including the governor himself, have made statements attempting to explain or justify the demolitions in the wake of public outcry. However, it is anybody’s guess whether they are interested in going beyond trying to save face.

Aside from TED, where else have you talked about this?

I’ve written about the demolitions for US and Norwegian publications, but TED is the only place I’ve spoken about them. I think it’s a great choice for getting the word out.

Do you have an organised campaign working on this?

The NGO I work with, the Justice and Empowerment Initiatives, at justempower.org, has created a social media campaign tagged #SaveTheWaterfronts, which is specifically about the waterfront communities under threat in Lagos, and a broader one tagged #InclusiveLagos that comments on the threats to livelihoods, police brutality, forced migration and other actions that target marginalised groups.

Who else is championing these people’s rights, and how can they be supported/helped? Are there organisations that are trusted channels for this help?

JEI has been working with waterfront communities and informal settlements in Lagos and Port Harcourt, Nigeria, for the past few years. Their model of legal empowerment is one I find incredibly effective for bottom-up organising, and they are a donor-funded organisation, so I would definitely recommend donating to them.

Are there other communities at risk of displacement that we should be paying attention to right now?

Right now, Ago Egun Bariga, which is one of the communities you can see from Third Mainland Bridge in Lagos, is being slowly starved out by land reclamation activities that the Lagos state government contracted out to a Nigerian subsidiary of Boskalis, a Netherlands-based company. Efforts to dialogue with them have so far proved abortive. Also, another community, Abete Iwaya, was demolished just two days before I left for TED Global.

This is probably an unfair question, but do you have any thoughts about how to create more inclusive cities, cities with consciences?

I think there are many, many answers to this question that have merit — many of which have been proffered by people with greater expertise than me. But I would suggest responsiveness. Cities that take the needs of their residents into consideration as they grow will inevitably become cities with a conscience, I think. So then the question becomes who the powers-that-be consider ‘legitimate’ residents, and how that is defined. Because it’s not true that the exclusionary cities we have today don’t respond to their residents; it’s just that they respond to a very specific subset of residents. Which brings me back to the question of belonging. So maybe cities with a conscience are those that are non-discriminatory in their responsiveness.

Watch OluTimehin’s TED Talk >>


Worse Than FailureCodeSOD: Cases, Cases, Cases


Paul R. shows us a classic example of the sort of case statement that maybe, you know, never should've been implemented as a case statement:

It is cut and paste to the extreme. Even worse, as fields were added, someone would have to go in and update this block of code. This massive block was replaced with...

var fieldName = reader["TemplateFieldName"].ToString();
theCommands = theCommands.Replace(
    fieldName,
    WashTheValue(reader["FieldValue"].ToString(),
                 reader["OrderingFieldID"].ToString(),
                 reader["PriceFormat"].ToString()));

Below, you'll find the original code. Don't sprain your scrolling finger!

                    switch (reader["TemplateFieldName"].ToString())
                    {
                        case "<2yr.Guarantee>":
                            theCommands = theCommands.Replace("<2yr.Guarantee>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Address>":
                            theCommands = theCommands.Replace("<Address>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<ADDRESS>":
                            theCommands = theCommands.Replace("<ADDRESS>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<address1>":
                            theCommands = theCommands.Replace("<address1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<address2>":
                            theCommands = theCommands.Replace("<address2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<AddressLine2>":
                            theCommands = theCommands.Replace("<AddressLine2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<BareRoot>":
                            theCommands = theCommands.Replace("<BareRoot>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Blank>":
                            theCommands = theCommands.Replace("<Blank>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<BLANK>":
                            theCommands = theCommands.Replace("<BLANK>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<BlankBack>":
                            theCommands = theCommands.Replace("<BlankBack>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<BulbSize>":
                            theCommands = theCommands.Replace("<BulbSize>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<BulbSizeSpanish>":
                            theCommands = theCommands.Replace("<BulbSizeSpanish>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CanadianTireCode>":
                            theCommands = theCommands.Replace("<CanadianTireCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Certified>":
                            theCommands = theCommands.Replace("<Certified>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CLEMATIS>":
                            theCommands = theCommands.Replace("<CLEMATIS>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<COLOR_BAR>":
                            theCommands = theCommands.Replace("<COLOR_BAR>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CompanyAddress>":
                            theCommands = theCommands.Replace("<CompanyAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CompanyName>":
                            theCommands = theCommands.Replace("<CompanyName>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CONTAINER_SIZE>":
                            theCommands = theCommands.Replace("<CONTAINER_SIZE>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<ContainerSize>":
                            theCommands = theCommands.Replace("<ContainerSize>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CTC>":
                            theCommands = theCommands.Replace("<CTC>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Cust.Stock#>":
                            theCommands = theCommands.Replace("<Cust.Stock#>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CustomerAddress>":
                            theCommands = theCommands.Replace("<CustomerAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<customerCode>":
                            theCommands = theCommands.Replace("<customerCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CustomerStock#>":
                            theCommands = theCommands.Replace("<CustomerStock#>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<CustStockNum>":
                            theCommands = theCommands.Replace("<CustStockNum>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<DeerIcon>":
                            theCommands = theCommands.Replace("<DeerIcon>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<description>":
                            theCommands = theCommands.Replace("<description>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<DisplayStakeHole>":
                            theCommands = theCommands.Replace("<DisplayStakeHole>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GADD>":
                            theCommands = theCommands.Replace("<GADD>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Gallons>":
                            theCommands = theCommands.Replace("<Gallons>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GNAME>":
                            theCommands = theCommands.Replace("<GNAME>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Grade>":
                            theCommands = theCommands.Replace("<Grade>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Grower>":
                            theCommands = theCommands.Replace("<Grower>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrowerAddress>":
                            theCommands = theCommands.Replace("<GrowerAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<growerAddress>":
                            theCommands = theCommands.Replace("<growerAddress>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<growerName>":
                            theCommands = theCommands.Replace("<growerName>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrowerName>":
                            theCommands = theCommands.Replace("<GrowerName>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownBy>":
                            theCommands = theCommands.Replace("<GrownBy>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<grownBy>":
                            theCommands = theCommands.Replace("<grownBy>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownBy1>":
                            theCommands = theCommands.Replace("<GrownBy1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownBy2>":
                            theCommands = theCommands.Replace("<GrownBy2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownBy3>":
                            theCommands = theCommands.Replace("<GrownBy3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownByLine2>":
                            theCommands = theCommands.Replace("<GrownByLine2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownByLine3>":
                            theCommands = theCommands.Replace("<GrownByLine3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownIn>":
                            theCommands = theCommands.Replace("<GrownIn>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrowninCanada>":
                            theCommands = theCommands.Replace("<GrowninCanada>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<GrownInCanada>":
                            theCommands = theCommands.Replace("<GrownInCanada>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<HagenCode>":
                            theCommands = theCommands.Replace("<HagenCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<HasPrice>":
                            theCommands = theCommands.Replace("<HasPrice>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Inches>":
                            theCommands = theCommands.Replace("<Inches>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<InsidersLogo>":
                            theCommands = theCommands.Replace("<InsidersLogo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<InsidersReport>":
                            theCommands = theCommands.Replace("<InsidersReport>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<ItemNumber>":
                            theCommands = theCommands.Replace("<ItemNumber>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Licensed>":
                            theCommands = theCommands.Replace("<Licensed>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<LicensedGrower>":
                            theCommands = theCommands.Replace("<LicensedGrower>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Liters>":
                            theCommands = theCommands.Replace("<Liters>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Logo>":
                            theCommands = theCommands.Replace("<Logo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Logo2>":
                            theCommands = theCommands.Replace("<Logo2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<MultiPrice>":
                            theCommands = theCommands.Replace("<MultiPrice>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<NAME>":
                            theCommands = theCommands.Replace("<NAME>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<NewLogo>":
                            theCommands = theCommands.Replace("<NewLogo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<NotPlantedRetail>":
                            theCommands = theCommands.Replace("<NotPlantedRetail>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<OnSaleFor>":
                            theCommands = theCommands.Replace("<OnSaleFor>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Origin>":
                            theCommands = theCommands.Replace("<Origin>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<OSHLocation>":
                            theCommands = theCommands.Replace("<OSHLocation>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<OwnRoot>":
                            theCommands = theCommands.Replace("<OwnRoot>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Page_Number>":
                            theCommands = theCommands.Replace("<Page_Number>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PBS>":
                            theCommands = theCommands.Replace("<PBS>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PC>":
                            theCommands = theCommands.Replace("<PC>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PICASCode>":
                            theCommands = theCommands.Replace("<PICASCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PinkDot>":
                            theCommands = theCommands.Replace("<PinkDot>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Plant1>":
                            theCommands = theCommands.Replace("<Plant1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Plant2>":
                            theCommands = theCommands.Replace("<Plant2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Plant3>":
                            theCommands = theCommands.Replace("<Plant3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Plant4>":
                            theCommands = theCommands.Replace("<Plant4>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Plant5>":
                            theCommands = theCommands.Replace("<Plant5>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PlantCount>":
                            theCommands = theCommands.Replace("<PlantCount>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PlantedRetail>":
                            theCommands = theCommands.Replace("<PlantedRetail>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PotSize>":
                            theCommands = theCommands.Replace("<PotSize>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PotSizeIcon>":
                            theCommands = theCommands.Replace("<PotSizeIcon>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Premium>":
                            theCommands = theCommands.Replace("<Premium>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Price>":
                            theCommands = theCommands.Replace("<Price>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<price>":
                            theCommands = theCommands.Replace("<price>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<PricePoint>":
                            theCommands = theCommands.Replace("<PricePoint>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<ProductOfUSA>":
                            theCommands = theCommands.Replace("<ProductOfUSA>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<ProductofUSA>":
                            theCommands = theCommands.Replace("<ProductofUSA>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Retail>":
                            theCommands = theCommands.Replace("<Retail>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<RetailPrice>":
                            theCommands = theCommands.Replace("<RetailPrice>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<RetailPricePoint>":
                            theCommands = theCommands.Replace("<RetailPricePoint>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Season>":
                            theCommands = theCommands.Replace("<Season>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<ShippingCode>":
                            theCommands = theCommands.Replace("<ShippingCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Size>":
                            theCommands = theCommands.Replace("<Size>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<SIZE>":
                            theCommands = theCommands.Replace("<SIZE>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<SizeCode>":
                            theCommands = theCommands.Replace("<SizeCode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<SKU>":
                            theCommands = theCommands.Replace("<SKU>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<SKU2>":
                            theCommands = theCommands.Replace("<SKU2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<SKU3>":
                            theCommands = theCommands.Replace("<SKU3>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Slot_For_Pixie>":
                            theCommands = theCommands.Replace("<Slot_For_Pixie>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<SpecialPricing>":
                            theCommands = theCommands.Replace("<SpecialPricing>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Supplier>":
                            theCommands = theCommands.Replace("<Supplier>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<TagDate>":
                            theCommands = theCommands.Replace("<TagDate>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<TargetLocation>":
                            theCommands = theCommands.Replace("<TargetLocation>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Type>":
                            theCommands = theCommands.Replace("<Type>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<UPC>":
                            theCommands = theCommands.Replace("<UPC>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<UPC_Readable>":
                            theCommands = theCommands.Replace("<UPC_Readable>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<UPCBackground>":
                            theCommands = theCommands.Replace("<UPCBackground>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<UPCCanadian>":
                            theCommands = theCommands.Replace("<UPCCanadian>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WaM>":
                            theCommands = theCommands.Replace("<WaM>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WAM>":
                            theCommands = theCommands.Replace("<WAM>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<wam>":
                            theCommands = theCommands.Replace("<wam>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WaM2>":
                            theCommands = theCommands.Replace("<WaM2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Website>":
                            theCommands = theCommands.Replace("<Website>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Weights>":
                            theCommands = theCommands.Replace("<Weights>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Weights1>":
                            theCommands = theCommands.Replace("<Weights1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<Weights2>":
                            theCommands = theCommands.Replace("<Weights2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WeightsAndMeasures>":
                            theCommands = theCommands.Replace("<WeightsAndMeasures>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WeightsAndMeasures2>":
                            theCommands = theCommands.Replace("<WeightsAndMeasures2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WeightsMeasures>":
                            theCommands = theCommands.Replace("<WeightsMeasures>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WeightsMeasures1>":
                            theCommands = theCommands.Replace("<WeightsMeasures1>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<WeightsMeasures2>":
                            theCommands = theCommands.Replace("<WeightsMeasures2>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<No_DDD_Logo>":
                            theCommands = theCommands.Replace("<No_DDD_Logo>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        case "<NGcode>":
                            theCommands = theCommands.Replace("<NGcode>", WashTheValue(reader["FieldValue"].ToString(), reader["OrderingFieldID"].ToString(), reader["PriceFormat"].ToString()));
                            break;
                        default:
                            break;
                    }
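
Every branch does exactly the same thing: replace the case label with the same washed value. Even the duplicated spellings (<Price> and <price>; <WaM>, <WAM> and <wam>) exist only because the matching is case-sensitive. A minimal sketch of the data-driven alternative, written in JavaScript for brevity since the original is C#, with token and washed standing in for the case label and the WashTheValue(...) result:

    // Hypothetical sketch: KNOWN_TOKENS preserves the switch's
    // default branch, which silently ignores unrecognized tokens.
    const KNOWN_TOKENS = new Set([
        "<PotSize>", "<Premium>", "<Price>", "<SKU>", "<UPC>", // ...
    ]);

    function applyToken(theCommands, token, washed) {
        if (!KNOWN_TOKENS.has(token)) {
            return theCommands;
        }
        // split/join replaces every occurrence, matching C#'s
        // String.Replace; a plain replace() with a string pattern
        // in JavaScript would only hit the first occurrence.
        return theCommands.split(token).join(washed);
    }

A case-insensitive comparison on the token (or normalizing the templates once, up front) would collapse the duplicated spellings entirely.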

Don MartiNew WebExtension reveals targeted political ads: Interview with Jeff Larson

The investigative journalism organization ProPublica is teaming up with three German news sites to collect political ads on Facebook in advance of the German parliamentary election on Sept. 24.

Because typical Facebook ads are shown only to finely targeted subsets of users, the best way to understand them is to have a variety of users cooperate to run a client-side research tool. ProPublica developer Jeff Larson has written a WebExtension that runs on Mozilla Firefox and Google Chrome to do just that. I asked him how the development went.

Q: Who was involved in developing your WebExtension?

A: Just me. But I can't take credit for the idea. I was at a conference in Germany a few months ago with my colleague Julia Angwin, and we were talking with people who worked at Spiegel about our work on the Machine Bias series. We all thought it would be a good idea to look at political ads on Facebook during the German election cycle, given what little we knew about what happened in the U.S. election last year.

Q: What documentation did you use, and what would you recommend that people read to get started with WebExtensions?

A: I think both Mozilla and Google's documentation sites are great. I would say that the tooling for Firefox is much better due to the web-ext tool. I'd definitely start there (Getting started with web-ext) the next time around.

Basically, web-ext takes care of a great deal of the fiddly bits of writing an extension—everything from packaging to auto-reloading the extension when you edit the source code. It makes the development process a lot smoother.
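
For reference, the workflow he describes boils down to a few commands, assuming Node.js is installed, since web-ext ships through npm:

    npm install --global web-ext   # Mozilla's extension-development CLI
    web-ext run     # launch Firefox with the extension loaded; reloads on edits
    web-ext lint    # check the manifest and source for common problems
    web-ext build   # package the extension into a zip for distribution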

Q: Did you develop in one browser first and then test in the other, or test in both as you went along?

A: I started out in Chrome, because most of the users of our site use Chrome. But I started using Firefox about halfway through because of web-ext. After that, I sort of ping-ponged back and forth because I was using source maps and each browser handles those a bit differently. Mostly the extension worked pretty seamlessly across both browsers. I had to make a couple of changes, but I think it took me a few minutes to get it working in Firefox, which was a pleasant surprise.

Q: What are you running as a back end service to collect ads submitted by the WebExtension?

A: We're running a Rust server that collects the ads and uploads images to an S3 bucket. It is my first Rust project, and it has some rough edges, but I'm pretty much in love with Rust. It is pretty wonderful to know that the server won't go down because of all the built-in type and memory safety in the language. We've open-sourced the project, and I could use help if anyone wants to contribute: Facebook Political Ad Collector on GitHub.

Q: Can you see that the same user got a certain set of ads, or are they all anonymized?

A: We strive to clean the ads of all identifying information. So, we only collect the id of the ad, and the targeting information that the advertiser used. For example, people 18 to 44 who live in New York.

Q: What are your next steps?

A: Well, I'm planning on publishing the ads we've received on a web site, as well as a clean dataset that researchers might be interested in. We also plan to monitor the Austrian elections, and next year is pretty big for the U.S. politically, so I've got my work cut out for me.

Q: Facebook has refused to release some "dark" political ads from the 2016 election in the USA. Will your project make "dark" ads in Germany visible?

A: We've been running for about four days, and so far we've collected 300 political ads in Germany. My hope is we'll start seeing some of the more interesting ones from fly-by-night groups. Political advertising on sites like Facebook isn't regulated in either the United States or Germany, so on some level just having a repository of these ads is a public service.

Q: Your project reveals the "dark" possibly deceptive ads in Chrome and Firefox but not on mobile platforms. Will it drive deceptive advertising away from desktop and toward mobile?

A: I'm not sure, that's a possibility. I can say that Firefox on Android allows WebExtensions and I plan on making sure this extension works there as well, but we'll never be able to see what happens in the native Facebook applications in any sort of large scale and systematic way.

Q: Has anyone from Facebook offered to help with the project?

A: Nope, but if anyone wants to reach out, I would love the help!

Thank you.

Get the WebExtension

Krebs on SecurityThe Equifax Breach: What You Should Know

It remains unclear whether those responsible for stealing Social Security numbers and other data on as many as 143 million Americans from big-three credit bureau Equifax intend to sell this data to identity thieves. But if ever there was a reminder that you — the consumer — are ultimately responsible for protecting your financial future, this is it. Here’s what you need to know and what you should do in response to this unprecedented breach.

Some of the Q&As below were originally published in a 2015 story, How I Learned to Stop Worrying and Embrace the Security Freeze. It has been updated to include new information specific to the Equifax intrusion.

Q: What information was jeopardized in the breach?

A: Equifax was keen to point out that its investigation is ongoing. But for now, the data at risk includes Social Security numbers, birth dates, and addresses on 143 million Americans. Equifax also said the breach involved some driver’s license numbers (although it didn’t say how many or which states might be impacted), credit card numbers for roughly 209,000 U.S. consumers, and “certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers.”

Q: Was the breach limited to Americans?

A: No. Equifax said it believes the intruders got access to “limited personal information for certain UK and Canadian residents.” It has not disclosed what information for those residents was at risk or how many from Canada and the UK may be impacted.

Q: What is Equifax doing about this breach?

A: Equifax is offering one free year of its credit monitoring service. In addition, it has put up a Web site — www.equifaxsecurity2017.com — that tries to let people determine whether they were affected.

Q: That site tells me I was not affected by the breach. Am I safe?

A: As noted in this story from Friday, the site seems hopelessly broken, often returning differing results for the same data submitted at different times. In the absence of more reliable information from Equifax, it is safer to assume you ARE compromised.

Q: I read that the legal language in the terms of service that consumers must accept before enrolling in the free credit monitoring service from Equifax requires one to waive their rights to sue the company in connection with this breach. Is that true?

A: Not according to Equifax. The company issued a statement over the weekend saying that nothing in that agreement applies to this cybersecurity incident.

Q: So should I take advantage of the credit monitoring offer?

A: It can’t hurt, but I wouldn’t count on it protecting you from identity theft.

Q: Wait, what? I thought that was the whole point of a credit monitoring service?

A: The credit bureaus sure want you to believe that, but it’s not true in practice. These services do not prevent thieves from using your identity to open new lines of credit, and from damaging your good name for years to come in the process. The most you can hope for is that credit monitoring services will alert you soon after an ID thief does steal your identity.

Q: Well then what the heck are these services good for?

A: Credit monitoring services are principally useful in helping consumers recover from identity theft. Doing so often requires dozens of hours writing and mailing letters, and spending time on the phone contacting creditors and credit bureaus to straighten out the mess. In cases where identity theft leads to prosecution for crimes committed in your name by an ID thief, you may incur legal costs as well. Most of these services offer to reimburse you up to a certain amount for out-of-pocket expenses related to those efforts. But a better solution is to prevent thieves from stealing your identity in the first place.

Q: What’s the best way to do that?

A: File a security freeze — also known as a credit freeze — with the four major credit bureaus.

Q: What is a security freeze?

A: A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand. With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you (i.e., view your credit file). And because each credit inquiry caused by a creditor has the potential to lower your credit score, the freeze also helps protect your score, which is what most lenders use to decide whether to grant you credit when you truly do want it and apply for it.

Q: What’s involved in freezing my credit file?

A: Freezing your credit involves notifying each of the major credit bureaus that you wish to place a freeze on your credit file. This can usually be done online, but in a few cases you may need to contact one or more credit bureaus by phone or in writing. Once you complete the application process, each bureau will provide a unique personal identification number (PIN) that you can use to unfreeze or “thaw” your credit file in the event that you need to apply for new lines of credit sometime in the future. Depending on your state of residence and your circumstances, you may also have to pay a small fee to place a freeze at each bureau. There are four consumer credit bureaus: Equifax, Experian, Innovis and Trans Union. It’s a good idea to keep your unfreeze PIN(s) in a folder in a safe place (perhaps along with your latest credit report), so that when and if you need to undo the freeze, the process is simple.

Q: How much is the fee, and how can I know whether I have to pay it?

A: The fee ranges from $0 to $15 per bureau, meaning that it can cost upwards of $60 to place a freeze at all four credit bureaus (recommended). However, in most states, consumers can freeze their credit file for free at each of the major credit bureaus if they also supply a copy of a police report and in some cases an affidavit stating that the filer believes he/she is or is likely to be the victim of identity theft. In many states, that police report can be filed and obtained online. The fee covers a freeze as long as the consumer keeps it in place. Consumers Union has a useful breakdown of state-by-state fees.

Q: But what if I need to apply for a loan, or I want to take advantage of a new credit card offer?

A: You thaw the freeze temporarily (in most cases the default is for 24 hours).

Q: What’s involved in thawing my credit file? And do I need to thaw it at all four bureaus?

A: The easiest way to unfreeze your file for the purposes of gaining new credit is to spend a few minutes on the phone with the company from which you hope to gain the line of credit (or research the matter online) to see which credit bureau it relies upon for credit checks. It will most likely be one of the major bureaus. Once you know which bureau the creditor uses, contact that bureau either via phone or online and supply the PIN they gave you when you froze your credit file with them. The thawing process should not take more than 24 hours, but hiccups sometimes make it take longer. It’s best not to wait until the last minute to thaw your file.

Q: It seems that credit bureaus make their money by selling data about me as a consumer to marketers. Does a freeze prevent that?

A: A freeze on your file does nothing to prevent the bureaus from collecting information about you as a consumer — including your spending habits and preferences — and packaging, splicing and reselling that information to marketers.

Q: Can I still use my credit or debit cards after I file a freeze? 

A: Yes. A freeze does nothing to prevent you from using existing lines of credit you may have.

Q: I’ve heard about something called a fraud alert. What’s the difference between a security freeze and a fraud alert on my credit file?

A: With a fraud alert on your credit file, lenders or service providers should not grant credit in your name without first contacting you to obtain your approval — by phone or whatever other method you specify when you apply for the fraud alert. To place a fraud alert, merely contact one of the credit bureaus via phone or online, fill out a short form, and answer a handful of multiple-choice, out-of-wallet questions about your credit history. Assuming the application goes through, the bureau you filed the alert with must by law share that alert with the other bureaus.

Consumers also can get an extended fraud alert, which remains on your credit report for seven years. Like the free freeze, an extended fraud alert requires a police report or other official record showing that you’ve been the victim of identity theft.

An active duty alert is another alert available if you are on active military duty. The active duty alert is similar to an initial fraud alert except that it lasts 12 months and your name is removed from pre-approved firm offers of credit or insurance (prescreening) for 2 years.

Q: Why would I pay for a security freeze when a fraud alert is free?

A: Fraud alerts only last for 90 days, although you can renew them as often as you like. More importantly, while lenders and service providers are supposed to seek and obtain your approval before granting credit in your name if you have a fraud alert on your file, they are not legally required to do this — and very often don’t.

Q: Hang on: If I thaw my credit file after freezing it so that I can apply for new lines of credit, won’t I have to pay to refreeze my file at the credit bureau where I thawed it?

A: It depends on your state. Some states allow bureaus to charge $5 for a temporary thaw or a lift on a freeze; in other states there is no fee for a thaw or lift. However, even if you have to do this once or twice a year, the cost of doing so is almost certainly less than paying for a year’s worth of credit monitoring services. Again, Consumers Union has a handy state-by-state guide listing the freeze and unfreeze laws and fees.

Q: What about my kids? Should I be freezing their files as well? Is that even possible? 

A: Depends on your state. Roughly half of the U.S. states have laws on the books allowing freezes for dependents. Check out The Lowdown on Freezing Your Kid’s Credit for more information.

Q: Is there anything I should do in addition to placing a freeze that would help me get the upper hand on ID thieves?

A: Yes: Periodically order a free copy of your credit report. By law, each of the three major credit reporting bureaus must provide a free copy of your credit report each year — via a government-mandated site: annualcreditreport.com. The best way to take advantage of this right is to make a notation in your calendar to request a copy of your report every 120 days, to review the report and to report any inaccuracies or questionable entries when and if you spot them. Avoid other sites that offer “free” credit reports and then try to trick you into signing up for something else.

Q: I just froze my credit. Can I still get a copy of my credit report from annualcreditreport.com? 

A: According to the Federal Trade Commission, having a freeze in place should not affect a consumer’s ability to obtain copies of their credit report from annualcreditreport.com.

Q: If I freeze my file, won’t I have trouble getting new credit going forward? 

A: If you’re in the habit of applying for a new credit card each time you see a 10 percent discount for shopping in a department store, a security freeze may cure you of that impulse. Other than that, as long as you already have existing lines of credit (credit cards, loans, etc) the credit bureaus should be able to continue to monitor and evaluate your creditworthiness should you decide at some point to take out a new loan or apply for a new line of credit.

Q: Can I have a freeze AND credit monitoring? 

A: Yes, you can. However, it may not be possible to sign up for credit monitoring services while a freeze is in place. My advice is to sign up for whatever credit monitoring may be offered for free, and then put the freezes in place.

Q: Beyond this breach, how would I know who is offering free credit monitoring? 

A: Hundreds of companies — many of which you have probably transacted with at some point in the last year — have disclosed data breaches and are offering free monitoring. California maintains one of the most comprehensive lists of companies that disclosed a breach, and most of those are offering free monitoring.

Q: I see that Trans Union has a free offering. And it looks like they offer another free service called a credit lock. Why shouldn’t I just use that?

A: I haven’t used that monitoring service, but it looks comparable to others. However, I take strong exception to the credit bureaus’ increasing use of the term “credit lock” to steer people away from securing a freeze on their file. I notice that Trans Union currently does this when consumers attempt to file a freeze. Your mileage may vary, but their motives for saddling consumers with even more confusing terminology are suspect. I would not count on a credit lock to take the place of a credit freeze, regardless of what these companies claim (consider the source).

Q: I read somewhere that the PIN code Equifax gives to consumers for use in the event they need to thaw a freeze at the bureau is little more than a date and time stamp of the date and time when the freeze was ordered. Is this correct? 

A: Yes. However, this does not appear to be the case with the other bureaus.

Q: Does this make the process any less secure? 

A: Hard to say. An identity thief would need to know the exact time your report was ordered. Unless of course Equifax somehow allowed attackers to continuously guess and increment that number through its Web site (there is no indication this is the case). However, having a freeze is still more secure than not having one.

Q: Someone told me that having a freeze in place wouldn’t block ID thieves from fraudulently claiming a tax refund in my name with the IRS, or conducting health insurance fraud using my SSN. Is this true?

A: Yes. There are several forms of identity theft that probably will not be blocked by a freeze. But neither will they be blocked by a fraud alert or a credit lock. That’s why it’s so important to regularly review your credit file with the major bureaus for any signs of unauthorized activity.

Q: Okay, I’ve got a security freeze on my file, what else should I do?

A: It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers that are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link

Q: Anything else?

A: ID thieves like to intercept offers of new credit and insurance sent via postal mail, so it’s a good idea to opt out of pre-approved credit offers. If you decide that you don’t want to receive prescreened offers of credit and insurance, you have two choices: You can opt out of receiving them for five years or opt out of receiving them permanently.

To opt out for five years: Call toll-free 1-888-5-OPT-OUT (1-888-567-8688) or visit www.optoutprescreen.com. The phone number and website are operated by the major consumer reporting companies.

To opt out permanently: You can begin the permanent Opt-Out process online at www.optoutprescreen.com. To complete your request, you must return the signed Permanent Opt-Out Election form, which will be provided after you initiate your online request. 


Planet DebianSteinar H. Gunderson: rANS encoding of signed coefficients

I'm currently trying to make sense of some still image coding (more details to come at a much later stage!), and for a variety of reasons, I've chosen to use rANS as the entropy coder. However, there's an interesting little detail that I haven't actually seen covered anywhere; maybe it's just because I've missed something, or maybe because it's too blindingly obvious, but I thought I would document what I ended up with anyway. (I had hoped for something even more elegant, but I guess the obvious would have to do.)

For those that don't know rANS coding, let me try to handwave it as much as possible. Your state is typically a single word (in my case, a 32-bit word), which is refilled from the input stream as needed. The encoder and decoder work in reverse order; let's just talk about the decoder. Basically it works by looking at the lowest 12 (or whatever) bits of the decoder state, mapping each of those 2^12 slots to a decoded symbol. More common symbols are given more slots, proportionally to the frequency. Let me just write a tiny, tiny example with 2 bits and three symbols instead, giving four slots:

Lowest bits Symbol
00 0
01 0
10 1
11 2

Note that the zero coefficient here maps to one out of two slots (ie., a range); you don't choose which one yourself; the encoder stashes some information in there (which is used to recover the next control word once you know which symbol it is).

Now for the actual problem: When storing DCT coefficients, we typically want to also store a sign (ie., not just 1 or 2, but also -1/+1 and -2/+2). The statistical distribution is symmetrical, so the sign bit is incompressible (except that of course there's no sign bit needed for 0). We could have done this by introducing new symbols -1 and -2 in addition to our three other ones, but this means we'll need more bits of precision, and accordingly larger look-up tables (which is negative for performance). So let's find something better.

We could also simply store it separately somehow; if the coefficient is non-zero, store the bits in some separate repository. Perhaps more elegantly, you can encode a second symbol in the rANS stream with probability 1/2, but this is more expensive computationally. But both of these have the problem that they're divergent in terms of control flow; nonzero coefficients potentially need to do a lot of extra computation and even loads. This isn't nice for SIMD, and it's not nice for GPU. It's generally not really nice.

The solution I ended up with was simulating a larger table with a smaller one. Simply rotate the table so that the zero symbol has the top slots instead of the bottom slots, and then replicate the rest of the table. For instance, take this new table:

Lowest bits Symbol
000 1
001 2
010 0
011 0
100 0
101 0
110 -1
111 -2

(The observant reader will note that this doesn't describe the exact same distribution as last time—zero has twice the relative frequency as in the other table—but ignore that for the time being.)

In this case, the bottom half of the table doesn't actually need to be stored! We know that if the three bottom bits are >= 110 (6 in decimal), we have a negative value, can subtract 6, and then continue decoding. If we go past the end of our 2-bit table despite that, we know we are decoding a zero coefficient (which doesn't have a sign), so we can just clamp the read; or for a GPU, reads out-of-bounds on a texture will typically return 0 anyway. So it all works nicely, and the divergent I/O is gone.
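Concretely, the decode step might look like this: a minimal sketch with invented names (shown in JavaScript, though any language with bit operations works), using the 3-bit rotated table above with only its 2-bit base stored:

    // Sketch of the signed-symbol trick described above; names are
    // illustrative, not from a real rANS codec. The stored table covers
    // slots 000-011; slots 100-101 fall off the end and clamp to zero,
    // and slots 110-111 replicate slots 000-001 with the sign flipped.
    const SLOT_MASK = 7;            // three lowest bits of the state
    const NEG_BASE = 6;             // slots >= 6 (110 binary) are negative
    const baseTable = [1, 2, 0, 0]; // rotated base table: 1, 2, then zeros

    function decodeSigned(state) {
        let slot = state & SLOT_MASK;
        let sign = 1;
        if (slot >= NEG_BASE) {     // mirrored negative half of the table
            slot -= NEG_BASE;
            sign = -1;
        }
        // Clamped read: out of bounds means a zero coefficient, just as
        // an out-of-bounds GPU texture fetch typically returns 0.
        const sym = slot < baseTable.length ? baseTable[slot] : 0;
        return sign * sym;
    }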

If this piqued your interest, you probably want to read up on rANS in general; Fabian Giesen (aka ryg) has some notes that work as a good starting point, but beware: some of this is pretty confusing. :-)

CryptogramA Hardware Privacy Monitor for iPhones

Andrew "bunnie" Huang and Edward Snowden have designed a hardware device that attaches to an iPhone and monitors it for malicious surveillance activities, even in instances where the phone's operating system has been compromised. They call it an Introspection Engine, and their use model is a journalist who is concerned about government surveillance:

Our introspection engine is designed with the following goals in mind:

  1. Completely open source and user-inspectable ("You don't have to trust us")

  2. Introspection operations are performed by an execution domain completely separated from the phone's CPU ("don't rely on those with impaired judgment to fairly judge their state")

  3. Proper operation of introspection system can be field-verified (guard against "evil maid" attacks and hardware failures)

  4. Difficult to trigger a false positive (users ignore or disable security alerts when there are too many positives)

  5. Difficult to induce a false negative, even with signed firmware updates ("don't trust the system vendor" -- state-level adversaries with full cooperation of system vendors should not be able to craft signed firmware updates that spoof or bypass the introspection engine)

  6. As much as possible, the introspection system should be passive and difficult to detect by the phone's operating system (prevent black-listing/targeting of users based on introspection engine signatures)

  7. Simple, intuitive user interface requiring no specialized knowledge to interpret or operate (avoid user error leading to false negatives; "journalists shouldn't have to be cryptographers to be safe")

  8. Final solution should be usable on a daily basis, with minimal impact on workflow (avoid forcing field reporters into the choice between their personal security and being an effective journalist)

This looks like fantastic work, and they have a working prototype.

Of course, this does nothing to stop all the legitimate surveillance that happens over a cell phone: location tracking, records of who you talk to, and so on.

BoingBoing post.

Worse Than FailureCodeSOD: A Bad Route

Ah, consumer products. Regardless of what the product in question is, there’s a certain amount of “design” that goes into the device. Not design which might make the product more user-friendly, or useful, or in any way better. No, “design”, which means it looks nicer on the shelf at Target, or Best Buy, or has a better image on its Amazon listing. The manufacturer wants you to buy it, but they don’t really care if you use it.

This thinking extends to any software that may be on the device. This is obviously true if it’s your basic Internet of Garbage device, but it’s often true of something we depend on far more: consumer grade routers.

Micha Koryak just bought a new router, and the first thing he did was peek through the code on the device. Like most routers, it has a web-based configuration tool, and thus it has a directory called “applets” which contains JavaScript.

Javascript like this:

function a6(ba) {
    if (ba == "0") {
        return ad.find("#wireless-channel-auto").text()
    } else {
        if (ba == "1") {
            return "1 - 2.412 GHz"
        } else {
            if (ba == "2") {
                return "2 - 2.417 GHz"
            } else {
                if (ba == "3") {
                    return "3 - 2.422 GHz"
                } else {
                    if (ba == "4") {
                        return "4 - 2.427 GHz"
                    } else {
                        if (ba == "5") {
                            return "5 - 2.432 GHz"
                        } else {
                            if (ba == "6") {
                                return "6 - 2.437 GHz"
                            } else {
                                if (ba == "7") {
                                    return "7 - 2.442 GHz"
                                } else {
                                    if (ba == "8") {
                                        return "8 - 2.447 GHz"
                                    } else {
                                        if (ba == "9") {
                                            return "9 - 2.452 GHz"
                                        } else {
                                            if (ba == "10") {
                                                return "10 - 2.457 GHz"
                                            } else {
                                                if (ba == "11") {
                                                    return "11 - 2.462 GHz"
                                                } else {
                                                    if (ba == "12") {
                                                        return "12 - 2.467 GHz"
                                                    } else {
                                                        if (ba == "13") {
                                                            return "13 - 2.472 GHz"
                                                        } else {
                                                            if (ba == "14") {
                                                                return "14 - 2.484 GHz"
                                                            } else {
                                                                if (ba == "34") {
                                                                    return "34 - 5.170 GHz"
                                                                } else {
                                                                    if (ba == "36") {
                                                                        return "36 - 5.180 GHz"
                                                                    } else {
                                                                        if (ba == "38") {
                                                                            return "38 - 5.190 GHz"
                                                                        } else {
                                                                            if (ba == "40") {
                                                                                return "40 - 5.200 GHz"
                                                                            } else {
                                                                                if (ba == "42") {
                                                                                    return "42 - 5.210 GHz"
                                                                                } else {
                                                                                    if (ba == "44") {
                                                                                        return "44 - 5.220 GHz"
                                                                                    } else {
                                                                                        if (ba == "46") {
                                                                                            return "46 - 5.230 GHz"
                                                                                        } else {
                                                                                            if (ba == "48") {
                                                                                                return "48 - 5.240 GHz"
                                                                                            } else {
                                                                                                if (ba == "52") {
                                                                                                    return "52 - 5.260 GHz"
                                                                                                } else {
                                                                                                    if (ba == "56") {
                                                                                                        return "56 - 5.280 GHz"
                                                                                                    } else {
                                                                                                        if (ba == "60") {
                                                                                                            return "60 - 5.300 GHz"
                                                                                                        } else {
                                                                                                            if (ba == "64") {
                                                                                                                return "64 - 5.320 GHz"
                                                                                                            } else {
                                                                                                                if (ba == "100") {
                                                                                                                    return "100 - 5.500 GHz"
                                                                                                                } else {
                                                                                                                    if (ba == "104") {
                                                                                                                        return "104 - 5.520 GHz"
                                                                                                                    } else {
                                                                                                                        if (ba == "108") {
                                                                                                                            return "108 - 5.540 GHz"
                                                                                                                        } else {
                                                                                                                            if (ba == "112") {
                                                                                                                                return "112 - 5.560 GHz"
                                                                                                                            } else {
                                                                                                                                if (ba == "116") {
                                                                                                                                    return "116 - 5.580 GHz"
                                                                                                                                } else {
                                                                                                                                    if (ba == "120") {
                                                                                                                                        return "120 - 5.600 GHz"
                                                                                                                                    } else {
                                                                                                                                        if (ba == "124") {
                                                                                                                                            return "124 - 5.620 GHz"
                                                                                                                                        } else {
                                                                                                                                            if (ba == "128") {
                                                                                                                                                return "128 - 5.640 GHz"
                                                                                                                                            } else {
                                                                                                                                                if (ba == "132") {
                                                                                                                                                    return "132 - 5.660 GHz"
                                                                                                                                                } else {
                                                                                                                                                    if (ba == "136") {
                                                                                                                                                        return "136 - 5.680 GHz"
                                                                                                                                                    } else {
                                                                                                                                                        if (ba == "140") {
                                                                                                                                                            return "140 - 5.700 GHz"
                                                                                                                                                        } else {
                                                                                                                                                            if (ba == "149") {
                                                                                                                                                                return "149 - 5.745 GHz"
                                                                                                                                                            } else {
                                                                                                                                                                if (ba == "153") {
                                                                                                                                                                    return "153 - 5.765 GHz"
                                                                                                                                                                } else {
                                                                                                                                                                    if (ba == "157") {
                                                                                                                                                                        return "157 - 5.785 GHz"
                                                                                                                                                                    } else {
                                                                                                                                                                        if (ba == "161") {
                                                                                                                                                                            return "161 - 5.805 GHz"
                                                                                                                                                                        } else {
                                                                                                                                                                            if (ba == "165") {
                                                                                                                                                                                return "165 - 5.825 GHz"
                                                                                                                                                                            } else {
                                                                                                                                                                                if (ba == "184") {
                                                                                                                                                                                    return "184 - 4.920 GHz"
                                                                                                                                                                                } else {
                                                                                                                                                                                    if (ba == "188") {
                                                                                                                                                                                        return "188 - 4.940 GHz"
                                                                                                                                                                                    } else {
                                                                                                                                                                                        if (ba == "192") {
                                                                                                                                                                                            return "192 - 4.960 GHz"
                                                                                                                                                                                        } else {
                                                                                                                                                                                            if (ba == "196") {
                                                                                                                                                                                                return "196 - 4.980 GHz"
                                                                                                                                                                                            } else {
                                                                                                                                                                                                return ""
                                                                                                                                                                                            }
                                                                                                                                                                                        }
                                                                                                                                                                                    }
                                                                                                                                                                                }
                                                                                                                                                                            }
                                                                                                                                                                        }
                                                                                                                                                                    }
                                                                                                                                                                }
                                                                                                                                                            }
                                                                                                                                                        }
                                                                                                                                                    }
                                                                                                                                                }
                                                                                                                                            }
                                                                                                                                        }
                                                                                                                                    }
                                                                                                                                }
                                                                                                                            }
                                                                                                                        }
                                                                                                                    }
                                                                                                                }
                                                                                                            }
                                                                                                        }
                                                                                                    }
                                                                                                }
                                                                                            }
                                                                                        }
                                                                                    }
                                                                                }
                                                                            }
                                                                        }
                                                                    }
                                                                }
                                                            }
                                                        }
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
}

Don MartiSome ways that bug futures markets differ from open source bounties

Question about Bugmark: what's the difference between a futures market on software bugs and an open source bounty system connected to the issue tracker? In many simple cases a bug futures market will function in a similar way, but we predict that some qualities of the futures market will make it work differently.

  • Open source bounty systems have extra transaction costs of assigning credit for a fix.

  • Open source bounty systems can incentivize contention over who can submit a complete fix, when we want to be able to incentivize partial work and meta work.

Incentivizing partial work and meta work (such as bug triage) would be prohibitively expensive to manage using bounties claimed by individuals, where each claim must be accepted or rejected. The bug futures concept addresses this with radical simplicity: the owners of each side of the contract are tracked completely separately from the reporter and assignee of a bug in the bug tracker.

And bug futures contracts can be traded in advance of expiration. Any work that you do that meaningfully changes the probability of the bug getting fixed by the contract closing date can move the price.

You might choose to buy the "fixed" side of the contract, do some work that makes it look more fixable, then sell at a higher price. Bugmark might make it practical to do "day trading" of small steps, such as translating a bug report originally posted in a language that the developers don't know, helping a user submit a log file, or writing a failing test.

With the right market design, participants in a bug futures market have the incentive to talk their books, by sharing partial work and metadata.

Related: Some ways that bug futures markets differ from prediction markets, Smart futures contracts on software issues talk, and bullshit walks?

,

Planet Linux AustraliaOpenSTEM: Guess the Artefact #3

This week’s Guess the Artefact challenge centres around an artefact used by generations of school children. There are some adults who may even have used these themselves when they were at school. It is interesting to see if modern students can recognise this object and work out how it was used. The picture below comes […]

Planet DebianSteve Kemp: Debian-Administration.org is closing down

After 13 years the Debian-Administration website will be closing down towards the end of the year.

The site will go read-only at the end of the month, and will slowly be stripped back from that point towards the end of the year, leaving only a static copy of the articles and content.

This is largely happening due to lack of content. There were only two articles posted last year, and every time I consider writing more content I lose my enthusiasm.

There was a time when people contributed articles, but these days they tend to post such things on their own blogs, on Medium, on Reddit, etc. So it seems like a good time to retire things.

An official notice has been posted on the site-proper.

Planet DebianAdnan Hodzic: Secure traffic to ZNC on Synology with Let’s Encrypt

I’ve been using IRC since the late 1990s, and I continue to do so to this day because it is (still) one of the driving development forces in various open source communities, especially in Linux development. And some of my acquaintances I can only get in touch with via IRC :)

My Setup

On my Synology NAS I run ZNC (an IRC bouncer/proxy), to which I connect using various IRC clients (irssi/XChat Azure/AndChat) from various platforms (Linux/Mac/Android). ZNC serves as a gateway: no matter which device/client I connect from, I’m always connected to the same IRC servers/chat rooms, with the same settings, right where I left off.

This is all fine and dandy, but connecting to ZNC from external networks means you will send your ZNC credentials in plain text, which is a problem for me, even though we’re “only” talking about an IRC bouncer/proxy.

With that said, how do we encrypt external traffic to our ZNC?

HowTo: Chat securely with ZNC on Synology using Let’s Encrypt SSL certificate

For reference, or for a more thorough explanation of some of the steps/topics, please refer to: Secure (HTTPS) public access to Synology NAS using Let’s Encrypt (free) SSL certificate

Requirements:

  • Synology NAS running DSM >= 6.0
  • Sub/domain name with ability to update DNS records
  • SSH access to your Synology NAS

1: DNS setup

Create an A record for the sub/domain you’d like to use to connect to your ZNC, and point it to your Synology NAS’s external (WAN) IP. For reference, the subdomain I’ll use is: irc.hodzic.org

2: Create Let’s Encrypt certificate

DSM: Control Panel > Security > Certificates > Add

Followed by:

Add a new certificate > Get a certificate from Let's Encrypt

Followed by adding the domain name the A record was created for in Step 1, i.e.:

Get a certificate from Let's Encrypt for irc.hodzic.org

After the certificate is created, don’t forget to configure it to point to the correct domain name, i.e.:

Configure Let's Encrypt Certificate

3: Install ZNC

In case you already have ZNC installed, I suggest you remove it and do a clean install. This is mainly due to some problems with the package in the past, where ZNC wouldn’t start automatically on boot, which led to projects like synology-znc-autostart. In the latest version, all of these problems have been fixed and a couple of new features have been added.

ZNC can be installed using Synology’s Package Center if community package sources are enabled, which can be done simply by adding a new package source:

Name: SynoCommunity
Location: http://packages.synocommunity.com

Enable Community package sources in Synology Package Center

To successfully authenticate the newly added source, set “Trust Level” to “Any publisher” under the “General” tab.

As part of the installation process, the ZNC config will be generated with the most sane/useful options, and an admin user will be created, giving you access to the ZNC webadmin.

4: Secure access to ZNC webadmin

Now we want to bind the sub/domain created in “Step 1” to the ZNC webadmin and secure external access to it. This can be done by creating a reverse proxy.

As part of this, you need to know which port has been allocated for SSL in ZNC Webadmin, i.e:

ZNC Webadmin > Global settings - Listen Ports

In this case, we can see it’s 8251.

Reverse Proxy can be created in:

DSM: Control Panel > Application Portal > Reverse Proxy > Create

where the sub/domain created in “Step 1” needs to point to the ZNC SSL port on localhost, i.e.:

Reverse proxy: irc.hodzic.org setup

The ZNC webadmin is now available via HTTPS on external networks for the sub/domain you set up in Step 1, or in my case:

ZNC webadmin (HTTPS)

As part of this step, I’d advise you to create in the ZNC webadmin the IRC networks and chatrooms you would like to connect to later.

5: Create .pem file from Let’s Encrypt certificate for ZNC to use

On Synology, Let’s Encrypt certificates are stored in:

/usr/syno/etc/certificate/_archive/

In case you have multiple certificates, you can use the creation date to determine which directory holds your newly generated certificate, i.e.:

drwx------ 2 root root 4096 Sep 10 12:57 JeRh3Y

Once you’ve determined which certificate is the one we want to use, generate the .pem file by running the following (tee is used because a plain shell redirect would run without root privileges):

sudo cat /usr/syno/etc/certificate/_archive/JeRh3Y/{privkey,cert,chain}.pem | sudo tee /usr/local/znc/var/znc.pem > /dev/null

After this, restart ZNC:

sudo /var/packages/znc/scripts/start-stop-status stop && sudo /var/packages/znc/scripts/start-stop-status start
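
Once ZNC is back up, you can verify that it now serves the Let’s Encrypt certificate. Here is a minimal sketch in Python (standard library only); the hostname and port are the examples from steps 1 and 4, so adjust them to your setup:

import socket
import ssl

# Validate the served certificate against the system CA store;
# Let's Encrypt is trusted by default on most systems.
ctx = ssl.create_default_context()
with socket.create_connection(("irc.hodzic.org", 8251)) as sock:
    with ctx.wrap_socket(sock, server_hostname="irc.hodzic.org") as tls:
        cert = tls.getpeercert()
        print("Issuer: ", cert["issuer"])
        print("Expires:", cert["notAfter"])

If the handshake completes without a certificate error, the reverse proxy is serving the new certificate correctly.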

6: Configure IRC client

In this example I’ll use XChat Azure on macOS; the procedure should be identical for HexChat/XChat clients on any other platform.

Although all information is picked up from ZNC itself, user details will still need to be filled in.

In my setup I automatically connect to the freenode and oftc networks, so I created two server entries for local network usage and two for external usage; the latter is what we’re concentrating on.

On the “General” tab of our newly created server, the hostname should be the sub/domain we set up in “Step 1”, the port number should be the one we noted in “Step 4”, and the SSL checkbox must be checked.

Xchat Azure: Network list - General tab

On the “Connecting” tab, the “Server password” field needs to be filled in using the following format:

johndoe/freenode:password

where “johndoe” is the ZNC username, “freenode” is the ZNC network name, and “password” is the ZNC password.

Xchat Azure: Network list - Connecting tab

The “freenode” network in this case must first be created as part of the ZNC webadmin configuration mentioned in “Step 4”; the same goes for the oftc network configuration.

While establishing the connection, information about our Let’s Encrypt certificate will be displayed; after that, the connection will be established and you’ll automatically be logged into all your chatrooms.

Happy hacking!

Planet Debianintrigeri: Can you reproduce this Tails ISO image?

Thanks to a Mozilla Open Source Software award, we have been working on making the Tails ISO images build reproducibly.

We have made huge progress: for a few months now, ISO images built by Tails core developers and our CI system have always been identical. But we're not done yet and we need your help!

Our first call for testing build reproducibility in August uncovered a number of remaining issues. We think that we have fixed them all since, and we now want to find out what other problems may prevent you from building our ISO image reproducibly.

Please try to build an ISO image today, and tell us whether it matches ours!

Build an ISO

These instructions have been tested on Debian Stretch and testing/sid. If you're using another distribution, you may need to adjust them.

If you get stuck at some point in the process, see our more detailed build documentation and don't hesitate to contact us:

Setup the build environment

You need a system that supports KVM, 1 GiB of free memory, and about 20 GiB of disk space.

  1. Install the build dependencies:

    sudo apt install \
        git \
        rake \
        libvirt-daemon-system \
        dnsmasq-base \
        ebtables \
        qemu-system-x86 \
        qemu-utils \
        vagrant \
        vagrant-libvirt \
        vmdebootstrap && \
    sudo systemctl restart libvirtd
    
  2. Ensure your user is in the relevant groups:

    for group in kvm libvirt libvirt-qemu ; do
       sudo adduser "$(whoami)" "$group"
    done
    
  3. Log out and log back in to apply the new group memberships.

Build Tails 3.2~alpha2

This should produce a Tails ISO image:

git clone https://git-tails.immerda.ch/tails && \
cd tails && \
git checkout 3.2-alpha2 && \
git submodule update --init && \
rake build

Send us feedback!

No matter how your build attempt turned out, we are interested in your feedback.

Gather system information

To gather the information we need about your system, run the following commands in the terminal where you've run rake build:

sudo apt install apt-show-versions && \
(
  for f in /etc/issue /proc/cpuinfo
  do
    echo "--- File: ${f} ---"
    cat "${f}"
    echo
  done
  for c in free locale env 'uname -a' '/usr/sbin/libvirtd --version' \
            'qemu-system-x86_64 --version' 'vagrant --version'
  do
    echo "--- Command: ${c} ---"
    eval "${c}"
    echo
  done
  echo '--- APT package versions ---'
  apt-show-versions qemu:amd64 linux-image-amd64:amd64 vagrant \
                    libvirt0:amd64
) | bzip2 > system-info.txt.bz2

Then check that the generated file doesn't contain any sensitive information you do not want to leak:

bzless system-info.txt.bz2

Next, please follow the instructions below that match your situation!

If the build failed

Sorry about that. Please help us fix it by opening a ticket:

  • set Category to Build system;
  • paste the output of rake build;
  • attach system-info.txt.bz2 (this will publish that file).

If the build succeeded

Compute the SHA-512 checksum of the resulting ISO image:

sha512sum tails-amd64-3.2~alpha2.iso

Compare your checksum with ours:

9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc
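
If you'd rather compare programmatically, here is a minimal Python sketch (the filename and expected digest are the ones given above):

import hashlib

EXPECTED = "9b4e9e7ee7b2ab6a3fb959d4e4a2db346ae322f9db5409be4d5460156fa1101c23d834a1886c0ce6bef2ed6fe378a7e76f03394c7f651cc4c9a44ba608dda0bc"

h = hashlib.sha512()
# Hash the ISO in 1 MiB blocks to keep memory usage low.
with open("tails-amd64-3.2~alpha2.iso", "rb") as f:
    for block in iter(lambda: f.read(1 << 20), b""):
        h.update(block)
print("match" if h.hexdigest() == EXPECTED else "differ")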

If the checksums match: success, congrats for reproducing Tails 3.2~alpha2! Please send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 successful" and system-info.txt.bz2 attached. Thanks in advance! Then you can stop reading here.

Else, if the checksums differ: too bad, but really it's good news as the whole point of the exercise is precisely to identify such problems :) Now you are in a great position to help improve the reproducibility of Tails ISO images by following these instructions:

  1. Install diffoscope version 83 or higher and all the packages it recommends. For example, if you're using Debian Stretch:

    sudo apt remove diffoscope && \
    echo 'deb http://ftp.debian.org/debian stretch-backports main' \
      | sudo tee /etc/apt/sources.list.d/stretch-backports.list && \
    sudo apt update && \
    sudo apt -o APT::Install-Recommends="true" \
             install diffoscope/stretch-backports
    
  2. Download the official Tails 3.2~alpha2 ISO image.

  3. Compare the official Tails 3.2~alpha2 ISO image with yours:

    diffoscope \
           --text diffoscope.txt \
           --html diffoscope.html \
           --max-report-size 262144000 \
           --max-diff-block-lines 10000 \
           --max-diff-input-lines 10000000 \
           path/to/official/tails-amd64-3.2~alpha2.iso \
           path/to/your/own/tails-amd64-3.2~alpha2.iso
    bzip2 diffoscope.{txt,html}
    
  4. Send an email to tails-dev@boum.org (public) or tails@boum.org (private) with the subject "Reproduction of Tails 3.2~alpha2 failed", attaching:

    • system-info.txt.bz2;
    • the smallest file among diffoscope.txt.bz2 and diffoscope.html.bz2, unless they are larger than 100 KiB, in which case it's better to upload the file somewhere (e.g. share.riseup.net) and share the link in your email.

Thanks a lot!

Credits

Thanks to Ulrike & anonym who authored a draft on which this blog post is based.

Planet DebianSylvain Beucler: dot-zed archive file format

TL;DR: I reverse-engineered the .zed encrypted archive format.
Following a clean-room design, I'm providing a description that can be implemented by a third-party.
Interested? :)

(reference version at: https://www.beuc.net/zed/)

.zed archive file format

Introduction

Archives with the .zed extension are conceptually similar to an encrypted .zip file.

In addition to a specific format, .zed files support multiple users: files are encrypted using the archive master key, which itself is encrypted for each user and/or authentication method (password, RSA key through certificate or PKCS#11 token). Metadata such as filenames is partially encrypted.

.zed archives are used as stand-alone or attached to e-mails with the help of a MS Outlook plugin. A variant, which is not covered here, can encrypt/decrypt MS Windows folders on the fly like ecryptfs.

In the spirit of academic and independent research this document provides a description of the file format and encryption algorithms for this encrypted file archive.

See the conventions section for conventions and acronyms used in this document.

Structure overview

The .zed file format is composed of several layers.

  • The main container uses the MS Compound File Binary format (MS-CFB), which is notably used by MS Office 97-2003 .doc files. It contains several streams:

    • Metadata stream: in OLE Property Set format (MS-OLEPS), contains 2 blobs in a specific Type-Length-Value (TLV) format:

      • _ctlfile: global archive properties and access list
        It is obfuscated by means of static-key AES encryption.
        The properties include archive initial filename and a global IV.
        A global encryption key is itself encrypted in each user entry.

      • _catalog: file list
        Contains each file's metadata, indexed with a 15-byte identifier.
        Directories are supported.
        Full filename is encrypted using AES.
        File extension is (redundantly) stored in clear, and so are file metadata such as modification time.

    • Each file in the archive is compressed with zlib and encrypted with the standard AES algorithm, in a separate stream.
      Several encryption schemes and key sizes are supported.
      The file stream is split into chunks of 512 bytes, individually encrypted.

    • Optional streams contain additional metadata as well as pictures to display in the application background ("watermarks"). They are not discussed here.

Or as a diagram:

+----------------------------------------------------------------------------------------------------+
| .zed archive (MS-CFB)                                                                              |
|                                                                                                    |
|  stream #1                         stream #2                       stream #3...                    |
| +------------------------------+  +---------------------------+  +---------------------------+     |
| | metadata (MS-OLEPS)          |  | encryption (AES)          |  | encryption (AES)          |     |
| |                              |  | 512-bytes chunks          |  | 512-bytes chunks          |     |
| | +--------------------------+ |  |                           |  |                           |     |
| | | obfuscation (static key) | |  | +-----------------------+ |  | +-----------------------+ |     |
| | | +----------------------+ | |  |-| compression (zlib)    |-|  |-| compression (zlib)    |-|     |
| | | |_ctlfile (TLV)        | | |  | |                       | |  | |                       | | ... |
| | | +----------------------+ | |  | | +---------------+     | |  | | +---------------+     | |     | 
| | +--------------------------+ |  | | | file contents |     | |  | | | file contents |     | |     |
| |                              |  | | |               |     | |  | | |               |     | |     |
| | +--------------------------+ |  |-| +---------------+     |-|  |-| +---------------+     |-|     |
| | | _catalog (TLV)           | |  | |                       | |  | |                       | |     |
| | +--------------------------+ |  | +-----------------------+ |  | +-----------------------+ |     |
| +------------------------------+  +---------------------------+  +---------------------------+     |
+----------------------------------------------------------------------------------------------------+

Encryption schemes

Several AES key sizes are supported, such as 128 and 256 bits.

The Cipher Block Chaining (CBC) block cipher mode of operation is used to decrypt multiple AES 16-byte blocks, which means an initialisation vector (IV) is stored in clear along with the ciphertext.

All filenames and file contents are encrypted using the same encryption mode, key and IV (e.g. if you remove and re-add a file in the archive, the resulting stream will be identical).

No cleartext padding is used during encryption; instead, several end-of-stream handlers are available, so the ciphertext has exactly the size of the cleartext (e.g. the size of the compressed file).

The following variants were identified in the 'encryption_mode' field.

STREAM

This is the end-of-stream handler for:

  • obfuscated metadata encrypted with static AES key
  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-STREAM"
  • any AES ciphertext of size < 16 bytes, regardless of encryption mode

This end-of-stream handler is apparently specific to the .zed format, and is applied when the cleartext does not end on a 16-byte boundary; in this case, special processing is performed on the last partial 16-byte block.

The encryption and decryption phases are identical: let's assume the last partial block of cleartext (for encryption) or ciphertext (for decryption) was appended after all the complete 16-byte blocks of ciphertext:

  • the second-to-last block of the ciphertext is encrypted in AES-ECB mode (i.e. block cipher encryption only, without XORing with the IV)

  • then XOR-ed with the last partial block (hence truncated to the length of the partial block)

In either case, if the full ciphertext is less than one AES block (< 16 bytes), then the IV is used instead of the second-to-last block.
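
As a minimal sketch of this handler (using the pycryptodome library; function and parameter names are mine, not part of the format), the shared step for the trailing partial block could look like this:

from Crypto.Cipher import AES

def stream_tail(key: bytes, prev_block: bytes, partial: bytes) -> bytes:
    """Handle the last partial block; identical for encryption and decryption.

    prev_block is the last complete 16-byte ciphertext block, or the IV
    when the full ciphertext is shorter than one AES block.
    """
    # ECB here means raw block cipher encryption: no IV, no chaining.
    keystream = AES.new(key, AES.MODE_ECB).encrypt(prev_block)
    # XOR with the partial block, implicitly truncated to its length.
    return bytes(a ^ b for a, b in zip(partial, keystream))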

CTS

CTS or CipherText Stealing is the end-of-stream handler for:

  • filenames and files in archives with 'encryption_mode' set to "AES-CBC-CTS".
    • exception: if the size of the ciphertext is < 16 bytes, then "STREAM" is used instead.

It matches the CBC-CS3 variant as described in Recommendation for Block Cipher Modes of Operation: Three Variants of Ciphertext Stealing for CBC Mode.

Empty cleartext

Since empty filenames or metadata are invalid, and since all files are compressed (resulting in a minimum 8-byte zlib cleartext), no empty cleartext was encrypted in the archive.

metadata stream

It is named 05356861616161716149656b7a6565636e576a33317a7868304e63 (hexadecimal), i.e. the character with code 5 followed by '5haaaaqaIekzeecnWj31zxh0Nc' (ASCII).

The format used is OLE Property Set (MS-OLEPS).

It introduces 2 property names "_ctlfile" (index 3) and "_catalog" (index 4), and 2 instances of said properties each containing an application-specific VT_BLOB (type 0x0041).

_ctlfile: obfuscated global properties and access list

This subpart is stored under index 3 ("_ctlfile") of the MS-OLEPS metadata.

It consists of:

  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by 0100 (hexadecimal) (18 bytes total)
  • 16-byte IV
  • ciphertext
  • 1 uint32be representing the length of all the above
  • static delimiter 0765921A2A0774534752073361719300 (hexadecimal) followed by "ZoneCentral (R)" (ASCII) and a NUL byte (32 bytes total)

The ciphertext is encrypted with AES-CBC "STREAM" mode using the 128-bit static key 37F13CF81C780AF26B6A52654F794AEF (hexadecimal) and the prepended IV, so as to obfuscate the access list. The ciphertext is continuous and not split into chunks (unlike files), even when it is larger than 512 bytes.
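
A minimal deobfuscation sketch with pycryptodome (assumed; only the complete 16-byte blocks are handled here, a trailing partial block would go through the STREAM handler described earlier):

from Crypto.Cipher import AES

# 128-bit static key from the description above.
STATIC_KEY = bytes.fromhex("37F13CF81C780AF26B6A52654F794AEF")

def deobfuscate_ctlfile(iv: bytes, ciphertext: bytes) -> bytes:
    # Decrypt all complete 16-byte blocks in CBC mode with the prepended IV.
    full = len(ciphertext) // 16 * 16
    return AES.new(STATIC_KEY, AES.MODE_CBC, iv=iv).decrypt(ciphertext[:full])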

The decrypted text contains properties in a TLV format, as described in _ctlfile TLV:

  • global archive properties as a 'fileprops' structure,

  • extra archive properties as a 'archive_extraprops' structure

  • user access list as a series of 'passworduser' and 'rsauser' entries.

Archives may include "mandatory" users that cannot be removed. They are typically used to add an enterprise-wide recovery RSA key to all archives. Extreme care must be taken to protect these keys, as they can decrypt all past archives generated from within that company.

_catalog: file list

This subpart is stored under index 4 ("_catalog") of the MS-OLEPS metadata.

It contains a series of 'fileprops' TLV structures, one for each file or directory.

The file hierarchy can be reconstructed by checking the 'parent_id' field of each file entry. If 'parent_id' is 0 then the file is located at the top-level of the hierarchy, otherwise it's located under the directory with the matching 'file_id'.

TLV format

This format is a series of fields:

  • 4 bytes for Type (specified as a 4-bytes hexadecimal below)
  • 4 bytes for value Length (uint32be)
  • Value

Value semantics depend on its Type. It may contain an uint32be integer, a UTF-16LE string, a character sequence, or an inner TLV structure.

Unless otherwise noted, TLV structures appear once.

Some fields are optional and may not be present at all (e.g. 'archive_createdwith').

Some fields are unique within a structure (e.g. 'files_iv'), others may be repeated within a structure to form a list (e.g. 'fileprops' and 'passworduser').
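
A minimal sketch of a reader for this encoding (Python; names are illustrative):

import struct

def parse_tlv(data: bytes):
    """Yield (type, value) pairs; the type is kept as an 8-hex-digit string."""
    pos = 0
    while pos + 8 <= len(data):
        tlv_type = data[pos:pos + 4].hex()
        (length,) = struct.unpack(">I", data[pos + 4:pos + 8])  # uint32be
        yield tlv_type, data[pos + 8:pos + 8 + length]
        pos += 8 + length

Inner TLV structures can be parsed by applying parse_tlv recursively to a value.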

The following top-level types have been identified, and are detailed in the next sections:

  • 80110600: fileprops, used for the file list as well as for the global archive properties
  • 001b0600: archive_extraprops
  • 80140600: accesslist

Some additional unidentified types may be present.

_ctlfile TLV

  • 80110600: fileprops (TLV structure): global archive properties
    • 00230400: archive_pathname (UTF-16LE string): initial archive filename (past versions also leaked the full pathname of the initial archive)
    • 80270200: encryption_mode (uint32be): 103 for "AES-CBC-STREAM", 104 for "AES-CBC-CTS"
    • 80260200: encryption_strength (uint32be): AES key size, in bytes (e.g. 32 means AES with a 256-bit key)
    • 80280500: files_iv (sequence of bytes): global IV for all filenames and file contents
  • 001b0600: archive_extraprops (TLV structure): additional archive properties (optional)
    • 00c40500: archive_creationtime (FILETIME): date and time when archive was initially created (optional)
    • 00c00400: archive_createdwith (UTF-16LE string): uuid-like structure describing the application that initialized the archive (optional)
      {00000188-1000-3CA8-8868-36F59DEFD14D} is Zed! Free 1.0.188.
  • 80140600: accesslist (TLV structure): describe the users, their key encryption and their permissions
    • 80610600: passworduser (TLV structure): user identified by password (0 or more)
    • 80620600: rsauser (TLV structure): user identified by RSA key (via file or PKCS#11 token) (0 or more)
    • Fields common to passworduser and rsauser:
      • 80710400: login (UTF-16LE string): user name
      • 80720300: login_md5 (sequence of bytes): used by the application to search for a user name
      • 807e0100: priv1 (uchar): user privileges; present and set to 1 when user is admin (optional)
      • 00830200: priv2 (uint32be): user privileges; present and set to 2 when user is admin, present and set to 5 when user is marked as mandatory, e.g. for recovery keys (optional)
      • 80740500: files_key_ciphertext (sequence of bytes): the archive encryption key, itself encrypted
      • 00840500: user_creationtime (FILETIME): date and time when the user was added to the archive
    • passworduser-specific fields:
      • 80760500: pbe_salt (sequence of bytes): salt for PBE
      • 80770200: pbe_iter (uint32be): number of iterations for PBE
      • 80780200: pkcs12_hashfunc (uint32be): hash function used for PBE and PBA key derivation
      • 80790500: pba_checksum (sequence of bytes): password derived with PBA to check for password validity
      • 807a0500: pba_salt (sequence of bytes): salt for PBA
      • 807b0200: pba_iter (uint32be): number of iterations for PBA
    • rsauser-specific fields:
      • 807d0500: certificate (sequence of bytes): user X509 certificate in DER format

_catalog TLV

  • 80110600: fileprops (TLV structure): describe the archive files (0 or more)
    • 80300500: file_id (sequence of bytes): a 16-byte unique identifier
    • 80310400: filename_halfanon (UTF-16LE string): half-anonymized filename, e.g. File1.txt (leaking filename extension)
    • 00380500: filename_ciphertext (sequence of bytes): encrypted filename; may have a trailing NUL byte once decrypted
    • 80330500: file_size (uint64le): decompressed file size in bytes
    • 80340500: file_creationtime (FILETIME): file creation date and time
    • 80350500: file_lastwritetime (FILETIME): file last modification date and time
    • 80360500: file_lastaccesstime (FILETIME): file last access date and time
    • 00370500: parent_directory_id (sequence of bytes): file_id of the parent directory, 0 is top-level
    • 80320100: is_dir (uint32be): 1 if entry is directory (optional)

Decrypting the archive AES key

rsauser

The user accessing the archive will be authenticated by comparing his/her X509 certificate with the one stored in the 'certificate' field using DER format.

The 'files_key_ciphertext' field is then decrypted using the PKCS#1 v1.5 encryption mechanism, with the private key that matches the user certificate.

passworduser

An intermediary user key, a user IV and an integrity checksum will be derived from the user password, using the deprecated PKCS#12 method as described in RFC 7292, appendix B.

Note: this is not PKCS#5 (nor PBKDF1/PBKDF2), this is an incompatible method from PKCS#12 that notably does not use HMAC.

The 'pkcs12_hashfunc' field defines the underlying hash function. The following values have been identified:

  • 21: SHA-1
  • 22: SHA-256

PBA - Password-based authentication

The user accessing the archive will be authenticated by deriving an 8-byte sequence from his/her password.

The parameters for the derivation function are:

  • ID: 3
  • 'pba_salt': the salt, typically an 8-byte random sequence
  • 'pba_iter': the iteration count, typically 200000

The derivation is checked against 'pba_checksum'.

PBE - Password-based encryption

Once the user is identified, 2 new values are derived from the password with different parameters to produce the IV and the key decryption key, with the same hash function:

  • 'pbe_salt': the salt, typically an 8-byte random sequence
  • 'pbe_iter': the iteration count, typically 100000

The parameters specific to user key are:

  • ID: 1
  • size: 32

The user key needs to be truncated to a length of 'encryption_strength', as specified in bytes in the archive properties.

The parameters specific to user IV are:

  • ID: 2
  • size: 16

Once the key decryption key and the IV are derived, 'files_key_ciphertext' is decrypted using AES CBC, with PKCS#7 padding.
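
A minimal sketch of this final step with pycryptodome (assuming user_key and user_iv have already been derived as described above):

from Crypto.Cipher import AES
from Crypto.Util.Padding import unpad

def decrypt_files_key(user_key: bytes, user_iv: bytes,
                      files_key_ciphertext: bytes, encryption_strength: int) -> bytes:
    # Truncate the derived key to 'encryption_strength' bytes (e.g. 32 for AES-256).
    key = user_key[:encryption_strength]
    clear = AES.new(key, AES.MODE_CBC, iv=user_iv).decrypt(files_key_ciphertext)
    return unpad(clear, AES.block_size)  # strip the PKCS#7 padding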

Identifying file streams

The name of the MS-CFB stream is derived by shuffling the bytes from the 'file_id' field and then encoding the result as hexadecimal.

The reordering is:

Initial  offset: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
Shuffled offset: 3 2 1 0 5 4 7 6 8 9 10 11 12 13 14 15

The 16th byte is usually a NUL byte, hence the stream identifier is a 30-character-long string.
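
In Python, the derivation could be sketched as follows (the function name is illustrative):

def stream_name(file_id: bytes) -> str:
    order = [3, 2, 1, 0, 5, 4, 7, 6, 8, 9, 10, 11, 12, 13, 14, 15]
    shuffled = bytes(file_id[i] for i in order)
    # The trailing byte is usually NUL, yielding a 30-character identifier.
    return shuffled.rstrip(b"\x00").hex()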

Decrypting files

The compressed stream is split into chunks of 512 bytes, each of them encrypted separately using AES-CBC and the global archive encryption scheme. Decryption uses the global AES key (retrieved using the user credentials) and the global IV (retrieved from the deobfuscated archive metadata).

The IV for each chunk is computed by (see the sketch after this list):

  • expressing the current chunk number as little endian on 16 bytes
  • XORing it with the global IV
  • encrypting with the global AES key in ECB mode (without IV).
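
A minimal sketch of this derivation with pycryptodome (assumed; names are mine):

from Crypto.Cipher import AES

def chunk_iv(global_key: bytes, global_iv: bytes, chunk_number: int) -> bytes:
    counter = chunk_number.to_bytes(16, "little")             # chunk number, little endian, on 16 bytes
    xored = bytes(a ^ b for a, b in zip(counter, global_iv))  # XOR with the global IV
    return AES.new(global_key, AES.MODE_ECB).encrypt(xored)   # encrypt in ECB mode (no IV)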

Each chunk is an independent stream and the decryption process involves end-of-stream handling even if this is not the end of the actual file. This is particularly important for the CTS handler.

Note: this is not to be confused with the CTR block cipher mode of operation, which operates differently and requires a nonce.

Decompressing files

Compressed streams are zlib streams with default compression options and can be decompressed following the zlib format.

Test cases

Excluded for brevity, cf. https://www.beuc.net/zed/#test-cases.

Conventions and references

Feedback

Feel free to send comments at beuc@beuc.net. If you have .zed files that you think are not covered by this document, please send them as well (replace sensitive files with other ones). The author's GPG key can be found at 8FF1CB6E8D89059F.

Copyright (C) 2017 Sylvain Beucler

Copying and distribution of this file, with or without modification, are permitted in any medium without royalty provided the copyright notice and this notice are preserved. This file is offered as-is, without any warranty.

Planet DebianCharles Plessy: Summary of the discussion on off-line keys.

Last month, there has been an interesting discussion about off-line GnuPG keys and their storage systems on the debian-project@l.d.o mailing list. I tried to summarise it in the Debian wiki, in particular by creating two new pages.

Planet DebianJoachim Breitner: Less parentheses

Yesterday, at the Haskell Implementers Workshop 2017 in Oxford, I gave a lightning talk titled “syntactic musings”, where I presented three possibly useful syntactic features that one might want to add to a language like Haskell.

The talk caused quite some heated discussion, and since the Internet likes heated discussion, I will happily share these ideas with you.

Context aka. Sections

This is probably the most relevant of the three proposals. Consider a bunch of related functions, say analyseExpr and analyseAlt, like these:

analyseExpr :: Expr -> Expr
analyseExpr (Var v) = change v
analyseExpr (App e1 e2) =
  App (analyseExpr e1) (analyseExpr e2)
analyseExpr (Lam v e) = Lam v (analyseExpr e)
analyseExpr (Case scrut alts) =
  Case (analyseExpr scrut) (analyseAlt <$> alts)

analyseAlt :: Alt -> Alt
analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

You have written them, but now you notice that you need to make them configurable, e.g. to do different things in the Var case. You thus add a parameter to all these functions, and hence an argument to every call:

type Flag = Bool

analyseExpr :: Flag -> Expr -> Expr
analyseExpr flag (Var v) = if flag then change1 v else change2 v
analyseExpr flag (App e1 e2) =
  App (analyseExpr flag e1) (analyseExpr flag e2)
analyseExpr flag (Lam v e) = Lam v (analyseExpr (not flag) e)
analyseExpr flag (Case scrut alts) =
  Case (analyseExpr flag scrut) (analyseAlt flag <$> alts)

analyseAlt :: Flag -> Alt -> Alt
analyseAlt flag (dc, pats, e) = (dc, pats, analyseExpr flag e)

I find this code problematic. The intention was: “flag is a parameter that an external caller can use to change the behaviour of this code, but when reading and reasoning about this code, flag should be considered constant.”

But this intention is neither easily visible nor enforced. And in fact, in the above code, flag does “change”, as analyseExpr passes something else in the Lam case. The idiom is indistinguishable from the environment idiom, where a locally changing environment (such as “variables in scope”) is passed around.

So we are facing exactly the same problem as when reasoning about a loop in an imperative program with mutable variables. And we (pure functional programmers) should know better: We cherish immutability! We want to bind our variables once and have them scope over everything we need to scope over!

The solution I’d like to see in Haskell is common in other languages (Gallina, Idris, Agda, Isar), and this is what it would look like here:

type Flag = Bool
section (flag :: Flag) where
  analyseExpr :: Expr -> Expr
  analyseExpr (Var v) = if flag then change1 v else change2 v
  analyseExpr (App e1 e2) =
    App (analyseExpr e1) (analyseExpr e2)
  analyseExpr (Lam v e) = Lam v (analyseExpr e)
  analyseExpr (Case scrut alts) =
    Case (analyseExpr scrut) (analyseAlt <$> alts)

  analyseAlt :: Alt -> Alt
  analyseAlt (dc, pats, e) = (dc, pats, analyseExpr e)

Now the intention is clear: Within a clearly marked block, flag is fixed and when reasoning about this code I do not have to worry that it might change. Either all variables will be passed to change1, or all to change2. An important distinction!

Therefore, inside the section, the type of analyseExpr does not mention Flag, whereas outside its type is Flag -> Expr -> Expr. This is a bit unusual, but not completely: You see precisely the same effect in a class declaration, where the type signatures of the methods do not mention the class constraint, but outside the declaration they do.

Note that idioms like implicit parameters or the Reader monad do not give the guarantee that the parameter is (locally) constant.

More details can be found in the GHC proposal that I prepared, and I invite you to raise concern or voice support there.

Curiously, this problem must have bothered me for longer than I remember: I discovered that seven years ago, I wrote a Template Haskell based implementation of this idea in the seal-module package!

Less parentheses 1: Bulleted argument lists

The next two proposals are all about removing parentheses. I believe that Haskell’s tendency to express complex code with no or few parentheses is one of its big strengths, as it makes it easier to visually parse programs. A common idiom is to use the $ operator to separate a function from a complex argument without parentheses, but it does not help when there are multiple complex arguments.

For that case I propose to steal an idea from the surprisingly successful markup language markdown, and use bulleted lists to indicate multiple arguments:

foo :: Baz
foo = bracket
        • some complicated code
          that is evaluated first
        • other complicated code for later
        • even more complicated code

I find this very easy to visually parse and navigate.

It is actually possible to do this now, if one defines (•) = id with infixl 0 •. A dedicated syntax extension (-XArgumentBullets) is preferable:

  • It only really adds readability if the bullets are nicely vertically aligned, which the compiler should enforce.
  • I would like to use $ inside these complex arguments, and multiple operators of precedence 0 do not mix. (infixl -1 • would help).
  • It should be possible to nest these, and distinguish different nesting levels based on their indentation.

Less parentheses 2: Whitespace precedence

The final proposal is the most daring. I am convinced that it improves readability and should be considered when creating a new language. As for Haskell, I am at the moment not proposing this as a language extension (but could be convinced to do so if there is enough positive feedback).

Consider this definition of append:

(++) :: [a] -> [a] -> [a]
[]     ++ ys = ys
(x:xs) ++ ys = x : (xs++ys)

Imagine you were explaining the last line to someone orally. How would you speak it? One common way to do so is to not read the parentheses out aloud, but rather to speak parenthesised expressions more quickly and add pauses otherwise.

We can do the same in syntax!

(++) :: [a] -> [a] -> [a]
[]   ++ ys = ys
x:xs ++ ys = x : xs++ys

The rule is simple: A sequence of tokens without any space is implicitly parenthesised.

The reaction I got in Oxford was horror and disgust. And that is understandable – we are very used to ignoring spacing when parsing expressions (unless it is indentation, of course; then we are no longer horrified, as our non-Haskell colleagues are when they see our code).

But I am convinced that once you let the rule sink in, you will have no problem parsing such code with ease, and soon even with greater ease than the parenthesised version. It is a very natural thing to look at the general structure, identify “compact chunks of characters”, mentally group them, and then go and separately parse the internals of the chunks and how the chunks relate to each other. More natural than first scanning everything for ( and ), matching them up, building a mental tree, and then digging deeper.

Incidentally, there was a non-programmer present during my presentation, and while she did not openly contradict the dismissive groan of the audience, I later learned that she found this variant quite obvious to understand and easier to read than the parenthesised code.

Some FAQs about this:

  • What about an operator with space on one side but not on the other? I’d simply forbid that, and hence enforce readable code.
  • Do operator sections still require parenthesis? Yes, I’d say so.
  • Does this overrule operator precedence? Yes! a * b+c == a * (b+c).
  • What is a token? Good question, and I am undecided. In particular: Is a parenthesised expression a single token? If so, then (Succ a)+b * c parses as ((Succ a)+b) * c; otherwise it should probably simply be illegal.
  • Can we extend this so that one space binds tighter than two spaces, and so on? Yes we can, but really, we should not.
  • This is incompatible with Agda’s syntax! Indeed it is, and I really like Agda’s mixfix syntax. Can’t have everything.
  • Has this been done before? I have not seen it in any language, but Lewis Wall has blogged this idea before.

Well, let me know what you think!

Debian Administration This site is going to go read-only

This site was born in late September 2004 and has now reached 13 years of age, which seems to be a fitting time to stop.

,

CryptogramFriday Squid Blogging: Make-Your-Own Squid Candy

It's Japanese.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityEquifax Breach Response Turns Dumpster Fire

I cannot recall a previous data breach in which the breached company’s public outreach and response has been so haphazard and ill-conceived as the one coming right now from big-three credit bureau Equifax, which rather clumsily announced Thursday that an intrusion jeopardized Social Security numbers and other information on 143 million Americans.

WEB SITE WOES

As noted in yesterday’s breaking story on this breach, the Web site that Equifax advertised as the place where concerned Americans could go to find out whether they were impacted by this breach — equifaxsecurity2017.com — is completely broken at best, and little more than a stalling tactic or sham at worst.

In the early hours after the breach announcement, the site was being flagged by various browsers as a phishing threat. In some cases, people visiting the site were told they were not affected, only to find they received a different answer when they checked the site with the same information on their mobile phones.


Others (myself included) received not a yes or no answer to the question of whether we were impacted, but instead a message that credit monitoring services we were eligible for were not available and to check back later in the month. The site asked users to enter their last name and last six digits of their SSN, but at the prompting of a reader’s comment I confirmed that just entering gibberish names and numbers produced the same result as the one I saw when I entered my real information: Come back on Sept. 13.

Who’s responsible for this debacle? Well, Equifax of course. But most large companies that can afford to do so hire outside public relations or disaster response firms to walk them through the safest ways to notify affected consumers. In this case, Equifax appears to have hired global PR firm Edelman PR.

What gives me this idea? Until just a couple of hours ago, the copy of WordPress installed at equifaxsecurity2017.com included a publicly accessible user database entry showing a user named “Edelman” was the first (and only?) user registered on the site.

Code that was publicly available on equifaxsecurity2017.com until very recently showed account information for an outside PR firm.

I reached out to Edelman for more information and will update this story when I hear from them.

EARLY WARNING?

In its breach disclosure Thursday, Equifax said it hired an outside computer security forensic firm to investigate as soon as it discovered unauthorized access to its Web site. ZDNet published a story Thursday saying that the outside firm was Alexandria, Va.-based Mandiant — a security firm bought by FireEye in 2014.

Interestingly, anyone who happened to have been monitoring look-alike domains for Equifax.com prior to yesterday’s breach announcement may have had an early clue about the upcoming announcement. One interesting domain that was registered on Sept. 5, 2017 is “equihax.com,” which according to domain registration records was purchased by an Alexandria, Va. resident named Brandan Schondorfer.

A quick Google search shows that Schondorfer works for Mandiant. Ray Watson, a cybersecurity researcher who messaged me this morning on Twitter about this curiosity, said it is likely that Mandiant has been registering domains that might be attractive to phishers hoping to take advantage of public attention to the breach and spoof Equifax’s domain.

Watson said it’s equally likely the equihax.com domain was registered to keep it out of the hands of people who may be looking for domain names they can use to lampoon Equifax for its breach. Schondorfer has not yet returned calls seeking comment.

EQUIFAX EXECS PULL GOLDEN PARACHUTES?

Bloomberg moved a story yesterday indicating that three top executives at Equifax sold millions of dollars worth of stock during the time between when the company says it discovered the breach and when it notified the public and investors.

Shares of Equifax’s stock on the New York Stock Exchange [NYSE:EFX] were down more than 13 percent at time of publication versus yesterday’s price.

The executives reportedly told Bloomberg they didn’t know about the breach when they sold their shares. A law firm in New York has already announced it is investigating potential insider trading claims against Equifax.

CLASS ACTION WAIVER?

Yesterday’s story here pointed out the gross conflict of interest in Equifax’s consumer remedy for this breach: Offering a year’s worth of free credit monitoring services to all Americans via its own in-house credit monitoring service.

This is particularly rich because a) why should anyone trust Equifax to do anything right security-wise after this debacle and b) these credit monitoring services typically hard-sell consumers to sign up for paid credit protection plans when the free coverage expires.


Verbiage from the terms of service from Equifax’s credit monitoring service TrustID Premier.

I have repeatedly urged readers to consider putting a security freeze on their accounts in lieu of or in addition to accepting these free credit monitoring offers, noting that credit monitoring services don’t protect you against identity theft (the most you can hope for is they alert you when ID thieves do steal your identity), while security freezes can prevent thieves from taking out new lines of credit in your name.

Several readers have written in to point out some legalese in the terms of service that Equifax requires all users to acknowledge before signing up for the service, which seems to include verbiage suggesting that those who do sign up for the free service will waive their rights to participate in future class action lawsuits against the company.

KrebsOnSecurity is still awaiting word from an actual lawyer who’s looking at this contract, but let me offer my own two cents on this.

Update, 9:45 p.m. ET: Equifax has updated their breach alert page to include the following response in regard to the unclear legalese:

“In response to consumer inquiries, we have made it clear that the arbitration clause and class action waiver included in the Equifax and TrustedID Premier terms of use does not apply to this cybersecurity incident.”

Original story:

Equifax will almost certainly see itself the target of multiple class action lawsuits as a result of this breach, but there is no guarantee those lawsuits will go the distance and result in a monetary windfall for affected consumers.

Even when these cases do result in a win for the plaintiff class, it can take years. After KrebsOnSecurity broke the story in 2013 that Experian had given access to 200 million consumer records to a Vietnamese man running an identity theft service, two different law firms filed class action suits against Experian.

That case was ultimately tossed out of federal court and remanded to state court, where it is ongoing. It was filed in 2015.

To close out the subject of civil lawsuits as a way to hold companies accountable for sloppy security, class actions — even when successful — rarely result in much of a financial benefit for affected consumers (very often the “reward” is a gift card or two-digit dollar amount per victim), while greatly enriching law firms that file the suits.

It’s my view that these class action lawsuits serve principally to take the pressure off of lawmakers and regulators to do something that might actually prevent more sloppy security practices in the future at the culpable companies. And as I noted in yesterday’s story, the credit bureaus have shown themselves time and again to be terribly unreliable stewards of sensitive consumer data: This time, the intruders were able to get in because Equifax apparently fell behind in patching its Internet-facing Web applications.

In May, KrebsOnSecurity reported that fraudsters exploited lax security at Equifax’s TALX payroll division, which provides online payroll, HR and tax services. In 2015, a breach at Experian jeopardized the personal data on at least 15 million consumers.

CAPITALIZING ON FEAR

Speaking of Experian, the company is now taking advantage of public fear over the breach — via hashtag #equifaxbreach, for example — to sign people up for their cleverly-named “CreditLock” subscription service (again, hat tip to @rayjwatson).

“When you have Experian Identity Theft Protection, you can instantly lock or unlock your Experian Credit File with the simple click of a button,” the ad enthuses. “Experian gives you instant access to your credit report.”

First off, all consumers have the legal right to instant access to their credit report via the Web site, annualcreditreport.com. This site, mandated by Congress, gives consumers the right to one free credit report from each of the three major bureaus (Equifax, Trans Union and Experian) every year.

Second, all consumers have a right to request that the bureaus “freeze” their credit files, which bars potential creditors or anyone else from viewing your credit history or credit file unless you thaw the freeze (temporarily or permanently).

I have made no secret of my disdain for the practice of companies offering credit monitoring in the wake of a data breach — especially in cases where the breach only involves credit card accounts, since credit monitoring services typically only look for new account fraud and do little or nothing to prevent fraud on existing consumer credit accounts.

Credit monitoring services rarely prevent identity thieves from stealing your identity. The most you can hope for from these services is that they will alert you as soon as someone does steal your identity. Also, the services can be useful in helping victims recover from ID theft.

My advice: Sign up for credit monitoring if you can (and you’re not holding out for a puny class action windfall) and then freeze your credit files at the major credit bureaus (it is generally not possible to sign up for credit monitoring services after a freeze is in place). Again, advice for how to file a freeze is available here.

Whether you are considering a freeze, credit monitoring, or a fraud alert (another, far less restrictive third option), please take a moment to read this story in its entirety. It includes a great deal of information that cannot be shared in a short column here.

Sociological ImagesAmericans At Work: Not A Pretty Picture

Originally posted at Reports from the Economic Front.

What is work like for Americans?  The results of the Rand Corporation’s American Working Conditions Survey (AWCS) paint a troubling picture. As the authors write in their summary:

The AWCS findings indicate that the American workplace is very physically and emotionally taxing, both for workers themselves and their families.

The authors do note more positive findings.  These include:

that workers appear to have a certain degree of autonomy, most feel confident about their skill set, and many receive social support on the job.

Despite the importance of work to our emotional and physical well-being, social relations, and the development of our capacities to shape our world, little has been published about our experience of work. Here, then, is a more detailed look at some of the Survey’s findings:

The Hazardous Workplace

An overwhelming fraction of Americans engage in intense physical exertion on the job. In addition to physical demands, more than one-half of American workers (55 percent) are exposed to unpleasant or potentially dangerous working conditions.

The Pressures of Work

Approximately two-thirds of Americans have jobs that involve working at very high speed at least half the time; the same fraction works to tight deadlines at least half the time.

The Long Work Day

While presence at the work place during business hours is required for most Americans, many take work home. About one-half of American workers do some work in their free time to meet work demands. Approximately one in ten workers report working in their free time “nearly every day” over the last month, two in ten workers report working in their free time “once or twice a week,” and two in ten workers report working in their free time “once or twice a month.” 

The Work Environment

Nearly one in five American workers were subjected to some form of verbal abuse, unwanted sexual attention, threats, or humiliating behavior at work in the past month or to physical violence, bullying or harassment, or sexual harassment at work in the past 12 months. 

At the same time, it is also true that:

While the workplace is a source of hostile social experiences for an important fraction of American workers, it is a source of supportive social experiences for many others. More than one-half of American workers agreed with the statement “I have very good friends at work,” with women more likely to report having very good friends at work than men (61 and 53 percent, respectively).

In sum, the survey’s results make clear that work in the United States is physically and emotionally demanding and dangerous for many workers. And with the government weakening many of the labor and employment regulations designed to protect worker rights and safety, it is likely that workplace conditions will worsen.

Worker organizing and workplace struggles for change need to be encouraged and supported. A recent Pew Research Center survey showed growing support for unions, especially among younger workers.  It is not hard to understand why.

(View original at https://thesocietypages.org/socimages)

CryptogramShadowBrokers Releases NSA UNITEDRAKE Manual

The ShadowBrokers released the manual for UNITEDRAKE, a sophisticated NSA Trojan that targets Windows machines:

Able to compromise Windows PCs running on XP, Windows Server 2003 and 2008, Vista, Windows 7 SP 1 and below, as well as Windows 8 and Windows Server 2012, the attack tool acts as a service to capture information.

UNITEDRAKE, described as a "fully extensible remote collection system designed for Windows targets," also gives operators the opportunity to take complete control of a device.

The malware's modules -- including FOGGYBOTTOM and GROK -- can perform tasks including listening in on and monitoring communications, capturing keystrokes and both webcam and microphone usage, impersonating users, stealing diagnostics information and self-destructing once tasks are completed.

More news.

UNITEDRAKE was mentioned in several Snowden documents and also in the TAO catalog of implants.

And Kaspersky Labs has found evidence of these tools in the wild, associated with the Equation Group -- generally assumed to be the NSA:

The capabilities of several tools in the catalog identified by the codenames UNITEDRAKE, STRAITBAZZARE, VALIDATOR and SLICKERVICAR appear to match the tools Kaspersky found. These codenames don't appear in the components from the Equation Group, but Kaspersky did find "UR" in EquationDrug, suggesting a possible connection to UNITEDRAKE (United Rake). Kaspersky also found other codenames in the components that aren't in the NSA catalog but share the same naming conventions; they include SKYHOOKCHOW, STEALTHFIGHTER, DRINKPARSLEY, STRAITACID, LUTEUSOBSTOS, STRAITSHOOTER, and DESERTWINTER.

ShadowBrokers has only released the UNITEDRAKE manual, not the tool itself. Presumably they're trying to sell that.

Worse Than FailureError'd: The Journey of a Thousand Miles Begins with a Single Error

Drew W. writes, "If I'm already at (undefined), why should I pay $389.99 to fly to (undefined)?"

 

"I'm glad I got this warning! I was planning on going to location near impacted roads," wrote Kelly G.

 

This submission is different - Peter G. included the perfect caption in the image as a response to Air New Zealand's in-flight survey.

 

"I have to admit, saving {{slider.total().discount}}% is tempting, but the discounted titles don't seem very interesting," Filippo wrote.

 

"Yes, YouTube app, I understand that maps without New Zealand exist. I'm not sure what else you're trying to tell me however," Robin S. writes.

 

Rebecca wrote, "I was asked to look at a user's personal laptop and started with updates. I knew Windows 8 was bad, but that bad?!"

 



Krebs on SecurityBreach at Equifax May Impact 143M Americans

Equifax, one of the “big-three” U.S. credit bureaus, said today a data breach at the company may have affected 143 million Americans, jeopardizing consumer Social Security numbers, birth dates, addresses and some driver’s license numbers.

In a press release today, Equifax [NYSE:EFX] said it discovered the “unauthorized access” on July 29, after which it hired an outside forensics firm to investigate. Equifax said the investigation is still ongoing, but that the breach also jeopardized credit card numbers for roughly 209,000 U.S. consumers and “certain dispute documents with personal identifying information for approximately 182,000 U.S. consumers.”

In addition, the company said it identified unauthorized access to “limited personal information for certain UK and Canadian residents,” and that it would work with regulators in those countries to determine next steps.

“This is clearly a disappointing event for our company, and one that strikes at the heart of who we are and what we do. I apologize to consumers and our business customers for the concern and frustration this causes,” said Chairman and Chief Executive Officer Richard F. Smith in a statement released to the media, along with a video message. “We pride ourselves on being a leader in managing and protecting data, and we are conducting a thorough review of our overall security operations.”

Equifax said the attackers were able to break into the company’s systems by exploiting an application vulnerability to gain access to certain files. It did not say which application or which vulnerability was the source of the breach.

Equifax has set up a Web site — https://www.equifaxsecurity2017.com — that anyone concerned can visit to see if they may be impacted by the breach. The site also lets consumers enroll in TrustedID Premier, a 3-bureau credit monitoring service (Equifax, Experian and Trans Union) which also is operated by Equifax.

According to Equifax, when you begin, you will be asked to provide your last name and the last six digits of your Social Security number. Based on that information, you will receive a message indicating whether your personal information may have been impacted by this incident. Regardless of whether your information may have been impacted, the company says it will provide everyone the option to enroll in TrustedID Premier. The offer ends Nov. 21, 2017.

ANALYSIS

At time of publication, the Trustedid.com site Equifax is promoting for free credit monitoring services was only intermittently available, likely because of the high volume of traffic following today’s announcement.

As many readers here have shared in the comments already, the site Equifax has available for people to see whether they were impacted by the breach may not actually tell you whether you were affected. When I entered the last six digits of my SSN and my last name, the site threw a “system unavailable” page, asking me to try again later.

[image: equifaxtry]

When I tried again later, I received a notice stating my enrollment date for TrustedID Premier is Sept. 13, 2017, but it asked me to return again on or after that date to enroll. The message implied but didn’t say I was impacted.

[image: enrollmentequifax]

Maybe Equifax simply isn’t ready to handle everyone in America asking for credit protection all at once, but this could also be seen as a ploy: the company may be assuming that many people simply won’t return after news of the breach slips off the front page.

Update, 11:40 p.m. ET: At a reader’s suggestion, I used a made-up last name and the last six digits of my Social Security number: The system returned the same response: Come back on Sept. 13. It’s difficult to tell if the site is just broken or if there is something more sinister going on here.

Also, perhaps because the site is so new and/or because there was a problem with one of the site’s SSL certificates, some browsers may be throwing a cert error when the site tries to load. This is the message that OpenDNS users are seeing right now if they try to visit www.equifaxsecurity2017.com:

[image: opendns-equifax]

Original story:

Several readers who have taken my advice and placed security freezes (also called a credit freeze) on their file with Equifax have written in asking whether this intrusion means cybercriminals could also be in possession of the unique PIN code needed to lift the freeze.

So far, the answer seems to be “no.” Equifax was clear that its investigation is ongoing. However, in a FAQ about the breach, Equifax said it has found no evidence to date of any unauthorized activity on the company’s core consumer or commercial credit reporting databases.

I have long urged consumers to assume that all of the personal information jeopardized in this breach is already compromised and for sale many times over in the cybercrime underground (because it demonstrably is for a significant portion of Americans). One step in acting on that assumption is placing a credit freeze on one’s file with the three major credit bureaus and with Innovis — a fourth bureau which runs credit checks for many businesses but is not as widely known as the big three.

More information on the difference between credit monitoring and a security freeze (and why consumers should take full advantage of both) can be found in this story.

I have made no secret of my disdain for the practice of companies offering credit monitoring in the wake of a data breach — especially in cases where the breach only involves credit card accounts, since credit monitoring services typically only look for new account fraud and do little or nothing to prevent fraud on existing consumer credit accounts.

Credit monitoring services rarely prevent identity thieves from stealing your identity. The most you can hope for from these services is that they will alert you as soon as someone does steal your identity. Also, the services can be useful in helping victims recover from ID theft.

My advice: Sign up for credit monitoring if you can, and then freeze your credit files at the major credit bureaus (it is generally not possible to sign up for credit monitoring services after a freeze is in place). Again, advice for how to file a freeze is available here.

The fact that the breached entity (Equifax) is offering to sign consumers up for its own identity protection services strikes me as pretty rich. Typically, the way these arrangements work is the credit monitoring is free for a period of time, and then consumers are pitched on purchasing additional protection when their free coverage expires. In the case of this offering, consumers are eligible for the free service for one year.

That the intruders were able to access such a large amount of sensitive consumer data via a vulnerability in the company’s Web site suggests Equifax may have fallen behind in applying security updates to its Internet-facing Web applications. Although the attackers could have exploited an unknown flaw in those applications, I would fully expect Equifax to highlight this fact if it were true — if for no other reason than doing so might make them less culpable and appear as though this was a crime which could have been perpetrated against any company running said Web applications.

This is hardly the first time Equifax or another major credit bureau has experienced a breach impacting a significant number of Americans. In May, KrebsOnSecurity reported that fraudsters exploited lax security at Equifax’s TALX payroll division, which provides online payroll, HR and tax services.

In 2015, a breach at Experian jeopardized the personal data on at least 15 million consumers. Experian also for several months granted access to its databases to a Vietnamese man posing as a private investigator in the U.S. In reality, the guy was running an identity theft service that let cyber thieves look up personal and financial data on more than 200 million Americans.

My take on this: The credit bureaus — which make piles of money by compiling incredibly detailed dossiers on consumers and selling that information to marketers — have for the most part shown themselves to be terrible stewards of very sensitive data, and are long overdue for more oversight from regulators and lawmakers.

In a statement released this evening, Sen. Mark Warner (D-Va.) called the Equifax breach “profoundly troubling.”

“While many have perhaps become accustomed to hearing of a new data breach every few weeks, the scope of this breach – involving Social Security Numbers, birth dates, addresses, and credit card numbers of nearly half the U.S. population – raises serious questions about whether Congress should not only create a uniform data breach notification standard, but also whether Congress needs to rethink data protection policies, so that enterprises such as Equifax have fewer incentives to collect large, centralized sets of highly sensitive data like SSNs and credit card information on millions of Americans,” said Warner, who heads the bipartisan Senate Cybersecurity Caucus. “It is no exaggeration to suggest that a breach such as this – exposing highly sensitive personal and financial information central for identity management and access to credit – represents a real threat to the economic security of Americans.”

It’s unclear why Web applications tied to so much sensitive consumer data were left unpatched, but a lack of security leadership at Equifax may have been a contributing factor. Until very recently, the company was searching for someone to fill the role of vice president of cybersecurity, which according to Equifax is akin to the role of a chief information security officer (CISO).

The company appears to have announced the breach after the close of the stock market on Thursday. Shares of Equifax closed trading on the NYSE at $142.72, up almost one percent over Wednesday’s price.

This is a developing story. Updates will be added as needed.

Further reading:

Are Credit Monitoring Services Really Worth It?

Report: Everyone Should Get a Security Freeze

How I Learned to Stop Worrying and Embrace the Security Freeze

Update: 8:38 p.m. ET: Added description of my experience trying to sign up for Equifax’s credit monitoring offer (it didn’t work and it may be completely broken).

CryptogramResearch on What Motivates ISIS -- and Other -- Fighters

Interesting research from Nature Human Behaviour: "The devoted actor's will to fight and the spiritual dimension of human conflict":

Abstract: Frontline investigations with fighters against the Islamic State (ISIL or ISIS), combined with multiple online studies, address willingness to fight and die in intergroup conflict. The general focus is on non-utilitarian aspects of human conflict, which combatants themselves deem 'sacred' or 'spiritual', whether secular or religious. Here we investigate two key components of a theoretical framework we call 'the devoted actor' -- sacred values and identity fusion with a group -- to better understand people's willingness to make costly sacrifices. We reveal three crucial factors: commitment to non-negotiable sacred values and the groups that the actors are wholly fused with; readiness to forsake kin for those values; and perceived spiritual strength of ingroup versus foes as more important than relative material strength. We directly relate expressed willingness for action to behaviour as a check on claims that decisions in extreme conflicts are driven by cost-benefit calculations, which may help to inform policy decisions for the common defense.

Worse Than FailureCodeSOD: Never Bother the Customer

Matthew H was given a pretty basic task: save some data as a blob. This task was made more complicated by his boss’s core philosophy, though.

Never. Bother. The. Customer.

“Right, but if the operation fails and we can’t continue?”

Never. Bother. The. Customer.

“Okay, sure, but what if they gave us bad input?”

Never. Bother. The. Customer.

“Okay, sure, but what if, by hitting okay, we’re going to format their entire hard drive?”

Never. Bother. The. Customer.

As such, for every method that Matthew wrote, he was compelled to write a “safe” version, like this:

protected void SaveToBlobStorageSafe()
{
    try
    {
        SaveToBlobStorage();
    }
    catch (Exception ex)
    {
    }
}

No errors, no matter what the cause, were ever to be allowed to be seen by the user.
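
For the record, “safe” doesn’t have to mean “silent.” Below is a minimal sketch of what a wrapper might look like if it still spared the user a raw stack trace without losing the failure entirely. The log and notifyUser delegates are hypothetical stand-ins for whatever logging and UI facilities the real application provides; this is an illustration of the alternative, not Matthew’s actual code.

using System;

public class BlobSaver
{
    // Hypothetical dependencies, injected so the sketch stays self-contained.
    private readonly Action<string> log;
    private readonly Action<string> notifyUser;

    public BlobSaver(Action<string> log, Action<string> notifyUser)
    {
        this.log = log;
        this.notifyUser = notifyUser;
    }

    public void SaveToBlobStorageSafe()
    {
        try
        {
            SaveToBlobStorage();
        }
        catch (Exception ex)
        {
            // Record the failure where an operator can actually see it...
            log($"SaveToBlobStorage failed: {ex}");
            // ...tell the user something went wrong, in plain language...
            notifyUser("Your changes could not be saved. Please try again.");
            // ...and rethrow so callers know the operation did not succeed.
            throw;
        }
    }

    private void SaveToBlobStorage()
    {
        // Placeholder for the real storage call.
        throw new NotImplementedException();
    }
}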


Planet Linux AustraliaJames Morris: Linux Plumbers Conference Sessions for Linux Security Summit Attendees

Folks attending the 2017 Linux Security Summit (LSS) next week may be also interested in attending the TPMs and Containers sessions at Linux Plumbers Conference (LPC) on the Wednesday.

The LPC TPMs microconf will be held in the morning and led by Matthew Garrett, while the containers microconf will be run by Stéphane Graber in the afternoon. Several security topics will be discussed in the containers session, including namespacing and stacking of LSM, and namespacing of IMA.

Attendance on the Wednesday for LPC is at no extra cost for registered attendees of LSS.  Many thanks to the LPC organizers for arranging this!

There will be followup BOF sessions on LSM stacking and namespacing at LSS on Thursday, per the schedule.

This should be a very productive week for Linux security development: see you there!

TEDTo engage joyfully with the world: More bold ideas from the TED Fellows

Percussionist Kasiva Mutua performs to open the second set of TED Fellows talks at TEDGlobal 2017, on Monday, August 28, 2017. Photo: Bret Hartman / TED

“The politics of joy” is a phrase that resonates through this session of TED Fellows talks. These talks, by and large, come from people who’ve taken a hard look at the world and its problems and decided to engage joyfully, with creativity, fresh insight and heart. From a soccer project that empowers young refugees, to an SMS service for cow farmers, it’s a collection of bold ideas that might make a difference.

First, the music. We begin with a multi-crescendic rumble of drums and percussion from musician Kasiva Mutua (whom we’ll hear from again later).

TED Senior Fellow Su Kahumbu loves cows and chickens and the people who raise them. “We really underestimate the importance of our farmers,” she says, pointing to their key role not only in our food supply but for our global health. She’s devoted herself to bringing small farmers the tools they need to keep learning and to make a decent living. Thus: iCow, an SMS service that shares best practices, reminders and useful data for livestock farmers, even over low-end feature phones. As she says, “Farmers who use the service begin to see improvements in yields, incomes, profits and animal health within only 3 months.”

Marc Bamuthi Joseph is a theater artist who also deeply loves sports. In his latest piece, peh-LO-tah, he says, “I thought a lot about how soccer was a means for my own immigrant family to foster a sense of community and normality in the new context of the United States.” peh-LO-tah leverages sports and movement to help new young Americans find a place to call home — as well as a connection to the global community. Because, as he puts it: “Soccer is the only thing the entire planet can agree to do together. It’s the official sport of this spinning ball. I want to be able to connect the joy of the game to the ever moving footballer, to connect that moving footballer to immigrants who also moved in sight of a better position. Among these kids, I want to connect their families’ histories to the bliss of the goal scorer’s run.”

Joseph’s talk was received with a standing ovation, and not a few shouts of “gooooaaal” from the back.

Next, we watch a video made by TED Fellows and collaborators Ed Ou + Kitra Cahana, who are in the midst of an in-depth reporting project among the Nunavut Inuit people.

Miho Janvier is a solar physicist — or, to make it sound even more awesome, you can call her a solar storm chaser. She studies the solar weather and how it can disrupt life here on Earth, as in March 1989, in the Canadian province of Quebec, when a large solar storm shut down the entire electric grid, disrupting life for thousands of people. How do you study the weather on the sun? Sending up a probe is a nonstarter (it would just fry, obviously) so her work involves a lot of computer modeling, using data from many different space missions to help assemble a clear picture of space weather. The goal: to protect both Earth and any spacecraft we may care to send out there.

Miho Janvier studies the weather around the sun, and how it affects us here on Earth — disrupting phone calls, playing havoc with electric grids, even tweaking satellites. She speaks at TEDGlobal 2017 on Monday, August 28, 2017. Photo: Bret Hartman / TED

Up till now, Kasiva Mutua has spoken to us with her drums. Now it’s time for her to speak words. Mutua is a drummer — and she’s heard some people criticize her for it, for playing an instrument that is so physical, so sensual, so … male. She explains that this taboo “stems from the traditional and psychological belief that the woman is an inferior being.” But as a woman and an African, she feels called not only to drum but to share the continent’s culture of drumming that marks events from childbirth to marriages to burials. Inspired by the need to preserve culture, Kasiva goes fearless, and teaches boys and girls alike the significance of drums. “Women can be custodians of culture,” she says. “My drum and I? We’re here to stay.” Cue thunderous applause.

Imagine falling ill, going to a clinic or hospital, and being unable to communicate with your caregivers. Kyle DeCarlo works to build awareness of the needs of Deaf patients, whose experience in hospitals can be difficult. For instance, imagine you are a practiced lip reader — but your doctor speaks to you from behind a surgical mask. Kyle was born Deaf to hearing parents, and growing up he found that despite hours of speech therapy, and powerful hearing aid technology, he still struggled to communicate. Why? As DeCarlo frames it, people try to solve the wrong problem for Deaf people. They try to give people access to sound by improving their hearing, when they should actually be focused on giving people access to language, to communication. A focus on nonverbal communication will help millions of Deaf kids all over the world, DeCarlo suggests, and it doesn’t take a huge gesture. Sometimes a simple modification, like the FDA-approved semi-transparent surgical masks developed by the team Kyle works with, might make a huge difference to people communicating with their caregivers.

Globally, there are more cancer survivors than ever — in the US, for instance, one in twenty people has survived a bout with cancer. Victoria Forster is one of them, having survived a cancer that struck when she was 8 years old. She asks: What can we learn from this new community of long-term survivors — about side effects, the long-term effects of common treatments, and simply and powerfully, how to fully live?

“There is a renaissance of innovation and technology in Africa,” says Abdigani Diriye. More than 100 incubators, accelerators and hubs have risen in countries like Kenya, Nigeria, South Africa and Rwanda. Inspired, Diriye has started up a business accelerator in his home of Somalia. “There was no precedent in Somalia for a startup culture,” he says, but he’s now trained over 25 startups powered by “the immense talent, drive and creativity of Somali youth.”

Katlego Kolanyane-Kesupile is an international lifestyle writer, musician, performer, a proudly transgender “mainstream, whitewashed, new age, digestible queer.” But in this glossy life, a part of her past was missing — “a rural child lived within this shiny visage of fabulosity.” In a poetic talk, she examines how she’s continually linking up those two parts of her being, a VIP-lounge sophisticate whose roots are in “the little brown-skinned children frolicking in the streets of an incidental railway settlement … or an off-the-grid village, legs clad in dust stockings.”

Katlego Kolanyane-Kesupile poetically examines how she links her modern, glossy, VIP-lounge lifestyle with the child she once was in a small dusty village. She asks: Does modernity demand we put aside the child, the villager, the indigenous truth that completes who we are? Photo: Bret Hartman / TED

An unabashed tech-blog-reading and Silicon-Valley-podcast-listening nerd, Soyapi Mumba works on creating electronic health records for healthcare providers in Malawi. He realized he couldn’t just use off-the-shelf software, because doctors in Malawi face a few challenges that existing software packages couldn’t handle — such as patients who don’t know their exact birthday, and whose names can be pronounced four different ways, all correct. On top of writing a brand-new system to store patient data, Mumba and his team modified some expired “internet appliances” (remember those?) to create a top-notch data-entry system, and even went so far as to weld up towers to carry a mesh network. “We have had to become jacks of all trades and build whole systems, including the infrastructure, from the ground up,” Soyapi says. “People are coming to Malawi to learn how we did it.”

Walé Oyéjidé is a fashion designer with Ikiré Jones whose clothes and accessories bear a message — to celebrate the culture and designs of the African diaspora and to rewrite cultural narratives. His culture-bending tapestries blend aesthetics from across the globe to craft a message about inclusivity. “The clothes we wear can be a great illustration of diplomatic soft power. Clothes can serve as bridges between our seemingly disparate cultures.”

As he puts it: “For those of us from this beautiful continent, to be African is to be inspired by culture and to be filled with undying hope for the future. My work speaks for those who will no longer let their future be dictated by a troubled past. We stand ready to tell our own stories, without compromise, without apologies.”

Walé Oyéjidé creates clothing that rewrites cultural narratives, marrying the culture and designs of the African diaspora to tailored jacket forms and rich scarves. (And yes, that’s one of his symbolic scarves in the trailer for Black Panther.) Photo: Bret Hartman / TED

Next, we watch a video about TED Fellow and explorer Steve Boyes, who’s using technology and good old-fashioned boots on the ground to map the vital Okavango Delta, an 18,000-square-mile wetland wilderness that straddles the borders of Botswana, Namibia, and Angola. Bonus: Read our new story about Steve’s encounter with two tribes who’d rarely met outsiders before.

To crown two sets of brilliant, tear-inducing, awe-inspiring and simply beautiful talks that brought us to our feet nearly every single time, we are blessed with a full collaboration of Meklit Hadero, Kasiva Mutua, cellist Joshua Roman and Blinky Bill.

That’s it for the TED Fellows sessions. But the conference is just warming up. Also, click here to learn more about the TED Fellows program: who they are, what it’s like, and if you are sufficiently inspired, how to become one (applications for the 2018 class of Fellows are due on September 10, 2017).

One of many standing ovations during the TED Fellows talks, this time from audience members Faith Osier, Cristina James and Rosemary Okello-Orlale, during TEDGlobal 2017 on Monday, August 28, 2017, in Arusha, Tanzania. Photo: Bret Hartman / TED



TEDTEDWomen update: One year on, an extraordinary story of understanding and forgiveness

Thordis Elva and Tom Stranger speak during TEDWomen 2016. Photo: Marla Aufmuth / TED

When we started TEDWomen in 2010, we felt strongly that we wanted to include a series of talks we called “Duets” in which we would forego the traditional TED Talk model and present pairs of speakers instead of solo ones.

There is no question that the Duets sessions are often among the most popular and provocative.

One such talk, given last year in San Francisco, was one we knew would be controversial from the outset, because it was going to take us into entirely new territory… not only at TEDWomen but also online, as men and women in every part of the world struggle to come to terms with the global epidemic of sexual violence.

In this groundbreaking duet TED Talk, we heard the story of an extraordinary partnership that developed between a victim of sexual violence and her perpetrator — as they each searched for a path to understanding and forgiveness.

I learned about Thordis Elva from an Icelandic friend who told me that a friend of hers had reconnected, after many years, with her high school boyfriend who had raped her. Not to pursue the romance which had, of course, ended immediately, but to try to make sense of what had happened so that she could stop the blaming and shaming that threatened to take over her life.

As my friend recounted the plan Thordis had come up with to begin an online dialogue with Tom (who had been an exchange student from Australia), I could begin to see the potential for a TED Talk that would reveal a new possibility for millions of victims of sexual violence. I started a phone dialogue, first with Thordis, over a period of two years, as the conversation with Tom was underway. I’d hear from her from time to time about the insights she was gaining as well as the emotional rollercoaster of their reconnection, which eventually led them to meet in person and complete a healing process that transformed both of their lives.

They were ready to share their story, knowing that the impact would be huge but somewhat unpredictable. This was new territory — a rapist standing on a stage next to the woman he raped, telling their story of reconciliation and forgiveness together, to the world.

It was a tough road getting them ready for that moment. We sent many drafts of their TED Talk back and forth, trying to condense 20 years of their lives into 16 minutes. Many times, it wasn’t clear if they and their families (both had life partners) would go forward, but the more they wrote and talked and put into words their discoveries about themselves and about the nature of sexual crimes, the more they felt compelled to share it, hoping that it might provide a new path to healing for others.

I was almost as nervous as they were when they took the stage in San Francisco, knowing that it was potentially explosive to have a confessed perpetrator of a sexual crime standing in front of an audience that most certainly included many victims of similar violence. One of every three women experiences sexual violence in her lifetime.

The audience was quiet and leaned into the experience of hearing this unique story told with authenticity and conviction. They stood to applaud when the talk ended and both Thordis and Tom broke into tears backstage.

Here’s the talk if you haven’t seen it:

TEDWomen was Thordis and Tom’s first experience talking publicly together about what had happened between them. Since delivering it last October, a lot has happened. They wrote a book based on their talk — their journey from violence to understanding to forgiveness — and the video has been viewed over 3.2 million times on the TED website.

I caught up with them over email earlier this month. After releasing their book, South of Forgiveness, this spring, they spent two months traveling to Australia, Britain, Japan, Sweden, Iceland, the USA, Germany and Poland to promote it. They’ve been interviewed on the BBC, CBS, and NPR, and featured in Cosmopolitan, Marie Claire, Teen Vogue, The London Times, the Evening Standard, and Spiegel Online, to name just a few.

Thordis tells me that the book has received many favorable reviews, including one review in the UK Sunday Times she particularly appreciated that said: “Hats off to Elva and Stranger for a brave journey that might well change lives.”

“Public speaking has been a major part of what I do for the past decade and in the months to come, I’ll be speaking to a wide range of people — including police officers, shelter workers and politicians — about sexual violence, breaking the silence that surrounds it, healing from it and lifting the shame from survivors.

I’ve also been tending to my other passion, which is battling online abuse, including the non-consensual distribution of nude photos.”

Their talk has provoked strong reactions — for instance, after a petition drive this spring, a scheduled talk in London was cancelled, and instead became a more intimate facilitated conversation. As their journey shows, we are still learning how to have this discussion.

This fall, Thordis is releasing a short film for teenagers, “reminding them of the importance of consent in any type of intimate or sexual situation, whether it is online or offline.”

Tom says that “speaking to the TEDWomen audience was an unforgettable and profound experience, and the times since have been equally unforgettable and profound, also educative, at times difficult, and inspiring…”

“One woman shared with me her truth of being a survivor of rape. She must have had to steel herself to come up to me in person, but her powerful words “keep talking” have stayed with me. A face to face conversation with a man whose daughter was gang raped left me speechless, as did his verbalized support for the intention of our TED talk and book.”

I am grateful that they were willing to share their story with us at TEDWomen. As Thordis says, it’s important to get these stories out into the open and to give rape survivors the space to talk about their experiences. Being open to talking honestly about our global rape culture can be a big step towards changing the dynamics that are perpetuating it. In the United States alone, RAINN estimates there are over 300,000 victims of rape and sexual assault each year. Of those, 63% go unreported and only 0.6% of rapists are incarcerated. The statute of limitations had passed for Thordis to bring charges against Tom, but as she makes clear, her purpose was not to punish him but to try to understand him and the culture that contributed to his actions against her.

Tom noted that “very few people who have spoken to me about the TED talk haven’t, in some way, been personally affected by a man’s sexual violence. This reality has underlined to me the collective encumbrance we share, as men, to vocally address this subject, grow understandings, and promote new identities and stories that lift up the healthy and self-connected elements of manhood.”

He says he’s worked with many groups working on the issue to find the right language to engage other men in reflective conversations about masculinity and perceptions of manhood. “Such conversations,” he says, “are where I’m taking the lessons, knowledge and experiences I have gained, to build future collaborations and simply to continue the listening and talking.”

They hope their story will change lives and by all accounts, it already has. Thordis says, “I continually get messages from people all over the world who have found hope, inspiration and healing in our TED talk. It continues to be a humbling, life-changing experience.”


This year, TEDWomen will be held November 1–3 in New Orleans. The theme is Bridges: We build them, we cross them, and sometimes we even burn them. We’ll explore the many aspects of this year’s theme through curated TED Talks, including a session of memorable pairings to be announced in early September. The hosts of this year’s duet session will be a married couple who have just completed two years of interviews with partners; they will add their own insights about how good partnerships come together and stay together… something they call “plus wonder.” That’s how we think of our Duet sessions, too: each is a plus-one TED Talk.

Registration for TEDWomen 2017 is open, so if you haven’t registered yet, click this link and apply today — space is limited and I don’t want you to miss out.


TEDSneak peek: First look at the TEDWomen 2017 lineup

This November, we’re gathering in New Orleans for three days of TEDWomen — to share talks about bridging the world today and the world we all hope to build. Today, we’re announcing the first speakers on our lineup, a mix of powerful voices, creative insights and committed activism that will set the tone for our time together. Read on to learn more about these fascinating women and men — and of course, we’d love you to join us November 1–3 in New Orleans for TEDWomen 2017: Bridges.

Valarie Kaur: In an era of enormous rage, Valarie argues that Revolutionary Love — as a public ethic and shared practice — is the call of our times.  

What is the antidote to the rise in nationalism, polarization and hate in the U.S. and around the globe? Social justice activist Valarie Kaur argues we need to claim love as a public ethic. She redefines love not just as an emotion but as labor that births and transforms: Revolutionary Love. When Revolutionary Love is released from the domestic sphere and practiced in public, it disrupts the logic of capitalism, challenges structures of injustice, and shifts collective consciousness. Telling personal stories from the front lines of social movements, and drawing upon ethics, law, and neuroscience, Kaur argues that the practice of Revolutionary Love — for others, opponents, and ourselves — is the call of our times. Watch her viral speech: “Breathe and push.”

Abby Wambach and Glennon Doyle: Just married, Abby and Glennon think deeply on love, faith and equality.

Abby Wambach is the all-time leading scorer in international soccer history with 184 career goals. She was the United States’ leading scorer in the 2007 and 2011 Women’s World Cup tournaments and the 2004 and 2012 Olympics. (She missed Beijing 2008 due to a broken leg.) Her ability to wear down defenses with her physical play, aerial game and hard running has long been a key to the USA’s success. After winning the Women’s World Cup in 2015, Wambach retired as one of the most dominant players in the history of women’s soccer. For the next chapter of her career, she’s fighting for equality and inclusion across industries. Glennon Doyle is the bestselling author of Love Warrior, a 2016 Oprah’s Book Club selection, as well as the bestseller Carry On, Warrior. She is an activist, speaker and founder of Together Rising, a nonprofit organization that has raised over $7 million for women, families and children in crisis. Glennon is also the creator of Momastery, an online community where millions of readers meet each week to experience her shameless and hilarious essays about marriage, motherhood, faith, mental health, addiction, recovery and connection.

Gretchen Carlson: After this news anchor stood up and spoke out about sexual harassment in the workplace, women all over the world began to take back their lives, careers and dignity.

Gretchen Carlson has been at the forefront of gender equality and diversity for the last decade, consistently defying expectations placed on her.  Advocating bravery and empowerment at every turn, Gretchen has become a force of innumerable power in our current cultural climate, taking on patriarchal notions of gender and sexuality head-on with extreme resilience and zeal. She’s a tireless advocate for young women and is committed to gender equality, exemplifying how you can truly have it all – a successful career, a substantive family life, and an altruistic purpose. Her book Be Fierce is due out October 17.

Justin Baldoni: An outspoken feminist, Justin has been doubling down on his efforts to start a dialogue with men to redefine masculinity.

Justin Baldoni is an actor, director, and entrepreneur whose efforts are focused on creating impactful media. He can be seen playing Rafael on CW’s award-winning phenomenon Jane the Virgin. In 2012, Baldoni created the most watched digital documentary series in history, “My Last Days,” a show about living told by the dying. On the heels of that success, Baldoni founded Wayfarer Entertainment, a digital media studio focused on disruptive inspiration. In 2014 he started the annual Carnival of Love with a mission to improve the way the Los Angeles community views and interacts with those experiencing homelessness. To support his work on Skid Row he started the Wayfarer Foundation, which supports his work breaking the cycle of homelessness and supporting individuals facing terminal illness.

Eve Abrams: Audio documentarian and educator Eve makes radio stories, mostly about her adopted home town, New Orleans.

Eve Abrams first fell in love with stories listening to her grandmother tell them. Eve is an award-winning radio producer, writer, audio documentarian, and educator whose work focuses on the stories of her adopted hometown, New Orleans. She produces the audio project Unprisoned, piloted through AIR’s Finding America initiative, which tells how mass incarceration affects people serving time outside and investigates why Louisiana is the world’s per-capita incarceration capital. Unprisoned received a Gabriel Award and was a 2017 Peabody Finalist. Her 2015 documentary Along Saint Claude chronicled 300 years of change in New Orleans and received an Edward R. Murrow award. Eve Abrams is a Robert Rauschenberg Foundation Artist as Activist Fellow. She has been teaching for 25 years and currently teaches for the Society for Relief of Destitute Orphan Boys.

Christy Turlington Burns: 98% of deaths in childbirth are preventable, and as Christy says: “This is not an issue that needs a cure. We know how to save these women’s lives.”

Christy Turlington Burns is a mother, advocate, social entrepreneur, and founder & CEO of the innovative maternal health organization Every Mother Counts. Having endured a childbirth complication herself, Christy was compelled to direct and produce the documentary No Woman, No Cry about maternal health challenges that impact the lives of millions of girls and women around the world. Every Mother Counts aims to heighten awareness about our global maternal health crisis. While advocacy remains a key focus, Every Mother Counts has evolved into a 501(c)(3) investing in programs around the world to ensure all women have access to quality maternal health care.

Sally Kohn: This journalist and writer searches for common ground among political foes by focusing on the compassion and humanity in everyone.

Sally Kohn has a powerful vision for a more united United States. She’s a columnist and a political commentator for CNN, and is working on a book about hate that will be published in the Spring of 2018. As a former contributor to Fox News, this progressive lesbian sparred with some of the most conservative minds on television, and has sifted through hundreds of letters of hate mail a day. But she deeply believes we can find common ground in our shared humanity, political differences aside. Before we can achieve political correctness, she says, we must first establish emotional correctness — and this will ignite conversations that lead to real change. Watch Sally’s short TED Talks: “Let’s try emotional correctness,” and “Don’t like clickbait? Don’t click.”

Cleo Wade: “The best thing about girl power is that over time it turns into woman power.”

Cleo Wade, an outspoken artist, speaker, poet, and the author of a forthcoming book, is an inspiring advocate for gender and race equality. She creates motivating messages, blending simplicity with positivity, femininity and arresting honesty. Her poems, accessible and empowering, speak to a greater future for all women, people of color, and the LGBTQ community, preaching love, acceptance, justice and peace.

Photos: Valarie Kaur by Sharat Raju; Gretchen Carlson by Brigitte Lacombe; Justin Baldoni by Koury Angelo; Christy Turlington Burns by Kassia Meador


Sociological ImagesMovie Times In Modern Times

One of the big themes in social theory is rationalization—the idea that people use institutions, routines, and other formal systems to make social interaction more efficient, but also less flexible and spontaneous. Max Weber famously wrote about bureaucracy, especially how even the most charismatic or creative individuals would eventually come to rely on stable routines. More recent works highlight just how much we depend on rationalization in government, at work, and in pop culture.

With new tools in data analysis, we can see rationalization at work. Integrative Biology professor Claus Wilke (@ClausWilke) recently looked at a database of movies from IMDB since 1920. His figure (called a joyplot) lets us compare distributions of movie run times and see how they have changed over the years.

While older films had much more variation in length, we can see a clear pattern emerge where most movies now come in just shy of 100 minutes, and a small portion of short films stick to under 30. The mass market movie routine has clearly come to dominate as more films stick to a common structure.

What’s most interesting to me is not just these two peaks, but that we can also see the disappearance and return of short films between 1980 and 2010 and some smoothing of the main distribution after 2000. Weber thought that new charismatic ideas could arise to challenge the rationalized status quo, even if those ideas would eventually become routines themselves. With the rise of online distribution for independent films, we may be in the middle of a new wave in charismatic cinema.
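
To make the mechanics of such a figure concrete, here is a minimal sketch of the kind of grouping it rests on: bucket films by decade, then summarize each decade’s runtime distribution. The sample data is a toy stand-in, not Wilke’s actual IMDB extract, and the type and method names are illustrative.

using System;
using System.Linq;

class RuntimeDistributions
{
    // A toy record standing in for one row of the movie database.
    record Movie(int Year, int RuntimeMinutes);

    static void Main()
    {
        // Illustrative sample only; the real analysis runs over the full
        // IMDB database back to 1920.
        var movies = new[]
        {
            new Movie(1925, 18), new Movie(1925, 95), new Movie(1968, 140),
            new Movie(2015, 97), new Movie(2015, 102), new Movie(2016, 22),
        };

        // Bucket by decade, then report a simple summary of each bucket.
        foreach (var decade in movies.GroupBy(m => m.Year / 10 * 10).OrderBy(g => g.Key))
        {
            var runtimes = decade.Select(m => m.RuntimeMinutes).OrderBy(r => r).ToList();
            Console.WriteLine(
                $"{decade.Key}s: n={runtimes.Count}, median={runtimes[runtimes.Count / 2]} min");
        }
    }
}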

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramSecurity Vulnerabilities in AT&T Routers

They're actually Arris routers, sold or given away by AT&T. There are several security vulnerabilities, some of them very serious. They can be fixed, but because these are routers it takes some skill. We don't know how many routers are affected, and estimates range from thousands to 138,000.

Among the vulnerabilities are hardcoded credentials, which can allow "root" remote access to an affected device, giving an attacker full control over the router. An attacker can connect to an affected router and log in with a publicly disclosed username and password, granting access to the modem's menu-driven shell. An attacker can view and change the Wi-Fi router name and password, and alter the network's setup, such as rerouting internet traffic to a malicious server.

The shell also allows the attacker to control a module that's dedicated to injecting advertisements into unencrypted web traffic, a common tactic used by internet providers and other web companies. Hutchins said that there was "no clear evidence" to suggest the module was running but noted that it was still vulnerable, allowing an attacker to inject their own money-making ad campaigns or malware.

I have written about router vulnerabilities, and why the economics of their production makes them inevitable.

Worse Than FailureNo Chemistry

Tyler G.’s “engagement manager”, Sheila, had a new gig for him. The Global Chemical Society, GCS, had their annual conference coming up, and their system for distributing the schedules was a set of USB thumb-drives with self-hosting web apps.

“You’ll be working with two GCS representatives, Jeff and Graham,” Sheila explained. “They’ll provide you with last year’s source code, and the data for this year’s schedule. You’ll need to wire them up.”

Later that day, the four of them (Tyler, Sheila, Jeff, and Graham) joined a Skype call. Only the top of Jeff’s shiny, bald head could be seen on his webcam, and Graham had joined audio-only.

Statler and Waldorf from the Muppet Show

Sheila managed the introductions. Tyler started the discussion by asking what format they could expect the schedule data to come in.

Jeff shrugged, or at least that’s what they guessed from the way the top of his head bobbed. “Graham, do you know?”

“I think it might be XML,” Graham replied, his voice muffled with static and saturated with background noise. “I can’t say for sure. We’ll send a preliminary data dump first.”

The Blob

The data arrived that afternoon, as a single XML file.

The first time Tyler tried to open it, Notepad++ crashed in protest. After a few attempts, he finally coaxed the editor into letting him see the file. It had no uniform format. Individual fields might be HTML-formatted strings or indecipherable base64-encoded binary blobs (with no indicator as to what data was encoded), and even the plaintext encodings switched arbitrarily between 8-bit and 16-bit.

As soon as Tyler explained to Sheila what a mess the data was, she called the GCS reps for another video conference. Jeff’s shiny pate bobbed around as he listened to their complaints. Sheila finally asked, “Can you do anything to clean up the data?”

“Not really, no,” Jeff replied. “This is how we get the data ourselves.”

“Absolutely not,” Graham concurred.

“We did this last year,” Jeff replied, “and we didn’t have any trouble.”

A Lack of Support

For weeks, Tyler worked on an importer for the XML blob. He figured out what the base64-encoded data was (PDF files), why the encoding kept changing (different language encodings), and why some text was HTML-formatted and some wasn’t (the entries were copied from email, with some as HTML and some as plaintext).
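
How do you work out that an opaque blob is a PDF? One low-tech way, as a minimal sketch: try decoding the field as base64 and check for the PDF magic header, %PDF-. The class and method names here are illustrative, not taken from Tyler’s actual importer.

using System;
using System.Text;

static class BlobSniffer
{
    // Returns true if the field is valid base64 whose decoded bytes start
    // with the PDF magic header "%PDF-".
    public static bool LooksLikeBase64Pdf(string field)
    {
        byte[] bytes;
        try
        {
            bytes = Convert.FromBase64String(field.Trim());
        }
        catch (FormatException)
        {
            return false; // not base64 at all
        }
        return bytes.Length >= 5 &&
               Encoding.ASCII.GetString(bytes, 0, 5) == "%PDF-";
    }
}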

Jeff and Graham had no interest in the action items assigned to them, and continued to be the largest obstacles to the project. They offered no help, they changed their minds nearly daily, and when Sheila started scheduling daily calls with them, they used those calls as an opportunity to be sarcastic and insult Tyler.

Sheila, who had begun the project in a cheerful manner, started balling her fists during each call with Jeff and Graham, now nicknamed “Statler and Waldorf”. After one particularly grueling call, she cursed and muttered dark things like “How do they get anything done?”

After weeks of frustration, pulled hair, and cranky calls, Tyler’s importer was finished. With a few days to go before the conference, they had just enough time to hand the software off and get the USB sticks loaded.

During that morning’s video conference, Jeff and Graham announced that the format had changed to CSV. Sheila, barely keeping her voice level, asked why the format had changed.

“Oh, the industry standard changed,” Graham said.

“And why didn’t you tell us?”

Jeff’s shiny scalp tilted as part of an offscreen shrug. “Sorry. Guess we forgot.”

The Bitter End

The CSV-encoded data, the final and official data-dump for the conference, arrived just one day before the app was due. It came in three files, seemingly split at random, with plenty of repetition between the files. It was all the same, insanely encoded data, just wrapped as CSV rows instead of XML tags.

Tyler crunched his way through an all-nighter. By morning, the importer was finished. He sent the code to GCS’s servers, went home, and collapsed.

The coming Sunday, attendees would arrive at the GCS conference. They would be given a USB stick that they could plug into their laptops. The conference app would work perfectly, taking the fractured, convoluted data, and presenting it as a scrollable, interactive calendar of panels, presentations, and convention hall hours. Some graduate student, a lab assistant to a Nobel laureate, would open the app and wonder:

“This programming thing doesn’t seem like a lot of work.”


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Software Freedom Day 2017 and LUV Annual General Meeting

Sep 16 2017 12:00
Sep 16 2017 18:00
Location: 
Electron Workshop, 31 Arden St. North Melbourne

 

Software Freedom Day 2017

It's that time of the year where we celebrate our freedoms in technology and raise a ruckus about all the freedoms that have been eroded away. The time of the year we look at how we might keep our increasingly digital lives under our own control and prevent prying eyes from seeing things they shouldn't. You guessed it: It's Software Freedom Day!

LUV would like to acknowledge Electron Workshop for the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.




Planet Linux AustraliaOpenSTEM: Space station trio returns to Earth: NASA’s Peggy Whitson racks up 665-day record | GeekWire

https://www.geekwire.com/2017/space-station-trio-returns-earth-nasas-peggy-whitson-sets-665-day-record/

NASA astronaut Peggy Whitson and two other spacefliers capped off a record-setting orbital mission with their return from the International Space Station.

CryptogramSecurity Flaw in Estonian National ID Card

We have no idea how bad this really is:

On 30 August, an international team of researchers informed the Estonian Information System Authority (RIA) of a vulnerability potentially affecting the digital use of Estonian ID cards. The possible vulnerability affects a total of almost 750,000 ID-cards issued starting from October 2014, including cards issued to e-residents. The ID-cards issued before 16 October 2014 use a different chip and are not affected. Mobile-IDs are also not impacted.

My guess is that it's worse than the politicians are saying:

According to Peterkop, the current data shows this risk to be theoretical and there is no evidence of anyone's digital identity being misused. "All ID-card operations are still valid and we will take appropriate actions to secure the functioning of our national digital-ID infrastructure. For example, we have restricted the access to Estonian ID-card public key database to prevent illegal use."

And because this system is so important in local politics, the effects are significant:

In the light of current events, some Estonian politicians called to postpone the upcoming local elections, due to take place on 16 October. In Estonia, approximately 35% of the voters use digital identity to vote online.

But the Estonian prime minister, Jüri Ratas, said at a press conference on 5 September that "this incident will not affect the course of the Estonian e-state." Ratas also recommended to use Mobile-IDs where possible. The prime minister said that the State Electoral Office will decide whether it will allow the usage of ID cards at the upcoming local elections.

The Estonian Police and Border Guard estimates it will take approximately two months to fix the issue with faulty cards. The authority will involve as many Estonian experts as possible in the process.

This is exactly the sort of thing I worry about as ID systems become more prevalent and more centralized. Anyone want to place bets on whether a foreign country is going to try to hack the next Estonian election?

Another article.

Cory DoctorowOur technology is haunted by demons controlled by transhuman life-forms

In my latest Locus column, “Demon-Haunted World,” I propose that the Internet of Cheating Things — gadgets that try to trick us into arranging our affairs to the benefit of corporate shareholders, to our own detriment — is bringing us back to the Dark Ages, when alchemists believed that the universe rearranged itself to prevent them from knowing the divine secrets of its workings.


From Dieselgate to Wannacry to HP’s sleazy printer ink chicanery, we are increasingly colonized by demon-haunted things controlled by nonhuman life-forms (corporations) that try to trick, coerce or scare us into acting against our own best interests.

Alchemists – like all humans – are mediocre lab-technicians. Without peer reviewers around to point out the flaws in their experiments, alchemists compounded their human frailty with bad experimental design. As a result, an alchemist might find that the same experiment would produce a “different outcome” every time.

In reality, the experiments lacked sufficient controls. But again, in the absence of a peer reviewer, alchemists were doomed to think up their own explanations for this mysterious variability in the natural world, and doomed again to have the self-serving logic of hubris infect these explanations.

That’s how alchemists came to believe that the world was haunted, that God, or the Devil, didn’t want them to understand the world. That the world actually rearranged itself when they weren’t looking to hide its workings from them. Angels punished them for trying to fly to the Sun. Devils tricked them when they tried to know the glory of God – indeed, Marcelo Rinesi from The Institute for Ethics and Emerging Technologies called modern computer science “applied demonology.”

In the 21st century, we have come full circle. Non-human life forms – limited liability corporations – are infecting the underpinnings of our “smart” homes and cities with devices that obey a different physics depending on who is using them and what they believe to be true about their surroundings.

Demon-Haunted World [Cory Doctorow/Locus]

Cory DoctorowWalkaway won the Dragon Award for Best Apocalyptic Novel

Yesterday, I left the Black Rock Desert after Burning Man and my phone came to life and informed me that my novel Walkaway had been awarded DragonCon’s Dragon Award for Best Apocalyptic Novel!

I couldn’t be more pleased. My sincere thanks to all the voters who supported the novel! By the way, Tor.com published Party Discipline — a novella set in the Walkaway world — while I was away in the desert. I think it came out great.

BEST SCIENCE FICTION NOVEL:
Babylon’s Ashes by James S.A. Corey

BEST FANTASY NOVEL (INCLUDING PARANORMAL):
Monster Hunter Memoirs: Grunge by Larry Correia and John Ringo

BEST YOUNG ADULT / MIDDLE GRADE NOVEL:
The Hammer of Thor by Rick Riordan

BEST MILITARY SCIENCE FICTION OR FANTASY NOVEL:
Iron Dragoons by Richard Fox

BEST ALTERNATE HISTORY NOVEL:
Fallout: The Hot War by Harry Turtledove

BEST APOCALYPTIC NOVEL:
Walkaway by Cory Doctorow

BEST HORROR NOVEL:
The Changeling by Victor LaValle

BEST COMIC BOOK:
The Dresden Files: Dog Men by Jim Butcher, Mark Powers, Diego Galindo

BEST GRAPHIC NOVEL:
Jim Butcher’s The Dresden Files: Wild Card by Jim Butcher, Carlos Gomez

BEST SCIENCE FICTION OR FANTASY TV SERIES:
Stranger Things, Netflix

BEST SCIENCE FICTION OR FANTASY MOVIE:
Wonder Woman directed by Patty Jenkins

BEST SCIENCE FICTION OR FANTASY PC / CONSOLE GAME:
The Legend of Zelda: Breath of the Wild by Nintendo

BEST SCIENCE FICTION OR FANTASY MOBILE GAME:
Pokémon GO by Niantic

BEST SCIENCE FICTION OR FANTASY BOARD GAME:
Betrayal at House on the Hill: Widow’s Walk by Avalon Hill

BEST SCIENCE FICTION OR FANTASY MINIATURES / COLLECTIBLE CARD / ROLE-PLAYING GAME
Magic the Gathering: Eldritch Moon by Wizards of the Coast

Here are the winners of the 2017 Dragon Awards [Andrew Liptak/The Verge]

Sociological ImagesA New Era for Sociological Images

Dear readers,

This summer marks Sociological Images’ ten year anniversary. I founded the website with Gwen Sharp and have been at its helm ever since. With the support of the University of Minnesota (whose servers we used to crash); Doug Hartmann and Chris Uggen at The Society Pages (who have built an incredible site for SocImages to live on); Jon Smadja’s technical expertise (keeping it all humming); hundreds of insightful guest bloggers; and regular contributors like Jay Livingston, Philip Cohen, Martin Hart-Landsberg, and Tristan Bridges (who also did a grueling three month stint as Guest Editor), the site grew to a size that I never imagined possible. It reached at its peak over a half million visitors a month, boasts an archive of over 5,000 posts, and enjoys 162,000 social media followers. I am beyond grateful to my community of professional sociologists and to the sociology majors and sociology-curious out there in the world, without whom none of this could have happened.

When the site was new, and through 2012, I sat on panels at the American Sociological Association meetings that were titled something to the effect of “Will blogging ruin your career?” It did not. In 2015, Gwen Sharp and I were given the Distinguished Contributions to Teaching Award, the sixth of seven awards we would win for our shepherding of the site. Things changed, and fast. More and more sociologists came online, nearly a thousand of us joined Twitter, and sociology-themed blogs proliferated. Even fancy folk in the discipline started blogging, writing for high-profile news and opinion outlets, and building social media followers. This year a sociologist won a Pulitzer Prize for a book aimed at a — wait for it — general audience.

I am among those who have benefited from this public turn in sociology, the one that Michael Burawoy championed in 2004. It gave me the opportunity to speak to wider audiences, become visible to my peers, be recognized for thoughtful (and sometimes not-so-thoughtful) analyses, and build a public reputation that led to a general audience book of my own. As the gatekeeper to Sociological Images and its social media, I also held in my hands the power to help other sociologists publicize their work, one that I tried to use liberally. This has been one of the most gratifying parts of being the site’s editor, and also one that I think has attracted much good will. Sociological Images has been, in short, an incredible boon to my career.

At this time, then, with my own career well-launched, it seems greedy to hold onto the reins. So, with this post I announce an editorial change and a new era for Sociological Images. I am so pleased and excited to introduce Evan Stewart as the new Editor and Principal Author. Evan is a late-stage PhD candidate at the University of Minnesota. After years as a board member and now Graduate Editor of The Society Pages, he brings many years of practice under the careful guidance of Chris Uggen and Doug Hartmann, sociologists who bring great wisdom to the practice of public sociology. I am thrilled that Evan is on board and confident that, under his leadership, the site will continue to be enjoyed by sociology-lovers and to serve as a resource for all sociologists who want to reach a broader public.

It is bittersweet to go. I will remain on as an occasional contributor (if Evan thinks my guest posts are of high enough quality to merit publication, of course) and share with him the keys to the social media accounts. I have new projects in the works that I am very excited about — many of which, in various ways, were made possible by SocImages — so you haven’t seen the last of me yet. And I will always be grateful to all of you for your part in the incredible journey the last decade has brought.

With that, please welcome Evan Stewart!

——————————-

Hello everyone!

I am honored to be taking up the torch at Sociological Images, and grateful to Lisa for the opportunity. Lisa has built a remarkable legacy with this blog, and I am looking forward to carrying it on into the future.

Sociological Images is the reason I got hooked on public sociology. I found the site late in my undergraduate career at Michigan State University. Lisa and Gwen were kind enough to publish, and later repost, a guest contribution I wrote for a visual sociology class—one that would eventually turn into a chapter for a book from UC Press.

When I started my graduate work in political and cultural sociology at the University of Minnesota, I jumped at the chance to do more public scholarship with The Society Pages. Thanks to Doug Hartmann and Chris Uggen, grad students on the TSP board learn to assess cutting-edge research from all over our field. Bringing this work to the public makes us better writers and scholars. With their support over four years at TSP, I founded and piloted two blogs: There’s Research on That!, which brings social science to the news of the day, and Social Studies Minnesota, which covers UMN research in political science, psychology, mass communication, and beyond.

Doing great research and sharing great research with people go hand in hand. My research at the American Mosaic Project tackles big questions about how people think about who belongs in society—questions that matter to people! This is due in no small part to my wonderful academic advisor, Penny Edgell (who also blogs!), and graduate colleagues like Jacqui Frost and Jack Delehanty.

Now, I am excited to bring my experience to Sociological Images. The site has been a home for honing the sociological imagination for a decade, and I aim to keep it that way. You can expect to see a lot of familiar names as I continue to bring you top-notch blogging about the social world from SocImages’ old friends. I’ll also be producing original content and cross-posting material by The Society Pages’ graduate editorial board. Look for some new folks from Minnesota in the by-lines, as well as future calls for guest submissions.

There will be more to come in the next few months, but for now say hi on Twitter and stay tuned for the first new posts coming this week!

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityWho Is Marcus Hutchins?

In early August 2017, FBI agents in Las Vegas arrested 23-year-old British security researcher Marcus Hutchins on suspicion of authoring and/or selling “Kronos,” a strain of malware designed to steal online banking credentials. Hutchins was virtually unknown to most in the security community until May 2017 when the U.K. media revealed him as the “accidental hero” who inadvertently halted the global spread of WannaCry, a ransomware contagion that had taken the world by storm just days before.

Relatively few knew it before his arrest, but Hutchins has for many years authored the popular cybersecurity blog MalwareTech. When this fact became more widely known — combined with his hero status for halting WannaCry — a great many MalwareTech readers quickly leapt to his defense to denounce his arrest. They reasoned that the government’s case was built on flimsy and scant evidence, noting that Hutchins has worked tirelessly to expose cybercriminals and their malicious tools. To date, some 226 supporters have donated more than $14,000 to his defense fund.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image: twitter.com/malwaretechblog

At first, I did not believe the charges against Hutchins would hold up under scrutiny. But as I began to dig deeper into the history tied to dozens of hacker forum pseudonyms, email addresses and domains he apparently used over the past decade, a very different picture began to emerge.

In this post, I will attempt to describe and illustrate more than three weeks’ worth of connecting the dots from what appear to be Hutchins’ earliest hacker forum accounts to his real-life identity. The clues suggest that Hutchins began developing and selling malware in his mid-teens — only to later develop a change of heart and earnestly endeavor to leave that part of his life squarely in the rearview mirror.

GH0STHOSTING/IARKEY

I began this investigation with a simple search of domain name registration records at domaintools.com [full disclosure: Domain Tools recently was an advertiser on this site]. A search for “Marcus Hutchins” turned up a half dozen domains registered to a U.K. resident by the same name who supplied the email address “surfallday2day@hotmail.co.uk.”

One of those domains — Gh0sthosting[dot]com (the third character in that domain is a zero) — corresponds to a hosting service that was advertised and sold circa 2009-2010 on Hackforums[dot]net, a massively popular forum overrun with young, impressionable men who desperately wish to be elite coders or hackers (or at least recognized as such by their peers).

The surfallday2day@hotmail.co.uk address tied to Gh0sthosting’s initial domain registration records also was used to register a Skype account named Iarkey that listed its alias as “Marcus.” A Twitter account registered in 2009 under the nickname “Iarkey” points to Gh0sthosting[dot]com.

Gh0sthosting was sold by a Hackforums user who used the same Iarkey nickname, and in 2009 Iarkey told fellow Hackforums users in a sales thread for his business that Gh0sthosting was “mainly for blackhats wanting to phish.” In a separate post just a few days apart from that sales thread, Iarkey responds that he is “only 15” years old, and in another he confirms that his email address is surfallday2day@hotmail.co.uk.

A review of the historic reputation tied to the Gh0sthosting domain suggests that at least some customers took Iarkey up on his offer: Malwaredomainlist.com, for example, shows that around this same time in 2009 Gh0sthosting was observed hosting plenty of malware, including trojan horse programs, phishing pages and malware exploits.

A “reverse WHOIS” search at Domaintools.com shows that Iarkey’s surfallday2day email address was used initially to register several other domains, including uploadwith[dot]us and thecodebases[dot]com.

Shortly after registering Gh0sthosting and other domains tied to his surfallday2day@hotmail.co.uk address, Iarkey evidently thought better of including his real name and email address in his domain name registration records. Thecodebases[dot]com, for example, changed its WHOIS ownership to a “James Green” in the U.K., and switched the email to “herpderpderp2@hotmail.co.uk.”

A reverse WHOIS lookup at domaintools.com for that email address shows it was used to register a Hackforums parody (or phishing?) site called Heckforums[dot]net. The domain records showed this address was tied to a Hackforums clique called “Atthackers.” The records also listed a Michael Chanata from Florida as the owner. We’ll come back to Michael Chanata and Atthackers at the end of this post.

DA LOSER/FLIPERTYJOPKINS

As early as 2009, Iarkey was outed several times on Hackforums as being Marcus Hutchins from the United Kingdom. In most of those instances he makes no effort to deny the association — and in a handful of posts he laments that fellow members felt the need to “dox” him by posting his real address and name in the hacking forum for all to see.

Iarkey, like many other extremely active Hackforums users, changed his nickname on the forum constantly, and two of his early nicknames on Hackforums around 2009 were “Flipertyjopkins” and “Da Loser”.

Hackforums user “Da Loser” is doxed by another member.

Happily, Hackforums has a useful feature that allows anyone willing to take the time to dig through a user’s postings to learn when and if that user was previously tied to another account.

This is especially evident in multi-page Hackforums discussion threads that span many days or weeks: If a user changes his nickname during that time, the forum is set up so that it includes the user’s most recent previous nickname in any replies that quote the original nickname — ostensibly so that users can follow along with who’s who and who said what to whom.

In the screen shot below, for instance, we can see one of Hutchins’ earliest accounts — Da Loser — being quoted under his Flipertyjopkins nickname.

A screen shot showing Hackforums’ tendency to note when users switch between different usernames.

Both the Da Loser and Flipertyjopkins identities on Hackforums referenced the same domains in 2009 as theirs — Gh0sthosting — as well as another domain called “hackblack.co[dot]uk.” Da Loser references the hackblack domain as the place where other Hackforums users can download “the sourcecode of my IE/MSN messenger password stealer (aka M_Stealer).”

In another post, Da Loser brags about how his password stealing program goes undetected by multiple antivirus scanners, pointing to a (now deleted) screenshot at a Photobucket account for a “flipertyjopkins”:

Another screenshot from Da Loser’s postings in June 2009 shows him advertising the Hackblack domain and the Surfallday2day@hotmail.co.uk address:

Hackforums user “Da Loser” advertises his “Hackblack” hosting and points to the surfallday2day email address.

An Internet search for this Hackblack domain reveals a thread on the Web hosting forum MyBB started by a user Flipertyjopkins, who asks other members for help configuring his site, which he lists as http://hackblack.freehost10[dot]com.

A user named Flipertyjopkins asks for help for his domain, hackblack.freehost10[dot]com.

Poking around the Web for these nicknames and domains turned up a YouTube user account named Flipertyjopkins that includes several videos uploaded 7-8 years ago that instruct viewers on how to use various types of password-stealing malware. In one of the videos — titled “Hotmail cracker v1.3” — Flipertyjopkins narrates how to use a piece of malware by the same name to steal passwords from unsuspecting victims.

Approximately two minutes and 48 seconds into the video, we can briefly see an MSN Messenger chat window shown behind the Microsoft Notepad application he is using to narrate the video. The video clearly shows that the MSN Messenger client is logged in with the address “hutchins22@hotmail.com.”

The email address “hutchins22@hotmail.com” can be seen briefly in the background of this video.

To close out the discussion of Flipertyjopkins, I should note that this email address showed up multiple times in the database leak from Hostinger.co.uk, a British Web hosting company that got hacked in 2015. A copy of that database can be found in several places online, and it shows that one Hostinger customer named Marcus used an account under the email address flipertyjopkins@gmail.com.

According to the leaked user database, the password for that account — “emmy009” — also was used to register two other accounts at Hostinger, including the usernames “hacker” (email address: flipertyjopkins@googlemail.com) and “flipertyjopkins” (email: surfallday2day@hotmail.co.uk).

ELEMENT PRODUCTS/GONE WITH THE WIND

Most of the activities and actions that can be attributed to Iarkey/Flipertyjopkins/Da Loser et al. on Hackforums are fairly small-time — and hardly rise to the level of coding a complex banking trojan from scratch and selling it to cybercriminals.

However, multiple threads on Hackforums state that Hutchins around 2011-2012 switched to two new nicknames that corresponded to users who were far more heavily involved in coding and selling complex malicious software: “Element Products,” and later, “Gone With The Wind.”

Hackforums’ nickname preservation feature leaves little doubt that the user Element Products at some point in 2012 changed his nickname to Gone With the Wind. However, for almost a week I could not see any signs of a connection between these two accounts and the ones previously and obviously associated with Hutchins (Flipertyjopkins, Iarkey, etc.).

In the meantime, I endeavored to find out as much as possible about Element Products — a suite of software and services including a keystroke logger, a “stresser” or online attack service, as well as a “no-distribute” malware scanner.

Unlike legitimate scanning services such as Virustotal — which scan malicious software against dozens of antivirus tools and then share the output with all participating antivirus companies — no-distribute scanners are made and marketed to malware authors who wish to see how broadly their malware is detected without tipping off the antivirus firms to a new, more stealthy version of the code.

Indeed, Element Scanner — which was sold in subscription packages starting at $40 per month — scanned all customer malware with some 37 different antivirus tools. But according to posts from Gone With the Wind, the scanner merely resold the services of scan4you[dot]net, a multiscanner that was extremely powerful and popular for several years across a variety of underground cybercrime forums.

According to a story at Bleepingcomputer.com, scan4you disappeared in July 2017, around the same time that two Latvian men were arrested for running an unnamed no-distribute scanner.

[Side note: Element Scanner was later incorporated as the default scanning application of “Blackshades,” a remote access trojan that was extremely popular on Hackforums for several years until its developers and dozens of customers were arrested in an international law enforcement sting in May 2014. Incidentally, as the story linked in the previous sentence explains, the administrator and owner of Hackforums would play an integral role in setting up many of his forum’s users for the Blackshades sting operation.]

According to one thread on Hackforums, Element Products was sold in 2012 to another Hackforums user named “Dal33t.” This was the nickname used by Ammar Zuberi, a young man from Dubai who — according to this January 2017 KrebsOnSecurity story — may have been associated with a group of miscreants on Hackforums that specialized in using botnets to take high-profile Web sites offline. Zuberi could not be immediately reached for comment.

I soon discovered that Element Products was by far the least harmful product that this user sold on Hackforums. In a separate thread in 2012, Element Products announces the availability of a new product he had for sale — dubbed the “Ares Form Grabber” — a program that could be used to surreptitiously steal usernames and passwords from victims.

Element Products/Gone With The Wind also advertised himself on Hackforums as an authorized reseller of the infamous exploit kit known as “Blackhole.” Exploit kits are programs made to be stitched into hacked and malicious Web sites so that when visitors browse to the site with outdated and insecure browser plugins the browser is automatically infected with whatever malware the attacker wishes to foist on the victim.

In addition, Element Products ran a “bot shop,” in which he sold access to bots he claimed to have enslaved through his own personal use of Blackhole:

Gone With The Wind’s “Bot Shop,” which sold access to computers hacked with the help of the Blackhole exploit kit.

A bit more digging showed that the Element Products user on Hackforums co-sold his wares along with another Hackforums user named “Kill4Joy,” who advertised his contact address as kill4joy@live.com.

Ironically, Hackforums was itself hacked in 2012, and a leaked copy of the user database from that hack shows this Kill4Joy user initially registered on the forum in 2011 with the email address rohang93@live.com.

A reverse WHOIS search at domaintools.com shows that email address was used to register several domain names, including contegoprint.info. The registration records for that domain show that it was registered by a Rohan Gupta from Illinois.

I learned that Gupta is now attending graduate school at the University of Illinois at Urbana-Champaign, where he is studying computer engineering. Reached via telephone, Gupta confirmed that he worked with the Hackforums user Element Products six years ago, but said he only handled sales for the Element Scanner product, which he says was completely legal.

“I was associated with Element Scanner which was non-malicious,” Gupta said. “It wasn’t black hat, and I wasn’t associated with the programming, I just assisted with the sales.”

Gupta said his partner and developer of the software went by the name Michael Chanata and communicated with him via a Skype account registered to the email address atthackers@hotmail.com.

Recall that we heard at the beginning of this story that the name Michael Chanata was tied to Heckforums.net, a domain closely connected to the Iarkey nickname on Hackforums. Curious to see if this Michael Chanata character showed up somewhere on Hackforums, I used the forum’s search function to find out.

The following screenshot from a July 2011 Hackforums thread suggests that Michael Chanata was yet another nickname used by Da Loser, a Hackforums account associated with Marcus Hutchins’ early email addresses and Web sites.

Hackforums shows that the user “Da Loser” at the same time used the nickname “Michael Chanata.”

BV1/ORGY

Interesting connections, to be sure, but I wasn’t satisfied with this finding and wanted more conclusive evidence of the supposed link. So I turned to “passive DNS” tools from Farsight Security — which keeps a historic record of which domain names map to which IP addresses.

Using Farsight’s tools, I found that Element Scanner’s various Web sites (elementscanner[dot]com/net/su/ru) were at one point hosted at the Internet address 184.168.88.189 alongside just a handful of other interesting domains, including bigkeshhosting[dot]com and bvnetworks[dot]com.

At first, I didn’t fully recognize the nicknames buried in each of these domains, but a few minutes of searching on Hackforums reminded me that bigkeshhosting[dot]com was a project run by a Hackforums user named “Orgy.”

I originally wrote about Orgy — whose real name is Robert George Danielson — in a 2012 story about a pair of stresser or “booter” (DDoS-for-hire) sites. As noted in that piece, Danielson has had several brushes with the law, including a guilty plea for stealing multiple firearms from the home of a local police chief.

I also learned that the bvnetworks[dot]com domain belonged to Orgy’s good friend and associate on Hackforums — a user who for many years went by the nickname “BV1.” In real life, BV1 is 27-year-old Brendan Johnston, a California man who went to prison in 2014 for his role in selling the Blackshades trojan.

When I discovered the connection to BV1, I searched my inbox for anything related to this nickname. Lo and behold, I found an anonymous tip I’d received through KrebsOnSecurity.com’s contact form in March 2013 which informed me of BV1’s real identity and said he was close friends with Orgy and the Hackforums user Iarkey.

According to this anonymous informant, Iarkey was an administrator of an Internet relay chat (IRC) forum that BV1 and Orgy frequented called irc.voidptr.cz.

“You already know that Orgy is running a new booter, but BV1 claims to have ‘left’ the hacking business because all the information on his family/himself has been leaked on the internet, but that is a lie,” the anonymous tipster wrote. “If you connect to http://irc.voidptr.cz ran by ‘touchme’ aka ‘iarkey’ from hackforums you can usually find both BV1 and Orgy in there.”

TOUCHME/TOUCH MY MALWARE/MAYBE TOUCHME

Until recently, I was unfamiliar with the nickname TouchMe. Naturally, I started digging into Hackforums again. An exhaustive search on the forum shows that TouchMe — and later “Touch Me Maybe” and “Touch My Malware” — were yet other nicknames for the same account.

In a Hackforums post from July 2012, the user Touch Me Maybe pointed to a writeup that he claimed to have authored on his own Web site: touchmymalware.blogspot.com:

The Hackforums user “Touch Me Maybe” seems to refer to his own blog and malware analysis at touchmymalware.blogspot.com, which now redirects to Marcus Hutchins’ blog — Malwaretech.com

If you visit this domain name now, it redirects to Malwaretech.com, which is the same blog that Hutchins was updating for years until his arrest in August.

There are other facts to support a connection between MalwareTech and the IRC forum voidptr.cz: A passive DNS scan for irc.voidptr.cz at Farsight Security shows that at one time the IRC channel was hosted at the Internet address 52.86.95.180 — where it shared space with just one other domain: irc.malwaretech.com.

All of the connections explained in this blog post — and some that weren’t — can be seen in the following mind map that I created with the excellent MindNode Pro for Mac.

A mind map I created to keep track of the myriad data points mentioned in this story. Click the image to enlarge.

Following Hutchins’ arrest, multiple Hackforums members posted what they suspected about his various presences on the forum. In one post from October 2011, Hackforums founder and administrator Jesse “Omniscient” LaBrocca said Iarkey had hundreds of accounts on Hackforums.

In one of the longest threads on Hackforums about Hutchins’ arrest there are several postings from a user named “Previously Known As” who self-identifies in that post and multiple related threads as BV1. In one such post, dated Aug. 7, 2017, BV1 observes that Hutchins failed to successfully separate his online selves from his real life identity.

Brendan “BV1” Johnston says he worried his old friend’s operational security mistakes would one day catch up with him.

“He definitely thought he separated TouchMe/MWT from iarkey/Element,” said BV1. “People warned him, myself included, that people can still connect MWT to iarkey, but he never seemed to care too much. He has so many accounts on HF at this point, I doubt someone will be able to connect all the dots. It sucks that some of the worst accounts have been traced back to him already. He ran a hosting company and a Minecraft server with Orgy and I.”

In a brief interview with KrebsOnSecurity, Brendan “BV1” Johnston said Hutchins was a good friend. Johnston said Hutchins had — like many others who later segued into jobs in the information security industry — initially dabbled in the dark side. But Johnston said his old friend sincerely tried to turn things around in late 2012 — when Gone With the Wind sold most of his coding projects to other Hackforums members and began focusing on blogging about poorly-written malware.

“I feel like I know Marcus better than most people do online, and when I heard about the accusations I was completely shocked,” Johnston said. “He tried for such a long time to steer me down a straight and narrow path that seeing this tied to him didn’t make sense to me at all.”

Let me be clear: I have no information to support the claim that Hutchins authored or sold the Kronos banking trojan. According to the government, Hutchins did so in 2014 on the Dark Web marketplace AlphaBay — which was taken down in July 2017 as part of a coordinated, global law enforcement raid on AlphaBay sellers and buyers alike.

However, the findings in this report suggest that for several years Hutchins enjoyed a fairly successful stint coding malicious software for others, said Nicholas Weaver, a security researcher at the International Computer Science Institute and a lecturer at UC Berkeley.

“It appears like Mr. Hutchins had a significant and prosperous blackhat career that he at least mostly gave up in 2013,” Weaver said. “Which might have been forgotten if it wasn’t for the involuntary British press coverage on WannaCry raising his profile and making him out as a ‘hero’.”

Weaver continued:

“I can easily imagine the Feds taking the opportunity to use a penny-ante charge against a known ‘bad guy’ when they can’t charge for more significant crimes,” he said. “But the Feds would have done far less collateral damage if they actually provided a criminal complaint with these sorts of detail rather than a perfunctory indictment.”

Hutchins did not try to hide the fact that he has written and published unique malware strains, which in the United States at least is a form of protected speech.

In December 2014, for example, Hutchins posted to his Github page the source code to TinyXPB, malware he claims to have written that is designed to seize control of a computer so that the malware loads before the operating system can even boot up.

While the publicly available documents related to his case are light on details, it seems clear that prosecutors can make a case against those who attempt to sell malware to cybercriminals — such as on hacker forums like AlphaBay — if they can demonstrate the accused had knowledge and intent that the malware would be used to commit a crime.

The Justice Department’s indictment against Hutchins suggests that the prosecution is relying heavily on the word of an unnamed co-conspirator who became a confidential informant for the government. Update, 9:08 a.m.: Several readers on Twitter disagreed with the previous statement, noting that U.S. prosecutors have said the other unnamed suspect in the Hutchins indictment is still at large.

Original story:

According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:

  • Statements made by Hutchins after he was arrested.
  • A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
  • 150 pages of Jabber chats between the defendant and an individual.
  • Business records from Apple, Google and Yahoo.
  • Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
  • Three to four samples of malware.
  • A search warrant executed on a third party, which may contain some privileged information.

Hutchins declined to comment for this story, citing his ongoing prosecution. He has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications. FBI officials have not yet responded to requests for comment.

Worse Than FailureCodeSOD: Gotta Get 'Em All

LINQ brings functional programming and loads of syntactic sugar to .NET languages. It’s a nice feature, although as James points out, it helps if your fellow developers have even the slightest clue about what they’re doing.

// some validation checking
var retrieveDocIdList = this.storedDocumentManager.GetAllForClientNotRetrieved(client.Id).Select(x => x.Id.ToString(CultureInfo.InvariantCulture)).ToList();

retrieveDocIdList.ForEach(id => {
    var storedDoc = this.storedDocumentManager.Get(int.Parse(id));
// do some other stuff with the doc
});

James writes:

The code snippet is somewhat paraphrased because the actual code is poorly formatted and full of junk, but this is the main point.
It seems to be a requirement that previous developers leave weird code behind with no documentation or comments explaining what they were thinking at the time.

Well, first, “poorly formatted and full of junk” is our stock-in-trade, but we do appreciate the focus on the main WTF. Let’s see if we can piece together what the developers were thinking.

If you’ve read an article here before, your eyes almost certainly will catch the x.Id.ToString and the int.Parse(id) calls. Right off the bat, you know something’s fishy. But let’s walk through it.

this.storedDocumentManager.GetAllForClientNotRetrieved(client.Id) returns a list of all the documents that have not been loaded from the database. Now, by default, this is equivalent to a SELECT *, so instead of getting all that data, they pick off just the IDs as a string in the Select call.

Now, they have a list of IDs of documents that they don’t have loaded. So now, they can take each ID, and in a ForEach call… fetch the entire document from the database.

Well, that’s what it does, but what were they thinking? We may never know, but at a guess, someone knew that “Select star bad, select specific fields”, and then they just applied that knowledge without any further thought. The other possibility is that the team of developers wrote individual lines without talking to anyone else, and then just kinda mashed it together without understanding how it worked.

James replaced the entire thing:

foreach (var storedDoc in this.storedDocumentManager.GetAllForClientNotRetrieved(client.Id)) {
    //do some other stuff with the doc
}

,

Rondam RamblingsSupporting Robert E. Lee is no longer an acceptable position

I am a German Jew, a descendant of holocaust survivors.  I am also a Southern boy, having spent my formative years from age 5 through 24 in Kentucky, Tennessee, and Virginia.  I tell you this to provide some perspective on what I am about to say: Robert E. Lee had many fine qualities.  So did Adolf Hitler. Bear with me. In the aftermath of World War I, the Allies were determined that Germany

CryptogramNew Techniques in Fake Reviews

Research paper: "Automated Crowdturfing Attacks and Defenses in Online Review Systems."

Abstract: Malicious crowdsourcing forums are gaining traction as sources of spreading misinformation online, but are limited by the costs of hiring and managing human workers. In this paper, we identify a new class of attacks that leverage deep learning language models (Recurrent Neural Networks or RNNs) to automate the generation of fake online reviews for products and services. Not only are these attacks cheap and therefore more scalable, but they can control rate of content output to eliminate the signature burstiness that makes crowdsourced campaigns easy to detect.

Using Yelp reviews as an example platform, we show how a two phased review generation and customization attack can produce reviews that are indistinguishable by state-of-the-art statistical detectors. We conduct a survey-based user study to show these reviews not only evade human detection, but also score high on "usefulness" metrics by users. Finally, we develop novel automated defenses against these attacks, by leveraging the lossy transformation introduced by the RNN training and generation cycle. We consider countermeasures against our mechanisms, show that they produce unattractive cost-benefit tradeoffs for attackers, and that they can be further curtailed by simple constraints imposed by online service providers.

Worse Than FailureClassic WTF: #include "pascal.h"

It's Labor Day in the US, where to honor workers, some people get a day off, but retail stores are open with loads of sales. We're reaching back to the old days of 2004 for this one. -- Remy

Ludwig Von Anon sent in some code from the UI component of a large, multi-platform system he has the pleasure of working on. At first glance, the code didn't seem all too bad ...

procedure SelectFontIntoDC(Integer a) begin
 declare fonthandle fh;
 if (gRedraw is not false) then begin
   fh = CreateFontIndirect(gDC);
   SelectObject(gDC, fh);
   DeleteObject(fh);
 end;
end;

Seems fairly normal, right? Certainly nothing that meets our ... standards. Of course, when you factor in the name of the codefile (which ends in ".c") and the header included throughout the entire project ("pascal.h"), I think it becomes pretty apparent that we're entering Whiskey Tango Foxtrot country:

#define procedure void
#define then
#define is
#define not !=
#define begin {
#define end }

Yeeeeeeeee Haw!  Sorry, just can't get enough of Mr. Burleson.


Planet Linux AustraliaTim Serong: Understanding BlueStore, Ceph’s New Storage Backend

On June 1, 2017 I presented Understanding BlueStore, Ceph’s New Storage Backend at OpenStack Australia Day Melbourne. As the video is up (and Luminous is out!), I thought I’d take the opportunity to share it, and write up the questions I was asked at the end.

First, here’s the video:

The bit at the start where the audio cut out was me asking “Who’s familiar with Ceph?” At this point, most of the 70-odd people in the room put their hands up. I continued with “OK, so for the two people who aren’t…” then went into the introduction.

After the talk we had a Q&A session, which I’ve paraphrased and generally cleaned up here.

With BlueStore, can you still easily look at the objects like you can through the filesystem when you’re using FileStore?

There’s not a regular filesystem anymore, so you can’t just browse through it. However you can use `ceph-objectstore-tool` to “mount” an offline OSD’s data via FUSE and poke around that way. Some more information about this can be found in Sage Weil’s recent blog post: New in Luminous: BlueStore.

Do you have real life experience with BlueStore for how IOPS performance scales?

We (SUSE) haven’t released performance numbers yet, so I will instead refer you to Sage Weil’s slides from Vault 2017, and Allan Samuel’s slides from SCALE 15x, which together include a variety of performance graphs for different IO patterns and sizes. Also, you can expect to see more about this on the Ceph blog in the coming weeks.

What kind of stress testing has been done for corruption in BlueStore?

It’s well understood by everybody that it’s sort of important to stress test these things and that people really do care if their data goes away. Ceph has a huge battery of integration tests, various of which are run on a regular basis in the upstream labs against Ceph’s master and stable branches, others of which are run less frequently as needed. The various downstreams all also run independent testing and QA.

Wouldn’t it have made sense to try to enhance existing POSIX filesystems such as XFS, to make them do what Ceph needs?

Long answer: POSIX filesystems still need to provide POSIX semantics. Changing the way things work (or adding extensions to do what Ceph needs) in, say, XFS, assuming it’s possible at all, would be a big, long, scary, probably painful project.

Short answer: it’s really a different use case; better to build a storage engine that fits the use case, than shoehorn in one that doesn’t.

Best answer: go read New in Luminous: BlueStore ;-)

,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 9

OpenSTEM’s ® Understanding Our World® Units are designed to cover 9 weeks of the term, because we understand that life happens. Sports carnivals, excursions and other special events are also necessary parts of the school year and even if the calendar runs according to plan, having a little bit of breathing space at the end of […]

Don MartiJavaScript and not kicking puppies

(Updated 4 Sep 2017: add screenshot and how to see the warning.)

Advice from yan, on Twitter:

I decided not to do that for this site.

Yes, user tracking is creepy, and yes, collecting user information without permission is wrong. But read on for what could be a better approach for sites that can make a bigger difference.

First of all, Twitter is so far behind in their attempts to do surveillance marketing that they're more funny and heartening than ominous. If getting targeted by one of the big players is like getting tracked down by a pack of hunting dogs, then Twitter targeting is like watching a puppy chew on your sock. Twitter has me in their database as...

  • Owner of eight luxury cars and a motorcycle.

  • Medical doctor advising patients about eating High Fructose Corn Syrup.

  • Owner of prime urban real estate looking for financing to build a hotel.

  • Decision-maker for a city water system, looking to read up on the pros and cons of cast iron and concrete pipes.

  • Active in-market car shopper, making all decisions based on superficial shit like whether the car has Beats® brand speakers in the doors. (Hey, where am I supposed to park car number 9?)

So if Twitter is the minor leagues of creepy, and they probably won't be something we have to worry about for long anyway, maybe we can think about whether there's anything that sites can do about riskier kinds of tracking. Getting a user protected from being tracked by one Tweet is a start. But helping users get started with client-side privacy tools that protect from Twitter tracking everywhere can help with not just Twitter tracking, but with the serious trackers that show up in other places.

Blocking Twitter tracking: like kicking a puppy?

Funny wrong Twitter ad targeting is one of my reliable Internet amusements for the day. But that's not why I'm not especially concerned with tagging quoted Tweets. Just doing that doesn't protect this site's visitors from retargeting schemes on other sites.

And every time someone clicks on a retargeted ad from a local business on a social site (probably Facebook, since more people spend more time there) then that's 65 cents or whatever of marketing money that could have gone to local news, bus benches, Little League, or some other sustainable, signal-carrying marketing project. (That's not even counting the medium to heavy treason angle that makes me really uncomfortable about seeing money move in Facebook's direction.)

So, instead of messing with quoted Tweet tagging, I set up this script:

warn3p.js

This will load the Aloodo third-party tracking detector, and, if the browser shows up as easily trackable from site to site, switch out the page header to nag the user.
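
For illustration, the behavior described above can be sketched in a few lines of JavaScript. To be clear, this is a minimal sketch of the idea, not the actual contents of warn3p.js: the detector script URL and the onTrackable() callback are assumptions on my part, and Aloodo's real interface may differ.

(function () {
  // Load the Aloodo detector script (URL assumed for illustration).
  var s = document.createElement('script');
  s.src = 'https://ald.aloodo.com/ald.js';
  s.onload = function () {
    // Assumed callback: fires only if this browser has already shown
    // itself to be trackable from site to site.
    if (window.ald && typeof window.ald.onTrackable === 'function') {
      window.ald.onTrackable(function () {
        var header = document.querySelector('h1');
        if (header) {
          header.textContent = 'Your browser is easy to track. ' +
            'Get protected before reading this.';
        }
      });
    }
  };
  document.head.appendChild(s);
}());

The nice property of this approach is that protected visitors never see the nag; only browsers that have already demonstrated they are trackable get told about it.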

[Screenshot: the tracking warning shown in the page header]

(If you are viewing this site from an unprotected browser and still not seeing the warning, it means that your browser has not yet visited enough domains with the Aloodo script to detect that you're trackable. Take a tracking protection test to expose your browser to more fake tracking, then try again.)

If the other side wants it hidden, then reveal it

Surveillance marketers want tracking to happen behind the scenes, so make it obvious. If you have a browser or privacy tool that you want to recommend, it's easy to put in the link. Every retargeted ad impression that's prevented from happening is more marketing money to pay for ad-sponsored resources that users really want. I know I can't get all the users of this site perfectly protected from all surveillance marketing everywhere, but hey, 65 cents is 65 cents.

Bonus tweet

Bob Hoffman's new book is out! Go click on this quoted Tweet, and do what it says.

Good points here: you don't need to be Magickal Palo Alto Bros to get people to spend more time on your site. USA Today’s Facebook-like mobile site increased time spent per article by 75 percent

The Dumb Fact of Google Money

Join Mozilla and Stanford’s open design sprint for an accessible web

Just Following Orders

Headless mode in Firefox

Hard Drive Stats for Q2 2017

The Time When Google Got Forbes to Pull a Published Story

Disabling Intel ME 11 via undocumented mode

Despite Disavowals, Leading Tech Companies Help Extremist Sites Monetize Hate

Trump Damaged Democracy, Silicon Valley Will Finish It Off

What should you think about when using Facebook?

Ad buyers blast Facebook Audience Network for placing ads on Breitbart

Ice-cold Kaspersky shows the industry how to handle patent trolls

How the GDPR will disrupt Google and Facebook

Rural America Is Building Its Own Internet Because No One Else Will

Younger adults more likely than their elders to prefer reading news

,

Harald WeltePurism Librem 5 campaign

There's a new project currently undergoing crowd funding that might be of interest to the former Openmoko community: The Purism Librem 5 campaign.

Similar to Openmoko a decade ago, they are aiming to build a FOSS based smartphone built on GNU/Linux without any proprietary drivers/blobs on the application processor, from bootloader to userspace.

Furthermore (just like Openmoko) the baseband processor is fully isolated, with no shared memory and with the Linux-running application processor being in full control.

They go beyond what we wanted to do at Openmoko in offering hardware kill switches for camera/phone/baseband/bluetooth. During Openmoko days we assumed it is sufficient to simply control all those bits from the trusted Linux domain, but of course once that might be compromised, a physical kill switch provides a completely different level of security.

I wish them all the best, and hope they can leave a better track record than Openmoko. Sure, we sold some thousands of phones, but the company quickly died, and the state of software was far from end-user-ready. I think the primary obstacles/complexities are verification of the hardware design as well as the software stack all the way up to the UI.

The budget of ~ 1.5 million seems extremely tight from my point of view, but then I have no information about how much Puri.sm is able to invest from other sources outside of the campaign.

If you're a FOSS developer with a strong interest in a Free/Open privacy-first smartphone, please note that they have several job openings, from Kernel Developer to OS Developer to UI Developer. I'd love to see some talents at work in that area.

It's a bit of a pity that almost all of the actual technical details are unspecified at this point (except RAM/flash/main-cpu). No details on the cellular modem/chipset used, no details on the camera, neither on the bluetooth chipset, wifi chipset, etc. This might be an indication of the early stage of their planning. I would have expected those questions to be ironed out before looking for funding - but then, it's their campaign and they can run it as they see fit!

I for my part have just put in a pledge for one phone. Let's see what will come of it. In case you feel motivated by this post to join in: Please keep in mind that any crowdfunding campaign bears significant financial risks. So please make sure you made up your mind and don't blame my blog post for luring you into spending money :)

Sociological ImagesFrom our archives: Work in America

Monday is Labor Day in the U.S. Though to many it is a last long weekend for recreation and shopping before the symbolic end of summer, the federal holiday, officially established in 1894, celebrates the contributions of labor.

Here are a few dozen SocImages posts on a range of issues related to workers, from the history of the labor movement, to current workplace conditions, to the impacts of the changing economy on workers’ pay:

The Social Construction of Work

Work in Popular Culture

Unemployment, Underemployment, and the “Class War”

Unions and Unionization

Economic Change, Globalization, and the Great Recession

Work and race, ethnicity, religion, and immigration

Gender and Work

The U.S. in International Perspective

Academia

Just for Fun

Bonus!

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

,

Harald WelteFirst actual XMOS / XCORE project

For many years I've been fascinated by the XMOS XCore architecture. It offers a surprisingly refreshing alternative to virtually any other classic microcontroller architecture out there. However, despite reading a lot about it years ago, being fascinated by it, and even giving a short informal presentation about it once, I've so far never used it. Too much "real" work imposes a high barrier to spending time learning about new architectures, languages, toolchains and the like.

Introduction into XCore

Rather than having lots of fixed-purpose built-in "hard core" peripherals for interfaces such as SPI, I2C, I2S, etc. the XCore controllers have a combination of

  • I/O ports for 1/4/8/16/32 bit wide signals, with SERDES, FIFO, hardware strobe generation, etc
  • Clock blocks for using/dividing internal or external clocks
  • hardware multi-threading that presents 8 logical threads on each core
  • xCONNECT links that can be used to connect multiple processors over 2 or 5 wires per direction
  • channels as a means of communication (similar to sockets) between threads, whether on the same xCORE or a remote core via xCONNECT
  • an extended C (xC) programming language to make use of parallelism, channels and the I/O ports

In spirit, it is like a 21st century implementation of some of the concepts established first with Transputers.

My main interest in xMOS has been the flexibility that you get in implementing not-so-standard electronics interfaces. For regular I2C, UART, SPI, etc. there is of course no such need. But every so often one encounters some interface that's very rarely found (like the output of an E1/T1 Line Interface Unit).

Also, quite often I run into use cases where it's simply impossible to find a microcontroller with a sufficient number of the related peripherals built-in. Try finding a microcontroller with 8 UARTs, for example. Or one with four different PCM/I2S interfaces, which all can run in different clock domains.

The existing options of solving such problems basically boil down to either implementing it in hard-wired logic (unrealistic, complex, expensive) or going to programmable logic with CPLD or FPGAs. While the latter is certainly also quite interesting, the learning curve is steep, the tools anything but easy to use and the synthesising time (and thus development cycles) long. Furthermore, your board design will be more complex as you have that FPGA/CPLD and a microcontroller, need to interface the two, etc (yes, in high-end use cases there's the Zynq, but I'm thinking of several orders of magnitude less complex designs).

Of course one can also take a "pure software" approach and go for high-speed bit-banging. There are some ARM SoCs that can toggle their pins. People have reported rates like 14 MHz being possible on a Raspberry Pi. However, when running a general-purpose OS in parallel, this kind of speed is hard to do reliably over long term, and the related software implementations are going to be anything but nice to write.

So the XCore is looking like a nice alternative for a lot of those use cases. Where you want a microcontroller with more programmability in terms of its I/O capabilities, but not go as far as to go full-on with FPGA/CPLD development in Verilog or VHDL.

My current use case

My current use case is to implement a board that can accept four independent PCM inputs (all in slave mode, i.e. clock provided by external master) and present them via USB to a host PC. The final goal is to have a board that can be combined with the sysmoQMOD and which can interface the PCM audio of four cellular modems concurrently.

While XMOS is quite strong in the Audio field and you can find existing examples and app notes for I2S and S/PDIF, I couldn't find any existing code for a PCM slave of the given requirements (short frame sync, 8kHz sample rate, 16bit samples, 2.048 MHz bit clock, MSB first).
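
(Quick arithmetic on those requirements: a 2.048 MHz bit clock divided by the 8 kHz frame rate gives 256 bit slots per frame, of which a single 16-bit sample occupies just 16; the remaining slots can carry other channels or simply stay idle.)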

I wanted to get a feeling how well one can implement the related PCM slave. In order to test the slave, I decided to develop the matching PCM master and run the two against each other. Despite having never written any code for XMOS before, nor having used any of the toolchain, I was able to implement the PCM master and PCM slave within something like ~6 hours, including simulation and verification. Sure, one can certainly do that in much less time, but only once you're familiar with the tools, programming environment, language, etc. I think it's not bad.

The biggest problem was that the clock phase for a clocked output port cannot be configured, i.e. the XCore insists on always clocking out a new bit at the falling edge, while my use case of course required the opposite: Clocking out new signals at the rising edge. I had to use a second clock block to generate the inverted clock in order to achieve that goal.

Beyond that 4xPCM use case, I also have other ideas like finally putting the osmo-e1-xcvr to use by combining it with an XMOS device to build a portable E1-to-USB adapter. I have no clue if and when I'll find time for that, but if somebody wants to join in: Let me know!

The good parts

Documentation excellent

I found the various pieces of documentation extremely useful and very well written.

Fast progress

I was able to make fast progress in solving the first task using the XMOS / Xcore approach.

Soft Cores developed in public, with commit log

You can find plenty of soft cores that XMOS has been developing on github at https://github.com/xcore, including the full commit history.

This type of development is a big improvement over what most vendors of smaller microcontrollers like Atmel are doing (infrequent tar-ball code-drops without commit history). And in the case of the classic uC vendors, we're talking about drivers only. In the XMOS case it's about the entire logic of the peripheral!

You can for example see that for their I2C core, the very active commit history goes back to January 2011.

xSIM simulation extremely helpful

The xTIMEcomposer IDE (based on Eclipse) contains extensive tracing support and an extensible near cycle accurate simulator (xSIM). I've implemented a PCM master and PCM slave in xC and was able to simulate the program while looking at the waveforms of the logic signals between those two.

The bad parts

Unfortunately, my extremely enthusiastic reception of XMOS has suffered quite a bit over time. Let me explain why:

Hard to get XCore chips

While the product portfolio on the xMOS website looks extremely comprehensive, the vast majority of the parts are not available from stock at distributors. You won't even get samples, and lead times are 12 weeks (!). If you check at digikey, they have listed a total of 302 different XMOS controllers, but only 35 of them are in stock. USB capable are 15. With other distributors like Farnell it's even worse.

I've seen this with other semiconductor vendors before, but never to such a large extent. Sure, some packages/configurations are not standard products, but having only 11% of the portfolio actually available is pretty bad.

In such situations, where it's difficult to convince distributors to stock parts, it would be a good idea for XMOS to stock parts themselves and provide samples / low quantities directly. Not everyone is able to order large trays and/or capable to wait 12 weeks, especially during the R&D phase of a board.

Extremely limited number of single-bit ports

In the smaller / lower pin-count parts, like the XU[F]-208 series in QFN/LQFP-64, the number of usable, exposed single-bit ports is ridiculously low. Out of the total 33 I/O lines available, only 7 can be used as single-bit I/O ports. All other lines can only be used for 4-, 8-, or 16-bit ports. If you're dealing primarily with serial interfaces like I2C, SPI, I2S, UART/USART and the like, those parallel ports are of no use, and you have to go for a mechanically much larger part (like XU[F]-216 in TQFP-128) in order to have a decent number of single-bit ports exposed. Those parts also come with twice the number of cores, memory, etc- which you don't need for slow-speed serial interfaces...

Change to a non-FOSS License

XMOS deserved a lot of praise for releasing all their soft IP cores as Free / Open Source Software on github at https://github.com/xcore. The License has basically been a 3-clause BSD license. This was a good move, as it meant that anyone could create derivative versions, whether proprietary or FOSS, and there would be virtually no license incompatibilities with whatever code people wanted to write.

However, to my very big disappointment, more recently XMOS seems to have changed their policy on this. New soft cores (released at https://github.com/xmos as opposed to the old https://github.com/xcore) are made available under a non-free license. This license is nothing like the BSD 3-clause license or any other Free Software or Open Source license. It restricts the license to use the code together with an XMOS product, requires the user to contribute fixes back to XMOS and contains references to import and export control. This license is incompatible with probably any FOSS license in existence, making it impossible to write FOSS code on XMOS while using any of the new soft cores released by XMOS.

But even beyond that license change, not even all code is provided in source code format anymore. The new USB library (lib_usb) is provided as binary-only library, for example.

If you know anyone at XMOS management or XMOS legal with whom I could raise this topic of license change when transitioning from older sc_* software to later lib_* code, I would appreciate this a lot.

Proprietary Compiler

While a lot of the toolchain and IDE is based on open source (Eclipse, LLVM, ...), the actual xC compiler is proprietary.

Harald WelteThe sad state of voice support in cellular modems

Cellular modems have existed for decades and come in many shapes and kinds. They contain the cellular baseband processor, RF frontend, protocol stack software and anything else required to communicate with a cellular network. Basically a phone without display or input.

During the last decade or so, the vast majority of cellular modems come as LGA modules, i.e. a small PCB with all components on the top side (and a shielding can), which has contact pads on the bottom so you can solder it onto your mainboard. You can obtain them from vendors such as Sierra Wireless, u-blox, Quectel, ZTE, Huawei, Telit, Gemalto, and many others.

In most cases, the vendors now also solder those modules to small adapter boards to offer the same product in mPCIe form-factor. Other modems are directly manufactured in mPCIe or NGFF aka m.2 form-factor.

As long as those modems were still 2G / 2.5G / 2.75G, the main interconnection with the host (often some embedded system) was a serial UART. The audio input/output for voice calls was made available as analog signals, ready to connect a microphone and speaker, as that's what the cellular chipsets were designed for in smartphones. In the Openmoko phones we also interfaced the audio of the cellular modem in analog form, for exactly that reason.

From 3G onwards, the primary interface towards the host is now USB, with the modem acting as a USB device. If your laptop contains a cellular modem, you will see it show up in the output of lsusb.

From that point onwards, it would have made a lot of sense to simply expose the audio via USB as well: offer a multi-function USB device that has both the virtual serial ports for AT commands and the network device for IP, and add a USB Audio device to it. It would simply show up as a "USB sound card" to the host, with all standard drivers working as expected. Sadly, nobody seems to have implemented this, at least not in a supported production version of their product.
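
To see how little host-side effort that would require, here is a minimal sketch assuming such a modem existed and enumerated as a standard USB Audio Class device under the made-up ALSA card name "Modem". Everything here is hypothetical illustration, not any vendor's actual product; capturing the call audio would be nothing but ordinary ALSA code:

    /* Sketch: capture voice-call audio from a hypothetical cellular modem
     * that enumerates as a standard USB Audio device. The ALSA device
     * name "hw:Modem" is a made-up assumption.
     * Build with: gcc capture.c -lasound */
    #include <alsa/asoundlib.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        snd_pcm_t *pcm;
        if (snd_pcm_open(&pcm, "hw:Modem", SND_PCM_STREAM_CAPTURE, 0) < 0) {
            fprintf(stderr, "no such capture device\n");
            return 1;
        }
        /* 8 kHz, 16-bit signed LE, mono: typical narrowband voice */
        if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                               SND_PCM_ACCESS_RW_INTERLEAVED,
                               1, 8000, 1, 500000) < 0) {
            fprintf(stderr, "cannot set parameters\n");
            return 1;
        }
        int16_t buf[160];   /* 20 ms of samples at 8 kHz */
        for (;;) {
            snd_pcm_sframes_t n = snd_pcm_readi(pcm, buf, 160);
            if (n < 0)
                break;
            fwrite(buf, sizeof(int16_t), n, stdout); /* raw PCM to stdout */
        }
        snd_pcm_close(pcm);
        return 0;
    }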

Instead, what some modem vendors have implemented as an ugly hack is the transport of 8 kHz 16-bit PCM samples over one of the UARTs. See for example the Quectel UC-20 or the Simcom SIM7100, which implement such a method.
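
As a rough illustration of that hack, here is a minimal POSIX sketch. It assumes the modem has already been switched into its PCM-over-UART mode via the vendor-specific AT command (deliberately left out here) and then streams raw little-endian 16-bit samples on a dedicated serial port; the device path and baud rate are likewise assumptions:

    /* Sketch: read raw 8 kHz 16-bit PCM that a modem streams over a UART.
     * /dev/ttyUSB2 and the baud rate are assumptions for illustration. */
    #define _DEFAULT_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/ttyUSB2", O_RDONLY | O_NOCTTY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        struct termios t;
        tcgetattr(fd, &t);
        cfmakeraw(&t);              /* raw mode: no line-discipline mangling */
        cfsetispeed(&t, B921600);   /* must comfortably exceed 16000 bytes/s */
        tcsetattr(fd, TCSANOW, &t);

        int16_t buf[160];           /* 20 ms of 8 kHz mono samples */
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout); /* pipe into e.g. aplay -r 8000 -f S16_LE */
        close(fd);
        return 0;
    }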

Most of the others ignore software access to the audio stream entirely. One wonders why that is; from a software and systems architecture perspective it would be super easy. Instead, what most vendors do is expose a digital PCM interface. This is suboptimal in many ways:

  • there is no mPCIe standard specifying on which pins PCM should be exposed
  • no standard product (like laptop, router, ...) with an mPCIe slot will have anything connected to those PCM pins

Furthermore, each manufacturer / modem seems to support a different dialect of the PCM interface (see the sketch after this list), differing in

  • voltage (almost all of them are 1.8V, while mPCIe signals normally are 3.3V logic level)
  • master/slave (almost all of them insist on being a clock master)
  • sample format (alaw/ulaw/linear)
  • clock/bit rate (mostly 2.048 MHz, but can be as low as 128 kHz)
  • frame sync (mostly short frame sync that ends before the first bit of the sample)
  • endianness (mostly MSB first)
  • clock phase (mostly change signals at rising edge; sample at falling edge)
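
To make the combinatorics tangible, here is the sketch referenced above: a small configuration structure capturing the knobs a host-side PCM interface would have to expose to cover all those dialects. All names are my own invention, not any vendor API:

    /* Sketch: parameters a host-side PCM interface would have to expose
     * to cope with the per-vendor dialects listed above. */
    #include <stdbool.h>
    #include <stdint.h>

    enum pcm_sample_fmt { PCM_FMT_ALAW, PCM_FMT_ULAW, PCM_FMT_LINEAR16 };

    struct pcm_dialect {
        uint32_t io_voltage_mv;         /* almost always 1800, not 3300 */
        bool     modem_is_clock_master; /* almost always true */
        enum pcm_sample_fmt fmt;
        uint32_t clock_hz;              /* 128000 ... 2048000 */
        bool     short_frame_sync;      /* sync ends before first data bit */
        bool     msb_first;
        bool     drive_rising_sample_falling; /* clock phase */
    };

    /* The "mostly" case from the list above, as one concrete example: */
    static const struct pcm_dialect typical_modem = {
        .io_voltage_mv = 1800,
        .modem_is_clock_master = true,
        .fmt = PCM_FMT_LINEAR16,
        .clock_hz = 2048000,
        .short_frame_sync = true,
        .msb_first = true,
        .drive_rising_sample_falling = true,
    };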

It's a real nightmare when it could be so simple. If they implemented USB Audio, you could plug a cellular modem into any board with an mPCIe slot and it would simply work. As they don't, you need a specially designed mainboard that implements exactly the specific PCM dialect/version of the given modem.

By the way, the most "amazing" vendor seems to be u-blox. Their modems support PCM audio, but only in the solder-down version: they simply didn't route those signals to the mPCIe connector, making audio impossible to use with a connectorized modem. How inconvenient.

Summary

If you want to access the audio signals of a cellular modem from software, then you either

  • have standard hardware, pick one very specific modem model, and hope it remains available for the lifetime of your application, or
  • build your own hardware implementing a PCM slave interface, and then pick and choose your cellular modem

On the Osmocom mpcie-breakout board and the sysmocom QMOD board, we have exposed the PCM-related pins on 2.54mm headers to allow a separate board to pick up that PCM and offer it to the host system. However, no such board has been developed so far.