Planet Russell


Cryptogram: Modifying a Tesla to Become a Surveillance Platform

From DefCon:

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car's built-in cameras -- the same dash and rearview cameras providing a 360-degree view used for Tesla's Autopilot and Sentry features -- into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically put an alert on the Tesla's display and the user's phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.

Worse Than Failure: Keeping Busy

[Image: a Djungarian hamster running on a wheel]

In 1979, Argle was 18, happy to be working at a large firm specializing in aerospace equipment. There was plenty of opportunity to work with interesting technology and learn from dozens of more senior programmers—well, usually. But then came the day when Argle's boss summoned him to his cube for something rather different.

"This is a listing of the code we had prior to the last review," the boss said, pointing to a stack of printed Fortran code that was at least 6 inches thick. "This is what we have now." He gestured to a second printout that was slightly thicker. "I need you to read through this code and, in the old code, mark lines with 'WAS' where there was a change and 'IS' in the new listing to indicate what it was changed to."

Argle frowned at the daunting paper mountains. "I'm sorry, but, why do you need this exactly?"

"It's for FAA compliance," the boss said, waving his hand toward his cubicle's threshold. "Thanks!"

Weighed down with piles of code, Argle returned to his cube with a similarly sinking heart. At this place and time, he'd never even heard of UNIX, and his coworkers weren't likely to know anything about it, either. Their development computer had a TMS9900 CPU, the same one in the TI-99 home computer, and it ran its own proprietary OS from Texas Instruments. There was no diff command or anything like it. The closest analog was a file comparison program, but it only reported whether two files were identical or not.

Back at his cube, Argle stared at the printouts for a while, dreading the weeks of manual, mind-numbing dullness that loomed ahead of him. There was no way he'd avoid errors, no matter how careful he was. There was no way he'd complete this to every stakeholder's satisfaction. He was staring imminent failure in the face.

Was there a better way? If there weren't already a program for this kind of thing, could he write his own?

Argle had never heard of the Hunt–McIlroy algorithm, but he thought he might be able to do line comparisons between files, then hunt ahead in one file or the other until the two synched up again. He asked one of the senior programmers for the files' source code. Within one afternoon of tinkering, he'd written his very own diff program.
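
For the curious, here is a rough sketch of that idea in modern Python rather than 1979 Fortran; the WAS/IS markers and the greedy hunt-ahead are my own illustration, and a real diff (Hunt–McIlroy and its descendants) uses a proper longest-common-subsequence computation instead.

# A rough sketch (not the 1979 TMS9900 environment) of the compare-and-resync
# idea: walk both listings in lockstep, and when they disagree, hunt ahead for
# the next line the two files have in common.
def was_is(old_lines, new_lines, window=50):
    i = j = 0
    while i < len(old_lines) and j < len(new_lines):
        if old_lines[i] == new_lines[j]:
            i, j = i + 1, j + 1
            continue
        # Greedy hunt-ahead: find a nearby pair of matching lines to resync on.
        sync = next(
            ((di, dj)
             for di in range(window) for dj in range(window)
             if i + di < len(old_lines) and j + dj < len(new_lines)
             and old_lines[i + di] == new_lines[j + dj]),
            (len(old_lines) - i, len(new_lines) - j),
        )
        for line in old_lines[i:i + sync[0]]:
            yield "WAS -->", line
        for line in new_lines[j:j + sync[1]]:
            yield "IS  -->", line
        i, j = i + sync[0], j + sync[1]
    for line in old_lines[i:]:
        yield "WAS -->", line
    for line in new_lines[j:]:
        yield "IS  -->", line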

The next morning, Argle handed his boss two newly printed stacks of code, with "WAS -->" and "IS -->" printed neatly on all the relevant lines. As the boss began flipping through the pages, Argle smiled proudly, anticipating the pleasant surprise and glowing praise to come.

Quite to Argle's surprise, his boss fixed him with a red-faced, accusing glare. "Who said you could write a program?!"

Argle was speechless at first. "I was hired to program!" he finally blurted. "Besides, that's totally error-free! I know I couldn't have gotten everything correct by hand!"

The boss sighed. "I suppose not."

It wasn't until Argle was much older that his boss' reaction made any sense to him. The boss' goal hadn't been "compliance." He simply hadn't had anything constructive for Argle to do, and had thought he'd come up with a brilliant way to keep the new young hire busy and out of his hair for a few weeks.

Writer's note: Through the ages and across time, absolutely nothing has changed. In 2001, I worked at a (paid, thankfully) corporate internship where I was asked to manually browse through a huge network share and write down what every folder contained, all the way through thousands of files and sub-folders. Fortunately, I had heard of the dir command in DOS. Within 30 minutes, I proudly handed my boss the printout of the output—to his bemusement and dismay. —Ellis


Planet Debian: Dirk Eddelbuettel: Rcpp now used by 1750 CRAN packages

[Graph: number of CRAN packages using Rcpp over time]

Since this morning, Rcpp stands at just over 1750 reverse-dependencies on CRAN. The graph on the left depicts the growth of Rcpp usage (as measured by Depends, Imports and LinkingTo, but excluding Suggests) over time.

Rcpp was first released in November 2008. It probably cleared 50 packages around three years later in December 2011, 100 packages in January 2013, 200 packages in April 2014, and 300 packages in November 2014. It passed 400 packages in June 2015 (when I tweeted about it), 500 packages in late October 2015, 600 packages in March 2016, 700 packages in July 2016, 800 packages in October 2016, 900 packages in early January 2017, 1000 packages in April 2017, 1250 packages in November 2017, and 1500 packages in November 2018. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is available too.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four percent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of 2015, seven percent just before Christmas 2015, eight percent last summer, nine percent mid-December 2016, cracked ten percent in the summer of 2017 and eleven percent in 2018. We are currently at 11.83 percent: a little over one in nine packages. There is more detail in the chart: how CRAN seems to be pushing back more and removing more aggressively (which my CRANberries tracks, but not in as much detail as it could), and how the growth of Rcpp seems to be slowing somewhat, both outright and even more so as a proportion of CRAN – just like one would expect a growth curve to.


1750+ user packages is pretty mind-boggling. We can compare this with the progression of CRAN itself, compiled by Henrik in a series of posts and emails to the main development mailing list. Not that long ago CRAN itself did not have 1500 packages, and here we are at almost 14810 packages, with Rcpp used by 11.84% of them and still growing (though maybe more slowly). Amazeballs.

The Rcpp team continues to aim for keeping Rcpp as performant and reliable as it has been. A really big shoutout and Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Cory Doctorow: My MMT Podcast appearance, part 2: monopoly, money, and the power of narrative


Last week, the Modern Monetary Theory Podcast ran part 1 of my interview with co-host Christian Reilly; they’ve just published the second and final part of our chat (MP3), where we talk about the link between corruption and monopoly, how to pitch monetary theory to people who want to abolish money altogether, and how stories shape the future.

If you’re new to MMT, here’s my brief summary of its underlying premises: “Governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.”

Planet Debian: Joey Hess: releasing two haskell libraries in one day: libmodbus and git-lfs

The first library is a libmodbus binding in haskell.

There are a couple of other haskell modbus libraries, but none that support serial communication out of the box. I've been using a python library to talk to my solar charge controller, but it is not great at dealing with the slightly flakey interface. The libmodbus C library has features that make it more robust, and it also supports fast batched reads.

So a haskell interface to it seemed worth starting while I was doing laundry, and then for some reason it seemed worth writing a whole bunch more FFIs that I may never use, so it covers libmodbus fairly extensively. 660 lines of code all told.

There's an art to writing a good binding to a C library. I've seen ones that hew so close to the C API that you feel you're writing C and not haskell. On the other hand, some are so far removed from the underlying library that its documentation does not carry over at all.

I tried to strike a balance. Same function names so the extensive libmodbus documentation is easy to refer to while using it, but plenty of haskell data types so you won't mix up the parity with the stop bits.

And while it uses a mutable vector under the hood as the buffer for the FFI interface, so it can be just as fast as the C library, I also made the functions for reading things like registers and coils polymorphic, so that more convenient data types can be used at the expense of a bit of extra allocation.

The big win in this haskell binding is that you can leverage all the nice haskell libraries for dealing with binary data to parse the modbus data, rather than the ad-hoc integer and float conversion stuff from the C library.

For example, the Epever solar charge controller has its own slightly nonstandard way to represent 16 bit and 32 bit floats. Using the binary library to parse its registers in applicative style came out quite nice:

-- import added so the snippet stands alone; the original post assumes it
import Data.Binary.Get (Get, getWord16host)

data Epever = Epever
    { pv_array_voltage :: Float
    , pv_array_current :: Float
    , pv_array_power :: Float
    , battery_voltage :: Float
    } deriving (Show)

getEpever :: Get Epever
getEpever = Epever
    <$> epeverfloat  -- register 0x3100
    <*> epeverfloat  -- register 0x3101
    <*> epeverfloat2 -- register 0x3102 (low) and 0x3103 (high)
    <*> epeverfloat  -- register 0x3104
 where
    epeverfloat = decimals 2 <$> getWord16host
    epeverfloat2 = do
        l <- getWord16host
        h <- getWord16host
        -- combine the two registers in Integer so that h * 2^16 does not
        -- wrap around in the 16-bit word type
        return (decimals 2 (fromIntegral l + fromIntegral h * 2^16 :: Integer))
    decimals n v = fromIntegral v / (10^n)

The second library is a git-lfs implementation in pure Haskell.

Emphasis on the pure -- there is not a scrap of IO code in this library, just 400+ lines of data types, parsing, and serialization.

I wrote it a couple weeks ago so git-annex can store files in a git-lfs remote. I've also used it as a git-lfs server, mostly while exploring interesting edge cases of git-lfs.


This work was sponsored by Jake Vosloo on Patreon.

Cory Doctorow: Where to catch me at Burning Man!

This is my last day at my desk until Labor Day: tomorrow, we’re driving to Burning Man to get our annual dirtrave fix! If you’re heading to the playa, here’s three places and times you can find me:

Seating is always limited at these things (our living room is big, but it’s not that big!) so come by early!

I hope you have an amazing burn — we always do! This year I’m taking a break from working in the cafe pulling shots in favor of my first-ever Greeter shift, which I’m really looking forward to.

While we’re on the subject, there’s still time to sign up for the Liminal Labs Assassination Game!

Google Adsense: Additional safeguards to protect the quality of our ad network

Supporting a healthy ads ecosystem that works for publishers, advertisers, and users continues to be a top priority in our effort to sustain a free and open web. As the ecosystem evolves, our ad systems and defenses must adapt as well. Today, we’d like to highlight some of our efforts to protect the quality of our ad network, and the benefits to our publishers and the advertising ecosystem. 


Last year, we introduced a site verification process in AdSense to provide additional safeguards before a publisher can serve ads. This feature allows us to provide more direct feedback to our publishers on the eligibility of their site, while allowing us to communicate issues sooner and lessen the likelihood of future violations. As an added benefit, confirming which websites a publisher intends to monetize allows us to reduce potential misuse of a publisher's ad code, such as when a bad actor tries to claim a website as their own, or when they use a legitimate publisher's ad code to serve ads on bad content in an attempt to demonetize the good website — each day, we now block more than 120 million ad requests with this feature. 


This year, we’re enhancing our defenses even more by improving the systems that identify potentially invalid traffic or high risk activities before ads are served. These defenses allow us to limit ad serving as needed to further protect our advertisers and users, while maximizing revenue opportunities for legitimate publishers. While most publishers will not notice any changes to their ad traffic, we are working on improving the experience for those that may be impacted, by providing more transparency around these actions. Publishers on AdSense and AdMob that are affected will soon be notified of these ad traffic restrictions directly in their Policy Center. This will allow them to understand why they may be experiencing reduced ad serving, and what steps they can take to resolve any issues and continue partnering with us.


We’re excited for what’s to come, and will continue to roll out improvements to these systems with all of our users in mind. Look out for future updates on our ongoing efforts to promote and sustain a healthy ads ecosystem.


Posted by: Andres Ferrate, Chief Advocate for Ad Traffic Quality

Krebs on Security: Forced Password Reset? Check Your Assumptions

Almost weekly now I hear from an indignant reader who suspects a data breach at a Web site they frequent that has just asked the reader to reset their password. Further investigation almost invariably reveals that the password reset demand was not the result of a breach but rather the site’s efforts to identify customers who are reusing passwords from other sites that have already been hacked.

But ironically, many companies taking these proactive steps soon discover that their explanation as to why they’re doing it can get misinterpreted as more evidence of lax security. This post attempts to unravel what’s going on here.

Over the weekend, a follower on Twitter included me in a tweet sent to California-based job search site Glassdoor, which had just sent him the following notice:

The Twitter follower expressed concern about this message, because it suggested to him that in order for Glassdoor to have done what it described, the company would have had to be storing its users’ passwords in plain text. I replied that this was in fact not an indication of storing passwords in plain text, and that many companies are now testing their users’ credentials against lists of hacked credentials that have been leaked and made available online.

The reality is Facebook, Netflix and a number of big-name companies are regularly combing through huge data leak troves for credentials that match those of their customers, and then forcing a password reset for those users. Some are even checking for password re-use on all new account signups.

The idea here is to stymie a massively pervasive problem facing all companies that do business online today: Namely, “credential-stuffing attacks,” in which attackers take millions or even billions of email addresses and corresponding cracked passwords from compromised databases and see how many of them work at other online properties.

So how does the defense against this daily deluge of credential stuffing work? A company employing this strategy will first extract from these leaked credential lists any email addresses that correspond to their current user base.

From there, the corresponding cracked (plain text) passwords are fed into the same process that the company relies upon when users log in: That is, the company feeds those plain text passwords through its own password “hashing” or scrambling routine.

Password hashing is designed to be a one-way function which scrambles a plain text password so that it produces a long string of numbers and letters. Not all hashing methods are created equal, and some of the most commonly used methods — MD5 and SHA-1, for example — can be far less secure than others, depending on how they’re implemented (more on that in a moment). Whatever the hashing method used, it’s the hashed output that gets stored, not the password itself.

Back to the process: If a user’s plain text password from a hacked database matches the output of what a company would expect to see after running it through their own internal hashing process, that user is then prompted to change their password to something truly unique.

Now, password hashing methods can be made more secure by amending the password with what’s known as a “salt” — or random data added to the input of a hash function to guarantee a unique output. And many readers of the Twitter thread on Glassdoor’s approach reasoned that the company couldn’t have been doing what it described without also forgoing this additional layer of security.

My tweeted explanatory reply as to why Glassdoor was doing this was (in hindsight) incomplete and in any case not as clear as it should have been. Fortunately, Glassdoor’s chief information officer Anthony Moisant chimed in to the Twitter thread to explain that the salt is in fact added as part of the password testing procedure.

“In our [user] database, we’ve got three columns — username, salt value and scrypt hash,” Moisant explained in an interview with KrebsOnSecurity. “We apply the salt that’s stored in the database and the hash [function] to the plain text password, and that resulting value is then checked against the hash in the database we store. For whatever reason, some people have gotten it into their heads that there’s no possible way to do these checks if you salt, but that’s not true.”
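
To make that concrete, here is a minimal sketch of the check Moisant describes (my own illustration, not Glassdoor's code); the scrypt work factors and the exact column layout are assumptions:

import hashlib
import hmac

def leaked_password_matches(leaked_plaintext, salt, stored_hash):
    # Hash the leaked plain text password with the user's stored salt, using
    # the same routine a normal login would use, then compare the results.
    candidate = hashlib.scrypt(
        leaked_plaintext.encode("utf-8"),
        salt=salt,
        n=2**14, r=8, p=1,          # work-factor parameters are illustrative
        dklen=len(stored_hash),
    )
    # Constant-time comparison, just as for a real credential check.
    return hmac.compare_digest(candidate, stored_hash)

If this returns True for a row in a leaked dump, that user gets the forced reset.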

CHECK YOUR ASSUMPTIONS

You — the user — can’t be expected to know or control what password hashing methods a given site uses, if indeed they use them at all. But you can control the quality of the passwords you pick.

I can’t stress this enough: Do not re-use passwords. And don’t recycle them either. Recycling involves rather lame attempts to make a reused password unique by simply adding a digit or changing the capitalization of certain characters. Crooks who specialize in password attacks are wise to this approach as well.

If you have trouble remembering complex passwords (and this describes most people), consider relying instead on password length, which is a far more important determiner of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember.
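
If you would rather generate a passphrase than invent one, a few lines with a cryptographic random source will do; the tiny word list below is a stand-in, and a real one (for example a diceware list) should have thousands of entries:

import secrets

# Stand-in word list; use a large published list (e.g. a diceware list) in practice.
WORDS = ["granite", "otter", "maple", "quasar", "violin", "harbor", "pepper", "glacier"]

passphrase = " ".join(secrets.choice(WORDS) for _ in range(5))
print(passphrase)   # e.g. "maple harbor quasar otter violin"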

According to a recent blog entry by Microsoft group program manager Alex Weinert, none of the above advice about password complexity amounts to a hill of beans from the attacker’s standpoint.

Weinert’s post makes a compelling argument that as long as we’re stuck with passwords, taking full advantage of the most robust form of multi-factor authentication (MFA) offered by a site you frequent is the best way to deter attackers. Twofactorauth.org has a handy list of your options here, broken down by industry.

“Your password doesn’t matter, but MFA does,” Weinert wrote. “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

Glassdoor’s Moisant said the company doesn’t currently offer MFA for its users, but that it is planning to roll that out later this year to both consumer and business users.

Password managers also can be useful for those who feel encumbered by having to come up with passphrases or complex passwords. If you’re uncomfortable with entrusting a third-party service or application to handle this process for you, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop or screen or whatever, and b) that your password notebook is stored somewhere relatively secure, i.e. not in your purse or car, but something like a locked drawer or safe.

Although many readers will no doubt take me to task on that last bit of advice, as in all things security related it’s important not to let the perfect become the enemy of the good. Many people (think moms/dads/grandparents) can’t be bothered to use password managers  — even when you go through the trouble of setting them up on their behalf. Instead, without an easier, non-technical method they will simply revert to reusing or recycling passwords.

Cryptogram: Google Finds 20-Year-Old Microsoft Windows Vulnerability

There's no indication that this vulnerability was ever used in the wild, but the code it was discovered in -- Microsoft's Text Services Framework -- has been around since Windows XP.

Planet Debian: Russ Allbery: Review: Trail of Lightning

Review: Trail of Lightning, by Rebecca Roanhorse

Series: The Sixth World #1
Publisher: Saga
Copyright: 2018
ISBN: 1-5344-1351-0
Format: Kindle
Pages: 286

Maggie Hoskie is a monster hunter. Trained and then inexplicably abandoned by Neizghání, an immortal monster-slayer of her people, the Diné (Navajo), she's convinced that she's half-monster herself. Given that she's the sort of monster hunter who also kills victims that she thinks may be turned into monsters themselves, she may have a point. Apart from contracts to kill things, she stays away from nearly everyone except Tah, a medicine man and nearly her only friend.

The monster that she kills at the start of the book is a sign of a larger problem. Tah says that it was created by someone else using witchcraft. Maggie isn't thrilled at the idea of going after the creator alone, given that witchcraft is what Neizghání rescued her from in an event that takes Maggie most of the book to be willing to describe. Tah's solution is a partner: Tah's grandson Kai, a handsome man with a gift for persuasion who has never hunted a monster before.

If you've read any urban fantasy, you have a pretty good idea of where the story goes from there, and that's a problem. The hair-trigger, haunted kick-ass woman with a dark past, the rising threat of monsters, the protagonist's fear that she's a monster herself, and the growing romance with someone who will accept her is old, old territory. I've read versions of this from Laurell K. Hamilton twenty-five years ago to S.L. Huang's ongoing Cas Russell series. To stand out in this very crowded field, a series needs some new twist. Roanhorse's is the deep grounding in Native American culture and mythology. It worked well enough for many people to make it a Hugo, Nebula, and World Fantasy nominee. It didn't work for me.

I partly blame a throw-away line in Mike Kozlowski's review of this book for getting my hopes up. He said in a parenthetical note that "the book is set in Dinétah, a Navajo nation post-apocalyptically resurgent." That sounded great to me; I'd love to read about what sort of society the Diné might build if given the opportunity following an environmental collapse. Unfortunately, there's nothing resurgent about Maggie's community or people in this book. They seem just as poor and nearly as screwed as they are in our world; everyone else has just been knocked down even farther (or killed) and is kept at bay by magical walls. There's no rebuilding of civilization here, just isolated settlements desperate for water, plagued by local warlords and gangs, and facing the added misery of supernatural threats. It's bleak, cruel, and unremittingly hot, which does not make for enjoyable reading.

What Roanhorse does do is make extensive use of Native American mythology to shape the magic system, creatures, and supernatural world view of the book. This is great. We need a wider variety of magic systems in fantasy, and drawing on mythological systems other than Celtic, Greek, Roman, and Norse is a good start. (Roanhorse herself is Ohkay Owingeh Pueblo, not Navajo, but I assume without any personal knowledge that her research here is reasonably good.) But, that said, the way the mythology plays out in this book didn't work for me. It felt scattered and disconnected, and therefore arbitrary.

Some of the difficulty here is inherent in the combination of my unfamiliarity and the challenge of adopting real-world mythological systems for stories. As an SFF reader, one of the things I like from the world-building is structure. I like seeing how the pieces of the magical system fit together to build a coherent set of rules, and how the protagonists manipulate those rules in the story. Real-world traditions are rarely that neat and tidy. If the reader is already familiar with the tradition, they can fill in a lot of the untold back story that makes the mythology feel more coherent. If the author cannot assume that knowledge, they can get stuck between simplifying and restructuring the mythology for easy understanding or showing only scattered and apparently incoherent pieces of a vast system. I think the complaints about the distorted and simplified version of Celtic mythology in a lot of fantasy novels from those familiar with the real thing is the flip-side to this problem; it's worse mythology, but it may be more approachable storytelling.

I'm sure it didn't help that one of the most important mythological figures of this book is Coyote, a trickster god. I have great intellectual appreciation for the role of trickster gods in mythological systems, but this is yet more evidence that I rarely get along with them in stories. Coyote in this story is less of an unreliable friend and more of a straight-up asshole who was not fun to read about.

That brings me to my largest complaint about this novel: I liked exactly one person in the entire story. Grace, the fortified bar owner, is great and I would have happily read a book about her. Everyone else, including Maggie, ranged from irritating to unbearably obnoxious. I was saying the eight deadly words ("I don't care what happens to these people") by page 100.

Here, tastes will differ. Maggie acts the way that she does because she's sitting on a powder keg of unprocessed emotional injury from abuse, made far worse by Neizghání's supposed "friendship." It's realistic that she shuts down, refuses to have meaningful conversations, and lashes out at everyone on a hair trigger. I felt sympathy, but I didn't like her, and liking her is important when the book is written in very immediate present-tense first person. Kai is better, but he's a bit too much of a stereotype, and I have an aversion to supposedly-charming men. I think some of the other characters could have been good if given enough space (Tah, for instance), but Maggie's endless loop of self-hatred doesn't give them any room to breathe.

Add on what I thought were structural and mechanical flaws (the first-person narration is weirdly specific and detail-oriented in a way that felt like first-novel mechanical problems, and the ending is one of the least satisfying and most frustrating endings I have ever read in a book of this sort) and I just didn't like this. Clearly there are a lot of people nominating and voting for awards who think I'm wrong, so your mileage may vary. But I thought it was unoriginal except for the mythology, unsatisfying in the mythology, and full of unlikable characters and unpleasant plot developments. I'm unlikely to read more in this series.

Followed by Storm of Locusts.

Rating: 4 out of 10


Planet Debian: Philipp Kern: Alpha: Self-service buildd givebacks

Builds on Debian's build farm sometimes fail transiently. Sometimes those failures are legitimate flakes, for instance when an in-progress build happens to exhaust its resources because of other builds on the same machine. Until now, you always needed to mail the buildd or wanna-build admins, or the Release Team, directly in order to get the builds re-queued.

As an alpha trial I implemented self-service givebacks as a web script. As SSO for Debian developers is now a thing, it is trivial to add authentication in a way that a role account can use to act on your behalf. While at work this would all be an RPC service, I figured that a little CGI script would do the job just as well. So lo and behold, accessing
https://buildd.debian.org/auth/giveback.cgi?pkg=<package>&suite=<suite>&arch=<arch> with the right parameters set:

You are authenticated as pkern. ✓
Working on package fife, suite sid and architecture mipsel. ✓
Package version 0.4.2-1 in state Build-Attempted, can be given back. ✓
Successfully given back the package. ✓

Note that you need to be a Debian developer with a valid SSO client certificate to access this service.
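
For illustration only (this is not an official client), calling the endpoint from a script looks roughly like the following; the certificate file names and the package details are placeholders:

import requests

# Placeholders: point these at your own Debian SSO client certificate and key.
response = requests.get(
    "https://buildd.debian.org/auth/giveback.cgi",
    params={"pkg": "fife", "suite": "sid", "arch": "mipsel"},
    cert=("sso-client.crt", "sso-client.key"),
)
print(response.status_code)
print(response.text)   # keep this output; it is useful when filing bugs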

So why do I say alpha? We still expect Debian developers to act responsibly when looking at build failures. A lot of times there is a legitimate bug in the package, and the last thing we would like to see as a project is someone addressing flakiness by continuously retrying a build. Access to this service is logged. Most people coming to us today did their due diligence and tried reproducing the issue on a porterbox. We still expect these things to happen, but this aims to cut down on the round-trip time until an admin gets around to processing your request, which has been longer than necessary recently. We will audit the logs and see if particular packages stand out.

There can also still be bugs. Please file them against buildd.debian.org when you see them. Please include a copy of the output, which includes validation and important debugging information when requests are rejected. Also this all only works for packages in Build-Attempted. If the build has been marked as Failed (which is a manual process), you still need to mail us. And lastly the API can still change. Luckily the state change can only happen once, so it's not much of a problem for the GET request to be retried. But it should likely move to POST anyhow. In that case I will update this post to reflect the new behavior.

Thanks to DSA for making sure that I run the service sensibly using a dedicated role account as well as WSGI and doing the work to set up the necessary bits.

Cryptogram: Surveillance as a Condition for Humanitarian Aid

Excellent op-ed on the growing trend to tie humanitarian aid to surveillance.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don't apply for people who are starving.

Planet Debian: Bits from Debian: salsa.debian.org: Postmortem of failed Docker registry move

The Salsa admin team provides the following report about the failed migration of the Docker container registry. The Docker container registry stores Docker images, which are for example used in the Salsa CI toolset. This migration would have moved all data off to Google Cloud Storage (GCS) and would have lowered the used file system space on Debian systems significantly.

The Docker container registry is part of the Docker distribution toolset. This system supports multiple backends for file storage: local, Amazon Simple Storage Service (Amazon S3) and Google Cloud Storage (GCS). As Salsa already uses GCS for data storage, the Salsa admin team decided to move all the Docker registry data off to GCS too.

Migration and rollback

On 2019-08-06 the migration process was started. The migration itself went fine, although it took a bit longer than anticipated. However, as not all parts of the migration had been properly tested, a test of the garbage collection triggered a bug in the software.

On 2019-08-10 the Salsa admins started to see problems with garbage collection. The job running it timed out after one hour. Within this timeframe it did not even manage to collect information about all used layers to see what it could clean up. A source code analysis showed that this design flaw can't be fixed.

On 2019-08-13 the change was rolled back to storing data on the file system.

Docker registry data storage

The Docker registry stores all of the data, sans indexing or reverse references, in a file system-like structure comprised of 4 separate types of information: manifests of images and contents, tags for the manifests, deduplicated layers (or blobs) which store the actual data, and lastly links which show which deduplicated blobs belong to their respective images. None of this allows for easy searching within the data.

The file system structure is built as append-only, which allows for adding blobs and manifests, and for the addition, modification, or deletion of tags. However, cleanup of items other than tags is not achievable with the maintenance tools.

There is a garbage collection process which can be used to clean up unreferenced blobs; however, according to the documentation, the process can only be used while the registry is set to read-only, and unfortunately it cannot be used to clean up unused links.

Docker registry garbage collection on external storage

For the garbage collection the registry tool needs to read a lot of information, as there is no indexing of the data. The tool connects to the storage medium and proceeds to download … everything, every single manifest and information about the referenced blobs, and it now takes over 1 second to process a single manifest. This process takes a significant amount of time, which in the current configuration of external storage would make the cleanup nearly impossible.
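
Conceptually, the collector has to do a full mark-and-sweep over every manifest before it can delete a single blob, which is why the missing index hurts so much. A sketch (with an entirely hypothetical storage API, not the registry's real code):

def garbage_collect(storage):
    # Mark phase: every manifest must be fetched to learn which blobs it uses.
    referenced = set()
    for repo in storage.list_repositories():           # hypothetical API
        for manifest in storage.list_manifests(repo):  # one slow fetch each
            referenced.update(storage.blobs_referenced(repo, manifest))
    # Sweep phase: only now can unreferenced blobs be deleted.
    for blob in storage.list_blobs():
        if blob not in referenced:
            storage.delete_blob(blob)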

Lessons learned

The Docker registry is a data storage tool that can only properly be used in append-only mode. If you never cleanup, it works well.

As soon as you want to actually remove data, it goes bad. For Salsa, cleanup of old data is actually a necessity, as the registry currently grows by about 20GB per day.

Next steps

Sadly there is not much that can be done using the existing Docker container registry. Maybe GitLab or someone else would like to contribute a new implementation of a Docker registry, either integrated into GitLab itself or stand-alone?

Planet Debian: Raphaël Hertzog: Promoting Debian LTS with stickers, flyers and a video

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor the Debian LTS project through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some people will still have the feeling that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:


Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.


Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, July 2019

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2 extra hours from June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).

Evolution of the situation

July was different than other months. First, some people have been on actual vacations, while 4 of the above 14 contributors met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. Also, a new promotional video about Debian LTS, aimed at potential sponsors, was shown there for the first time.

DebConf19 was also a success with respect to on-boarding new contributors: we’ve found three potential new contributors, one of whom is already in training.

The security tracker (now for oldoldstable as Buster has been released and thus Jessie became oldoldstable) currently lists 51 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


Worse Than Failure: CodeSOD: I'm Sooooooo Random, LOL

There are some blocks of code that require a preamble, and an explanation of the code and its flow. Often you need to provide some broader context.

Sometimes, you get some code like Wolf found, which needs no explanation:

export function generateRandomId(): string {
    counter++;
    return 'id' + counter;
}

I mean, I guess that's slightly better than this solution. Wolf found this because some code downstream was expecting random, unique IDs, and wasn't getting them.


Planet Debian: Dirk Eddelbuettel: RcppQuantuccia 0.0.3

A maintenance release of RcppQuantuccia arrived on CRAN earlier today.

RcppQuantuccia brings the Quantuccia header-only subset / variant of QuantLib to R. At the current stage, it mostly offers date and calendaring functions.

This release was triggered by some work CRAN is doing on updating C++ standards for code in the repository. Notably, under C++11 some constructs such as ptr_fun, bind1st, bind2nd, … are now deprecated, and CRAN prefers the code base to not issue such warnings (as e.g. now seen under clang++-9). So we updated the corresponding code in a good dozen or so places to the (more current and compliant) code from QuantLib itself.

We also took this opportunity to significantly reduce the footprint of the sources and the installed shared library of RcppQuantuccia. One (unexported) feature was pricing models via Brownian Bridges based on quasi-random Sobol sequences. But the main source file for these sequences comes in at several megabytes in size, and allocates a large number of constants. So in this version the file is excluded, making the current build of RcppQuantuccia lighter in size and more suitable for the (simpler, popular and trusted) calendar functions. We also added a new holiday to the US calendar.

The complete list of changes follows.

Changes in version 0.0.3 (2019-08-19)

  • Updated Travis CI test file (#8).

  • Updated US holiday calendar data with G H Bush funeral date (#9).

  • Updated C++ use to not trigger warnings [CRAN request] (#9).

  • Comment-out pragmas to suppress warnings [CRAN Policy] (#9).

  • Change build to exclude Sobol sequence reducing file size for source and shared library, at the cost of excluding market models (#10).

Courtesy of CRANberries, there is also a diffstat report relative to the previous release. More information is on the RcppQuantuccia page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Jaskaran Singh: GSoC Final Report

Introduction:

The Debian Patch Porting System aims to systematize and partially automate the security patch porting process.

In this Google Summer of Code (2019), I wrote a webcrawler to extract security patches for a given security vulnerability identifier. This webcrawler or patch-finder serves as the first step of the Debian Patch Porting System.

The Patch-finder should recognize numerous vulnerability identifiers. These identifiers can be security advisories (DSA, GLSA, RHSA), vulnerability identifiers (OVAL, CVE), etc. So far, it can identify CVE, DSA (Debian Security Advisory), GLSA (Gentoo Linux Security Advisory) and RHSA (Red Hat Security Advisory).

Each vulnerability identifier has a list of entrypoint URLs associated with it. These URLs are used to initiate the patch finding.

Vulnerabilities that are not CVEs are generic vulnerabilities. If a generic vulnerability is given, its “aliases” (i.e. CVEs that are related to the generic vulnerability) are determined. This method was chosen because CVEs are quite possibly the most widely used security vulnerability identifiers and thus would have the most patches associated with them. Once the aliases are determined, the entrypoint URLs of the aliases are crawled for the patch-finding.

The Patch-finder is based on the web crawling and scraping framework Scrapy.
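
To give a flavour of the recursive crawl (a toy sketch only; the class name, field names and patch heuristics here are made up, not the project's actual code):

import scrapy

class PatchSpiderSketch(scrapy.Spider):
    name = "patch_finder_sketch"

    def __init__(self, vuln_id="CVE-2019-0000", start_urls=None, **kwargs):
        super().__init__(**kwargs)
        self.vuln_id = vuln_id
        self.start_urls = start_urls or []   # the vulnerability's entrypoint URLs

    def parse(self, response):
        for href in response.css("a::attr(href)").getall():
            if href.endswith((".patch", ".diff")) or "/commit/" in href:
                yield {"vuln_id": self.vuln_id,
                       "patch_link": response.urljoin(href)}
            else:
                # Non-patch links in the area of interest are followed recursively.
                yield response.follow(href, callback=self.parse)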

What was done:

During these three months, I have:

  • Used Scrapy to implement a spider to collect patch links.
  • Implemented a recursive patch-finding process. Any links that the patch-finder finds on a page (in a certain area of interest, of course) that are not patch links are followed.
  • Implemented a crawler to extract patches from Debian Packages.
  • Implemented a crawler to extract patches from a given GitHub repository.

Here’s a link to the patch-finder’s Github Repository which I have used for GSoC.

TODO:

There is a lot more stuff to be done, from solving small bugs to implementing major features. Some of these issues are on the project’s GitHub issue tracker here. Following is a summary of these issues and a few more ideas:

  • A way to uniquely identify patches. This is so that the same patches are not scraped and collected more than once.
  • A Database, and a corresponding database API.
  • Store patches in the database, along with any other information.
  • Collect not only patches but other information relevant to the vulnerability.
  • Integrate the Github crawler/parser in the crawling process.
  • A way to check the relevancy of the patch to the vulnerability. A naive solution is, of course, to simply check for mention of the vulnerability ID in the patch description.
  • Efficient page filters. Certain links should not be crawled because it is obvious they will not yield any patches, for example homepages.
  • A better way to scrape links, rather than using a URL’s corresponding xpath.
  • A more efficient testing framework.
  • More crawlers/parsers.

Personal Notes:

Google Summer of Code has been a super comfortable and fun experience for me. I’ve learnt tonnes about Python, Open Source and Software Development. My mentors Luciano Bello and László Böszörményi have been immensely helpful and have guided me through these three months.

I plan to continue working on this project and hopefully develop it to a state where Debian and everyone who needs it can use it conveniently.


Cory Doctorow: Podcast: A cycle of renewal, broken: How Big Tech and Big Media abuse copyright law to slay competition

In my latest podcast (MP3), I read my essay “A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition”, published today on EFF’s Deeplinks; it’s the latest in my ongoing series of case-studies of “adversarial interoperability,” where new services unseated the dominant companies by finding ways to plug into existing products against the wishes of those products’ manufacturers. This week’s installment recounts the history of cable TV, and explains how the legal system in place when cable was born was subsequently extinguished (with the help of the cable companies who benefitted from it!), meaning that no one can do to cable what cable once did to broadcasters.

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own “community antenna television” networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get cable TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony’s Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were “capable of substantial noninfringing uses.”

MP3

Planet Debian: Jonathan Dowland: Shared notes and TODO lists

When it comes to organising myself, I've long been anachronistic. I've relied upon paper notebooks for most of my life. In the last 15 years I've stuck to a particular type of diary/notebook hybrid, with a week-to-view on the left-hand side of pages and lined notebook pages on the right.

This worked well for me for my own personal stuff but obviously didn't work well for family things that need to be shared. Trying to find systems that work for both my wife and me has proven really challenging. The best we've come up with so far is a shared (IMAP) account and Apple's notes apps.

On iOS, Apple's low-frills note-taking app lets you synchronise your notes with a mail account (over IMAP). It stores them individually in HTML format, one email per note page, in a mailbox called "Notes". You can set up note syncing to the same account from multiple devices, and so we have a "family" mailbox set up on both my phone and my wife's. I can also get at the notes using any other mail client if I need to.
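
As an illustration of how plain the storage is, the same notes can be pulled down with nothing fancier than Python's imaplib; the host, account and password below are placeholders:

import imaplib

conn = imaplib.IMAP4_SSL("imap.example.org")
conn.login("family@example.org", "app-password")
conn.select("Notes", readonly=True)              # the mailbox the notes app syncs to
typ, data = conn.search(None, "ALL")
for num in data[0].split():
    typ, msg = conn.fetch(num, "(BODY[TEXT])")   # each note is one HTML-bodied mail
    print(msg[0][1][:200])                       # peek at the start of the note body
conn.logout()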

This works surprisingly well, but not perfectly. In particular synchronising changes to notes can go wrong if we both have the same note page open at the same time. The failure mode is not the worst: it duplicates the note into two; but it's still a problem.

Can anyone recommend a simple, more robust system for sharing notes — and task lists — between people? For task lists, it would be lovely (but not essential) if we could tick things off. At the moment we manage that just as free-form text.

Planet Debian: Holger Levsen: 20190818-cccamp

Home again

Two days ago I finally arrived home again and was greeted with this very nice view when entering the area:

(These images were taken yesterday from inside the venue.)

To give an idea of scale, the Pesthörnchen flag on top is 2m wide :)

Since today, there's also a rainbow flag next to the Pesthörnchen one. I'm very much looking forward to the next days, though buildup is big fun already.

Planet Debian: Antoine Beaupré: KNOB attack: Is my Bluetooth device insecure?

A recent attack against Bluetooth, called KNOB, made waves last week. In essence, it allows an attacker to downgrade the security of a Bluetooth connection so much that it's possible for the attacker to break the encryption key and spy on all the traffic. The attack is so devastating that some have described it as the "stop using bluetooth" flaw.

This is my attempt at answering my own lingering questions about "can I still use Bluetooth now?" Disclaimer: I'm not an expert in Bluetooth at all, and just base this analysis on my own (limited) knowledge of the protocol, and some articles (including the paper) I read on the topic.

Is Bluetooth still safe?

It really depends what "safe" means, and what your threat model is. I liked how the Ars Technica article put it:

It's also important to note the hurdles—namely the cost of equipment and a surgical-precision MitM—that kept the researchers from actually carrying out their over-the-air attack in their own laboratory. Had the over-the-air technique been easy, they almost certainly would have done it.

In other words, the active attack is really hard to do, and the researchers didn't actually do one at all! It's a theoretical flaw, at this point, and while it's definitely possible, it's not what the researchers did:

The researchers didn't carry out the man-in-the-middle attack over the air. They did, however, root a Nexus 5 device to perform a firmware attack. Based on the response from the other device—a Motorola G3—the researchers said they believe that both attacks would work.

This led some researchers to (boldly) say they would still use a Bluetooth keyboard:

Dan Guido, a mobile security expert and the CEO of security firm Trail of Bits, said: "This is a bad bug, although it is hard to exploit in practice. It requires local proximity, perfect timing, and a clear signal. You need to fully MitM both peers to change the key size and exploit this bug. I'm going to apply the available patches and continue using my bluetooth keyboard."

So, what's safe and what's not, in my much humbler opinion?

Keyboards: bad

The attack is a real killer for Bluetooth keyboards. If an active attack is leveraged, it's game over: everything you type is visible to the attacker, and that includes, critically, passwords. In theory, one could even input keyboard events into the channel, which allows basically arbitrary code execution on the host.

Some, however, made the argument that it's probably easier to implant a keylogger in the device than actually do that attack, but I disagree: this requires physical access, while the KNOB attack can be done remotely.

How far this can be done, by the way, is still open to debate. The Telegraph claimed "a mile" in a click-bait title, but I think such an attacker would need to be much closer for this to work, more in the range of "meters" than "kilometers". But it still means "a black van sitting outside your house" instead of "a dude breaking into your house", which is a significant difference.

Other input devices: hum

I'm not sure mice and other input devices are such a big deal, however. Extracting useful information from those mice moving around the screen is difficult without seeing what's behind that screen.

So unless you use an on-screen keyboard or have special input devices, I don't think those are such a big deal when spied upon.

They could be leveraged with other attacks to make you "click through" some things an attacker would otherwise not be able to do.

Speakers: okay

I think I'll still keep using my Bluetooth speakers. But that's because I don't have much confidential audio I listen to. I listen to music, movies, and silly cat videos; not confidential interviews with victims of repression that should absolutely have their identities protected. And if I ever come across such material, I now know that I should not trust that speaker.

Otherwise, what's an attacker going to do here: listen to my (ever decreasing) voicemail (which is transmitted in cleartext by email anyways)? Listen to that latest hit? Meh.

Do keep in mind that some speakers have microphones in them as well, so that's not the entire story...

Headsets and microphones: hum

Headsets and microphones are another beast, as they can listen to other things in your environment. I do feel much less comfortable using those devices now. What makes the entire thing really iffy is some speakers do have microphones in them and all of a sudden everything around you can listen on your entire life.

(It seems like a given, with "smart home assistants" these days, but I still like to think my private conversations at home are private, in general. And I generally don't want to be near any of those "smart" devices, to be honest.)

One mitigating circumstance here is that the attack needs to happen during the connection (or pairing? still unclear) negotiation, which doesn't happen that often if everything works correctly. Unfortunately, this happens quite often exactly with speakers and headsets. That's because many of those devices stupidly have low limits on the number of devices they can pair with. For example, the Bose Soundlink II can only pair with 8 other devices. If you count three devices per person (laptop, workstation, phone), you quickly hit the limit when you move the device around. So I end up re-pairing that device quite often.

And that would be if the attack takes place during the pairing phase. As it turns out, the attack window is much wider: the attack happens during the connection stage (see Figure 1, page 1049 in the paper), after devices have paired. This actually happens way more often than just during pairing. Any time your speaker or laptop goes to sleep, it will disconnect. Then to start using the device again, the BT layer will renegotiate that key size, and the attack can happen again.

(I have written the authors of the paper to clarify at which stage the attack happens and will update this post when/if they reply. Update: Daniele Antonioli has confirmed the attack takes place at connect phase.)

Fortunately, the Bose Soundlink II has no microphone, which I'm thankful of. But my Bluetooth headset does have a microphone, which makes me less comfortable.

File and contact transfers: bad

Bluetooth, finally, is also used to transfer things other than audio, of course. It's clunky, weird and barely working, but it's possible to send files over Bluetooth, and some headsets and car controllers will ask your permission to list your contacts so that "smart" features like "OK Google, call dad please" will work.

This attack makes it possible for an attacker to steal your contacts when devices connect. It can also intercept file transfers and so on.

That's pretty bad, to say the least.

Unfortunately, the "connection phase" mitigation described above is less relevant here. It's less likely you'll be continuously connecting two phones (or your phone and laptop) for the purpose of file transfers. What's more likely is that you'll connect the devices for the explicit purpose of the file transfer, which gives an attacker an attack window at every transfer.

I don't really use the "contacts" feature anyways (because it creeps me the hell out in the first place), so that's not a problem for me. But the file transfer problem will certainly give me pause the next time I need to feel the pain of transferring files over Bluetooth, which I hope is "never".

It's interesting to note the parallel between this flaw, which will mostly affect Android file transfers, and the recent disclosure of flaws in Apple's AirDrop protocol, which was similarly believed to be secure even though it was opaque and proprietary. Now, think a bit about how AirDrop uses Bluetooth to negotiate part of the protocol, and you may start to feel, as I do, that everything in security just keeps crashing down and we don't seem to be able to make any progress at all.

Overall: meh

I've always been uncomfortable with Bluetooth devices: the pairing process has no real authentication whatsoever. The best you get is to enter a PIN, and it's often "all zeros" or something trivially easy to brute-force. So Bluetooth security has always felt like a scam, and I never trusted Bluetooth keyboards with passwords in particular.

Like many branded attacks, I think this one might be somewhat overstated. Yes, it's a powerful attack, but Bluetooth implementations are already mostly proprietary junk that is undecipherable from the open source world. There are few if any open hardware implementations, so it's somewhat expected that we find things like this.

I also find the response from the Bluetooth SIG particularly alarming:

To remedy the vulnerability, the Bluetooth SIG has updated the Bluetooth Core Specification to recommend a minimum encryption key length of 7 octets for BR/EDR connections.

7 octets is 56 bits. That's the equivalent of DES, which was broken in 56 hours over 20 years ago. That's far from enough. But what's more disturbing is that this key size negotiation protocol might be there "because 'some' governments didn't want other governments to have stronger encryption", i.e. it would be a backdoor.
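
To put that number in perspective, here is a minimal back-of-the-envelope sketch (the 10^9 keys-per-second rate is my own illustrative assumption; dedicated hardware, as the DES crackers showed long ago, is far faster):

```python
# 7 octets = 56 bits of key material.
keyspace = 2 ** 56                       # about 7.2e16 possible keys
rate = 10 ** 9                           # assumed keys tried per second (illustrative)

seconds = keyspace / rate
print(f"keyspace: {keyspace:.2e} keys")
print(f"exhaustive search at 1e9 keys/s: {seconds / 86400:.0f} days")
# Roughly 834 days at that modest rate -- and an attacker only needs half the
# keyspace on average, while specialized or distributed hardware is orders of
# magnitude faster.
```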

The 7-byte lower bound might also be there because of Apple lobbying. Their AirPods were implemented in a way that is not standards-compliant and already has that 7-byte lower bound, so by fixing the standard to match one Apple implementation, they would reduce the cost of a recall, replacement or upgrade.

Overall, this behavior of the standards body is what should make us suspicious of any Bluetooth device going forward, and question the motivations of the entire Bluetooth standardization process. We can't use 56-bit keys anymore, and I can't believe I need to say so explicitly, but it seems that's where we're at with Bluetooth these days.

TEDWhat does it mean to become a TED Fellow?

Every year, TED begins a new search looking for the brightest thinkers and innovators to be part of the TED Fellows program. With nearly 500 visionaries representing 300 different disciplines, these extraordinary individuals are making waves, disrupting the status quo and creating real impact.

Through a rigorous application process, we narrow down our candidate pool of thousands to just 20 exceptional people. (Trust us, this is not easy to do.) You may be wondering what makes for a good application (read more about that here), but just as importantly: What exactly does it mean to be a TED Fellow? Yes, you’ll work hand-in-hand with the Fellows team to give a TED Talk on stage, but being a Fellow is so much more than that. Here’s what happens once you get that call.

1. You instantly have a built-in support system.

Once selected, Fellows become part of our active global community. They are connected to a diverse network of other Fellows who they can lean on for support, resources and more. To get a better sense of who these people are (fishing cat conservationists! space environmentalists! police captains!), take a closer look at our class of 2019 Fellows, who represent 12 countries across four continents. Their common denominator? They are looking to address today’s most complex challenges and collaborate with others — which could include you.

2. You can participate in TED’s coaching and mentorship program.

To help Fellows achieve an even greater impact with their work, they are given the opportunity to participate in a one-of-a-kind coaching and mentoring initiative. Collaboration with a world-class coach or mentor helps Fellows maximize effectiveness in their professional and personal lives and make the most of the fellowship.

The coaches and mentors who support the program are some of the world’s most effective and intuitive individuals, each inspired by the TED mission. Fellows have reported breakthroughs in financial planning, organizational effectiveness, confidence and interpersonal relationships thanks to coaches and mentors. Head here to learn more about this initiative. 

3. You’ll receive public relations guidance and professional development opportunities, curated through workshops and webinars. 

Have you published exciting new research or launched a groundbreaking project? We partner with a dedicated PR agency to provide PR training and valuable media opportunities with top tier publications to help spread your ideas beyond the TED stage. The TED Fellows program has been recognized by PR News for our “PR for Fellows” program.

In addition, there are vast opportunities for Fellows to hone their skills and build new ones through invigorating workshops and webinars that we arrange throughout the year. We also maintain a Fellows Blog, where we continue to spotlight Fellows long after they give their talks.

***

Over the last decade, our program has helped Fellows impact the lives of more than 180 million people. Success and innovation like this doesn’t happen in a vacuum — it’s sparked by bringing Fellows together and giving them this kind of support. If this sounds like a community you want to join, apply to become a TED Fellow by August 27, 2019 11:59pm UTC.

Planet DebianJonathan Dowland: NAS upgrade

After 5 years of continuous service, the mainboard in my NAS recently failed (at the worst possible moment). I opted to replace the mainboard with a more modern version of the same idea: ASRock J4105-ITX featuring the Intel J4105, an integrated J-series Celeron CPU, designed to be passively cooled, and I've left the rest of the machine as it was.

In the process of researching which CPU/mainboard to buy, I was pointed at the Odroid-H2: a single-board computer (SBC) designed and marketed at a similar sector to things like the Raspberry Pi (but featuring the exact same CPU as the mainboard I eventually settled on). I've always felt that the case I'm using for my NAS is too large, but didn't want to spend much money on a smaller one. The Odroid-H2 has a number of cheap, custom-made cases for different use-cases, including one for NAS-style work with a very small footprint: the "Case 1". Unfortunately this case positions the two disk drives flat, one vertically above the other, and both above the SBC. I was too concerned that one drive would heat the other, and that both together would heat the SBC in that orientation. The case is designed with a fan, but I want to avoid requiring one. I had too many bad memories of trying to control the heat in my first NAS, the Thecus n2100, which (by default) oriented the drives in the same way (and for some reason it never occurred to me to rotate that device into the "toaster" orientation).

I've mildly revised my NAS page to reflect the change. Interestingly, most of the niggles I was experiencing were about the old mainboard, so I've moved them to a separate page (J1900N-D3V) in case they are useful to someone.

At some point in the future I hope to spend a little bit of time on the software side of things, as some of the features of my set up are no longer working as they should: I can't remote-decrypt the main disk via SSH on boot, and the first run of any backup fails due to some kind of race condition in the systemd unit dependencies. (The first attempt does not correctly mount the backup partition; the second attempt always succeeds).

Krebs on SecurityThe Rise of “Bulletproof” Residential Networks

Cybercrooks increasingly are anonymizing their malicious traffic by routing it through residential broadband and wireless data connections. Traditionally, those connections have been mainly hacked computers, mobile phones, or home routers. But this story is about so-called “bulletproof residential VPN services” that appear to be built by purchasing or otherwise acquiring discrete chunks of Internet addresses from some of the world’s largest ISPs and mobile data providers.

In late April 2019, KrebsOnSecurity received a tip from an online retailer who’d seen an unusual number of suspicious transactions originating from a series of Internet addresses assigned to a relatively new Internet provider based in Maryland called Residential Networking Solutions LLC.

Now, this in itself isn’t unusual; virtually every provider has the occasional customers who abuse their access for fraudulent purposes. But upon closer inspection, several factors caused me to look more carefully at this company, also known as “Resnet.”

An examination of the IP address ranges assigned to Resnet shows that it maintains an impressive stable of IP blocks — totaling almost 70,000 IPv4 addresses — many of which had until quite recently been assigned to someone else.

Most interestingly, about ten percent of those IPs — more than 7,000 of them — had until late 2018 been under the control of AT&T Mobility. Additionally, the WHOIS registration records for each of these mobile data blocks suggest Resnet has been somehow reselling data services for major mobile and broadband providers, including AT&T, Verizon, and Comcast Cable.

The WHOIS records for one of several networks associated with Residential Networking Solutions LLC.

Drilling down into the tracts of IPs assigned to Resnet’s core network indicates those 7,000+ mobile IP addresses under Resnet’s control were given the label  “Service Provider Corporation” — mostly those beginning with IPs in the range 198.228.x.x.

An Internet search reveals this IP range is administered by the Wireless Data Service Provider Corporation (WDSPC), a non-profit formed in the 1990s to manage IP address ranges that could be handed out to various licensed mobile carriers in the United States.

Back when the WDSPC was first created, there were quite a few mobile wireless data companies. But today the vast majority of the IP space managed by the WDSPC is leased by AT&T Mobility and Verizon Wireless — which have gradually acquired most of their competing providers over the years.

A call to the WDSPC revealed the nonprofit hadn’t leased any new wireless data IP space in more than 10 years. That is, until the organization received a communication at the beginning of this year that it believed was from AT&T, which recommended Resnet as a customer who could occupy some of the company’s mobile data IP address blocks.

“I’m afraid we got duped,” said the person answering the phone at the WDSPC, while declining to elaborate on the precise nature of the alleged duping or the medium that was used to convey the recommendation.

AT&T declined to discuss its exact relationship with Resnet  — or if indeed it ever had one to begin with. It responded to multiple questions about Resnet with a short statement that said, “We have taken steps to terminate this company’s services and have referred the matter to law enforcement.”

Why exactly AT&T would forward the matter to law enforcement remains unclear. But it’s not unheard of for hosting providers to forge certain documents in their quest for additional IP space, and anyone caught doing so via email, phone or fax could be charged with wire fraud, which is a federal offense that carries punishments of up to $500,000 in fines and as much as 20 years in prison.

WHAT IS RESNET?

The WHOIS registration records for Resnet’s main Web site, resnetworking[.]com, are hidden behind domain privacy protection. However, a cursory Internet search on that domain turned up plenty of references to it on Hackforums[.]net, a sprawling community that hosts a seemingly never-ending supply of up-and-coming hackers seeking affordable and anonymous ways to monetize various online moneymaking schemes.

One user in particular — a Hackforums member who goes by the nickname “Profitvolt” — has spent several years advertising resnetworking[.]com and a number of related sites and services, including “unlimited” AT&T 4G/LTE data services, and the immediate availability of more than 1 million residential IPs that he suggested were “perfect for botting, shoe buying.”

The Hackforums user “Profitvolt” advertising residential proxies.

Profitvolt advertises his mobile and residential data services as ideal for anyone who wishes to run “various bots,” or “advertising campaigns.” Those services are meant to provide anonymity when customers are doing things such as automating ad clicks on platforms like Google Adsense and Facebook; generating new PayPal accounts; sneaker bot activity; credential stuffing attacks; and different types of social media spam.

For readers unfamiliar with this term, “shoe botting” or “sneaker bots” refers to the use of automated bot programs and services that aid in the rapid acquisition of limited-release, highly sought-after designer shoes that can then be resold at a profit on secondary markets. All too often, it seems, the people who profit the most in this scheme are using multiple sets of compromised credentials from consumer accounts at online retailers, and/or stolen payment card data.

To say shoe botting has become a thorn in the side of online retailers and regular consumers alike would be a major understatement: A recent State of The Internet Security Report (PDF) from Akamai (an advertiser on this site) noted that such automated bot activity now accounts for almost half of the Internet bandwidth directed at online retailers. The prevalence of shoe botting also might help explain Footlocker’s recent $100 million investment in goat.com, the largest secondary shoe resale market on the Web.

In other discussion threads, Profitvolt advertises he can rent out an “unlimited number” of so-called “residential proxies,” a term that describes home or mobile Internet connections that can be used to anonymously relay Internet traffic for a variety of dodgy deals.

From a ne’er-do-well’s perspective, the beauty of routing one’s traffic through residential IPs is that few online businesses will bother to block malicious or suspicious activity emanating from them.

That’s because in general the pool of IP addresses assigned to residential or mobile wireless connections cycles intermittently from one user to the next, meaning that blacklisting one residential IP for abuse or malicious activity may only serve to then block legitimate traffic (and e-commerce) from the next user who gets assigned that same IP.

A BULLETPROOF PLAN?

In one early post on Hackforums, Profitvolt laments the untimely demise of various “bulletproof” hosting providers over the years, from the Russian Business Network and Atrivo/Intercage, to McColo, 3FN and Troyak, among others.

All of these Internet providers had one thing in common: They specialized in cultivating customers who used their networks for nefarious purposes — from operating botnets and spamming to hosting malware. They were known as “bulletproof” because they generally ignored abuse complaints, or else blamed any reported abuse on a reseller of their services.

In that Hackforums post, Profitvolt bemoans that “mediums which we use to distribute [are] locking us out and making life unnecessarily hard.”

“It’s still sketchy, so I am not going all out to reveal my plans, but currently I am starting off with a 32 GB RAM server with a 1 GB unmetered up-link in a Caribbean country,” Profitvolt told forum members, while asking in different Hackforums posts whether there are any other users from the dual-island Caribbean nation of Trinidad and Tobago on the forum.

“To be quite honest, the purpose of this is to test how far we can stretch the leniency before someone starts asking questions, or we start receiving emails,” Profitvolt continued.

Hackforums user Profitvolt says he plans to build his own “bulletproof” hosting network catering to fellow forum users who might want to rent his services for a variety of dodgy activities.

KrebsOnSecurity started asking questions of Resnet after stumbling upon several indications that this company was enabling different types of online abuse in bite-sized monthly packages. The site resnetworking[.]com appears normal enough on the surface, but a review of the customer packages advertised on it suggests the company has courted a very specific type of client.

“No bullshit, just proxies,” reads one (now hidden or removed) area of the site’s shopping cart. Other promotions advertise the use of residential proxies to promote “growth services” on multiple social media platforms including Craigslist, Facebook, Google, Instagram, Spotify, Soundcloud and Twitter.

Resnet also peers with or partners with several other interesting organizations, including:

residential-network[.]com, also known as “IAPS Security Services” (formerly intl-alliance[.]com), which advertises the sale of residential VPNs and mobile 4G/IPv6 proxies aimed at helping customers avoid being blocked while automating different types of activity, from mass-creating social media and email accounts to bulk message sending on platforms like WhatsApp and Facebook.

Laksh Cybersecurity and Defense LLC, which maintains Hexproxy[.]com, another residential proxy service that largely courts customers involved in shoe botting.

Several chunks of IP space from a Russian provider variously known by the names “SERVERSGET” and “Men Danil Valentinovich,” which has been associated with numerous instances of hijacking vast swaths of IP addresses from other organizations quite recently.

Some of Profitvolt’s discussion threads on Hackforums.

WHO IS RESNET?

Resnetworking[.]com lists on its home page the contact phone number 202-643-8533. That number is tied to the registration records for several domains, including resnetworking[.]com, residentialvpn[.]info, and residentialvpn[.]org. All of those domains also have in their historic WHOIS records the name Joshua Powder and Residential Networking Solutions LLC.

Running a reverse WHOIS lookup via Domaintools.com on “Joshua Powder” turns up almost 60 domain names — most of them tied to the email address joshua.powder@gmail.com. Among those are resnetworking[.]info, resvpn[.]com/net/org/info, tobagospeaks[.]com, tthack[.]com and profitvolt[.]com. Recall that “Profitvolt” is the nickname of the Hackforums user advertising resnetworking[.]com.

The email address josh@tthack.com was used to register an account on the scammer-friendly site blackhatworld[.]com under the nickname “BulletProofWebHost.” Here’s a list of domains registered to this email address.

A search on the Joshua Powder and tthack email addresses at Hyas, a startup that specializes in combining data from a number of sources to provide attribution of cybercrime activity, further associates those to mafiacloud@gmail.com and to the phone number 868-360-9983, which is a mobile number assigned by Digicel Trinidad and Tobago Ltd. A full list of domains tied to that 868- number is here.

Hyas’s service also pointed to this post on the Facebook page of the Prince George’s County Economic Development Corporation in Maryland, which appears to include a 2017 photo of Mr. Powder posing with county officials.

‘A GLORIFIED SOLUTIONS PROVIDER’

Roughly three weeks ago, KrebsOnSecurity called the 202 number listed at the top of resnetworking[.]com. To my surprise, a man speaking in a lovely Caribbean-sounding accent answered the call and identified himself as Josh Powder. When I casually asked from where he’d acquired that accent, Powder said he was a native of New Jersey but allowed that he has family members who now live in Trinidad and Tobago.

Powder said Residential Networking Solutions LLC is “a normal co-location Internet provider” that has been in operation for about three years and employs some 65 people.

“You’re not the first person to call us about residential VPNs,” Powder said. “In the past, we did have clients that did host VPNs, but it’s something that’s been discontinued since 2017. All we are is a glorified solutions provider, and we broker and lease Internet lines from different companies.”

When asked about the various “botting” packages for sale on Resnetworking[.]com, Powder replied that the site hadn’t been updated in a while and that these were inactive offers that resulted from a now-discarded business model.

“When we started back in 2016, we were really inexperienced, and hired some SEO [search engine optimization] firms to do marketing,” he explained. “Eventually we realized that this was creating a shitstorm, because it started to make us look a specific way to certain people. So we had to really go through a process of remodeling. That process isn’t complete, and the entire web site is going to retire in about a week’s time.”

Powder maintains that his company does have a contract with AT&T to resell LTE and 4G data services, and that he has a similar arrangement with Sprint. He also suggested that one of the aforementioned companies which partnered with Resnet — IAPS Security Services — was responsible for much of the dodgy activity that previously brought his company abuse complaints and strange phone calls about VPN services.

“That guy reached out to us and he leased service from us and nearly got us into a lot of trouble,” Powder said. “He was doing a lot of illegal stuff, and I think there is an ongoing matter with him legally. That’s what has caused us to be more vigilant and really look at what we do and change it. It attracted too much nonsense.”

Interestingly, when one visits IAPS Security Services’ old domain — intl-alliance[.]com — it now forwards to resvpn[.]com, which is one of the domains registered to Joshua Powder.

Shortly after our conversation, the monthly packages I asked Powder about that were for sale on resnetworking[.]com disappeared from the site, or were hidden behind a login. Also, Resnet’s IPv6 prefixes (a la IAPS Security Services) were removed from the company’s list of addresses. At the same time, a large number of Profitvolt’s posts prior to 2018 were deleted from Hackforums.

EPILOGUE

It appears that the future of low-level abuse targeting some of the most popular Internet destinations is tied to the increasing willingness of the world’s biggest ISPs to resell discrete chunks of their address space to whomever is able to pay for them.

Earlier this week, I had a Skype conversation with an individual who responded to my requests for more information from residential-network[.]com, and this person told me that plenty of mobile and land-line ISPs are more than happy to sell huge amounts of IP addresses to just about anybody.

“Mobile providers also sell mass services,” the person who responded to my Skype request offered. “Rogers in Canada just opened a new package for unlimited 4G data lines and we’re currently in negotiations with them for that service as well. The UK also has 4G providers that have unlimited data lines as well.”

The person responding to my Skype messages said they bought most of their proxies from a reseller at customproxysolutions[.]com, which advertises “the world’s largest network of 4G LTE modems in the United States.”

He added that “Rogers in Canada has a special offer that if you buy more than 50 lines you get a reduced price lower than the $75 Canadian Dollar price tag that they would charge for fewer than 50 lines. So most mobile ISPs want to sell mass lines instead of single lines.”

It remains unclear how much of the Internet address space claimed by these various residential proxy and VPN networks has been acquired legally or through other means. But it seems that Resnet and its business associates are in fact on the cutting edge of what it means to be a bulletproof Internet provider today.

CryptogramInfluence Operations Kill Chain

Influence operations are elusive to define. The Rand Corp.'s definition is as good as any: "the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent." Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too -- laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on "Operation Infektion," a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from "information operations" to "influence operations," because the former is traditionally defined by the US Department of Defense in ways that don't really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate, gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6) -- either journalists or other influencers -- who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their "we don't care" attitudes since the 2016 election, there's no guarantee that they will continue to remove these accounts -- especially since their profits depend on engagement and not accuracy.

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. Releasing stolen emails from Hillary Clinton's campaign chairman John Podesta and the Democratic National Committee, or documents from Emmanuel Macron's campaign in France, were both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media needs to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called "useful idiots." Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious. Although, since one major goal is to convince people that nothing can be trusted, rumors of involvement can also be beneficial. Outright denial was Russia's tactic during the 2016 US presidential election; it let rumors of its involvement circulate during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won't be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker's ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Deterring these attacks will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding "Democracy's Dilemma": how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise -- we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time -- especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn't fit every influence operation, it's important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition's Misinfosec Working Group proposed a "misinformation pyramid." The US Justice Department developed a "Malign Foreign Influence Campaign Cycle," with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there's no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.

Worse Than FailureLowest Bidder Squared

Stack of coins 0214

Initech was in dire straits. The website was dog slow, and the budget had been exceeded by a factor of five already trying to fix it. Korbin, today's submitter, was brought in to help in exchange for decent pay and an office in their facility.

He showed up only to find a boxed-up computer and a brand new flat-packed desk, also still in the box. The majority of the space was a video-recording studio that saw maybe 4-6 hours of use a week. After setting up his office, Korbin spent the next day and a half finding his way around the completely undocumented C# code. The third day, there was a carpenter in the studio area. Inexplicably, said carpenter decided he needed to contact-glue carpet to a set of huge risers ... indoors. At least a gallon of contact cement was involved. In minutes, Korbin got a raging headache, and he was essentially gassed out of the building for the rest of the day. Things were not off to a good start.

Upon asking around, Korbin quickly determined that the contractors originally responsible for coding the website had underbid the project by half, then subcontracted the whole thing out to a team in India to do the work on the cheap. The India team had then done the very same thing, subcontracting it out to the most cut-rate individuals they could find. Everything had been written in triplicate for some reason, making it impossible to determine what was actually powering the website and what was dead code. Furthermore, while this was a database-oriented site, there were no stored procedures, and none of the (sub)subcontractors seemed to understand how to use a JOIN command.

In an effort to tease apart what code was actually needed, Korbin turned on profiling. Only ... it was already on in the test version of the site. With a sudden ominous hunch, he checked the live site—and sure enough, profiling was running in production as well. He shut it off, and instantly, the whole site became more responsive.

The next fix was also pretty simple. The site had a bad habit of asking for information it already had, over and over, without any JOINs. Reducing the frequency of database hits improved performance again, bringing it to within an order of magnitude of what one might expect from a website.
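
To illustrate the kind of change involved (a minimal sketch with an invented schema in Python/SQLite, not the actual Initech C# code), the pattern below replaces per-row queries with a single JOIN so the database is hit once instead of once per member:

```python
import sqlite3

conn = sqlite3.connect("site.db")  # hypothetical schema: members(id, name), scores(member_id, score)

# Anti-pattern: ask the database again for information the code already has.
rows = []
for (member_id,) in conn.execute("SELECT id FROM members").fetchall():
    name = conn.execute("SELECT name FROM members WHERE id = ?", (member_id,)).fetchone()[0]
    score = conn.execute("SELECT score FROM scores WHERE member_id = ?", (member_id,)).fetchone()[0]
    rows.append((name, score))

# One JOIN: a single round trip returns the same data.
rows = conn.execute(
    "SELECT m.name, s.score FROM members m JOIN scores s ON s.member_id = m.id"
).fetchall()
```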

While all this was going on, the leaderboard page had begun timing out. Sure enough, it was an N-squared solution: open database, fetch record, close database, repeat, then compare the two records, putting them in order and beginning again. With 500 members, it was doing 250,000 passes each time someone hit the page. Korbin scrapped the whole thing in favor of the site's first stored procedure, then cached it to call only once a day.
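
Here is a rough sketch of both approaches, continuing with the same invented schema (a Python/SQLite stand-in for the original, not the real code). The slow version pays for a fresh connection and a fetch on every comparison, while the replacement lets the database order the rows once:

```python
import sqlite3

DB = "site.db"  # hypothetical table: members(name, points)

def slow_leaderboard():
    """Open database, fetch record, close, repeat -- the O(N^2) original approach."""
    with sqlite3.connect(DB) as conn:
        names = [r[0] for r in conn.execute("SELECT name FROM members")]

    def fetch_points(name):
        conn = sqlite3.connect(DB)                        # open
        (points,) = conn.execute(
            "SELECT points FROM members WHERE name = ?", (name,)).fetchone()
        conn.close()                                      # close, repeat
        return points

    ranked = []
    remaining = list(names)
    while remaining:                                      # N passes...
        best = remaining[0]
        for other in remaining[1:]:                       # ...of up to N comparisons each
            if fetch_points(other) > fetch_points(best):  # two fetches per comparison
                best = other
        ranked.append(best)
        remaining.remove(best)
    return ranked                                         # roughly 250,000 fetches for 500 members

def fast_leaderboard():
    """The replacement: one ordered query, whose result can be cached for a day."""
    with sqlite3.connect(DB) as conn:
        return conn.execute(
            "SELECT name, points FROM members ORDER BY points DESC").fetchall()
```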

The weeks went on, and the site began to take shape, finally getting something like back on track. Thanks to the botched rollout, however, many of the company's endorsements had vanished, and backers were pulling out. The president got on the phone with some VIP about Facebook—because as we all know, the solution to any company's problem is the solution to every company's problems.

"Facebook was written in PHP. He told me it was the best thing out there. So we're going to completely redo the website in PHP," the president confidently announced at the next all-hands meeting. "I want to hear how long everyone thinks this will take to get done."

The only developers left at that point were Korbin and a junior kid just out of college, with one contractor with some experience on the project.

"Two weeks. Maybe three," the kid replied.

They went around the table, and all the non-programmers chimed in with the 2-3 week assessment. Next to last came the experienced contractor. Korbin's jaw nearly dropped when he weighed in at 3-4 weeks.

"None of that is realistic!" Korbin proclaimed. "Even with the existing code as a road map, it's going to take 4-6 months to rewrite. And with the inevitable feature-creep and fixes for things found in testing, it is likely to take even longer."

Korbin was told the next day he could pick up his final check. Seven months later, he ran into the junior kid again, and asked how the rewrite went.

"It's still ongoing," he admitted.

Planet DebianRuss Allbery: Review: Spinning Silver

Review: Spinning Silver, by Naomi Novik

Publisher: Del Rey
Copyright: 2018
ISBN: 0-399-18100-8
Format: Kindle
Pages: 465

Miryem is the daughter of the village moneylender and the granddaughter (via her mother) of a well-respected moneylender in the city. Her grandfather is good at his job. Her father is not. He's always willing to loan the money out, but collecting it is another matter, and the village knows that and takes advantage of it. Each year is harder than the one before, in part because they have less and less money and in part because the winter is getting harsher and colder. When Miryem's mother falls ill, that's the last straw: she takes her father's ledger and goes to collect the money her family is rightfully owed.

Rather to her surprise, she's good at the job in all the ways her father is not. Daring born of desperation turns into persistent, cold anger at the way her family had been taken advantage of. She's good with numbers, has an eye for investments, and is willing to be firm and harden her heart where her father was not. Her success leads to good food, a warmer home, and her mother's recovery. It also leads to the attention of the Staryk.

The Staryk are the elves of Novik's world. They claim everything white in the forest, travel their own mysterious ice road, and raid villages when they choose. And, one night, one of the Staryk comes to Miryem's house and leaves a small bag of Staryk silver coins, challenging her to turn them into the gold the Staryk value so highly.

This is just the start of Spinning Silver, and Miryem is only one of a broadening cast. She demands the service of Wanda and her younger brother as payment for their father's debt, to the delight (hidden from Miryem) of them both since this provides a way to escape their abusive father. The Staryk silver becomes jewelry with surprising magical powers, which Miryem sells to the local duke for his daughter. The duke's daughter, in turn, draws the attention of the czar, who she met as a child when she found him torturing squirrels. And Miryem finds herself caught up in the world of the Staryk, which works according to rules that she can barely understand and may be a trap that she cannot escape.

Novik makes a risky technical choice in this book and pulls it off beautifully: the entirety of Spinning Silver is written in first person with frequently shifting narrators that are not signaled outside of the text. I think there were five different narrators in total, and I may be forgetting some. Despite that, I was never confused for more than a paragraph about who was speaking due to Novik's command of the differing voices. Novik uses this to great effect to show the inner emotions and motivations of the characters without resorting to the distancing effect of wandering third-person.

That's important for this novel because these characters are not emotionally forthcoming. They can't be. Each of them is operating under sharp constraints that make too much emotion unsafe: Wanda and her brother are abused, the Duke's daughter is valuable primarily as a political pawn and later is juggling the frightening attention of the czar, and Miryem is carefully preserving an icy core of anger against her parents' ineffectual empathy and is trying to navigate the perilous and trap-filled world of the Staryk. The caution and occasional coldness of the characters does require the reader do some work to extrapolate emotions, but I thought the overall effect worked.

Miryem's family is, of course, Jewish. The nature of village interactions with moneylenders make that obvious before the book explicitly states it. I thought Novik built some interesting contrasts between Miryem's navigation of the surrounding anti-Semitism and her navigation of the rules of the Staryk, which start off as far more alien than village life but become more systematic and comprehensible than the pervasive anti-Semitism as Miryem learns more. But I was particularly happy that Novik includes the good as well as the bad of Jewish culture among unforgiving neighbors: a powerful sense of family, household religious practices, Jewish weddings, and a cautious but very deep warmth that provides the emotional core for the last part of the book.

Novik also pulls off a rare feat in the plot structure by transforming most of the apparent villains into sympathetic characters and, unlike The Song of Ice and Fire, does this without making everyone awful. The Staryk, the duke, and even the czar are obvious villains on first appearances, but in each case the truth is more complicated and more interesting. The plot of Spinning Silver is satisfyingly complex and ever-changing, with just the right eventual payoffs for being a good (but cautious and smart!) person.

There were places where Spinning Silver got a bit bleak, such as when the story lingered a bit too long on Miryem trying and failing to navigate the Staryk world while getting herself in deeper and deeper, but her core of righteous anger and the protagonists' careful use of all the leverage that they have carried me through. The ending is entirely satisfying and well worth the journey. Recommended.

Rating: 8 out of 10

,

Planet DebianMarkus Koschany: My Free Software Activities in July 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

DebConf 19 in Curitiba

I have been attending DebConf 19 in Curitiba, Brazil from 16.7.2019 to 28.7.2019. I gave two talks about games in Debian and the Long Term Support project, together with Hugo Lefeuvre, Chris Lamb and Holger Levsen. Especially the Games talk had some immediate positive impact. In response to it Reiner Herrmann and Giovanni Mascellani provided patches for release critical bugs related to GCC-9 and the Python 2 removal and we could already fix some of the more important problems for our current release cycle.

I had a lot of fun in Brazil and again met a couple of new and interesting people.  Thanks to all who helped organizing DebConf 19 and made it the great event it was!

Debian Games

  • We are back in business which means packaging new upstream versions of popular games. I packaged new versions of atomix, dreamchess and pygame-sdl2,
  • uploaded minetest 5.0.1 to unstable and backported it later to buster-backports,
  • uploaded new versions of freeorion and warzone2100 to Buster,
  • fixed bug #931415 in freeciv and #925866 in xteddy,
  • became the new uploader of enemylines7.
  • I reviewed and sponsored patches from Reiner Herrmann to port several games to python3-pygame including whichwayisup, funnyboat and monsterz,
  • and from Giovanni Mascellani, patches for ember and enemylines7.

Debian Java

  • I packaged new upstream versions of robocode, jboss-modules, jboss-jdeparser2, wildfly-common, commons-dbcp2, jboss-logging-tools, jboss-logmanager, libpdfbox2.java, jboss-logging, jboss-xnio, libjide-oss-java,  sweethome3d, sweethome3d-furniture, pdfsam, libsambox-java, libsejda-java, jackson-jr, jackson-dataformat-xml, libsmali-java and apktool.

Misc

  • I updated the popular Firefox/Chromium addons ublock-origin, https-everywhere and privacybadger and also packaged new upstream versions of wabt and binaryen which are both required for building webassembly files from source.

Debian LTS

This was my 41st month as a paid contributor and I have been paid to work 18.5 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1854-1. Issued a security update for libonig fixing 1 CVE.
  • DLA-1860-1. Issued a security update for libxslt fixing 4 CVE.
  • DLA-1846-2. Issued a regression update for unzip to address a Firefox build failure.
  • DLA-1873-1. Issued a security update for proftpd-dfsg fixing 1 CVE.
  • DLA-1886-1. Issued a security update for openjdk-7 fixing 4 CVE.
  • DLA-1890-1. Issued a security update for kde4libs fixing 1 CVE.
  • DLA-1891-1. Reviewed and sponsored a security update for openldap fixing 2 CVE prepared by Ryan Tandy.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project but all Debian users benefit from it without cost. The current ELTS release is Debian 7 „Wheezy“. This was my fourteenth month and I have been paid to work 15 hours on ELTS.

  • I was in charge of our ELTS frontdesk from 15.07.2019 until 21.07.2019 and I triaged CVEs in openjdk7, libxslt, libonig, php5, wireshark, python2.7, libsdl1.2, patch, suricata and libssh2.
  • ELA-143-1. Issued a security update for libonig fixing 1 CVE.
  • ELA-145-1.  Issued a security update for libxslt fixing 2 CVE.
  • ELA-151-1. Issued a security update for linux fixing 3 CVE.
  • ELA-154-1. Issued a security update for openjdk-7 fixing 4 CVE.

Thanks for reading and see you next time.

,

Planet DebianMichael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers commonly can make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.
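
As a very rough sketch of what such a retrofit could look like (illustrative only; neither dpkg nor rpm offers this today, and the lock path is invented), every hook would have to take an exclusive lock around the shared state it touches before the package manager could safely run hooks from different packages in parallel:

```python
import fcntl
import os

LOCK_PATH = "/run/lock/pkg-hook-etc-passwd.lock"  # hypothetical per-resource lock file

def run_hook_exclusively(hook):
    """Serialize hooks that touch the same shared resource (e.g. /etc/passwd)."""
    os.makedirs(os.path.dirname(LOCK_PATH), exist_ok=True)
    with open(LOCK_PATH, "w") as lock_file:
        fcntl.flock(lock_file, fcntl.LOCK_EX)   # blocks until no other hook holds the lock
        try:
            hook()                              # the actual maintainer-script work
        finally:
            fcntl.flock(lock_file, fcntl.LOCK_UN)
```

The catch is that each of the thousands of existing hooks would have to declare which resources it touches for such locking to be correct, which is precisely the overhaul that makes deletion look more attractive.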

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.
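
A minimal sketch of what lazy evaluation can look like for that example (my own illustration, not how any particular distribution packages OpenSSH): the check runs when the daemon starts on the target machine rather than in an install-time hook, so a flashed image never carries a shared key:

```python
import os
import subprocess

HOST_KEY = "/etc/ssh/ssh_host_ed25519_key"   # standard OpenSSH host key path

def ensure_host_key():
    """Generate the host key on first start instead of in a package hook."""
    if not os.path.exists(HOST_KEY):
        # Runs on the target machine, so a live USB or SD card image
        # produces a unique key per machine instead of a baked-in one.
        subprocess.run(
            ["ssh-keygen", "-t", "ed25519", "-N", "", "-f", HOST_KEY],
            check=True,
        )

if __name__ == "__main__":
    ensure_host_key()
    # ...then hand over to the real daemon, e.g. via os.execv("/usr/sbin/sshd", ...)
```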

Why do users keep packages installed they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
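
To illustrate what such declarative configuration can look like, here is a minimal sketch in the sysusers.d(5) format that systemd already understands (the file path is made up for the example; the postgres user is the one mentioned above):

# /usr/lib/sysusers.d/postgresql.conf (illustrative path)
# Type  Name      ID  GECOS                Home directory
u       postgres  -   "PostgreSQL server"  /var/lib/postgresql

Because entries like these are data rather than arbitrary code, a tool such as systemd-sysusers (or the package manager itself) can apply all of them in one batched pass.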

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.
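
As a sketch of what this looks like for the OpenSSH example (unit and path names are illustrative, not taken from any particular distribution), host key generation can run as a oneshot service ordered before sshd instead of in a package hook:

# /usr/lib/systemd/system/sshd-keygen.service (illustrative)
[Unit]
Description=Generate missing OpenSSH host keys on first boot
Before=sshd.service

[Service]
Type=oneshot
# ssh-keygen -A only creates host keys that do not exist yet
ExecStart=/usr/bin/ssh-keygen -A

[Install]
WantedBy=multi-user.target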

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Planet DebianMichael Stapelberg: Linux package managers are slow

I measured how long the most popular Linux distributions’ package managers take to install small and large packages (the ack(1p) source code search Perl script and qemu, respectively).

Where required, my measurements include metadata updates such as transferring an up-to-date package list. For me, requiring a metadata update is the more common case, particularly on live systems or within Docker containers.

All measurements were taken on an Intel(R) Core(TM) i9-9900K CPU @ 3.60GHz running Docker 1.13.1 on Linux 4.19, backed by a Samsung 970 Pro NVMe drive boasting many hundreds of MB/s write performance.

See Appendix B for details on the measurement method and command outputs.

Measurements

Keep in mind that these are one-time measurements. They should be indicative of actual performance, but your experience may vary.

ack (small Perl program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               107 MB   29s               3.7 MB/s
NixOS          Nix               15 MB    14s               1.1 MB/s
Debian         apt               15 MB    4s                3.7 MB/s
Arch Linux     pacman            6.5 MB   3s                2.1 MB/s
Alpine         apk               10 MB    1s                10.0 MB/s

qemu (large C program)

distribution   package manager   data     wall-clock time   rate
Fedora         dnf               266 MB   1m8s              3.9 MB/s
Arch Linux     pacman            124 MB   1m2s              2.0 MB/s
Debian         apt               159 MB   51s               3.1 MB/s
NixOS          Nix               262 MB   38s               6.8 MB/s
Alpine         apk               26 MB    2.4s              10.8 MB/s


The difference between the slowest and fastest package managers is 30x!

How can Alpine’s apk and Arch Linux’s pacman be an order of magnitude faster than the rest? They are doing a lot less than the others, and more efficiently, too.

Pain point: too much metadata

For example, Fedora transfers a lot more data than others because its main package list is 60 MB (compressed!) alone. Compare that with Alpine’s 734 KB APKINDEX.tar.gz.

Of course the extra metadata which Fedora provides helps some use cases; otherwise they would hopefully have removed it altogether. Still, the amount of metadata seems excessive for installing a single package, which I consider the main use case of an interactive package manager.

I expect any modern Linux distribution to only transfer absolutely required data to complete my task.

Pain point: no concurrency

Because they need to sequence executing arbitrary package maintainer-provided code (hooks and triggers), all tested package managers need to install packages sequentially (one after the other) instead of concurrently (all at the same time).

In my blog post “Can we do without hooks and triggers?”, I outline that hooks and triggers are not strictly necessary to build a working Linux distribution.

Thought experiment: further speed-ups

Strictly speaking, the only required feature of a package manager is to make available the package contents so that the package can be used: a program can be started, a kernel module can be loaded, etc.

By only implementing what’s needed for this feature, and nothing more, a package manager could likely beat apk’s performance. It could, for example:

  • skip archive extraction by mounting file system images (like AppImage or snappy)
  • use compression which is light on CPU, as networks are fast (like apk)
  • skip fsync when it is safe to do so, i.e.:
    • package installations don’t modify system state
    • atomic package installation (e.g. an append-only package store)
    • automatically clean up the package store after crashes

Current landscape

Here’s a table outlining how the various package managers listed on Wikipedia’s list of software package management systems fare:

name         scope    package file format                 hooks/triggers
AppImage     apps     image: ISO9660, SquashFS            no
snappy       apps     image: SquashFS                     yes: hooks
FlatPak      apps     archive: OSTree                     no
0install     apps     archive: tar.bz2                    no
nix, guix    distro   archive: nar.{bz2,xz}               activation script
dpkg         distro   archive: tar.{gz,xz,bz2} in ar(1)   yes
rpm          distro   archive: cpio.{bz2,lz,xz}           scriptlets
pacman       distro   archive: tar.xz                     install
slackware    distro   archive: tar.{gz,xz}                yes: doinst.sh
apk          distro   archive: tar.gz                     yes: .post-install
Entropy      distro   archive: tar.bz2                    yes
ipkg, opkg   distro   archive: tar{,.gz}                  yes

Conclusion

As per the current landscape, there is no distribution-scoped package manager which uses images and leaves out hooks and triggers, not even in smaller Linux distributions.

I think that space is really interesting, as it uses a minimal design to achieve significant real-world speed-ups.

I have explored this idea in much more detail, and am happy to talk more about it in my post “Introducing the distri research linux distribution”.

There are a couple of recent developments going in the same direction.

Appendix B: measurement details

ack


Fedora’s dnf takes almost 30 seconds to fetch and unpack 107 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y ack
Fedora Modular 30 - x86_64            4.4 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  3.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           17 MB/s |  19 MB     00:01
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  44 Packages

Total download size: 13 M
Installed size: 42 M
[…]
real	0m29.498s
user	0m22.954s
sys	0m1.085s

NixOS’s Nix takes 14s to fetch and unpack 15 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i perl5.28.2-ack-2.28'
unpacking channels...
created 2 symlinks in user environment
installing 'perl5.28.2-ack-2.28'
these paths will be fetched (14.91 MiB download, 80.83 MiB unpacked):
  /nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2
  /nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48
  /nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man
  /nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27
  /nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31
  /nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53
  /nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16
  /nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28
copying path '/nix/store/gkrpl3k6s43fkg71n0269yq3p1f0al88-perl5.28.2-ack-2.28-man' from 'https://cache.nixos.org'...
copying path '/nix/store/iykxb0bmfjmi7s53kfg6pjbfpd8jmza6-glibc-2.27' from 'https://cache.nixos.org'...
copying path '/nix/store/x4knf14z1p0ci72gl314i7vza93iy7yc-perl5.28.2-File-Next-1.16' from 'https://cache.nixos.org'...
copying path '/nix/store/89gi8cbp8l5sf0m8pgynp2mh1c6pk1gk-attr-2.4.48' from 'https://cache.nixos.org'...
copying path '/nix/store/svgkibi7105pm151prywndsgvmc4qvzs-acl-2.2.53' from 'https://cache.nixos.org'...
copying path '/nix/store/k8lhqzpaaymshchz8ky3z4653h4kln9d-coreutils-8.31' from 'https://cache.nixos.org'...
copying path '/nix/store/57iv2vch31v8plcjrk97lcw1zbwb2n9r-perl-5.28.2' from 'https://cache.nixos.org'...
copying path '/nix/store/zfj7ria2kwqzqj9dh91kj9kwsynxdfk0-perl5.28.2-ack-2.28' from 'https://cache.nixos.org'...
building '/nix/store/q3243sjg91x1m8ipl0sj5gjzpnbgxrqw-user-environment.drv'...
created 56 symlinks in user environment
real	0m 14.02s
user	0m 8.83s
sys	0m 2.69s

Debian’s apt takes almost 10 seconds to fetch and unpack 16 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y ack-grep)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [233 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8270 kB]
Fetched 8502 kB in 2s (4764 kB/s)
[…]
The following NEW packages will be installed:
  ack ack-grep libfile-next-perl libgdbm-compat4 libgdbm5 libperl5.26 netbase perl perl-modules-5.26
The following packages will be upgraded:
  perl-base
1 upgraded, 9 newly installed, 0 to remove and 60 not upgraded.
Need to get 8238 kB of archives.
After this operation, 42.3 MB of additional disk space will be used.
[…]
real	0m9.096s
user	0m2.616s
sys	0m0.441s

Arch Linux’s pacman takes a little over 3s to fetch and unpack 6.5 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm ack)
:: Synchronizing package databases...
 core            132.2 KiB  1033K/s 00:00
 extra          1629.6 KiB  2.95M/s 00:01
 community         4.9 MiB  5.75M/s 00:01
[…]
Total Download Size:   0.07 MiB
Total Installed Size:  0.19 MiB
[…]
real	0m3.354s
user	0m0.224s
sys	0m0.049s

Alpine’s apk takes only about 1 second to fetch and unpack 10 MB.

% docker run -t -i alpine
/ # time apk add ack
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
(1/4) Installing perl-file-next (1.16-r0)
(2/4) Installing libbz2 (1.0.6-r7)
(3/4) Installing perl (5.28.2-r1)
(4/4) Installing ack (3.0.0-r0)
Executing busybox-1.30.1-r2.trigger
OK: 44 MiB in 18 packages
real	0m 0.96s
user	0m 0.25s
sys	0m 0.07s

qemu


Fedora’s dnf takes over a minute to fetch and unpack 266 MB.

% docker run -t -i fedora /bin/bash
[root@722e6df10258 /]# time dnf install -y qemu
Fedora Modular 30 - x86_64            3.1 MB/s | 2.7 MB     00:00
Fedora Modular 30 - x86_64 - Updates  2.7 MB/s | 2.4 MB     00:00
Fedora 30 - x86_64 - Updates           20 MB/s |  19 MB     00:00
Fedora 30 - x86_64                     31 MB/s |  70 MB     00:02
[…]
Install  262 Packages
Upgrade    4 Packages

Total download size: 172 M
[…]
real	1m7.877s
user	0m44.237s
sys	0m3.258s

NixOS’s Nix takes 38s to fetch and unpack 262 MB.

% docker run -t -i nixos/nix
39e9186422ba:/# time sh -c 'nix-channel --update && nix-env -i qemu-4.0.0'
unpacking channels...
created 2 symlinks in user environment
installing 'qemu-4.0.0'
these paths will be fetched (262.18 MiB download, 1364.54 MiB unpacked):
[…]
real	0m 38.49s
user	0m 26.52s
sys	0m 4.43s

Debian’s apt takes 51 seconds to fetch and unpack 159 MB.

% docker run -t -i debian:sid
root@b7cc25a927ab:/# time (apt update && apt install -y qemu-system-x86)
Get:1 http://cdn-fastly.deb.debian.org/debian sid InRelease [149 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian sid/main amd64 Packages [8426 kB]
Fetched 8574 kB in 1s (6716 kB/s)
[…]
Fetched 151 MB in 2s (64.6 MB/s)
[…]
real	0m51.583s
user	0m15.671s
sys	0m3.732s

Arch Linux’s pacman takes 1m2s to fetch and unpack 124 MB.

% docker run -t -i archlinux/base
[root@9604e4ae2367 /]# time (pacman -Sy && pacman -S --noconfirm qemu)
:: Synchronizing package databases...
 core       132.2 KiB   751K/s 00:00
 extra     1629.6 KiB  3.04M/s 00:01
 community    4.9 MiB  6.16M/s 00:01
[…]
Total Download Size:   123.20 MiB
Total Installed Size:  587.84 MiB
[…]
real	1m2.475s
user	0m9.272s
sys	0m2.458s

Alpine’s apk takes only about 2.4 seconds to fetch and unpack 26 MB.

% docker run -t -i alpine
/ # time apk add qemu-system-x86_64
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.10/community/x86_64/APKINDEX.tar.gz
[…]
OK: 78 MiB in 95 packages
real	0m 2.43s
user	0m 0.46s
sys	0m 0.09s

Planet DebianMichael Stapelberg: distri: a Linux distribution to research fast package management

Over the last year or so I have worked on a research linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project to research linux distribution development, i.e. try out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.

Motivation

I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

I frequently noticed a large gap between the actual speed of an operation (e.g. doing an update) and the speed that back-of-the-envelope calculations suggest should be possible. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit¹, through my work on distri, I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri however uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.
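
As a concrete sketch of such a build (the flag values reuse the prefix and --datadir paths mentioned elsewhere in this post), an autotools-based package would be configured along these lines:

./configure \
    --prefix=/ro/zsh-amd64-5.6.2-3/out \
    --datadir=/ro/share
make
make install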

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man.
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share sub directory of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs from their binary packages such as Google Chrome, Spotify, or Microsoft’s Visual Studio Code.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even on high-speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh in their mind.

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number.

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.

Fast installation also makes other use cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at https://distr1.org. The website is just the README for now, but we can improve that later.

The repository can be found at https://github.com/distr1/distri

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

https://michael.stapelberg.ch/posts/tags/distri/feed.xml

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Planet DebianCyril Brulebois: Sending HTML messages with Net::XMPP (Perl)

Executive summary

It’s perfectly possible! Jump to the HTML demo!

Longer version

This started with a very simple need: wanting to improve the notifications I’m receiving from various sources. Those include:

  • changes or failures reported during Puppet runs on my own infrastructure, and at a customer's;
  • build failures for the Debian Installer;
  • changes in banking amounts;
  • and lately: build status for jobs in a customer’s Jenkins instance.

I’ve been using plaintext notifications for a number of years but I decided to try and pimp them a little by adding some colors.

While the XMPP-sending details are usually hidden in a local module, here’s a small self-contained example: connecting to a server, sending credentials, and then sending a message to someone else. Of course, one might want to tweak the Configuration section before trying to run this script…

#!/usr/bin/perl
use strict;
use warnings;

use Net::XMPP;

# Configuration:
my $hostname = 'example.org';
my $username = 'bot';
my $password = 'call-me-alan';
my $resource = 'demo';
my $recipient = 'human@example.org';

# Open connection:
my $con = Net::XMPP::Client->new();
my $status = $con->Connect(
    hostname       => $hostname,
    connectiontype => 'tcpip',
    tls            => 1,
    ssl_ca_path    => '/etc/ssl/certs',
);
die 'XMPP connection failed'
    if ! defined($status);

# Log in:
my @result = $con->AuthSend(
    hostname => $hostname,
    username => $username,
    password => $password,
    resource => $resource,
);
die 'XMPP authentication failed'
    if $result[0] ne 'ok';

# Send plaintext message:
my $msg = 'Hello, World!';
my $res = $con->MessageSend(
    to   => $recipient,
    body => $msg,
    type => 'chat',
);
die('ERROR: XMPP message failed')
    if $res != 0;

For reference, here’s what the XML message looks like in Gajim’s XML console (on the receiving end):

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>Hello, World!</body>
</message>

Issues start when one tries to send some HTML message, e.g. with the last part changed to:

# Send plaintext message:
my $msg = 'This is a <b>failing</b> test';
my $res = $con->MessageSend(
    to   => $recipient,
    body => $msg,
    type => 'chat',
);

as that leads to the following message:

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>This is a &lt;b&gt;failing&lt;/b&gt; test</body>
</message>

So tags are getting encoded and one gets to see the uninterpreted “HTML code”.

Trying various things to embed that inside <body> and <html> tags, with or without namespaces, led nowhere.

Looking at a message sent from Gajim to Gajim (so that I could craft an HTML message myself and inspect it), I’ve noticed it goes this way (edited to concentrate on important parts):

<message xmlns="jabber:client" to="human@example.org/Gajim" type="chat">
  <body>Hello, World!</body>
  <html xmlns="http://jabber.org/protocol/xhtml-im">
    <body xmlns="http://www.w3.org/1999/xhtml">
      <p>Hello, <strong>World</strong>!</p>
    </body>
  </html>
</message>

Two takeaways here:

  • The message is sent both in plaintext and in HTML. It seems Gajim archives the plaintext version, as opening the history/logs only shows the textual version.

  • The fact that the HTML message is under a different path (/message/html as opposed to /message/body) means that one cannot use the MessageSend method to send HTML messages…

This was verified by checking the documentation and code of the Net::XMPP::Message module. It comes with various getters and setters for attributes. Those are then automatically collected when the message is serialized into XML (through the GetXML() method). Trying to add handling for a new HTML attribute would mean being extra careful, as that would need to be treated with $type = 'raw'.

Oh, wait a minute! While using git grep in the sources, looking for that raw type thing, I’ve discovered what sounded promising: an InsertRawXML() method, that doesn’t appear anywhere in either the code or the documentation of the Net::XMPP::Message module.

It’s available, though! Because Net::XMPP::Message is derived from Net::XMPP::Stanza:

use Net::XMPP::Stanza;
use base qw( Net::XMPP::Stanza );

which then in turn comes with this function:

##############################################################################
#
# InsertRawXML - puts the specified string onto the list for raw XML to be
#                included in the packet.
#
##############################################################################

Let’s put that aside for a moment and get back to the MessageSend() method. It wants parameters that can be passed to the Net::XMPP::Message SetMessage() method, and here is its entire code:

###############################################################################
#
# MessageSend - Takes the same hash that Net::XMPP::Message->SetMessage
#               takes and sends the message to the server.
#
###############################################################################
sub MessageSend
{
    my $self = shift;

    my $mess = $self->_message();
    $mess->SetMessage(@_);
    $self->Send($mess);
}

The first assignment is basically equivalent to my $mess = Net::XMPP::Message->new();, so what this function does is: creating a Net::XMPP::Message for us, passing all parameters there, and handing the resulting object over to the Send() method. All in all, that’s merely a proxy.

HTML demo

The question becomes: what if we were to create that object ourselves, then tweaking it a little, and then passing it directly to Send(), instead of using the slightly limited MessageSend()? Let’s see what the rewritten sending part would look like:

# Send HTML message:
my $text = 'This is a working test';
my $html = 'This is a <b>working</b> test';

my $message = Net::XMPP::Message->new();
$message->SetMessage(
    to   => $recipient,
    body => $text,
    type => 'chat',
);
$message->InsertRawXML("<html><body>$html</body></html>");
my $res = $con->Send($message);

And tada!

<message type='chat' to='human@example.org' from='bot@example.org/demo'>
  <body>This is a working test</body>
  <html>
    <body>This is a <b>working</b> test</body>
  </html>
</message>

I’m absolutely no expert when it comes to XMPP standards, and one might need/want to set some more metadata like xmlns but I’m happy enough with this solution that I thought I’d share it as is. ;)
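
For completeness, here is a variation of the sending part that also sets the namespaces seen in the Gajim capture above (a sketch; untested beyond matching that capture):

# Send HTML message with XHTML-IM namespaces:
my $message = Net::XMPP::Message->new();
$message->SetMessage(
    to   => $recipient,
    body => $text,
    type => 'chat',
);
$message->InsertRawXML(
      '<html xmlns="http://jabber.org/protocol/xhtml-im">'
    . '<body xmlns="http://www.w3.org/1999/xhtml">' . $html . '</body>'
    . '</html>'
);
my $res = $con->Send($message);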

,

CryptogramFriday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we're told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There's also plenty of work to do with using the fins for dynamic control, which the researchers say will "reveal the superiority of the natural flying squid movement."

I can't find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianBits from Debian: Debian celebrates 26 years, Happy DebianDay!

26 years ago today in a single post to the comp.os.linux.development newsgroup, Ian Murdock announced the completion of a brand new Linux release named Debian.

Since that day we’ve been into outer space, typed over 1,288,688,830 lines of code, spawned over 300 derivatives, been enhanced by 6,155 known contributors, and filed over 975,619 bug reports.

We are home to a community of thousands of users around the globe, we gather to host our annual Debian Developers Conference DebConf which spans the world in a different country each year, and of course today's many DebianDay celebrations held around the world.

It's not too late to throw an impromptu DebianDay celebration or to go and join one of the many celebrations already underway.

As we celebrate our own anniversary, we also want to celebrate our many contributors, developers, teams, groups, maintainers, and users. It is all of your effort, support, and drive that continue to make Debian truly: The universal operating system.

Happy DebianDay!

Planet DebianJonathan McDowell: DebConf19: Brazil

My first DebConf was DebConf4, held in Porto Alegre, Brazil back in 2004. Uncle Steve did the majority of the travel arrangements for 6 of us to go. We had some mishaps which we still tease him about, but it was a great experience. So when I learnt DebConf19 was to be in Brazil again, this time in Curitiba, I had to go. So last November I realised flights were only likely to get more expensive, that I’d really kick myself if I didn’t go, and so I booked my tickets. A bunch of life happened in the meantime that meant the timing wasn’t particularly great for me - it’s been a busy 6 months - but going was still the right move.

One thing that struck me about DC19 is that a lot of the faces I’m used to seeing at a DebConf weren’t there. Only myself and Steve from the UK DC4 group made it, for example. I don’t know if that’s due to the travelling distances involved, or just the fact that attendance varies and this happened to be a year where a number of people couldn’t make it. Nonetheless I was able to catch up with a number of people I only really see at DebConfs, as well as getting to hang out with some new folk.

Given how busy I’ve been this year and expect to be for at least the next year I set myself a hard goal of not committing to any additional tasks. That said DebConf often provides a welcome space to concentrate on technical bits. I reviewed and merged dkg’s work on WKD and DANE for the Debian keyring under debian.org - we’re not exposed to the recent keyserver network issues due to the fact the keyring is curated, but providing additional access to our keyring makes sense if it can be done easily. I spent some time with Ian Jackson talking about dgit - I’m not a user of it at present, but I’m intrigued by the potential for being able to do Debian package uploads via signed git tags. Of course I also attended a variety of different talks (and, as usual, at times the schedule conflicted such that I had a difficult choice about which option to chose for a particular slot).

This also marks the first time I did a non-team related talk at DebConf, warbling about my home automation (similar to my NI Dev Conf talk but with some more bits about the Debian involvement thrown in).

In addition I co-presented a couple of talks for teams I’m part of.

I only realised late in the week that 2 talks I’d normally expect to attend, a Software in the Public Interest BoF and a New Member BoF, were not on the schedule, but to be honest I don’t think I’d have been able to run either even if I’d realised in advance.

Finally, DebConf wouldn’t be DebConf without playing with some embedded hardware at some point, and this year it was the Caninos Loucos Labrador. This is a Brazilian grown single board ARM based computer with a modular form factor designed for easy integration into bigger projects. There’s nothing particularly remarkable about the hardware and you might ask why not just use a Pi? The reason is that import duties in Brazil make such things prohibitively expensive - importing a $35 board can end up costing $150 by the time shipping, taxes and customs fees are all taken into account. The intent is to design and build locally, as components can be imported with minimal taxes if the final product is being assembled within Brazil. And Mercosul allows access to many other South American countries without tariffs. I’d have loved to get hold of one of the boards, but they’ve only produced 1000 in the initial run and really need to get them into the hands of people who can help progress the project rather than those who don’t have enough time.

Next year DebConf20 is in Haifa - a city I’ve spent some time in before - but I’ve made the decision not to attend; rather than spending a single 7-10 day chunk away from home I’m going to aim to attend some more local conferences for shorter periods of time.

CryptogramSoftware Vulnerabilities in the Boeing 787

Boeing left its software unprotected, and researchers have analyzed it for vulnerabilities:

At the Black Hat security conference today in Las Vegas, Santamarta, a researcher for security firm IOActive, plans to present his findings, including the details of multiple serious security flaws in the code for a component of the 787 known as a Crew Information Service/Maintenance System. The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots. Santamarta says he found a slew of memory corruption vulnerabilities in that CIS/MS, and he claims that a hacker could use those flaws as a foothold inside a restricted part of a plane's network. An attacker could potentially pivot, Santamarta says, from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors. Boeing maintains that other security barriers in the 787's network architecture would make that progression impossible.

Santamarta admits that he doesn't have enough visibility into the 787's internals to know if those security barriers are circumventable. But he says his research nonetheless represents a significant step toward showing the possibility of an actual plane-hacking technique. "We don't have a 787 to test, so we can't assess the impact," Santamarta says. "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing denies that there's any problem:

In a statement, Boeing said it had investigated IOActive's claims and concluded that they don't represent any real threat of a cyberattack. "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system," the company's statement reads. "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

This being Black Hat and Las Vegas, I'll say it this way: I would bet money that Boeing is wrong. I don't have an opinion about whether or not it's lying.

Worse Than FailureError'd: What About the Fish?

"On the one hand, I don't want to know what the fish has to do with Boris Johnson's love life...but on the other hand I have to know!" Mark R. writes.

 

"Not sure if that's a new GDPR rule or the Slack Mailbot's weekend was just that much better then mine," Adam G. writes.

 

Connor W. wrote, "You know what, I think I'll just stay inside."

 

"It's great to see that an attempt at personalization was made, but whatever happened to 'trust but verify'?" writes Rob H.

 

"For a while, I thought that, maybe, I didn't actually know how to use my iPhone's alarm. Instead, I found that it just wasn't working right. So, I contacted Apple Support, and while they were initially skeptical that it was an iOS issue, this morning, I actually have proof!" Markus G. wrote.

 

Tim G. writes, "I guess that's better than an angry error message."

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianFrançois Marier: Passwordless restricted guest account on Ubuntu

Here's how I created a restricted but not ephemeral guest account on an Ubuntu 18.04 desktop computer that can be used without a password.

Create a user that can login without a password

First of all, I created a new user with a random password (using pwgen -s 64):

adduser guest

Then following these instructions, I created a new group and added the user to it:

addgroup nopasswdlogin
adduser guest nopasswdlogin

In order to let that user login using GDM without a password, I added the following to the top of /etc/pam.d/gdm-password:

auth    sufficient      pam_succeed_if.so user ingroup nopasswdlogin

Note that this user is unable to ssh into this machine since it's not part of the sshuser group I have setup in my sshd configuration.
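
For reference, the relevant part of that sshd configuration is just a group allow-list along these lines (a sketch; adjust the group name to your setup):

# /etc/ssh/sshd_config (excerpt)
AllowGroups sshuser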

Privacy settings

In order to reduce the amount of digital traces left between guest sessions, I logged into the account using a GNOME session, opened gnome-control-center, and adjusted the settings in the privacy section.

Then I replaced Firefox with Brave in the sidebar, set it as the default browser in gnome-control-center, and configured it to clear everything on exit.

Create a password-less system keyring

In order to suppress prompts to unlock gnome-keyring, I opened seahorse and deleted the default keyring.

Then I started Brave, which prompted me to create a new keyring so that it can save the contents of its password manager securely. I set an empty password on that new keyring, since I'm not going to be using it.

I also made sure to disable saving of passwords, payment methods and addresses in the browser too.

Restrict user account further

Finally, taking an idea from this similar solution, I prevented the user from making any system-wide changes by putting the following in /etc/polkit-1/localauthority/50-local.d/10-guest-policy.pkla:

[guest-policy]
Identity=unix-user:guest
Action=*
ResultAny=no
ResultInactive=no
ResultActive=no

If you know of any other restrictions that could be added, please leave a comment!

,

Planet DebianJulian Andres Klode: APT Patterns

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach. I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into a APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of ?config-files, or ~m for ?automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not

apt not understanding invalid patterns

Supported syntax

At the moment, APT supports two kinds of patterns: Basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge, patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples

apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left
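
Patterns can also be combined directly on the command line; a sketch (depending on your shell, the quotes keep ? and the parentheses from being treated as glob or subshell syntax):

apt remove '?and(?garbage,?name(^lib))'

Remove only those automatically removable packages whose name starts with lib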

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)

the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it would fall back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fallback to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

CryptogramBypassing Apple FaceID's Liveness Detection Feature

Apple's FaceID has a liveness detection feature, which prevents someone from unlocking a victim's phone by putting it in front of his face while he's sleeping. That feature has been hacked:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim's FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim's face, the researchers demonstrated how they could bypass Apple's FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

Worse Than FailureCodeSOD: A Devil With a Date

Jim was adding a feature to the backend. This feature updated a few fields on an object, and then handed the object off as JSON to the front-end.

Adding the feature seemed pretty simple, but when Jim went to check out its behavior in the front-end, he got validation errors. Something in the data getting passed back by his web service was fighting with the front end.

On its surface, that seemed like a reasonable problem, but when looking into it, Jim discovered that it was the record_update_date field which was causing validation issues. The front-end displayed this as a read only field, so there was no reason to do any client-side validation in the first place, and that field was never sent to the backend, so there was even less than no reason to do validation.

Worse, the field had, at least to the eye, a valid date: 2019-07-29T00:00:00.000Z. Even weirder, if Jim changed the backend to just return 2019-07-29, everything worked. He dug into the validation code to see what might be wrong about it:

/**
 * Custom validation
 *
 * This is a callback function for ajv custom keywords
 *
 * @param  {object} wsFormat aiFormat property content
 * @param  {object} data Data (of element type) from document where validation is required
 * @param  {object} itemSchema Schema part from wsValidation keyword
 * @param  {string} dataPath Path to document element
 * @param  {object} parentData Data of parent object
 * @param  {string} key Property name
 * @param  {object} rootData Document data
 */
function wsFormatFunction(wsFormat, data, itemSchema, dataPath, parentData, key, rootData) {

    let valid;
    switch (wsFormat) {
        case 'date': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/;
            valid = regex.test(data);
            break;
        }
        case 'date-time': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3]\d[t\s](?:[0-2]\d:[0-5]\d:[0-5]\d|23:59:60)(?:\.\d+)?(?:z|[+-]\d\d:\d\d)$/i;
            valid = regex.test(data);
            break;
        }
        case 'time': {
            let regex = /^(0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]$/;
            valid = regex.test(data);
            break;
        }
        default: throw 'Unknown wsFormat: ' + wsFormat;
    }

    if (!valid) {
        wsFormatFunction['errors'] = wsFormatFunction['errors'] || [];

        wsFormatFunction['errors'].push({
            keyword: 'wsFormat',
            dataPath: dataPath,
            message: 'should match format "' + wsFormat + '"',
            schema: itemSchema,
            data: data
        });
    }

    return valid;
}

When it starts with “Custom validation” and it involves dates, you know you’re in for a bad time. Worse, it’s custom validation, dates, and regular expressions written by someone who clearly didn’t understand regular expressions.

Let’s take a peek at the branch which was causing Jim’s error, and examine the regex:

/^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/

It should start with four digits, followed by a dash, followed by a value between 0 and 1. Then another digit, then a dash, then a number between 0 and 3, then the time (optionally), then a final digit.

It’s obvious why Jim’s perfectly reasonable date wasn’t working: it needed to be 2019-07-2T00:00:00.000Z9. Or, if Jim just didn’t include the timestamp, not only would 2019-07-29 be a valid date, but so would 2019-19-39, which just so happens to be my birthday. Mark your calendars for the 39th of Undevigintiber.
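
For contrast, a corrected version of that branch (a sketch; still only a rough structural check, not real date validation) keeps the day's second digit before the optional time part and escapes the literal dots:

    case 'date': {
        let regex = /^\d{4}-[0-1]\d-[0-3]\d(T00:00:00\.000Z)?$/;
        valid = regex.test(data);
        break;
    }

With that change, both 2019-07-29 and 2019-07-29T00:00:00.000Z pass. It would still accept 2019-19-39, though, so a real fix would parse the date instead of pattern-matching it.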

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

CryptogramSide-Channel Attack against Electronic Locks

Several high-security electronic locks are vulnerable to side-channel attacks involving power monitoring.

Cory DoctorowMy appearance on the MMT podcast

I’ve been following the Modern Monetary Theory debate for about 18 months, and I’m largely a convert: governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.

I was delighted to be invited onto the MMT Podcast to discuss the ways that MMT dovetails with the fight against monopoly and inequality, and how science-fiction storytelling can bring complicated technical subjects (like adversarial interoperability) to life.

We talked so long that they’ve split it into two episodes, the first of which is now live (MP3).

Krebs on SecurityMeet Bluetana, the Scourge of Pump Skimmers

“Bluetana,” a new mobile app that looks for Bluetooth-based payment card skimmers hidden inside gas pumps, is helping police and state employees more rapidly and accurately locate compromised fuel stations across the nation, a study released this week suggests. Data collected in the course of the investigation also reveals some fascinating details that may help explain why these pump skimmers are so lucrative and ubiquitous.

The new app, now being used by agencies in several states, is the brainchild of computer scientists from the University of California San Diego and the University of Illinois Urbana-Champaign, who say they developed the software in tandem with technical input from the U.S. Secret Service (the federal agency most commonly called in to investigate pump skimming rings).

The Bluetooth pump skimmer scanner app ‘Bluetana’ in action.

Gas pumps are a perennial target of skimmer thieves for several reasons. They are usually unattended, and in too many cases a handful of master keys will open a great many pumps at a variety of filling stations.

The skimming devices can then be attached to electronics inside the pumps in a matter of seconds, and because they’re also wired to the pump’s internal power supply the skimmers can operate indefinitely without the need of short-lived batteries.

And increasingly, these pump skimmers are fashioned to relay stolen card data and PINs via Bluetooth wireless technology, meaning the thieves who install them can periodically download stolen card data just by pulling up to a compromised pump and remotely connecting to it from a Bluetooth-enabled mobile device or laptop.

According to the study, some 44 volunteers  — mostly law enforcement officials and state employees — were equipped with Bluetana over a year-long experiment to test the effectiveness of the scanning app.

The researchers said their volunteers collected Bluetooth scans at 1,185 gas stations across six states, and that Bluetana detected a total of 64 skimmers across four of those states. All of the skimmers were later collected by law enforcement, including two that were reportedly missed in manual safety inspections of the pumps six months earlier.

While several other Android-based apps designed to find pump skimmers are already available, the researchers said Bluetana was developed with an eye toward eliminating false-positives that some of these other apps can fail to distinguish.

“Bluetooth technology used in these skimmers are also used for legitimate products commonly seen at and near gas stations such as speed-limit signs, weather sensors and fleet tracking systems,” said Nishant Bhaskar, UC San Diego Ph.D. student and principal author of the study. “These products can be mistaken for skimmers by existing detection apps.”

BLACK MARKET VALUE

The fuel skimmer study also helps explain how quickly these hidden devices can generate huge profits for the organized gangs that typically deploy them. The researchers found the skimmers their app found collected data from roughly 20-25 payment cards each day — evenly distributed between debit and credit cards (although they note estimates from payment fraud prevention companies and the Secret Service that put the average figure closer to 50-100 cards daily per compromised machine).

The academics also studied court documents which revealed that skimmer scammers often are only able to “cashout” stolen cards — either through selling them on the black market or using them for fraudulent purchases — a little less than half of the time. This can result from the skimmers sometimes incorrectly reading card data, daily withdrawal limits, or fraud alerts at the issuing bank.

“Based on the prior figures, we estimate the range of per-day revenue from a skimmer is $4,253 (25 cards per day, cashout of $362 per card, and 47% cashout success rate), and our high end estimate is $63,638 (100 cards per day, $1,354 cashout per card, and cashout success rate of 47%),” the study notes.

Not a bad haul either way, considering these skimmers typically cost about $25 to produce.
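Those per-day figures are simply cards per day times average cashout per card times the cashout success rate. A quick back-of-the-envelope check (illustrative only; the numbers are the ones quoted above):

// Reproduce the study's per-day revenue estimates from the quoted inputs.
function skimmerDailyRevenue(cardsPerDay, cashoutPerCard, successRate) {
    return cardsPerDay * cashoutPerCard * successRate;
}

skimmerDailyRevenue(25, 362, 0.47);   // ~4253.5, the study's ~$4,253 low-end estimate
skimmerDailyRevenue(100, 1354, 0.47); // ~63638, the $63,638 high-end estimate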

Those earnings estimates assume an even distribution of credit and debit card use among customers of a compromised pump: The more customers pay with a debit card, the more profitable the whole criminal scheme may become. Armed with your PIN and debit card data, skimmer thieves or those who purchase stolen cards can clone your card and pull money out of your account at an ATM.

“Availability of a PIN code with a stolen debit card in particular, can increase its value five-fold on the black market,” the researchers wrote.

This highlights a warning that KrebsOnSecurity has relayed to readers in many previous stories on pump skimming attacks: Using a debit card at the pump can be way riskier than paying with cash or a credit card.

The black market value, impact to consumers and banks, and liability associated with different types of card fraud.

And as the above graphic from the report illustrates, there are different legal protections for fraudulent transactions on debit vs. credit cards. With a credit card, your maximum loss on any transactions you report as fraud is $50; with a debit card, that $50 cap only applies if you report the fraud within two days of the unauthorized transaction. After that, the maximum consumer liability can increase to $500 within 60 days, and to an unlimited amount after 60 days.
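In other words, the debit-card liability schedule comes down to how quickly you report the fraud. A rough sketch of the tiers as described above (illustrative only, not legal advice):

// Maximum consumer liability for debit card fraud, per the tiers described above.
function maxDebitLiability(daysUntilReported) {
    if (daysUntilReported <= 2)  return 50;   // reported within two days
    if (daysUntilReported <= 60) return 500;  // reported within 60 days
    return Infinity;                          // "unlimited" after 60 days
}

maxDebitLiability(1);  // 50
maxDebitLiability(30); // 500
maxDebitLiability(90); // Infinity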

In practice, your bank or debit card issuer may still waive additional liabilities, and many do. But even then, having your checking account emptied of cash while your bank sorts out the situation can still be a huge hassle and create secondary problems (bounced checks, for instance).

Interestingly, this advice against using debit cards at the pump often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

For all its skimmer-skewering prowess, Bluetana will not be released to the public. The researchers said the primary reason for this is highlighted in the core findings of the study.

“There are many legitimate devices near gas stations that look exactly like skimmers do in Bluetooth scans,” said UCSD Assistant Professor Aaron Schulman, in an email to KrebsOnSecurity. “Flagging suspicious devices in Bluetana is only a way of notifying inspectors that they need to gather more data around the gas station to determine if the Bluetooth transmissions appear to be emanating from a device inside of the pumps. If it does, they can then open the pump door and confirm that the signal strength rises, and begin their visual inspection for the skimmer.”

One of the best tips for avoiding fuel card skimmers is to favor filling stations that have updated security features, such as custom keys for each pump, better compartmentalization of individual components within the machine, and tamper protections that physically shut down a pump if the machine is improperly accessed.

How can you spot a gas station with these updated features, you ask? As noted in last summer’s story, How to Avoid Card Skimmers at the Pumps, these newer-model machines typically feature a horizontal card acceptance slot along with a raised metallic keypad. In contrast, older, less secure pumps usually have a vertical card reader and a flat, membrane-based keypad.

Newer, more tamper-resistant fuel pumps include pump-specific key locks, raised metallic keypads, and horizontal card readers.

The researchers will present their work on Bluetana later today at the USENIX Security 2019 conference in Santa Clara, Calif. A copy of their paper is available here (PDF).

If you enjoyed this story, check out my series on all things skimmer-related: All About Skimmers. Looking for more information on fuel pump skimming? Have a look at some of these stories.

CryptogramExploiting GDPR to Get Private Information

A researcher abused the GDPR to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company -- especially tech ones -- they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

  • a UK hotel chain that shared a complete record of his partner's overnight stays

  • two UK rail companies that provided records of all the journeys she had taken with them over several years

  • a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

CryptogramAttorney General Barr and Encryption

Last month, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how -- an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity" and not "nuclear launch codes." This is true, but it ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been a National Security Agency operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications -- data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law enforcement access -- than it is to allow access at the cost of security. Barr is wrong, it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

Worse Than FailureCodeSOD: A Loop in the String

Robert was browsing through a little JavaScript used at his organization, and found this gem of type conversion.

//use only for small numbers
function StringToInteger (str) {
    var int = -1;
    for (var i=0; i<=100; i++) {
        if (i+"" == str) {
            int = i;
            break;
        }
    }
    return int;
}

So, this takes our input str, which is presumably a string, and it starts counting from 0 to 100. i+"" coerces the integer value to a string, which we compare against our string. If it’s a match, we’ll store that value and break out of the loop.

Obviously, this has a glaring flaw: the 100 is hardcoded. So what we really need to do is add a search_low and search_high parameter, so we can write the for loop as i = search_low; i <= search_high; i++ instead. Because that’s the only glaring flaw in this code. I can’t think of any possible better way of converting strings to integers. Not a one.
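In case the sarcasm didn’t land: JavaScript’s built-ins already have this covered. A sketch that keeps the original’s “return -1 on failure” contract, minus the loop and the hardcoded range:

// Idiomatic conversion: parseInt with an explicit radix (or Number for stricter parsing).
function stringToInteger(str) {
    var n = parseInt(str, 10);
    return Number.isNaN(n) ? -1 : n;
}

stringToInteger("42");   // 42
stringToInteger("1337"); // 1337 -- well outside the old 0..100 range
stringToInteger("oops"); // -1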


,

CryptogramPhone Pharming for Ad Fraud

Interesting article on people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high -- here's an article that places losses between $6.5 and $19 billion annually -- and something companies like Google and Facebook would prefer remain unresearched.

Krebs on SecurityPatch Tuesday, August 2019 Edition

Most Microsoft Windows (ab)users probably welcome the monthly ritual of applying security updates about as much as they look forward to going to the dentist: It always seems like you were there just yesterday, and you never quite know how it’s all going to turn out. Fortunately, this month’s patch batch from Redmond is mercifully light, at least compared to last month.

Okay, maybe a trip to the dentist’s office is still preferable. In any case, today is the second Tuesday of the month, which means it’s once again Patch Tuesday (or — depending on your setup and when you’re reading this post — Reboot Wednesday). Microsoft today released patches to fix some 93 vulnerabilities in Windows and related software, 35 of which affect various Server versions of Windows, and another 70 that apply to the Windows 10 operating system.

Although there don’t appear to be any zero-day vulnerabilities fixed this month — i.e. those that get exploited by cybercriminals before an official patch is available — there are several issues that merit attention.

Chief among those are patches to address four moderately terrifying flaws in Microsoft’s Remote Desktop Service, a feature which allows users to remotely access and administer a Windows computer as if they were actually seated in front of the remote computer. Security vendor Qualys says two of these weaknesses can be exploited remotely without any authentication or user interaction.

“According to Microsoft, at least two of these vulnerabilities (CVE-2019-1181 and CVE-2019-1182) can be considered ‘wormable’ and [can be equated] to BlueKeep,” referring to a dangerous bug patched earlier this year that Microsoft warned could be used to spread another WannaCry-like ransomware outbreak. “It is highly likely that at least one of these vulnerabilities will be quickly weaponized, and patching should be prioritized for all Windows systems.”

Fortunately, Remote Desktop is disabled by default in Windows 10, and as such these flaws are more likely to be a threat for enterprises that have enabled the application for various purposes. For those keeping score, this is the fourth time in 2019 Microsoft has had to fix critical security issues with its Remote Desktop service.

For all you Microsoft Edge and Internet Exploiter Explorer users, Microsoft has issued the usual panoply of updates for flaws that could be exploited to install malware after a user merely visits a hacked or booby-trapped Web site. Other equally serious flaws patched in Windows this month could be used to compromise the operating system just by convincing the user to open a malicious file (regardless of which browser the user is running).

As crazy as it may seem, this is the second month in a row that Adobe hasn’t issued a security update for its Flash Player browser plugin, which is bundled in IE/Edge and Chrome (although now hobbled by default in Chrome). However, Adobe did release important updates for its Acrobat and free PDF reader products.

If the tone of this post sounds a wee bit cantankerous, it might be because at least one of the updates I installed last month totally hosed my Windows 10 machine. I consider myself an equal OS abuser, and maintain multiple computers powered by a variety of operating systems, including Windows, Linux and MacOS.

Nevertheless, it is frustrating when being diligent about applying patches introduces so many unfixable problems that you’re forced to completely reinstall the OS and all of the programs that ride on top of it. On the bright side, my newly-refreshed Windows computer is a bit more responsive than it was before crash hell.

So, three words of advice. First off, don’t let Microsoft decide when to apply patches and reboot your computer. On the one hand, it’s nice Microsoft gives us a predictable schedule when it’s going to release patches. On the other, Windows 10 will by default download and install patches whenever it pleases, and then reboot the computer.

Unless you change that setting. Here’s a tutorial on how to do that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Secondly, it doesn’t hurt to wait a few days to apply updates.  Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

Finally, please have some kind of system for backing up your files before applying any updates. You can use third-party software for this, or just the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule. Thankfully, I’m vigilant about backing up my files.

And, as ever, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Planet DebianSteve Kemp: That time I didn't find a kernel bug, or did I?

Recently I saw a post to the linux kernel mailing-list containing a simple fix for a use-after-free bug. The code in question originally read:

    hdr->pkcs7_msg = pkcs7_parse_message(buf + buf_len, sig_len);
    if (IS_ERR(hdr->pkcs7_msg)) {
        kfree(hdr);
        return PTR_ERR(hdr->pkcs7_msg);
    }

Here the bug is obvious once it has been pointed out:

  • A structure is freed.
    • But then it is dereferenced, to provide a return value.

This is the kind of bug that would probably have been obvious to me if I'd happened to read the code myself. However, patch submitted, so job done? I did have some free time so I figured I'd scan for similar bugs. Writing a trivial perl script to look for similar things didn't take too long, though it is a bit shoddy (a rough sketch of the idea follows the list):

  • Open each file.
  • If we find a line containing "free(.*)" record the line and the thing that was freed.
  • The next time we find a return look to see if the return value uses the thing that was free'd.
    • If so that's a possible bug. Report it.
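
Roughly, the idea looks like this (sketched here in JavaScript purely for illustration -- like the perl it is a line-based guess, not a real parser):

    // Heuristic scan: remember the argument of the last free()-style call,
    // and flag the next return statement that still mentions it.
    const fs = require('fs');

    function scanFile(path) {
        const lines = fs.readFileSync(path, 'utf8').split('\n');
        let freed = null;
        let freedAt = 0;

        lines.forEach((line, idx) => {
            const m = line.match(/\b\w*free\s*\(\s*([A-Za-z_]\w*)\s*\)/);
            if (m) {
                freed = m[1];
                freedAt = idx + 1;
                return;
            }
            if (freed && /\breturn\b/.test(line)) {
                if (line.includes(freed)) {
                    console.log(path + ':' + (idx + 1) + ': "' + freed +
                                '" freed at line ' + freedAt + ' but used in return');
                }
                freed = null; // only check the first return after a free
            }
        });
    }

    process.argv.slice(2).forEach(scanFile);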

Of course my code is nasty, but it looked like it immediately paid off. I found this snippet of code in linux-5.2.8/drivers/media/pci/tw68/tw68-video.c:

    if (hdl->error) {
        v4l2_ctrl_handler_free(hdl);
        return hdl->error;
    }

That looks promising:

  • The structure hdl is freed, via a dedicated freeing-function.
  • But then we return the member error from it.

Chasing down the code I found that linux-5.2.8/drivers/media/v4l2-core/v4l2-ctrls.c contains the code for the v4l2_ctrl_handler_free call and while it doesn't actually free the structure - just some members - it does reset the contents of hdl->error to zero.

Ahah! The code I've found looks for an error, and if it was found returns zero, meaning the error is lost. I can fix it, by changing to this:

    if (hdl->error) {
        int err = hdl->error;
        v4l2_ctrl_handler_free(hdl);
        return err;
    }

I did that. Then looked more closely to see if I was missing something. The code I've found lives in the function tw68_video_init1, that function is called only once, and the return value is ignored!

So, that's the story of how I scanned the Linux kernel for use-after-free bugs and contributed nothing to anybody.

Still fun though.

I'll go over my list more carefully later, but nothing else jumped out as being immediately bad.

There is a weird case I spotted in ./drivers/media/platform/s3c-camif/camif-capture.c with a similar pattern. In that case the function involved is s3c_camif_create_subdev which is invoked by ./drivers/media/platform/s3c-camif/camif-core.c:

        ret = s3c_camif_create_subdev(camif);
        if (ret < 0)
                goto err_sd;

So I suspect there is something odd there:

  • If there's an error in s3c_camif_create_subdev
    • Then handler->error will be reset to zero.
    • Which means that return handler->error will return 0.
    • Which means that the s3c_camif_create_subdev call should have returned an error, but won't be recognized as having done so.
    • i.e. "0 < 0" is false.

Of course the error-value is only set if this code is hit:

    hdl->buckets = kvmalloc_array(hdl->nr_of_buckets,
                      sizeof(hdl->buckets[0]),
                      GFP_KERNEL | __GFP_ZERO);
    hdl->error = hdl->buckets ? 0 : -ENOMEM;

Which means that the registration of the sub-device fails if there is no memory, and at that point what can you even do?

It's a bug, but it isn't a security bug.

Planet DebianRicardo Mones: When your mail hub password is updated...

don't
 forget
  to
   run
    postmap
     on
      your
       /etc/postfix/sasl_passwd

(repeat 100 times sotto voce or until falling asleep, whatever happens first).

Planet DebianSven Hoexter: Debian/buster on HPE DL360G10 - interfaces change back to ethX

For yet unknown reasons some recently installed HPE DL360G10 running buster changed back the interface names from the expected "onboard" based names eno5 and eno6 to ethX after a reboot.

My current workaround is a link file which kind of enforces the onboard scheme.

$ cat /etc/systemd/network/101-onboard-rd.link 
[Link]
NamePolicy=onboard kernel database slot path
MACAddressPolicy=none

The hosts are running the latest buster kernel

Linux foobar 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

A downgrade of the kernel did not change anything. So I currently like to believe this is not related a kernel change.

I tried to collect a few information on one of the broken systems while in a broken state:

root@foobar:~# SYSTEMD_LOG_LEVEL=debug udevadm test-builtin net_setup_link /sys/class/net/eth0
=== trie on-disk ===
tool version:          241
file size:         9492053 bytes
header size             80 bytes
strings            2069269 bytes
nodes              7422704 bytes
Load module index
Found container virtualization none.
timestamp of '/etc/systemd/network' changed
Skipping overridden file '/usr/lib/systemd/network/99-default.link'.
Parsed configuration file /etc/systemd/network/99-default.link
Created link configuration context.
ID_NET_DRIVER=i40e
eth0: No matching link configuration found.
Builtin command 'net_setup_link' fails: No such file or directory
Unload module index
Unloaded link configuration context.

root@foobar:~# udevadm test-builtin net_id /sys/class/net/eth0 2>/dev/null
ID_NET_NAMING_SCHEME=v240
ID_NET_NAME_MAC=enx48df37944ab0
ID_OUI_FROM_DATABASE=Hewlett Packard Enterprise
ID_NET_NAME_ONBOARD=eno5
ID_NET_NAME_PATH=enp93s0f0

Most interesting hint right now seems to be that /sys/class/net/eth0/name_assign_type is invalid, while on systems before the reboot that breaks it, and after setting the .link file fix, it contains a 4.

Since those hosts were initially installed with buster, there are no remains of any ethX related configuration present. If someone has an idea what is going on, write a mail (sven at stormbind dot net), or blog on planet.d.o.

I found a vaguely similar bug report for a Dell PE server in #929622, though that was a change from 4.9 (stretch) to the 4.19 stretch-bpo kernel and the device names were not changed back to the ethX scheme, and Ben found a reason for it inside the kernel. Also the hardware is different using bnxt_en, while I've tg3 and i40e in use.

Cory DoctorowPodcast: Interoperability and Privacy: Squaring the Circle

In my latest podcast (MP3), I read my essay “Interoperability and Privacy: Squaring the Circle,” published today on EFF’s Deeplinks; it’s another in the series of “adversarial interoperability” explainers, this one focused on how privacy and adversarial interoperability relate to each other.

Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that’s not a standard interoperable component as a competitive advantage, something that no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service or infringing a patent or reverse-engineering a copyright lock or even more nebulous claims like “tortious interference with contract.”

In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that’s not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge.

Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company’s existing products or services, without seeking permission to do so.

MP3

Worse Than FailureCodeSOD: Nullable Knowledge

You’ve got a decimal value- maybe. It could be nothing at all, and you need to handle that null gracefully. Fortunately for you, C# has “nullable types”, which make this task easy.

Ian P’s co-worker made this straightforward application of nullable types.

public static decimal ValidateDecimal(decimal? value)
{
if (value == null) return 0;
decimal returnValue = 0;
Decimal.TryParse(value.ToString(), out returnValue);
return returnValue;
}

The lack of indentation was in the original.

The obvious facepalm is the Decimal.TryParse call. If our decimal has a value, we could just return it, but no, instead, we convert it to a string then convert that string back into a Decimal.

But the real problem here is someone who doesn’t understand what .NET’s nullable types offer. For starters, one could make the argument that value.HasValue is more readable than value == null, though that’s clearly debatable. That’s not really the problem though.

The purpose of ValidateDecimal is to return the input value, unless the input value was null, in which case we want to return 0. Nullable types have a lovely GetValueOrDefault() method, which returns the value, or a reasonable default. What is the default for any built in numeric type?

0.

This method doesn’t need to exist, it’s already built in to the decimal? type. Of course, the built-in method almost certainly doesn’t do a string conversion to get its value, so the one with a string is better, is it knot?


,

Krebs on SecuritySEC Investigating Data Leak at First American Financial Corp.

The U.S. Securities and Exchange Commission (SEC) is investigating a security failure on the Web site of real estate title insurance giant First American Financial Corp. that exposed more than 885 million personal and financial records tied to mortgage deals going back to 2003, KrebsOnSecurity has learned.

First American Financial Corp.

In May, KrebsOnSecurity broke the news that the Web site for Santa Ana, Calif.-based First American [NYSE:FAFexposed some 885 million documents related to real estate closings over the past 16 years, including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts and drivers license images. No authentication was required to view the documents.

The initial tip on that story came from Ben Shoval, a real estate developer based in Seattle. Shoval said he recently received a letter from the SEC’s enforcement division which stated the agency was investigating the data exposure to determine if First American had violated federal securities laws.

In its letter, the SEC asked Shoval to preserve and share any documents or evidence he had related to the data exposure.

“This investigation is a non-public, fact-finding inquiry,” the letter explained. “The investigation does not mean that we have concluded that anyone has violated the law.”

The SEC declined to comment for this story.

Word of the SEC investigation comes weeks after regulators in New York said they were investigating the company in what could turn out to be the first test of the state’s strict new cybersecurity regulation, which requires financial companies to periodically audit and report on how they protect sensitive data, and provides for fines in cases where violations were reckless or willful. First American also is now the target of a class action lawsuit that alleges it “failed to implement even rudimentary security measures.”

First American has issued a series of statements over the past few months that seem to downplay the severity of the data exposure, which the company said was the result of a “design defect” in its Web site.

On June 18, First American said a review of system logs by an outside forensic firm, “based on guidance from the company, identified 484 files that likely were accessed by individuals without authorization. The company has reviewed 211 of these files to date and determined that only 14 (or 6.6%) of those files contain non-public personal information. The company is in the process of notifying the affected consumers and will offer them complimentary credit monitoring services.”

In a statement on July 16, First American said its now-completed investigation identified just 32 consumers whose non-public personal information likely was accessed without authorization.

“These 32 consumers have been notified and offered complimentary credit monitoring services,” the company said.

First American has not responded to questions about how long this “design defect” persisted on its site, how far back it maintained access logs, or how far back in those access logs the company’s review extended.

Updated, Aug. 13, 8:40 a.m.: Added “no comment” from the SEC.

Planet DebianRobert McQueen: Flathub, brought to you by…

Over the past 2 years Flathub has evolved from a wild idea at a hackfest to a community of app developers and publishers making over 600 apps available to end-users on dozens of Linux-based OSes. We couldn’t have gotten anything off the ground without the support of the 20 or so generous souls who backed our initial fundraising, and to make the service a reality since then we’ve relied on the contributions of dozens of individuals and organisations such as Codethink, Endless, GNOME, KDE and Red Hat. But for our day to day operations, we depend on the continuous support and generosity of a few companies who provide the services and resources that Flathub uses 24/7 to build and deliver all of these apps. This post is about saying thank you to those companies!

Running the infrastructure

Mythic Beasts Logo

Mythic Beasts is a UK-based “no-nonsense” hosting provider who provide managed and un-managed co-location, dedicated servers, VPS and shared hosting. They are also conveniently based in Cambridge where I live, and very nice people to have a coffee or beer with, particularly if you enjoy talking about IPv6 and how many web services you can run on a rack full of Raspberry Pis. The “heart” of Flathub is a physical machine donated by them which originally ran everything in separate VMs – buildbot, frontend, repo master – and they have subsequently increased their donation with several VMs hosted elsewhere within their network. We also benefit from huge amounts of free bandwidth, backup/storage, monitoring, management and their expertise and advice at scaling up the service.

Starting with everything running on one box in 2017 we quickly ran into scaling bottlenecks as traffic started to pick up. With Mythic’s advice and a healthy donation of 100s of GB / month more of bandwidth, we set up two caching frontend servers running in virtual machines in two different London data centres to cache the commonly-accessed objects, shift the load away from the master server, and take advantage of the physical redundancy offered by the Mythic network.

As load increased and we brought a CDN online to bring the content closer to the user, we also moved the Buildbot (and its associated Postgres database) to a VM hosted at Mythic in order to offload as much IO bandwidth from the repo server, to keep up sustained HTTP throughput during update operations. This helped significantly but we are in discussions with them about a yet larger box with a mixture of disks and SSDs to handle the concurrent read and write load that we need.

Even after all of these changes, we keep the repo master on one, big, physical machine with directly attached storage because repo update and delta computations are hugely IO intensive operations, and our OSTree repos contain over 9 million inodes which get accessed randomly during this process. We also have a physical HSM (a YubiKey) which stores the GPG repo signing key for Flathub, and it’s really hard to plug a USB key into a cloud instance, and know where it is and that it’s physically secure.

Building the apps

Our first build workers were under Alex’s desk, in Christian’s garage, and a VM donated by Scaleway for our first year. We still have several ARM workers donated by Codethink, but at the start of 2018 it became pretty clear within a few months that we were not going to keep up with the growing pace of builds without some more serious iron behind the Buildbot. We also wanted to be able to offer PR and test builds, beta builds, etc — all of which multiplies the workload significantly.

Packet Logo

Thanks to an introduction by the most excellent Jorge Castro and the approval and support of the Linux Foundation’s CNCF Infrastructure Lab, we were able to get access to an “all expenses paid” account at Packet. Packet is a “bare metal” cloud provider — like AWS except you get entire boxes and dedicated switch ports etc to yourself – at a handful of main datacenters around the world with a full range of server, storage and networking equipment, and a larger number of edge facilities for distribution/processing closer to the users. They have an API and a magical provisioning system which means that at the click of a button or one method call you can bring up all manner of machines, configure networking and storage, etc. Packet is clearly a service built by engineers for engineers – they are smart, easy to get hold of on e-mail and chat, share their roadmap publicly and set priorities based on user feedback.

We currently have 4 Huge Boxes (2 Intel, 2 ARM) from Packet which do the majority of the heavy lifting when it comes to building everything that is uploaded, and also use a few other machines there for auxiliary tasks such as caching source downloads and receiving our streamed logs from the CDN. We also used their flexibility to temporarily set up a whole separate test infrastructure (a repo, buildbot, worker and frontend on one box) while we were prototyping recent changes to the Buildbot.

A special thanks to Ed Vielmetti at Packet who has patiently supported our requests for lots of 32-bit compatible ARM machines, and for his support of other Linux desktop projects such as GNOME and the Freedesktop SDK who also benefit hugely from Packet’s resources for build and CI.

Delivering the data

Even with two redundant / load-balancing front end servers and huge amounts of bandwidth, OSTree repos have so many files that if those servers are too far away from the end users, the latency and round trips cause a serious problem with throughput. In the end you can’t distribute something like Flathub from a single physical location – you need to get closer to the users. Fortunately the OSTree repo format is very efficient to distribute via a CDN, as almost all files in the repository are immutable.

Fastly Logo

After a very speedy response to a plea for help on Twitter, Fastly – one of the world’s leading CDNs – generously agreed to donate free use of their CDN service to support Flathub. All traffic to the dl.flathub.org domain is served through the CDN, and automatically gets cached at dozens of points of presence around the world. Their service is frankly really really cool – the configuration and stats are really powerful, unlike any other CDN service I’ve used. Our configuration allows us to collect custom logs which we use to generate our Flathub stats, and to define edge logic in Varnish’s VCL which we use to allow larger files to stream to the end user while they are still being downloaded by the edge node, improving throughput. We also use their API to purge the summary file from their caches worldwide each time the repository updates, so that it can stay cached for longer between updates.

To get some feelings for how well this works, here are some statistics: The Flathub main repo is 929 GB, of which 73 GB are static deltas and 1.9 GB of screenshots. It contains 7280 refs for 640 apps (plus runtimes and extensions) over 4 architectures. Fastly is serving the dl.flathub.org domain fully cached, with a cache hit rate of ~98.7%. Averaging 9.8 million hits and 464 Gb downloaded per hour, Flathub uses between 1-2 Gbps sustained bandwidth depending on the time of day. Here are some nice graphs produced by the Fastly management UI (the numbers are per-hour over the last month):

Graph showing the requests per hour over the past month, split by hits and misses.
Graph showing the data transferred per hour over the past month.

To buy the scale of services and support that Flathub receives from our commercial sponsors would cost tens if not hundreds of thousands of dollars a month. Flathub could not exist without Mythic Beasts, Packet and Fastly‘s support of the free and open source Linux desktop. Thank you!

CryptogramEvaluating the NSA's Telephony Metadata Program

Interesting analysis: "Examining the Anomalies, Explaining the Value: Should the USA FREEDOM Act's Metadata Program be Extended?" by Susan Landau and Asaf Lubin.

Abstract: The telephony metadata program which was authorized under Section 215 of the PATRIOT Act, remains one of the most controversial programs launched by the U.S. Intelligence Community (IC) in the wake of the 9/11 attacks. Under the program major U.S. carriers were ordered to provide NSA with daily Call Detail Records (CDRs) for all communications to, from, or within the United States. The Snowden disclosures and the public controversy that followed led Congress in 2015 to end bulk collection and amend the CDR authorities with the adoption of the USA FREEDOM Act (UFA).

For a time, the new program seemed to be functioning well. Nonetheless, three issues emerged around the program. The first concern was over high numbers: in both 2016 and 2017, the Foreign Intelligence Surveillance Court issued 40 orders for collection, but the NSA collected hundreds of millions of CDRs, and the agency provided little clarification for the high numbers. The second emerged in June 2018 when the NSA announced the purging of three years' worth of CDR records for "technical irregularities." Finally, in March 2019 it was reported that the NSA had decided to completely abandon the program and not seek its renewal as it is due to sunset in late 2019.

This paper sheds significant light on all three of these concerns. First, we carefully analyze the numbers, showing how forty orders might lead to the collection of several million CDRs, thus offering a model to assist in understanding Intelligence Community transparency reporting across its surveillance programs. Second, we show how the architecture of modern telephone communications might cause collection errors that fit the reported reasons for the 2018 purge. Finally, we show how changes in the terrorist threat environment as well as in the technology and communication methods they employ -- in particular the deployment of asynchronous encrypted IP-based communications -- has made the telephony metadata program far less beneficial over time. We further provide policy recommendations for Congress to increase effective intelligence oversight.

Worse Than FailureInternship of Things

Mindy was pretty excited to start her internship with Initech's Internet-of-Things division. She'd been hearing at every job fair how IoT was still going to be blowing up in a few years, and how important it would be for her career to have some background in it.

It was a pretty standard internship. Mindy went to meetings, shadowed developers, did some light-but-heavily-supervised changes to the website for controlling your thermostat/camera/refrigerator all in one device.

As part of testing, Mindy created a customer account on the QA environment for the site. She chucked a junk password at it, only to get a message: "Your password must be at least 8 characters long, contain at least three digits, not in sequence, four symbols, at least one space, and end with a letter, and not be more than 10 characters."

"Um, that's quite the password rule," Mindy said to her mentor, Bob.

"Well, you know how it is, most people use one password for every site, and we don't want them to do that here. That way, when our database leaks again, it minimizes the harm."

"Right, but it's not like you're storing the passwords anyway, right?" Mindy said. She knew that even leaked hashes could be dangerous, but good salting/hashing would go a long way.

"Of course we are," Bob said. "We're selling web connected thermostats to what can be charitably called 'twelve-o-clock flashers'. You know what those are, right? Every clock in their house is flashing twelve?" Bob sneered. "They can't figure out the site, so we often have to log into their account to fix the things they break."

A few days later, Initech was ready to push a firmware update to all of the Model Q baby monitor cameras. Mindy was invited to watch the process so she could understand their workflow. It started off pretty reasonable: their CI/CD system had a verified build, signed off, ready to deploy.

"So, we've got a deployment farm running in the cloud," Bob explained. "There are thousands of these devices, right? So we start by putting the binary up in an S3 bucket." Bob typed a few commands to upload the binary. "What's really important for our process is that it follows this naming convention. Because the next thing we're going to do is spin up a half dozen EC2 instances- virtual servers in the cloud."

A few more commands later, and then Bob had six sessions open to cloud servers in tmux. "Now, these servers are 'clean instances', so the very first thing I have to do is upload our SSH keys." Bob ran an ssh-copy-id command to copy the SSH key from his computer up to the six cloud VMs.

"Wait, you're using your personal SSH keys?"

"No, that'd be crazy!" Bob said. "There's one global key for every one of our Model Q cameras. We've all got a copy of it on our laptops."

"All… the developers?"

"Everybody on the team," Bob said. "Developers to management."

"On their laptops?"

"Well, we were worried about storing something so sensitive on the network."

Bob continued the process, which involved launching a script that would query a webservice to see which Model Q cameras were online, then sshing into them, having them curl down the latest firmware, and then self-update. "For the first few days, we leave all six VMs running, but once most of them have gotten the update, we'll just leave one cloud service running," Bob explained. "Helps us manage costs."

It's safe to say Mindy learned a lot during her internship. Mostly, she learned, "don't buy anything from Initech."


,

Planet DebianWilliam (Bill) Blough: Free Software Activities (July 2019)


Debian

  • Bug 932626: passwordsafe — Non-English locales don't work due to translation files being installed in the wrong directory.

    The fixed versions are:

    • unstable/testing: 1.06+dfsg-2
    • buster: 1.06+dfsg-1+deb10u1 (via 932945)
    • stretch: 1.00+dfsg-1+deb9u1 (via 932944)
  • Bug 932947: file — The --mime-type flag fails on arm64 due to seccomp

    Recently, there was a message on debian-devel about enabling seccomp sandboxing for the file utility. While I knew that passwordsafe uses file to determine some mime type information, testing on my development box (which is amd64-based) didn't show any problems.

However, this was happening around the same time that I was preparing the fix for 932626 as noted above. Lo and behold, when I uploaded the fix, everything went fine except on the arm64 architecture. The build there failed due to the package's test suite failing.

    After doing some troubleshooting on one of the arm64 porterboxes, it was clear that the seccomp change to file was the culprit. I haven't worked with arm64 very much, so I don't know all of the details. But based on my research, it appears that arm64 doesn't implement the access() system call, but uses faccessat() instead. However, in this case, seccomp was allowing calls to access(), but not calls to faccessat(). This led to the difference in behavior between arm64 and the other architectures.

    So I filed the bug to let the maintainer know the details, in hopes that the seccomp filters could be adjusted. However, it seems he took it as the "final straw" with regard to some of the other problems he was hearing about, and decided to revert the seccomp change altogether.

    Once the change was reverted, I requested a rebuild of the failed passwordsafe package on arm64 so it could be rebuilt against the fixed dependency without doing another full upload.

  • I updated django-cas-server in unstable to 1.1.0, which is the latest upstream version. I also did some miscellaneous cleanup/maintenance on the packaging.

  • I attended DebConf19 in Curitiba, Brazil.

    This was my 3rd DebConf, and my first trip to Brazil. Actually, it was my first trip to anywhere in the Southern Hemisphere.

    As usual, DebConf was quite enjoyable. From a technical perspective, there were lots of interesting talks. I learned some new things, and was also exposed to some new (to me) projects and techniques, as well as some new ideas in general. It also gave me some ideas of other ways/places I could potentially contribute to Debian.

    From a social perspective, it was a good opportunity to see and spend time with people that I normally only get to interact with via email or irc. I also enjoyed being able to put faces/voices to names that I only see on mailing lists. Even if I don't know or interact with them much, it really helps my mental picture when I'm reading things they wrote. And of course, I met some new people, too. It was nice to share stories and experiences over food and drinks, or in the hacklabs.

    If any of the DebConf team read this, thanks for your hard work. It was another great DebConf.

Planet DebianSteve McIntyre: DebConf in Brazil again!

Highvoltage and me

I was lucky enough to meet up with my extended Debian family again this year. We went back to Brazil for the first time since 2004, this time in Curitiba. And this time I didn't lose anybody's clothes! :-)

Rhonda modelling our diversity T!

I had a very busy time, as usual - lots of sessions to take part in, and lots of conversations with people from all over. As part of the Community Team (ex-AH Team), I had a lot of things to catch up on too, and a sprint report to send. Despite all that, I even managed to do some technical things too!

I ran sessions about UEFI Secure Boot, the Arm ports and the Community Team. I was meant to be running a session for the web team too, but the dreaded DebConf 'flu took me out for a day. It's traditional - bring hundreds of people together from all over the world, mix them up with too much alcohol and not enough sleep and many people get ill... :-( Once I'm back from vacation, I'll be doing my usual task of sending session summaries to the Debian mailing lists to describe what happened in my sessions.

Maddog showed a group of us round the micro-brewery at Hop'n'Roll which was extra fun. I'm sure I wasn't the only experienced guy there, but it's always nice to listen to geeky people talking about their passion.

Small group at Hop'n'Roll

Of course, I couldn't get to all the sessions I wanted to - there's just too many things going on in DebConf week, and sessions clash at the best of times. So I have a load of videos on my laptop to watch while I'm away. Heartfelt thanks to our always-awesome video team for their efforts to make that possible. And I know that I had at least one follower at home watching the live streams too!

Pepper watching the Arm BoF

Planet DebianDaniel Lange: Cleaning a broken GnuPG (gpg) key

I've long said that the main tools in the Open Source security space, OpenSSL and GnuPG (gpg), are broken and only a complete re-write will solve this. And that is still pending as nobody came forward with the funding. It's not a sexy topic, so it has to get really bad before it'll get better.

Gpg has a UI that is close to useless. That won't substantially change with more bolted-on improvements.

Now Robert J. Hansen and Daniel Kahn Gillmor had somebody add ~50k signatures (read 1, 2, 3, 4 for the g{l}ory details) to their keys and - oops - they say that breaks gpg.

But does it?

I downloaded Robert J. Hansen's key off the SKS-Keyserver network. It's a nice 45MB file when de-ascii-armored (gpg --dearmor broken_key.asc ; mv broken_key.asc.gpg broken_key.gpg).

Now a friendly:

$ /usr/bin/time -v gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit

pub  rsa3072/0x1DCBDC01B44427C7
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: SC  
     Vertrauen: unbekannt     Gültigkeit: unbekannt
sub  ed25519/0xA83CAE94D3DC3873
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: S  
sub  cv25519/0xAA24CC81B8AED08B
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: E  
sub  rsa3072/0xDC0F82625FA6AADE
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: E  
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3)  Robert J. Hansen <rob@hansen.engineering>

User-ID "Robert J. Hansen <rjh@sixdemonbag.org>": 49705 Signaturen entfernt
User-ID "Robert J. Hansen <rob@enigmail.net>": 49704 Signaturen entfernt
User-ID "Robert J. Hansen <rob@hansen.engineering>": 49701 Signaturen entfernt

pub  rsa3072/0x1DCBDC01B44427C7
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: SC  
     Vertrauen: unbekannt     Gültigkeit: unbekannt
sub  ed25519/0xA83CAE94D3DC3873
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: S  
sub  cv25519/0xAA24CC81B8AED08B
     erzeugt: 2017-04-05  verfällt: niemals     Nutzung: E  
sub  rsa3072/0xDC0F82625FA6AADE
     erzeugt: 2015-07-16  verfällt: niemals     Nutzung: E  
[ unbekannt ] (1). Robert J. Hansen <rjh@sixdemonbag.org>
[ unbekannt ] (2)  Robert J. Hansen <rob@enigmail.net>
[ unbekannt ] (3)  Robert J. Hansen <rob@hansen.engineering>

        Command being timed: "gpg --no-default-keyring --keyring ./broken_key.gpg --batch --quiet --edit-key 0x1DCBDC01B44427C7 clean save quit"
        User time (seconds): 3911.14
        System time (seconds): 2442.87
        Percent of CPU this job got: 99%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 1:45:56
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 107660
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 1
        Minor (reclaiming a frame) page faults: 26630
        Voluntary context switches: 43
        Involuntary context switches: 59439
        Swaps: 0
        File system inputs: 112
        File system outputs: 48
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0
 

And the result is a nicely usable 3835-byte file of the clean public key. If you supply a keyring instead of --no-default-keyring, it will also keep the non-self signatures that are useful for you (as you apparently know the signing party).

So it does not break gpg. It does break things that call gpg at runtime and not asynchronously. I heard Enigmail is affected, quelle surprise.
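
For things that do have to call gpg at runtime, the defensive move is to stop waiting on it forever. Below is a minimal sketch (not from the original post; it assumes gpg is on the PATH and reuses the clean invocation from above, with an arbitrary illustrative timeout) that runs the clean with a time budget instead of blocking for hours on a flooded key:

import java.util.concurrent.TimeUnit;

public class GpgCleanWithTimeout {
    public static void main(String[] args) throws Exception {
        // Same clean/save/quit invocation as above, wrapped so a flooded key
        // cannot hang the calling application indefinitely.
        ProcessBuilder pb = new ProcessBuilder(
                "gpg", "--no-default-keyring", "--keyring", "./broken_key.gpg",
                "--batch", "--quiet", "--edit-key", "0x1DCBDC01B44427C7",
                "clean", "save", "quit");
        pb.inheritIO();                          // show gpg's own output
        Process gpg = pb.start();

        // 60 seconds is an arbitrary budget for illustration, not a recommendation.
        if (!gpg.waitFor(60, TimeUnit.SECONDS)) {
            gpg.destroyForcibly();               // give up instead of blocking
            System.err.println("gpg did not finish in time");
        } else {
            System.out.println("gpg exited with status " + gpg.exitValue());
        }
    }
}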

Now the main problem here is the runtime. 1h45min is just ridiculous. As Filippo Valsorda puts it:

Someone added a few thousand entries to a list that lets anyone append to it. GnuPG, software supposed to defeat state actors, suddenly takes minutes to process entries. How big is that list you ask? 17 MiB. Not GiB, 17 MiB. Like a large picture. https://dev.gnupg.org/T4592

If I were a gpg / SKS keyserver developer, I'd

  • speed this up so the edit-key run above completes in less than 10 s (just getting rid of the lseek/read dance and deferring all time-based decisions should get close)
  • (ideally) make the drop-sig import-filter syntax useful (date-ranges, non-reciprocal signatures, ...)
  • clean affected keys on the SKS keyservers (needs coordination of sysops, drop servers from unreachable people)
  • (ideally) use the opportunity to also clean up the keyserver filesystems and the keys that have been abused as a message board on the pgp keyservers
  • only accept new keys and new signatures on keys extending the strong set (rather small change to the existing codebase)

That way another key can only be added to the keyserver network if it contains at least one signature from a previously known strong-set key. Attacking the keyserver network would become at least non-trivial. And the web-of-trust thing may make sense again.

Updates

09.07.2019

GnuPG 2.2.17 has been released with another set of quickly bolted together fixes:

  * gpg: Ignore all key-signatures received from keyservers.  This
    change is required to mitigate a DoS due to keys flooded with
    faked key-signatures.  The old behaviour can be achieved by adding
    keyserver-options no-self-sigs-only,no-import-clean
    to your gpg.conf.  [#4607]
  * gpg: If an imported keyblocks is too large to be stored in the
    keybox (pubring.kbx) do not error out but fallback to an import
    using the options "self-sigs-only,import-clean".  [#4591]
  * gpg: New command --locate-external-key which can be used to
    refresh keys from the Web Key Directory or via other methods
    configured with --auto-key-locate.
  * gpg: New import option "self-sigs-only".
  * gpg: In --auto-key-retrieve prefer WKD over keyservers.  [#4595]
  * dirmngr: Support the "openpgpkey" subdomain feature from
    draft-koch-openpgp-webkey-service-07. [#4590].
  * dirmngr: Add an exception for the "openpgpkey" subdomain to the
    CSRF protection.  [#4603]
  * dirmngr: Fix endless loop due to http errors 503 and 504.  [#4600]
  * dirmngr: Fix TLS bug during redirection of HKP requests.  [#4566]
  * gpgconf: Fix a race condition when killing components.  [#4577]

Bug T4607 shows that these changes are anything but well thought-out. They introduce artificial limits, like 64kB for WKD-distributed keys or 5MB for local signature imports (Bug T4591), which weaken the web-of-trust further.

I recommend not running gpg 2.2.17 in production environments without extensive testing, as these limits and the unverified network traffic may bite you. Do validate your upgrade with valid and broken keys that have segments (packet groups) surpassing the above-mentioned limits. You may be surprised what gpg does. On the upside: you can now refresh keys (sans signatures) via WKD. So if your buddies still believe in limiting their subkey validities, you can more easily update them, bypassing the SKS keyserver network. NB: I have not tested that functionality. So test before deploying.

10.08.2019

Christopher Wellons (skeeto) has released his pgp-poisoner tool. It is a Go program that can add thousands of malicious signatures to a GnuPG key per second. He comments "[pgp-poisoner is] proof that such attacks are very easy to pull off. It doesn't take a nation-state actor to break the PGP ecosystem, just one person and couple evenings studying RFC 4880. This system is not robust." He also hints at the next likely attack vector: public subkeys can be bound to a primary key of choice.

Planet DebianPetter Reinholdtsen: Legal to share more than 16,000 movies listed on IMDB?

The recent announcement from the New York Public Library on its results in identifying books published in the USA that are now in the public domain inspired me to update the scripts I use to track down movies that are in the public domain. This involved updating the script used to extract lists of movies believed to be in the public domain, to work with the latest version of the source web sites. In particular, the new edition of the Retro Film Vault web site now seems to list all the films available from that distributor, bringing the films identified there to more than 12,000 movies, and I was able to connect 46% of these to IMDB titles.

The new total is 16307 IMDB IDs (aka films) in the public domain or Creative Commons licensed, with unknown status for 31460 movies (possibly duplicates of the 16307).

The complete data set is available from a public git repository, including the scripts used to create it.

Anyway, this is the summary of the 28 collected data sources so far:

 2361 entries (   50 unique) with and 22472 without IMDB title ID in free-movies-archive-org-search.json
 2363 entries (  146 unique) with and     0 without IMDB title ID in free-movies-archive-org-wikidata.json
  299 entries (   32 unique) with and    93 without IMDB title ID in free-movies-cinemovies.json
   88 entries (   52 unique) with and    36 without IMDB title ID in free-movies-creative-commons.json
 3190 entries ( 1532 unique) with and    13 without IMDB title ID in free-movies-fesfilm-xls.json
  620 entries (   24 unique) with and   283 without IMDB title ID in free-movies-fesfilm.json
 1080 entries (  165 unique) with and   651 without IMDB title ID in free-movies-filmchest-com.json
  830 entries (   13 unique) with and     0 without IMDB title ID in free-movies-icheckmovies-archive-mochard.json
   19 entries (   19 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-gb.json
 7410 entries ( 7101 unique) with and     0 without IMDB title ID in free-movies-imdb-c-expired-us.json
 1205 entries (   41 unique) with and     0 without IMDB title ID in free-movies-imdb-pd.json
  163 entries (   22 unique) with and    88 without IMDB title ID in free-movies-infodigi-pd.json
  158 entries (  103 unique) with and     0 without IMDB title ID in free-movies-letterboxd-looney-tunes.json
  113 entries (    4 unique) with and     0 without IMDB title ID in free-movies-letterboxd-pd.json
  182 entries (   71 unique) with and     0 without IMDB title ID in free-movies-letterboxd-silent.json
  248 entries (   85 unique) with and     0 without IMDB title ID in free-movies-manual.json
  158 entries (    4 unique) with and    64 without IMDB title ID in free-movies-mubi.json
   85 entries (    1 unique) with and    23 without IMDB title ID in free-movies-openflix.json
  520 entries (   22 unique) with and   244 without IMDB title ID in free-movies-profilms-pd.json
  343 entries (   14 unique) with and    10 without IMDB title ID in free-movies-publicdomainmovies-info.json
  701 entries (   16 unique) with and   560 without IMDB title ID in free-movies-publicdomainmovies-net.json
   74 entries (   13 unique) with and    60 without IMDB title ID in free-movies-publicdomainreview.json
  698 entries (   16 unique) with and   118 without IMDB title ID in free-movies-publicdomaintorrents.json
 5506 entries ( 2941 unique) with and  6585 without IMDB title ID in free-movies-retrofilmvault.json
   16 entries (    0 unique) with and     0 without IMDB title ID in free-movies-thehillproductions.json
  110 entries (    2 unique) with and    29 without IMDB title ID in free-movies-two-movies-net.json
   73 entries (   20 unique) with and   131 without IMDB title ID in free-movies-vodo.json
16307 unique IMDB title IDs in total, 12509 only in one list, 31460 without IMDB title ID

New this time is a list of all the identified IMDB titles, with title, year and running time, provided in free-complete.json. This file also indicates which source is used to conclude the video is free to distribute.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

,

CryptogramFriday Squid Blogging: Sinuous Asperoteuthis Mangoldae Squid

Great video of the Sinuous Asperoteuthis Mangoldae Squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityiNSYNQ Ransom Attack Began With Phishing Email

A ransomware outbreak that hit QuickBooks cloud hosting firm iNSYNQ in mid-July appears to have started with an email phishing attack that snared an employee working in sales for the company, KrebsOnSecurity has learned. It also looks like the intruders spent roughly ten days rooting around iNSYNQ’s internal network to properly stage things before unleashing the ransomware. iNSYNQ ultimately declined to pay the ransom demand, and it is still working to completely restore customer access to files.

Some of this detail came in a virtual “town hall” meeting held August 8, in which iNSYNQ chief executive Elliot Luchansky briefed customers on how it all went down, and what the company is doing to prevent such outages in the future.

A great many of iNSYNQ’s customers are accountants, and when the company took its network offline on July 16 in response to the ransomware outbreak, some of those customers took to social media to complain that iNSYNQ was stonewalling them.

“We could definitely have been better prepared, and it’s totally unacceptable,” Luchansky told customers. “I take full responsibility for this. People waiting ridiculous amounts of time for a response is unacceptable.”

By way of explaining iNSYNQ’s initial reluctance to share information about the particulars of the attack early on, Luchansky told customers the company had to assume the intruders were watching and listening to everything iNSYNQ was doing to recover operations and data in the wake of the ransomware outbreak.

“That was done strategically for a good reason,” he said. “There were human beings involved with [carrying out] this attack in real time, and we had to assume they were monitoring everything we could say. And that posed risks based on what we did say publicly while the ransom negotiations were going on. It could have been used in a way that would have exposed customers even more. That put us in a really tough bind, because transparency is something we take very seriously. But we decided it was in our customers’ best interests to not do that.”

A paid ad that comes up prominently when one searches for “insynq” in Google.

Luchansky did not say how much the intruders were demanding, but he mentioned two key factors that informed the company’s decision not to pay up.

“It was a very substantial amount, but we had the money wired and were ready to pay it in cryptocurrency in the case that it made sense to do so,” he told customers. “But we also understood [that paying] would put a target on our heads in the future, and even if we actually received the decryption key, that wasn’t really the main issue here. Because of the quick reaction we had, we were able to contain the encryption part” to roughly 50 percent of customer systems, he said.

Luchansky said the intruders seeded its internal network with MegaCortex, a potent new ransomware strain first spotted just a couple of months ago that is being used in targeted attacks on enterprises. He said the attack appears to have been carefully planned out in advance and executed “with human intervention all the way through.”

“They decided they were coming after us,” he said. “It’s one thing to prepare for these sorts of events but it’s an entirely different experience to deal with first hand.”

According to an analysis of MegaCortex published this week by Accenture iDefense, the crooks behind this ransomware strain are targeting businesses — not home users — and demanding ransom payments in the range of two to 600 bitcoins, which is roughly $20,000 to $5.8 million.

“We are working for profit,” reads the ransom note left behind by the latest version of MegaCortex. “The core of this criminal business is to give back your valuable data in the original form (for ransom of course).”

A portion of the ransom note left behind by the latest version of MegaCortex. Image: Accenture iDefense.

Luchansky did not mention in the town hall meeting exactly when the initial phishing attack was thought to have occurred, noting that iNSYNQ is still working with California-based CrowdStrike to gain a more complete picture of the attack.

But Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security, showed KrebsOnSecurity information obtained from monitoring dark web communications which suggested the problem started on July 6, after an employee in iNSYNQ’s sales division fell for a targeted phishing email.

“This shows that even after the initial infection, if companies act promptly they can still detect and stop the ransomware,” Holden said. “For these infections hackers take sometimes days, weeks, or even months to encrypt your data.”

iNSYNQ did not respond to requests for comment on Hold Security’s findings.

Asked whether the company had backups of customer data and — if so — why iNSYNQ decided not to restore from those, Luchansky said there were backups but that some of those were also infected.

“The backup system is backing up the primary system, and that by definition entails some level of integration,” Luchansky explained. “The way our system was architected, the malware had spread into the backups as well, at least a little bit. So [by] just turning the backups back on, there was a good chance the virus would then start to spread through the backup system more. So we had to treat the backups similarly to how we were treating the primary systems.”

Luchansky said their backup system has since been overhauled, and that if a similar attack happened in the future it would take days instead of weeks to recover. However, he declined to get into specifics about exactly what had changed, which is too bad because in every ransomware attack story I’ve written this seems to be the detail most readers are interested in and arguing about.

The CEO added that iNSYNQ also will be partnering with a company that helps firms detect and block targeted phishing attacks, and that it envisioned being able to offer this to its customers at a discounted rate. It wasn’t clear from Luchansky’s responses to questions whether the cloud hosting firm was also considering any kind of employee anti-phishing education and/or testing service.

Luchansky said iNSYNQ was able to restore access to more than 90 percent of customer files by Aug. 2 — roughly two weeks after the ransomware outbreak — and that the company would be offering customers a two month credit as a result of the outage.

Sociological ImagesData Science Needs Social Science

What do college graduates do with a sociology major? We just got an updated look from Phil Cohen this week:

These are all great career fields for our students, but as I was reading the list I realized there is a huge industry missing: data science and analytics. From Netflix to national policy, many interesting and lucrative jobs today are focused on properly observing, understanding, and trying to predict human behavior. With more sociology graduate programs training their students in computational social science, there is a big opportunity to bring those skills to teaching undergraduates as well.

Of course, data science has its challenges. Social scientists have observed that the booming field has some big problems with bias and inequality, but this is sociology’s bread and butter! When we talk about these issues, we usually go straight to very important conversations about equity, inclusion, and justice, and rightfully so; it is easy to design algorithms that seem like they make better decisions, but really just develop their own biases from watching us.

We can also tackle these questions by talking about research methods–another place where sociologists shine! We spend a lot of time thinking about whether our methods for observing people are valid and reliable. Are we just watching talk, or action? Do people change when researchers watch them? Once we get good measures and a strong analytic approach, can we do a better job explaining how and why bias happens to prevent it in the future?

Sociologists are well-positioned to help make sense of big questions in data science, and the field needs them. According to a recent industry report, only 5% of data scientists come out of the social sciences! While other areas of study may provide more of the technical skills to work in analytics, there is only so much that the technology can do before companies and research centers need to start making sense of social behavior. 

Source: Burtch Works Executive Recruiting. 2018. “Salaries of Data Scientists.” Emphasis Mine

So, if students or parents start up the refrain of “what can you do with a sociology major” this fall, consider showing them the social side of data science!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianAndy Simpkins: gov.uk paperwork

[Edit: Removed accusation of non UK hosting – thank you to Richard Mortimer & Philipp Edelmann for pointing out I had incorrectly looked up the domain “householdresponce.com” in place of  “householdresponse.com”.  Learn to spell…]

I live in England, where the government keeps an Electoral Roll, a list of people registered to vote.  This list needs to be maintained, so once a year we are required to update the database.  To make sure we don’t forget, we get sent a handy letter through the door looking like this:

Is it a scam?

Well that’s the first page anyway.   Correctly addressed to the “Current Occupier”.   So why am I posting about this?

Phishing emails land in our inbox all the time (hopefully only a few, because our spam filters eat the rest).  These are unsolicited emails trying to trick us into doing something. Usually they look like something official and warn us about something we should take action on: for example, an email that looks like it has come from your bank, warning about suspicious activity in your account and asking you to follow a link to the ‘bank’s website’ where you can log in and confirm whether the activity is genuine – obviously taking you through a ‘man in the middle’ website that harvests your account credentials.

The government is justifiably concerned about this (as are banks and other businesses that are impersonated in this way) and so runs media campaigns to educate the public in the dangers of such scams and what to look out for.

So back to the “Household Enquiry” form…

How do I know that this is genuine?  Well, I don’t.  I can’t easily verify the letter, I can’t be sure who sent it, and it arrived through my letterbox unbidden.  Even if I was expecting it, wouldn’t the perfect time to send such a scam letter be at the same time the genuine letters are being distributed?

All I can do is read the letter carefully and apply the same rational tests that I would to any unsolicited (e)mail.

1) Does it claim to come from a source I would have dealings with? (Bulk mailing is so cheap that sending to huge numbers of people is still effective even if most of the recipients will know it is a scam because they wouldn’t have dealings with the alleged sender.)  Yes, it claims to have been sent by South Cambridgeshire District Council, and they are my district council and would send me this letter.

2) Do all the communication links point to the sender?  No.  Stop: this is probably a scam.

 

Alarm bells should now be ringing – their preferred method of communication is for me to visit the website www.householdresponse.com/southcambs.  Sure, they have a gov.uk website mentioned, they claim to be South Cambridgeshire District Council, and they have an email address elections@scambs.gov.uk – but all the fake emails claiming to come from my bank look like they come from my bank as well; the only thing that doesn’t is the link they want you to follow.  Just like this letter…

OK, time for a bit of detective work

:~$whois householdresponse.com
 Domain Name: HOUSEHOLDRESPONSE.COM
 Registry Domain ID: 2036860356_DOMAIN_COM-VRSN
 Registrar WHOIS Server: whois.easyspace.com
 Registrar URL: http://www.easyspace.com
 Updated Date: 2018-05-23T05:56:38Z
 Creation Date: 2016-06-22T09:24:15Z
 Registry Expiry Date: 2020-06-22T09:24:15Z
 Registrar: EASYSPACE LIMITED
 Registrar IANA ID: 79
 Registrar Abuse Contact Email: abuse@easyspace.com
 Registrar Abuse Contact Phone: +44.3707555066
 Domain Status: clientTransferProhibited https://icann.org/epp#clientTransferProhibited
 Domain Status: clientUpdateProhibited https://icann.org/epp#clientUpdateProhibited
 Name Server: NS1.NAMECITY.COM
 Name Server: NS2.NAMECITY.COM
 DNSSEC: unsigned
 URL of the ICANN Whois Inaccuracy Complaint Form: https://www.icann.org/wicf/
>>> Last update of whois database: 2019-08-09T17:05:57Z <<<
<snip>

Really?  Just a hosting company’s details for a domain claiming to be from local government?

:~$nslookup householdresponse.com
<snip>
Name: householdresponse.com
Address: 62.25.101.164

:~$ whois 62.25.101.164
<snip>
% Information related to '62.25.64.0 - 62.25.255.255'

% Abuse contact for '62.25.64.0 - 62.25.255.255' is 'ipabuse@vodafone.co.uk'

inetnum: 62.25.64.0 - 62.25.255.255
netname: UK-VODAFONE-WORLDWIDE-20000329
country: GB
org: ORG-VL225-RIPE
admin-c: GNOC4-RIPE
tech-c: GNOC4-RIPE
status: ALLOCATED PA
mnt-by: RIPE-NCC-HM-MNT
mnt-by: VODAFONE-WORLDWIDE-MNTNER
mnt-lower: VODAFONE-WORLDWIDE-MNTNER
mnt-domains: VODAFONE-WORLDWIDE-MNTNER
mnt-routes: VODAFONE-WORLDWIDE-MNTNER
created: 2017-10-18T09:50:20Z
last-modified: 2017-10-18T09:50:20Z
source: RIPE # Filtered

organisation: ORG-VL225-RIPE
org-name: Vodafone Limited
org-type: LIR
address: Vodafone House, The Connection
address: RG14 2FN
address: Newbury
address: UNITED KINGDOM
phone: +44 1635 33251
admin-c: GSOC-RIPE
tech-c: GSOC-RIPE
abuse-c: AR40377-RIPE
mnt-ref: CW-EUROPE-GSOC
mnt-by: RIPE-NCC-HM-MNT
mnt-by: CW-EUROPE-GSOC
created: 2017-05-11T14:35:11Z
last-modified: 2018-01-03T15:48:36Z
source: RIPE # Filtered

role: Cable and Wireless IP GNOC Munich
remarks: Cable&Wireless Worldwide Hostmaster
address: Smale House
address: London SE1
address: UK
admin-c: DOM12-RIPE
admin-c: DS3356-RIPE
admin-c: EJ343-RIPE
admin-c: FM1414-RIPE
admin-c: MB4
tech-c: AB14382-RIPE
tech-c: MG10145-RIPE
tech-c: DOM12-RIPE
tech-c: JO361-RIPE
tech-c: DS3356-RIPE
tech-c: SA79-RIPE
tech-c: EJ343-RIPE
tech-c: MB4
tech-c: FM1414-RIPE
abuse-mailbox: ipabuse@vodafone.co.uk
nic-hdl: GNOC4-RIPE
mnt-by: CW-EUROPE-GSOC
created: 2004-02-03T16:44:58Z
last-modified: 2017-05-25T12:03:34Z
source: RIPE # Filtered

% Information related to '62.25.64.0/18AS1273'

route: 62.25.64.0/18
descr: Vodafone Hosting
origin: AS1273
mnt-by: ENERGIS-MNT
created: 2019-02-28T08:50:03Z
last-modified: 2019-02-28T08:57:04Z
source: RIPE

% Information related to '62.25.64.0/18AS2529'

route: 62.25.64.0/18
descr: Energis UK
origin: AS2529
mnt-by: ENERGIS-MNT
created: 2014-03-26T16:21:40Z
last-modified: 2014-03-26T16:21:40Z
source: RIPE

Is this a scam…
I only wish it was :-(

A quick search of https://www.scambs.gov.uk/elections/electoral-registration-faqs/ and the very first thing on the webpage is a link to www.householdresponse.com/southcambs…

A phone call to the council, just to confirm that they haven’t been hacked, and I am told that yes, this is for real.

OK, let’s look at the privacy statement (on the same letter)

Right, a link to a UK government website… https://www.scambs.gov.uk/privacynotice

A copy of this page as of 2019-08-09 (because websites have a habit of changing) can be found here:
http://koipond.org.uk/photo/Screenshot_2019-08-09_CustomerPrivacyNotice.png

[Edit
I originally thought that this was being hosted outside the UK (on a US-based server), which would be outside of GDPR.  I am still pissed off that this looks and feels ‘spammy’ and that the site is not being hosted on a gov.uk server, but this is not the righteous rage that I previously felt]

Summary Of Issue

  1. UK Government, District and Local Councils should be exemplars of best practice.  Any correspondence from any part of UK government should only use websites within the gov.uk domain (fraud prevention)

Actions taken

  • 2019-08-09
    • I spoke with South Cambridgeshire District Council and confirmed that this was genuine
    • Spoke with South Cambridgeshire District Council Electoral Services Team and made them aware of both issues (and sent follow up email)
    • Spoke with the ICO and asked for advice.  They will take up the issue if South Cambs do not resolve this within 20 working days.
    • Spoke again with the ICO – even though I had mistakenly believed this was being hosted outside the UK, and this is not the case, they are still interested in pushing for a move to a .gov.uk domain

 

Planet DebianMike Gabriel: Kudos to the Rspamd developers

I just migrated the first (a customer's) mail server site away from Amavis+SpamAssassin to Rspamd. The main reasons for the migration were speed, and that the setup needed a polish-up anyway. People on site had been complaining about too much SPAM for quite a while. Plus, it is always good to dive into something new. Mission accomplished.

Implemented functionalities:

  • Sophos AV (savdi) antivirus checks backend
  • Clam AV antivirus backend as fallback
  • Auto-Learner CRON Job for SPAM mails published by https://artinvoice.hu
  • Work-around for the lacking HTTP proxy support

Unfortunately, I could not enable the full scope of Rspamd features, as that specific site I worked on is on a private network, behind a firewall, etc. Some features don't make sense there (e.g. greylisting) or are hard-disabled in Rspamd once it detects that the mail host is on some local network infrastructure (local as in RFC-1918, or the corresponding fd00:: RFC for IPv6, which I currently can't remember).

Kudos + Thanks!

Rspamd is just awesome!!! I am really really pleased with the result (and so is the customer, I heard). Thanks to the upstream developers, thanks to the Debian maintainers of the rspamd Debian package. [1]

Credits + Thanks for sharing your Work

The main part of the work had already been documented in a blog post [2] by someone with the nick "zac" (no real name found). Thanks for that!

The Sophos AV integration was a little tricky at the start, but worked out well, after some trial and error, log reading, Rspamd code studies, etc.

About halfway through, one tricky part popped up that could be avoided by the Rspamd upstream maintainers in future releases. As far as I can tell from [3], Rspamd lacks support for retrieving its map files and such (hosted on *.rspamd.com, or other 3rd-party providers) via an HTTP proxy server. This was nearly a full blocker for my last project, as the customer's mail gateway is part of a larger infrastructure and hosted inside a double ring of firewalls. The only access to the internet leads over a non-transparent Squid proxy server (one which I don't have control over).

To work around this, I set up a transparent https proxy on "localhost", using a neat Python script [4]. Thanks for sharing this script.

I love all the sharing we do in FLOSS

Working on projects like this is just pure fun. And deeply interesting, as well. Such a project is fun because this one has been 98% FLOSS and 100% in the spirit of FLOSS and the correlated sharing mentality. I love this spirit of sharing one's work with the rest of the world, whether someone finds what I have to share useful or not.

I invite everyone to join in with sharing and in fact, for the IT business, I dearly recommend it.

I did not post config snippets here and such (as some of them are really customer specific), but if you stumble over similar issues when setting up your anti-SPAM gateway mail site using Rspamd, feel free to poke me and I'll see how I can help.

light+love
Mike (aka sunweaver at debian.org)

References

Worse Than FailureError'd: Intentionally Obtuse

"Normally I do pretty well on the Super Quiz, but then they decided to do it in Latin," writes Mike S.

 

"Uh oh, this month's AWS costs are going to be so much higher than last month's!" Ben H. writes.

 

Amanda C. wrote, "Oh, neat, Azure has some recommendations...wait...no...'just kidding' I guess?"

 

"Here I never thought that SQL Server log space could go negative, and yet, here we are," Michael writes.

 

"I love the form factor on this motherboard, but I'm not sure what case to buy with it," Todd C. writes, "Perhaps, if it isn't working, I can just give it a little kick?"

 

Maarten C. writes, "Next time, I'll name my spreadsheets with dog puns...maybe that'll make things less ruff."

 


,

CryptogramSupply-Chain Attack against the Electron Development Platform

Electron is a cross-platform development system for many popular communications apps, including Skype, Slack, and WhatsApp. Security vulnerabilities in the update system allow someone to silently inject malicious code into applications. From a news article:

At the BSides LV security conference on Tuesday, Pavel Tsakalidis demonstrated a tool he created called BEEMKA, a Python-based tool that allows someone to unpack Electron ASAR archive files and inject new code into Electron's JavaScript libraries and built-in Chrome browser extensions. The vulnerability is not part of the applications themselves but of the underlying Electron framework -- ­and that vulnerability allows malicious activities to be hidden within processes that appear to be benign. Tsakalidis said that he had contacted Electron about the vulnerability but that he had gotten no response -- ­and the vulnerability remains.

While making these changes required administrator access on Linux and MacOS, it only requires local access on Windows. Those modifications can create new event-based "features" that can access the file system, activate a Web cam, and exfiltrate information from systems using the functionality of trusted applications­ -- including user credentials and sensitive data. In his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every code tab opened to a remote website.

Basically, the Electron ASAR files aren't signed or encrypted, so modifying them is easy.

Note that this attack requires local access to the computer, which means that an attacker that could do this could do much more damaging things as well. But once an app has been modified, it can be distributed to other users. It's not a big deal attack, but it's a vulnerability that should be closed.

CryptogramAT&T Employees Took Bribes to Unlock Smartphones

This wasn't a small operation:

A Pakistani man bribed AT&T call-center employees to install malware and unauthorized hardware as part of a scheme to fraudulently unlock cell phones, according to the US Department of Justice. Muhammad Fahd, 34, was extradited from Hong Kong to the US on Friday and is being detained pending trial.

An indictment alleges that "Fahd recruited and paid AT&T insiders to use their computer credentials and access to disable AT&T's proprietary locking software that prevented ineligible phones from being removed from AT&T's network," a DOJ announcement yesterday said. "The scheme resulted in millions of phones being removed from AT&T service and/or payment plans, costing the company millions of dollars. Fahd allegedly paid the insiders hundreds of thousands of dollars­ -- paying one co-conspirator $428,500 over the five-year scheme."

In all, AT&T insiders received more than $1 million in bribes from Fahd and his co-conspirators, who fraudulently unlocked more than 2 million cell phones, the government alleged. Three former AT&T customer service reps from a call center in Bothell, Washington, already pleaded guilty and agreed to pay the money back to AT&T.

Worse Than FailureCodeSOD: Swimming Downstream

When Java added its streams API, it brought the power and richness of functional programming styles to the JVM, if we ignore all the other non-Java JVM languages that already did this. Snark aside, streams were a great addition to the language, especially if we want to use them absolutely wrong.

Like this code Miles found.

See, every object in the application needs to have a unique identifier. So, for every object, there’s a method much like this one:

/**
     * Get next priceId
     *
     * @return next priceId
     */
    public String createPriceId() {
        List<String> ids = this.prices.stream().map(m -> m.getOfferPriceId()).collect(Collectors.toList());
        for (Integer i = 0; i < ids.size(); i++) {
            ids.set(i, ids.get(i).split("PR")[1]);
        }
        try {
            List<Integer> intIds = ids.stream().map(id -> Integer.parseInt(id)).collect(Collectors.toList());
            Integer max = intIds.stream().mapToInt(id -> id).max().orElse(0);
            return "PR" + (max + 1);
        } catch (Exception e) {
            return "PR" + 1;
        }
    }

The point of a stream is that you can build a processing pipeline: starting with a list, you can perform a series of operations but only touch each item in the stream once. That, of course, isn’t what we do here.

First, we map the prices to extract the offerPriceId and convert it into a list. Now, this list is a set of strings, so we iterate across that list of IDs, to break the "PR" prefix off. Then, we’ll map that list of IDs again, to parse the strings into integers. Then, we’ll cycle across that new list one more time, to find the max value. Then we can return a new ID.

And if anything goes wrong in this process, we won’t complain. We just return an ID that’s likely incorrect: "PR1". That’ll probably cause an error later, right? They can deal with it then.

Everything here is just wrong. This is the wrong way to use streams: the whole point is that this could have been a single chain of function calls that only needed to iterate across the input data once. It’s also the wrong way to handle exceptions. And it’s also the wrong way to generate IDs.

Worse, a largely copy/pasted version of this code, with the names and prefixes changed, exists in nearly every model class. And these are database model classes, according to Miles, so one has to wonder if there might be a better way to generate IDs…
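
For contrast, the single pass the streams API is built for might look something like the sketch below: a drop-in for the method above that keeps the prices field and getOfferPriceId() accessor from the snippet, deliberately drops the swallow-everything catch, and only illustrates the stream usage, not an endorsement of max-plus-one ID generation.

    public String createPriceId() {
        // One pass over the prices: strip the "PR" prefix, parse, take the max.
        int max = this.prices.stream()
                .map(m -> m.getOfferPriceId())
                .map(id -> id.substring(2))      // drop the leading "PR"
                .mapToInt(Integer::parseInt)     // let a malformed ID fail loudly here
                .max()
                .orElse(0);                      // an empty list starts at "PR1"
        return "PR" + (max + 1);
    }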


Valerie AuroraGoth fashion tips for Ehlers-Danlos Syndrome

A woman wearing a dramatic black hooded jacket typing on a laptop
Skingraft hoodie, INC shirt, Fisherman’s Wharf fingerless gloves

My ideal style could perhaps be best described as “goth chic”—a lot of structured black somewhere on the border between couture and business casual—but because I have Ehlers-Danlos Syndrome, I more often end up wearing “sport goth”: a lot of stretchy black layers in washable fabrics with flat shoes. With great effort, I’ve nudged my style back towards “goth chic,” at least on good days. Enough people have asked me about my gear that I figured I’d share what I’ve learned with other EDS goths (or people who just like being comfortable and also wearing a lot of black).

Here are the constraints I’m operating under:

  • Flat shoes with thin soles to prevent ankle sprains and foot and back pain
  • Stretchy/soft shoes without pressure points to prevent blisters on soft skin
  • Can’t show sweat because POTS causes excessive sweating, also I walk a lot
  • Layers because POTS, walking, and San Francisco weather means I need to adjust my temperature a lot
  • Little or no weight on shoulders due to hypermobile shoulders
  • No tight clothes on abdomen due to pain (many EDS folks don’t have this problem but I do)
  • Soft fabric only touching skin due to sensitive easily irritated skin
  • Warm wrists to prevent hands from losing circulation due to Reynaud’s or POTS

On the other hand, I have a few things that make fashion easier for me. For starters, I can afford a four-figure annual clothing budget. I still shop a lot at thrift stores, discount stores like Ross, or discount versions of more expensive stores like Nordstrom Rack but I can afford a few expensive pieces at full price. Many of the items on this page can be found used on Poshmark, eBay, and other online used clothing marketplaces. I also recommend doing the math for “cost per wear” to figure out if you would save money if you wore a more expensive but more durable piece for a longer period of time. I usually keep clothing and shoes for several years and repair as necessary.

I currently fit within the “standard” size ranges of most clothing and shoe brands, but many of the brands I recommend here have a wider range of sizes. I’ve included the size range where relevant.

Finally, as a cis woman with an extremely femme body type, I can wear a wide range of masculine and feminine styles without being hassled in public for being gender-nonconforming (I still get hassled in public for being a woman, yay). Most of the links here are to women’s styles, but many brands also have men’s styles. (None of these brands have unisex styles that I know of.)

Shoes and socks

Shoes are my favorite part of fashion! I spend much more money on shoes than I used to because more expensive shoes are less likely to give me blisters. If I resole/reheel/polish them regularly, they can last for several years instead of a few months, so they cost the same per wear. Functional shoes are notoriously hard for EDS people to find, so the less often I have to search for new shoes, the better. I nearly always wear my shoes until they can no longer be repaired. If this post does nothing other than convince you that it is economical and wise to spend more money on shoes, I have succeeded.

Woman wearing two coats and holding two rolling bags
Via Spiga trench, Mossimo hoodie, VANELi flats, Aimee Kestenberg rolling laptop bag, Travelpro rolling bag

Smartwool black socks – My poor tender feet need cushiony socks that don’t sag or rub. Smartwool socks are expensive but last forever, and you can get them in 100% black so that you can wash them with your black clothes without covering them in little white balls. I wear mostly the men’s Walk Light Crew and City Slicker, with occasional women’s Hide and Seek No Show.

Skechers Cleo flats – These are a line of flats in a stretchy sweater-like material. The heel can be a little scratchy, but I sewed ribbon over the seam and it was fine. The BOBS line of Skechers is also extremely comfortable. Sizes 5 – 11.

VANELi flats – The sportier versions of these shoes are obscenely comfortable and also on the higher end of fashion. I wore my first pair until they had holes in the soles, and then I kept wearing them another year. I’m currently wearing out this pair. You can get them majorly discounted at DSW and similar places. Sizes 5 – 12.

Stuart Weitzman 5050 boots – These over-the-knee boots are the crown jewel of any EDS goth wardrobe. First, they are almost totally flat and roomy in the toe. Second, the elastic in the boot shaft acts like compression socks, helping with POTS. Third, they look amazing. Charlize Theron wore them in “Atomic Blonde” while performing martial arts. Angelina Jolie wears these in real life. The downside is the price, but there is absolutely no reason to pay full price. I regularly find them in Saks Off 5th for 30% off. Also, they last forever: with reheeling, my first pair lasted around three years of heavy use. Stuart Weitzman makes several other flat boots with elastic shafts which are also worth checking out, but they have been making the 5050 for around 25 years so this style should always be available. Sizes 4 – 12, runs about a half size large.

Pants/leggings/skirts

A woman wearing black leggings and a black sweater
Patty Boutik sweater, Demobaza leggings, VANELi flats

Satina high-waisted leggings – I wear these extremely cheap leggings probably five days a week under skirts or dresses. Available in two sizes, S – L and XL – XXXL. If you can wear tight clothing, you might want to check out the Spanx line of leggings (e.g. the Moto Legging) which I would totally wear if I could.

Toad & Co. Women’s Chaka skirt – I wear this skirt probably three days a week. Ridiculously comfortable and only middling expensive. Sizes XS – L.

NYDJ jeans/leggings – These are pushing it for me in terms of tightness, but I can wear them if I’m standing or walking most of the day. Expensive, but they look professional and last forever. Sizes 00 – 28, including petites, and  they run at least a size large.

Demobaza leggings – The leggings made mostly of stretch material are amazingly comfortable, but also obscenely expensive. They also last forever. Sizes XS – L.

Tops

Patty Boutik – This strange little label makes comfortable tops with long long sleeves and long long bodies, and it keeps making the same styles for years. Unfortunately, they tend to sell out of the solid black versions of my favorite tops on a regular basis. I order two or three of my favorite styles whenever they are in stock as they are reasonably cheap. I’ve been wearing the 3/4 sleeve boat neck shirt at least once a week for about 5 years now. Sizes XS – XL, tend to run a size small.

14th and Union – This label makes very simple pieces out of the most comfortable fabrics I’ve ever worn for not very much money. I wear this turtleneck long sleeve tee about once a week. I also like their skirts. Sizes XS to XL, standard and petite.

Macy’s INC – This label is a reliable source of stretchy black clothing at Macy’s prices. It often edges towards club wear but keeps the simplicity I prefer.

Coats

Mossimo hoodie – Ugh, I love this thing. It’s the perfect cheap fashion staple. I often wear it underneath other coats. Not sure about sizes since it is only available on resale sites.

Skingraft Royal Hoodie – A vastly more expensive version of the black hoodie, but still comfortable, stretchy, and washable. And oh so dramatic. Sizes XS – L.

3/4 length hooded black trench coat – Really any brand will do, but I’ve mostly recently worn out a Calvin Klein and am currently wearing a Via Spiga.

Accessories

A woman wearing all black with a fanny pack
Mossimo hoodie, Toad & Co. skirt, T Tahari fanny pack, Satina leggings, VANELi flats

Fingerless gloves – The cheaper, the better! I buy these from the tourist shops at Fisherman’s Wharf in San Francisco for under $10. I am considering these gloves from Demobaza.

Medline folding cane – Another cheap fashion staple for the EDS goth! Sturdy, adjustable, folding black cane with clean sleek lines.

T Tahari Logo Fanny Pack – I stopped being able to carry a purse right about the time fanny packs came back into style! Ross currently has an entire fanny pack section, most of which are under $13. If I’m using a backpack or the rolling laptop bag, I usually keep my wallet, phone, keys, and lipstick in the fanny pack for easy access.

Duluth Child’s Pack, Envelope style – A bit expensive, but another simple fashion staple. I used to carry the larger roll-top canvas backpack until I realized I was packing it full of stuff and aggravating my shoulders. The child’s pack barely fits a small laptop and a few accessories.

Aimee Kestenberg rolling laptop bag – For the days when I need more than I can fit in my tiny backpack and fanny pack. It has a strap to fit on to the handle of a rolling luggage bag, which is great for air travel.

Apple Watch – The easiest way to diagnose POTS! (Look up “poor man’s tilt table test.”) A great way to track your heart rate and your exercise, two things I am very focused on as someone with EDS. When your first watch band wears out, go ahead and buy a random cheap one off the Internet.

Those are my EDS goth fashion tips! If you have more, please share them in the comments.

,

Krebs on SecurityWho Owns Your Wireless Service? Crooks Do.

Incessantly annoying and fraudulent robocalls. Corrupt wireless company employees taking hundreds of thousands of dollars in bribes to unlock and hijack mobile phone service. Wireless providers selling real-time customer location data, despite repeated promises to the contrary. A noticeable uptick in SIM-swapping attacks that lead to multi-million dollar cyberheists.

If you are somehow under the impression that you — the customer — are in control over the security, privacy and integrity of your mobile phone service, think again. And you’d be forgiven if you assumed the major wireless carriers or federal regulators had their hands firmly on the wheel.

No, a series of recent court cases and unfortunate developments highlight the sad reality that the wireless industry today has all but ceded control over this vital national resource to cybercriminals, scammers, corrupt employees and plain old corporate greed.

On Tuesday, Google announced that an unceasing deluge of automated robocalls had doomed a feature of its Google Voice service that sends transcripts of voicemails via text message.

Google said “certain carriers” are blocking the delivery of these messages because all too often the transcripts resulted from unsolicited robocalls, and that as a result the feature would be discontinued by Aug. 9. This is especially rich given that one big reason people use Google Voice in the first place is to screen unwanted communications from robocalls, mainly because the major wireless carriers have shown themselves incapable or else unwilling to do much to stem the tide of robocalls targeting their customers.

AT&T in particular has had a rough month. In July, the Electronic Frontier Foundation (EFF) filed a class action lawsuit on behalf of AT&T customers in California to stop the telecom giant and two data location aggregators from allowing numerous entities — including bounty hunters, car dealerships, landlords and stalkers — to access wireless customers’ real-time locations without authorization.

And on Monday, the U.S. Justice Department revealed that a Pakistani man was arrested and extradited to the United States to face charges of bribing numerous AT&T call-center employees to install malicious software and unauthorized hardware as part of a scheme to fraudulently unlock cell phones.

Ars Technica reports the scam resulted in millions of phones being removed from AT&T service and/or payment plans, and that the accused allegedly paid insiders hundreds of thousands of dollars to assist in the process.

We should all probably be thankful that the defendant in this case wasn’t using his considerable access to aid criminals who specialize in conducting unauthorized SIM swaps, an extraordinarily invasive form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

Late last month, a federal judge in New York rejected a request by AT&T to dismiss a $224 million lawsuit over a SIM-swapping incident that led to $24 million in stolen cryptocurrency.

The defendant in that case, 21-year-old Manhattan resident Nicholas Truglia, is alleged to have stolen more than $80 million from victims of SIM swapping, but he is only one of many individuals involved in this incredibly easy, increasingly common and lucrative scheme. The plaintiff in that case alleges that he was SIM-swapped on two different occasions, both allegedly involving crooked or else clueless employees at AT&T wireless stores.

And let’s not forget about all the times various hackers figured out ways to remotely use a carrier’s own internal systems for looking up personal and account information on wireless subscribers.

So what the fresh hell is going on here? And is there any hope that lawmakers or regulators will do anything about these persistent problems? Gigi Sohn, a distinguished fellow at the Georgetown Institute for Technology Law and Policy, said the answer — at least in this administration — is probably a big “no.”

“The takeaway here is the complete and total abdication of any oversight of the mobile wireless industry,” Sohn told KrebsOnSecurity. “Our enforcement agencies aren’t doing anything on these topics right now, and we have a complete and total breakdown of oversight of these incredibly powerful and important companies.”

Aaron Mackey, a staff attorney at the EFF, said that on the location data-sharing issue, federal law already bars the wireless carriers from sharing this with third parties without the expressed consent of consumers.

“What we’ve seen is the Federal Communications Commission (FCC) is well aware of this ongoing behavior about location data sales,” Mackey said. “The FCC has said it’s under investigation, but there has been no public action taken yet and this has been going on for more than a year. The major wireless carriers are not only violating federal law, but they’re also putting people in harm’s way. There are countless stories of folks being able to pretend to be law enforcement and gaining access to information they can use to assault and harass people based on the carriers making location data available to a host of third parties.”

On the issue of illegal SIM swaps, Wired recently ran a column pointing to a solution that many carriers in Africa have implemented which makes it much more difficult for SIM swap thieves to ply their craft.

“The carrier would set up a system to let the bank query phone records for any recent SIM swaps associated with a bank account before they carried out a money transfer,” wrote Wired’s Andy Greenberg in April. “If a SIM swap had occurred in, say, the last two or three days, the transfer would be blocked. Because SIM swap victims can typically see within minutes that their phone has been disabled, that window of time let them report the crime before fraudsters could take advantage.”

For its part, AT&T says it is now offering a solution to help diminish the fallout from unauthorized SIM swaps, and that the company is planning on publishing a consumer blog on this soon. Here are some excerpts from what they sent on that front:

“Our AT&T Authentication and Verification Service, or AAVS. AAVS offers a new method to help businesses determine that you are, in fact, you,” AT&T said in a statement. “This is how it works. If a business or company builds the AAVS capability into its website or mobile app, it can automatically connect with us when you attempt to log-in. Through that connection, the number and the phone are matched to confirm the log-in. If it detects something fishy, like the SIM card not in the right device, the transaction won’t go through without further authorization.”

“It’s like an automatic background check on your phone’s history, but with no personal information changing hands, and it all happens in a flash without you knowing. Think about how you do business with companies on your mobile device now. You typically log into an online account or a mobile app using a password or fingerprint. Some tasks might require you to receive a PIN from your institution for additional security, but once you have access, you complete your transactions. With AAVS, the process is more secure, and nothing changes for you. By creating an additional layer of security without adding any steps for the consumer, we can take larger strides in helping businesses and their customers better protect their data and prevent fraud. Even if it is designed to go unnoticed, we want you to know that extra layer of protection exists.   In fact, we’re offering it to dozens of financial institutions.”

“We are working with several leading banks to roll out this service to protect their customers accessing online accounts and mobile apps in the coming months, with more to follow. By directly working with those banks, we can help to better protect your information.”

In terms of combating the deluge of robocalls, Sohn says we already have a workable approach to arresting these nuisance calls: It’s an authentication procedure known as “SHAKEN/STIR,” and it is premised on the idea that every phone has a certificate of authenticity attached to it that can be used to validate if the call is indeed originating from the number it appears to be calling from.

Under a SHAKEN/STIR regime, anyone who is spoofing their number (and most of these robocalls are spoofed to appear as though they come from a number that is in the same prefix as yours) gets automatically blocked.

“The FCC could make the carriers provide robocall apps for free to customers, but they’re not,” Sohn said. “The carriers instead are turning around and charging customers extra for this service. There was a fairly strong anti-robocalls bill that passed the House, but it’s now stuck in the legislative graveyard that is the Senate.”

AT&T said it and the other major carriers in the US are adopting SHAKEN/STIR and do not plan to charge for it. The company said it is working on building this feature into its Call Protect app, which is free and is meant to help customers block unwanted calls.

What about the prospects of any kind of major overhaul to the privacy laws in this country that might give consumers more say over who can access their private data and what recourse they may have when companies entrusted with that information screw up?

Sohn said there are few signs that anyone in Congress is seriously championing consumer privacy as a major legislative issue. Most of the nascent efforts to bring privacy laws in the United States into the 21st Century she said are interminably bogged down on two sticky issues: Federal preemption of stronger state laws, and the ability of consumers to bring a private right of civil action in the courts against companies that violate those provisions.

“It’s way past time we had a federal privacy bill,” Sohn said. “Companies like Facebook and others are practically begging for some type of regulatory framework on consumer privacy, yet this congress can’t manage to put something together. To me it’s incredible we don’t even have a discussion draft yet. There’s not even a bill that’s being discussed and debated. That is really pitiful, and the closer we get to elections, the less likely it becomes because nobody wants to do anything that upsets their corporate contributions. And, frankly, that’s shameful.”

Update, Aug. 8, 2:05 p.m. ET: Added statements and responses from AT&T.

CryptogramBrazilian Cell Phone Hack

I know there's a lot of politics associated with this story, but concentrate on the cybersecurity aspect for a moment. The cell phones of a thousand Brazilians, including senior government officials, were hacked -- seemingly by actors much less sophisticated than rival governments.

Brazil's federal police arrested four people for allegedly hacking 1,000 cellphones belonging to various government officials, including that of President Jair Bolsonaro.

Police detective João Vianey Xavier Filho said the group hacked into the messaging apps of around 1,000 different cellphone numbers, but provided little additional information at a news conference in Brasilia on Wednesday. Cellphones used by Bolsonaro were among those attacked by the group, the justice ministry said in a statement on Thursday, adding that the president was informed of the security breach.

[...]

In the court order determining the arrest of the four suspects, Judge Vallisney de Souza Oliveira wrote that the hackers had accessed Moro's Telegram messaging app, along with those of two judges and two federal police officers.

When I say that smartphone security equals national security, this is the kind of thing I am talking about.

Planet DebianDirk Eddelbuettel: RQuantLib 0.4.10: Pure maintenance

A new version 0.4.10 of RQuantLib just got onto CRAN; a Debian upload will follow in due course.

QuantLib is a very comprehensive free/open-source library for quantitative finance; RQuantLib connects it to the R environment and language.

This version does two things related to the new upstream QuantLib release 1.16. First, it updates the Windows build script in two ways: it uses binaries for the brand new 1.16 release as prepared by Jeroen, and it sets win-builder up for the current and “prospective next version”, also set up by Jeroen. I also updated the Dockerfile used for CI to pick QuantLib 1.16 from Debian’s unstable repo as it is too new to have moved to testing (which the r-base container we build on defaults to). The complete set of changes is listed below:

Changes in RQuantLib version 0.4.10 (2019-08-07)

  • Changes in RQuantLib build system:

    • The src/Makevars.win and tools/winlibs.R file get QuantLib 1.16 for either toolchain (Jeroen in #136).

    • The custom Docker container now downloads QuantLib from Debian unstable to get release 1.16 (from yesterday, no less)

Courtesy of CRANberries, there is also a diffstat report for this release. As always, more detailed information is on the RQuantLib page. Questions, comments etc should go to the new rquantlib-devel mailing list. Issue tickets can be filed at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureCodeSOD: Seven First Dates

Your programming language is PHP, which represents datetimes as milliseconds since the epoch. Your database, on the other hand, represents datetimes as seconds past the epoch. Now, your database driver certainly has methods to handle this, but can you really trust that?

Nancy found some code which simply needs to check: for the past week, how many customers arrived each day?

$customerCount = array();
$result2 = array();
$result3 = array();
$result4 = array();

$min = 20;
$max = 80;

for ( $i = $date; $i < $date + $days7 + $day; $i += $day ) {

	$first_datetime = date('Y-m-d H:i',substr($i - $day,0,-3));
	$second_datetime = date('Y-m-d H:i',substr($i,0,-3));

	$sql = $mydb ->prepare("SELECT 
								COUNT(DISTINCT Customer.ID) 'Customer'
				            FROM Customer
				                WHERE Timestamp BETWEEN %s AND %s",$first_datetime,$second_datetime);
	$output = $mydb->get_row($sql);
	array_push( $customerCount, $output->Customer == null ? 0 : $output->Customer);
}

array_push( $result4, $customerCount );
array_push( $result4, $result2 );
array_push( $result4, $result3 );

return $result4;

If you have a number of milliseconds and you wish to convert it to seconds, you might do something silly and divide by 1,000, but here we have a more obvious solution: substr the last three digits off to create our $first_datetime and $second_datetime.

Using that, we can prepare a separate query for each day, looping across them to populate $customerCount.

Once we’ve collected all the counts in $customerCount, we then push that into $result4. And then we push the empty $result2 into $result4, followed by the equally empty $result3, at which point we can finally return $result4.

There’s no $result1, but it looks like $customerCount was a renamed version of that, just by the sequence of declarations. And then $min and $max are initialized but never used, and from that, it’s very easy to understand what happened here.

The original developer copied some sample code from a tutorial, but they didn’t understand it. They knew they had a goal, and they knew that their goal was similar to the tutorial, so they just blundered about changing things until they got the results they expected.

Nancy threw all this out and replaced it with a GROUP BY query.
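
Nancy’s actual query isn’t shown; the following is just a minimal sketch of the idea, assuming MySQL (the snippet uses WordPress’s $mydb) and the Customer table and Timestamp column from the code above:

-- Count distinct customers per day for the previous seven days
-- in a single round trip, instead of one query per day.
SELECT DATE(Timestamp)      AS day,
       COUNT(DISTINCT ID)   AS customers
FROM Customer
WHERE Timestamp >= CURDATE() - INTERVAL 7 DAY
GROUP BY DATE(Timestamp)
ORDER BY day;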


,

Planet DebianJonas Meurer: debian lts report 2019.07

Debian LTS report for July 2019

This month I was allocated 17 hours. I also had 2 hours left over from June, which makes a total of 19 hours. I spent all of them on the following tasks/issues.

  • DLA-1843-1: Fixed CVE-2019-10162 and CVE-2019-10163 in pdns.
  • DLA-1852-1: Fixed CVE-2019-9948 in python3.4. Also found, debugged and fixed several further regressions in the former CVE-2019-9740 patches.
  • Improved testing of LTS uploads: We had some internal discussion in the Debian LTS team on how to improve the overall quality of LTS security uploads by doing more (semi-)automated testing of the packages before uploading them to jessie-security. I tried to summarize the internal discussion, bringing it to the public debian-lts mailinglist. I also did a lot of testing and worked on Jessie support in Salsa-CI. Now that salsa-ci-team/images MR !74 and ci-team/debci MR !89 got merged, we only have to wait for a new debci release in order to enable autopkgtest Jessie support in Salsa-CI. Afterwards, we can use the Salsa-CI pipeline for (semi-)automatic testing of packages targeted at jessie-security.

Worse Than FailureCodeSOD: Bunny Bunny

When you deploy any sort of complicated architecture, like microservices, you also end up needing to deploy some way to route messages between all the various bits and bobs in your application. You could build this yourself, but you’ll usually use an off-the-shelf product, like Kafka or RabbitMQ.

This is the world Tina lives in. They have a microservice-based architecture, glued together with a RabbitMQ server. The various microservices need to connect to the RabbitMQ server, and thus, they need to be able to check if that connection was successful.

Now, as you can imagine, that would be a built-in library method for pretty much any MQ client library, but if people used the built-in methods for common tasks, we’d have far fewer articles to write.

Tina’s co-worker solved the “am I connected?” problem thus:

def are_we_connected_to_rabbitmq():
    our_node_ip = get_server_ip_address()
    consumer_connected = False
    response = requests.get("http://{0}:{1}@{2}:15672/api/queues/{3}/{4}".format(
        self.username,
        self.password,
        self.rabbitmq_host,
        self.vhost,
        self.queue))

    if response and response.status_code == 200:
        js_response = json.loads(response.content)
        consumer_details = js_response.get('consumer_details', [])
        for consumer in consumer_details:
            peer_host = consumer.get('channel_details', {}).get(
                'peer_host')
            if peer_host == our_node_ip:
                consumer_connected = True
                break

    return consumer_connected

To check if our queue consumer has successfully connected to the queue, we send an HTTP request to one of RabbitMQ’s restful endpoints to find a list of all of the connected consumers. Then we check to see if any of those consumers has our IP address. If one does, that must be us, so we must be connected!
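
For contrast, here is a minimal sketch of the conventional approach, assuming the services use the pika client library (the article does not say which client they actually use, and the host and credentials below are made up):

import pika

# Ask the client library about the connection state instead of scraping
# the management API and matching on IP addresses.
params = pika.ConnectionParameters(
    host="rabbitmq.example.com",              # hypothetical host
    credentials=pika.PlainCredentials("user", "password"),
)
connection = pika.BlockingConnection(params)  # raises if it cannot connect
channel = connection.channel()

def are_we_connected_to_rabbitmq():
    # is_open reflects the state of the underlying AMQP connection/channel
    return connection.is_open and channel.is_open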


Planet DebianLouis-Philippe Véronneau: Paying for The Internet

For a while now, I've been paying for The Internet. Not the internet connection provided by my ISP, mind you, but for the stuff I enjoy online and the services I find useful.

Most of the Internet as we currently know it is funded by ads. I hate ads and I take a vicious pride in blocking them with the help of great projects like uBlock Origin and NoScript. More fundamentally, I believe the web shouldn't be funded via ads:

  • they control your brain (that alone should be enough to ban ads)
  • they create morally wrong economic incentives towards consumerism
  • they create important security risks and make websites gather data on you

A sticker with a feminist anti-ads message

I could go on like this, but I feel those are pretty strong arguments. Feel free to disagree.

So I've started paying. Paying for my emails. Paying for the comics I enjoy online. Paying for the few YouTube channels I like. Paying for the newspapers I read.

At the moment, The Internet costs me around 260 USD per year. Luckily for me, I'm privileged enough that it doesn't have a significant impact on my finances. I also pay for a lot of the software I use and enjoy by making patches and spending time working on them. I feel that's a valid way to make The Internet a more sustainable place.

I don't think individual actions like this one have a very profound impact on how things work, but like riding your bike to work or eating locally produced organic food, it opens a window into a possible future. A better future.

Planet DebianDirk Eddelbuettel: #23: Debugging with Docker and Rocker – A Concrete Example helping on macOS

Welcome to the 23rd post in the rationally reasonable R rants series, or R4 for short. Today’s post was motivated by an exchange on the r-devel list earlier in the day, and a few subsequent off-list emails.

Roger Koenker posted a question: how to best debug an issue arising only with gfortran-9 which is difficult to get hold of on his macOS development platform. Some people followed up, and I mentioned that I had good success using Docker, and particularly our Rocker containers—and outlined a quick mini-tutorial (which had one mini-typo lacking the important slash in -w /work). Roger and I followed up over a few more off-list emails, and by and large this worked for him.

So what follows below is a jointly written / edited ‘mini HOWTO’ of how to deploy Docker on macOS for debugging under particular toolchains more easily available on Linux. Usage on Windows and Linux should be very similar, though the initial install differs. In fact, I frequently debug or test in Docker sessions when I do not want to install on my Linux host system. Roger sent one version (I had also edited) back to the list. What follows is my final version.

Debugging with Docker: Getting Hold of Particular Compilers

Context: The quantreg package was seen exhibiting errors when compiled with gfortran-9. The following shows how to use gfortran-9 on macOS by virtue of Docker. It is written in Roger Koenker’s voice, but authored by Roger and myself.

With extensive help from Dirk Eddelbuettel I have installed docker on my mac mini from

https://hub.docker.com/editions/community/docker-ce-desktop-mac

which installs from a dmg in quite standard fashion. This has allowed me to simulate running R in a Debian environment with gfortran-9 and begin the process of debugging my ancient rqbr.f code.

Some further details:

Step 0: Install Docker and Test

Install Docker for macOS following this Docker guide. Do some initial testing, e.g.
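
The test commands themselves did not survive syndication here; a reconstruction of a typical first smoke test (not necessarily the exact commands used) would be:

docker --version
docker run --rm hello-world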

Step 1: Download r-base and test OS

We use the plainest Rocker container, rocker/r-base, in the aliased form of the official Docker container for R, i.e. r-base. We first ‘pull’, then test the version and drop into bash as a second test.
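
The commands were likewise dropped from this copy; a plausible reconstruction of the pull, version check and bash test just described:

docker pull r-base
docker run --rm -ti r-base R --version
docker run --rm -ti r-base bash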

Step 2: Setup the working directory

We tell Docker to run from the current directory and access the files therein. For the work on the quantreg package this is projects/rq for Roger.
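
The invocation itself is missing from this copy; given the description and the -w /work flag mentioned earlier, it would have looked roughly like this (projects/rq being Roger's local checkout):

cd projects/rq
docker run --rm -ti -v "$(pwd)":/work -w /work r-base bash
# the prompt then changes to something like root@<container-id>:/work#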

This puts the contents of projects/rq into the /work directory, and starts the session in /work (as can be seen from the prompt).

Next, we update the package information inside the container:
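
The command itself was dropped from this copy; inside the container it is the standard Debian call:

root@90521904fa86:/work# apt-get update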

Step 3: Install gcc-9 and gfortran-9

root@90521904fa86:/work# apt-get install gcc-9 gfortran-9
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following additional packages will be installed:
cpp-9 gcc-9-base libasan5 libatomic1 libcc1-0 libgcc-9-dev libgcc1 libgfortran-9-dev
libgfortran5 libgomp1 libitm1 liblsan0 libquadmath0 libstdc++6 libtsan0 libubsan1
Suggested packages:
gcc-9-locales gcc-9-multilib gcc-9-doc libgcc1-dbg libgomp1-dbg libitm1-dbg libatomic1-dbg
libasan5-dbg liblsan0-dbg libtsan0-dbg libubsan1-dbg libquadmath0-dbg gfortran-9-multilib
gfortran-9-doc libgfortran5-dbg libcoarrays-dev
The following NEW packages will be installed:
cpp-9 gcc-9 gfortran-9 libgcc-9-dev libgfortran-9-dev
The following packages will be upgraded:
gcc-9-base libasan5 libatomic1 libcc1-0 libgcc1 libgfortran5 libgomp1 libitm1 liblsan0
libquadmath0 libstdc++6 libtsan0 libubsan1
13 upgraded, 5 newly installed, 0 to remove and 71 not upgraded.
Need to get 35.6 MB of archives.
After this operation, 107 MB of additional disk space will be used.
Do you want to continue? [Y/n] Y
Get:1 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libasan5 amd64 9.1.0-10 [390 kB]
Get:2 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libubsan1 amd64 9.1.0-10 [128 kB]
Get:3 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libtsan0 amd64 9.1.0-10 [295 kB]
Get:4 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9-base amd64 9.1.0-10 [190 kB]
Get:5 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libstdc++6 amd64 9.1.0-10 [500 kB]
Get:6 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libquadmath0 amd64 9.1.0-10 [145 kB]
Get:7 http://cdn-fastly.deb.debian.org/debian testing/main amd64 liblsan0 amd64 9.1.0-10 [137 kB]
Get:8 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libitm1 amd64 9.1.0-10 [27.6 kB]
Get:9 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgomp1 amd64 9.1.0-10 [88.1 kB]
Get:10 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran5 amd64 9.1.0-10 [633 kB]
Get:11 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libcc1-0 amd64 9.1.0-10 [47.7 kB]
Get:12 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libatomic1 amd64 9.1.0-10 [9,012 B]
Get:13 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc1 amd64 1:9.1.0-10 [40.5 kB]
Get:14 http://cdn-fastly.deb.debian.org/debian testing/main amd64 cpp-9 amd64 9.1.0-10 [9,667 kB]
Get:15 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgcc-9-dev amd64 9.1.0-10 [2,346 kB]
Get:16 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gcc-9 amd64 9.1.0-10 [9,945 kB]
Get:17 http://cdn-fastly.deb.debian.org/debian testing/main amd64 libgfortran-9-dev amd64 9.1.0-10 [676 kB]
Get:18 http://cdn-fastly.deb.debian.org/debian testing/main amd64 gfortran-9 amd64 9.1.0-10 [10.4 MB]
Fetched 35.6 MB in 6s (6,216 kB/s)      
debconf: delaying package configuration, since apt-utils is not installed
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libasan5_9.1.0-10_amd64.deb ...
Unpacking libasan5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libubsan1_9.1.0-10_amd64.deb ...
Unpacking libubsan1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../libtsan0_9.1.0-10_amd64.deb ...
Unpacking libtsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../gcc-9-base_9.1.0-10_amd64.deb ...
Unpacking gcc-9-base:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up gcc-9-base:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../libstdc++6_9.1.0-10_amd64.deb ...
Unpacking libstdc++6:amd64 (9.1.0-10) over (9.1.0-8) ...
Setting up libstdc++6:amd64 (9.1.0-10) ...
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../0-libquadmath0_9.1.0-10_amd64.deb ...
Unpacking libquadmath0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../1-liblsan0_9.1.0-10_amd64.deb ...
Unpacking liblsan0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../2-libitm1_9.1.0-10_amd64.deb ...
Unpacking libitm1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../3-libgomp1_9.1.0-10_amd64.deb ...
Unpacking libgomp1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../4-libgfortran5_9.1.0-10_amd64.deb ...
Unpacking libgfortran5:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../5-libcc1-0_9.1.0-10_amd64.deb ...
Unpacking libcc1-0:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../6-libatomic1_9.1.0-10_amd64.deb ...
Unpacking libatomic1:amd64 (9.1.0-10) over (9.1.0-8) ...
Preparing to unpack .../7-libgcc1_1%3a9.1.0-10_amd64.deb ...
Unpacking libgcc1:amd64 (1:9.1.0-10) over (1:9.1.0-8) ...
Setting up libgcc1:amd64 (1:9.1.0-10) ...
Selecting previously unselected package cpp-9.
(Reading database ... 17787 files and directories currently installed.)
Preparing to unpack .../cpp-9_9.1.0-10_amd64.deb ...
Unpacking cpp-9 (9.1.0-10) ...
Selecting previously unselected package libgcc-9-dev:amd64.
Preparing to unpack .../libgcc-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgcc-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gcc-9.
Preparing to unpack .../gcc-9_9.1.0-10_amd64.deb ...
Unpacking gcc-9 (9.1.0-10) ...
Selecting previously unselected package libgfortran-9-dev:amd64.
Preparing to unpack .../libgfortran-9-dev_9.1.0-10_amd64.deb ...
Unpacking libgfortran-9-dev:amd64 (9.1.0-10) ...
Selecting previously unselected package gfortran-9.
Preparing to unpack .../gfortran-9_9.1.0-10_amd64.deb ...
Unpacking gfortran-9 (9.1.0-10) ...
Setting up libgomp1:amd64 (9.1.0-10) ...
Setting up libasan5:amd64 (9.1.0-10) ...
Setting up libquadmath0:amd64 (9.1.0-10) ...
Setting up libatomic1:amd64 (9.1.0-10) ...
Setting up libgfortran5:amd64 (9.1.0-10) ...
Setting up libubsan1:amd64 (9.1.0-10) ...
Setting up cpp-9 (9.1.0-10) ...
Setting up libcc1-0:amd64 (9.1.0-10) ...
Setting up liblsan0:amd64 (9.1.0-10) ...
Setting up libitm1:amd64 (9.1.0-10) ...
Setting up libtsan0:amd64 (9.1.0-10) ...
Setting up libgcc-9-dev:amd64 (9.1.0-10) ...
Setting up gcc-9 (9.1.0-10) ...
Setting up libgfortran-9-dev:amd64 (9.1.0-10) ...
Setting up gfortran-9 (9.1.0-10) ...
Processing triggers for libc-bin (2.28-10) ...
root@90521904fa86:/work# pwd

Here filenames and versions reflect the Debian repositories as of today, August 5, 2019. While minor details may change at a future point in time, the key fact is that we get the components we desire via a single call, as the Debian system has a well-honed package system.

Step 4: Prepare Package

At this point Roger removed some dependencies from the package quantreg that he knew were not relevant to the debugging problem at hand.

Step 5: Set Compiler Flags

Next, set compiler flags as follows:

adding the values

CC=gcc-9
FC=gfortran-9
F77=gfortran-9

to the file. Alternatively, one can find the settings of CC, FC, CXX, … in /etc/R/Makeconf (which for the Debian package is a softlink to R’s actual Makeconf) and alter them there.
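
This copy does not name the file being edited; the conventional per-user location for such overrides is ~/.R/Makevars (my assumption, not stated in the surviving text), which can be created inside the container like so:

mkdir -p ~/.R
cat > ~/.R/Makevars <<'EOF'
CC=gcc-9
FC=gfortran-9
F77=gfortran-9
EOF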

Step 6: Install the Source Package

Now run

which uses the gfortran-9 compiler, and this version did reproduce the error initially reported by the CRAN maintainers.
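
The command elided above is not preserved here; it was presumably an installation of the quantreg source package from inside the container, along the lines of (tarball name hypothetical):

R CMD INSTALL quantreg_x.y.tar.gz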

Step 7: Debug!

With the tools in place and the bug reproduced, it is (just!) a matter of finding the bug and fixing it.

And that concludes the tutorial.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Cory DoctorowPodcast: “IBM PC Compatible”: how adversarial interoperability saved PCs from monopolization

In my latest podcast (MP3), I read my essay “IBM PC Compatible”: how adversarial interoperability saved PCs from monopolization, published today on EFF’s Deeplinks; it’s another installment in my series about “adversarial interoperability,” and the role it has historically played in keeping tech open and competitive. This time, I relate the origin story of the “PC compatible” computer, with help from Tom Jennings (inventor of FidoNet!) who played a key role in the story.

All that changed in 1981, when IBM entered the PC market with its first personal computer, which quickly became the de facto standard for PC hardware. There are many reasons that IBM came to dominate the fragmented PC market: they had the name recognition (“No one ever got fired for buying IBM,” as the saying went) and the manufacturing experience to produce reliable products.

Equally important was IBM’s departure from its usual business practice of pursuing advantage by manufacturing entire systems, down to the subcomponents. Instead, IBM decided to go with an “open” design that incorporated the same commodity parts that the existing PC vendors were using, including MS-DOS and Intel’s 8086 chip. To accompany this open hardware, IBM published exhaustive technical documentation that covered every pin on every chip, every way that programmers could interact with IBM’s firmware (analogous to today’s “APIs”), as well as all the non-standard specifications for its proprietary ROM chip, which included things like the addresses where IBM had stored the fonts it bundled with the system.

Once IBM’s PC became the standard, rival hardware manufacturers realized that they had to create systems that were compatible with IBM’s systems. The software vendors were tired of supporting a lot of idiosyncratic hardware configurations, and IT managers didn’t want to have to juggle multiple versions of the software they relied on. Unless non-IBM PCs could run software optimized for IBM’s systems, the market for those systems would dwindle and wither.

MP3

Planet DebianRomain Perier: Free software activities in May, June and July 2019

Hi Planet, it has been a long time since my last post.
Here is an update covering what I have been doing in my free software activities during May, June and July 2019.

May

Only contributions related to Debian were done in May
  •  linux: Update to 5.1 (including porting of all debian patches to the new release)
  • linux: Update to 5.1.2
  • linux: Update to 5.1.3
  • linux: Update to 5.1.5
  • firmware-nonfree: misc-nonfree: Add GV100 signed firmwares (Closes: #928672)

June

Debian

  • linux: Update to 5.1.7
  • linux: Update to 5.1.8
  • linux: Update to 5.1.10
  • linux: Update to 5.1.11
  • linux: Update to 5.1.15
  • linux: [sparc64] Fix device naming inconsistency between sunhv_console and sunhv_reg (Closes: #926539)
  • raspi3-firmware:  New upstream version 1.20190517
  • raspi3-firmware: New upstream version 1.20190620+1

Kernel Self Protection Project

I have recently joined the Kernel Self Protection Project, which basically intends to harden the mainline Linux kernel as much as possible by adding subsystems that improve security or make internal subsystems more robust against common errors that might lead to security issues.

As a first contribution, Kees Cook asked me to check all the NLA_STRING attributes for non-nul-terminated strings. Internal functions of NLA attrs expect to have standard nul-terminated strings and use standard string functions like strcmp() or equivalent. A few drivers were using non-nul-terminated strings in some cases, which might lead to buffer overflows. I have checked all the NLA_STRING uses in all drivers and forwarded a status for all of these. Everything was already fixed in linux-next (hopefully).

July

Debian

  • linux: Update to 5.1.16
  • linux: Update to 5.2-rc7 (including porting of all debian patches to the new release)
  • linux: Update to 5.2
  • linux: Update to 5.2.1
  • linux: [rt] Update to 5.2-rt1
  • linux: Update to 5.2.4
  • ethtool: New upstream version 5.2
  • raspi3-firmware: Fixed lintian warnings about the binary blobs for the Raspberry Pi 4
  • raspi3-firmware: New upstream version 1.20190709
  • raspi3-firmware: New upstream version 1.20190718
The following CVEs are for buster-security:
  • linux: [x86] x86/insn-eval: Fix use-after-free access to LDT entry (CVE-2019-13233)
  • linux: [powerpc*] mm/64s/hash: Reallocate context ids on fork (CVE-2019-12817)
  • linux: nfc: Ensure presence of required attributes in the deactivate_target handler (CVE-2019-12984)
  • linux: binder: fix race between munmap() and direct reclaim (CVE-2019-1999)
  • linux: scsi: libsas: fix a race condition when smp task timeout (CVE-2018-20836)
  • linux: Input: gtco - bounds check collection indent level (CVE-2019-13631)

Kernel Self Protection Project

I am currently improving the API of the internal kernel subsystem "tasklet". This is an old API and, like "timer", it has several limitations regarding the way information is passed to the callback handler. A future patch set will be sent upstream; I will probably write a blog post about it.

Planet DebianReproducible Builds: Reproducible Builds in July 2019

Welcome to the July 2019 report from the Reproducible Builds project!

In these reports we outline the most important things that we have been up to over the past month. As a quick recap, whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed to end users as pre-compiled binaries.

The motivation behind the reproducible builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

In July’s report, we cover:

  • Front pageMedia coverage, upstream news, etc.
  • Distribution workShenanigans at DebConf19
  • Software developmentSoftware transparency, yet more diffoscope work, etc.
  • On our mailing listGNU tools, education and buildinfo files
  • Getting in touch… and how to contribute

If you are interested in contributing to our project, we enthusiastically invite you to visit our Contribute page on our website.


Front page

Nico Alt wrote a detailed and well-researched article titled “Trust is good, control is better”, which discusses Reproducible Builds in F-Droid, the alternative application repository for Android mobile phones. In contrast to the bigger commercial app stores, F-Droid only offers apps that are free and open source software. The post not only demonstrates using diffoscope but talks more generally about how reproducible builds can prevent single developers or other important centralised infrastructure from becoming targets for toolchain-based attacks.

Later in the month, F-Droid’s aforementioned reproducibility status was mentioned on episode 68 of the Late Night Linux podcast. (direct link to 14:12)

Morten (“Foxboron”) Linderud published his academic thesis “Reproducible Builds: break a log, good things come in trees” which investigates and describes how transparency log overlays can provide additional security guarantees for computers automatically producing software packages. The thesis was part of Morten’s studies at the University of Bergen, Norway and is an extension of the work New York University Tandon School of Engineering has been doing with package rebuilder integration in APT.

Mike Hommey posted to his blog about Reproducing the Linux builds of Firefox 68 which leverages that builds shipped by Mozilla should be reproducible from this version. He discusses the problems caused by the builds being optimised with Profile-Guided Optimisation (PGO) but armed with the now-published profiling data, Mike provides Docker-based instructions how to reproduce the published builds yourself.

Joel Galenson has been making progress on a reproducible Rust compiler which includes support for a --remap-path-prefix argument, related to the concepts and problems involved in the BUILD_PATH_PREFIX_MAP proposal to fix issues with build paths being embedded in binaries.

Lastly, Alessio Treglia posted to their blog about Cosmos Hub and Reproducible Builds which describes the reproducibility work happening in the Cosmos Hub, a network of interconnected blockchains. Specifically, Alessio talks about work being done on the Gaia development kit for the Hub.



Distribution work

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. Enabling Link Time Optimization (LTO) in this distribution’s “Tumbleweed” branch caused multiple issues due to the number of cores on the build host being added to the CFLAGS variable. This affected, for example, a debuginfo/rpm header, and resulted in CFLAGS appearing in built binaries such as fldigi, gmp, haproxy, etc.

As highlighted in last month’s report, the OpenWrt project (a Linux operating system targeting embedded devices such as wireless network routers) hosted a summit in Hamburg, Germany. Their full summit report and roundup is now available that covers many general aspects within that distribution, including the work on reproducible builds that was done during the event.

Debian

It was an extremely productive time in Debian this month in and around DebConf19, the 20th annual conference for both contributors and users, which was held at the Federal University of Technology in Paraná (UTFPR) in Curitiba, Brazil, from July 21 to 28. The conference was preceded by “DebCamp” from the 14th until the 19th, with an additional “Open Day” targeted at the more-general public on the 20th.

There were a number of talks touching on the topic of reproducible builds and secure toolchains throughout the conference, including:

There were naturally countless discussions regarding Reproducible Builds in and around the conference on the questions of tooling, infrastructure and our next steps as a project.

The release of Debian 10 buster has also meant the release cycle for the next release (codenamed “bullseye”) has just begun. As part of this, the Release Team recently announced that Debian will no longer allow binaries built and uploaded by maintainers on their own machines to be part of the upcoming release. This is great news not only for toolchain security in general but also in that it will ensure that all binaries that will form part of this release will likely have a .buildinfo file and thus metadata that could be used by others to reproduce and verify the builds.

Holger Levsen filed a bug against the underlying tool that maintains the Debian archive (“dak”) after he noticed that .buildinfo metadata files were not being automatically propagated if packages had to be manually approved or processed in the so-called “NEW queue”. After it was pointed out that the files were being retained in a separate location, Benjamin Hof proposed a potential patch for the issue which is pending review.

David Bremner posted to his blog about “Yet another buildinfo database” that provides a SQL interface for querying .buildinfo attestation documents, particularly focusing on identifying packages that were built with a specific — and possibly buggy — build-dependency. Later at DebConf, David demonstrated his tool live (starting at 36:30).

Ivo de Decker (“ivodd”) scheduled rebuilds of over 600 packages that last experienced an upload to the archive in December 2016 or earlier. This was so that they would be built using a version of the low-level dpkg package build tool that supports the generation of reproducible binary packages. The effect of this on the main archive will be deliberately staggered and thus visible throughout the upcoming weeks, potentially resulting in some of these packages now failing to build.

Joaquin de Andres posted an update regarding the work being done on continuous integration on Debian’s GitLab instance at DebConf19 in which he mentions, inter alia, a tool called atomic-reprotest. This is a relatively new utility to help debug failures logged by our reprotest tool which attempts to test whether a build is reproducible or not. This tool was also mentioned in a subsequent lightning talk.

Chris Lamb filed two bugs to drop the test jobs for both strip-nondeterminism (#932366) and reprotest (#932374) after modifying them to build on the Salsa server’s own continuous integration platform and Holger Levsen shortly resolved them.

Lastly, 63 reviews of Debian packages were added, 72 were updated and 22 were removed this month, adding to our large knowledge about identified issues. Chris Lamb added and categorised four new issue types, umask_in_java_jar_file, built_by-in_java_manifest_mf, timestamps_in_manpages_generated_by_lopsubgen and codadef_coda_data_files.


Software development

The goal of Benjamin Hof’s Software Transparency effort is to improve on the cryptographic signatures of the APT package manager by introducing a Merkle tree-based transparency log for package metadata and source code, in a similar vein to certificate transparency. This month, he pushed a number of repositories to our revision control system for further future development and review.

In addition, Bernhard M. Wiedemann updated his (deliberately) unreproducible demonstration project to add support for floating point variations as well as changes in the project’s copyright year.

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of such patches, including:

Neal Gompa, Michael Schröder & Miro Hrončok responded to Fedora’s recent change to rpm-config with some new developments within rpm to fix an unreproducible “Build Date” and reverted a change to the Python interpreter to switch back to unreproducible/time-based compile caches.

Lastly, kpcyrd submitted a pull request for Alpine Linux to add SOURCE_DATE_EPOCH support to the abuild build tool in this operating system.


diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.

This month, Chris Lamb made the following changes:

  • Add support for Java .jmod modules (#60). However, not all versions of file(1) support detection of these files yet, so we perform a manual comparison instead [].
  • If a command fails to execute but does not print anything to standard error, try and include the first line of standard output in the message we include in the difference. This was motivated by readelf(1) returning its error messages on standard output. (#59) []
  • Add general support for file(1) 5.37 (#57) but also adjust the code to not fail in tests when, eg, we do not have sufficiently newer or older version of file(1) (#931881).
  • Factor out the ability to ignore the exit codes of zipinfo and zipinfo -v in the presence of non-standard headers. [] but only override the exit code from our special-cased calls to zipinfo(1) if they are 1 or 2 to avoid potentially masking real errors [].
  • Cease ignoring test failures in stable-backports. []
  • Add missing textual DESCRIPTION headers for .zip and “Mozilla”-optimised .zip files. []
  • Merge two overlapping environment variables into a single DIFFOSCOPE_FAIL_TESTS_ON_MISSING_TOOLS. []
  • Update some reporting:
    • Re-add “return code” noun to “Command foo exited with X” error messages. []
    • Use repr(..)-style output when printing DIFFOSCOPE_TESTS_FAIL_ON_MISSING_TOOLS in skipped test rationale text. []
    • Skip the extra newline in Output:\nfoo. []
  • Add some explicit return values to appease Pylint, etc. []
  • Also include the python3-tlsh in the Debian test dependencies. []
  • Released and uploaded versions 116, 117, 118, 119 & 120. [][][][][]

In addition, Marc Herbert provided a patch to catch failures to disassemble ELF binaries. []


Project website

There was yet more effort put into our website this month, including:

  • Bernhard M. Wiedemann:
    • Update multiple works to use standard (or at least consistent) terminology. []
    • Document an alternative Python snippet in the SOURCE_DATE_EPOCH examples. []
  • Chris Lamb:
    • Split out our non-fiscal sponsors with a description [] and make them non-display three-in-a-row [].
    • Correct references to 1&1 IONOS (née Profitbricks). []
    • Reduce ambiguity in our environment names. []
    • Recreate the badge image, saving the .svg alongside it. []
    • Update our fiscal sponsors. [][][]
    • Tidy the weekly reports section on the news page [], fixup the typography on the documentation page [] and make all headlines stand out a bit more [].
    • Drop some old CSS files and fonts. []
    • Tidy news page a bit. []
    • Fixup a number of issues in the report template and previous reports. [][][][][][]

Holger Levsen also added explanations on how to install diffoscope on OpenBSD [] and FreeBSD [] to its homepage and Arnout Engelen added a preliminary and work-in-progress idea for a badge or “shield” program for upstream projects. [][][].

A special thank you to Alexander Borkowski [], Georg Faerber [], and John Scott [] for their individual fixes. To err is human; to reproduce, divine.


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Niko Tyni provided a patch to use the Perl Sub::Override library for some temporary workarounds for issues in Archive::Zip instead of Monkey::Patch which was due for deprecation. [].

In addition, Chris Lamb made the following changes:

  • Identify data files from the COmmon Data Access (CODA) framework as being .zip files. []
  • Support OpenJDK “.jmod” files. []
  • Pass --no-sandbox if necessary to bypass seccomp-enabled version of file(1) which was causing a huge number of regressions in our testing framework.
  • Don’t just run the tests but build the Debian package instead using Salsa’s centralised scripts so that we get code coverage, Lintian, autopkgtests, etc. [][]
  • Update tests:
    • Don’t build release Git tags on salsa.debian.org. []
    • Merge the debian branch into the master branch to simplify testing and deployment [] and update debian/gbp.conf to match [].
  • Drop misleading and outdated MANIFEST and MANIFEST.SKIP files as they are not used by our release process. []


Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org. The following changes were performed in the last month:

  • Holger Levsen:
    • Debian-specific changes:
      • Make a large number of adjustments to support the new Debian bullseye distribution and the release of buster. [][][][][][][] [][][][]
      • Fix the colours for the five suites now being built. []
      • Make a number of code improvements to the calculation of our “metapackage” sets including refactoring and changes of email address, etc. [][][][][]
      • Add the “http-proxy” variable to the displayed node info. []
    • Alpine changes:
      • Rebuild the webpages every two hours (instead of twice per hour). []
    • Reproducible tooling:
      • Fix the detection of version number in Arch Linux. []
      • Drop reprotest and strip-nondeterminism jobs as we run that via Salsa CI now. [][]
      • Add a link to current SQL database schema. []
  • Mattia Rizzolo:
    • Make a number of adjustments to support the new Debian bullseye distribution. [][][][]
    • Ensure that our arm64 hosts always trust the Debian archive keyring. []
    • Enable the backports repositories on the arm64 build hosts. []

Holger Levsen [][][] and Mattia Rizzolo [][][] performed the usual node maintenance and lastly, Vagrant Cascadian added support to generate a reproducible-tracker.json metadata file for the next release of Debian (bullseye). []


On the mailing list

Chris Lamb cross-posted his reply to the “Re: file(1) now with seccomp support enabled” thread that was originally started on the debian-devel Debian list. In his post, he refers to strip-nondeterminism not being able to accommodate the additional security hardening in file(1), and to the changes made to the tool in order to fix this issue, which was causing a huge number of regressions in our testing framework.

Matt Bearup wrote about his experience when he generated different checksums for the libgcrypt20 package, which resulted in some pointers that one should use the corresponding .buildinfo post-build certificate when attempting to reproduce any particular build.

Vagrant Cascadian posted a request for comments regarding a potential proposal to the GNU Tools “Cauldron” gathering to be held in Montréal, Canada during September 2019 and Bernhard M. Wiedemann posed a query about using consistent terms on our webpages and elsewhere.

Lastly, in a thread titled “Reproducible Builds - aiming for bullseye: comments and a purpose” Jathan asked about whether we had considered offering “101”-like beginner sessions to fix packages that are not currently reproducible.



Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

This month’s report was written by Benjamin Hof, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

CryptogramRegulating International Trade in Commercial Spyware

Siena Anstis, Ronald J. Deibert, and John Scott-Railton of Citizen Lab published an editorial calling for regulating the international trade in commercial surveillance systems until we can figure out how to curb human rights abuses.

Any regime of rigorous human rights safeguards that would make a meaningful change to this marketplace would require many elements, for instance, compliance with the U.N. Guiding Principles on Business and Human Rights. Corporate tokenism in this space is unacceptable; companies will have to affirmatively choose human rights concerns over growing profits and hiding behind the veneer of national security. Considering the lies that have emerged from within the surveillance industry, self-reported compliance is insufficient; compliance will have to be independently audited and verified and accept robust measures of outside scrutiny.

The purchase of surveillance technology by law enforcement in any state must be transparent and subject to public debate. Further, its use must comply with frameworks setting out the lawful scope of interference with fundamental rights under international human rights law and applicable national laws, such as the "Necessary and Proportionate" principles on the application of human rights to surveillance. Spyware companies like NSO Group have relied on rubber stamp approvals by government agencies whose permission is required to export their technologies abroad. To prevent abuse, export control systems must instead prioritize a reform agenda that focuses on minimizing the negative human rights impacts of surveillance technology and that ensures -- with clear and immediate consequences for those who fail -- that companies operate in an accountable and transparent environment.

Finally, and critically, states must fulfill their duty to protect individuals against third-party interference with their fundamental rights. With the growth of digital authoritarianism and the alarming consequences that it may hold for the protection of civil liberties around the world, rights-respecting countries need to establish legal regimes that hold companies and states accountable for the deployment of surveillance technology within their borders. Law enforcement and other organizations that seek to protect refugees or other vulnerable persons coming from abroad will also need to take digital threats seriously.

Krebs on SecurityThe Risk of Weak Online Banking Passwords

If you bank online and choose weak or re-used passwords, there’s a decent chance your account could be pilfered by cyberthieves — even if your bank offers multi-factor authentication as part of its login process. This story is about how crooks increasingly are abusing third-party financial aggregation services like Mint, Plaid, Yodlee, YNAB and others to surveil and drain consumer accounts online.

Crooks are constantly probing bank Web sites for customer accounts protected by weak or recycled passwords. Most often, the attacker will use lists of email addresses and passwords stolen en masse from hacked sites and then try those same credentials to see if they permit online access to accounts at a range of banks.

A screenshot of a password-checking tool being used to target Chase Bank customers who re-use passwords from other sites. Image: Hold Security.

From there, thieves can take the list of successful logins and feed them into apps that rely on application programming interfaces (API)s from one of several personal financial data aggregators which help users track their balances, budgets and spending across multiple banks.

A number of banks that do offer customers multi-factor authentication — such as a one-time code sent via text message or an app — have chosen to allow these aggregators the ability to view balances and recent transactions without requiring that the aggregator service supply that second factor. That’s according to Brian Costello, vice president of data strategy at Yodlee, one of the largest financial aggregator platforms.

Costello said while some banks have implemented processes which pass through multi-factor authentication (MFA) prompts when consumers wish to link aggregation services, many have not.

“Because we have become something of a known quantity with the banks, we’ve set up turning off MFA with many of them,” Costello said.  “Many of them are substituting coming from a Yodlee IP or agent as a factor because banks have historically been relying on our security posture to help them out.”

Such reconnaissance helps lay the groundwork for further attacks: If the thieves are able to access a bank account via an aggregator service or API, they can view the customer’s balance(s) and decide which customers are worthy of further targeting.

This targeting can occur in at least one of two ways. The first involves spear phishing attacks to gain access to that second authentication factor, which can be made much more convincing once the attackers have access to specific details about the customer’s account — such as recent transactions or account numbers (even partial account numbers).

The second is through an unauthorized SIM swap, a form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

But beyond targeting customers for outright account takeovers, the data available via financial aggregators enables a far more insidious type of fraud: The ability to link the target’s bank account(s) to other accounts that the attackers control.

That’s because PayPal, Zelle, and a number of other pure-play online financial institutions allow customers to link accounts by verifying the value of microdeposits. For example, if you wish to be able to transfer funds between PayPal and a bank account, the company will first send a couple of tiny deposits  — a few cents, usually — to the account you wish to link. Only after verifying those exact amounts will the account-linking request be granted.

Alex Holden is founder and chief technology officer of Hold Security, a Milwaukee-based security consultancy. Holden and his team closely monitor the cybercrime forums, and he said the company has seen a number of cybercriminals discussing how the financial aggregators are useful for targeting potential victims.

Holden said it’s not uncommon for thieves in these communities to resell access to bank account balance and transaction information to other crooks who specialize in cashing out such information.

“The price for these details is often very cheap, just a fraction of the monetary value in the account, because they’re not selling ‘final’ access to the account,” Holden said. “If the account is active, hackers then can go to the next stage for 2FA phishing or social engineering, or linking the accounts with another.”

Currently, the major aggregators and/or applications that use those platforms store bank logins and interactively log in to consumer accounts to periodically sync transaction data. But most of the financial aggregator platforms are slowly shifting toward using the OAuth standard for logins, which can give banks a greater ability to enforce their own fraud detection and transaction scoring systems when aggregator systems and apps are initially linked to a bank account.

That’s according to Don Cardinal, managing director of the Financial Data Exchange (FDX), which is seeking to unite the financial industry around a common, interoperable, and royalty-free standard for secure consumer and business access to their financial data.

“This is where we’re going,” Cardinal said. “The way it works today, you the aggregator or app stores the credentials encrypted and presents them to the bank. What we’re moving to is [an account linking process] that interactively loads the bank’s Web site, you login there, and the site gives the aggregator an OAuth token. In that token granting process, all the bank’s fraud controls are then direct to the consumer.”

Alissa Knight, a senior analyst with the Aite Group, a financial and technology analyst firm, said such attacks highlight the need to get rid of passwords altogether. But until such time, she said, more consumers should take full advantage of the strongest multi-factor authentication option offered by their bank(s), and consider using a password manager, which helps users pick and remember strong and unique passwords for each Web site.

“This is just more empirical data around the fact that passwords just need to go away,” Knight said. “For now, all the standard precautions we’ve been giving consumers for years still stand: Pick strong passwords, avoid re-using passwords, and get a password manager.”

Some of the most popular password managers include 1Password, Dashlane, LastPass and Keepass. Wired.com recently published a worthwhile writeup which breaks down each of these based on price, features and usability.

CryptogramWanted: Cybersecurity Imagery

Eli Sugarman of the Hewlett Foundation laments the sorry state of cybersecurity imagery:

The state of cybersecurity imagery is, in a word, abysmal. A simple Google Image search for the term proves the point: It's all white men in hoodies hovering menacingly over keyboards, green "Matrix"-style 1s and 0s, glowing locks and server racks, or some random combination of those elements -- sometimes the hoodie-clad men even wear burglar masks. Each of these images fails to convey anything about either the importance or the complexity of the topic­ -- or the huge stakes for governments, industry and ordinary people alike inherent in topics like encryption, surveillance and cyber conflict.

I agree that this is a problem. It's not something I noticed until recently. I work in words. I think in words. I don't use PowerPoint (or anything similar) when I give presentations. I don't need visuals.

But recently, I started teaching at the Harvard Kennedy School, and I constantly use visuals in my class. I made those same image searches, and I came up with similarly unacceptable results.

But unlike me, Hewlett is doing something about it. You can help: participate in the Cybersecurity Visuals Challenge.

EDITED TO ADD (8/5): News article. Slashdot thread.

Worse Than FailureCodeSOD: A Truly Painful Exchange

Java has a boolean type, and of course it also has a parseBoolean method, which works about how you'd expect. It's worth noting that a string "true" (ignoring capitalization) is the only thing which is considered true, and all other inputs are false. This does mean that you might not always get the results you want, depending on your inputs, so you might need to make your own boolean parser.
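
A quick throwaway illustration of that behaviour (my example, not from the article):

class ParseBooleanDemo {
    public static void main(String[] args) {
        System.out.println(Boolean.parseBoolean("true"));  // true
        System.out.println(Boolean.parseBoolean("TRUE"));  // true, the comparison ignores case
        System.out.println(Boolean.parseBoolean("yes"));   // false, anything other than "true" is false
        System.out.println(Boolean.parseBoolean(null));    // false, and no exception is thrown
    }
}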

Adam H has received the gift of legacy code. In this case, the code was written circa 2002, and the process has been largely untouched since. An outside vendor uploads an Excel spreadsheet to an FTP site. And yes, it must be FTP, as the vendor's tool won't do anything more modern, and it must be Excel because how else do you ship tables of data between two businesses?

The Excel sheet has some columns which are filled with "TRUE" and "FALSE". This means their process needs to parse those values in as booleans. Or does it…

public class BooleanParseUtil {
    private static final String TRUE = "TRUE";
    private static final String FALSE = "FALSE";

    private BooleanParseUtil() {
        //private because class should not be instantiated
    }

    public static String parseBoolean(String paramString) {
        String result = null;
        if (paramString != null) {
            String s = paramString.toUpperCase().trim();
            if (ParseUtilHelper.isPositive(s)) {
                result = TRUE;
            } else if (ParseUtilHelper.isNegative(s)) {
                result = FALSE;
            }
        } else {
            result = FALSE;
        }
        return result;
    }
    //snip
}

Note the signature of parseBoolean: it takes a string and it returns a string. If we trace through the logic: a null input is false, a not-null input that isPositive is "TRUE", one that isNegative is "FALSE", and anything else returns null. I'm actually pretty sure that's a mistake, and is exactly the kind of thing that happens when you follow the "single return rule"- where each method has only one return statement. This likely is a source of heisenbugs and failed processing runs.

But wait a second, isPositive sure sounds like it means "greater than or equal to zero". But that can't be what it means, right? What are isPositive and isNegative actually doing?

public class ParseUtilHelper {
    private static final String TRUE = "TRUE";
    private static final String FALSE = "FALSE";

    private static final Set<String> positiveValues = new HashSet<>(
        Arrays.asList(TRUE, "YES", "ON", "OK", "ENABLED", "ACTIVE",
                      "CHECKED", "REPORTING", "ON ALL", "ALLOW")
    );

    private static final Set<String> negativeValues = new HashSet<>(
        Arrays.asList(FALSE, "NO", "OFF", "DISABLED", "INACTIVE", "UNCHECKED",
                      "DO NOT DISPLAY", "NOT REPORTING", "N/A", "NONE", "SCHIMELPFENIG")
    );

    private ParseUtilHelper() {
        //private constructor because class should not be instantiated
    }

    public static boolean isPositive(String v) {
        return positiveValues.contains(v);
    }

    public static boolean isNegative(String v) {
        return negativeValues.contains(v) || v.contains("DEFERRED");
    }
    //snip
}

For starters, we redefine constants that exist over in our BooleanParseUtil, which, I mean, maybe we could use different strings for TRUE and FALSE in this object, because that wouldn't be confusing at all.

But the real takeaway is that we have absolutely ALL of the boolean values. TRUE, YES, OK, DO NOT DISPLAY, and even SCHIMELPFENIG, the falsest of false values. That said, I can't help but think there's one missing.

In truth, this is exactly the sort of code that happens when you have a cross-organization data integration task with no schema. And while I'm sure the end users are quite happy to continue doing this in Excel, the only tool they care about using, there are many, many other possible ways to send that data around. I suppose we should just be happy that the process wasn't built using XML? I'm kidding, of course, even XML would be an improvement.


,

Planet DebianThorsten Alteholz: My Debian Activities in July 2019

FTP master

After the release of Buster I could start with real work in NEW again. Even the temperature could not hinder me from rejecting something. So this month I accepted 279 packages and rejected 15. The overall number of packages that got accepted was 308.

Debian LTS

This was my sixty-first month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 18.5h. During that time I did LTS uploads of:

  • [DLA 1849-1] zeromq3 security update for one CVE
  • [DLA 1833-2] bzip2 regression update for one patch
  • [DLA 1856-1] patch security update for one CVE
  • [DLA 1859-1] bind9 security update for one CVE
  • [DLA 1864-1] patch security update for one CVE

I am glad that I could finish the bind9 upload this month.
I also started to work on ruby-mini-magick and python2.7. Unfortunately, when building both packages (even without new patches), the test suite fails, so I first have to fix that as well.

Last but not least I did ten days of frontdesk duties. This was more than a week as everybody was at DebConf and I seemed to be the only one at home …

Debian ELTS

This month was the fourteenth ELTS month.

During my allocated time I uploaded:

  • ELA-132-2 of bzip2 for an upstream regression
  • ELA-144-1 of patch for one CVE
  • ELA-147-1 of patch for one CVE
  • ELA-148-1 of bind9 for one CVE

I also did some days of frontdesk duties.

Other stuff

This month I reuploaded some Go packages that would not migrate because they had been binary-only uploads.

I also filed rm bugs to remove all alljoyn packages. Upstream is dead, no one is using this software anymore and bugs won’t be fixed.

Planet DebianEmmanuel Kasper: Debian 9 -> 10 Upgrade report

I upgraded my laptop and VPS to Debian 10; as usual with Debian, everything worked out of the box and the necessary daemons restarted without problems.
I followed my usual upgrade approach, which involves upgrading a backup of the root FS of the server in a container, to test the upgrade path, followed by a config file merge.

I had one major problem, though, with my PHP-based Dokuwiki website at subsole.org, which displayed a rather unwelcoming screen after the upgrade:




I was a bit unsure at first, as I thought I would need to fight my way through the nine different config files of the dokuwiki Debian package in /etc/dokuwiki.

However, the issue was not so complicated: as the apache2 PHP module was disabled, apache2 was serving the source code of Dokuwiki instead of executing it. As you see, I don't php that often.

A simple

a2enmod php7.3
systemctl restart apache2

fixed the issue.

I understood the problem after noticing that a simple phpinfo() would not get executed by the server.
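
In case it helps anyone hitting the same symptom, here is a minimal sketch of that check; the test file name and web root are assumptions, so adjust them to your vhost.

# Hypothetical quick test: if Apache executes PHP you get HTML back,
# if the module is missing you get the raw "<?php phpinfo();" source.
echo '<?php phpinfo();' | sudo tee /var/www/html/info.php
curl -s http://localhost/info.php | head -n 3

# Debian's apache2 ships a2query to inspect module state.
sudo a2query -m php7.3

# Remove the test file again when done.
sudo rm /var/www/html/info.php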

I would have expected the upgrade to automatically enable the new php7.3 module, since the oldstable php7.0 apache module was removed as part of the upgrade, but I am not sure what the Debian policy would recommend here, or if I am missing something else.
If I can reproduce the issue in an upgrade scenario, I'll probably submit a bug to the php package maintainers.

Planet DebianDebian GSoC Kotlin project blog: Packaging Dependencies Part 2; and plan on how to.

Mapping and packaging dependencies part 1.

Hey all, I had my exams during weeks 8 and 9 so I couldn't update my blog nor get much accomplished; but last week was completely free so I managed to finish packaging all the dependencies from packaging dependencies part 1. Since some of you may not remember how I planned to tackle packaging dependencies, I'll mention it here one more time.

"I split this task into two sub tasks that can be done independently. The 2 subtasks are as follows:
->part 1: make the entire project build successfully without :buildSrc:prepare-deps:intellij-sdk:build
--->part1.1:package these dependencies
->part 2: package the dependencies in :buildSrc:prepare-deps:intellij-sdk:build ; i.e try to recreate whatever is in it."

This is taken from my last blog, which was specifically on packaging dependencies in part 1. Now I am happy to tell all of you that packaging dependencies for part 1 is complete and all the needed packages are either in the NEW queue or already in the sid archive as of 04 August 2019. I would like to thank ebourg, seamlik and andrewsh for helping me with this.

How to build kotlin 1.3.30 after dependency packaging part 1 and design choices.

Before I go into how to build the project as it is now I'll briefly talk of some of the choices I made while packaging dependencies in part 1 and general things you should know.

Two dependencies in part 1 were Jcabi-aether and sonatype-aether; both of these are incompatible with maven-3 and were only used in one single file in the entire dist task graph. Considering the time it would take to migrate these dependencies to maven-3, I chose to patch out the one file that needed both of them, and that change is denoted by this commit. It must also be noted that so far we are only trying to build the dist task, which only builds the basic Kotlin compiler; it doesn't build the maven artifacts with poms, nor does it build the kotlin-gradle-plugin. Those things are built and installed in the local maven repository (the .m2 directory in the source project when you invoke debuild) using the install task, which I am planning to tackle once we finish successfully building the dist task. Invoking the install task in our master as of 04 August 2019 will build and install all available maven artifacts into the local maven repo, but this again will not include kotlin-gradle-plugin or the like, since I have removed those subprojects as they aren't needed by the dist task. Keeping them would mean that I would have to convert and patch them to Groovy if they are written in .kts, since they are evaluated during the initialization phase.

Now we are ready to build the project. I have written a simple makefile which copies all the needed bootstrap jars and prebuilts to their proper places. All you need to do to build the project is:

1.git clone https://salsa.debian.org/m36-guest/kotlin-1.3.30.git  
2.cd kotlin-1.3.30
3.git checkout buildv1  
4.debian/pseudoBootstrap bootstrap
5.debuild -b -rfakeroot -us -uc
Note that you only need to do steps 1 through 4 the very first time you build this project; every time after that, just invoke step 5.

Packaging dependencies part 2.

Now, packaging dependencies part 2 involves packaging the dependencies in :buildSrc:prepare-deps:intellij-sdk:build. This is the folder that takes up the most space in Kotlin-1.3.30-temp-requirements. The sole purpose of this task is to reduce the jars in this folder and substitute them with jars from the Debian environment. I have managed to map out the jars needed from these for the dist task graph; they are

```
saif@Hope:/srv/chroot/KotlinCh/home/kotlin/kotlin-1.3.30-debian-maintained/buildSrc/prepare-deps/intellij-sdk/repo/kotlin.build.custom.deps/183.5153.4$ ls -R
.:
intellij-core  intellij-core.ivy.xml  intellijUltimate  intellijUltimate.ivy.xml  jps-standalone  jps-standalone.ivy.xml

./intellij-core:
asm-all-7.0.jar  intellij-core.jar  java-compatibility-1.0.1.jar

./intellijUltimate:
lib

./intellijUltimate/lib:
asm-all-7.0.jar  guava-25.1-jre.jar  jna.jar           log4j.jar      openapi.jar    picocontainer-1.2.jar  platform-impl.jar   trove4j.jar
extensions.jar   jdom.jar            jna-platform.jar  lz4-1.3.0.jar  oro-2.0.8.jar  platform-api.jar       streamex-0.6.7.jar  util.jar

./jps-standalone:
jps-model.jar
```

This folder is treated as an ant repository, and the code for that is here. build.gradle files use this via methods like this, which tell the project to take only the needed jars from the collection. I am planning on replacing this with plain old maven repository resolution using the compile(groupId:artifactId:version) format, but we will need the jars to be in our system anyway; at least now we know that this particular file structure can be avoided.

Please note that the jars listed above are only needed for the dist task; the ones needed for other subprojects in the original install task can still be found here.

The following are the dependencies needed for part 2. * denotes what I am not sure of. Contact me before you attempt to package any of the intellij dependencies, as we only need parts of those and I have a script to tell what we need.

1.->java-compatibility-1.0.1 -> https://github.com/JetBrains/intellij-deps-java-compatibility (DONE: here)
2.->jps-model -> https://github.com/JetBrains/intellij-community/tree/master/jps
3.->intellij-core, open-api -> https://github.com/JetBrains/intellij-community/tree/183.5153
4.->streamex-0.6.7 -> https://github.com/amaembo/streamex/tree/streamex-0.6.7 (DONE: here)
5.->guava-25.1 -> https://github.com/google/guava/tree/v25.1 (Used guava-19 from libguava-java)
6.->lz4-java -> https://github.com/lz4/lz4-java/blob/1.3.0/build.xml(DONE:here)
7.->libjna-java & libjna-platform-java recompiled in jdk 8. -> https://salsa.debian.org/java-team/libjna-java (DONE : commit)
8.->liboro-java recompiled in jdk8 -> https://salsa.debian.org/java-team/liboro-java (DONE : commit)
9.->picocontainer-1.3 refining -> https://salsa.debian.org/java-team/libpicocontainer-1-java (DONE: here)
10.-> * platform-api -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform
11.-> * util -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform
12.-> * platform-impl -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform

So if any of you want to help please kindly take on any of these and package them.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

Here is a link to the work I have done so far. You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post major updates weekly.

Planet DebianMike Gabriel: MATE 1.22 landed in Debian unstable

Last week, I did a bundle upload of (nearly) all MATE 1.22 related components to Debian unstable. Packages should have been built by now for most of the 24 architectures supported by Debian (I just fixed an FTBFS of mate-settings-daemon on non-Linux host archs). The current/latest build status can be viewed on the DDPO page of the Debian+Ubuntu MATE Packaging Team [1].

Credits

Again a big thanks goes to the packaging team and also to the upstream maintainers of the MATE desktop environment. Martin Wimpress and I worked on most parts of the packaging for the 1.22 release series this time. On the upstream side, a big thanks goes to all developers, esp. Vlad Orlov and Wolfgang Ulbrich for fixing / reviewing many many issues / merge requests. Good work, folks!!! plus Big Thanks!!!

References


light+love,
Mike Gabriel (aka sunweaver)

CryptogramMore on Backdooring (or Not) WhatsApp

Yesterday, I blogged about a Facebook plan to backdoor WhatsApp by adding client-side scanning and filtering. It seems that I was wrong, and there are no such plans.

The only source for that post was a Forbes essay by Kalev Leetaru, which links to a previous Forbes essay by him, which links to a video presentation from a Facebook developers conference.

Leetaru extrapolated a lot out of very little. I watched the video (the relevant section is at the 23:00 mark), and it doesn't talk about client-side scanning of messages. It doesn't talk about messaging apps at all. It discusses using AI techniques to find bad content on Facebook, and the difficulties that arise from dynamic content:

So far, we have been keeping this fight [against bad actors and harmful content] on familiar grounds. And that is, we have been training our AI models on the server and making inferences on the server when all the data are flooding into our data centers.

While this works for most scenarios, it is not the ideal setup for some unique integrity challenges. URL masking is one such problem which is very hard to do. We have the traditional way of server-side inference. What is URL masking? Let us imagine that a user sees a link on the app and decides to click on it. When they click on it, Facebook actually logs the URL to crawl it at a later date. But...the publisher can dynamically change the content of the webpage to make it look more legitimate [to Facebook]. But then our users click on the same link, they see something completely different -- oftentimes it is disturbing; oftentimes it violates our policy standards. Of course, this creates a bad experience for our community that we would like to avoid. This and similar integrity problems are best solved with AI on the device.

That might be true, but it also would hand whatever secret-AI sauce Facebook has to every one of its users to reverse engineer -- which means it's probably not going to happen. And it is a dumb idea, for reasons Steve Bellovin has pointed out.

Facebook's first published response was a comment on the Hacker News website from a user named "wcathcart," which Cardozo assures me is Will Cathcart, the vice president of WhatsApp. (I have no reason to doubt his identity, but surely there is a more official news channel that Facebook could have chosen to use if they wanted to.) Cathcart wrote:

We haven't added a backdoor to WhatsApp. The Forbes contributor referred to a technical talk about client side AI in general to conclude that we might do client side scanning of content on WhatsApp for anti-abuse purposes.

To be crystal clear, we have not done this, have zero plans to do so, and if we ever did it would be quite obvious and detectable that we had done it. We understand the serious concerns this type of approach would raise which is why we are opposed to it.

Facebook's second published response was a comment on my original blog post, which has been confirmed to me by the WhatsApp people as authentic. It's more of the same.

So, this was a false alarm. And, to be fair, Alec Muffet called foul on the first Forbes piece:

So, here's my pre-emptive finger wag: Civil Society's pack mentality can make us our own worst enemies. If we go around repeating one man's Germanic conspiracy theory, we may doom ourselves to precisely what we fear. Instead, we should -- we must -- take steps to constructively demand what we actually want: End to End Encryption which is worthy of the name.

Blame accepted. But in general, this is the sort of thing we need to watch for. End-to-end encryption only secures data in transit. The data has to be in the clear on the device where it is created, and it has to be in the clear on the device where it is consumed. Those are the obvious places for an eavesdropper to get a copy.

This has been a long process. Facebook desperately wanted to convince me to correct the record, while at the same time not wanting to write something on their own letterhead (just a couple of comments, so far). I spoke at length with Privacy Policy Manager Nate Cardozo, whom Facebook hired last December from EFF. (Back then, I remember thinking of him -- and the two other new privacy hires -- as basically human warrant canaries. If they ever leave Facebook under non-obvious circumstances, we know that things are bad.) He basically leveraged his historical reputation to assure me that WhatsApp, and Facebook in general, would never do something like this. I am trusting him, while also reminding everyone that Facebook has broken so many privacy promises that they really can't be trusted.

Final note: If they want to be trusted, Adam Shostack and I gave them a road map.

Hacker News thread.

EDITED TO ADD (8/4): SlashDot covered my retraction.

,

Planet DebianAndy Simpkins: Debconf19: Curitiba, Brazil – AV Setup

I write this on Monday whilst sat in the airport in São Paulo awaiting my onward flight back to the UK, and the fun of the change of personnel in Downing Street, something I have fortunately been able to ignore whilst at DebConf.  [Edit: and finished writing it the Saturday after getting home, after much sleep]

Arriving on the first Sunday of DebCamp meant that I was one of the first people to arrive; however, most of the video team were either arriving about the same time or had landed before me.  We spent most of our daytime during DebCamp setting up for the following week’s conference.

Step one was getting a network operational.  We had been offered space for our servers in a university machine room, but chose instead to occupy the two ‘green’ rooms below the main auditorium stage, using one as a makeshift V/NOC and the other as our machine room, as this gave us continuous and easy access [0] to our servers whilst separating us from the fan noise.  Running additional network cable between the back of the stage and our makeshift machine room, around the back of the stage and into the ceiling void to just outside the V/NOC, was relatively simple.   Routing into the V/NOC needed a bit of help to get the cable through a small gap we found where some other cables ran through the ‘fire break’.  Getting a cable between the two ‘green rooms’, however, was a PITA.  Many people, including myself, eventually gave up, before I finally returned to the problem and, with the aid of a fully extended server rail gaffer-taped to a clothing rail to make a 4m-long pole, was eventually able to deliver a cable through the 3 floor supports / fire breaks that separated the two rooms (and before someone suggests it, a ‘fish’ wire was what we tried first).   The university were providing us with backbone network, but it did take a couple of meetings to get our video network in its own separate VLAN and get it to pass traffic unmolested between nodes.

The final network setup (for video, that is – the conference was piggy-backing on the university WiFi network and there was also a DebConf network in the National Inn) was to make live the fibre links that had been installed prior to our arrival.  Two links had been pulled through so that we could join the ‘Video Confrencia’ room and the ‘Front Desk’ to the rest of the university network; however, when we came to commission them we discovered that the wrong media converters had been supplied: they should have been for single-mode fibre, but multi-mode converters had been delivered.  Nothing that the university IT department couldn’t solve, and indeed they did as soon as we pointed out the mistake.  They provided us with replacement media converters capable of driving a signal down *both* single and multi-mode fibre, something I have not seen before.

For the rest of the week Paddatrapper and myself spent most of our time running cables and setting up the three talk rooms that were to be filmed.  Phls had been able to provide us with details of the venue’s AV system AND scale plans of the three talk rooms; this, along with the photos provided by the local team and Tumbleweed’s visit to the site, enabled us to plan the cable runs right down to the location of power sockets.

I am going to add scale plans & photos to the things that we request for all future DebConfs.  They made planning and setup so much easier and faster.  Of course we still ended up running more cables than we originally expected – we ran Ethernet front to back in all three rooms when we originally intended to only do this in Video Confrencia (the small BoF room), because it turned out that the sockets at different ends of the room were on different packet switches that in turn fed into the university backbone.  We were informed that the backbone was 1Gb/s, which meant that the video LAN would have consumed the entire bandwidth of the backbone with nothing left over.

We have a 200Mb/s stream from each of the OPSIS frame grabbers and a second 200Mb/s output stream from each room.  That equates to exactly 1Gb/s (the Video Confrencia BoF room is small enough that we were always going to run a front/back cable), and that is before any backups of recordings to our server.  As it turns out that wasn’t true, but by then we had already run the cables and got things working…

I won’t blog about the software setup of the servers, our back-end CDN, or the review process – this is not my area of expertise.  You need to thank Olasd, Tumbleweed & Ivo for the on-site system setup and Walter for the review process.  Actually there are also Carlfk, Ubec, Valhalla and I am sure numerous other people that I am too tired to remember; I apologise for forgetting you…

So back to physical setup.  The main auditorium was operational.  I had re-patched the mixing desk to give a setup as close as possible in all three talk rooms – we are most interested in audio for the stream/recording, and so use the main mix output for this and move the room PA onto a sub-group output.  Unusually for a DebConf, I found that I needed to ensure that there *was* a ground connection at the desk for all output feeds – it appears that there was NO earth in the entire auditorium; well, there was at some point back in time, but it had been systematically removed, either by cutting off the earth pin on power plugs or, unfortunately for us, by cutting and removing cables from any bonding points, behind sockets, etc.   This was done, probably, because RCDs kept tripping, and clearly the problem is that there is an earth present to leak into and not that there is a leak in the first place, or just long cable runs into inductive loads that mean a different ‘trip curve’ needed to be selected <sigh>.

We still had significant mains hum on the PA system (slightly less than was present before I started room setup, so nothing I had done).  The venue AV team pointed out that they had a magnetic coupler AND an audio DSP unit in front of the PA amplifier stack – telling me that this was to reduce the hum.  Fortunately for us the venue had 4 equalisers that I could use, one for each of the mics, so I was able to knock out 60Hz and 120Hz and dip higher harmonics; this again made an improvement.  Apparently we were getting the best results in living memory of the on-site AV team, so at this point I stopped tweaking the setup – “It was good enough”, we could live with the remaining hum.

The other two talk rooms were pretty much the same setup, only the rooms are smaller.  The exception being that whilst we do have a small portable PA in the Video Confrencia room, we only use it for audio from the presenter’s laptop – the room was so small there was no point in amplifying presenters…

Right, I could now move on to ‘lighting’.  We were supposed to use the flood ‘work’ lights above the stage, but quite a few of the halogen lamps were blown.  This meant that there were far too many ‘dark’ patches along the stage.  Additionally, the colour temperatures of the different work lights were all over the place, which would cause havoc with white balance; still, we could have lived with this…  I asked about getting the lamps replaced.  Initially I was told no, but once I pointed out the problem to a more senior member of staff they agreed that the lamps could be replaced and that it would be done the following day.  It wasn’t.  I offered that we could replace the lamps ourselves, but was then told that they would now be doing this as part of a service in a few weeks’ time.  I was, however, told that if I was prepared to rig them myself, we could instead use the stage lights on the dimmers.  Win!  This would have been my preferred option all along, and I suspect we were only offered this having started to build a reasonable working relationship with the site AV team.  I was able to sign out a bunch of lamps from the stores and rig them as I saw fit.  I was given a large wooden step ladder, and shown how to access the catwalk.  I could now rig lights where I wanted them.

Two overhead floods and two spots were used to light the lectern from different angles.  Three overhead floods and three focused cans were used to light the discussion table.  I also hung two forward-facing spots to illuminate someone stood at the question mic, and finally 4 cans (2 focus cans and a pair of 1kW par cans sharing the same plug) to add some light to the front 5 or 6 rows of the audience.  The venue AV team repaired the DMX cable to the lighting dimmers and we were good to go…  well, just as soon as I had worked out the DMX addressing / cable patching at the dimmer banks, and then there was a short time whilst I read the instructions for the desk – enough to apply ‘soft patches’ so I could allocate a fader to each dimmer channel we were using.  I then read the instructions a bit further and came back the following day and programmed appropriate scenes so that the table could be lit using one ‘slider’, the lectern by another, and so on.  JMW came back later in the week and updated the program again to add a timed fade up or down, and we also set a maximum level on the audience lights to stop us from ‘blinding’ people in the first couple of rows (we set the maximum value of that particular scene to 20% of available intensity).

Lighting in the mini auditorium was from simple overhead ‘domestic’ lamps; I still needed to get some bulbs replaced, and then move / adjust them to best light a speaker stood at the lectern or a discussion panel sat at the table.   Finally, we had no control of the lighting in Video Confrencia (about normal for a DebConf).

Later in the week we revisited the hum problem again.  We confirmed that the hum was no longer being emitted out of the desk, so it must have been on the cable run to the stack or in the stack itself.  The hum was still annoying, and Kyle wanted to confirm that the DSP at the top of the amp stack was correctly set up – could we improve things?  It took a little persuasion, but eventually we were granted permission, and the password, to access the DSP.  The DSP had not been configured properly at all.  Kyle applied a 60Hz notch filter, and this made some difference.  I suggested a comb filter, which Kyle then applied for 60Hz and 5 or 6 orders of harmonics, and that did the trick (thanks Kyle – I wouldn’t have had a clue how to drive the DSP).  There was no longer any perceivable noise coming out of the left hand speakers, but there was still a noticeable, though much lower, hum from the right.  We removed the input cable to the amp stack and yes, the hum was still there, so we were picking up noise between the amps and the speakers!  A quick check, turning off the lighting dimmers, confirmed it: the noise dropped again.  I started chasing the right hand speaker cables – they run up and over the stage along the catwalk, in the same bundle as all the unearthed lighting AND permanent power cables.  We were inducing mains noise directly onto the speaker cables.  The only fix for this would be to properly screen AND separate the speaker feed cables.  Better yet, send a balanced audio feed, separated from the power cables, to the right hand side of the stage and move the right hand amplifiers to that side.  Nothing we could do – but something that we could point out to the venue AV team, who, strangely, hadn’t considered this before…

 

 

[0] Where continuous access meant “whilst we had access to the site” (the whole campus is closed overnight)

Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.6

A shiny new release 0.2.6 of RcppCCTZ is now at CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages which remains less than ideal.

This version updates to CCTZ release 2.3 from April, plus changes accrued since then. It also switches to tinytest which, among other benefits, permits continued testing of the installed package.

Changes in version 0.2.6 (2019-08-03)

  • Synchronized with upstream CCTZ release 2.3 plus commits accrued since then (Dirk in #30).

  • The package now uses tinytest for unit tests (Dirk in #31).

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianBits from Debian: New Debian Developers and Maintainers (May and June 2019)

The following contributors got their Debian Developer accounts in the last two months:

  • Jean-Philippe Mengual (jpmengual)
  • Taowa Munene-Tardif (taowa)
  • Georg Faerber (georg)
  • Kyle Robbertze (paddatrapper)
  • Andy Li (andyli)
  • Michal Arbet (kevko)
  • Sruthi Chandran (srud)
  • Alban Vidal (zordhak)
  • Denis Briand (denis)
  • Jakob Haufe (sur5r)

The following contributors were added as Debian Maintainers in the last two months:

  • Bobby de Vos
  • Jongmin Kim
  • Bastian Germann
  • Francesco Poli

Congratulations!

Planet DebianElana Hashman: My favourite bash alias for git

I review a lot of code. A lot. And an important part of that process is getting to experiment with said code so I can make sure it actually works. As such, I find myself with a frequent need to locally run code from a submitted patch.

So how does one fetch that code? Long ago, when I was a new maintainer, I would add the remote repository I was reviewing to my local repo so I could fetch that whole fork and target branch. Once downloaded, I could play around with that on my local machine. But this was a lot of overhead! There was a lot of clicking, copying, and pasting involved in order to figure out the clone URL for the remote repo, and a bunch of commands to set it up. It felt like a lot of toil that could be easily automated, but I didn't know a better way.

One day, when a coworker of mine saw me struggling with this, he showed me the better way.

Turns out, most hosted git repos with pull request functionality will let you pull down a read-only version of the changeset from the upstream fork using git, meaning that you don't have to set up additional remote tracking to fetch and run the patch or use platform-specific HTTP APIs.

Using GitHub's git references for pull requests

I first learned how to do this on GitHub.

GitHub maintains a copy of pull requests against a particular repo at the pull/NUM/head reference. (More documentation on refs here.) This means that if you have set up a remote called origin and someone submits a pull request #123 against that repository, you can fetch the code by running

$ git fetch origin pull/123/head
remote: Enumerating objects: 3, done.
remote: Counting objects: 100% (3/3), done.
remote: Total 4 (delta 3), reused 3 (delta 3), pack-reused 1
Unpacking objects: 100% (4/4), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD

$ git checkout FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

Woah.

Using pull request references for CI

As a quick aside: This is also handy if you want to write your own CI scripts against users' pull requests. Even better—on GitHub, you can fetch a tree with the pull request already merged onto the top of the current master branch by fetching pull/NUM/merge. (I'm not sure if this is officially documented somewhere, and I don't believe it's widely supported by other hosted git platforms.)
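
For example, fetching and checking out the pre-merged tree looks just like fetching the head ref (the PR number and remote name below are placeholders):

$ git fetch origin pull/123/merge
$ git checkout FETCH_HEAD    # PR #123 merged onto the current upstream master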

If you also specify the --depth flag in your fetch command, you can fetch code even faster by limiting how much upstream history you download. It doesn't make much difference on small repos, but it is a big deal on large projects:

elana@silverpine:/tmp$ time git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Enumerating objects: 295, done.
remote: Counting objects: 100% (295/295), done.
remote: Compressing objects: 100% (167/167), done.
remote: Total 980446 (delta 148), reused 136 (delta 128), pack-reused 980151
Receiving objects: 100% (980446/980446), 648.95 MiB | 12.47 MiB/s, done.
Resolving deltas: 100% (686795/686795), done.
Checking out files: 100% (20279/20279), done.

real    1m31.035s
user    1m17.856s
sys     0m7.782s

elana@silverpine:/tmp$ time git clone --depth=10 https://github.com/kubernetes/kubernetes.git kubernetes-shallow
Cloning into 'kubernetes-shallow'...
remote: Enumerating objects: 34305, done.
remote: Counting objects: 100% (34305/34305), done.
remote: Compressing objects: 100% (22976/22976), done.
remote: Total 34305 (delta 17247), reused 19060 (delta 10567), pack-reused 0
Receiving objects: 100% (34305/34305), 34.22 MiB | 10.25 MiB/s, done.
Resolving deltas: 100% (17247/17247), done.

real    0m31.495s
user    0m3.941s
sys     0m1.228s

Writing the pull alias

So how can one harness all this as a bash alias? It takes just a little bit of code:

pull() {
    git fetch "$1" pull/"$2"/head && git checkout FETCH_HEAD
}

alias pull='pull'

Then I can check out a PR locally with the short command pull <remote> <num>:

$ pull origin 123
remote: Enumerating objects: 4, done.
remote: Counting objects: 100% (4/4), done.
remote: Total 5 (delta 4), reused 4 (delta 4), pack-reused 1
Unpacking objects: 100% (5/5), done.
From github.com:ehashman/hack_the_planet
 * branch            refs/pull/123/head -> FETCH_HEAD
Note: checking out 'FETCH_HEAD'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by performing another checkout.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -b with the checkout command again. Example:

  git checkout -b <new-branch-name>

HEAD is now at deadb00 hack the planet!!!

You can even add your own commits, save them on a local branch, and push that to your collaborator's repository to build on their PR if you're so inclined... but let's not get too ahead of ourselves.

Changeset references on other git platforms

These pull request refs are not a special feature of git itself, but rather a per-platform implementation detail using an arbitrary git ref format. As far as I'm aware, most major git hosting platforms implement this, but they all use slightly different ref names.

GitLab

At my last job I needed to figure out how to make this work with GitLab in order to set up CI pipelines with our Jenkins instance. Debian's Salsa platform also runs GitLab.

GitLab calls user-submitted changesets "merge requests" and that language is reflected here:

git fetch origin merge-requests/NUM/head

They also have some nifty documentation for adding a git alias to fetch these references. They do so in a way that creates a local branch automatically, if that's something you'd like—personally, I check out so many patches that I would not be able to deal with cleaning up all the extra branch mess!
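
If you prefer the detached-HEAD style used above, a rough shell equivalent for GitLab could look like this (an untested sketch in the spirit of the pull function, not GitLab's documented alias):

mr() {
    # Fetch merge request $2 from remote $1 and check it out detached.
    git fetch "$1" merge-requests/"$2"/head && git checkout FETCH_HEAD
}

Then mr origin 42 checks out merge request !42 without leaving a local branch behind.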

BitBucket

Bad news: as of the time of publication, this isn't supported on bitbucket.org, even though a request for this feature has been open for seven years. (BitBucket Server supports this feature, but that's standalone and proprietary, so I won't bother including it in this post.)

Gitea

While I can't find any official documentation for it, I tested and confirmed that Gitea uses the same ref names for pull requests as GitHub, and thus you can use the same bash/git aliases on a Gitea repo as those you set up for GitHub.

Saved you a click?

Hope you found this guide handy. No more excuses: now that it's just one short command away, go forth and run your colleagues' code locally!

,

Krebs on SecurityWhat We Can Learn from the Capital One Hack

On Monday, a former Amazon employee was arrested and charged with stealing more than 100 million consumer applications for credit from Capital One. Since then, many have speculated the breach was perhaps the result of a previously unknown “zero-day” flaw, or an “insider” attack in which the accused took advantage of access surreptitiously obtained from her former employer. But new information indicates the methods she deployed have been well understood for years.

What follows is based on interviews with almost a dozen security experts, including one who is privy to details about the ongoing breach investigation. Because this incident deals with somewhat jargon-laced and esoteric concepts, much of what is described below has been dramatically simplified. Anyone seeking a more technical explanation of the basic concepts referenced here should explore some of the many links included in this story.

According to a source with direct knowledge of the breach investigation, the problem stemmed in part from a misconfigured open-source Web Application Firewall (WAF) that Capital One was using as part of its operations hosted in the cloud with Amazon Web Services (AWS).

Known as “ModSecurity,” this WAF is deployed along with the open-source Apache Web server to provide protections against several classes of vulnerabilities that attackers most commonly use to compromise the security of Web-based applications.

The misconfiguration of the WAF allowed the intruder to trick the firewall into relaying requests to a key back-end resource on the AWS platform. This resource, known as the “metadata” service, is responsible for handing out temporary information to a cloud server, including current credentials sent from a security service to access any resource in the cloud to which that server has access.

In AWS, exactly what those credentials can be used for hinges on the permissions assigned to the resource that is requesting them. In Capital One’s case, the misconfigured WAF for whatever reason was assigned too many permissions, i.e. it was allowed to list all of the files in any buckets of data, and to read the contents of each of those files.

The type of vulnerability exploited by the intruder in the Capital One hack is a well-known method called a “Server Side Request Forgery” (SSRF) attack, in which a server (in this case, CapOne’s WAF) can be tricked into running commands that it should never have been permitted to run, including those that allow it to talk to the metadata service.
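
To make the mechanics more concrete: on an EC2 instance, any local process can ask the metadata service for the temporary credentials attached to the instance's role. The commands below are a generic illustration of that lookup (the role name is made up), not the specific requests used in this breach; an SSRF flaw simply lets an outsider coax the server into issuing requests like these on its behalf.

# List the IAM role attached to this instance.
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# Fetch the temporary AccessKeyId/SecretAccessKey/Token for that role
# ("example-role" stands in for whatever the previous call returned).
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/example-role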

Evan Johnson, manager of the product security team at Cloudflare, recently penned an easily digestible column on the Capital One hack and the challenges of detecting and blocking SSRF attacks targeting cloud services. Johnson said it’s worth noting that SSRF attacks are not among the dozen or so attack methods for which detection rules are shipped by default in the WAF exploited as part of the Capital One intrusion.

“SSRF has become the most serious vulnerability facing organizations that use public clouds,” Johnson wrote. “The impact of SSRF is being worsened by the offering of public clouds, and the major players like AWS are not doing anything to fix it. The problem is common and well-known, but hard to prevent and does not have any mitigations built into the AWS platform.”

Johnson said AWS could address this shortcoming by including extra identifying information in any request sent to the metadata service, as Google has already done with its cloud hosting platform. He also acknowledged that doing so could break a lot of backwards compatibility within AWS.

“There’s a lot of specialized knowledge that comes with operating a service within AWS, and to someone without specialized knowledge of AWS, [SSRF attacks are] not something that would show up on any critical configuration guide,” Johnson said in an interview with KrebsOnSecurity.

“You have to learn how EC2 works, understand Amazon’s Identity and Access Management (IAM) system, and how to authenticate with other AWS services,” he continued. “A lot of people using AWS will interface with dozens of AWS services and write software that orchestrates and automates new services, but in the end people really lean into AWS a ton, and with that comes a lot of specialized knowledge that is hard to learn and hard to get right.”

In a statement provided to KrebsOnSecurity, Amazon said it is inaccurate to argue that the Capital One breach was caused by AWS IAM, the instance metadata service, or the AWS WAF in any way.

“The intrusion was caused by a misconfiguration of a web application firewall and not the underlying infrastructure or the location of the infrastructure,” the statement reads. “AWS is constantly delivering services and functionality to anticipate new threats at scale, offering more security capabilities and layers than customers can find anywhere else including within their own datacenters, and when broadly used, properly configured and monitored, offer unmatched security—and the track record for customers over 13+ years in securely using AWS provides unambiguous proof that these layers work.”

Amazon pointed to several (mostly a la carte) services it offers AWS customers to help mitigate many of the threats that were key factors in this breach, including:

Access Advisor, which helps identify and scope down AWS roles that may have more permissions than they need;
GuardDuty, designed to raise alarms when someone is scanning for potentially vulnerable systems or moving unusually large amounts of data to or from unexpected places;
The AWS WAF, which Amazon says can detect common exploitation techniques, including SSRF attacks;
Amazon Macie, designed to automatically discover, classify and protect sensitive data stored in AWS.

William Bengston, formerly a senior security engineer at Netflix, wrote a series of blog posts last year on how Netflix built its own systems for detecting and preventing credential compromises in AWS. Interestingly, Bengston was hired roughly two months ago to be director of cloud security for Capital One. My guess is Capital One now wishes they had somehow managed to lure him away sooner.

Rich Mogull is founder and chief technology officer with DisruptOPS, a firm that helps companies secure their cloud infrastructure. Mogull said one major challenge for companies moving their operations from sprawling, expensive physical data centers to the cloud is that very often the employees responsible for handling that transition are application and software developers who may not be as steeped as they should in security.

“There is a basic skills and knowledge gap that everyone in the industry is fighting to deal with right now,” Mogull said. “For these big companies making that move, they have to learn all this new stuff while maintaining their old stuff. I can get you more secure in the cloud more easily than on-premise at a physical data center, but there’s going to be a transition period as you’re acquiring that new knowledge.”

Image: Capital One

Since news of the Capital One breach broke on Monday, KrebsOnSecurity has received numerous emails and phone calls from security executives who are desperate for more information about how they can avoid falling prey to the missteps that led to this colossal breach (indeed, those requests were part of the impetus behind this story).

Some of those people included executives at big competing banks that haven’t yet taken the plunge into the cloud quite as deeply as Capital One has. But it’s probably not much of a stretch to say they’re all lining up in front of the diving board.

It’s been interesting to watch over the past couple of years how various cloud providers have responded to major outages on their platforms — very often soon after publishing detailed post-mortems on the underlying causes of the outage and what they are doing to prevent such occurrences in the future. In the same vein, it would be wonderful if this kind of public accounting extended to other big companies in the wake of a massive breach.

I’m not holding out much hope that we will get such detail officially from Capital One, which declined to comment on the record and referred me to their statement on the breach and to the Justice Department’s complaint against the hacker. That’s probably to be expected, seeing as the company is already facing a class action lawsuit over the breach and is likely to be targeted by more lawsuits going forward.

But as long as the public and private response to data breaches remains orchestrated primarily by attorneys (which is certainly the case now at most major corporations), everyone else will continue to lack the benefit of being able to learn from and avoid those same mistakes.

CryptogramFriday Squid Blogging: Piglet Squid Video

Really neat.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramDisabling Security Cameras with Lasers

There's a really interesting video of protesters in Hong Kong using some sort of laser to disable security cameras. I know nothing more about the technologies involved.

CryptogramFacebook Plans on Backdooring WhatsApp

This article points out that Facebook's planned content moderation scheme will result in an encryption backdoor into WhatsApp:

In Facebook's vision, the actual end-to-end encryption client itself such as WhatsApp will include embedded content moderation and blacklist filtering algorithms. These algorithms will be continually updated from a central cloud service, but will run locally on the user's device, scanning each cleartext message before it is sent and each encrypted message after it is decrypted.

The company even noted that when it detects violations it will need to quietly stream a copy of the formerly encrypted content back to its central servers to analyze further, even if the user objects, acting as true wiretapping service.

Facebook's model entirely bypasses the encryption debate by globalizing the current practice of compromising devices by building those encryption bypasses directly into the communications clients themselves and deploying what amounts to machine-based wiretaps to billions of users at once.

Once this is in place, it's easy for the government to demand that Facebook add another filter -- one that searches for communications that they care about -- and alert them when it gets triggered.

Of course alternatives like Signal will exist for those who don't want to be subject to Facebook's content moderation, but what happens when this filtering technology is built into operating systems?

The problem is that if Facebook's model succeeds, it will only be a matter of time before device manufacturers and mobile operating system developers embed similar tools directly into devices themselves, making them impossible to escape. Embedding content scanning tools directly into phones would make it possible to scan all apps, including ones like Signal, effectively ending the era of encrypted communications.

I don't think this will happen -- why does AT&T care about content moderation -- but it is something to watch?

EDITED TO ADD (8/2): This story is wrong. Read my correction.

Planet DebianSven Hoexter: From 30 to 230 docker containers per host

I could not find much information on the interwebs how many containers you can run per host. So here are mine and the issues we ran into along the way.

The Beginning

In the beginning there were virtual machines running with 8 vCPUs and 60GB of RAM. They started to serve around 30 containers per VM. Later on we managed to squeeze around 50 containers per VM.

Initial orchestration was done with swarm, later on we moved to nomad. Access was initially fronted by nginx with consul-template generating the config. When it did not scale anymore nginx was replaced by Traefik. Service discovery is managed by consul. Log shipping was initially handled by logspout in a container, later on we switched to filebeat. Log transformation is handled by logstash. All of this is running on Debian GNU/Linux with docker-ce.

At some point it did not make sense anymore to use VMs. We've no state inside the containerized applications anyway. So we decided to move to dedicated hardware for our production setup. We settled with HPe DL360G10 with 24 physical cores and 128GB of RAM.

THP and Defragmentation

When we moved to the dedicated bare metal hosts we were running Debian/stretch + Linux from stretch-backports. At that time Linux 4.17. These machines were sized to run 95+ containers. Once we were above 55 containers we started to see occasional hiccups. First occurrences lasted only for 20s, then 2min, and suddenly some lasted for around 20min. Our system metrics, as collected by prometheus-node-exporter, could only provide vague hints. The metric export did work, so processes were executed. But the CPU usage and subsequently the network throughput went down to close to zero.

I've seen similar hiccups in the past with Postgresql running on a host with THP (Transparent Huge Pages) enabled. So a good bet was to look into that area. By default /sys/kernel/mm/transparent_hugepage/enabled is set to always, so THP are enabled. We stick to that, but changed the defrag mode /sys/kernel/mm/transparent_hugepage/defrag (since Linux 4.12) from the default madvise to defer+madvise.

This moves page reclaims and compaction for pages which were not allocated with madvise to the background, which was enough to get rid of those hiccups. See also the upstream documentation. Since there is no sysctl like facility to adjust sysfs values, we're using the sysfsutils package to adjust this setting after every reboot.
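
For reference, the sysfsutils entry for this looks roughly like the following (the path is relative to /sys; the exact file layout may differ on your system):

# /etc/sysfs.conf (or a snippet below /etc/sysfs.d/), applied at boot
# by the sysfsutils service:
kernel/mm/transparent_hugepage/defrag = defer+madvise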

Conntrack Table

Since the default docker networking setup involves a shitload of NAT, it shouldn't be surprising that nf_conntrack will start to drop packets at some point. We're currently fine with setting the sysctl tunable

net.netfilter.nf_conntrack_max = 524288

but that's very much up to your network setup and traffic characteristics.
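
If you want to see how close you are to the ceiling before packets get dropped, a quick sketch:

# Current number of tracked connections vs. the configured maximum.
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_max

# When the table overflows the kernel logs lines like
# "nf_conntrack: table full, dropping packet".
dmesg | grep -i conntrack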

Inotify Watches and Cadvisor

Along the way cadvisor refused to start at one point. Turned out that the default settings (again sysctl tunables) for

fs.inotify.max_user_instances = 128
fs.inotify.max_user_watches = 8192

are too low. We increased to

fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768

Ephemeral Ports

We didn't run into an issue with running out of ephemeral ports directly, but dockerd has a constant issue with keeping track of ports in use, and we already saw collisions appear regularly. Very unscientifically, we set the sysctl

net.ipv4.ip_local_port_range = 11000 60999
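
To make these tunables survive a reboot we drop them into a sysctl.d snippet instead of setting them by hand; a sketch covering the values mentioned above (the file name is arbitrary):

cat <<'EOF' > /etc/sysctl.d/90-docker-hosts.conf
net.netfilter.nf_conntrack_max = 524288
fs.inotify.max_user_instances = 4096
fs.inotify.max_user_watches = 32768
net.ipv4.ip_local_port_range = 11000 60999
EOF

# Apply everything below /etc/sysctl.d/ without a reboot.
sysctl --system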

NOFILE limits and Nomad

Initially we restricted nomad (via systemd) with

LimitNOFILE=65536

which apparently is not enough for our setup once we crossed the 100-containers-per-host mark, though the error message we saw was hard to understand:

[ERROR] client.alloc_runner.task_runner: prestart failed: alloc_id=93c6b94b-e122-30ba-7250-1050e0107f4d task=mycontainer error="prestart hook "logmon" failed: Unrecognized remote plugin message:

This was solved by following the official recommendation and setting

LimitNOFILE=infinity
LimitNPROC=infinity
TasksMax=infinity

The main lead here was looking into the "hashicorp/go-plugin" library source, and understanding that they try to read the stdout of some other process, which sounded roughly like something that would have to open a file at some point.
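
In practice such limits are best applied as a systemd drop-in rather than by editing the unit shipped with the package; a minimal sketch, assuming the unit is called nomad.service:

# Creates /etc/systemd/system/nomad.service.d/override.conf and opens an
# editor; paste the [Service] limits shown above into it.
systemctl edit nomad.service
systemctl restart nomad.service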

Running out of PIDs

Once we were close to 200 containers per host (test environment with 256GB RAM per host), we started to experience failures of all kinds because processes could no longer be forked. Since that was also true for completely fresh user sessions, it was clear that we were hitting some global limitation and not something bound to a session via a pam module.

It's important to understand that most of our workloads are written in Java, and a lot of the other software we use is written in Go. So we have a lot of threads, which in Linux are represented as "Lightweight Processes" (LWP), and every LWP still exists with a distinct PID out of the global PID space.

With /proc/sys/kernel/pid_max defaulting to 32768 we actually ran out of PIDs. We increased that limit vastly, probably way beyond what we currently need, to 500000. The actual upper limit on 64-bit systems is 2^22 (4194304) according to man 5 proc.
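
A quick sketch of bumping the limit and checking how many PIDs, including LWPs, are actually in use:

# Raise the limit at runtime and persist it alongside the other tunables.
sysctl -w kernel.pid_max=500000
echo 'kernel.pid_max = 500000' > /etc/sysctl.d/91-pid-max.conf

# Count every task including threads (LWPs), which is what eats the PID space.
ps -eLf --no-headers | wc -l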

CryptogramHow Privacy Laws Hurt Defendants

Rebecca Wexler has an interesting op-ed about an inadvertent harm that privacy laws can cause: while law enforcement can often access third-party data to aid in prosecution, the accused don't have the same level of access to aid in their defense:

The proposed privacy laws would make this situation worse. Lawmakers may not have set out to make the criminal process even more unfair, but the unjust result is not surprising. When lawmakers propose privacy bills to protect sensitive information, law enforcement agencies lobby for exceptions so they can continue to access the information. Few lobby for the accused to have similar rights. Just as the privacy interests of poor, minority and heavily policed communities are often ignored in the lawmaking process, so too are the interests of criminal defendants, many from those same communities.

In criminal cases, both the prosecution and the accused have a right to subpoena evidence so that juries can hear both sides of the case. The new privacy bills need to ensure that law enforcement and defense investigators operate under the same rules when they subpoena digital data. If lawmakers believe otherwise, they should have to explain and justify that view.

For more detail, see her paper.

Planet DebianVincent Bernat: Securing BGP on the host with origin validation

An increasingly popular design for a datacenter network is BGP on the host: each host ships with a BGP daemon to advertise the IPs it handles and receives the routes to its fellow servers. Compared to a L2-based design, it is very scalable, resilient, cross-vendor and safe to operate.1 Take a look at “L3 routing to the hypervisor with BGP” for a usage example.

Spine-leaf fabric with two spine routers, six leaf routers and nine physical hosts. All links have a BGP session established over them. Some of the servers have a speech balloon showing the IP prefix they want to handle.
BGP on the host with a spine-leaf IP fabric. A BGP session is established over each link and each host advertises its own IP prefixes.

While routing on the host eliminates the security problems related to Ethernet networks, a server may announce any IP prefix. In the above picture, two of them are announcing 2001:db8:cc::/64. This could be a legit use of anycast or a prefix hijack. BGP offers several solutions to improve this aspect and one of them is to reuse the features around the RPKI.

Short introduction to the RPKI

On the Internet, BGP is mostly relying on trust. This contributes to various incidents due to operator errors, like the one that affected Cloudflare a few months ago, or to malicious attackers, like the hijack of Amazon DNS to steal cryptocurrency wallets. RFC 7454 explains the best practices to avoid such issues.

IP addresses are allocated by five Regional Internet Registries (RIR). Each of them maintains a database of the assigned Internet resources, notably the IP addresses and the associated AS numbers. These databases may not be totally reliable but are widely used to build ACLs to ensure peers only announce the prefixes they are expected to. Here is an example of ACLs generated by bgpq3 when peering directly with Apple:2

$ bgpq3 -l v6-IMPORT-APPLE -6 -R 48 -m 48 -A -J -E AS-APPLE
policy-options {
 policy-statement v6-IMPORT-APPLE {
replace:
  from {
    route-filter 2403:300::/32 upto /48;
    route-filter 2620:0:1b00::/47 prefix-length-range /48-/48;
    route-filter 2620:0:1b02::/48 exact;
    route-filter 2620:0:1b04::/47 prefix-length-range /48-/48;
    route-filter 2620:149::/32 upto /48;
    route-filter 2a01:b740::/32 upto /48;
    route-filter 2a01:b747::/32 upto /48;
  }
 }
}

The RPKI (RFC 6480) adds public-key cryptography on top of it to sign the authorization for an AS to be the origin of an IP prefix. Such record is a Route Origination Authorization (ROA). You can browse the databases of these ROAs through the RIPE’s RPKI Validator instance:

Screenshot from an instance of RPKI validator showing the validity of 85.190.88.0/21 for AS 64476
RPKI validator shows one ROA for 85.190.88.0/21

BGP daemons do not have to download the databases or to check digital signatures to validate the received prefixes. Instead, they offload these tasks to a local RPKI validator implementing the “RPKI-to-Router Protocol” (RTR, RFC 6810).

For more details, have a look at “RPKI and BGP: our path to securing Internet Routing.”

Using origin validation in the datacenter

While it is possible to create our own RPKI for use inside the datacenter, we can take a shortcut and use a validator implementing RTR, like GoRTR, and accepting another source of truth. Let’s work on the following topology:

Spine-leaf fabric two spine routers, six leaf routers and nine physical hosts. All links have a BGP session established over them. Three of the physical hosts are validators and RTR sessions are established between them and the top-of-the-rack routers—except their own top-of-the-racks.
BGP on the host with prefix validation using RTR. Each server has its own AS number. The leaf routers establish RTR sessions to the validators.

We assume we have a place to maintain a mapping between the private AS numbers used by each host and the allowed prefixes:3

ASN        Allowed prefixes
AS 65005   2001:db8:aa::/64
AS 65006   2001:db8:bb::/64, 2001:db8:11::/64
AS 65007   2001:db8:cc::/64
AS 65008   2001:db8:dd::/64
AS 65009   2001:db8:ee::/64, 2001:db8:11::/64
AS 65010   2001:db8:ff::/64

From this table, we build a JSON file for GoRTR, assuming each host can announce the provided prefixes or more specific ones (like 2001:db8:aa::42:d9ff:fefc:287a/128 for AS 65005):

{
  "roas": [
    {
      "prefix": "2001:db8:aa::/64",
      "maxLength": 128,
      "asn": "AS65005"
    }, {
      "…": "…"
    }, {
      "prefix": "2001:db8:ff::/64",
      "maxLength": 128,
      "asn": "AS65010"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65006"
    }, {
      "prefix": "2001:db8:11::/64",
      "maxLength": 128,
      "asn": "AS65009"
    }
  ]
}
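
Rather than editing this JSON by hand, it can be generated from the mapping table. Here is a minimal sketch, assuming the mapping is kept in a plain Python dictionary; the dictionary and output file names are illustrative, not part of the original setup:

import json

# Hypothetical in-memory copy of the ASN -> allowed prefixes mapping shown
# above; in practice it would come from an IPAM or inventory database.
ALLOWED_PREFIXES = {
    65005: ["2001:db8:aa::/64"],
    65006: ["2001:db8:bb::/64", "2001:db8:11::/64"],
    65007: ["2001:db8:cc::/64"],
    65008: ["2001:db8:dd::/64"],
    65009: ["2001:db8:ee::/64", "2001:db8:11::/64"],
    65010: ["2001:db8:ff::/64"],
}

# Each host may announce the listed prefixes or more specific ones,
# hence maxLength 128 for IPv6.
roas = [
    {"prefix": prefix, "maxLength": 128, "asn": f"AS{asn}"}
    for asn, prefixes in sorted(ALLOWED_PREFIXES.items())
    for prefix in prefixes
]

with open("rpki.json", "w") as f:
    json.dump({"roas": roas}, f, indent=2)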

This file is deployed to all validators and served by a web server. GoRTR is configured to fetch it and update it every 10 minutes:

$ gortr -refresh=600 \
        -verify=false -checktime=false \
        -cache=http://127.0.0.1/rpki.json
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

The refresh interval could be lowered, but GoRTR can also be notified of an update using the SIGHUP signal, in which case clients are immediately notified of the change.
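
For illustration only, the deployment and reload steps could be scripted as follows. The host names, the remote path and the use of scp/ssh are assumptions, not part of the original setup:

#!/usr/bin/env python3
# Hypothetical helper: copy the new ROA file to each validator's web root
# and send SIGHUP to GoRTR so it reloads the cache right away.
import subprocess

VALIDATORS = ["validator1", "validator2", "validator3"]  # assumed host names
LOCAL_FILE = "rpki.json"
REMOTE_FILE = "/var/www/html/rpki.json"  # assumed web server document root

for host in VALIDATORS:
    # Copy the freshly generated ROA file next to the local web server.
    subprocess.run(["scp", LOCAL_FILE, f"{host}:{REMOTE_FILE}"], check=True)
    # GoRTR reloads its cache on SIGHUP and bumps the serial, so RTR
    # clients pick up the change without waiting for the next refresh.
    subprocess.run(["ssh", host, "pkill", "-HUP", "gortr"], check=True)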

The next step is to configure the leaf routers to validate the received prefixes using the farm of validators. Most vendors support RTR:

Platform         Over TCP?   Over SSH?
Juniper JunOS    ✔️
Cisco IOS XR     ✔️          ✔️
Cisco IOS XE     ✔️
Cisco IOS        ✔️
Arista EOS
BIRD             ✔️          ✔️
FRR              ✔️          ✔️
GoBGP            ✔️

Configuring JunOS

JunOS only supports plain-text TCP. First, let’s configure the connections to the validation servers:

routing-options {
    validation {
        group RPKI {
            session validator1 {
                hold-time 60;         # session is considered down after 1 minute
                record-lifetime 3600; # cache is kept for 1 hour
                refresh-time 30;      # cache is refreshed every 30 seconds
                port 8282;
            }
            session validator2 { /* OMITTED */ }
            session validator3 { /* OMITTED */ }
        }
    }
}

By default, at most two sessions are randomly established at the same time. This provides a good way to load-balance them among the validators while maintaining good availability. The second step is to define the policy for route validation:

policy-options {
    policy-statement ACCEPT-VALID {
        term valid {
            from {
                protocol bgp;
                validation-database valid;
            }
            then {
                validation-state valid;
                accept;
            }
        }
        term invalid {
            from {
                protocol bgp;
                validation-database invalid;
            }
            then {
                validation-state invalid;
                reject;
            }
        }
    }
    policy-statement REJECT-ALL {
        then reject;
    }
}

The policy statement ACCEPT-VALID turns the validation state of a prefix from unknown to valid if the ROA database says it is valid. It also accepts the route. If the prefix is invalid, the prefix is marked as such and rejected. We have also prepared a REJECT-ALL statement to reject everything else, notably unknown prefixes.
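
For intuition, the decision applied to each received route is the origin validation algorithm from RFC 6811. Here is a simplified Python sketch of that logic (not how JunOS implements it; the function and sample data are made up for illustration):

import ipaddress

def origin_validation(roas, prefix, origin_as):
    """Classify an announcement as 'valid', 'invalid' or 'unknown'.

    roas: iterable of (roa_prefix, max_length, asn) tuples, mirroring the
    entries fed to GoRTR. Simplified sketch of RFC 6811 semantics.
    """
    net = ipaddress.ip_network(prefix)
    covered = False
    for roa_prefix, max_length, asn in roas:
        roa_net = ipaddress.ip_network(roa_prefix)
        if net.version == roa_net.version and net.subnet_of(roa_net):
            covered = True  # at least one ROA covers this prefix
            if net.prefixlen <= max_length and origin_as == asn:
                return "valid"
    return "invalid" if covered else "unknown"

roas = [("2001:db8:cc::/64", 128, 65007)]
print(origin_validation(roas, "2001:db8:cc::42/128", 65007))  # valid
print(origin_validation(roas, "2001:db8:cc::42/128", 65010))  # invalid
print(origin_validation(roas, "2001:db8:aa::/64", 65005))     # unknown

Combined with the REJECT-ALL statement, only routes falling in the first case make it into the routing table.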

A ROA only certifies the origin of a prefix. A malicious actor can therefore craft an AS path ending with the expected origin AS to circumvent the validation. For example, AS 65007 could announce 2001:db8:dd::/64, a prefix allocated to AS 65008, by advertising it with the AS path 65007 65008. To avoid that, we define an additional policy statement to reject AS paths with more than one AS:

policy-options {
    as-path EXACTLY-ONE-ASN "^.$";
    policy-statement ONLY-DIRECTLY-CONNECTED {
        term exactly-one-asn {
            from {
                protocol bgp;
                as-path EXACTLY-ONE-ASN;
            }
            then next policy;
        }
        then reject;
    }
}

The last step is to configure the BGP sessions:

protocols {
    bgp {
        group HOSTS {
            local-as 65100;
            type external;
            # export [ … ];
            import [ ONLY-DIRECTLY-CONNECTED ACCEPT-VALID REJECT-ALL ];
            enforce-first-as;
            neighbor 2001:db8:42::a10 {
                peer-as 65005;
            }
            neighbor 2001:db8:42::a12 {
                peer-as 65006;
            }
            neighbor 2001:db8:42::a14 {
                peer-as 65007;
            }
        }
    }
}

The import policy rejects any AS path longer than one AS, accepts any validated prefix and rejects everything else. The enforce-first-as directive is also pretty important: it ensures the first (and, here, only) AS in the AS path matches the peer AS. Without it, a malicious neighbor could inject a prefix using an AS different from its own, defeating our purpose.4

Let’s check the state of the RTR sessions and the database:

> show validation session
Session                                  State   Flaps     Uptime #IPv4/IPv6 records
2001:db8:4242::10                        Up          0   00:16:09 0/9
2001:db8:4242::11                        Up          0   00:16:07 0/9
2001:db8:4242::12                        Connect     0            0/0

> show validation database
RV database for instance master

Prefix                 Origin-AS Session                                 State   Mismatch
2001:db8:11::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:11::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::10                       valid
2001:db8:aa::/64-128       65005 2001:db8:4242::11                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::10                       valid
2001:db8:bb::/64-128       65006 2001:db8:4242::11                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::10                       valid
2001:db8:cc::/64-128       65007 2001:db8:4242::11                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::10                       valid
2001:db8:dd::/64-128       65008 2001:db8:4242::11                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::10                       valid
2001:db8:ee::/64-128       65009 2001:db8:4242::11                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::10                       valid
2001:db8:ff::/64-128       65010 2001:db8:4242::11                       valid

  IPv4 records: 0
  IPv6 records: 18

Here is an example of an accepted route:

> show route protocol bgp table inet6 extensive all
inet6.0: 11 destinations, 11 routes (8 active, 0 holddown, 3 hidden)
2001:db8:bb::42/128 (1 entry, 0 announced)
        *BGP    Preference: 170/-101
                Next hop type: Router, Next hop index: 0
                Address: 0xd050470
                Next-hop reference count: 4
                Source: 2001:db8:42::a12
                Next hop: 2001:db8:42::a12 via em1.0, selected
                Session Id: 0x0
                State: <Active NotInstall Ext>
                Local AS: 65006 Peer AS: 65000
                Age: 12:11
                Validation State: valid
                Task: BGP_65000.2001:db8:42::a12+179
                AS path: 65006 I
                Accepted
                Localpref: 100
                Router ID: 1.1.1.1

A rejected route would look similar, with the reason “rejected by import policy” shown in the details and a validation state of invalid.

Configuring BIRD

BIRD supports both plain-text TCP and SSH. Let’s configure it to use SSH. We need to generate keypairs for both the leaf router and the validators (they can all share the same keypair). We also have to create a known_hosts file for BIRD:

(validatorX)$ ssh-keygen -qN "" -t rsa -f /etc/gortr/ssh_key
(validatorX)$ echo -n "validatorX:8283 " ; \
              cat /etc/gortr/ssh_key.pub
validatorX:8283 ssh-rsa AAAAB3[…]Rk5TW0=
(leaf1)$ ssh-keygen -qN "" -t rsa -f /etc/bird/ssh_key
(leaf1)$ echo 'validator1:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ echo 'validator2:8283 ssh-rsa AAAAB3[…]Rk5TW0=' >> /etc/bird/known_hosts
(leaf1)$ cat /etc/bird/ssh_key.pub
ssh-rsa AAAAB3[…]byQ7s=
(validatorX)$ echo 'ssh-rsa AAAAB3[…]byQ7s=' >> /etc/gortr/authorized_keys

GoRTR needs additional flags to allow connections over SSH:

$ gortr -refresh=600 -verify=false -checktime=false \
      -cache=http://127.0.0.1/rpki.json \
      -ssh.bind=:8283 \
      -ssh.key=/etc/gortr/ssh_key \
      -ssh.method.key=true \
      -ssh.auth.user=rpki \
      -ssh.auth.key.file=/etc/gortr/authorized_keys
INFO[0000] Enabling ssh with the following authentications: password=false, key=true
INFO[0000] New update (7 uniques, 8 total prefixes). 0 bytes. Updating sha256 hash  -> 68a1d3b52db8d654bd8263788319f08e3f5384ae54064a7034e9dbaee236ce96
INFO[0000] Updated added, new serial 1

Then, we can configure BIRD to use these RTR servers:

roa6 table ROA6;
template rpki VALIDATOR {
   roa6 { table ROA6; };
   transport ssh {
     user "rpki";
     remote public key "/etc/bird/known_hosts";
     bird private key "/etc/bird/ssh_key";
   };
   refresh keep 30;
   retry keep 30;
   expire keep 3600;
}
protocol rpki VALIDATOR1 from VALIDATOR {
   remote validator1 port 8283;
}
protocol rpki VALIDATOR2 from VALIDATOR {
   remote validator2 port 8283;
}

Unlike JunOS, BIRD doesn’t have a feature to only use a subset of validators. Therefore, we only configure two of them. As a safety measure, if both connections become unavailable, BIRD will keep the ROAs for one hour.

We can query the state of the RTR sessions and the database:

> show protocols all VALIDATOR1
Name       Proto      Table      State  Since         Info
VALIDATOR1 RPKI       ---        up     17:28:56.321  Established
  Cache server:     rpki@validator1:8283
  Status:           Established
  Transport:        SSHv2
  Protocol version: 1
  Session ID:       0
  Serial number:    1
  Last update:      before 25.212 s
  Refresh timer   : 4.787/30
  Retry timer     : ---
  Expire timer    : 3574.787/3600
  No roa4 channel
  Channel roa6
    State:          UP
    Table:          ROA6
    Preference:     100
    Input filter:   ACCEPT
    Output filter:  REJECT
    Routes:         9 imported, 0 exported, 9 preferred
    Route change stats:     received   rejected   filtered    ignored   accepted
      Import updates:              9          0          0          0          9
      Import withdraws:            0          0        ---          0          0
      Export updates:              0          0          0        ---          0
      Export withdraws:            0        ---        ---        ---          0

> show route table ROA6
Table ROA6:
    2001:db8:11::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:11::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:aa::/64-128 AS65005  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:bb::/64-128 AS65006  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:cc::/64-128 AS65007  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:dd::/64-128 AS65008  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ee::/64-128 AS65009  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)
    2001:db8:ff::/64-128 AS65010  [VALIDATOR1 17:28:56.333] * (100)
                                  [VALIDATOR2 17:28:56.414] (100)

As in the JunOS case, a malicious actor could try to work around the validation by building an AS path where the last AS number is the legitimate one. BIRD is flexible enough to let us validate against any AS. Instead of checking the origin AS from the AS path, we ask it to check the peer AS with this function:

function validated(int peeras) {
   if (roa_check(ROA6, net, peeras) != ROA_VALID) then {
      print "Ignore invalid ROA ", net, " for ASN ", peeras;
      reject;
   }
   accept;
}

The BGP instance is then configured to use the above function as the import policy:

protocol bgp PEER1 {
   local as 65100;
   neighbor 2001:db8:42::a10 as 65005;
   ipv6 {
      import keep filtered;
      import where validated(65005);
      # export …;
   };
}

You can view the rejected routes with show route filtered, but BIRD does not store information about the validation state in the routes. You can also watch the logs:

2019-07-31 17:29:08.491 <INFO> Ignore invalid ROA 2001:db8:bb::40:/126 for ASN 65005

Currently, BIRD does not reevaluate the prefixes when the ROAs are updated. There is work in progress to fix this. If this feature is important to you, have a look at FRR instead: it also supports the RTR protocol and triggers a soft reconfiguration of the BGP sessions when ROAs are updated.


  1. Notably, the data flow and the control plane are separated. A node can remove itself by notifying its peers without losing a single packet. ↩︎

  2. People often use AS sets, like AS-APPLE in this example, as they are convenient if you have multiple AS numbers or customers. However, there is currently nothing preventing a rogue actor from adding arbitrary AS numbers to their AS set. ↩︎

  3. We are using 16-bit AS numbers for readability. Because we need to assign a different AS number for each host in the datacenter, in an actual deployment, we would use 32-bit AS numbers. ↩︎

  4. Cisco routers and FRR enforce the first AS by default. This can be disabled to allow the use of route servers, which distribute prefixes on behalf of other routers. ↩︎

Worse Than FailureError'd: Choice is but an Illusion

"If you choose not to decide which button to press, you still have made a choice," Rob H. wrote.

 

"If you have a large breed cat, or small dog, the name doesn't matter, it just has to get the job done," writes Bryan.

 

Mike R. wrote, "Thanks Dropbox. Because your survey can't add, I missed out on my chance to win a gift card. Way to go guys..."

 

"There was a magnitude 7.1 earthquake near Ridgecrest, CA on 7/5/2019 at 8:25PM PDT. I visited the USGS earthquakes page, clicked on the earthquake link, and clickedd on the 'Did you feel it?' link, because we DID feel it here in Sacramento, CA, 290 miles away," Ken M. wrote, "Based on what I'm seeing though, I think they may call it a 'bat-quake' instead."

 

Benjamin writes, "Apparently Verizon is trying to cast a spell on me because I used too much data."

 

Daniel writes, "German telekom's customer center site is making iFrames sexy again."

 


Planet DebianJunichi Uekawa: Started wanting to move stuff to docker.

Started wanting to move stuff to docker. Especially around build systems. If things are mutable they will go bad and fixing them is annoying.

,

Cory DoctorowPaul Di Filippo on Radicalized: “Upton-Sinclairish muckraking, and Dickensian-Hugonian ashcan realism”

I was incredibly gratified and excited to read Paul Di Filippo’s Locus review of my latest book, Radicalized; Di Filippo is a superb writer, one of the original, Mirrorshades cyberpunks, and he is a superb and insightful literary critic, so when I read his superlative-laden review of my book today, it was an absolute thrill (I haven’t been this excited about a review since Bruce Sterling reviewed Walkaway).


There’s so much to be delighted by in this review, not least a comparison to Rod Serling (!). Below, a couple paras of especial note.

His latest, a collection of four novellas, subtitled “Four Tales of Our Present Moment”, fits the template perfectly, and extends his vision further into a realm where impassioned advocacy, Upton-Sinclairish muckraking, and Dickensian-Hugonian ashcan realism drives a kind of partisan or Cassandran science fiction seen before mostly during the post-WWII atomic bomb panic (think On the Beach) and 1960s New Wave-Age of Aquarius agitation (think Bug Jack Barron). Those earlier troubled eras resonate with our current quandary, but the “present moment” under Doctorow’s microscope – or is that a sniper’s crosshairs? – has its own unique features that he seeks to elucidate. These stories walk a razor’s edge between literature and propaganda, aesthetics and bludgeoning, subtlety and stridency, rant and revelation. The only guaranteed outcome after reading is that no one can be indifferent to them…

…The Radicalized collection strikes me in some sense as an episode of a primo TV anthology series – Night Gallery in the classical mode, or maybe in a more modern version, Philip K. Dick’s Electric Dreams. It gives us polymath Cory Doctorow as talented Rod Serling – himself both a dreamer and a social crusader – telling us that he’s going to show us, as vividly as he can, several nightmares or future hells, but that somehow the human spirit and soul will emerge intact and even triumphant.


Paul Di Filippo Reviews Radicalized by Cory Doctorow [Paul Di Filippo/Locus]

Worse Than FailureCodeSOD: Close to the Point

Lena inherited some C++ code which had issues regarding a timeout. While skimming through the code, one block in particular leapt out. This was production code which had been running in this state for some time.

if((pFile) && (pFile != (FILE *)(0xcdcdcdcd))) {
    fclose(pFile);
    pFile = NULL;
}

The purpose of this code is, as you might gather from the call to fclose, to close a file handle represented by pFile, a pointer to the handle. This code mostly is fine, but with one, big, glaring “hunh?” and it’s this bit here: (pFile != (FILE *)(0xcdcdcdcd))

(FILE *)(0xcdcdcdcd) casts the number 0xcdcdcdcd to a file pointer- essentially it creates a pointer pointing at memory address 0xcdcdcdcd. If pFile points to that address, we won’t close pFile. Is there a reason for this? Not that Lena could determine from the code. Did the 0xcdcdcdcd come from anywhere specific? Probably a previous developer trying to track down a bug and dumping addresses from the debugger. How did it get into production code? How long had it been there? It was impossible to tell. It was also impossible to tell if it was secretly doing something important, so Lena made a note to dig into it later, but focused on solving the timeout bug which had started this endeavor.


,

CryptogramAnother Attack Against Driverless Cars

In this piece of research, attackers successfully attack a driverless car system -- Renault Captur's "Level 0" autopilot (Level 0 systems advise human drivers but do not directly operate cars) -- by following them with drones that project images of fake road signs in 100ms bursts. The time is too short for human perception, but long enough to fool the autopilot's sensors.

Boing Boing post.

Worse Than FailureCodeSOD: What a Happy Date

As is the case with pretty much any language these days, Python comes with robust date handling functionality. If you want to know something like what the day of the month is? datetime.now().day will tell you. Simple, easy, and of course, just an invitation for someone to invent their own.

Jan was witness to a little date-time related office politics. This particular political battle started during a code review. Klaus had written some date mangling code, relying heavily on strftime to parse dates out to strings and then parse them back in as integers. Richard, quite reasonably, pointed out that Klaus was taking the long way around, and maybe Klaus should possibly think about doing it in a simpler fashion.

“So, you don’t understand the code?” Klaus asked.

“No, I understand it,” Richard replied. “But it’s far too complicated. You’re doing a simple task- getting the day of the month! The code should be simple.”

“Ah, so it’s too complicated, so you can’t understand it.”

“Just… write it the simple way. Use the built-in accessor.”

So, Klaus made his revisions, and merged the revised code.

import datetime
# ...
now = datetime.datetime.now()  # Richard
date = now.strftime("%d")  # Richard, this is a string over here
date_int = int(date)  # day number, int("08") = 8, so no problem here
hour = now.hour  # Richard :)))))
hour_int = int(hour)  # int hour, e.g. if it's 22:36 then hour = 22

Richard did not have a big :))))) on his face when he saw that in the master branch.


TEDStages of Life: Notes from Session 5 of TEDSummit 2019

Yilian Cañizares rocks the TED stage with a jubilant performance of her signature blend of classic jazz and Cuban rhythms. She performs at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

The penultimate session of TEDSummit 2019 had a bit of everything — new thoughts on aging, loneliness and happiness as well as breakthrough science, music and even a bit of comedy.

The event: TEDSummit 2019, Session 5: Stages of Life, hosted by Kelly Stoetzel and Alex Moura

When and where: Wednesday, July 24, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Nicola Sturgeon, Sonia Livingstone, Howard Taylor, Sara-Jane Dunn, Fay Bound Alberti, Carl Honoré

Opening: Raconteur Mackenzie Dalrymple telling the story of the Goodman of Ballengeich

Music: Yilian Cañizares and her band, rocking the TED stage with a jubilant performance that blends classic jazz and Cuban rhythms

Comedy: Amidst a head-spinning program of big (and often heavy) ideas, a welcomed break from comedian Omid Djalili, who lightens the session with a little self-deprecation and a few barbed cultural observations

The talks in brief:

“In the world we live in today, with growing divides and inequalities, with disaffection and alienation, it is more important than ever that we … promote a vision of society that has well-being, not just wealth, at its very heart,” says Nicola Sturgeon, First Minister of Scotland. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Nicola Sturgeon, First Minister of Scotland

Big idea: It’s time to challenge the monolithic importance of GDP as a quality-of-life metric — and paint a broader picture that also encompasses well-being.

How? In 2018, Scotland, Iceland and New Zealand established the Wellbeing Economy Governments group to challenge the supremacy of GDP. The leaders of these countries — who are, incidentally, all women — believe policies that promote happiness (including equal pay, childcare and paternity rights) could help decrease alienation in its citizens and, in turn, build resolve to confront global challenges like inequality and climate change.

Quote of the talk: “Growth in GDP should not be pursued at any and all cost … The goal of economic policy should be collective well-being: how happy and healthy a population is, not just how wealthy a population is.”


Sonia Livingstone, social psychologist

Big idea: Parents often view technology as either a beacon of hope or a developmental poison, but the biggest influence on their children’s life choices is how they help them navigate this unavoidable digital landscape. Society as a whole can positively impact these efforts.

How? Sonia Livingstone’s own childhood was relatively analog, but her research has been focused on how families embrace new technology today. Changes abound in the past few decades — whether it’s intensified educational pressures, migration, or rising inequality — yet it’s the digital revolution that remains the focus of our collective apprehension. Livingstone’s research suggests that policing screen time isn’t the answer to raising a well-rounded child, especially at a time when parents are trying to live more democratically with their children by sharing decision-making around activities like gaming and exploring the internet. Leaders and institutions alike can support a positive digital future for children by partnering with parents to guide activities within and outside of the home. Instead of criticizing families for their digital activities, Livingstone thinks we should identify what real-world challenges they’re facing, what options are available to them and how we can support them better.

Quote of the talk: “Screen time advice is causing conflict in the family, and there’s no solid evidence that more screen time increases childhood problems — especially compared with socio-economic or psychological factors. Restricting children breeds resistance, while guiding them builds judgment.”


Howard Taylor, child safety advocate

Big idea: Violence against children is an endemic issue worldwide, with rates of reported incidence increasing in some countries. We are at a historical moment that presents us with a unique opportunity to end the epidemic, and some countries are already leading the way.

How? Howard Taylor draws attention to Sweden and Uganda, two very different countries that share an explicit commitment to ending violence against children. Through high-level political buy-in, data-driven strategy and tactical legislative initiatives, the two countries have already made progress. These solutions and others are all part of INSPIRE, a set of strategies created by an alliance of global organizations as a roadmap to eliminating the problem. If we put in the work, Taylor says, a new normal will emerge: generations whose paths in life will be shaped by what they do — not what was done to them.

Quote of the talk: “What would it really mean if we actually end violence against children? Multiply the social, cultural and economic benefits of this change by every family, every community, village, town, city and country, and suddenly you have a new normal emerging. A generation would grow up without experiencing violence.”


“The first half of this century is going to be transformed by a new software revolution: the living software revolution. Its impact will be so enormous that it will make the first software revolution pale in comparison,” says computational biologist Sara-Jane Dunn. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sara-Jane Dunn, computational biologist

Big idea: In the 20th century, computer scientists inscribed machine-readable instructions on tiny silicon chips, completely revolutionizing our lives and workplaces. Today, a “living software” revolution centered around organisms built from programmable cells is poised to transform medicine, agriculture and energy in ways we can scarcely predict.

How? By studying how embryonic stem cells “decide” to become neurons, lung cells, bone cells or anything else in the body, Sara-Jane Dunn seeks to uncover the biological code that dictates cellular behavior. Using mathematical models, Dunn and her team analyze the expected function of a cellular system to determine the “genetic program” that leads to that result. While they’re still a long way from compiling living software, they’ve taken a crucial early step.

Quote of the talk: “We are at the beginning of a technological revolution. Understanding this ancient type of biological computation is the critical first step. And if we can realize this, we would enter into the era of an operating system that runs living software.”


Fay Bound Alberti, cultural historian

Big idea: We need to recognize the complexity of loneliness and its ever-transforming history. It’s not just an individual and psychological problem — it’s a social and physical one.

Why? Loneliness is a modern-day epidemic, with a history that’s often recognized solely as a product of the mind. Fay Bound Alberti believes that interpretation is limiting. “We’ve neglected [loneliness’s] physical effects — and loneliness is physical,” she says. She points to how crucial touch, smell, sound, human interaction and even nostalgic memories of sensory experiences are to coping with loneliness, making people feel important, seen and helping to produce endorphins. By reframing our perspective on this feeling of isolation, we can better understand how to heal it.

Quote of talk: “I am suggesting we need to turn to the physical body, we need to understand the physical and emotional experiences of loneliness to be able to tackle a modern epidemic. After all, it’s through our bodies, our sensory bodies, that we engage with the world.”

Fun fact: “Before 1800 there was no word for loneliness in the English language. There was something called: ‘oneliness’ and there were ‘lonely places,’ but both simply meant the state of being alone. There was no corresponding emotional lack and no modern state of loneliness.”


“Whatever age you are: own it — and then go out there and show the world what you can do!” says Carl Honoré. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Carl Honoré, writer, thinker and activist

Big idea: Stop the lazy thinking around age and the “cult of youth” — it’s not all downhill from 40.

How? We need to debunk the myths and stereotypes surrounding age — beliefs like “older people can’t learn new things” and “creativity belongs to the young.” There are plenty of trailblazers and changemakers who came into their own later in life, from artists and musicians to physicists and business leaders. Studies show that people who fear and feel bad about aging are more likely to suffer physical effects as if age is an actual affliction rather than just a number. The first step to getting past that is by creating new, more positive societal narratives. Honoré offers a set of simple solutions — the two most important being: check your language and own your age. Embrace aging as an adventure, a process of opening rather than closing doors. We need to feel better about aging in order to age better.

Quote of the talk: “Whatever age you are: own it — and then go out there and show the world what you can do!”

TEDWhat Brexit means for Scotland: A Q&A with First Minister Nicola Sturgeon

First Minister of Scotland Nicola Sturgeon spoke at TEDSummit on Wednesday in Edinburgh about her vision for making collective well-being the main aim of public policy and the economy. (Watch her full talk on TED.com.) That same morning, Boris Johnson assumed office as Prime Minister of the United Kingdom, the latest episode of the Brexit drama that has engulfed UK politics. During the 2016 referendum, Scotland voted against Brexit.

After her talk, Chris Anderson, the Head of TED, joined Sturgeon, who’s been vocally critical of Johnson, to ask a few questions about the current political landscape. Watch their exchange below.

,

Cory DoctorowHoustonites! Come see Hank Green and me in conversation tomorrow night!

Hank Green and I are doing a double act tomorrow night, July 31, as part of the tour for the paperback of his debut novel, An Absolutely Remarkable Thing. It’s a ticketed event (admission includes a copy of Hank’s book), and we’re presenting at 7PM at Spring Forest Middle School in association with Blue Willow Bookshop. Hope to see you there!

Krebs on SecurityCapital One Data Theft Impacts 106M People

Federal prosecutors this week charged a Seattle woman with stealing data from more than 100 million credit applications made with Capital One Financial Corp. Incredibly, much of this breach played out publicly over several months on social media and other open online platforms. What follows is a closer look at the accused, and what this incident may mean for consumers and businesses.

Paige “erratic” Thompson, in an undated photo posted to her Slack channel.

On July 29, FBI agents arrested Paige A. Thompson on suspicion of downloading nearly 30 GB of Capital One credit application data from a rented cloud data server. Capital One said the incident affected approximately 100 million people in the United States and six million in Canada.

That data included approximately 140,000 Social Security numbers and approximately 80,000 bank account numbers on U.S. consumers, and roughly 1 million Social Insurance Numbers (SINs) for Canadian credit card customers.

“Importantly, no credit card account numbers or log-in credentials were compromised and over 99 percent of Social Security numbers were not compromised,” Capital One said in a statement posted to its site.

“The largest category of information accessed was information on consumers and small businesses as of the time they applied for one of our credit card products from 2005 through early 2019,” the statement continues. “This information included personal information Capital One routinely collects at the time it receives credit card applications, including names, addresses, zip codes/postal codes, phone numbers, email addresses, dates of birth, and self-reported income.”

The FBI says Capital One learned about the theft from a tip sent via email on July 17, which alerted the company that some of its leaked data was being stored out in the open on the software development platform Github. That Github account was for a user named “Netcrave,” which includes the resume and name of one Paige A. Thompson.

The tip that alerted Capital One to its data breach.

The complaint doesn’t explicitly name the cloud hosting provider from which the Capital One credit data was taken, but it does say the accused’s resume states that she worked as a systems engineer at the provider between 2015 and 2016. That resume, available on Gitlab here, reveals Thompson’s most recent employer was Amazon Inc.

Further investigation revealed that Thompson used the nickname “erratic” on Twitter, where she spoke openly over several months about finding huge stores of data intended to be secured on various Amazon instances.

The Twitter user “erratic” posting about tools and processes used to access various Amazon cloud instances.

According to the FBI, Thompson also used a public Meetup group under the same alias, where she invited others to join a Slack channel named “Netcrave Communications.”

KrebsOnSecurity was able to join this open Slack channel Monday evening and review many months of postings apparently made by Erratic about her personal life, interests and online explorations. One of the more interesting posts by Erratic on the Slack channel is a June 27 comment listing various databases she found by hacking into improperly secured Amazon cloud instances.

That posting suggests Erratic may also have located tens of gigabytes of data belonging to other major corporations:

According to Erratic’s posts on Slack, the two items in the list above beginning with “ISRM-WAF” belong to Capital One.

Erratic also posted frequently to Slack about her struggles with gender identity, lack of employment, and persistent suicidal thoughts. In several conversations, Erratic makes references to running a botnet of sorts, although it is unclear how serious those claims were. Specifically, Erratic mentions one botnet involved in cryptojacking, which uses snippets of code installed on Web sites — often surreptitiously — designed to mine cryptocurrencies.

None of Erratic’s postings suggest Thompson sought to profit from selling the data taken from various Amazon cloud instances she was able to access. But it seems likely that at least some of that data could have been obtained by others who may have followed her activities on different social media platforms.

Ray Watson, a cybersecurity researcher at cloud security firm Masergy, said the Capital One incident contains the hallmarks of many other modern data breaches.

“The attacker was a former employee of the web hosting company involved, which is what is often referred to as insider threats,” Watson said. “She allegedly used web application firewall credentials to obtain privilege escalation. Also the use of Tor and an offshore VPN for obfuscation are commonly seen in similar data breaches.”

“The good news, however, is that Capital One Incident Response was able to move quickly once they were informed of a possible breach via their Responsible Disclosure program, which is something a lot of other companies struggle with,” he continued.

In Capital One’s statement about the breach, company chairman and CEO Richard D. Fairbank said the financial institution fixed the configuration vulnerability that led to the data theft and promptly began working with federal law enforcement.

“Based on our analysis to date, we believe it is unlikely that the information was used for fraud or disseminated by this individual,” Fairbank said. “While I am grateful that the perpetrator has been caught, I am deeply sorry for what has happened. I sincerely apologize for the understandable worry this incident must be causing those affected and I am committed to making it right.”

Capital One says it will notify affected individuals via a variety of channels, and make free credit monitoring and identity protection available to everyone affected.

Bloomberg reports that in court on Monday, Thompson broke down and laid her head on the defense table during the hearing. She is charged with a single count of computer fraud and faces a maximum penalty of five years in prison and a $250,000 fine. Thompson will be held in custody until her bail hearing, which is set for August 1.

A copy of the complaint against Thompson is available here.

Update, 3:38 p.m. ET: I’ve reached out to several companies that appear to be listed in the last screenshot above. Infoblox [an advertiser on this site] responded with the following statement:

“Infoblox is aware of the pending investigation of the Capital One hacking attack, and that Infoblox is among the companies referenced in the suspected hacker’s alleged online communications. Infoblox is continuing to investigate the matter, but at this time there is no indication that Infoblox was in any way involved with the reported Capital One breach. Additionally, there is no indication of an intrusion or data breach involving Infoblox causing any customer data to be exposed.”

CryptogramACLU on the GCHQ Backdoor Proposal

Back in January, two senior GCHQ officials proposed a specific backdoor for communications systems. It was universally derided as unworkable -- by me, as well. Now Jon Callas of the ACLU explains why.

CryptogramAttorney General William Barr on Encryption Policy

Yesterday, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how: an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity," and not "nuclear launch codes." This is true, but ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been an NSA operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications: data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law-enforcement access -- than it is to allow access at the cost of security. Barr is wrong, it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

EDITED TO ADD: More news articles.

EDITED TO ADD (7/28): Gen. Hayden comments.

EDITED TO ADD (7/30): Good response by Robert Graham.

Worse Than FailureThis Process is Nuts

A great man once said "I used to be over by the window, and I could see the squirrels, and they were merry." As pleasing of a sight as that was, what if the squirrels weren't merry?

Grady had an unpleasant experience with bushy-tailed rodents at a former job. Before starting at the Fintech firm as a data scientist, he was assured the Business Intelligence department was very advanced and run by an expert. They needed Grady to manipulate large data sets and implement machine learning to help out Lenny, the resident BI "expert". It quickly became apparent that Lenny didn't put the "Intelligence" in Business Intelligence.

Lenny was a long-term contractor who started the BI initiative from the ground-up. His previous work as a front-end developer led to his decision to use PHP for the ETL process. This one-of-a-kind monstrosity made it as unstable as a house of cards in a hurricane and the resultant data warehouse was more like a data cesspool.

"This here is the best piece of software in the whole company," Lenny boasted. "They tell me you're really smart, so you'll figure out how it works on your own. My work is far too important and advanced for me to be bothered with questions!" Lenny told Grady sternly.

Grady, left to fend for himself, spent weeks stumbling through code with very few comments and no existing documentation. He managed to deduce the main workflow for the ETL and warehouse process and it wasn't pretty. The first part of the ETL process deleted the entire existing data warehouse, allowing for a "fresh start" each day. If an error occurred during the ETL, rather than fail gracefully, the whole process crashed without restoring the data warehouse that was wiped out.

Grady found that the morning ETL run failed more often than not. Since Lenny never bothered to stroll in until 10 AM, the people that depended on data warehouse reports loudly complained to Grady. Having no clue how to fix it, he would tell them to be patient. Lenny would saunter in and start berating him "Seriously? Why haven't you figured out how to fix this yet?!" Lenny would spend an hour doing damage control, then disappear for a 90 minute lunch break.

One day, an email arrived informing everyone that Lenny was no longer with the company after exercising an obscure opt-out clause in his contract. Grady suddenly became the senior-most BI developer and inherited Lenny's trash pile. Determined to find the cause of the errors, he dug into parts of the code Lenny strictly forbade him to enter. Hoping to find any semblance of logging that might help, he scoured for hours.

Grady finally started seeing commands called "WritetoSkype". It sounded absurd, but it almost seemed like Lenny was logging to a Skype channel during the ETL run. Grady created a Skype account and subscribed to LennysETLLogging. All he found there was a bunch of dancing penguin emoticons, written one at a time.

Grady scrolled and scrolled and scrolled some more as thousands of dancing penguins written during the day's run performed for him. He finally reached the bottom and found an emoticon of a squirrel eating an acorn. Looking back at the code, WritetoSkype sent (dancingpenguin) when a step succeeded and (heidy) when a step failed. It was far from useful logging, but Grady now had a clear mission - Exterminate all the squirrels.


,

Krebs on SecurityNo Jail Time for “WannaCry Hero”

Marcus Hutchins, the “accidental hero” who helped arrest the spread of the global WannaCry ransomware outbreak in 2017, will receive no jail time for his admitted role in authoring and selling malware that helped cyberthieves steal online bank account credentials from victims, a federal judge ruled Friday.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image: twitter.com/malwaretechblog

The British security enthusiast enjoyed instant fame after the U.K. media revealed he’d registered and sinkholed a domain name that researchers later understood served as a hidden “kill switch” inside WannaCry, a fast-spreading, highly destructive strain of ransomware which propagated through a Microsoft Windows exploit developed by and subsequently stolen from the U.S. National Security Agency.

In August 2017, FBI agents arrested then 23-year-old Hutchins on suspicion of authoring and spreading the “Kronos” banking trojan and a related malware tool called UPAS Kit. Hutchins was released shortly after his arrest, but ordered to remain in the United States pending trial.

Many in the security community leaped to his defense at the time, noting that the FBI’s case appeared flimsy and that Hutchins had worked tirelessly through his blog to expose cybercriminals and their malicious tools. Hundreds of people donated to his legal defense fund.

In September 2017, KrebsOnSecurity published research which strongly suggested Hutchins’ dozens of alter egos online had a fairly lengthy history of developing and selling various malware tools and services. In April 2019, Hutchins pleaded guilty to criminal charges of conspiracy and to making, selling or advertising illegal wiretapping devices.

At his sentencing hearing July 26, U.S. District Judge Joseph Peter Stadtmueller said Hutchins’ action in halting the spread of WannaCry was far more consequential than the two malware strains he admitted authoring, and sentenced him to time served plus one year of supervised release. 

Marcy Wheeler, an independent journalist who live-tweeted and blogged about the sentencing hearing last week, observed that prosecutors failed to show convincing evidence of specific financial losses tied to any banking trojan victims, virtually all of whom were overseas — particularly in Hutchins’ home in the U.K.

“When it comes to matter of loss or gain,” Wheeler wrote, quoting Judge Stadtmueller. “the most striking is comparison between you passing Kronos and WannaCry, if one looks at loss & numbers of infections, over 8B throughout world w/WannaCry, and >120M in UK.”

“This case should never have been prosecuted in the first place,” Wheeler wrote. “And when Hutchins tried to challenge the details of the case — most notably the one largely ceded today, that the government really doesn’t have evidence that 10 computers were damaged by anything Hutchins did — the government doubled down and issued a superseding indictment that, because of the false statements charge, posed a real risk of conviction.”

Hutchins’ conviction means he will no longer be allowed to stay in or visit the United States, although Judge Stadtmueller reportedly suggested Hutchins should seek a presidential pardon, which would enable him to return and work here.

“Incredibly thankful for the understanding and leniency of the judge, the wonderful character letter you all sent, and everyone who helped me through the past two years, both financially and emotionally,” Hutchins tweeted immediately after the sentencing. “Once t[h]ings settle down I plan to focus on educational blog posts and livestreams again.”

Cory DoctorowPodcast: Adblocking: How About Nah?

In my latest podcast (MP3), I read my essay Adblocking: How About Nah?, published last week on EFF’s Deeplinks; it’s the latest installment in my series about “adversarial interoperability,” and the role it has historically played in keeping tech open and competitive, and how that role is changing now that yesterday’s scrappy startups have become today’s bloated incumbents, determined to prevent anyone from disrupting them they way they disrupted tech in their early days.

At the height of the pop-up wars, it seemed like there was no end in sight: the future of the Web would be one where humans adapted to pop-ups, then pop-ups found new, obnoxious ways to command humans’ attention, which would wane, until pop-ups got even more obnoxious.

But that’s not how it happened. Instead, browser vendors (beginning with Opera) started to ship on-by-default pop-up blockers. What’s more, users—who hated pop-up ads—started to choose browsers that blocked pop-ups, marginalizing holdouts like Microsoft’s Internet Explorer, until they, too, added pop-up blockers.

Chances are, those blockers are in your browser today. But here’s a funny thing: if you turn them off, you won’t see a million pop-up ads that have been lurking unseen for all these years.

Because once pop-up ads became invisible by default to an ever-larger swathe of Internet users, advertisers stopped demanding that publishers serve pop-up ads. The point of pop-ups was to get people’s attention, but something that is never seen in the first place can’t possibly do that.

MP3

Rondam RamblingsFedex: when it absolutely, positively has to get stuck in the system for over two months

I have seen some pretty serious corporate bureaucratic dysfunction over the years, but I think this one takes the cake: on May 23, we shipped a package via Fedex from California to Colorado.  The package required a signature.  It turned out that the person we sent it to had moved, and so was not able to sign for the package, and so it was not delivered. Now, the package has our return address on

Worse Than FailureCodeSOD: Some Kind of Magic

We all have our little bits of sloppiness and our bad habits. Most of us have more than one. One place I'm likely to get lazy, especially as I'm feeling my way around a problem, is with magic numbers. I always mean to go back and replace them with a constant, but sometimes there's another fire you need to put out and you just don't get back to it till somebody calls it out in a code review.

Then, of course, there are the folks who go too far. I once got a note complaining that I shouldn't have used 2*PI, but instead should have created a new constant, TAU. I disavow the need for tau, but my critic said magic numbers, like two, were disallowed, so I said "ciao" and tau is there now.

Angela A, who's had some experience with bad constants before, has found a new one.

// Decimal constant for value of 1
static constant float THIRTY = 30.0f;

The decimal constant for the value of 1 is THIRTY.


,

CryptogramFriday Squid Blogging: Humboldt Squid in Mexico Are Getting Smaller

The Humboldt squid are getting smaller:

Frawley and the other researchers found a flurry of factors that drove the jumbo squid's demise. The Gulf of California historically cycled between warm-water El Niño conditions and cool-water La Niña phases. The warm El Niño waters were inhospitable to jumbo squid -- more specifically to the squid's prey -- but subsequent La Niñas would allow squid populations to recover. But recent years have seen a drought of La Niñas, resulting in increasingly and more consistently warm waters. Frawley calls it an "oceanographic drought," and says that conditions like these will become more and more common with climate change. "But saying this specific instance is climate change is more than we can claim in the scope of our work," he adds. "I'm not willing to make that connection absolutely."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

,

CryptogramInsider Logic Bombs

Add to the "not very smart criminals" file:

According to court documents, Tinley provided software services for Siemens' Monroeville, PA offices for nearly ten years. Among the work he was asked to perform was the creation of spreadsheets that the company was using to manage equipment orders.

The spreadsheets included custom scripts that would update the content of the file based on current orders stored in other, remote documents, allowing the company to automate inventory and order management.

But while Tinley's files worked for years, they started malfunctioning around 2014. According to court documents, Tinley planted so-called "logic bombs" that would trigger after a certain date, and crash the files.

Every time the scripts would crash, Siemens would call Tinley, who'd fix the files for a fee.
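For readers unfamiliar with the term, a "logic bomb" here is nothing exotic: a dormant condition, typically a date check, that deliberately breaks something once the trigger passes. A minimal, hypothetical Kotlin sketch of the shape auditors look for (Tinley's actual bombs lived in spreadsheet scripts, not Kotlin):

    import java.time.LocalDate

    // Hypothetical illustration of the pattern described in the court documents:
    // the code behaves normally until a hard-coded date passes, then fails on purpose.
    val triggerDate: LocalDate = LocalDate.of(2014, 5, 1)   // date invented for illustration

    fun refreshOrders(orders: List<String>) {
        if (LocalDate.now().isAfter(triggerDate)) {
            // The "bomb": an engineered failure dressed up as an ordinary crash.
            throw IllegalStateException("Unexpected error while refreshing orders")
        }
        // ... normal processing would go here ...
    }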

Worse Than FailureError'd: Nice Day for Golf (in Hades)

"A coworker was looking up what the weather was going to be like for his tee time. He said he’s definitely wearing shorts," writes Angela A.


"I guess whenever a company lists welding in their LinkedIn job posting you know that they're REEAALLY serious about computer hardware," Andrew I. writes.


Chris A. wrote, "It was game, set, and match, but unfortunately, someone struck out."


Bruce C. writes, "I'm not surprised that NULL is missing some deals....that File Not Found person must be getting it all."


"Learning to use Docker with the 'Get Started' tutorials and had to wonder...is there some theme here?" Dave E. wondered.


"Ever type up an email and hit 'send' too early? Well...here's an example," writes Charlie.



,

TEDIt’s not about privacy — it’s about power: Carole Cadwalladr speaks at TEDSummit 2019

Three months after her landmark talk, Carole Cadwalladr is back at TED. In conversation with curator Bruno Giussani, Cadwalladr discusses the latest on her reporting on the Facebook-Cambridge Analytica scandal and what we still don’t know about the transatlantic links between Brexit and the 2016 US presidential election.

“Who has the information, who has the data about you, that is where power now lies,” Cadwalladr says.

Cadwalladr appears in The Great Hack, a documentary by Karim Amer and TED Prize winner Jehane Noujaim that explores how Cambridge Analytica has come to symbolize the dark side of social media. The documentary was screened for TEDSummit participants today. Watch it in select theaters and on Netflix starting July 24.

Learn more about how you can support Cadwalladr’s investigation into data, disinformation and democracy.

TEDNot All Is Broken: Notes from Session 6 of TEDSummit 2019

Raconteur Mackenzie Dalrymple regales the TEDSummit audience with a classic Scottish story. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In the final session of TEDSummit 2019, the themes from the week — our search for belonging and community, our digital future, our inextricable connection to the environment — ring out with clarity and insight. From the mysterious ways our emotions impact our biological hearts, to a tour-de-force talk on the languages we all speak, it’s a fitting close to a week of revelation, laughter, tears and wonder.

The event: TEDSummit 2019, Session 6: Not All Is Broken, hosted by Chris Anderson and Bruno Giussani

When and where: Thursday, July 25, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Johann Hari, Sandeep Jauhar, Anna Piperal, Eli Pariser, Poet Ali

Interlude: Mackenzie Dalrymple sharing the tale of an uncle and nephew competing to become Lord of the Isles

Music: Djazia Satour, blending 1950s Chaabi (a genre of North African folk music) with modern grooves

The talks in brief:

Johann Hari, journalist

Big idea: The cultural narrative and definitions of depression and anxiety need to change.

Why? We need to talk less about chemical imbalances and more about imbalances in the way we live. Johann Hari met with experts around the world, boiling down his research into a surprisingly simple thesis: all humans have physical needs (food, shelter, water) as well as psychological needs (feeling that you belong, that your life has meaning and purpose). Though antidepressant drugs work for some, biology isn’t the whole picture, and any treatment must be paired with a social approach. Our best bet is to listen to the signals of our bodies, instead of dismissing them as signs of weakness or madness. If we take time to investigate our red flags of depression and anxiety — and take the time to reevaluate how we build meaning and purpose, especially through social connections — we can start to heal in a society deemed the loneliest in human history.

Quote of the talk: “If you’re depressed, if you’re anxious — you’re not weak. You’re not crazy. You’re not a machine with broken parts. You’re a human being with unmet needs.”


“Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways,” says cardiologist Sandeep Jauhar. He speaks at TEDSummit: A Community Beyond Borders, July 21-25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Sandeep Jauhar, cardiologist

Big Idea: Emotional stress can be a matter of life and death. Let’s factor that into how we care for our hearts.

How? “The heart may not originate our feelings, but it is highly responsive to them,” says Sandeep Jauhar. In his practice as a cardiologist, he has seen extensive evidence of this: grief and fear can cause profound cardiac injury. “Takotsubo cardiomyopathy,” or broken heart syndrome, has been found to occur when the heart weakens after the death of a loved one or the stress of a large-scale natural disaster. It comes with none of the other usual symptoms of heart disease, and it can resolve in just a few weeks. But it can also prove fatal. In response, Jauhar says that we need a new paradigm of care, one that considers the heart as more than “a machine that can be manipulated and controlled” — and recognizes that emotional stress is as important as cholesterol.

Quote of the talk: “Even if emotions are not contained inside our hearts, the emotional heart overlaps its biological counterpart in surprising and mysterious ways.”


“In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated,” says e-governance expert Anna Piperal. She speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Anna Piperal, e-governance expert 

Big idea: Bureaucracy can be eradicated by going digital — but we’ll need to build in commitment and trust.

How? Estonia is one of the most digital societies on earth. After gaining independence 30 years ago, and subsequently building itself up from scratch, the country decided not only to digitize existing bureaucracy but also to create an entirely new system. Now citizens can conduct everything online, from running a business to voting and managing their healthcare records, and only need to show up in person for literally three things: to claim their identity card, marry or divorce, or sell a property. Anna Piperal explains how, using a form of blockchain technology, e-Estonia builds trust through the “once-only” principle, through which the state cannot ask for information more than once nor store it in more than one place. The country is working to redefine bureaucracy by making it more efficient, granting citizens full ownership of their data — and serving as a model for the rest of the world to do the same.

Quote of the talk: “In most countries, people don’t trust their governments, and the governments don’t trust them back. All the complicated paper-based formal procedures are supposed to solve that problem. Except that they don’t. They just make life more complicated.”


Eli Pariser, CEO of Upworthy

Big idea: We can find ways to make our online spaces civil and safe, much like our best cities.

How? Social media is a chaotic and sometimes dangerous place. With its trolls, criminals and segregated spaces, it’s a lot like New York City in the 1970s. But like New York City, it’s also a vibrant space in which people can innovate and find new ideas. So Eli Pariser asks: What if we design social media like we design cities, taking cues from social scientists and urban planners like Jane Jacobs? Built around empowered communities, one-on-one interactions and public censure for those who act out, platforms could encourage trust and discourse, discourage antisocial behavior and diminish the sense of chaos that leads some to embrace authoritarianism.

Quote of the talk: “If online digital spaces are going to be our new home, let’s make them a comfortable, beautiful place to live — a place we all feel not just included, but actually some ownership of. A place we get to know each other. A place you’d actually want not just to visit, but to bring your kids.”


“Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds,” says Poet Ali. He speaks at TEDSummit: A Community Beyond Borders, July 25, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Poet Ali, architect of human connection

Big idea: You speak far more languages than you realize, with each language representing a gateway to understanding different societies, cultures and experiences.

How? Whether it’s the recognized tongue of your country or profession, or the social norms of your community, every “language” you speak is more than a lexicon of words: it also encompasses feelings like laughter, solidarity, even a sense of being left out. These latter languages are universal, and the more we embrace their commonality — and acknowledge our fluency in them — the more we can empathize with our fellow humans, regardless of our differences.

Quote of the talk: “Every language we learn is a portal by which we can access another language. The more you know, the more you can speak. … That’s why languages are so important, because they give us access to new worlds.”

TEDBusiness Unusual: Notes from Session 4 of TEDSummit 2019

ELEW and Marcus Miller blend jazz improvisation with rock in a musical cocktail of “rock-jazz.” They perform at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

To keep pace with our ever-changing world, we need out-of-the-box ideas that are bigger and more imaginative than ever. The speakers and performers from this session explore these possibilities, challenging us to think harder about the notions we’ve come to accept.

The event: TEDSummit 2019, Session 4: Business Unusual, hosted by Whitney Pennington Rodgers and Cloe Shasha

When and where: Wednesday, July 24, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Margaret Heffernan, Bob Langert, Rose Mutiso, Mariana Mazzucato, Diego Prilusky

Music: A virtuosic violin performance by Min Kym, and a closing performance by ELEW featuring Marcus Miller, blending jazz improvisation with rock in a musical cocktail of “rock-jazz.”

The talks in brief:

“The more we let machines think for us, the less we can think for ourselves,” says Margaret Heffernan. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Margaret Heffernan, entrepreneur, former CEO and writer 

Big idea: The more we rely on technology to make us efficient, the fewer skills we have to confront the unexpected. That’s why we must start practicing “just-in-case” management — anticipating the events (climate catastrophes, epidemics, financial crises) that will almost certainly happen but are ambiguous in timing, scale and specifics. 

Why? In our complex, unpredictable world, changes can occur out of the blue and have outsize impacts. When governments, businesses and individuals prioritize efficiency above all else, it keeps them from responding quickly, effectively and creatively. That’s why we all need to focus on cultivating what Heffernan calls our “unpredictable, messy human skills.” These include exercising our social abilities to build strong relationships and coalitions; humility to admit we don’t have all the answers; imagination to dream up never-before-seen solutions; and bravery to keep experimenting.

Quote of the talk: “The harder, deeper truth is that the future is uncharted, that we can’t map it until we get there. But that’s OK because we have so much capacity for imagination — if we use it. We have deep talents for inventiveness and exploration — if we apply them. We are brave enough to invent things we’ve never seen before. Lose these skills and we are adrift. But hone and develop them, and we can make any future we choose.”


Bob Langert, sustainability expert and VP of sustainability at McDonald’s

Big idea: Adversaries can be your best allies.

How? Three simple steps: reach out, listen and learn. As a “corporate suit” (his words), Bob Langert collaborates with his company’s strongest critics to find business-friendly solutions for society. Instead of denying and pushing back, he tries to embrace their perspectives and suggestions. He encourages others in positions of power to do the same, driven by this mindset: assume the best intentions of your critics; focus on the truth, the science and facts; and be open and transparent in order to turn critics into allies. The worst-case scenario? You’ll become better, your organization will become better — and you might make some friends along the way.

Fun fact: After working with NGOs in the 1990s, McDonald’s reduced 300 million pounds of waste over 10 years.


“When we talk about providing energy for growth, it is not just about innovating the technology: it’s the slow and hard work of improving governance, institutions and a broader macro-environment,” says Rose Mutiso. She speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Rose Mutiso, energy scientist

Big Idea: In order to grow out of poverty, African countries need a steady supply of abundant and affordable electricity.

Why? Energy poverty, or the lack of access to electricity and other basic energy services, affects nearly two-thirds of Sub-Saharan Africa. As the region’s population continues to grow, we have the opportunity to build a new energy system — from scratch — to grow with it, says Rose Mutiso. It starts with naming the systemic holes that current solutions (solar, LED and battery technology) overlook: we don’t have a clear consensus on what energy poverty is; there’s too much reliance on quick fixes; and we’re misdirecting our climate change concerns. What we need, Mutiso says, is nuanced, large-scale solutions with a diverse range of energy sources. For instance, the region has significant hydroelectric potential, yet less than 10 percent of this potential is currently being utilized. If we work hard to find new solutions to our energy deficits now, everybody benefits.

Quote of the talk: “Countries cannot grow out of poverty without access to a steady supply of abundant, affordable and reliable energy to power these productive sectors — what I call energy for growth.”


Mariana Mazzucato, economist and policy influencer

Big idea: We’ve forgotten how to tell the difference between the value extractors in the C-suites and finance sectors and the value producers, the workers and taxpayers who actually fuel innovation and productivity. And recently we’ve neglected the importance of even questioning what the difference between the two is.

How? Economists must redefine and recognize true value creators, envisioning a system that rewards them just as much as CEOs, investors and bankers. We need to rethink how we value education, childcare and other “free” services — which don’t have a price but clearly contribute to sustaining our economies. We need to make sure that our entire society not only shares risks but also rewards.

Quote of the talk: “[During the bank bailouts] we didn’t hear the taxpayers bragging that they were value creators. But, obviously, having bailed out the biggest ‘value-creating’ productive companies, perhaps they should have.”


Diego Prilusky demos his immersive storytelling technology, bringing Grease to the TED stage. He speaks at TEDSummit: A Community Beyond Borders, July 24, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Diego Prilusky, video pioneer

Big idea: Get ready for the next revolution in visual storytelling: volumetric video, which aims to do nothing less than recreate reality as a cinematic experience.

How? Movies have been around for more than 100 years, but we’re still making (and watching) them in basically the same way. Can movies exist beyond the flat screen? Yes, says Diego Prilusky, but we’ll first need to completely rethink how they’re made. With his team at Intel Studios, Prilusky is pioneering volumetric video, a data-intensive medium powered by hundreds of sensors that capture light and motion from every possible direction. The result is like being inside a movie, which you could explore from different perspectives (or even through a character’s own eyes). In a live tech demo, Prilusky takes us inside a reshoot of an iconic dance number from the 1978 hit Grease. As actors twirl and sing “You’re the One That I Want,” he positions and repositions his perspective on the scene — moving around, in front of and in between the performers. Film buffs can rest easy, though: the aim isn’t to replace traditional movies, he says, but to empower creators to tell stories in new ways, across multiple vantage points.

Quote of the talk: “We’re opening the gates for new possibilities of immersive storytelling.”

TEDThe Big Rethink: Notes from Session 3 of TEDSummit 2019

Marco Tempest and his quadcopters perform a mind-bending display that feels equal parts science and magic at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

In an incredible session, speakers and performers laid out the biggest problems facing the world — from political and economic catastrophe to rising violence and deepfakes — and some new thinking on solutions.

The event: TEDSummit 2019, Session 3: The Big Rethink, hosted by Corey Hajim and Cyndi Stivers

When and where: Tuesday, July 23, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: George Monbiot, Nick Hanauer, Raghuram Rajan, Marco Tempest, Rachel Kleinfeld, Danielle Citron, Patrick Chappatte

Music: KT Tunstall sharing how she found her signature sound and playing her hits “Miniature Disasters,” “Black Horse and the Cherry Tree” and “Suddenly I See.”

The talks in brief:

“We are a society of altruists, but we are governed by psychopaths,” says George Monbiot. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

George Monbiot, investigative journalist and self-described “professional troublemaker”

Big idea: To get out of the political mess we’re in, we need a new story that captures the minds of people across fault lines.

Why? “Welcome to neoliberalism, the zombie doctrine that never seems to die,” says George Monbiot. We have been induced by politicians and economists into accepting an ideology of extreme competition and individualism, weakening the social bonds that make our lives worth living. And despite the 2008 financial crisis, which exposed the blatant shortcomings of neoliberalism, it still dominates our lives. Why? We haven’t yet produced a new story to replace it — a new narrative to help us make sense of the present and guide the future. So, Monbiot proposes his own: the “politics of belonging,” founded on the belief that most people are fundamentally altruistic, empathetic and socially minded. If we can tap into our fundamental urge to cooperate — namely, by building generous, inclusive communities around the shared sphere of the commons — we can build a better world. With a new story to light the way, we just might make it there.

Quote of the talk: “We are a society of altruists, but we are governed by psychopaths.”


Nick Hanauer, entrepreneur and venture capitalist.

Big idea: Economics has ceased to be a rational science in the service of the “greater good” of society. It’s time to ditch neoliberal economics and create tools that address inequality and injustice.

How? Today, under the banner of unfettered growth through lower taxes, fewer regulations, and lower wages, economics has become a tool that enforces the growing gap between the rich and poor. Nick Hanauer thinks that we must recognize that our society functions not because it’s a ruthless competition between its economically fittest members but because cooperation between people and institutions produces innovation. Competition shouldn’t be between the powerful at the expense of everyone else but between ideas battling it out in a well-managed marketplace in which everyone can participate.

Quote of the talk: “Successful economies are not jungles, they’re gardens — which is to say that markets, like gardens, must be tended … Unconstrained by social norms or democratic regulation, markets inevitably create more problems than they solve.”


Raghuram Rajan shares his idea for “inclusive localism” — giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption — at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Raghuram Rajan, economist and former Governor of the Reserve Bank of India

Big idea: As markets grow and governments focus on solving economic problems from the top-down, small communities and neighborhoods are losing their voices — and their livelihoods. But if nations lack the tools to address local problems, it’s time to turn to grass-roots communities for solutions.

How? Raghuram Rajan believes that nations must exercise “inclusive localism”: giving communities the tools to turn themselves around while establishing standards to prevent discrimination and corruption. As local leaders step forward, citizens become active, and communities receive needed resources from philanthropists and through economic incentives, neighborhoods will thrive and rebuild their social fabric.

Quote of the talk: “What we really need [are] bottom-up policies devised by the community itself to repair the links between the local community and the national — as well as thriving international — economies.”


Marco Tempest, cyber illusionist

Big idea: Illusions that set our imaginations soaring are created when magic and science come together.

Why? “Is it possible to create illusions in a world where technology makes anything possible?” asks techno-magician Marco Tempest, as he interacts with his group of small flying machines called quadcopters. The drones dance around him, reacting buoyantly to his gestures and making it easy to anthropomorphize or attribute personality traits. Tempest’s buzzing buddies swerve, hover and pause, moving in formation as he orchestrates them. His mind-bending display will have you asking yourself: Was that science or magic? Maybe it’s both.

Quote to remember: “Magicians are interesting, their illusions accomplish what technology cannot, but what happens when the technology of today seems almost magical?”


Rachel Kleinfeld, democracy advisor and author

Big idea: It’s possible to quell violence — in the wider world and in our own backyards — with democracy and a lot of political TLC.

How? Compassion-concentrated action. We need to dispel the idea that some people deserve violence because of where they live, the communities they’re a part of or their socio-economic background. Kleinfeld calls this particular, inequality-based vein of violence “privilege violence,” explaining how it evolves in stages and the ways we can eradicate it. By deprogramming how we view violence and its origins and victims, we can move forward and build safer, more secure societies.

Quote of the talk: “The most important thing we can do is abandon the notion that some lives are just worth less than others.”


“Not only do we believe fakes, we are starting to doubt the truth,” says Danielle Citron, revealing the threat deepfakes pose to the truth and democracy. She speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Danielle Citron, professor of law and deepfake scholar

Big idea: Deepfakes — machine learning technology used to manipulate or fabricate audio and video content — can cause significant harm to individuals and society. We need a comprehensive legislative and educational approach to the problem.

How? The use of deepfake technology to manipulate video and audio for malicious purposes — whether it’s to stoke violence against minorities or to defame politicians and journalists — is becoming ubiquitous. With tools being made more accessible and their products more realistic, what becomes of that key ingredient for democratic processes: the truth? As Danielle Citron points out, “Not only do we believe fakes, we are starting to doubt the truth.” The fix, she suggests, cannot be merely technological. Legislation worldwide must be tailored to fighting digital impersonations that invade privacy and ruin lives. Educational initiatives are needed to teach the media how to identify fakes, persuade law enforcement that the perpetrators are worth prosecuting and convince the public at large that the future of democracy really is at stake.

Quote of the talk: “Technologists expect that advances in AI will soon make it impossible to distinguish a fake video and a real one. How can truths emerge in a deepfake ridden ‘marketplace of ideas?’ Will we take the path of least resistance and just believe what we want to believe, truth be damned?”


“Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance,” says editorial cartoonist Patrick Chappatte. He speaks at TEDSummit: A Community Beyond Borders, July 23, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Patrick Chappatte, editorial cartoonist and graphic journalist

Big idea: We need humor like we need the air we breathe. We shouldn’t risk compromising our freedom of speech by censoring ourselves in the name of political correctness.

How? Our social media-saturated world is both a blessing and a curse for political cartoonists like Patrick Chappatte, whose satirical work can go viral while also making them, and the publications they work for, a target. Be it a prison sentence, firing or the outright dissolution of cartoon features in newspapers, editorial cartoonists worldwide are increasingly penalized for their art. Chappatte emphasizes the importance of the art form in political discourse by guiding us through 20 years of editorial cartoons that are equal parts humorous and caustic. In an age where social media platforms often provide places for fury instead of debate, he suggests that traditional media shouldn’t shy away from these online kingdoms, and neither should we. Now is the time to resist preventative self-censorship; if we don’t, we risk waking up in a sanitized world without freedom of expression.

Quote of the talk: “Freedom of expression is not incompatible with dialogue and listening to each other, but it is incompatible with intolerance.”

TEDAnthropo Impact: Notes from Session 2 of TEDSummit 2019

Radio Science Orchestra performs the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Session 2 of TEDSummit 2019 is all about impact: the actions we can take to solve humanity’s toughest challenges. Speakers and performers explore the perils — from melting glaciers to air pollution — along with some potential fixes — like ocean-going seaweed farms and radical proposals for how we can build the future.

The event: TEDSummit 2019, Session 2: Anthropo Impact, hosted by David Biello and Chee Pearlman

When and where: Monday, July 22, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Tshering Tobgay, María Neira, Tim Flannery, Kelly Wanser, Anthony Veneziale, Nicola Jones, Marwa Al-Sabouni, Ma Yansong

Music: Radio Science Orchestra, performing the musical odyssey “Prelude, Landing, Legacy” in celebration of the 50th anniversary of the Apollo 11 moon landing (and the 100th anniversary of the theremin’s invention)

… and something completely different: Improv maestro Anthony Veneziale, delivering a made-up-on-the-spot TED Talk based on a deck of slides he’d never seen and an audience-suggested topic: “the power of potatoes.” The result was … surprisingly profound.

The talks in brief:

Tshering Tobgay, politician, environmentalist and former Prime Minister of Bhutan

Big idea: We must save the Hindu Kush Himalayan glaciers from melting — or else face dire, irreversible consequences for one-fifth of the global population.

Why? The Hindu Kush Himalayan glaciers are the pulse of the planet: their rivers alone supply water to 1.6 billion people, and their melting would massively impact the 240 million people across eight countries within their reach. Think in extremes — more intense rains, flash floods and landslides along with unimaginable destruction and millions of climate refugees. Tshering Tobgay telegraphs the future we’re headed towards unless we act fast, calling for a new intergovernmental agency: the Third Pole Council. This council would be tasked with monitoring the glaciers’ health, implementing policies to protect them and, by proxy, the billions who depend on them.

Fun fact: The Hindu Kush Himalayan glaciers are the world’s third-largest repository of ice (after the North and South poles). They’re known as the “Third Pole” and the “Water Towers of Asia.”


Air pollution isn’t just bad for the environment — it’s also bad for our brains, says María Neira. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

María Neira, public health leader

Big idea: Air pollution isn’t just bad for our lungs — it’s bad for our brains, too.

Why? Globally, poor air quality causes seven million premature deaths per year. And all this pollution isn’t just affecting our lungs, says María Neira. An emerging field of research is shedding a light on the link between air pollution and our central nervous systems. The fine particulate matter in air pollution travels through our bloodstreams to our major organs, including the brain — which can slow down neurological development in kids and speed up cognitive decline in adults. In short: air pollution is making us less intelligent. We all have a role to play in curbing air pollution — and we can start by reducing traffic in cities, investing in clean energy and changing the way we consume.

Quote of the talk: “We need to exercise our rights and put pressure on politicians to make sure they will tackle the causes of air pollution. This is the first thing we need to do to protect our health and our beautiful brains.”


Tim Flannery, environmentalist, explorer and professor

Big idea: Seaweed could help us draw down atmospheric carbon and curb global warming.

How? You know the story: the blanket of CO2 above our heads is driving adverse climate changes and will continue to do so until we get it out of the air (a process known as “drawdown”). Tim Flannery thinks seaweed could help: it grows fast, is made out of productive, photosynthetic tissue and, when sunk more than a kilometer deep into the ocean, can lock up carbon long-term. If we cover nine percent of the ocean surface in seaweed farms, for instance, we could sequester the same amount of CO2 we currently put into the atmosphere. There’s still a lot to figure out, Flannery notes — like how growing seaweed at scale on the ocean surface will affect biodiversity down below — but the drawdown potential is too great to allow uncertainty to stymie progress.

Fun fact: Seaweed is the most ancient multicellular life known, with more genetic diversity than all other multicellular life combined.


Could cloud brightening help curb global warming? Kelly Wanser speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. Photo: Bret Hartman / TED

Kelly Wanser, geoengineering expert and executive director of SilverLining

Big idea: The practice of cloud brightening — seeding clouds with sea salt or other particulates to reflect sunshine back into space — could partially offset global warming, giving us crucial time while we figure out game-changing, long-term solutions.

How: Starting in 2020, new global regulations will require ships to cut sulfur emissions by 85 percent. This is a good thing, right? Not entirely, says Kelly Wanser. It turns out that when particulate emissions (like those from ships) mix with clouds, they make the clouds brighter — enabling them to reflect sunshine into space and temporarily cool our climate. (Think of it as the ibuprofen for our fevered climate.) Wanser’s team and others are coming up with experiments to see if “cloud-brightening” proves safe and effective; some scientists believe increasing the atmosphere’s reflectivity by one or two percent could offset the two degrees Celsius of warming that’s been forecast for Earth. As with other climate interventions, there’s much yet to learn, but the potential benefits make those efforts worth it.

An encouraging fact: The global community has rallied to pull off this kind of atmospheric intervention in the past, with the 1989 Montreal Protocol.


Nicola Jones, science journalist

Big idea: Noise in our oceans — from boat motors to seismic surveys — is an acute threat to underwater life. Unless we quiet down, we will irreparably damage marine ecosystems and may even drive some species to extinction.

How? We usually think of noise pollution as a problem in big cities on dry land. But ocean noise may be the culprit behind marine disruptions like whale strandings, fish kills and drops in plankton populations. Fortunately, compared to other climate change solutions, it’s relatively quick and easy to dial down our noise levels and keep our oceans quiet. Better ship propellor design, speed limits near harbors and quieter methods for oil and gas prospecting will all help humans restore peace and quiet to our neighbors in the sea.

Quote of the talk: “Sonar can be as loud as, or nearly as loud as, an underwater volcano. A supertanker can be as loud as the call of a blue whale.”


TED curator Chee Pearlman (left) speaks with architect Marwa Al-Sabouni at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Marwa Al-Sabouni, architect, interviewed by TED curator Chee Pearlman

Big idea: Architecture can exacerbate the social disruptions that lead to armed conflict.

How? Since the time of the French Mandate, officials in Syria have shrunk the communal spaces that traditionally united citizens of varying backgrounds. This contributed to a sense of alienation and rootlessness — a volatile cocktail that built conditions for unrest and, eventually, war. Marwa Al-Sabouni, a resident of Homs, Syria, saw firsthand how this unraveled social fabric helped reduce the city to rubble during the civil war. Now, she’s taking part in the city’s slow reconstruction — conducted by citizens with little or no government aid. As she explains in her book The Battle for Home, architects have the power (and the responsibility) to connect a city’s residents to a shared urban identity, rather than to opposing sectarian groups.

Quote of the talk: “Syria had a very unfortunate destiny, but it should be a lesson for the rest of the world: to take notice of how our cities are making us very alienated from each other, and from the place we used to call home.”


“Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit,” says Ma Yansong. He speaks at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Ma Yansong, architect and artist

Big Idea: By creating architecture that blends with nature, we can break free from the “matchbox” sameness of many city buildings.

How? Ma Yansong paints a vivid image of what happens when nature collides with architecture — from a pair of curvy skyscrapers that “dance” with each other to buildings that burst out of a village’s mountains like contour lines. Yansong embraces the shapes of nature — which never repeat themselves, he notes — and the randomness of hand-sketched designs, creating a kind of “emotional scenery.” When we think beyond the boxy geometry of modern cities, he says, the results can be breathtaking.

Quote of talk: “Architecture is no longer a function or a machine for living. It also reflects the nature around us. It also reflects our soul and the spirit.”

TED10 years of TED Fellows: Notes from the Fellows Session of TEDSummit 2019

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

The event: TEDSummit 2019, Fellows Session, hosted by Shoham Arad and Lily Whitsitt

When and where: Monday, July 22, 2019, 9am BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Carl Joshua Ncube, Suzanne Lee, Sonaar Luthra, Jon Lowenstein, Alicia Eggert, Lauren Sallan, Laura Boykin

Opening: A quick, witty performance from Carl Joshua Ncube, one of Zimbabwe’s best-known comedians, who uses humor to approach culturally taboo topics from his home country.

Music: An opening from visual artist and cellist Paul Rucker of the hauntingly beautiful “Criminalization of Survival,” a piece he created to explore issues related to mass incarceration, racially motivated violence, police brutality and the impact of slavery in the US.

And a dynamic closing from hip-hop artist and filmmaker Blitz Bazawule and his band, who tell stories of the polyphonic African diaspora.

The talks in brief:

Laura Boykin, computational biologist at the University of Western Australia

Big idea: If we’re going to solve the world’s toughest challenges — like food scarcity for millions of people living in extreme poverty — science needs to be more diverse and inclusive. 

How? Collaborating with smallholder farmers in sub-Saharan Africa, Laura Boykin uses genomics and supercomputing to help control whiteflies and viruses, which cause devastation to cassava crops. Cassava is a staple food that feeds more than 500 million people in East Africa and 800 million people globally. Boykin’s work transforms farmers’ lives, taking them from being unable to feed their families to having enough crops to sell and enough income to thrive. 

Quote of the talk: “I never dreamt the best science I would ever do would be sitting on a blanket under a tree in East Africa, using the highest tech genomics gadgets. Our team imagined a world where farmers could detect crop viruses in three hours instead of six months — and then we did it.”


Lauren Sallan, paleobiologist at the University of Pennsylvania

Big idea: Paleontology is about so much more than dinosaurs.

How? The history of life on earth is rich, varied and … entirely too focused on dinosaurs, according to Lauren Sallan. The fossil record shows that earth has a dramatic past, with four mass extinctions occurring before dinosaurs even came along. From fish with fingers to galloping crocodiles and armored squid, the variety of life that has lived on our changing planet can teach us more about how we got here, and what the future holds, if we take the time to look.

Quote of the talk: “We have learned a lot about dinosaurs, but there’s so much left to learn from the other 99.9 percent of things that have ever lived, and that’s paleontology.”


“If we applied the same energy we currently do suppressing forms of life towards cultivating life, we’d turn the negative image of the urban jungle into one that literally embodies a thriving, living ecosystem,” says Suzanne Lee. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Suzanne Lee, designer, biofabricator

Big idea: What if we could grow bricks, furniture and even ready-made fabric for clothes?

How? Suzanne Lee is a fashion designer turned biofabrication pioneer who is part of a global community of innovators figuring out how to grow their own materials. By utilizing living microbial organisms like bacteria and fungi, we can replace plastic, cement and other waste-generating materials with alternatives that can help reduce pollution.

Quote of the talk: “If we applied the same energy we currently do suppressing forms of life towards cultivating life, we’d turn the negative image of the urban jungle into one that literally embodies a thriving, living ecosystem.”


Sonaar Luthra, founder and CEO of Water Canary

Big idea: We need to get better at monitoring the world’s water supplies — and we need to do it fast.

How? Building a global weather service for water would help governments, businesses and communities manage 21st-century water risk. Sonaar Luthra’s company Water Canary aims to develop technologies that more efficiently monitor water quality and availability around the world, avoiding the unforecasted shortages that happen now. Businesses and governments must also invest more in water, he says, and the largest polluters and misusers of water must be held accountable.

Quote of the talk: “It is in the public interest to measure and to share everything we can discover and learn about the risks we face in water. Reality doesn’t exist until it’s measured. It doesn’t just take technology to measure it — it takes our collective will.”


Jon Lowenstein shares photos from the migrant journey in Latin America at TEDSummit: A Community Beyond Borders. July 22, 2019, in Edinburgh, Scotland. (Photo: Dian Lofton / TED)

Jon Lowenstein, documentary photographer, filmmaker and visual artist

Big idea: We need to care about the humanity of migrants in order to understand the desperate journeys they’re making across borders.

How? For the past two decades, Jon Lowenstein has captured the experiences of undocumented Latin Americans living in the United States to show the real stories of the men and women who make up the largest transnational migration in world history. Lowenstein specializes in long-term, in-depth documentary explorations that confront power, poverty and violence. 

Quote of the talk: “With these photographs, I place you squarely in the middle of these moments and ask you to think about [the people in them] as if you knew them. This body of work is a historical document — a time capsule — that can teach us not only about migration, but about society and ourselves.”


Alicia Eggert’s art asks us to recognize where we are now as individuals and as a society, and to identify where we want to be in the future. She speaks at TEDSummit: A Community Beyond Borders, July 22, 2019, in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Alicia Eggert, interdisciplinary artist

Big idea: A brighter, more equitable future depends upon our ability to imagine it.  

How? Alicia Eggert creates art that explores how light travels across space and time, revealing the relationship between reality and possibility. Her work has been installed on rooftops in Philadelphia, bridges in Amsterdam and uninhabited islands in Maine. Like navigational signs, Eggert’s artwork asks us to recognize where we are now as individuals and as a society, to identify where we want to be in the future — and to imagine the routes we can take to get there.

Quote of the talk: “Signs often help to orient us in the world by telling us where we are now and what’s happening in the present moment. But they can also help us zoom out, shift our perspective and get a sense of the bigger picture.”

TEDWeaving Community: Notes from Session 1 of TEDSummit 2019

Hosts Bruno Giussani and Helen Walters open Session 1: Weaving Community on July 21, 2019, Edinburgh, Scotland. (Photo: Bret Hartman / TED)

The stage is set for TEDSummit 2019: A Community Beyond Borders! During the opening session, speakers and performers explored themes of competition, political engagement and longing — and celebrated the TED communities (representing 84 countries) gathered in Edinburgh, Scotland to forge TED’s next chapter.

The event: TEDSummit 2019, Session 1: Weaving Community, hosted by Bruno Giussani and Helen Walters

When and where: Sunday, July 21, 2019, 5pm BST, at the Edinburgh Convention Centre in Edinburgh, Scotland

Speakers: Pico Iyer, Jochen Wegner, Hajer Sharief, Mariana Lin, Carole Cadwalladr, Susan Cain with Min Kym

Opening: A warm Scottish welcome from raconteur Mackenzie Dalrymple

Music: Findlay Napier and Gillian Frame performing selections from The Ledger, a series of Scottish folk songs

The talks in brief:

“Seeming happiness can stand in the way of true joy even more than misery does,” says writer Pico Iyer. (Photo: Ryan Lash / TED)

Pico Iyer, novelist and nonfiction author

Big idea: The opposite of winning isn’t losing; it’s failing to see the larger picture.

Why? As a child in England, Iyer believed the point of competition was to win, to vanquish one’s opponent. Now, some 50 years later and a resident of Japan, he’s realized that competition can be “more like an act of love.” A few times a week, he plays ping-pong at his local health club. Games are played as doubles, and partners are changed every five minutes. As a result, nobody ends up winning — or losing — for long. Iyer has found liberation and wisdom in this approach. Just as in a choir, he says, “Your only job is to play your small part perfectly, to hit your notes with feeling and by so doing help to create a beautiful harmony that’s much greater than the sum of its parts.”

Quote of the talk: “Seeming happiness can stand in the way of true joy even more than misery does.”


Jochen Wegner, journalist and editor of Zeit Online

Big idea: The spectrum of belief is as multifaceted as humanity itself. As social media segments us according to our interests, and as algorithms deliver us increasingly homogenous content that reinforces our beliefs, we become resistant to any ideas — or even facts — that contradict our worldview. The more we sequester ourselves, the more divided we become. How can we learn to bridge our differences?

How? Inspired by research showing that one-on-one conversations are a powerful tool for helping people learn to trust each other, Zeit Online built Germany Talks, a “Tinder for politics” that facilitates “political arguments” and face-to-face meetings between users in an attempt to bridge their points-of-view on issues ranging from immigration to same-sex marriage. With Germany Talks (and now My Country Talks and Europe Talks) Zeit has facilitated conversations between thousands of Europeans from 33 countries.

Quote of the talk: “What matters here is not the numbers, obviously. What matters here is whenever two people meet to talk in person for hours, without anyone else listening, they change — and so do our societies. They change, little by little, discussion by discussion.”


“The systems we have nowadays for political decision-making are not from the people for the people — they have been established by the few, for the few,” says activist Hajer Sharief. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Hajer Sharief, activist and cofounder of the Together We Build It Foundation

Big Idea: People of all genders, ages, races, beliefs and socioeconomic statuses should participate in politics.

Why? Hajer Sharief’s native Libya is recovering from 40 years of authoritarian rule and civil war. She sheds light on the way politics are involved in every aspect of life: “By not participating in it, you are literally allowing other people to decide what you can eat, wear, if you can have access to healthcare, free education, how much tax you pay, when can you retire, what is your pension,” she says. “Other people are also deciding whether your race is enough to consider you a criminal, or if your religion or nationality are enough to put you on a terrorist list.” When Sharief was growing up, her family held weekly meetings to discuss family issues, abiding by certain rules to ensure everyone was respectful and felt free to voice their thoughts. She recounts a meeting that went badly for her 10-year-old self, resulting in her boycotting them altogether for many years — until an issue came about which forced her to participate again. Rejoining the meetings was a political assertion, and it helped her realize an important lesson: you are never too young to use your voice — but you need to be present for it to work.

Quote of talk: “Politics is not only activism — it’s awareness, it’s keeping ourselves informed, it’s caring for facts. When it’s possible, it is casting a vote. Politics is the tool through which we structure ourselves as groups and societies.”


Mariana Lin, AI character designer and principal writer for Siri

Big idea: Let’s inject AI personalities with the essence of life: creativity, weirdness, curiosity, fun.

Why? Tech companies are going in two different directions when it comes to creating AI personas: they’re either building systems that are safe, flat, stripped of quirks and humor — or, worse, they’re building ones that are fully customizable, programmed to say just what you want to hear, just how you like to hear it. While this might sound nice at first, we’re losing part of what makes us human in the process: the friction and discomfort of relating with others, the hard work of building trusting relationships. Mariana Lin calls for tech companies to try harder to truly bring AI to life — in all its messy, complicated, uncomfortable glory. For starters, she says, companies can hire a diverse range of writers, creatives, artists and social thinkers to work on AI teams. If the people creating these personalities are as diverse as the people using it — from poets and philosophers to bankers and beekeepers — then the future of AI looks bright.

Quote of the talk: “If we do away with the discomfort of relating with others not exactly like us, with views not exactly like ours — we do away with what makes us human.”


In 2018, Carole Cadwalladr exposed Cambridge Analytica’s attempt to influence the UK Brexit vote and the 2016 US presidential election via personal data on Facebook. She’s still working to sound the alarm. She speaks at TEDSummit: A Community Beyond Borders, July 21, 2019, in Edinburgh, Scotland. (Photo: Bret Hartman / TED)

Carole Cadwalladr, investigative journalist, interviewed by TED curator Bruno Giussani

Big idea: Companies that collect and hoard our information, like Facebook, have become unthinkably powerful global players — perhaps more powerful than governments. It’s time for the public to hold them accountable.

How? Tech companies with offices in different countries must obey the laws of those nations. It’s up to leaders to make sure those laws are enforced — and it’s up to citizens to pressure lawmakers to further tighten protections. Despite legal and personal threats from her adversaries, Carole Cadwalladr continues to explore the ways in which corporations and politicians manipulate data to consolidate their power.

Quote to remember: “In Britain, Brexit is this thing which is reported on as this British phenomenon, that’s all about what’s happening in Westminster. The fact that actually we are part of something which is happening globally — this rise of populism and authoritarianism — that’s just completely overlooked. These transatlantic links between what is going on in Trump’s America are very, very closely linked to what is going on in Britain.”


Susan Cain meditates on how the feeling of longing can guide us to a deeper understanding of ourselves, accompanied by Min Kym on violin, at TEDSummit: A Community Beyond Borders. July 21, 2019, Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Susan Cain, quiet revolutionary, with violinist Min Kym

Big idea: Life is steeped in sublime magic that you can tap into, opening a whole world filled with passion and delight.

How? By forgoing constant positivity for a state of mind more exquisite and fleeting — a place where light (joy) and darkness (sorrow) meet, known to us all as longing. Susan Cain weaves her journey in search for the sublime with the splendid sounds of Min Kym on violin, sharing how the feeling of yearning connects us to each other and helps us to better understand what moves us deep down.

Quote of the talk: “Follow your longing where it’s telling you to go, and may it carry you straight to the beating heart of the perfect and beautiful world.”

Krebs on SecurityThe Unsexy Threat to Election Security

Much has been written about the need to further secure our elections, from ensuring the integrity of voting machines to combating fake news. But according to a report quietly issued by a California grand jury this week, more attention needs to be paid to securing social media and email accounts used by election officials at the state and local level.

California has a civil grand jury system designed to serve as an independent oversight of local government functions, and each county impanels jurors to perform this service annually. On Wednesday, a grand jury from San Mateo County in northern California released a report which envisions the havoc that might be wrought on the election process if malicious hackers were able to hijack social media and/or email accounts and disseminate false voting instructions or phony election results.

“Imagine that a hacker hijacks one of the County’s official social media accounts and uses it to report false results on election night and that local news outlets then redistribute those fraudulent election results to the public,” the report reads.

“Such a scenario could cause great confusion and erode public confidence in our elections, even if the vote itself is actually secure,” the report continues. “Alternatively, imagine that a hacker hijacks the County’s elections website before an election and circulates false voting instructions designed to frustrate the efforts of some voters to participate in the election. In that case, the interference could affect the election outcome, or at least call the results into question.”

In San Mateo County, the office of the Assessor-County Clerk-Recorder and Elections (ACRE) is responsible for carrying out elections and announcing local results. The ACRE sends election information to some 43,000 registered voters who’ve subscribed to receive sample ballots and voter information, and its Web site publishes voter eligibility information along with instructions on how and where to cast ballots.

The report notes that concerns about the security of these channels are hardly theoretical: In 2010, intruders hijacked ACRE’s election results Web page, and in 2016, cyber thieves successfully breached several county employee email accounts in a spear-phishing attack.

In the wake of the 2016 attack, San Mateo County instituted two-factor authentication for its email accounts — requiring each user to log in with a password and a one-time code sent via text message to their mobile device. However, the county uses its own Twitter, Facebook, Instagram and YouTube accounts to share election information, and these accounts are not currently secured by two-factor authentication, the report found.

“The Grand Jury finds that the security protections against hijacking of ACRE’s website, email, and social media accounts are not adequate to protect against the current cyber threats. These vulnerabilities expose the public to potential disinformation by hackers who could hijack an ACRE online communication platform to mislead voters before an election or sow confusion afterward. Public confidence is at stake, even if the vote itself is secure.”

The jury recommended the county take full advantage of the most secure two-factor authentication now offered by all of these social media platforms: the use of a FIDO physical security key, a small hardware device which allows the user to complete the login process simply by inserting the USB device and pressing a button. The key works without the need for any special software drivers. [Full disclosure: Yubico, a major manufacturer of security keys, is currently an advertiser on this site.]

Additionally, the report urges election officials to migrate away from one-time codes sent via text message, as these can be intercepted via man-in-the-middle (MitM) and SIM-swapping attacks.  MitM attacks use counterfeit login pages to steal credentials and one-time codes.

An unauthorized SIM swap is an increasingly rampant form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

Samy Tarazi is a sergeant with the sheriff’s office in nearby Santa Clara County and a supervisor with the REACT Task Force, a team of law enforcement officers that has been tracking down individuals perpetrating SIM swapping attacks. Tarazi said he fully expects SIM swapping to emerge as a real threat to state and local election workers, as well as to staff and volunteers working for candidates.

“I wouldn’t be surprised if some major candidate or their staff has an email or social media account with tons of important stuff on there [whose password] can be reset with just a text message,” Tarazi told KrebsOnSecurity. “I hope that doesn’t happen, but politicians are regular people who use the same tools we use.”

A copy of the San Mateo County grand jury report is available here (PDF).

CryptogramSoftware Developers and Security

According to a survey: "68% of the security professionals surveyed believe it's a programmer's job to write secure code, but they also think less than half of developers can spot security holes." And that's a problem.

Nearly half of security pros surveyed, 49%, said they struggle to get developers to make remediation of vulnerabilities a priority. Worse still, 68% of security professionals feel fewer than half of developers can spot security vulnerabilities later in the life cycle. Roughly half of security professionals said they most often found bugs after code is merged in a test environment.

At the same time, nearly 70% of developers said that while they are expected to write secure code, they get little guidance or help. One disgruntled programmer said, "It's a mess, no standardization, most of my work has never had a security scan."

Another problem is it seems many companies don't take security seriously enough. Nearly 44% of those surveyed reported that they're not judged on their security vulnerabilities.

Worse Than FailureCodeSOD: Null Thought

These days, it almost seems like most of the developers writing code for the Java Virtual Machine aren’t doing it in Java. It’s honestly an interesting thing for programming language development, as more and more folks put together languages with fundamentally different principles from Java that still work on the JVM.

Like Kotlin. Kotlin blends functional programming styles with object-oriented principles and syntactic features built around writing more compact, concise code than the equivalent Java. And it’s not even limited to the JVM: it can also compile to JavaScript or target LLVM.

And since you can write bad code in any language, you can write bad code in Kotlin. Keith inherited a Kotlin-developed Android app.

In Kotlin, if you wanted to execute some code specifically when a previous step failed, you might use a try/catch exception handler, which is built into the language. But maybe you want to do that sort of error handling inline, in a pipeline of function calls. So maybe you want something which looks more like:

response.code
    .wasSuccess()
    .takeIf { !it }
    ?.let { doErrorThing(it) }

wasSuccess in this example returns a boolean. The takeIf returns that boolean only when it’s false; otherwise it yields null, and the let block never executes (the question mark is Kotlin’s safe-call operator, which short-circuits on null).

Kotlin has a weird relationship with nulls, and unless you’re explicit about where you expect nulls, it is going to complain at you. Which is why Keith got annoyed at this block:

/**
     * Handles error and returns NULL if an error was found, true if everything was good
     */
    inline fun Int.wasSuccessOrNull() : Boolean? {
        return if (handleConnectionErrors(this))
            null
        else
            true
    }
    /**
     * Return true if any connection issues were found, false if everything okay
     */
    fun handleConnectionErrors(errorCode: Int) : Boolean {
        return when (errorCode)
        {
            Error.EXPIRED_TOKEN -> { requiresSignIn.value = true;  true}
            Error.NO_CONNECTION -> { connectionIssue.value = true; true}
            Error.INACTIVE_ACCOUNT -> { inactiveAccountIssue.value = true; true}
            Error.BAD_GATEWAY -> { badGatewayIssue.value = true;  true}
            Error.SERVICE_UNAVAILABLE -> { badGatewayIssue.value = true;  true}
            else -> {
                if (badGatewayIssue.value == true) {
                    badGatewayIssue.value = false
                }
                noErrors.value = true
                false
            }
        }
    }

wasSuccessOrNull returns true if the status code is successful; otherwise it returns… null? Why a null? Just so that a null-safe call like the ?.let above can be chained after it? It’s a weird choice. If we’re just going to return non-true/false values from our boolean methods, there are other options we could use.
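
One minimal sketch, reusing the names from the illustration above (wasSuccess, doErrorThing) plus the quoted handleConnectionErrors, keeps the extension honestly Boolean and leaves the null juggling to the call site:

// Sketch only: the extension stays a plain Boolean...
fun Int.wasSuccess(): Boolean = !handleConnectionErrors(this)

// ...and takeUnless produces the null at the call site, so the
// "only run this on failure" branch reads the same way as before.
response.code
    .takeUnless { it.wasSuccess() }
    ?.let { failedCode -> doErrorThing(failedCode) }

The types stay truthful: wasSuccess answers a yes/no question, and the nullability lives only in the chain that actually wants it.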

But honestly, handleConnectionErrors, which it calls, is even more worrisome. If an error did occur, this causes a side effect. Each error condition sets a variable outside of the scope of this function. Presumably these are class members, but who knows? It could just as easily be globals.

If the error code isn’t an error, we explicitly clear the badGatewayIssue, but we don’t clear any of the other errors. Presumably that does happen, somewhere, but again, who knows? This is the kind of code where trying to guess at what works and what doesn’t is a bad idea.
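
If the side effects are the deeper problem, one possibility (a sketch under the assumption that the Error constants are plain Ints, not a claim about the app’s actual design) is to have the classifier return a value and let the caller own whatever UI flags need setting or clearing:

// Hypothetical sketch: model the outcome as data instead of mutating
// shared flags from inside the classifier.
sealed class ConnectionResult {
    object Ok : ConnectionResult()
    data class Failed(val errorCode: Int) : ConnectionResult()
}

fun classifyConnectionError(errorCode: Int): ConnectionResult = when (errorCode) {
    Error.EXPIRED_TOKEN,
    Error.NO_CONNECTION,
    Error.INACTIVE_ACCOUNT,
    Error.BAD_GATEWAY,
    Error.SERVICE_UNAVAILABLE -> ConnectionResult.Failed(errorCode)
    else -> ConnectionResult.Ok
}

A caller that matches on the result has to say explicitly which flags it sets and which it clears, which is exactly the question the original code leaves you guessing about.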


,

Krebs on SecurityNeo-Nazi SWATters Target Dozens of Journalists

Nearly three dozen journalists at a broad range of major publications have been targeted by a far-right group that maintains a Deep Web database listing the personal information of people who threaten their views. This group specializes in encouraging others to harass those targeted by their ire, and has claimed responsibility for dozens of bomb threats and “swatting” incidents, where police are tricked into visiting potentially deadly force on the target’s address.

At issue is a site called the “Doxbin,” which hosts the names, addresses, phone numbers and often known IP addresses, Social Security numbers, dates of birth and other sensitive information on hundreds of people — and in some cases the personal information of the target’s friends and family.

A significant number of the 400+ entries on the Doxbin are for journalists (32 at last count, including Yours Truly), although the curators of Doxbin have targeted everyone from federal judges to executives at major corporations. In January 2019, the group behind Doxbin claimed responsibility for doxing and swatting a top Facebook executive.

At least two of the journalists listed on the Doxbin have been swatted in the past six months, including Pulitzer prize winning columnist Leonard G. Pitts Jr.

In some cases, as in the entries for reporters from CNN, Politico, ProPublica and Vox, no reason is mentioned for their inclusion. But in many others, the explanation seems connected to stories the journalist has published dealing with race or the anti-fascist (antifa) movement.

“Anti-white race/politics writer,” reads the note next to Pitts’ entry in the Doxbin.

Many of those listed on the site soon find themselves on the receiving end of extended threats and harassment. Carey Holzman, a computer technician who runs a YouTube channel on repairing and modding computers, was swatted in January, at about the same time his personal information showed up on the Doxbin.

More recently, his tormentors started calling his mobile phone at all hours of the night, threatening to hire a hit man to kill him. They even promised to have drugs ordered off the Dark Web and sent to his home, as part of a plan to get him arrested for drug possession.

“They said they were going to send me three grams of cocaine,” Holzman told KrebsOnSecurity.

Sure enough, earlier this month a small vial of white powder arrived via the U.S. Postal Service. Holzman said he didn’t open the vial, but instead handed it over to the local police for testing.

On the bright side, Holzman said, he is now on a first-name basis with some of the local police, which isn’t a bad idea for anyone who is being threatened with swatting attacks.

“When I told one officer who came out to my house that they threatened to send me drugs, he said ‘Okay, well just let me know when the cocaine arrives,'” Holzman recalled. “It was pretty funny because the other responding officer approached us and only caught the last thing his partner said, and suddenly looked at the other officer with deadly seriousness.”

The Doxbin is tied to an open IRC chat channel in which the core members discuss alt-right and racist tropes, doxing and swatting people, and posting videos or audio news recordings of their attacks.

The individual who appears to maintain the Doxbin is a fixture of this IRC channel, and he’s stated that he also was responsible for maintaining SiegeCulture, a white supremacist Web site that glorifies the writings of neo-Nazi James Mason.

Mason’s various written works call on followers to start a violent race war in the United States. Those works have become the de facto bible for the Atomwaffen Division, an extremist group whose members are suspected of having committed multiple murders in the U.S. since 2017.

Courtney Radsch, advocacy director at the nonprofit Committee to Protect Journalists, said lists that single out journalists for harassment unfortunately are not uncommon.

“We saw in the Ukraine, for example, there were lists of journalists compiled that led to harassment and threats against reporters there,” Radsch said. “We saw it in Malta where there were reports that the prime minister was part of a secret Facebook group used to coordinate harassment campaigns against a journalist who was later murdered. And we’ve seen the American government — the Customs and Border Protection — compiling lists of reporters and activists who’ve been singled out for questioning.”

Radsch said when CPJ became aware that the personal information of several journalists was listed on a doxing site, they reached out and provided information on relevant safety resources.

“It does seem that some of these campaigns by extremist groups are being coordinated in secret chat groups or dark web forums, where they can talk about the messaging before they bring it out into the public sphere,” she said.

In some ways, the Doxbin represents a far more extreme version of Exposed[.]su, a site erected briefly in 2013 by a gang of online hoodlums that doxed and swatted celebrities and public figures. The core members of that group were later arrested and charged with various crimes — including numerous swatting attacks.

One of the men in that group — convicted serial swatter and stalker Mir Islam — was arrested last year in the Philippines and charged with murder after he and an associate allegedly dumped the body of a friend in a local river.

Swatting attacks can quickly turn deadly. In March 2019, 26-year-old serial swatter Tyler Barriss was sentenced to 20 years in prison for making a phony emergency call to police in late 2017 that led to the shooting death of an innocent Kansas resident.

My hope is that law enforcement officials can shut down this Doxbin gang before someone else gets killed.

Sociological ImagesHappy Birthday, SocImages!

This month, Sociological Images turns twelve! It has been a busy year with some big changes backstage, so today I’m rounding up a dozen of our top posts as we look forward to a new academic year.

The biggest news is that the blog has a new home. It still lives on my computer (and The Society Pages’ network), but that home has moved east as I start as an assistant professor at UMass Boston Sociology. It’s a great department with wonderful colleagues who share a commitment to publicly-oriented scholarship, and I am excited to see what we can build in Boston! 

This year, readers loved the recent discovery that many of the players on the US Women’s National Team were sociology majors and a look at the sociology of streetwear. We covered high-class hoaxes in the wake of the Fyre Festival documentaries, looked at who gets to win board games on TV, and followed the spooky side of science for the 200th anniversary of Frankenstein. Gender reveal parties were literally booming, unfortunately.

We also had a bunch of stellar guest posts this year, tackling all kinds of big questions like why people freaked out about fast food at the White House, why Green Book was a weird Oscar win, why people sometimes collect racist memorabilia, and why we often avoid reading the news. My personal favorites included a research roundup on women’s expertise and a look at the boom in bisexual identification in the United States. Please keep sending in guest posts! I want to feature your work. Guidelines are here, and you can always reach out via email or Twitter DM.

Finally, big thanks to all of you who read the blog actively, pass along posts to friends and family, and bring it into your classes. We keep this blog running on a zero-dollar budget, Creative Commons licensing, and a heavy dose of the sociological imagination that comes with your support. Happy reading!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: A Long Conversion

Let’s talk a little bit about .NET’s TryParse method. Many types, especially the built-in numerics, support it alongside Parse. The key difference between Parse and TryParse is that TryParse bakes the exception handling in: instead of throwing when it can’t parse the input, it returns a boolean telling you whether the parse succeeded.

If, for example, you wanted to take an input, and either store it as an integer in a database, or store a null, you might do something like this:

int result;
if (int.TryParse(data, out result)) {
  rowData[column] = result;
} else {
  rowData[column] = DBNull.Value;
}

There are certainly better, cleaner ways to handle this. Russell F. has a co-worker that has a much uglier way to handle this.

private void BuildIntColumns(string data, DataRow rowData, int startIndex, int length, string columnName, FileInfo file, string tableName)
{
    if (data.Trim().Length > startIndex)
    {
        try
        {
            int resultOut;

            if (data.Substring(startIndex, length).Trim() == "" || string.IsNullOrEmpty(data.Substring(startIndex, length).Trim()))
            {
                rowData[columnName] = DBNull.Value;
            }
            else if (int.TryParse(data.Substring(startIndex, length).Trim(), out resultOut) == false)
            {
                rowData[columnName] = DBNull.Value;
            }
            else
            {
                rowData[columnName] = Convert.ToInt32(data.Substring(startIndex, length).Trim());
            }
        }
        catch (Exception e)
        {
            rowData[columnName] = DBNull.Value;
            SaveErrorData(file, data, e.Message, tableName);
        }
    }
}

private void BuildLongColumns(string data, DataRow rowData, int startIndex, int length, string columnName, FileInfo file, string tableName)
{
    if (data.Trim().Length > startIndex)
    {
        try
        {
            int resultOut;

            if (data.Substring(startIndex, length).Trim() == "" || string.IsNullOrEmpty(data.Substring(startIndex, length).Trim()))
            {
                rowData[columnName] = DBNull.Value;
            }
            else if (int.TryParse(data.Substring(startIndex, length).Trim(), out resultOut) == false)
            {
                rowData[columnName] = DBNull.Value;
            }
            else
            {
                rowData[columnName] = Convert.ToInt64(data.Substring(startIndex, length).Trim());
            }
        }
        catch (Exception e)
        {
            rowData[columnName] = DBNull.Value;
            SaveErrorData(file, data, e.Message, tableName);
        }
    }
}

Here’s a case where the developer knows that methods like int.TryParse and string.IsNullOrEmpty exist, but doesn’t understand them. More worryingly, every operation has to be performed on a Substring for some reason, which implies that they’re processing strings which contain multiple fields of data. Presumably that means there’s a mainframe with fixed-width records someplace in the backend, but splitting fields apart while converting them is certainly a bad practice.

For bonus points, compare BuildIntColumns with BuildLongColumns. There’s an extremely subtle bug in BuildLongColumns: it still uses int.TryParse, even though the column holds a long. Feed it a value outside the range of an int and the parse fails, so a perfectly valid long is treated as invalid and stored as NULL; the fix is to validate with long.TryParse instead.

Russell adds: “I found this in an old file – no clue who wrote it or how it came to exist (I guess I could go through the commit logs, but that sounds like a lot of work).”


,

TEDA new mission to mobilize 2 million women in US politics … and more TED news

TED2019 may be past, but the TED community is busy as ever. Below, a few highlights.

Amplifying 2 million women across the U.S. Activist Ai-jen Poo, Black Lives Matter co-founder Alicia Garza and Planned Parenthood past president Cecile Richards have joined forces to launch Supermajority, which aims to train 2 million women in the United States to become activists and political leaders. To scale, the political hub plans to partner with local nonprofits across the country; as a first step, the co-founders will embark on a nationwide listening tour this summer. (Watch Poo’s, Garza’s and Richards’ TED Talks.)

Sneaker reseller set to break billion-dollar record. Sneakerheads, rejoice! StockX, the sneaker-reselling digital marketplace led by data expert Josh Luber, will soon become the first company of its kind with a billion-dollar valuation, thanks to a new round of venture funding.  StockX — a platform where collectible and limited-edition sneakers are bought and exchanged through real-time bidding — is an evolution of Campless, Luber’s site that collected data on rare sneakers. In an interview with The New York Times, Luber said that StockX pulls in around $2 million in gross sales every day. (Watch Luber’s TED Talk.)

A move to protect iconic African-American photo archives. Investment expert Mellody Hobson and her husband, filmmaker George Lucas, filed a motion to acquire the rich photo archives of iconic African-American lifestyle magazines Ebony and Jet. The archives are owned by the recently bankrupt Johnson Publishing Company; Hobson and Lucas intend to gain control over them through their company, Capital Holdings V. The collections include over 5 million photos of notable events and people in African American history, particularly during the Civil Rights Movement. In a statement, Capital Holdings V said: “The Johnson Publishing archives are an essential part of American history and have been critical in telling the extraordinary stories of African-American culture for decades. We want to be sure the archives are protected for generations to come.” (Watch Hobson’s TED Talk.)

10 TED speakers chosen for the TIME100. TIME’s annual round-up of the 100 most influential people in the world includes climate activist Greta Thunberg, primatologist and environmentalist Jane Goodall, astrophysicist Sheperd Doeleman and educational entrepreneur Fred Swaniker — also Nancy Pelosi, the Pope, Leana Wen, Michelle Obama, Gayle King (who interviewed Serena Williams and now co-hosts CBS This Morning, home to a TED segment), and Jeanne Gang. Thunberg was honored for her work igniting climate change activism among teenagers across the world; Goodall for her extraordinary life work of research into the natural world and her steadfast environmentalism; Doeleman for his contribution to the Harvard team of astronomers who took the first photo of a black hole; and Swaniker for the work he’s done to educate and cultivate the next generation of African leaders. Bonus: TIME100 luminaries are introduced in short, sharp essays, and this year many of them came from TEDsters including JR, Shonda Rhimes, Bill Gates, Jennifer Doudna, Dolores Huerta, Hans Ulrich Obrist, Tarana Burke, Kai-Fu Lee, Ian Bremmer, Stacey Abrams, Madeleine Albright, Anna Deavere Smith and Margarethe Vestager. (Watch Thunberg’s, Goodall’s, Doeleman’s, Pelosi’s, Pope Francis’, Wen’s, Obama’s, King’s, Gang’s and Swaniker’s TED Talks.)

Meet Sports Illustrated’s first hijab-wearing model. Model and activist Halima Aden will be the first hijab-wearing model featured in Sports Illustrated’s annual swimsuit issue, debuting May 8. Aden will wear two custom burkinis, modestly designed swimsuits. “Being in Sports Illustrated is so much bigger than me,” Aden said in a statement, “It’s sending a message to my community and the world that women of all different backgrounds, looks, upbringings can stand together and be celebrated.” (Watch Aden’s TED Talk.)

Scotland post-surgical deaths drop by a third, and checklists are to thank. A study indicated a 37 percent decrease in post-surgical deaths in Scotland since 2008, which it attributed to the implementation of a safety checklist. The 19-item list created by the World Health Organization is supposed to encourage teamwork and communication during operations. The death rate fell to 0.46 per 100 procedures between 2000 and 2014, analysis of 6.8 million operations showed. Dr. Atul Gawande, who introduced the checklist and co-authored the study, published in the British Journal of Surgery, said to the BBC: “Scotland’s health system is to be congratulated for a multi-year effort that has produced some of the largest population-wide reductions in surgical deaths ever documented.” (Watch Gawande’s TED Talk.) — BG

And finally … After the actor Luke Perry died unexpectedly of a stroke in February, he was buried according to his wishes: on his Tennessee family farm, wearing a suit embedded with spores that will help his body decompose naturally and return to the earth. His Infinity Burial Suit was made by Coeio, led by designer, artist and TED Fellow Jae Rhim Lee. Back in 2011, Lee demo’ed the mushroom burial suit onstage at TEDGlobal; now she’s focused on testing and creating suits for more people. On April 13, Lee spoke at Perry’s memorial service, held at Warner Bros. Studios in Burbank; Perry’s daughter revealed his story in a thoughtful instagram post this past weekend. (Watch Lee’s TED Talk.) — EM

CryptogramScience Fiction Writers Helping Imagine Future Threats

The French army is going to put together a team of science fiction writers to help imagine future threats.

Leaving aside the question of whether science fiction writers are better or worse at envisioning nonfictional futures, this isn't new. The US Department of Homeland Security did the same thing over a decade ago, and I wrote about it back then:

A couple of years ago, the Department of Homeland Security hired a bunch of science fiction writers to come in for a day and think of ways terrorists could attack America. If our inability to prevent 9/11 marked a failure of imagination, as some said at the time, then who better than science fiction writers to inject a little imagination into counterterrorism planning?

I discounted the exercise at the time, calling it "embarrassing." I never thought that 9/11 was a failure of imagination. I thought, and still think, that 9/11 was primarily a confluence of three things: the dual failure of centralized coordination and local control within the FBI, and some lucky breaks on the part of the attackers. More imagination leads to more movie-plot threats -- which contributes to overall fear and overestimation of the risks. And that doesn't help keep us safe at all.

Science fiction writers are creative, and creativity helps in any future scenario brainstorming. But please, keep the people who actually know science and technology in charge.

Last month, at the 2009 Homeland Security Science & Technology Stakeholders Conference in Washington D.C., science fiction writers helped the attendees think differently about security. This seems like a far better use of their talents than imagining some of the zillions of ways terrorists can attack America.

Worse Than FailureCodeSOD: Break my Validation

Linda inherited an inner-platform front-end framework. It was the kind of UI framework with an average file size of 1,000 lines of code, and an average test coverage of 0%.

Like most UI frameworks, it had a system for doing client side validation. Like most inner-platform UI frameworks, the validation system was fragile, confusing, and impossible to understand.

This code illustrates some of the problems:

/**
 * Modify a validator key, e.g change minValue or disable required
 *
 * @param fieldName
 * @param validatorKey - of the specific validator
 * @param key - the key to change
 * @param value - the value to set
 */
modifyValidatorValue: function (fieldName, validatorKey, key, value) {

	if (!helper.isNullOrUndefined(fieldName)) {
		// Iterate over fields
		for (var i in this.fields) {
			if (this.fields.hasOwnProperty(i)) {
				var field = this.fields[i];
				if (field.name === fieldName) {
					if (!helper.isNullOrUndefined(validatorKey)) {
						if (field.hasOwnProperty('validators')) {
							// Iterate over validators
							for (var j in field.validators) {
								if (field.validators.hasOwnProperty(j)) {
									var validator = field.validators[j];
									if (validator.key === validatorKey) {
										if (!helper.isNullOrUndefined(key) && !helper.isNullOrUndefined(value)) {
											if (validator.hasOwnProperty(key)) {
												validator[key] = value;
											}
										}
										break;
									}
								}
							}
						}
					}
					break;
				}
			}
		}
	}

}

What this code needs to do is find the field for a given name, check the list of validators for that field, and update a value on that validator.

Normally, in JavaScript, you’d do this by using an object/dictionary and accessing things directly by their keys. This, instead, iterates across all the fields on the object and all the validators on that field.

It’s smart code, though, as the developer knew that once they found the fields in question, they could exit the loops, so they added a few breaks to exit. I think those breaks are in the right place. To be sure, I’d need to count curly braces, and I’m worried that I don’t have enough fingers and toes to count them all in this case.

According to the git log, this code was added in exactly this form, and hasn’t been touched since.


,

Cory DoctorowPodcast: Adversarial Interoperability is Judo for Network Effects

In my latest podcast (MP3), I read my essay SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects, published last week on EFF’s Deeplinks; it’s a further exploration of the idea of “adversarial interoperability” and the role it has played in fighting monopolies and preserving competition, and how we could use it to restore competition today.

In tech, “network effects” can be a powerful force to maintain market dominance: if everyone is using Facebook, then your Facebook replacement doesn’t just have to be better than Facebook, it has to be so much better than Facebook that it’s worth using, even though all the people you want to talk to are still on Facebook. That’s a tall order.

Adversarial interoperability is judo for network effects, using incumbents’ dominance against them. To see how that works, let’s look at a historical example of adversarial interoperability role in helping to unseat a monopolist’s dominance.

The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn’t open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn’t play nice with Macs.

MP3

Krebs on SecurityWhat You Should Know About the Equifax Data Breach Settlement

Big-three credit bureau Equifax has reportedly agreed to pay at least $650 million to settle lawsuits stemming from a 2017 breach that let intruders steal personal and financial data on roughly 148 million Americans. Here’s a brief primer that attempts to break down what this settlement means for you, and what it says about the value of your identity.

Q: What happened?

A: If the terms of the settlement are approved by a court, the Federal Trade Commission says Equifax will be required to spend up to $425 million helping consumers who can demonstrate they were financially harmed by the breach. The company also will provide up to 10 years of free credit monitoring to those who had their data exposed.

Q: What about the rest of the money in the settlement?

A: An as-yet undisclosed amount will go to pay lawyers’ fees for the plaintiffs.

Q: $650 million seems like a lot. Is that some kind of record?

A: If not, it’s pretty close. The New York Times reported earlier today that it was thought to be the largest settlement ever paid by a company over a data breach, but that statement doesn’t appear anywhere in their current story.

Q: Hang on…148 million affected consumers…out of that $425 million pot that comes to just $2.87 per victim, right?

A: That’s one way of looking at it. But as always, the devil is in the details. You won’t see a penny or any other benefit unless you do something about it, and how much you end up costing the company (within certain limits) is up to you.

The Times reports that the proposed settlement assumes that only around seven million people will sign up for their credit monitoring offers. “If more do, Equifax’s costs for providing it could rise meaningfully,” the story observes.

Q: Okay. What can I do?

A: You can visit www.equifaxbreachsettlement.com, although none of this will be official or on offer until a court approves the settlement.

Q: Uh, that doesn’t look like Equifax’s site…

A: Good eyes! It’s not. It’s run by a third party. But we should probably just be grateful for that; given Equifax’s total dumpster fire of a public response to the breach, the company has shown itself incapable of operating (let alone securing) a properly functioning Web site.

Q: What can I get out of this?

A: In a nutshell, affected consumers are eligible to apply for one or more remedies, including:

Free credit monitoring: At least three years of credit monitoring via all three major bureaus simultaneously, including Equifax, Experian and Trans Union. The settlement also envisions up to six more years of single bureau monitoring through Experian. Or, if you don’t want to take advantage of the credit monitoring offers, you can opt instead for a $125 cash payment. You can’t get both.

Reimbursement: …For the time you spent remedying identity theft or misuse of your personal information caused by the breach, or purchasing credit monitoring or credit reports. This is capped at 20 total hours at $25 per hour ($500). Total cash reimbursement payment will not exceed $20,000 per consumer.

Help with ongoing identity theft issues: Up to seven years of “free assisted identity restoration services.” Again, the existing breach settlement page is light on specifics there.

Q: Does this cover my kids/dependents, too?

A: The FTC says if you were a minor in May 2017 (when Equifax first learned of the breach), you are eligible for a total of 18 years of free credit monitoring.

Q: How do I take advantage of any of these?

A: You can’t yet. The settlement has to be approved first. The settlement Web site says to check back again later. In addition to checking the breach settlement site periodically, consumers can sign up with the FTC to receive email updates about this settlement.

Update: The eligibility site is now active, at this link.

The settlement site said consumers also can call 1-833-759-2982 for more information. Press #2 on your phone’s keypad if you want to skip the 1-minute preamble and get straight into the queue to speak with a real person.

KrebsOnSecurity dialed in to ask for more details on the “free assisted identity restoration services,” and the person who took my call said they’d need some basic information about me in order to proceed: my name, address and phone number. I gave him a number and a name, and after checking with someone he came back and said the restoration services would be offered by Equifax, but confirmed that affected consumers would still have to apply for it.

He added that the Equifaxbreachsettlement.com site will soon include a feature that lets visitors check to see if they’re eligible, but also confirmed that just checking eligibility won’t entitle one to any of the above benefits: Consumers will still need to file a claim through the site (when it’s available to do so).

ANALYSIS

We’ll see how this unfolds, but I’ll be amazed if anything related to taking advantage of this settlement is painless. I still can’t even get a free copy of my credit report from Equifax, as I’m entitled to under the law for free each year. I’ve even requested a copy by mail, according to their instructions. So far nothing.

But let’s say for the sake of argument that our questioner is basically right — that this settlement breaks down to about $3 worth of flesh extracted from Equifax for each affected person. The thing is, this figure probably is less than what Equifax makes selling your credit history to potential creditors each year.

In a 2017 story about the Equifax breach, I quoted financial fraud expert Avivah Litan saying the credit bureaus make about $1 every time they sell your credit file to a potential creditor (or identity thief posing as you). According to recent stats from the New York Federal Reserve, there were around 145 million hard credit pulls in the fourth quarter of 2018 (it’s not known how many of those were legitimate or desired).

But there is something you can do to stop Equifax and the other bureaus from profiting this way: Freeze your credit files with them.

A security freeze essentially blocks any potential creditors from being able to view or “pull” your credit file, unless you affirmatively unfreeze or thaw your file beforehand. With a freeze in place on your credit file, ID thieves can apply for credit in your name all they want, but they will not succeed in getting new lines of credit in your name because few if any creditors will extend that credit without first being able to gauge how risky it is to loan to you. And it’s now free for all Americans.

This post explains in detail what’s involved in freezing your files; how to place, thaw or remove a freeze; the limitations of a freeze and potential side effects; and alternatives to freezes.

What’s wrong with just using credit monitoring, you might ask? These services do not prevent thieves from using your identity to open new lines of credit, and from damaging your good name for years to come in the process. The most you can hope for is that credit monitoring services will alert you soon after an ID thief does steal your identity.

If past experience is any teacher, anyone with a freeze on their credit file will need to briefly thaw their file at Equifax before successfully signing up for the service when it’s offered. Since a law mandating free freezes across the land went into effect, all three bureaus have made it significantly easier to place and lift security freezes.

Probably too easy, in fact. Especially for people who had freezes in place before Equifax revamped its freeze portal. Those folks were issued a numeric PIN to lift, thaw or remove a freeze, but Equifax no longer lets those users do any of those things online with just the PIN.

These days, that PIN doesn’t play a role in any freeze or thaw process. To create an account at the MyEquifax portal, one need only supply name, address, Social Security number, date of birth, any phone number  (all data points exposed in the Equifax breach, and in any case widely available for sale in the cybercrime underground) and answer 4 multiple-guess questions whose answers are often available in public records or on social media.

And so this is yet another reason why you should freeze your credit: If you don’t sign up as you at MyEquifax, someone else might do it for you.

What else can you do in the meantime? Be wary of any phone calls or emails you didn’t sign up for that invoke this data breach settlement and ask you to provide personal and/or financial information.

And if you haven’t done so lately, go get a free copy of your credit report from annualcreditreport.com; by law all Americans are entitled to a free report from each of the major bureaus annually. You can opt for one report, or all three at once. Either way, make sure to read the report(s) closely and dispute anything that looks amiss.

It has long been my opinion that the big three bureaus are massively stifling innovation and offering consumers so little choice or say in the bargain that’s being made on the backs of their hard work, integrity and honesty. The real question is, if someone or something eventually serves to dis-intermediate the big three and throw the doors wide open to competition, what would the net effect for consumers?

Obviously, there is no way to know for sure, but a company that truly offered to pay consumers anywhere near what their data is actually worth would probably wipe these digital dinosaurs from the face of the earth.

That is, if the banks could get on board. After all, the banks and their various fingers are what drive the credit industry. And these giants don’t move very nimbly. They’re massively hard to turn on the simplest changes. And they’re not known for quickly warming to an entirely new model of doing business (i.e. huge cost investments).

My hometown Sen. Mark Warner (D-Va.) seems to suggest the $650 million settlement was about half what it should be.

“Americans don’t choose to have companies like Equifax collecting their data – by the nature of their business models, credit bureaus collect your personal information whether you want them to or not. In light of that, the penalties for failing to secure that data should be appropriately steep. While I’m happy to see that customers who have been harmed as a result of Equifax’s shoddy cybersecurity practices will see some compensation, we need structural reforms and increased oversight of credit reporting agencies in order to make sure that this never happens again.”

Sen. Warner sponsored a bill along with Sen. Elizabeth Warren (D-Ma.) called “The Data Breach Prevention and Compensation Act,” which calls for “robust compensation to consumers for stolen data; mandatory penalties on credit reporting agencies (CRAs) for data breaches; and giving the FTC more direct supervisory authority over data security at CRAs.”

“Had the bill been in effect prior to the 2017 Equifax breach, the company would have had to pay at least $1.5 billion for their failure to protect Americans’ personal information,” Warner’s statement concludes.

Update, 4:44 pm: Added statement from Sen. Warner.

CryptogramHackers Expose Russian FSB Cyberattack Projects

More nation-state activity in cyberspace, this time from Russia:

Per the different reports in Russian media, the files indicate that SyTech had worked on a multitude of projects since 2009 for FSB unit 71330 and for fellow contractor Quantum. Projects include:

  • Nautilus -- a project for collecting data about social media users (such as Facebook, MySpace, and LinkedIn).

  • Nautilus-S -- a project for deanonymizing Tor traffic with the help of rogue Tor servers.

  • Reward -- a project to covertly penetrate P2P networks, like the one used for torrents.

  • Mentor -- a project to monitor and search email communications on the servers of Russian companies.

  • Hope -- a project to investigate the topology of the Russian internet and how it connects to other countries' network.

  • Tax-3 -- a project for the creation of a closed intranet to store the information of highly-sensitive state figures, judges, and local administration officials, separate from the rest of the state's IT networks.

BBC Russia, who received the full trove of documents, claims there were other older projects for researching other network protocols such as Jabber (instant messaging), ED2K (eDonkey), and OpenFT (enterprise file transfer).

Other files posted on the Digital Revolution Twitter account claimed that the FSB was also tracking students and pensioners.

Worse Than FailureAn Indispensible Guru


Business Intelligence is the oxymoron that makes modern capitalism possible. In order for a company the size of a Fortune 500 to operate, key people have to know key numbers: how the finances are doing, what sales looks like, whether they're trending on target to meet their business goals or above or below that mystical number.

Once upon a time, Initech had a single person in charge of their Business Intelligence reports. When that person left for greener pastures, the company had a problem. They had no idea how he'd done what he did, just that he'd gotten numbers to people who'd needed them on time every month. There was no documentation about how he'd generated the numbers, nothing to pass on to his successor. They were entirely in the dark.

Recognizing the weakness of having a single point of failure, they set up a small team to create and manage the BI reporting duties and to provide continuity in the event that somebody else were to leave. This new team consisted of four people: Charles, a senior engineer; Liam, his junior sidekick; and two business folks who could provide context around what numbers were needed where and when.

Charles knew Excel. Intimately. Charles could make Excel do frankly astonishing things. Our submitter has worked in IT for three decades, and yet has never seen the like: spreadsheets so chock-full with array formulae, vlookups, hlookups, database functions, macros, and all manner of cascading sheets that they were virtually unreadable. Granted, Charles also had Microsoft Access. However, to Charles, the only thing Access was useful for was the initial downloading of all the raw data from the IBM AS/400 mainframe. Everything else was done in Excel.

Nobody doubted the accuracy of Charles' reports. However, actually running a report involved getting Excel primed and ready to go, hitting the "manual recalculate" button, then sitting back and waiting 45 minutes for the various formulae and macros to do all the things they did. On a good day, Charles could run five, maybe six reports. On a bad day? Three, at best.

By contrast, Liam was very much the "junior" role. He was younger, and did not have the experience that Charles did. That said, Liam was a smart cookie. He took one look at the spreadsheet monstrosity and knew it was a sledgehammer to crack a walnut. Unfortunately, he was the junior member of the engineering half of the team. His objections were taken as evidence of his inexperience, not his intelligence, and his suggestions were generally overlooked.

Eventually, Charles also left for bigger and brighter things, and Liam inherited all of his reports. Almost before the door had stopped swinging, Liam solicited our submitter's assistance in recreating just one of Charles' reports in Access. This took a combined four days; it mostly consisted of the submitter asking "So, Sheet 1, cell A1 ... where does that number come from?", and Liam pointing out the six other sheets they needed to pivot, fold, spindle, and mutilate in order to calculate the number. "Right, so, Sheet 1, cell A2 ... where does that one come from?" ...

Finally, it was done, and the replacement was ready to test. They agreed to run the existing report alongside the new one, so they could determine that the new reports were producing the same output as the old ones. Liam pressed "manual recalculate" while our submitter did the honors of running the new Access report. Thirty seconds later, the Access report gave up and spat out numbers.

"Damn," our submitter muttered. "Something's wrong, it must have died or aborted or something."

"I dunno," replied Liam, "those numbers do look kinda right."

Forty minutes later, when Excel finally finished running its version, lo and behold the outputs were identical. The new report was simply three orders of magnitude faster than the old one.

Enthused by this success, they not only converted all the other reports to run in Access, but also expanded them to run Region- and Area- level variants, essentially running the report about 54 times in the same time it took the original report to run once. They also set up an automatic distribution process so that the reports were emailed out to the appropriate department heads and sales managers. Management was happy; business was happy; developers were happy.

"Why didn't we do this sooner?" was the constant refrain from all involved.

Liam was able to give our submitter the real skinny: "Charles used the long run times to prove how complex the reports were, and therefore, how irreplaceable he was. 'Job security,' he used to call it."

To this day, Charles' LinkedIn profile shows that he was basically running Initech. Liam's has a little more humility about the whole matter. Which just goes to show you shouldn't undersell your achievements in your resume. On paper, Charles still looks like a genius who single-handedly solved all the BI issues in the whole company.


,

TEDGetting ready for TEDSummit 2019: Photo gallery

TEDSummit banners are hung at the entrance of the Edinburgh Convention Centre, our home for the week. (Photo: Bret Hartman / TED)

TEDSummit 2019 officially kicks off today! Members of the TED community from 84 countries — TEDx’ers, TED Translators, TED Fellows, TED-Ed Educators, past speakers and more — have gathered in Edinburgh, Scotland to dream up what’s next for TED. Over the next week, the community will share adventures around the city, more than 100 Discovery Sessions and, of course, seven sessions of TED Talks.

Below, check out some photo highlights from the lead-up to TEDSummit and pre-conference activities. (And view our full photostream here.)

It takes a small (and mighty) army to get the theater ready for TED Talks.

(Photo: Bret Hartman / TED)

(Photo: Ryan Lash / TED)

(Photo: Bret Hartman / TED)

TED Translators get the week started with a trip to Edinburgh Castle, complete with high tea in the Queen Anne Tea Room, and a welcome reception.

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

A bit of Scottish rain couldn’t stop the TED Fellows from enjoying a hike up Arthur’s Seat. Weather wasn’t a problem at a welcome dinner.

(Photo: Ryan Lash / TED)

(Photo: Ryan Lash / TED)

(Photo: Ryan Lash / TED)

TEDx’ers kick off the week with workshops, panel discussions and a welcome reception.

(Photo: Dian Lofton / TED)

(Photo: Dian Lofton / TED)

(Photo: Ryan Lash / TED)

It’s all sun and blue skies for the speaker community’s trip to Edinburgh Castle and reception at the Playfair Library.

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

(Photo: Bret Hartman / TED)

Cheers to an amazing week ahead!

(Photo: Ryan Lash / TED)

,

Cory DoctorowAppearance on the Jim Rutt Podcast

Jim Rutt — former chairman of the Santa Fe Institute and ex-Network Solutions CEO — just launched his new podcast, and included me in the first season! (MP3) It was a characteristically wide-ranging, interdisciplinary kind of interview, covering competition and adversarial interoperability, technological self-determination and human rights, conspiracy theories and corruption. There’s a full transcript here.

,

Cory DoctorowPodcast: Occupy Gotham

In my latest podcast (MP3), I read my essay Occupy Gotham, published in Detective Comics: 80 Years of Batman, commemorating the 1000th issue of Batman comics. It’s an essay about the serious hard problem of trusting billionaires to solve your problems, given the likelihood that billionaires are the cause of your problems.

A thousand issues have gone by, nearly 80 years have passed, and Batman still hasn’t cleaned up Gotham. If the formal definition of insanity is trying the same thing and expecting a different outcome, then Bruce Wayne belongs in a group therapy session in Arkham Asylum. Seriously, get that guy some Cognitive Behavioral Therapy before he gets into some *serious* trouble.

As Upton Sinclair wrote in his limited run of *Batman: Class War*[1], “It’s impossible to get a man to understand something when his paycheck depends on his not understanding it.”

Gotham is a city riven by inequality. In 1939, that prospect had a very different valence than it has in 2018. Back in 1939, the wealth of the world’s elites had been seriously eroded, first by the Great War, then by the Great Crash and the interwar Great Depression, and what was left of those vast fortunes was being incinerated on the bonfire of WWII. Billionaire plutocrats were a curious relic of a nostalgic time before the intrinsic instability of extreme wealth inequality plunged the world into conflict.

MP3

,

Cory DoctorowI appeared on Nanowrimo’s awesome Write-Minded podcast to talk about Radicalized

It turned out really well!

Today’s dystopian fiction seems to be closer to reality than the dystopian fiction of the past. Brooke and Grant explore this new reality with Cory Doctorow, whose socially conscientious science fiction novels delve into topics of political consequence. From the ways in which anxieties fuel science fiction writers to how fiction has the power to change the way we think and operate in the world, today’s episode emphasizes the importance of dystopian fiction for its capacity to shed light on what is true, and what might happen, ideally, as Cory suggests, so that we might fix things before it’s too late.

,

Cory DoctorowWhere to catch me at San Diego Comic-Con!

I’m headed back to San Diego for Comic-Con next weekend, and you can catch me on Friday, Saturday and Sunday:

Friday, 5PM: Signing in AA04

Saturday, 5PM: Panel: Writing: Craft, Community, and Crossover (with James Killen, Seanan McGuire, Charlie Jane Anders, Annalee Newitz, and Sarah Gailey), Room 23ABC

Sunday, 10AM: Signing and giveaway for Radicalized, Tor Booth, #2701.

I hope to see you there!

,

Sam VargheseThe Rise and Fall of the Tamil Tigers is full of errors

How many mistakes should one accept in a book before it is pulled from sale? In the normal course, when a book is accepted for publication by a recognised publishing company, there are experienced editors who go through the text, correct it and ensure that there are no major bloopers.

Then there are fact-checkers who ensure that what is stated within the book is, at least, mostly aligned with public versions of events from reliable sources.

In the case of The Rise and Fall of the Tamil Tigers, a third-rate book that is being sold by some outlets online, neither of these exercises has been carried out. And it shows.

If the author, Damian Tangram, had voiced his views or even put the entire book online as a free offering, that would be fine. He is entitled to his opinion. But when he is trying to trick people into buying what is a very poor-quality book, then warnings are in order.

Here are just a few of the screw-ups in the first 14 pages (the book is 375 pages!):

In the foreword, the words “Civil War” are capitalised. This is incorrect and would be right only if the civil war were exclusive to Sri Lanka. This is not the case; there are numerous civil wars occurring around the world.

Next, the foreword claims the war started in 1985. This, again, is incorrect. It began in July 1983. The next claim is that this war “had its origins in the post-war political exploitation of socially divisive policies.” Really? Post-war means after the war – this conflict must be the first in the world to begin after it was over!

There is a further line indicating that the author does not know how to measure time: “After spanning three decades…” A decade is 10 years, three decades would be 30 years. The war lasted a little less than 26 years – July 23, 1983 to May 19, 2009.

Again, in the foreword, the author claims that the Liberation Tigers of Tamil Eelam “grew from being a small despot insurgency to the most dangerous and effective terrorist organizations the world has ever seen.” The LTTE was started by Velupillai Pirapaharan in the 1970s. By 1983, it was already a well-organised fighting force. Further, the English is all wonky here, the word should be “organization”, not the plural “organizations”.

And this is just the first paragraph of the book!

The second paragraph of the foreword claims about the year 2006: “Just when things could not be worse Sri Lanka was plunged into all-out war.” The war started much earlier and was in a brief hiatus. The final effort to eliminate the LTTE began on April 25, 2006. And a comma would be handy there.

Then again, the book claims in the foreword that the only person who refused to compromise in the conflict had been Pirapaharan. This is incorrect as the government was also equally stubborn until 2002.

To go on, the foreword says the book gives “an example of how a terrorist organisation like the LTTE can proliferate and spread its murderous ambitions”. The book suffers from numerous generalisations of this kind, all of which are standout examples of malapropism. And one’s ambitions grow, one does not “spread ambitions”.

Again, and we are still in the foreword, the book says the LTTE “was a force that lasted for more than twenty-five years…” Given that it took shape in the 1970s, this is again incorrect.

Next, there is a section titled “About this Book”. Again, misplaced capitalisation of the word “Book”. The author says he visited Sri Lanka for the first time in 1989 soon after he “met and married wife….” Great use of butler English, that. Additionally, he could not have married his wife; the woman in question became his wife only after he married her.

That year, he claims the “most frightening organization” was the JVP or Janata Vimukti Peramuna or People’s Liberation Front. Two years later, when he returned for a visit, the JVP had been defeated but “the enemy to peace was the LTTE”. This is incorrect as the LTTE did not offer any let-up while the JVP was engaging the Sri Lankan army.

Of the Tigers he says, “the power that they had acquired over those short years had turned them into a mythical unstoppable force.” This is incorrect; the Tigers became a force to be reckoned with many years earlier. They did not undergo any major evolution between 1989 and 1991.

The author’s only connection to Sri Lanka is through marrying a Sri Lankan woman. This, plus his visits, he claims give him a “close connection” to the island!

So we go on: “I returned to Sri Lankan several times…” The word is Lanka, not Lankan. More proof of a lack of editing, if any is needed by now.

“Lives were being lost; freedoms restricted and the economy being crushed under a financial burden.” The use of that semi-colon illustrates Tangram’s ignorance of English. Factually, it is all a statement of the bleeding obvious, as these consequences of the war had set in much earlier.

The author claims that one generation started the war, a second continued to fight, and a third was about to grow up and be thrown into the conflict. How three generations can come and go in the space of 26 years is a mystery, and more evidence that this man just flings words about and hopes they make sense.

More in this same section: “To know Sri Lanka without war was once an impossible dream…” Rubbish; I lived in Sri Lanka from 1957 to 1972 and knew peace most of the time.

Ending this section is another screw-up: “I returned to Sri Lanka in 2012, after the war had ended, to witness the one thing I had not seen in over 25 years: Peace.” Leaving aside the wrong capitalisation of the word “peace”, since the author’s first visit was in 1989, how does 2012 make it “over 25 years”? By any calculation, that comes to 23 years. This is a ruse used throughout the book to give the impression that the author has a long connection to Sri Lanka when in reality he is just an opportunist trying to turn some bogus observations about a conflict he knows nothing about into a cash cow.

And so far I have covered hardly three full pages!!!

Let’s have a brief look at Ch-1 (one presumes that means Chapter 1), which is titled “Understanding Sri Lanka” with the sub-heading “Introduction Understanding Sri Lanka: The impossible puzzle”. (If the puzzle is impossible as claimed, how can the author claim to explain it?)

So we begin: “…there is very little information being proliferated into the general media about the nation of Sri Lanka.” The author obviously does not own a dictionary and is unaware of how the word “proliferated” should be used.

There are several strange conglomerations of words which mean nothing; for example, take this: “Without referring to a map most people would struggle to name any other city than Colombo. Even the name of the island may reflect some kind of echo of when it changed from being called Ceylon to when it became Sri Lanka.” Apart from all the missing punctuation and the jumbled word order, what the hell does this mean? Echo?

On the next page, the book says: “At the bottom corner of India is the small teardrop-shaped island of Sri Lankan.” That sentence could have done without the last “n”. Once again, no editor. Only Tangram the great.

The word Sinhalese is spelt that way; there is nobody who spells it “Singhalese”. But since the author is unable to read Sinhala, the local language, he makes errors of this kind over and over again. Again, common convention for the use of numbers in print dictates that one to nine be spelt out and any higher number be rendered as a figure. The author is blissfully unaware of this too.

The percentage of Sinhalese speakers is given as “about 70%” when the actual figure is 74.9%. And then, in another illustration of his sloppiness, the author writes: “The next largest groups are the Tamils who make up about 15% of the population.” The Tamils are not a single group, being made up of the plantation Tamils, brought in by the British from India to work the tea estates (4.2%), and the local Tamils (11.2%), who have been there much longer.

He then refers to a group whom he calls Burgers – which is something sold in a fast-food outlet. The Sri Lankan ethnic group is called Burghers, the product of inter-marriages between Sinhalese and Portuguese, British or Dutch invaders. There is also a reference to a group of indigenous people, whom the author calls “Vedthas.” Later, on the same page, he calls these people Veddhas. Nor is this the first sign that he could not be bothered to spell-check this bogus tome.

There’s more: the “Singhalese” (the author’s spelling) are claimed to be of “Arian” origin. The word is Aryan. Then there is a claim that the Veddhas are related to the “Australian Indigenous Aborigines”. One has yet to hear of any non-Indigenous Aborigines. Redundant words are one thing at which Tangram excels.

There is a reference to some king of Sri Lanka known as King Dutigama. The man’s name was Dutugemunu. But then, what’s the difference, eh? We might as well have called him Charlie Chaplin!

Referring to the religious groups in Sri Lanka, Tangram writes: “Hinduism also has a long history in Sri Lanka with Kovils…” Kovil is simply the Tamil word for a temple; in English the word is temples, unless one is writing in the vernacular. He claims Buddhists make up 80%; the correct figure is 70.2%.

Then, referring to the Bo tree under which Gautama Buddha is claimed to have found enlightenment, Tangram claims it is more than 2000 years old and the oldest cultivated tree alive today. He evidently does not know about the bristlecone pines, which date back more than 4700 years, or the redwoods that carbon dating has shown to be more than 3000 years old.

This brings me only to page 14, and I have already crossed 1500 words! The entire book would probably take me a week to cover. But this number of errors should serve to prove my point: this book should not be sold. It is a fraud on the public.