Planet Russell

Planet Debian: Andreas Metzler: exim update

Testing users might want to manually pull the latest (4.92.1-3) upload of Exim from sid instead of waiting for regular migration to testing. It fixes a nasty vulnerability.

Cryptogram: Friday Squid Blogging: Squid Perfume

It's not perfume for squids. Nor is it perfume made from squids. It's a perfume called Squid, "inspired by life in the sea."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: Iustin Pop: Nationalpark Bike Marathon 2019

This is a longer story… but I think it’s interesting, nonetheless.

The previous 6 months

Due to my long-running foot injury, my training was sporadic at the end of 2018, and by February I realised I would have to stop most if not all training in order to have a chance of recovery. I knew that meant no bike races this year, and I was fine with that. Well, I had to be.

The only compromise was that I wanted to do one race, the NBM short route, since that is really easy (in terms of both technique and endurance), and even untrained I should be fine.

So my fitness (well, CTL) went down and down and down, and my weight didn’t improve either.

As April and May went by, my foot was getting better, but training on the indoor trainer was still causing problems and moving my recovery backwards, so I took it easy.

By June things were looking better - I was even able to do a couple of slow runs! July started even better, but trainer sessions were still a no-go. Only in early August could I reliably do a short trainer session without issues. The good side was that since around June I could bike to work and back without problems, but that’s a short commute.

But I felt confident I could do ~50km on a bike with some uphill, so I registered for the race.

In August I could also restart trainer sessions, and to my pleasant surprise, even harder ones. So, I started preparing for the race in the last 2 weeks before it :( Well, better than nothing.

Overall, my CTL went from 62-65 in August 2018, to 6 (six!!) in early May, and started increasing in June, reaching a peak of 23 on the day before the race. That’s almost three times lower… so my expectations for the race were low. Especially as the longest ride I did in these 6 months was about one hour long, whereas the race would take double that.

The race week

Things were going quite well. I also started doing some of the Zwift Academy workouts, more precisely workouts 1 and 2, which are at low power, and everything went well.

On Wednesday, however, I did workout number 3, which has two “as hard as possible” intervals. Those are exactly the kind my foot can’t yet handle, so it caused some pain, and some concern about the race.

Then we went to Scuol, and I didn’t feel very well on Thursday while I was driving. I thought some exercise would help, so I went for a short run, which reminded me that I had made my foot worse the previous day, and left me even more concerned.

On Friday morning, instead of better, I felt terrible. All I wanted was to go back to bed and sleep the whole day, but I knew that would mean no race the next day. I thought maybe some light walking would be better for me than lying in bed… At least I didn’t have a runny nose or a cough, but this definitely felt like a cold.

We went up with the gondola, walked ~2km, got back down, and I was feeling at least no worse. All this time, I was overdressed and feeling cold, while everybody else was in t-shirts.

A bit of rest in the afternoon helped; I went and picked up my race number and felt better. After dinner, as I was preparing my stuff for the next morning, I started feeling a bit better about doing the race. “Did not start” was now out of the question, but whether it would be a DNF was not yet clear.

Race (day)

Thankfully the race doesn’t start early for this specific route, so the morning was relatively relaxed. But then of course I was a few minutes late, so I hurried on my bike to the train station, only to realise I was among the early arrivals. Load the bike, get on the bus (the train station in Scuol is off-line for repairs), a long bus ride to the start point, and then… 2 hours of waiting. And to think I thought I was 5 minutes late :)

I spent the two hours just reading email and browsing the internet (and posting a selfie on FB), and then finally it was on.

And I was surprised how “on” the race was from the first 100 meters. Despite repeated announcements in those two hours that the first 2-3 km do not matter since they’re through the S-chanf village, people started going very hard as soon as there was some space.

So I found myself going 40km/h (forty!!!) on a mountain bike on a relatively flat gravel road. This sounds unbelievable, right? But the data says:

  • race started at 1’660m altitude
  • after the first 4.4km, I was at 1’650m, with a total altitude gain of 37m (and thus a total descent of 47m); thus, not flat-flat, but not downhill
  • over this 4.4km, my average speed was 32.5km/h, and that includes starting from zero speed in the starting block (average speed for the first minute was 20km/h)

While 32.5km/h on an MTB sounds cool, the sad part was that I knew this was unsustainable, both from the pure speed point of view, and from the heart rate point of view. I was already at 148bpm after 2½ minutes, but then at minute 6 it went over 160bpm and stayed that way. That is above my LTHR (estimated by various Garmin devices), so I was dipping into reserves. VeloViewer estimates power output here was 300-370W in these initial minutes, which is again above my FTP, so…

But it was fun. Then at km 4.5 came a bit of a climb (800m long, 50m of altitude, ~6.3%), after which it became mostly flow on gravel. And for the next hour, until the single long climb (towards Guarda), it was the best ride I had this year, and one of the best segments in races in general. Yes, there are a few short climbs here and there (e.g. a 10% one over ~700m, another ~11% one over 300m or so), but in general it’s a slowly descending route from ~1700m altitude to almost 1400m (plus add in another ~120m gained), so ~420m of descent over ~22km. This means that, despite the short climbs, the average speed is still good - a bit more than 25km/h - which made this a very, very nice segment. No foot pain, no exertion, mean heart rate 152bpm, which is fine. Estimated power is a bit high (mean 231W, NP: 271W ← this is strange, too high); I’d really like to have a power meter on my MTB as well.

Then, after about an hour, the climb towards Guarda starts. It’s an easy climb for a fit person but, as I said, my fitness was worse this year, and my weight was not good either. Data for this segment:

  • it took me 33m:48s
  • 281m altitude gained
  • 4.7km length
  • mean HR: 145bpm
  • mean cadence: 75rpm

I remember stopping to drink once, and maybe another time to rest for about half a minute, but I’m not sure. In total I stopped for 33s during this half hour.

Then came a slow descent on nice roads towards the next small climb to Ftan, then another short climb (thankfully short, as I was pretty dead at this point) of 780m distance, taking 7m36s including an almost one-minute stop, then down, another tiny climb, then down to the finish.

Nearing the finish, and knowing that there’s a final climb after you descend into Scuol itself, I gathered all my reserves to do that climb standing. Alas, it was a bit longer than I thought; I think I managed to do 75-80% of it standing, but then sat down. Nevertheless, a good short climb:

  • 22m altitude over 245m distance, done in 1m02s
  • mean grade 8.8%, max grade 13.9%
  • mean HR 161bpm, max HR 167bpm which actually was my max for this race
  • mean speed 14.0km/h
  • estimated mean power 433W, NP: 499W; seems optimistic, but I’ll take it :)

Not bad, not bad. I was pretty happy about being able to push this hard, for an entire minute, at the end of the race. Yay for 1m power?

And obligatory picture, which also shows the grade pretty well:

Final climb! And beating my PR by ~9%

I don’t know how the photographer managed to do it, but having those other people in the picture makes it look much better :)

Comparison with last year

Let’s first look at official race results:

  • 2018: 2h11m37s
  • 2019: 2h22m13s

That’s 8% slower. Honestly, I thought I would do much worse, given my lack of training. Or does a 2.5× lower CTL really only result in an 8% time loss?

Honestly, I don’t think so. I think what saved me this year was that—since I couldn’t do bike rides—I did much more cross-training, in the form of core exercises. Nothing fancy, just planks, push-ups, yoga, etc., but it helped significantly. If my foot is fine and I can do both next year, I’ll be in a much better position.

And this is why the sub-title of this post is “Fitness has many meanings”. I really need to diversify my training in general, but I was thinking in a somewhat theoretical way about it; this race showed it quite practically.

If I look at Strava data, it gives an even clearer picture:

  • on the hour-long flat segment I was talking about, which I really loved, I got a PR, beating the previous year by 1 minute; Strava estimates 250W for this hour, which is what my FTP was last year;
  • on all the climbs, I was slower than last year, as expected, but on the longer climbs significantly so; and I was much slower than even in 2016, when I did the next-longer route.

And I just realised: of the 10½ minutes I took longer this year, 6½ minutes were lost on the Guarda climb :)

So yes, you can’t discount fitness, but leg fitness is not everything, and it seems Training Peaks can’t show overall fitness.

At least I did beat my PR on the finishing climb (1m12s vs. 1m19s last year), because I had left aside those final reserves for it.

Next steps

Assuming I’m successful at dealing with my foot issue, and that early next year I can restart consistent training, I’m not concerned. I need to put in regular sessions, and I also need to put in long sessions. The success story here is clear; it all depends on willpower.

Oh, and losing ~10kg of fat wouldn’t be bad, like at all.

Cory Doctorow: Talking RADICALIZED and MAKERS on Writers Voice

The Writers Voice podcast just published their interview with me about Radicalized; as a bonus, they include my decade-old interview about Makers in the recording!

MP3

Cryptogram: Massive iPhone Hack Targets Uyghurs

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google's Project Zero found a series of websites that had been using zero-day vulnerabilities to indiscriminately install malware on any iPhone that visited them. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google's announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

Cryptogram: The Doghouse: Crown Sterling

A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious -- and amusing -- examples of cryptographic "snake oil."

I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out.

Crown Sterling is complete and utter snake oil. The company sells "TIME AI," "the world's first dynamic 'non-factor' based quantum AI encryption software," "utilizing multi-dimensional encryption technology, including time, music's infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs." Those sentence fragments tick three of my snake-oil warning signs -- from 1999! -- right there: pseudo-math gobbledygook (warning sign #1), new mathematics (warning sign #2), and extreme cluelessness (warning sign #4).

More: "In March of 2019, Grant identified the first Infinite Prime Number prediction pattern, where the discovery was published on Cornell University's www.arXiv.org titled: 'Accurate and Infinite Prime Number Prediction from Novel Quasi-Prime Analytical Methodology.' The paper was co-authored by Physicist and Number Theorist Talal Ghannam PhD. The discovery challenges today's current encryption framework by enabling the accurate prediction of prime numbers." Note the attempt to leverage Cornell's reputation, even though the preprint server is not peer-reviewed and allows anyone to upload anything. (That should be another warning sign: undeserved appeals to authority.) PhD student Mark Carney took the time to refute it. Most of it is wrong, and what's right isn't new.

I first encountered the company earlier this year. In January, Tom Yemington from the company emailed me, asking to talk. "The founder and CEO, Robert Grant is a successful healthcare CEO and amateur mathematician that has discovered a method for cracking asymmetric encryption methods that are based on the difficulty of finding the prime factors of a large quasi-prime numbers. Thankfully the newly discovered math also provides us with much a stronger approach to encryption based on entangled-pairs of keys." Sounds like complete snake-oil, right? I responded as I usually do when companies contact me, which is to tell them that I'm too busy.

In April, a colleague at IBM suggested I talk with the company. I poked around at the website, and sent back: "That screams 'snake oil.' Bet you a gazillion dollars they have absolutely nothing of value -- and that none of their tech people have any cryptography expertise." But I thought this might be an amusing conversation to have. I wrote back to Yemington. I never heard back -- LinkedIn suggests he left in April -- and forgot about the company completely until it surfaced at Black Hat this year.

Robert Grant, president of Crown Sterling, gave a sponsored talk: "The 2019 Discovery of Quasi-Prime Numbers: What Does This Mean For Encryption?" I didn't see it, but it was widely criticized and heckled. Black Hat was so embarrassed that it removed the presentation from the conference website. (Parts of it remain on the Internet. Here's a short video from the company, if you want to laugh along with everyone else at terms like "infinite wave conjugations" and "quantum AI encryption." Or you can read the company's press release about what happened at Black Hat, or Grant's Twitter feed.)

Grant has no cryptographic credentials. His bio -- on the website of something called the "Resonance Science Foundation" -- is all over the place: "He holds several patents in the fields of photonics, electromagnetism, genetic combinatorics, DNA and phenotypic expression, and cybernetic implant technologies. Mr. Grant published and confirmed the existence of quasi-prime numbers (a new classification of prime numbers) and their infinite pattern inherent to icositetragonal geometry."

Grant's bio on the Crown Sterling website contains this sentence, absolutely beautiful in its nonsensical use of mathematical terms: "He has multiple publications in unified mathematics and physics related to his discoveries of quasi-prime numbers (a new classification for prime numbers), the world's first predictive algorithm determining infinite prime numbers, and a unification wave-based theory connecting and correlating fundamental mathematical constants such as Pi, Euler, Alpha, Gamma and Phi." (Quasi-primes are real, and they're not new. They're numbers with only large prime factors, like RSA moduli.)

Near as I can tell, Grant's coauthor is the mathematician of the company: "Talal Ghannam -- a physicist who has self-published a book called The Mystery of Numbers: Revealed through their Digital Root as well as a comic book called The Chronicles of Maroof the Knight: The Byzantine." Nothing about cryptography.

There seems to be another technical person. Ars Technica writes: "Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green -- a musician who was 'musical director for Davy Jones of The Monkees' -- was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare."

None of these people have demonstrated any cryptographic credentials. No papers, no research, no nothing. (And, no, self-publishing doesn't count.)

After the Black Hat talk, Grant -- and maybe some of those others -- sat down with Ars Technica and spun more snake oil. They claimed that the patterns they found in prime numbers allows them to break RSA. They're not publishing their results "because Crown Sterling's team felt it would be irresponsible to disclose discoveries that would break encryption." (Snake-oil warning sign #7: unsubstantiated claims.) They also claim to have "some very, very strong advisors to the company" who are "experts in the field of cryptography, truly experts." The only one they name is Larry Ponemon, who is a privacy researcher and not a cryptographer at all.

Enough of this. All of us can create ciphers that we cannot break ourselves, which means that amateur cryptographers regularly produce amateur cryptography. These guys are amateurs. Their math is amateurish. Their claims are nonsensical. Run away. Run, far, far, away.

But be careful how loudly you laugh when you do. Not only is the company ridiculous, it's litigious as well. It has sued ten unnamed "John Doe" defendants for booing the Black Hat talk. (It also sued Black Hat, which may have more merit. The company paid $115K to have its talk presented amongst actual peer-reviewed talks. For Black Hat to remove its nonsense may very well be a breach of contract.)

Maybe Crown Sterling can file a meritless lawsuit against me instead for this post. I'm sure it would think it'd result in all sorts of positive press coverage. (Although any press is good press, so maybe it's right.) But if I can prevent others from getting taken in by this stuff, it would be a good thing.

Planet Debian: Reproducible Builds: Reproducible Builds in August 2019

Welcome to the August 2019 report from the Reproducible Builds project!

In these monthly reports we outline the most important things that have happened in the world of Reproducible Builds and what we have been up to.

As a quick recap of our project, whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed to end users or systems as precompiled binaries. The motivation behind the reproducible builds effort is to ensure zero changes have been introduced during these compilation processes. This is achieved by promising identical results are always generated from a given source thus allowing multiple third-parties to come to a consensus on whether a build was changed or even compromised.
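As a minimal sketch of that consensus check (the artifact names here are hypothetical), two independent rebuilders can simply compare checksums of what they each produced from the same source:

```python
# Minimal sketch: two parties build the same source independently and compare
# artifact checksums; identical digests mean the build reproduced bit-for-bit.
import hashlib

def sha256sum(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical artifact names: a local build vs. a third party's rebuild.
if sha256sum("hello_2.10-2_amd64.deb") == sha256sum("hello_2.10-2_rebuild.deb"):
    print("reproducible: both builders agree")
else:
    print("builds differ: something nondeterministic crept in")
```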

In this month’s report, we cover:

  • Media coverage & events: Webmin, CCCamp, etc.
  • Distribution work: the first fully-reproducible package sets, an openSUSE update, etc.
  • Upstream news: libfaketime updates, gzip, ensuring good definitions, etc.
  • Software development: more work on diffoscope, new variations in our testing framework, etc.
  • Misc news: from our mailing list, etc.
  • Getting in touch: how to contribute, etc.

If you are interested in contributing to our project, please visit our Contribute page on our website.


Media coverage & events

A backdoor was found in Webmin, a popular web-based application used by sysadmins to remotely manage Unix-based systems. Whilst more details can be found on upstream’s dedicated exploit page, it appears that the build toolchain was compromised. Especially of note is that the exploit “did not show up in any Git diffs” and thus would not have been found via an audit of the source code. The backdoor would allow a remote attacker to execute arbitrary commands with superuser privileges on the machine running Webmin. Once a machine is compromised, an attacker could then use it to launch attacks on other systems managed through Webmin or indeed any other connected system. Techniques such as reproducible builds can help detect exactly these kinds of attacks, which can lie dormant for years. (LWN comments)

In a talk titled There and Back Again, Reproducibly! Holger Levsen and Vagrant Cascadian presented at the 2019 edition of the Linux Developer Conference in São Paulo, Brazil on Reproducible Builds.

LWN posted and hosted an interesting summary and discussion on Hardening the file utility for Debian. In July, Chris Lamb had cross-posted his reply to the “Re: file(1) now with seccomp support enabled” thread, originally started on the debian-devel mailing list. In this post, Chris refers to our strip-nondeterminism tool not being able to accommodate the additional security hardening in file(1), and the changes made to the tool in order to fix this issue, which was causing a huge number of regressions in our testing framework.

The Chaos Communication Camp — an international, five-day open-air event for hackers that provides a relaxed atmosphere for free exchange of technical, social, and political ideas — hosted its 2019 edition, where there were many discussions and meet-ups at least partly related to Reproducible Builds. This included the titular Reproducible Builds Meetup session, which was attended by around twenty-five people, half of whom were new to the project, as well as a session dedicated to all Arch Linux related issues.


Distribution work

In Debian, the first “package sets” — ie. defined subsets of the entire archive — have become 100% reproducible, including the so-called “essential” set for the bullseye distribution on the amd64 and armhf architectures. This is thanks to work by Chris Lamb on bash, readline and other low-level libraries and tools. Perl still has issues on i386 and arm64, however.

Dmitry Shachnev filed a bug report against the debhelper utility that speaks to issues around using the date from the debian/changelog file as the source for the SOURCE_DATE_EPOCH environment variable, as this can lead to non-intuitive results when a package is automatically rebuilt via so-called binary (NB. not “source”) NMUs. A related issue was later filed against qtbase5-dev by Helmut Grohne, as this exact issue led to a problem with co-installability across architectures.
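For readers unfamiliar with the mechanism, here is a minimal sketch of how a changelog date becomes the SOURCE_DATE_EPOCH value; the trailer line below is hypothetical, and debhelper performs the real parsing with considerably more care:

```python
# Minimal sketch: derive SOURCE_DATE_EPOCH from the date in the most recent
# debian/changelog entry (hypothetical trailer line below).
import email.utils

trailer = " -- Jane Doe <jane@example.org>  Tue, 03 Sep 2019 10:20:30 +0200"
date_text = trailer.split("  ")[-1]                # text after the double space
dt = email.utils.parsedate_to_datetime(date_text)  # RFC 2822 date parsing
print(f"SOURCE_DATE_EPOCH={int(dt.timestamp())}")  # 1567498830
```

The point of the bug report is that after a binary NMU the package is rebuilt but this changelog date stays unchanged, so the variable no longer reflects the actual rebuild.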

Lastly, 115 reviews of Debian packages were added, 45 were updated and 244 were removed this month, appreciably adding to our knowledge about identified issues. Many issue types were updated by Chris Lamb, including embeds_build_data_via_node_preamble, embeds_build_data_via_node_rollup, captures_build_path_in_beam_cma_cmt_files, captures_varying_number_of_build_path_directory_components (discussed later), timezone_specific_files_due_to_haskell_devscripts, etc.

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. New issues were found from enabling Link Time Optimization (LTO) in this distribution’s Tumbleweed branch. This affected, for example, nvme-cli as well as perl-XML-Parser and pcc with packaging issues.


Upstream news


Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. In August we wrote a large number of such patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.
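As a hedged usage sketch (the .deb file names are hypothetical), diffoscope can be pointed at two builds of the same package; it exits non-zero when it finds differences and can write out a human-readable report:

```python
# Hedged sketch: run diffoscope over two builds of the same package and save a
# text report. Assumes diffoscope is installed; the file names are hypothetical.
import subprocess

result = subprocess.run(
    ["diffoscope", "--text", "report.txt", "build-a.deb", "build-b.deb"]
)
# diffoscope exits 0 when the inputs are identical and 1 when they differ.
print("identical" if result.returncode == 0 else "differences found; see report.txt")
```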

This month, Chris Lamb made the following changes:

  • Improvements:
    • Don’t fall back to an unhelpful raw hexdump when, for example, readelf(1) reports a minor issue in a section in an ELF binary. For example, when the .frames section is of the NOBITS type, its contents are apparently “unreliable” and thus readelf(1) returns 1. (#58, #931962)
    • Include either standard error or standard output (not just the latter) when an external command fails. []
  • Bug fixes:
    • Skip calls to unsquashfs when we are neither root nor running under fakeroot. (#63)
    • Ensure that all of our artificially-created subprocess.CalledProcessError instances have output instances that are bytes objects, not str. []
    • Correct a reference to parser.diff; diff in this context is a Python function in the module. []
    • Avoid a possible traceback caused by a str/bytes type confusion when handling the output of failing external commands. []
  • Testsuite improvements:

    • Test for 4.4 in the output of squashfs -version, even though the Debian package version is 1:4.3+git190823-1. []
    • Apply a patch from László Böszörményi to update the squashfs test output and additionally bump the required version for the test itself. (#62 & #935684)
    • Add the wabt Debian package to the test-dependencies so that we run the WebAssembly tests on our continuous integration platform, etc. []
  • Improve debugging:
    • Add the containing module name to the (eg.) “Using StaticLibFile for ...” debugging messages. []
    • Strip off trailing “original size modulo 2^32 671” (etc.) from gzip compressed data as this is just a symptom of the contents itself changing that will be reflected elsewhere. (#61)
    • Avoid a lack of space between “... with return code 1” and “Standard output”. []
    • Improve debugging output when instantiating our Comparator object types. []
    • Add a literal “eg.” to the comment on stripping “original size modulo...” text to emphasise that the actual numbers are not fixed. []
  • Internal code improvements:
    • No need to parse the section group from the class name; we can pass it via the type() built-in’s keyword arguments. []
    • Add support to Difference.from_command_exc and friends to ignore specific returncodes from the called program and treat them as “no” difference. []
    • Simplify parsing of optional command_args argument to Difference.from_command_exc. []
    • Set long_description_content_type to text/x-rst to appease the PyPI.org linter. []
    • Reposition a comment regarding an exception within the indented block to match Python code convention. []

In addition, Mattia Rizzolo made the following changes:

  • Now that we install wabt, expect its tools to be available. []
  • Bump the Debian backport check. []

Lastly, Vagrant Cascadian updated diffoscope to versions 120, 121 and 122 in the GNU Guix distribution.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Chris Lamb made the following changes.

  • Add support for enabling and disabling specific normalizers via the command line. (#10)
  • Drop accidentally-committed warning emitted on every fixture-based test. []
  • Reintroduce the .ar normalizer [] but disable it by default so that it can be enabled with --normalizers=+ar or similar (see the sketch after this list). (#3)
  • In verbose mode, print the normalizers that strip-nondeterminism will apply. []
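As a hedged illustration of the switches mentioned above (the archive name is hypothetical, and I am assuming the verbose switch referenced in the last item is spelled --verbose):

```python
# Hedged sketch: run strip-nondeterminism with the disabled-by-default ar
# normalizer re-enabled, in verbose mode. The .a file name is hypothetical.
import subprocess

subprocess.run(
    ["strip-nondeterminism", "--verbose", "--normalizers=+ar", "libdemo.a"],
    check=True,  # raise if the tool exits with an error
)
```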

In addition, there was some movement on an issue in the Archive::Zip Perl module that strip-nondeterminism uses regarding the lack of support for bzip compression that was originally filed in 2016 by Andrew Ayer.

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

This month Vagrant Cascadian suggested, and subsequently implemented, that we additionally test varying build directories of different string lengths (eg. /path/to/123 vs /path/to/123456), also varying the number of directory components (eg. /path/to/dir vs. /path/to/parent/subdir). Curiously, whilst it was believed a priori that this was rather unlikely to yield differences, Chris Lamb has managed to identify approximately twenty packages that are affected by this issue.
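A minimal sketch of what this class of problem looks like (artifact and path names are hypothetical): if an artifact embeds its build directory, then building the same source under /path/to/dir and /path/to/parent/subdir yields different bytes, which a simple substring check can surface:

```python
# Minimal sketch: detect whether a build artifact has captured its build path,
# the class of difference probed by varying path lengths and component counts.
import sys

def captures_build_path(artifact: str, build_dir: str) -> bool:
    with open(artifact, "rb") as f:
        return build_dir.encode() in f.read()

if __name__ == "__main__":
    artifact, build_dir = sys.argv[1], sys.argv[2]   # hypothetical arguments
    if captures_build_path(artifact, build_dir):
        print(f"{artifact} embeds {build_dir}: rebuilds elsewhere will differ")
```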

It was also noticed that our testing of the Coreboot free software firmware has failed to build the toolchain since we switched to building on the Debian buster distribution. The last successful build was on August 7th; all newer builds have failed.

In addition, the following code changes were performed in the last month:

  • Chris Lamb: Ensure that the size of the log for the second build in HTML pages is also correctly formatted (eg. “12KB” vs “12345”). []

  • Holger Levsen:

  • Mathieu Parent: Update the contact details for the Debian PHP Group. []

  • Mattia Rizzolo:

The usual node maintenance was performed by Holger Levsen [][] and Vagrant Cascadian [].


Misc news

There was yet more effort put into our website this month, including miscellaneous copyediting by Chris Lamb [], Mathieu Parent referencing his fix for php-pear [], and Vagrant Cascadian updating a link to his homepage [].

On our mailing list this month, Santiago Torres Arias started a Setting up a MS-hosted rebuilder with in-toto metadata thread regarding Microsoft’s interest in setting up a rebuilder for Debian packages, touching on issues of transparency logs and the integration of in-toto by the Secure Systems Lab at New York University. In addition, Lars Wirzenius continued a conversation regarding various questions about reproducible builds and their bearing on building a distributed continuous integration system.

Lastly, in a thread titled Reproducible Builds technical introduction tutorial, Jathan asked whether anyone had some “easy” Reproducible Builds tutorials in slides, video or written document format.


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can also get in touch with us via:



This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa, Mathieu Parent and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

Cryptogram: Default Password for GPS Trackers

Many GPS trackers are shipped with the default password 123456. Many users never change it.

We just need to eliminate default passwords. This is an easy win.

Long Now: What a Prehistoric Monument Reveals about the Value of Maintenance

Members of Long Now London chalking the White Horse of Uffington, a 3000-year-old prehistoric hill figure in England. Photo by Peter Landers.

Imagine, if you will, that you could travel back in time three thousand years to the late Bronze Age, with a bird’s eye view of a hill near the present-day village of Uffington, in Oxfordshire, England. From that vantage, you’d see the unmistakable outlines of a white horse etched into the hillside. It is enormous — roughly the size of a football field — and visible from 20 miles away.

Now, fast forward. Bounding through the millennia, you’d see groups of people arrive from nearby villages at regular intervals, making their way up the hill to partake in good old fashioned maintenance. Using hammers and buckets of chalk, they scour the hillside to ensure the giant pictogram is not obscured. Without this regular maintenance, the hill figure would not last more than twenty years before becoming entirely eroded and overgrown. After the work is done, a festival is held.

Entire civilizations rise and fall. The White Horse of Uffington remains. Scribes and historians make occasional note of the hill figure, such as in the Welsh Red Book of Hergest in 01382 (“Near to the town of Abinton there is a mountain with a figure of a stallion upon it, and it is white. Nothing grows upon it.”) or by the Oxford archivist Francis Wise in 01736 (“The ceremony of scouring the Horse, from time immemorial, has been solemnized by a numerous concourse of people from all the villages roundabout.”). Easily recognizable by air, the horse is temporarily hidden by turf during World War II to confuse Luftwaffe pilots during bombing raids. Today, the National Trust preserves the site, overseeing a regular act of maintenance 3,000 years in the making.

Long Now London chalking the White Horse. Photo by Peter Landers.

Earlier this summer, members of Long Now London took a field trip to Uffington to participate in the time-honored ceremony. Christopher Daniel, the lead organizer of Long Now London, says the idea to chalk the White Horse came from a conversation with Sarah Davis of Longplayer about the maintenance of art, places and meaning across generations and millennia.

“Sitting there, performing the same task as people in 01819, 00819 and around 800 BCE, it is hard not to consider the types and quantities of meaning and ceremony that may have been attached to those actions in those times,” Daniel says.

The White Horse of Uffington in 01937. Photo by Paul Nash.

Researchers still do not know why the horse was made. Archaeologist David Miles, who was able to date the horse to the late Bronze Age using a technique called optical stimulated luminescence, told The Smithsonian that the figure of the horse might be related to early Celtic art, where horses are depicted pulling the chariot of the sun across the sky. From the bottom of the Uffington hill, the sun appears to rise behind the horse.

“From the start the horse would have required regular upkeep to stay visible,” Emily Cleaver writes in The Smithsonian. “It might seem strange that the horse’s creators chose such an unstable form for their monument, but archaeologists believe this could have been intentional. A chalk hill figure requires a social group to maintain it, and it could be that today’s cleaning is an echo of an early ritual gathering that was part of the horse’s original function.”

In her lecture at Long Now earlier this summer, Monica L. Smith, an archaeologist at UCLA, highlighted the importance of ritual sites like Stonehenge and Göbekli Tepe in the eventual formation of cities.

“The first move towards getting people into larger and larger groups was probably something that was a ritual impetus,” she said. “The idea of coming together and gathering with a bunch of strangers was something that is evident in the earliest physical ritual structures that we have in the world today.”

Photo by Peter Landers.

For Christopher Daniel, the visit to Uffington underscored that there are different approaches to making things last. “The White Horse requires rather more regular maintenance than somewhere like Stonehenge,” he said. “But thankfully the required techniques and materials are smaller, simpler and much closer to hand.”

Though it requires considerably fewer resources to maintain, and is more symbolic than functional, the Uffington White Horse nonetheless offers a lesson in maintaining the infrastructure of cities today. “As humans, we are historically biased against maintenance,” Smith said in her Long Now lecture. “And yet that is exactly what infrastructure needs.”

The Golden Gate Bridge in San Francisco. Photo by Rich Niewiroski Jr.

When infrastructure becomes symbolic to a built environment, it is more likely to be maintained. Smith gave the example of San Francisco’s Golden Gate Bridge to illustrate this point. Much like the White Horse, the Golden Gate Bridge undergoes a willing and regular form of maintenance. “Somewhere between five to ten thousand gallons of paint a year, and thirty painters, are dedicated to keeping the Golden Gate Bridge golden,” Smith said.

Photos by Peter Landers.

For members of Long Now London, chalking the White Horse revealed that participating in acts of maintenance can be deeply meaningful. “It felt at once both quite ordinary and utterly sublime,” Daniel said. “The physical activity itself is in many ways straightforward. It is the context and history that elevate those actions into what we found to be a profound experience. It was also interesting to realize that on some level it does not matter why we do this. What matters most is that it is done.”

Daniel hopes Long Now London will carry out this “secular pilgrimage” every year. 

“Many of the oldest protected routes across Europe are routes of pilgrimage,” he says. “They were stamped out over centuries by people carrying or searching for meaning. I want the horse chalking to carry meaning across both time and space. If even just a few of us go to the horse each year with this intent, it becomes a tradition. Once something becomes a tradition, it attracts meaning, year by year, generation by generation. On this first visit to the horse, one member brought his kids. A couple of other members said they want to bring theirs in the future. This relatively simple act becomes something we do together—something we remember as much for the communal spirit as for the activity itself. In so doing, we layer new meaning onto old as we bash new chalk into old.”


Learn More

Cory Doctorow: Critical essays (including mine) discuss Toronto’s plan to let Google build a surveillance-based “smart city” along its waterfront

Sidewalk Labs is Google’s sister company that sells “smart city” technology; its showcase partner is Toronto, my hometown, where it has made a creepy shitshow out of its freshman outing, from the mass resignations of its privacy advisors to the underhanded way it snuck in the right to take over most of the lakeshore without further consultations (something the company straight up lied about after they were outed). Unsurprisingly, the city, the province, the country, and the company are all being sued over the plan.

Toronto Life has run a great, large package of short essays by proponents and critics of the project, from Sidewalk Labs CEO Dan Doctoroff (no, really, that’s his name) to former privacy commissioner Ann Cavoukian (who evinces an unfortunate belief in data-deidentification) to city councillor and former Greenpeace campaigner Gord Perks to urban guru Richard Florida to me.

I wrote about the prospect that a city could be organized around the principle that people are sensors, not things to be sensed — that is, imagine an internet of things that doesn’t relegate the humans it notionally serves to the status of “thing.”

Our cities are necessarily complex, and they benefit from sensing and control. From census tracts to John Snow’s 19th-century map of central London cholera infections, we have been gathering telemetry on the performance of our cities in order to tune and optimize them for hundreds of years. As cities advance, they demand ever-higher degrees of sensing and actuating. But smart cities have to be built by cities themselves, democratically controlled and publicly owned. Reinventing company towns with high-tech fillips is not a path to a brighter future. It’s a new form of digital feudalism.

Humans are excellent sensors. We’re spectacular at deciding what we want for dinner, which seat on the subway we prefer, which restaurants we’re likely to enjoy and which strangers we want to talk to at parties. What if people were the things that smart cities were designed to serve, rather than the data that smart cities lived to process? Here’s how that could work. Imagine someone ripped all the surveillance out of Android and all the anti-user controls out of iOS and left behind nothing on your phone but code that serves you, not manufacturers or advertisers. It could still collect data—where you are, who you talk to, what you say—but it would be a roach motel for that data, which would check in to your device but not check out. It wouldn’t be available to third parties without your ongoing consent.

A phone that knows about you—but doesn’t tell anyone what it knows about you—would be your interface to a better smart city. The city’s systems could stream data to your device, which could pick the relevant elements out of the torrent: the nearest public restroom, whether the next bus has a seat for you, where to get a great sandwich.
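Here is a toy sketch of that inversion (all event kinds and records are hypothetical): the city broadcasts one public stream, and the filtering happens on your device, so your preferences never leave it:

```python
# Toy sketch of the "roach motel" model described above: the city broadcasts a
# public data stream, and filtering happens on-device against preferences that
# are never uploaded. All event kinds and records here are hypothetical.
from typing import Iterable, List

PREFERENCES = {"public_restroom", "bus_seat"}  # stays on the phone

def relevant(city_stream: Iterable[dict]) -> List[dict]:
    # No query leaves the device, so the city learns nothing about its owner.
    return [event for event in city_stream if event.get("kind") in PREFERENCES]

stream = [
    {"kind": "public_restroom", "where": "Queen & Spadina"},
    {"kind": "ad_beacon", "where": "Dundas Square"},
    {"kind": "bus_seat", "route": 501, "seats_free": 3},
]
for event in relevant(stream):
    print(event)
```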


A smart city should serve its users, not mine their data
[Cory Doctorow/Toronto Life]

The Sidewalk Wars [Toronto Life]

(Image: Cryteria, CC-BY, modified)

Planet Debian: Tim Retout: PA Consulting

In early October, I will be saying goodbye to my colleagues at CV-Library after 7.5 years, and joining PA Consulting in London as a Principal Consultant.

Over the course of my time at CV-Library I have got married, had a child, and moved from Southampton to Bedford. I am happy to have played a part in the growth of CV-Library as a leading recruitment brand in the UK, especially helping to make the site more reliable - I can tell more than a few war stories.

Most of all I will remember the people. I still have much to learn about management, but working with such an excellent team, the years passed very quickly. I am grateful to everyone, and wish them all every future success.

Cryptogram: Credit Card Privacy

Good article in the Washington Post on all the surveillance associated with credit card use.

Krebs on Security: ‘Satori’ IoT Botnet Operator Pleads Guilty

A 21-year-old man from Vancouver, Wash. has pleaded guilty to federal hacking charges tied to his role in operating the “Satori” botnet, a crime machine powered by hacked Internet of Things (IoT) devices that was built to conduct massive denial-of-service attacks targeting Internet service providers, online gaming platforms and Web hosting companies.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

Kenneth Currin Schuchman pleaded guilty to one count of aiding and abetting computer intrusions. Between July 2017 and October 2018, Schuchman was part of a conspiracy with at least two other unnamed individuals to develop and use Satori in large scale online attacks designed to flood their targets with so much junk Internet traffic that the targets became unreachable by legitimate visitors.

According to his plea agreement, Schuchman — who went by the online aliases “Nexus” and “Nexus-Zeta” — worked with at least two other individuals to build and use the Satori botnet, which harnessed the collective bandwidth of approximately 100,000 hacked IoT devices by exploiting vulnerabilities in various wireless routers, digital video recorders, Internet-connected security cameras, and fiber-optic networking devices.

Satori was originally based on the leaked source code for Mirai, a powerful IoT botnet that first appeared in the summer of 2016 and was responsible for some of the largest denial-of-service attacks ever recorded (including a 620 Gbps attack that took KrebsOnSecurity offline for almost four days).

Throughout 2017 and into 2018, Schuchman worked with his co-conspirators — who used the nicknames “Vamp” and “Drake” — to further develop Satori by identifying and exploiting additional security flaws in other IoT systems.

Schuchman and his accomplices gave new monikers to their IoT botnets with almost every new improvement, rechristening their creations with names including “Okiru” and “Masuta,” and infecting up to 700,000 compromised systems.

The plea agreement states that the object of the conspiracy was to sell access to their botnets to those who wished to rent them for launching attacks against others, although it’s not clear to what extent Schuchman and his alleged co-conspirators succeeded in this regard.

Even after he was indicted in connection with his activities in August 2018, Schuchman created a new botnet variant while on supervised release. At the time, Schuchman and Drake had something of a falling out, and Schuchman later acknowledged using information gleaned by prosecutors to identify Drake’s home address for the purposes of “swatting” him.

Swatting involves making false reports of a potentially violent incident — usually a phony hostage situation, bomb threat or murder — to prompt a heavily-armed police response to the target’s location. According to his plea agreement, the swatting that Schuchman set in motion in October 2018 resulted in “a substantial law enforcement response at Drake’s residence.”

As noted in a September 2018 story, Schuchman was not exactly skilled in the art of obscuring his real identity online. For one thing, the domain name used as a control server to synchronize the activities of the Satori botnet was registered to the email address nexuczeta1337@gmail.com. That domain name was originally registered to a “ZetaSec Inc.” and to a “Kenny Schuchman” in Vancouver, Wash.

People who operate IoT-based botnets maintain and build up their pool of infected IoT systems by constantly scanning the Internet for other vulnerable systems. Schuchman’s plea agreement states that when he received abuse complaints related to his scanning activities, he responded in his father’s identity.

“Schuchman frequently used identification devices belonging to his father to further the criminal scheme,” the plea agreement explains.

While Schuchman may be the first person to plead guilty in connection with Satori and its progeny, he appears to be hardly the most culpable. Multiple sources tell KrebsOnSecurity that Schuchman’s co-conspirator Vamp is a U.K. resident who was principally responsible for coding the Satori botnet, and as a minor was involved in the 2015 hack against U.K. phone and broadband provider TalkTalk.

Multiple sources also say Vamp was principally responsible for the 2016 massive denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

The investigation into Schuchman and his alleged co-conspirators is being run out of the FBI field office in Alaska, spearheaded by some of the same agents who helped track down and ultimately secure guilty pleas from the original co-authors of the Mirai botnet.

It remains to be seen what kind of punishment a federal judge will hand down for Schuchman, who reportedly has been diagnosed with Asperger Syndrome and autism. The maximum penalty for the single criminal count to which he’s pleaded guilty is 10 years in prison and fines of up to $250,000.

However, it seems likely his sentencing will fall well short of that maximum: Schuchman’s plea deal states that he agreed to a recommended sentence “at the low end of the guideline range as calculated and adopted by the court.”

Cory Doctorow: Podcast: Barlow’s Legacy

Even though I’m at Burning Man, I’ve snuck out an extra scheduled podcast episode (MP3): Barlow’s Legacy is my contribution to the Duke Law and Tech Review’s special edition, THE PAST AND FUTURE OF THE INTERNET: Symposium for John Perry Barlow:

“Who controls the past controls the future; who controls the present controls the past.”1

And now we are come to the great techlash, long overdue and desperately needed. With the techlash comes the political contest to assemble the narrative of What Just Happened and How We Got Here, because “Who controls the past controls the future. Who controls the present controls the past.” Barlow is a key figure in that narrative, and so defining his legacy is key to the project of seizing the future.

As we contest over that legacy, I will here set out my view on it. It’s an insider’s view: I met Barlow first through his writing, and then as a teenager on The WELL, and then at a dinner in London with Electronic Frontier Foundation (EFF) attorney Cindy Cohn (now the executive director of EFF), and then I worked with him, on and off, for more than a decade, through my work with EFF. He lectured to my students at USC, and wrote the introduction to one of my essay collections, and hung out with me at Burning Man, and we spoke on so many bills together, and I wrote him into one of my novels as a character, an act that he blessed. I emceed events where he spoke and sat with him in his hospital room as he lay dying. I make no claim to being Barlow’s best or closest friend, but I count myself mightily privileged to have been a friend, a colleague, and a protege of his.

There is a story today about “cyber-utopians” told as a part of the techlash: Once, there were people who believed that the internet would automatically be a force for good. They told us all to connect to one another and fended off anyone who sought to rein in the power of the technology industry, naively ushering in an era of mass surveillance, monopolism, manipulation, even genocide. These people may have been well-intentioned, but they were smart enough that they should have known better, and if they hadn’t been so unforgivably naive (and, possibly, secretly in the pay of the future monopolists) we might not be in such dire shape today.

MP3

Planet Debian: Candy Tsai: Beyond Outreachy: Final Interviews and Internship Review

The last few weeks (week 11 – week 13) of Outreachy were probably the hardest weeks. I had to do 3 informational interviews with the goal of getting a better picture of the open source/free software industry.

The thought of talking to someone I don’t even know just overwhelms me. So this assignment just leaves me scared to death. Pressing that “Send Email” button to these interviewees required me to summon up all of my courage but it was totally worth it. I really appreciate their time for chatting with me.

On the other hand, it’s hard to believe the internship is coming to an end! Good news is that I will be sticking around Debci after this.

Informational Interviews

The theme for week 11 was “Making connections”, so I had to reach out to 3 people beyond my network for an informational interview. I’d rather just call it an informational chat so it doesn’t sound too scary. My goal was to learn more about how companies involved with open source survive and how others work remotely. Therefore, my criteria for the interviewees were really simple, but not so easy to fulfil:

  • Lives in Taiwan
  • Works remotely
  • Their company is dedicated to open source/free software

In the end, I was really lucky to have these three for my final assignment:

  • Andrew Lee: also part of the Debian community, has been working on open source for more than 20 years in Taiwan, works at Collabora, an open source consulting company
  • James Tien: works at Automattic, a company known for working on WordPress, link to his blog here, it’s in Chinese
  • Gordon Tai: works at Ververica, a company known for working on Apache Flink

A big thanks to them and to terceiro who guided me through this. During my search, it was hard to find someone working for a local company here in Taiwan that fulfilled my criteria.

I have organized and summarized what I learned below:

Staying in Open Source

  • Passion is needed for coding and open source, you have to really enjoy it to stay in the long run
  • Opportunities come unexpectedly, you never know when or how they would come to you
  • Write “code”

Remote work

  • People can still sense your ups and downs through your chat messages and your facial expressions in video calls
  • Communication is much more important than the actual code itself; sometimes you spend more time talking things through than writing code
  • You can use a pomodoro timer to help you focus, or try working different hours
  • Try working in different environments: a café, under a tree, in the forest, beside the ocean, etc.
  • Exercise, exercise, exercise!

The points above are very general, but it was the stories and experiences behind them that were special. Those are for you to discover by doing your own informational interviews!

Internship Review

Last but not least, here’s a wrap-up of my internship in Q&A format. I hope this helps anyone who wants to participate in future rounds get a better picture of what the Outreachy internship with Debian Debci was like.

What communication skills have you learned during the internship?

Asking questions and leaving comments. Since I was not a user of Debci, I started with absolutely zero knowledge. I even had to write a blog post to clarify for myself what all the terminology meant, so I can come back to it if I forget in the future. I asked lots of questions, and luckily my mentors were really patient. As we only had a video chat once per week, we discussed mostly through comments on the merge request or issue. Sometimes I found it hard to convey my thoughts with just words (or images), so this was really good practice.

What technical skills have you learned during the internship?

I only started writing Ruby because of this internship. Also, I wrote my first Vagrantfile. In general, I think getting familiar with the code base was the best part.

How did your Outreachy mentor help you along the way?

My mentor reviewed my code thoroughly and guided me through the whole internship. We did pair programming sessions, and that was really helpful.

What was one amazing thing that happened during the internship?

The informational interviews were pretty horrifying and at the same time amazing. It never really occurred to me that people would take the time to talk to someone they don’t know. I am really grateful for their time. Their personal stories were really inspiring and motivating too.

How did Outreachy help you feel more confident in making open source and free software contributions?

In my opinion, Outreachy’s initial contribution phase is really important. It kind of forces candidates to at least reach out and take the first step. Even if you don’t get accepted in the end, you still went from 0 to 1. That is when you find out that the community is actually pretty welcoming to newcomers. So for me, it wasn’t so much about becoming more confident as about becoming less scared.

What parts of your project did you complete?

I added a self-service section where people can request their own tests through the Debci UI without fumbling with curl commands. I also added a Vagrantfile for future newcomers to set up the project more easily. I hope it works for them, because I’ve only tested it on my own computer. We’ll see.

What are the next steps for you to complete the project?

I’m sticking around, at least until I finish the parts that I started, because I think it was fun and people have actually made some requests related to this. It’s always exciting to see that what you are building is wanted by the users.

I really appreciate the opportunity that Outreachy has been offering to interns! Assuming that you have read through this post, you probably are interested in Outreachy. Please do come and apply if you are interested, or recommend it to others!

Cory DoctorowThey told us DRM would give us more for less, but they lied

My latest Locus Magazine column is DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

For 40 years, University of Chicago-style market orthodoxy has promised widespread prosperity as a natural consequence of turning everything into unfettered, unregulated, monopolistic businesses. For 40 years, everyone except the paymasters who bankrolled the University of Chicago’s priesthood has gotten poorer.

Today, DRM stands as a perfect example of everything terrible about monopolies, surveillance, and shareholder capitalism.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the customers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, nothing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any longer, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM servers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.

DRM Broke Its Promise [Locus/Cory Doctorow]

(Image: Cryteria, CC-BY, modified)

,

Krebs on SecuritySpam In your Calendar? Here’s What to Do.

Many spam trends are cyclical: Spammers tend to switch tactics when one method of hijacking your time and attention stops working. But periodically they circle back to old tricks, and few spam trends are as perennial as calendar spam, in which invitations to click on dodgy links show up unbidden in your digital calendar application from Apple, Google and Microsoft. Here’s a brief primer on what you can do about it.

Image: Reddit

Over the past few weeks, a good number of readers have written in to say they feared their calendar app or email account was hacked after noticing a spammy event had been added to their calendars.

The truth is, all that a spammer needs to add an unwelcome appointment to your calendar is the email address tied to your calendar account. That’s because the calendar applications from Apple, Google and Microsoft are set by default to accept calendar invites from anyone.

Calendar invites from spammers run the gamut from ads for porn or pharmacy sites, to claims of an unexpected financial windfall or “free” items of value, to outright phishing attacks and malware lures. The important thing is that you don’t click on any links embedded in these appointments. And resist the temptation to respond to such invitations by selecting “yes,” “no,” or “maybe,” as doing so may only serve to guarantee you more calendar spam.

Fortunately, there are a few simple steps you can take that should help minimize this nuisance. To stop events from being automatically added to your Google calendar:

-Open the Calendar application, and click the gear icon to get to the Calendar Settings page.
-Under “Event Settings,” change the default setting to “No, only show invitations to which I have responded.”

To prevent events from automatically being added to your Microsoft Outlook calendar, click the gear icon in the upper right corner of Outlook to open the settings menu, and then scroll down and select “View all Outlook settings.” From there:

-Click “Calendar,” then “Events from email.”
-Change the default setting for each type of reservation to “Only show event summaries in email.”

For Apple calendar users, log in to your iCloud.com account, and select Calendar.

-Click the gear icon in the lower left corner of the Calendar application, and select “Preferences.”
-Click the “Advanced” tab at the top of the box that appears.
-Change the default setting to “Email to [your email here].”

Making these changes will mean that any events your email provider previously added to your calendar automatically by scanning your inbox for certain types of messages from common events — such as making hotel, dining, plane or train reservations, or paying recurring bills — may no longer be added for you. Spammy calendar invitations may still show up via email; in the event they do, make sure to mark the missives as spam.

Have you experienced a spike in calendar spam of late? Or maybe you have another suggestion for blocking it? If so, sound off in the comments below.

Planet DebianNorbert Preining: Debian Activities of the last few months

I haven’t written about specific Debian activities in recent times, but I haven’t been lazy. In fact, I have been very active, with a lot of new packages I am contributing to.

TeX and Friends

Lots of updates since we first released TeX Live 2019 for Debian, too many to actually mention. We also have bumped the binary package with backports of fixes for dvipdfmx and other programs. Another item that is still pending is the separation of dvisvgm into a separate package (currently in the NEW queue). Biber has been updated to match the version of biblatex shipped in the TeX Live packages.

Calibre

Calibre development is continuing as usual, with lots of activity to get Calibre ready for Python 3. To prepare for this move, I have taken over the Python mechanize package, which had not been updated for many years. At the moment it is already possible to build a Calibre package for Python 3, but unfortunately practically all external plugins are still based on Python 2 and thus fail with Python 3. As a consequence I will keep Calibre at the Python 2 version for the time being, and hope that Calibre officially switches to Python 3, which would trigger a conversion of the plugins too, before Bullseye (the next Debian release, which aims to get rid of Python 2) is released.

Cinnamon

The packages of Cinnamon 4.0 I have prepared together with the Cinnamon Team have been uploaded to sid, and I have uploaded packages of Cinnamon 4.2 to experimental. We plan to move the 4.2 packages to sid after the 4.0 packages have entered testing.

Onedrive

Onedrive didn’t make it into the buster release, in particular because the release masters weren’t happy with an upgrade request I made to get a new version (scheduled to enter testing one day after the freeze date!) with loads of fixes into buster. So I decided to remove onedrive from buster altogether: better nothing than something broken. It is a bit of a pain for me – but users are advised to get the source code from GitHub and install a self-compiled version – this is definitely safer.


All in all quite a lot of work. Enjoy.

,

Planet DebianJunichi Uekawa: I have an issue remembering where I took notes.

I have an issue remembering where I took notes. In the past it was all in emacs. Now it's somewhere in one of the web services.

Planet DebianSean Whitton: Debian Policy call for participation -- September 2019

There hasn’t been much activity lately, but no shortage of interesting and hopefully-accessible Debian Policy work. Do write to debian-policy@lists.debian.org if you’d like to participate but are struggling to figure out how.

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Krebs on SecurityFeds Allege Adconion Employees Hijacked IP Addresses for Spamming

Federal prosecutors in California have filed criminal charges against four employees of Adconion Direct, an email advertising firm, alleging they unlawfully hijacked vast swaths of Internet addresses and used them in large-scale spam campaigns. KrebsOnSecurity has learned that the charges are likely just the opening salvo in a much larger, ongoing federal investigation into the company’s commercial email practices.

Prior to its acquisition, Adconion offered digital advertising solutions to some of the world’s biggest companies, including Adidas, AT&T, Fidelity, Honda, Kohl’s and T-Mobile. Amobee, the Redwood City, Calif. online ad firm that acquired Adconion in 2014, bills itself as the world’s leading independent advertising platform. The CEO of Amobee is Kim Perell, formerly CEO of Adconion.

In October 2018, prosecutors in the Southern District of California named four Adconion employees — Jacob Bychak, Mark Manoogian, Petr Pacas, and Mohammed Abdul Qayyum — in a ten-count indictment on charges of conspiracy, wire fraud, and electronic mail fraud. All four men have pleaded not guilty to the charges, which stem from a grand jury indictment handed down in June 2017.

‘COMPANY A’

The indictment and other court filings in this case refer to the employer of the four men only as “Company A.” However, LinkedIn profiles under the names of three of the accused show they each work(ed) for Adconion and/or Amobee.

Mark Manoogian is an attorney whose LinkedIn profile states that he is director of legal and business affairs at Amobee, and formerly was senior business development manager at Adconion Direct; Bychak is listed as director of operations at Adconion Direct; Qayyum’s LinkedIn page lists him as manager of technical operations at Adconion. A statement of facts filed by the government indicates Petr Pacas was at one point director of operations at Company A (Adconion).

According to the indictment, between December 2010 and September 2014 the defendants engaged in a conspiracy to identify or pay to identify blocks of Internet Protocol (IP) addresses that were registered to others but which were otherwise inactive.

The government alleges the men sent forged letters to an Internet hosting firm claiming they had been authorized by the registrants of the inactive IP addresses to use that space for their own purposes.

“Members of the conspiracy would use the fraudulently acquired IP addresses to send commercial email (‘spam’) messages,” the government charged.

HOSTING IN THE WIND

Prosecutors say the accused were able to spam from the purloined IP address blocks after tricking the owner of Hostwinds, an Oklahoma-based Internet hosting firm, into routing the fraudulently obtained IP addresses on their behalf.

Hostwinds owner Peter Holden was the subject of a 2015 KrebsOnSecurity story titled, “Like Cutting Off a Limb to Save the Body,” which described how he’d initially built a lucrative business catering mainly to spammers, only to later have a change of heart and aggressively work to keep spammers off of his network.

That a case of such potential import for the digital marketing industry has escaped any media attention for so long is unusual but not surprising given what’s at stake for the companies involved and for the government’s ongoing investigations.

Adconion’s parent Amobee manages ad campaigns for some of the world’s top brands, and has every reason not to call attention to charges that some of its key employees may have been involved in criminal activity.

Meanwhile, prosecutors are busy following up on evidence supplied by several cooperating witnesses in this and a related grand jury investigation, including a confidential informant who received information from an Adconion employee about the company’s internal operations.

THE BIGGER PICTURE

According to a memo jointly filed by the defendants, “this case spun off from a larger ongoing investigation into the commercial email practices of Company A.” Ironically, this memo appears to be the only one of several dozen documents related to the indictment that mentions Adconion by name (albeit only in a series of footnote references).

Prosecutors allege the four men bought hijacked IP address blocks from another man tied to this case who was charged separately. This individual, Daniel Dye, has a history of working with others to hijack IP addresses for use by spammers.

For many years, Dye was a system administrator for Optinrealbig, a Colorado company that relentlessly pimped all manner of junk email, from mortgage leads and adult-related services to counterfeit products and Viagra.

Optinrealbig’s CEO was the spam king Scott Richter, who later changed the name of the company to Media Breakaway after being successfully sued for spamming by AOL, Microsoft, MySpace, and the New York Attorney General’s Office, among others. In 2008, this author penned a column for The Washington Post detailing how Media Breakaway had hijacked tens of thousands of IP addresses from a defunct San Francisco company for use in its spamming operations.

Dye has been charged with violations of the CAN-SPAM Act. A review of the documents in his case suggest Dye accepted a guilty plea agreement in connection with the IP address thefts and is cooperating with the government’s ongoing investigation into Adconion’s email marketing practices, although the plea agreement itself remains under seal.

Lawyers for the four defendants in this case have asserted in court filings that the government’s confidential informant is an employee of Spamhaus.org, an organization that many Internet service providers around the world rely upon to help identify and block sources of malware and spam.

Interestingly, in 2014 Spamhaus was sued by Blackstar Media LLC, a bulk email marketing company and subsidiary of Adconion. Blackstar’s owners sued Spamhaus for defamation after Spamhaus included them at the top of its list of the Top 10 world’s worst spammers. Blackstar later dropped the lawsuit and agreed to pay Spamhaus’ legal costs.

Representatives for Spamhaus declined to comment for this story. Responding to questions about the indictment of Adconion employees, Amobee’s parent company SingTel referred comments to Amobee, which issued a brief statement saying, “Amobee has fully cooperated with the government’s investigation of this 2017 matter which pertains to alleged activities that occurred years prior to Amobee’s acquisition of the company.”

ONE OF THE LARGEST SPAMMERS IN HISTORY?

It appears the government has been investigating Adconion’s email practices since at least 2015, and possibly as early as 2013. The very first result in an online search for the words “Adconion” and “spam” returns a Microsoft Powerpoint document that was presented alongside this talk at an ARIN meeting in October 2016. ARIN stands for the American Registry for Internet Numbers, and it handles IP addresses allocations for entities in the United States, Canada and parts of the Caribbean.

As the screenshot above shows, that Powerpoint deck was originally named “Adconion – Arin,” but the file has since been renamed. That is, unless one downloads the file and looks at the metadata attached to it, which shows the original filename and that it was created in 2015 by someone at the U.S. Department of Justice.

Slide #8 in that Powerpoint document references a case example of an unnamed company (again, “Company A”), which the presenter said was “alleged to be one of the largest spammers in history,” that had hijacked “hundreds of thousands of IP addresses.”

A slide from an ARIN presentation in 2016 that referenced Adconion.

There are fewer than four billion IPv4 addresses available for use, but the vast majority of them have already been allocated. In recent years, this global shortage has turned IP addresses into a commodity wherein each IP can fetch between $15 and $25 on the open market.

The dearth of available IP addresses has created boom times for those engaged in the acquisition and sale of IP address blocks. It also has emboldened scammers and spammers who specialize in absconding with and spamming from dormant IP address blocks without permission from the rightful owners.

In May, KrebsOnSecurity broke the news that Amir Golestan — the owner of a prominent Charleston, S.C. tech company called Micfo LLC — had been indicted on criminal charges of fraudulently obtaining more than 735,000 IP addresses from ARIN and reselling the space to others.

KrebsOnSecurity has since learned that for several years prior to 2014, Adconion was one of Golestan’s biggest clients. More on that in an upcoming story.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.700.2.0

A new RcppArmadillo release based on a new Armadillo upstream release arrived on CRAN, and will get to Debian shortly. It brings continued improvements for sparse matrices and a few other things; see below for more details. I also appear to have skipped blogging about the preceding 0.9.600.4.0 release (which was actually extra-rigorous with an unprecedented number of reverse-depends runs) so I included its changes (with very nice sparse matrix improvements) as well.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 656 other packages on CRAN.

Changes in RcppArmadillo version 0.9.700.2.0 (2019-09-01)

  • Upgraded to Armadillo release 9.700.2 (Gangster Democracy)

    • faster handling of cubes by vectorise()

    • faster handling of sparse matrices by nonzeros()

    • faster row-wise index_min() and index_max()

    • expanded join_rows() and join_cols() to handle joining up to 4 matrices

    • expanded .save() and .load() to allow storing sparse matrices in CSV format

    • added randperm() to generate a vector with random permutation of a sequence of integers

  • Expanded the list of known good gcc and clang versions in configure.ac

Changes in RcppArmadillo version 0.9.600.4.0 (2019-07-14)

  • Upgraded to Armadillo release 9.600.4 (Napa Invasion)

    • faster handling of sparse submatrices

    • faster handling of sparse diagonal views

    • faster handling of sparse matrices by symmatu() and symmatl()

    • faster handling of sparse matrices by join_cols()

    • expanded clamp() to handle sparse matrices

    • added .clean() to replace elements below a threshold with zeros

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Carter: Free Software Activities (2019-08)

Ah, spring time at last. The last month I caught up a bit with my Debian packaging work after the Buster freeze, release and subsequent DebConf. Still a bit to catch up on (mostly kpmcore and partitionmanager that’s waiting on new kdelibs and a few bugs). Other than that I made two new videos, and I’m busy with renovations at home this week so my home office is packed up and in the garage. I’m hoping that it will be done towards the end of next week, until then I’ll have little screen time for anything that’s not work work.

2019-08-01: Review package hipercontracer (1.4.4-1) (mentors.debian.net request) (needs some work).

2019-08-01: Upload package bundlewrap (3.6.2-1) to debian unstable.

2019-08-01: Upload package gnome-shell-extension-dash-to-panel (20-1) to debian unstable.

2019-08-01: Accept MR!2 for gamemode, for new upstream version (1.4-1).

2019-08-02: Upload package gnome-shell-extension-workspaces-to-dock (51-1) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-hide-activities (0.00~git20131024.1.6574986-2) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-trash (0.2.0-git20161122.ad29112-2) to debian unstable.

2019-08-04: Upload package toot (0.22.0-1) to debian unstable.

2019-08-05: Upload package gamemode (gamemode-1.4.1+git20190722.4ecac89-1) to debian unstable.

2019-08-05: Upload package calamares-settings-debian (10.0.24-2) to debian unstable.

2019-08-05: Upload package python3-flask-restful (0.3.7-3) to debian unstable.

2019-08-05: Upload package python3-aniso8601 (7.0.0-2) to debian unstable.

2019-08-06: Upload package gamemode (1.5~git20190722.4ecac89-1) to debian unstable.

2019-08-06: Sponsor package assaultcube (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-06: Sponsor package assaultcube-data (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-07: Request more info on Debian bug #825185 (“Please which tasks should be installed at a default installation of the blend”).

2019-08-07: Close debian bug #689022 in desktop-base (“lxde: Debian wallpaper distorted on 4:3 monitor”).

2019-08-07: Close debian bug #680583 in desktop-base (“please demote librsvg2-common to Recommends”).

2019-08-07: Comment on debian bug #931875 in gnome-shell-extension-multi-monitors (“Error loading extension”) to temporarily avoid autorm.

2019-08-07: File bug (multimedia-devel)

2019-08-07: Upload package python3-grapefruit (0.1~a3+dfsg-7) to debian unstable (Closes: #926414).

2019-08-07: Comment on debian bug #933997 in gamemode (“gamemode isn’t automatically activated for rise of the tomb raider”).

2019-08-07: Sponsor package assaultcube-data (1.2.0.2.1-2) for debian unstable (e-mail request).

2019-08-08: Upload package calamares (3.2.12-1) to debian unstable.

2019-08-08: Close debian bug #32673 in aalib (“open /dev/vcsa* write-only”).

2019-08-08: Upload package tanglet (1.5.4-1) to debian unstable.

2019-08-08: Upload package tmux-theme-jimeh (0+git20190430-1b1b809-1) to debian unstable (Closes: #933222).

2019-08-08: Close debian bug #927219 (“amdgpu graphics fail to be configured”).

2019-08-08: Close debian bugs #861065 and #861067 (For creating nextstep task and live media).

2019-08-10: Sponsor package scons (3.1.1-1) for debian unstable (mentors.debian.org request) (Closes RFS: #932817).

2019-08-10: Sponsor package fractgen (2.1.7-1) for debian unstable (mentors.debian.net request).

2019-08-10: Sponsor package bitwise (0.33-1) for debian unstable (mentors.debian.net request). (Closes RFS: #934022).

2019-08-10: Review package python-pyspike (0.6.0-1) (mentors.debian.net request) (needs some additional work).

2019-08-10: Upload package connectagram (1.2.10-1) to debian unstable.

2019-08-11: Review package bitwise (0.40-1) (mentors.debian.net request) (need some further work).

2019-08-11: Sponsor package sane-backends (1.0.28-1~experimental1) to debian experimental (mentors.debian.net request).

2019-08-11: Review package hcloud-python (1.4.0-1) (mentors.debian.net).

2019-08-13: Review package bitwise (0.40-1) (e-mail request) (needs some further work).

2019-08-15: Sponsor package bitwise (0.40-1) for debian unstable (email request).

2019-08-19: Upload package calamares-settings-debian (10.0.20-1+deb10u1) to debian buster (CVE-2019-13179).

2019-08-19: Upload package gnome-shell-extension-dash-to-panel (21-1) to debian unstable.

2019-08-19: Upload package flask-restful (0.3.7-4) to debian unstable.

2019-08-20: Upload package python3-grapefruit (0.1~a3+dfsg-8) to debian unstable (Closes: #934599).

2019-08-20: Sponsor package runescape (0.6-1) for debian unstable (mentors.debian.net request).

2019-08-20: Review package ukui-menu (1.1.12-1) (needs some more work) (mentors.debian.net request).

2019-08-20: File ITP #935178 for bcachefs-tools.

2019-08-21: Fix two typos in bcachefs-tools (Github bcachefs-tools PR: #20).

2019-08-25: Published Debian Package of the Day video #60: 5 Fonts (highvoltage.tv / YouTube).

2019-08-26: Upload new upstream release of speedtest-cli (2.1.2-1) to debian unstable (Closes: #934768).

2019-08-26: Upload new package gnome-shell-extension-draw-on-your-screen to NEW for debian unstable. (ITP: #925518)

2019-08-27: File upstream bug for btfs so that the python2 dependency can be dropped from the Debian package (BTFS: #53).

2019-08-28: Published Debian Package Management #4: Maintainer Scripts (highvoltage.tv / YouTube).

2019-08-28: File upstream feature request in Calamares unpackfs module to help speed up installations (Calamares: #1229).

2019-08-28: File upstream request at smlinux/rtl8723de driver for license clarification (RTL8723DE: #49).

Planet DebianMike Gabriel: My Work on Debian LTS/ELTS (August 2019)

In August 2019, I have worked on the Debian LTS project for 24 hours (of 24.75 hours planned) and on the Debian ELTS project for another 2 hours (of 12 hours planned) as a paid contributor.

LTS Work

  • Upload fusiondirectory 1.0.8.2-5+deb8u2 to jessie-security (1 CVE, DLA 1875-1 [1])
  • Upload gosa 2.7.4+reloaded2+deb8u4 to jessie-security (1 CVE, DLA 1876-1 [2])
  • Upload gosa 2.7.4+reloaded2+deb8u5 to jessie-security (1 CVE, DLA 1905-1 [3])
  • Upload libav 6:11.12-1~deb8u8 to jessie-security (5 CVEs, DLA 1907-1 [4])
  • Investigate on CVE-2019-13627 (libgcrypt20). Upstream patch applies, build succeeds, but some tests fail. More work required on this.
  • Triage 14 packages with my LTS frontdesk hat on during the last week of August
  • Do a second pair of eyes review on changes uploaded with dovecot 1:2.2.13-12~deb8u7
  • File a merge request against security-tracker [5], add --minor option to contact-maintainers script.

ELTS Work

  • Investigate on CVE-2019-13627 (libgcrypt11). More work needed to assess if libgcrypt11 in wheezy is affected by CVE-2019-13627.

References

Planet DebianJulien Danjou: Dependencies Handling in Python

Dependencies are a nightmare for many people. Some even argue they are technical debt. Managing the list of the libraries of your software is a horrible experience. Updating them — automatically? — sounds like a delirium.

Stick with me here as I am going to help you get a better grasp on something that you cannot, in practice, get rid of — unless you're incredibly rich and talented and can live without the code of others.

First, we need to be clear about something regarding dependencies: there are two types of them. Donald Stufft wrote about the subject better than I could years ago. To make it simple, one can say that there are two types of code packages depending on external code: applications and libraries.

Libraries Dependencies

Python libraries should specify their dependencies in a generic way. A library should not require requests 2.1.5: it does not make sense. If every library out there needs a different version of requests, they can't be used at the same time.

Libraries need to declare dependencies based on ranges of version numbers. Requiring requests>=2 is correct. Requiring requests>=1,<2 is also correct if you know that requests 2.x does not work with the library. The problem that your version range specification is solving is the API compatibility issue between your code and your dependencies — nothing else. That's a good reason for libraries to use Semantic Versioning whenever possible.

Therefore, dependencies should be written in setup.py as something like:

from setuptools import setup

setup(
    name="MyLibrary",
    version="1.0",
    install_requires=[
        "requests",
    ],
    # ...
)

This way, it is easy for any application to use the library and co-exist with others.

Applications Dependencies

An application is just a particular case of a library. Applications are not intended to be reused (imported) by other libraries or applications — though nothing would prevent it in practice.

In the end, that means that you should specify the dependencies the same way that you would do for a library in the application's setup.py.

The main difference is that an application is usually deployed in production to provide its service. Deployments need to be reproducible. For that, you can't rely solely on setup.py: the requested ranges of the dependencies are too broad. You're at the mercy of random version changes whenever you re-deploy your application.

You, therefore, need a different version management mechanism to handle deployment than just setup.py.

pipenv has an excellent section recapping this in its documentation. It splits dependency types into abstract and concrete dependencies: abstract dependencies are based on ranges (e.g., libraries) whereas concrete dependencies are specified with precise versions (e.g., application deployments) — as we've just seen here.

Handling Deployment

The requirements.txt file has been used to solve application deployment reproducibility for a long time now. Its format is usually something like:

requests==3.1.5
foobar==2.0

Each library is pinned down to the micro version. That makes sure each of your deployments is going to install the same version of each dependency. Using a requirements.txt is a simple solution and a first step toward reproducible deployments. However, it's not enough.

Indeed, while you can specify which version of requests you want, if requests depends on urllib3, pip could end up installing urllib3 2.1 or urllib3 2.2. You can't know in advance which one will be installed, which means your deployment is not 100% reproducible.

Of course, you could duplicate all requests dependencies yourself in your requirements.txt, but that would be madness!
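
One traditional workaround is to let pip do the duplication for you: pip freeze lists every package installed in the environment, transitive dependencies included, each pinned to its exact version. A sketch (the pinned versions below are illustrative, matching the made-up numbers above):

$ pip freeze > requirements.txt
$ cat requirements.txt
certifi==2019.6.16
foobar==2.0
idna==2.8
requests==3.1.5
urllib3==2.2

This captures the whole tree, but it also flattens the useful distinction between your direct dependencies and the transitive ones, which is part of what the tools below address.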

An application dependency tree can be quite deep and complex sometimes.

There are various hacks available to fix this limitation, but the real saviors here are pipenv and poetry. The way they solve it is similar to many package managers in other programming languages. They generate a lock file that contains the list of all installed dependencies (and their own dependencies, etc.) with their version numbers. That makes sure the deployment is 100% reproducible.

Check out their documentation on how to set them up and use them!
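
To give a flavor of the workflow, here is a sketch using poetry (pipenv is analogous, with Pipfile and Pipfile.lock):

poetry add requests     # record the abstract dependency in pyproject.toml
poetry lock             # resolve and pin the whole tree into poetry.lock
poetry install          # reproduce exactly what the lock file describes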

Handling Dependencies Updates

Now that you have your lock file that makes sure your deployment is reproducible in a snap, you've another problem. How do you make sure that your dependencies are up-to-date? There is a real security concern about this, but also bug fixes and optimizations that you might miss by staying behind.

If your project is hosted on GitHub, Dependabot is an excellent solution to this issue. Enabling this application on your repository makes it automatically create pull requests whenever a new version of a library listed in your lock file is available. For example, if you've deployed your application with redis 3.3.6, Dependabot will create a pull request updating to redis 3.3.7 as soon as it gets released. Furthermore, Dependabot supports requirements.txt, pipenv, and poetry!

Dependabot updating jinja2 for you

Automatic Deployment Update

You're almost there. You have a bot that is letting you know that a new version of a library your project needs is available.

Once the pull request is created, your continuous integration system is going to kick in, deploy your project, and run the tests. If everything works fine, your pull request is ready to be merged. But are you really needed in this process?

Unless you have a particular and personal aversion to specific version numbers —"Gosh I hate versions that end with a 3. It's always bad luck."— or unless you have zero automated testing, you, human, are useless. This merge can be fully automatic.

This is where Mergify comes into play. Mergify is a GitHub application that lets you define precise rules about how to merge your pull requests. Here's a rule that I use in every project:

pull_requests_rules:
  - name: automatic merge from dependabot
    conditions:
      - author~=^dependabot(|-preview)\[bot\]$
      - label!=work-in-progress
      - "status-success=ci/circleci: pep8"
      - "status-success=ci/circleci: py37"
    actions:
      merge:
        method: merge
Mergify reports when the rule fully matches

As soon as your continuous integration system passes, Mergify merges the pull request for you.

You can then automatically trigger your deployment hooks to update your production deployment and get the new library version installed right away. This leaves your application always up-to-date with newer libraries and not lagging behind several years of releases.

If anything goes wrong, you're still able to revert the commit from Dependabot — which you can also automate if you wish with a Mergify rule.

Beyond

This is, to me, the state of the art of the dependency management lifecycle right now. And while this applies exceptionally well to Python, it can be applied to many other languages that use a similar pattern — such as Node and npm.

Planet DebianRuss Allbery: rra-c-util 8.0

This is a roll-up of a lot of changes to my utility package for C (and, increasingly, for Perl). It's been more than a year since the last release, so it's long overdue.

Most of the changes in this release are to the Perl test libraries and accompanying tests. Test::RRA now must be imported before Test::More so that it can handle the absence of Test::More (such as on Red Hat systems with perl but not perl-core installed). The is_file_contents function in Test::RRA now handles Windows and other systems without a diff program. And there are more minor improvements to the various tests written in Perl.

The Autoconf probe RRA_LIB_KRB5_OPTIONAL now correctly handles the case where Kerberos libraries are not available but libcom_err is, rather than incorrectly believing that Kerberos libraries were present.

As of this release, rra-c-util now tests the Perl test programs that it includes, which requires it to build and test a dummy Perl module. This means the build system now requires Perl 5.6.2 and the Module::Build module.

You can get the latest version from the rra-c-util distribution page.

,

Planet DebianThorsten Alteholz: My Debian Activities in August 2019

FTP master

This month the numbers went up again and I accepted 389 packages and rejected 43. The overall number of packages that got accepted was 460.

Debian LTS

This was my sixty-second month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 21.75h. During that time I did LTS uploads of:

  • [DLA 1887-1] freetype security update for one CVE
  • [DLA 1889-1] python3.4 security update for one CVE
  • [DLA 1893-1] cups security update for two CVEs
  • [DLA 1895-1] libmspack security update for one CVE
  • [DLA 1894-1] libapache2-mod-auth-openidc security update for one CVE
  • [DLA 1897-1] tiff security update for one CVE
  • [DLA 1902-1] djvulibre security update for four CVEs
  • [DLA 1904-1] libextractor security update for one CVE
  • [DLA 1906-1] python2.7 security update for one CVE

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the fifteenth ELTS month.

During my allocated time I uploaded:

  • ELA-155-1 of cups
  • ELA-157-1 of djvulibre
  • ELA-158-1 of python2.7

I spent some time working on tiff3, only to find that the affected features are not yet available there.

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new packages of …

I also uploaded new upstream versions of …

I improved packaging of …

On my Go challenge I uploaded golang-github-gin-contrib-static, golang-github-gin-contrib-cors, golang-github-yourbasic-graph, golang-github-cnf-structhash, golang-github-deanthompson-ginpprof, golang-github-jarcoal-httpmock, golang-github-gin-contrib-gzip, golang-github-mcuadros-go-gin-prometheus, golang-github-abdullin-seq, golang-github-centurylinkcloud-clc-sdk, golang-github-ziutek-mymysql, golang-github-terra-farm-udnssdk, golang-github-ensighten-udnssdk, golang-github-sethvargo-go-fastly.

I again reuploaded some Go packages (golang-github-go-xorm-core, golang-github-jarcoal-httpmock, golang-github-mcuadros-go-gin-prometheus, golang-github-deanthompson-ginpprof, golang-github-gin-contrib-cors, golang-github-gin-contrib-gzip, golang-github-gin-contrib-static, golang-github-cyberdelia-heroku-go, golang-github-corpix-uarand, golang-github-cnf-structhash, golang-github-rs-zerolog, golang-gopkg-ldap.v3, golang-github-yourbasic-graph, golang-github-ovh-go-ovh) that would not migrate before, due to being binary uploads.

I also sponsored the following packages: golang-github-jesseduffield-gocui, printrun, cura-engine, theme-d, theme-d-gnome.

The DOPOM package for this month was gengetopt.

Planet DebianPetter Reinholdtsen: Norwegian movies that might be legal to share on the Internet

While working on identifying and counting movies that can be legally shared on the Internet, I also looked at the Norwegian movies listed in IMDb. So far I have identified 54 candidates published before 1940 that might no longer be protected by Norwegian copyright law. Of these, only 29 are available, at least in part, from the Norwegian National Library. It can be assumed that the remaining 25 movies are lost. It seems most useful to identify the copyright status of the movies that are not lost. To verify that a movie is really no longer protected, one needs to verify the list of copyright holders and figure out if and when they died. I've been able to identify some of them, but for some it is hard to figure out when they died.

This is the list of 29 movies both available from the library and possibly no longer protected by copyright law. The year range (1909-1979 on the first line) is year of publication and last year with copyright protection.

1909-1979 ( 70 year) NSB Bergensbanen 1909 - http://www.imdb.com/title/tt0347601/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons likfærd - http://www.imdb.com/title/tt9299304/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons begravelse - http://www.imdb.com/title/tt9299300/
1912-1998 ( 86 year) Roald Amundsens Sydpolsferd (1910-1912) - http://www.imdb.com/title/tt9237500/
1913-2006 ( 93 year) Roald Amundsen på sydpolen - http://www.imdb.com/title/tt0347886/
1917-1987 ( 70 year) Fanden i nøtten - http://www.imdb.com/title/tt0346964/
1919-2018 ( 99 year) Historien om en gut - http://www.imdb.com/title/tt0010259/
1920-1990 ( 70 year) Kaksen på Øverland - http://www.imdb.com/title/tt0011361/
1923-1993 ( 70 year) Norge - en skildring i 6 akter - http://www.imdb.com/title/tt0014319/
1925-1997 ( 72 year) Roald Amundsen - Ellsworths flyveekspedition 1925 - http://www.imdb.com/title/tt0016295/
1925-1995 ( 70 year) En verdensreise, eller Da knold og tott vaskede negrene hvite med 13 sæpen - http://www.imdb.com/title/tt1018948/
1926-1996 ( 70 year) Luftskibet 'Norge's flugt over polhavet - http://www.imdb.com/title/tt0017090/
1926-1996 ( 70 year) Med 'Maud' over Polhavet - http://www.imdb.com/title/tt0017129/
1927-1997 ( 70 year) Den store sultan - http://www.imdb.com/title/tt1017997/
1928-1998 ( 70 year) Noahs ark - http://www.imdb.com/title/tt1018917/
1928-1998 ( 70 year) Skjæbnen - http://www.imdb.com/title/tt1002652/
1928-1998 ( 70 year) Chefens cigarett - http://www.imdb.com/title/tt1019896/
1929-1999 ( 70 year) Se Norge - http://www.imdb.com/title/tt0020378/
1929-1999 ( 70 year) Fra Chr. Michelsen til Kronprins Olav og Prinsesse Martha - http://www.imdb.com/title/tt0019899/
1930-2000 ( 70 year) Mot ukjent land - http://www.imdb.com/title/tt0021158/
1930-2000 ( 70 year) Det er natt - http://www.imdb.com/title/tt1017904/
1930-2000 ( 70 year) Over Besseggen på motorcykel - http://www.imdb.com/title/tt0347721/
1931-2001 ( 70 year) Glimt fra New York og den Norske koloni - http://www.imdb.com/title/tt0021913/
1932-2007 ( 75 year) En glad gutt - http://www.imdb.com/title/tt0022946/
1934-2004 ( 70 year) Den lystige radio-trio - http://www.imdb.com/title/tt1002628/
1935-2005 ( 70 year) Kronprinsparets reise i Nord Norge - http://www.imdb.com/title/tt0268411/
1935-2005 ( 70 year) Stormangrep - http://www.imdb.com/title/tt1017998/
1936-2006 ( 70 year) En fargesymfoni i blått - http://www.imdb.com/title/tt1002762/
1939-2009 ( 70 year) Til Vesterheimen - http://www.imdb.com/title/tt0032036/

To be sure which of these can be legally shared on the Internet, in addition to verifying that the list of right holders is complete, one needs to verify the death year of each of these persons:

Bjørnstjerne Bjørnson (dead 1910) - http://www.imdb.com/name/nm0085085/
Gustav Adolf Olsen (missing death year) - http://www.imdb.com/name/nm0647652/
Gustav Lund (missing death year) - http://www.imdb.com/name/nm0526168/
John W. Brunius (dead 1937) - http://www.imdb.com/name/nm0116307/
Ola Cornelius (missing death year) - http://www.imdb.com/name/nm1227236/
Oskar Omdal (dead 1927) - http://www.imdb.com/name/nm3116241/
Paul Berge (missing death year) - http://www.imdb.com/name/nm0074006/
Peter Lykke-Seest (dead 1948) - http://www.imdb.com/name/nm0528064/
Roald Amundsen (dead 1928) - https://www.imdb.com/name/nm0025468/
Sverre Halvorsen (dead 1936) - http://www.imdb.com/name/nm1299757/
Thomas W. Schwartz (missing death year) - http://www.imdb.com/name/nm2616250/

Perhaps you can help me figure out the death years of those missing them, or the right holders where those are missing in IMDb? It would be nice to have a definite list of Norwegian movies that are legal to share on the Internet.

This is the list of 25 movies not available from the library and possibly no longer protected by copyright law:

1907-2009 (102 year) Fiskerlivets farer - http://www.imdb.com/title/tt0121288/
1912-2018 (106 year) Historien om en moder - http://www.imdb.com/title/tt0382852/
1912-2002 ( 90 year) Anny - en gatepiges roman - http://www.imdb.com/title/tt0002026/
1916-1986 ( 70 year) The Mother Who Paid - http://www.imdb.com/title/tt3619226/
1917-2018 (101 year) En vinternat - http://www.imdb.com/title/tt0008740/
1917-2018 (101 year) Unge hjerter - http://www.imdb.com/title/tt0008719/
1917-2018 (101 year) De forældreløse - http://www.imdb.com/title/tt0007972/
1918-2018 (100 year) Vor tids helte - http://www.imdb.com/title/tt0009769/
1918-2018 (100 year) Lodsens datter - http://www.imdb.com/title/tt0009314/
1919-2018 ( 99 year) Æresgjesten - http://www.imdb.com/title/tt0010939/
1921-2006 ( 85 year) Det nye year? - http://www.imdb.com/title/tt0347686/
1921-1991 ( 70 year) Under Polarkredsens himmel - http://www.imdb.com/title/tt0012789/
1923-1993 ( 70 year) Nordenfor polarcirkelen - http://www.imdb.com/title/tt0014318/
1925-1995 ( 70 year) Med 'Stavangerfjord' til Nordkap - http://www.imdb.com/title/tt0016098/
1926-1996 ( 70 year) Over Atlanterhavet og gjennem Amerika - http://www.imdb.com/title/tt0017241/
1926-1996 ( 70 year) Hallo! Amerika! - http://www.imdb.com/title/tt0016945/
1926-1996 ( 70 year) Tigeren Teodors triumf - http://www.imdb.com/title/tt1008052/
1927-1997 ( 70 year) Rød sultan - http://www.imdb.com/title/tt1017979/
1927-1997 ( 70 year) Søndagsfiskeren Flag - http://www.imdb.com/title/tt1018002/
1930-2000 ( 70 year) Ro-ro til fiskeskjær - http://www.imdb.com/title/tt1017973/
1933-2003 ( 70 year) I kongens klær - http://www.imdb.com/title/tt0024164/
1934-2004 ( 70 year) Eventyret om de tre bukkene bruse - http://www.imdb.com/title/tt1007963/
1934-2004 ( 70 year) Pål sine høner - http://www.imdb.com/title/tt1017966/
1937-2007 ( 70 year) Et mesterverk - http://www.imdb.com/title/tt1019937/
1938-2008 ( 70 year) En Harmony - http://www.imdb.com/title/tt1007975/

Several of these movies completely lack right holder information in IMDb and elsewhere. Without access to a copy of the movie, it is often impossible to get the list of people involved in making the movie, making it impossible to figure out the correct copyright status.

Not listed here are the movies still protected by copyright law. Their copyright terms vary from 79 to 144 years, according to the information I have available so far. One of the non-lost movies, Mustads Mono from 1920, might change status next year. The next one might be Hvor isbjørnen ferdes from 1935, in 2024.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianPaul Wise: FLOSS Activities August 2019

Changes

Issues

Review

Administration

  • Debian: restart dead stunnels
  • Debian wiki: unblacklist IP addresses, whitelist email addresses, whitelist email domains

Communication

Sponsors

The purple-discord, libgoogle-protocolbuffers-perl work was sponsored by my employer. All other work was done on a volunteer basis.

Planet DebianRuss Allbery: C TAP Harness 4.5

Peter Paris requested that C TAP Harness support being built as C++ code. I've not been a big fan of doing this with pure C code, since I find some of the requirements of C++ mildly irritating, but Peter's initial patch also fixed one type error in a malloc call, uncovered because of C++'s rule that the return value of malloc must be cast. It turned out to be a mostly harmless error, since the code was allocating a larger struct than it needed to, but it's still evidence that there's some potential here for catching bugs.

That said, adding an explicit cast to every malloc isn't likely to catch bugs. That's just having to repeat oneself in every allocation, and you're nearly as likely to repeat yourself incorrectly.

However, if one is willing to use a macro instead of malloc directly, this is fixable, and I'm willing to do that since I was already using a macro for allocation to do error handling. So I've modified the code to pass in the type of object to allocate instead of the size, and then used a macro to add the return cast. This makes for somewhat cleaner code and also makes it possible to build the code as pure C++. I also added some functions to the TAP generator library, bcalloc_type and breallocarray_type, that take the same approach. (I didn't remove the old functions, to maintain backward compatibility.)
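
The pattern looks roughly like this (my sketch of the idea, not the actual C TAP Harness code; xmalloc stands in for the real error-checked allocator):

#include <stdlib.h>

/* Stand-in for an error-checked allocator; the real one aborts on failure. */
static void *xmalloc(size_t size) {
    void *p = malloc(size);
    if (p == NULL)
        abort();
    return p;
}

/* Pass the type rather than the size: the macro computes the size and
 * supplies the cast that C++ requires, so the same call sites compile
 * as both C and C++ without repeating the type by hand. */
#define xmalloc_type(type) ((type *) xmalloc(sizeof(type)))

struct config { int verbose; };

int main(void) {
    struct config *c = xmalloc_type(struct config);
    c->verbose = 1;
    free(c);
    return 0;
}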

I'm reasonably happy with the results, although it's a bit of a hassle and I'm not sure if I'm going to go to the trouble in all of my other C packages. But I'm at least considering it. (Of course, I'm also considering rewriting them all in Rust, and considering my profound lack of time to do either of these things.)

You can get the latest release from the C TAP Harness distribution page.

,

Planet DebianSylvain Beucler: Debian LTS and ELTS - August 2019

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

Yes, that changed since last month, as I was offered to work on ELTS :)

In August, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 21.75h for LTS (out of 30 max) and 14h for ELTS (max).

Interestingly I was able to factor out some time between LTS and ELTS while working on vim and tomcat for both suites.

LTS - Jessie

  • squirrelmail: CVE-2019-12970: locate patch, refresh previous fix with new upstream-blessed version, security upload
  • vim: CVE-2017-11109, CVE-2017-17087, CVE-2019-12735: analyze and reproduce issues (one of them not fully exploitable), fix new and postponed issues, security upload
  • tomcat8: improve past patch to fix the test suite, report and refresh test certificates
  • tomcat8: CVE-2016-5388, CVE-2018-8014, CVE-2019-0221: requalify old not-affected issue, fix new and postponed issues, security upload

Documentation:

  • wiki: document good upload/test practices (pbuilder and lintian+debdiff+piuparts); request for comments
  • www.debian.org: import missing DLA-1810 (tomcat7/CVE-2019-0221)
  • freeimage: update dla-needed.txt status

ELTS - Wheezy

  • Get acquainted with the new procedures and set up build/test environments
  • vim: CVE-2017-17087, CVE-2019-12735: analyze and reproduce issues (one of them not fully exploitable), fix new and pending issues, security upload
  • tomcat7: CVE-2016-5388: requalify old not-affected issue, security upload

Documentation:

  • raise concern about missing dependency in our list of supported packages
  • user documentation: doc fix apt-key list -> apt-key finger
  • triage: mark a few CVE as EOL, fix-up missing fixed versions in data/ELA/list (not automated anymore following the oldoldstable -> oldoldold(!)stable switch)

While not part of Debian strictly speaking, ELTS strives for the same level of transparency, see in particular the Git repositories: https://salsa.debian.org/freexian-team/extended-lts

Sam VargheseAustralian politicians are in it for the money

Australian politicians are in the game for one thing: money. Most of them are so incompetent that they would not be paid even half of what they earn were they to try for jobs in the private sector.

That’s why former members of the Victorian state parliament, who were voted out at the last election in 2018, are struggling to find jobs.

Apparently, some have been told by recruitment agencies that they “don’t know where to fit you”, according to a news report from the Melbourne tabloid Herald Sun.

People who enter politics are paid well in Australia, far above what people are paid by the private sector, unless one is very high up in the hierarchy.

Politicians get where they are by doing favours for people in high places and moving up the greasy pole.

They get all kinds of fancy allowances and benefits. They have no scruples about taking from the public purse whenever they can without getting caught.

They are the worst kind of scum.

Australia is a highly over-governed place, with three levels of government: the national parliament, the parliaments in the different states and territories and the local governments.

At each level there is plenty of scope for fattening one’s own lamb. There are a handful of people who have some kind of vocation for public service; the rest are out to grab whatever they can before they are voted out.

Nobody should have any pity for people of this kind given what they do when they are in office. About the only thing they do is to prepare things so that they will have a job here, there or anywhere when they finally get thrown out of politics.

Some get lanced so early in their political lives that they are unprepared. Perhaps they should be put to work as garbage collectors. But one doubts they would have the physical and mental fortitude to get through such a job.

Planet DebianChris Lamb: Free software activities in August 2019

Here is my monthly update covering most of what I have been doing in the free software world during August 2019 (previous month):

  • Opened pull requests to make the build reproducible for Mozilla's Bleach [...] and the re2c regular expression library [...].

Tails

For the Tails privacy-oriented operating system, I made a number of updates as part of the pkg-privacy-tools team in Debian:

  • onionshare:

    • Package new upstream version 2.1. [...]
    • Correct spelling, format and syntax errors in manpage.
    • Update debian/copyright; socks.py no longer in upstream.
    • Misc updates:
      • Drop "ancient" X-Python3-Version specifier (satisfied in oldoldstable).
      • Move to debhelper compatibility level 12 and use the debhelper-compat virtual package, dropping debian/compat.
    • debian/watch: Ignore dev releases and move to version 4 format.
  • monkeysphere:

    • Prevent a FTBFS by updating the tests to accommodate an updated GnuPG in stretch now producing a different output. (#934034).

    • I also filed a "proposed update" to actually update the package in the stretch distribution. (#934775)

  • onioncircuits: Update continuous integration tests to the Python 3.x version of Dogtail. (#935174)

  • seahorse-nautilus: (Almost) no-change upload to unstable to ensure migration to the testing distribution as binaries were uploaded with previous 3.11.92-3 release. [...]

  • obfs4proxy: Move to using the debian-compat virtual package, level 12. [...]


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.
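
In practice, the simplest invocation just takes the two artifacts to compare (a minimal sketch; the file names are placeholders):

    # print a text diff of two builds of the same package
    diffoscope build1/foo.deb build2/foo.deb

    # or write a self-contained HTML report instead
    diffoscope --html report.html build1/foo.deb build2/foo.deb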

Improvements:

  • Don't fall back to an unhelpful raw hexdump when, for example, readelf(1) reports a minor issue in a section in an ELF binary. For instance, when the .frames section is of the NOBITS type, its contents are apparently "unreliable" and thus readelf(1) returns 1. (#58, #931962)
  • Include either standard error or standard output (not just the latter) when an external command fails. [...]

Bug fixes:

  • Skip calls to unsquashfs when we are neither root nor running under fakeroot. (#63)
  • Ensure that all of our artificially-created subprocess.CalledProcessError instances have output instances that are bytes objects, not str. [...]
  • Correct a reference to parser.diff; diff in this context is a Python function in the module. [...]
  • Avoid a possible traceback caused by a str/bytes type confusion when handling the output of failing external commands. [...]

Testsuite improvements:

  • Test for 4.4 in the output of squashfs -version, even though the Debian package version is 1:4.3+git190823-1. [...]
  • Apply a patch from László Böszörményi to update the squashfs test output and additionally bump the required version for the test itself. (#62 & #935684)
  • Add the wabt Debian package to the test-dependencies so that we run the WebAssembly tests on our continuous integration platform, etc. [...]

Improve debugging:

  • Add the containing module name to the (eg.) Using StaticLibFile for ... debugging messages. [...]
  • Strip off trailing "original size modulo 2^32 671" (etc.) from gzip compressed data as this is just a symptom of the contents itself changing that will be reflected elsewhere. (#61)
  • Avoid a lack of space between "... with return code 1" and "Standard output". [...]
  • Improve debugging output when instantiating our Comparator object types. [...]
  • Add a literal "eg." to the comment on stripping "original size modulo..." text to emphasise that the actual numbers are not fixed. [...]

Internal code improvements:

  • No need to parse the section group from the class name; we can pass it via the type built-in's kwargs argument. [...]
  • Add support to Difference.from_command_exc and friends to ignore specific returncodes from the called program and treat them as "no" difference. [...]
  • Simplify parsing of optional command_args argument to Difference.from_command_exc. [...]
  • Set long_description_content_type to text/x-rst to appease the PyPI.org linter. [...]
  • Reposition a comment regarding an exception within the indented block to match Python code convention. [...]


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add support for enabling and disabling specific normalizers via the command line. (#10) (See the usage sketch after this list.)
  • Drop accidentally-committed warning emitted on every fixture-based test. [...]
  • Reintroduce the .ar normalizer [...] but disable it by default so that it can be enabled with --normalizers=+ar or similar. (#3)
  • In verbose mode, print the normalizers that strip-nondeterminism will apply. [...]
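
Putting the new options together, a typical run might look like this (a sketch: the archive name is a placeholder and the --verbose spelling for verbose mode is an assumption; only the --normalizers=+ar flag is taken from the changelog above):

    # list the normalizers being applied, with the ar normalizer re-enabled
    strip-nondeterminism --verbose --normalizers=+ar libfoo.a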

Debian

Lintian

More hacking on the Lintian static analysis tool for Debian packages, including uploading versions 2.17.0, 2.18.0 and 2.19.0.



Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, etc.

  • Investigated and triaged cent, clamav, enigmail, freeradius, ghostscript, libcrypto++, musl, open-cobol, pango1.0, php5, python-django, python-werkzeug, radare2, salt, subversion, suricata, u-boot, xtrlock & yara.

  • Updated our lts-cve-triage.py script to correct an undefined reference to colored when standard output is not a terminal [...] and to address a number of flake8 issues [...]. (A sketch of the terminal-detection pattern appears after this list.)

  • Worked on a number of iterations towards a comprehensive patch for xtrlock to address an issue whereby multitouch events (such as on a tablet or many modern laptops) are not correctly locked. Although originally filed by a user as #830726, I was able to reproduce the issue whilst triaging bugs for this package. I thus requested and was granted my first CVE number (CVE-2016-10894) and hope to upload a patched version early next month.

  • Issued DLA 1896-1 to fix a remote arbitrary code execution vulnerability in commons-beanutils, a set of tools and utilities for manipulating JavaBeans.

  • Issued DLA 1872-1 for the Django web development framework correcting two denial of service vulnerabilities and requiring a backport of upstream's patch series. I also fixed these issues in the buster distribution as well as an SQL injection possibility and potential memory exhaustion issues.
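
As an aside, the colored fix mentioned above follows a well-worn pattern: only emit colour when attached to a real terminal. A minimal sketch of the idea in Python (hypothetical names; this is not the actual lts-cve-triage.py code, and the termcolor import is an assumption):

    import sys

    if sys.stdout.isatty():
        # interactive terminal: use real ANSI colours
        from termcolor import colored
    else:
        # piped or redirected: define a no-op fallback so that later
        # references to colored() are never undefined
        def colored(text, *args, **kwargs):
            return text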



FTP Team

As a Debian FTP assistant I ACCEPTed 28 packages: bitshuffle, golang-github-abdullin-seq, golang-github-centurylinkcloud-clc-sdk, golang-github-cnf-structhash, golang-github-deanthompson-ginpprof, golang-github-ensighten-udnssdk, golang-github-gin-contrib-cors, golang-github-gin-contrib-gzip, golang-github-gin-contrib-static, golang-github-hansrodtang-randomcolor, golang-github-jarcoal-httpmock, golang-github-mcuadros-go-gin-prometheus, golang-github-mitchellh-go-linereader, golang-github-nesv-go-dynect, golang-github-sethvargo-go-fastly, golang-github-terra-farm-udnssdk, golang-github-yourbasic-graph, golang-github-ziutek-mymysql, golang-gopkg-go-playground-colors.v1, gulkan, kdeplasma-applets-xrdesktop, libcds, libinputsynth, openvr, parfive, transip, znc & znc-push.

,

CryptogramFriday Squid Blogging: Why Mexican Jumbo Squid Populations Have Declined

A group of scientists conclude that it's shifting weather patterns and ocean conditions.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDWhat does it mean to become a TED Fellow?

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Every year, TED begins a new search looking for the brightest thinkers and innovators to be part of the TED Fellows program. With nearly 500 visionaries representing 300 different disciplines, these extraordinary individuals are making waves, disrupting the status quo and creating real impact.

Through a rigorous application process, we narrow down our candidate pool of thousands to just 20 exceptional people. (Trust us, this is not easy to do.) You may be wondering what makes for a good application (read more about that here), but just as importantly: What exactly does it mean to be a TED Fellow? Yes, you’ll work hand-in-hand with the Fellows team to give a TED Talk on stage, but being a Fellow is so much more than that. Here’s what happens once you get that call.

1. You instantly have a built-in support system.

Once selected, Fellows become part of our active global community. They are connected to a diverse network of other Fellows who they can lean on for support, resources and more. To get a better sense of who these people are (fishing cat conservationists! space environmentalists! police captains!), take a closer look at our class of 2019 Fellows, who represent 12 countries across four continents. Their common denominator? They are looking to address today’s most complex challenges and collaborate with others — which could include you.

2. You can participate in TED’s coaching and mentorship program.

To help Fellows achieve an even greater impact with their work, they are given the opportunity to participate in a one-of-a-kind coaching and mentoring initiative. Collaboration with a world-class coach or mentor helps Fellows maximize effectiveness in their professional and personal lives and make the most of the fellowship.

The coaches and mentors who support the program are some of the world’s most effective and intuitive individuals, each inspired by the TED mission. Fellows have reported breakthroughs in financial planning, organizational effectiveness, confidence and interpersonal relationships thanks to coaches and mentors. Head here to learn more about this initiative. 

3. You’ll receive public relations guidance and professional development opportunities, curated through workshops and webinars. 

Have you published exciting new research or launched a groundbreaking project? We partner with a dedicated PR agency to provide PR training and valuable media opportunities with top tier publications to help spread your ideas beyond the TED stage. The TED Fellows program has been recognized by PR News for our “PR for Fellows” program.

In addition, there are vast opportunities for Fellows to hone their skills and build new ones through invigorating workshops and webinars that we arrange throughout the year. We also maintain a Fellows Blog, where we continue to spotlight Fellows long after they give their talks.

***

Over the last decade, our program has helped Fellows impact the lives of more than 180 million people. Success and innovation like this doesn’t happen in a vacuum — it’s sparked by bringing Fellows together and giving them this kind of support. If this sounds like a community you want to join, apply to become a TED Fellow by August 27, 2019 11:59pm UTC.

Sociological ImagesSurviving Student Debt

Recent estimates indicate that roughly 45 million students in the United States have incurred student loans during college. Democratic candidates like Senators Elizabeth Warren and Bernie Sanders have proposed legislation to relieve or cancel this debt burden. Sociologist Tressie McMillan Cottom's congressional testimony on behalf of Warren's student loan relief plan last April reveals the importance of sociological perspectives on the debt crisis. Sociologists have recently documented the conditions driving student loan debt and its impacts across race and gender.

College debt is the new black.
Photo Credit: Mike Rastiello, Flickr CC

In recent decades, students have enrolled in universities at increasing rates due to the "education gospel," where college credentials are touted as public goods and career necessities, encouraging students to seek credit. At the same time, student loan debt has rapidly increased, leading students to ask whether the risks of loan debt during early adulthood outweigh the reward of a college degree. Student loan risks include economic hardship, mental health problems, and delayed adult transitions such as starting a family. Individual debt has also led to disparate impacts among students of color, who are more likely to hail from low-income families. Recent evidence suggests that Black students are more likely to drop out of college due to debt and return home after incurring more debt than their white peers. Racial disparities in student loan debt continue into their mid-thirties and impact the white-Black racial wealth gap.

365.75
Photo Credit: Kirstie Warner, Flickr CC

Other work reveals gendered disparities in student debt. One survey found that while women were more likely to incur debt than their male peers, men with higher levels of student debt were more likely to drop out of college than women with similar amounts of debt. The authors suggest that women's labor market opportunities — often more likely to require college degrees than men's — may account for these differences. McMillan Cottom's interviews with 109 students from for-profit colleges uncover how Black, low-income women in particular bear the burden of student loans. For many of these women, the rewards of college credentials outweigh the risks of high student loan debt.

Amber Joy is a PhD candidate in the Department of Sociology at the University of Minnesota. Her current research interests include punishment, policing, victimization, youth, and the intersections of race, gender, and sexuality. Her dissertation explores youth responses to sexual violence within youth correctional facilities.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityPhishers are Angling for Your Cloud Providers

Many companies are now outsourcing their marketing efforts to cloud-based Customer Relationship Management (CRM) providers. But when accounts at those CRM providers get hacked or phished, the results can be damaging for both the client’s brand and their customers. Here’s a look at a recent CRM-based phishing campaign that targeted customers of Fortune 500 construction equipment vendor United Rentals.

Stamford, Ct.-based United Rentals [NYSE:URI] is the world’s largest equipment rental company, with some 18,000 employees and earnings of approximately $4 billion in 2018. On August 21, multiple United Rentals customers reported receiving invoice emails with booby-trapped links that led to a malware download for anyone who clicked.

While phony invoices are a common malware lure, this particular campaign sent users to a page on United Rentals’ own Web site (unitedrentals.com).

A screen shot of the malicious email that spoofed United Rentals.

In a notice to customers, the company said the unauthorized messages were not sent by United Rentals. One source who had at least two employees fall for the scheme forwarded KrebsOnSecurity a response from UR’s privacy division, which blamed the incident on a third-party advertising partner.

“Based on current knowledge, we believe that an unauthorized party gained access to a vendor platform United Rentals uses in connection with designing and executing email campaigns,” the response read.

“The unauthorized party was able to send a phishing email that appears to be from United Rentals through this platform,” the reply continued. “The phishing email contained links to a purported invoice that, if clicked on, could deliver malware to the recipient’s system. While our investigation is continuing, we currently have no reason to believe that there was unauthorized access to the United Rentals systems used by customers, or to any internal United Rentals systems.”

United Rentals told KrebsOnSecurity that its investigation so far reveals no compromise of its internal systems.

“At this point, we believe this to be an email phishing incident in which an unauthorized third party used a third-party system to generate an email campaign to deliver what we believe to be a banking trojan,” said Dan Higgins, UR’s chief information officer.

United Rentals would not name the third party marketing firm thought to be involved, but passive DNS lookups on the UR subdomain referenced in the phishing email (used by UR for marketing since 2014 and visible in the screenshot above as “wVw.unitedrentals.com”) point to Pardot, an email marketing division of cloud CRM giant Salesforce.

Companies that use cloud-based CRMs sometimes will dedicate a domain or subdomain they own specifically for use by their CRM provider, allowing the CRM to send emails that appear to come directly from the client’s own domains. However, in such setups the content that gets promoted through the client’s domain is actually hosted on the cloud CRM provider’s systems.
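
This delegation is often visible directly in DNS. As a purely illustrative example (hypothetical subdomain and answer), a dedicated marketing subdomain will typically be a CNAME into the CRM provider's infrastructure:

    # look up the CNAME record for a dedicated marketing subdomain
    dig +short CNAME go.example.com
    # an answer would resemble: go.pardot.com.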

Salesforce told KrebsOnSecurity that this was not a compromise of Pardot, but of a Pardot customer account that was not using multi-factor authentication.

“UR uses a third party marketing agency that utilizes the Pardot platform,” said Salesforce spokesman Bradford Burns. “The third party marketing agency is who was compromised, not a Pardot employee.”

This attack comes on the heels of another targeted phishing campaign leveraging Pardot that was documented earlier this month by Netskope, a cloud security firm. Netskope’s Ashwin Vamshi said users of cloud CRM platforms have a high level of trust in the software because they view the data and associated links as internal, even though they are hosted in the cloud.

“A large number of enterprises provide their vendors and partners access to their CRM for uploading documents such as invoices, purchase orders, etc. (and often these happen as automated workflows),” Vamshi wrote. “The enterprise has no control over the vendor or partner device and, more importantly, over the files being uploaded from them. In many cases, vendor- or partner-uploaded files carry with them a high level of implicit trust.”

Cybercriminals increasingly are targeting cloud CRM providers because compromised accounts on these systems can be leveraged to conduct extremely targeted and convincing phishing attacks. According to the most recent stats (PDF) from the Anti-Phishing Working Group, software-as-a-service providers (including CRM and Webmail providers) were the most-targeted industry sector in the first quarter of 2019, accounting for 36 percent of all phishing attacks.

Image: APWG

Update, 2:55 p.m. ET: Added comments and responses from Salesforce.

Planet DebianDimitri John Ledkov: How to disable TLS 1.0 and TLS 1.1 on Ubuntu

Example of a website that only supports TLS v1.0, which is rejected by the client

Overview

TLS v1.3 is the latest standard for secure communication over the internet. It is widely supported by desktops, servers and mobile phones. Recently, Ubuntu 18.04 LTS received an OpenSSL 1.1.1 update, bringing the ability to potentially establish TLS v1.3 connections on the latest Ubuntu LTS release. The Qualys SSL Labs Pulse report shows more than 15% adoption of TLS v1.3. It really is time to migrate from TLS v1.0 and TLS v1.1.

As announced on the 15th of October 2018, Apple, Google, and Microsoft will disable TLS v1.0 and TLS v1.1 support by default, thus requiring TLS v1.2 to be supported by all clients and servers. Similarly, Ubuntu 20.04 LTS will require TLS v1.2 as the minimum TLS version.

To prepare for the move to TLS v1.2, it is a good idea to disable TLS v1.0 and TLS v1.1 on your local systems and start observing and reporting any websites, systems and applications that do not support TLS v1.2.
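
One way to check what a given site negotiates is a small probe like the following (a sketch using Python's standard library; replace the hostname with a site you care about — a handshake failure indicates no mutually supported protocol version):

    import socket, ssl

    host = "example.com"
    ctx = ssl.create_default_context()
    with socket.create_connection((host, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # prints the negotiated protocol version, e.g. 'TLSv1.3'
            print(tls.version())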

How to disable TLS v1.0 and TLS v1.1 in Google Chrome on Ubuntu

  1. Create policy directory
    sudo mkdir -p /etc/opt/chrome/policies/managed
  2. Create /etc/opt/chrome/policies/managed/mintlsver.json with
    {
        "SSLVersionMin" : "tls1.2"
    }

How to disable TLS v1.0 and TLS v1.1 in Firefox on Ubuntu

  1. Navigate to about:config in the URL bar
  2. Search for security.tls.version.min setting
  3. Set it to 3, which stands for a minimum of TLS v1.2

How to disable TLS v1.0 and TLS v1.1 in OpenSSL

  1. Edit /etc/ssl/openssl.cnf
  2. After oid_section stanza add
    # System default
    openssl_conf = default_conf
  3. At the end of the file add
    [default_conf]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    MinProtocol = TLSv1.2
    CipherString = DEFAULT@SECLEVEL=2
  4. Save the file

How to disable TLS v1.0 and TLS v1.1 in GnuTLS

  1. Create config directory
    sudo mkdir -p /etc/gnutls/
  2. Create /etc/gnutls/default-priorities with
    SYSTEM=SECURE192:-VERS-ALL:+VERS-TLS1.3:+VERS-TLS1.2 
After performing the above tasks, most common applications will use TLS v1.2+.
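
To verify the OpenSSL defaults in particular, you can attempt an old-protocol handshake by hand; with MinProtocol set to TLSv1.2 as above, the first command should now fail to negotiate (the hostname is a placeholder):

    # should be rejected now that MinProtocol = TLSv1.2
    openssl s_client -connect example.com:443 -tls1
    # should still succeed against a TLS v1.2-capable server
    openssl s_client -connect example.com:443 -tls1_2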

I have set these defaults on my systems, and I occasionally hit websites that only support TLS v1.0 and I report them. Have you found any websites and systems you use that do not support TLS v1.2 yet?

Planet DebianJonathan Dowland: PhD Stage 1 Progression Report

As promised, here's the report I wrote for my PhD Stage 1 progression in the hope that it is useful or interesting to someone. I've made some very small modifications to the submitted copy in order to remove some personal information.

I'll reiterate something from when I published my proposal:

A document produced for one institution's expectations might not be directly applicable to another. … You don't have any idea whether it has been judged to be a particularly good or bad one by those who received it (you can make your own judgements).

CryptogramAttacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that's not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn't imagine that they would be necessary. The results are predictable.

The paper: "Practical Enclave Malware with Intel SGX."

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel's threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user's behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses a new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.

,

Krebs on SecurityRansomware Bites Dental Data Backup Firm

PerCSoft, a Wisconsin-based company that manages a remote data backup service relied upon by hundreds of dental offices across the country, is struggling to restore access to client systems after falling victim to a ransomware attack.

West Allis, Wis.-based PerCSoft is a cloud management provider for Digital Dental Record (DDR), which operates an online data backup service called DDS Safe that archives medical records, charts, insurance documents and other personal information for various dental offices across the United States.

The ransomware attack hit PerCSoft on the morning of Monday, Aug. 26, and encrypted dental records for some — but not all — of the practices that rely on DDS Safe.

PerCSoft did not respond to requests for comment. But Brenna Sadler, director of communications for the Wisconsin Dental Association, said the ransomware encrypted files for approximately 400 dental practices, and that somewhere between 80-100 of those clients have now had their files restored.

Sadler said she did not know whether PerCSoft and/or DDR had paid the ransom demand, what ransomware strain was involved, or how much the attackers had demanded.

But updates to PerCSoft’s Facebook page and statements published by both PerCSoft and DDR suggest someone may have paid up: The statements note that both companies worked with a third party software company and were able to obtain a decryptor to help clients regain access to files that were locked by the ransomware.

Update: Several sources are now reporting that PerCSoft did pay the ransom, although it is not clear how much was paid. One member of a private Facebook group dedicated to IT professionals serving the dental industry shared the following screenshot, which is purportedly from a conversation between PerCSoft and an affected dental office, indicating the cloud provider was planning to pay the ransom:

Another image shared by members of that Facebook group indicates the ransomware that attacked PerCSoft is an extremely advanced and fairly recent strain known variously as REvil and Sodinokibi.

Original story:

However, some affected dental offices have reported that the decryptor did not work to unlock at least some of the files encrypted by the ransomware. Meanwhile, several affected dentistry practices said they feared they might be unable to process payroll payments this week as a result of the attack.

Cloud data and backup services are a prime target of cybercriminals who deploy ransomware. In July, attackers hit QuickBooks cloud hosting firm iNSYNQ, holding data hostage for many of the company’s clients. In February, cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, cloud hosting provider Dataresolution.net took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files. In practice, however, many cybersecurity consulting firms are quietly urging their customers that paying up is the fastest route back to business-as-usual.

It remains unclear whether PerCSoft or DDR — or perhaps their insurance provider — paid the ransom demand in this attack. But new reporting from independent news outlet ProPublica this week sheds light on another possible explanation why so many victims are simply coughing up the money: Their insurance providers will cover the cost — minus a deductible that is usually far less than the total ransom demanded by the attackers.

More to the point, ProPublica found, such attacks may be great for business if you’re in the insurance industry.

“More often than not, paying the ransom is a lot cheaper for insurers than the loss of revenue they have to cover otherwise,” said Minhee Cho, public relations director of ProPublica, in an email to KrebsOnSecurity. “But, by rewarding hackers, these companies have created a perverted cycle that encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.”

“In fact, it seems hackers are specifically extorting American companies that they know have cyber insurance,” Cho continued. “After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware.”

Read the full ProPublica piece here. And if you haven’t already done so, check out this outstanding related reporting by ProPublica from earlier this year on how security firms that help companies respond to ransomware attacks also may be enabling and emboldening attackers.

Planet DebianDirk Eddelbuettel: anytime 0.3.6

A fresh and very exciting release of the anytime package is arriving on CRAN right now. This is the seventeenth release, and it comes pretty much exactly one month after the preceding 0.3.5 release.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release updates a number of things (see below for details). For users, maybe the most important change is that we now also convert dates with single-digit day and month components, i.e. a not-quite ISO input like "2019-7-5" passes. This required adding %e as a day format; I had overlooked this detail in the (copious) Boost date_time documentation. Another nice change is that we now use standard S3 dispatching rather than a manual approach, as we probably should have for a long time :-) but better late than never. The code change was actually rather minimal and done in a few minutes. Another change is a further extended use of unit testing via the excellent tinytest package, which remains a joy to use. We also expanded the introductory pdf vignette; the benchmark comparisons we included look pretty decent for anytime, which still combines ease of use and versatility with performance.
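
To illustrate with the input from above (a quick sketch; the output shown is what I would expect from this release):

    R> library(anytime)
    R> anydate("2019-7-5")    ## single-digit month and day now parse
    [1] "2019-07-05"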

Lastly, a somewhat sad "lowlight". We submitted the package to the Journal of Open Source Software, who told us within days that anytime was unworthy for lack of research focus. Needless to say, we disagree. So here is a plea: If you use anytime in a research setting, would you mind adding to this very issue ticket and saying so? This may permit us a somewhat more emphatic, data-driven riposte to the editors. Many thanks in advance for considering this.

The full list of changes follows.

Changes in anytime version 0.3.6 (2019-08-29)

  • Added, and then removed, required file for JOSS; added 'unworthy' badge as we earned a desk reject (cf #1605 there).

  • Renamed internal helper function format() to fmt() to avoid clashes with base::format() (Dirk in #104).

  • Use S3 dispatch and generics for key functions (Dirk in #106).

  • Continued to tweak tests as we find some of the rhub platform to behave strangely (Dirk via commits as well as #107).

  • Added %e format for single-digit day parsing by Boost (Dirk addressing at least #24, #70 and #99).

  • Expanded and updated the vignette with benchmark comparisons.

  • Updated unit tests using tinytest which remains a pleasure to use; versioned Suggests: is now '>= 1.0.0'.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramAI Emotion-Detection Arms Race

Voice systems are increasingly using AI techniques to determine emotion. A new paper describes an AI-based countermeasure to mask emotion in spoken words.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech using the AI's outputs, which gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.

Academic paper.

Worse Than FailureCodeSOD: Bassackwards Compatibility

A long time ago, you built a web service. It was long enough ago that you chose XML as your serialization format. It worked fine, but before long, customers started saying that they’d really like to use JSON, so now you need to expose a slightly different, JSON-powered version of your API. To make it easy, you release a JSON client developers can drop into their front-ends.

Conor is one of those developers, and while examining the requests the client sent, he discovered a unique way of making your XML web-service JSON-friendly.

{"fetch":"<fetch version='1.0'><entity><entityDescriptor id='10'/>…<loadsMoreXML/></entity></fetch>"}

Simplicity itself!
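
For contrast, a native JSON rendering of the same request might have looked something like this (a hypothetical reconstruction; the real schema is whatever the XML service defines):

    {
        "fetch": {
            "version": "1.0",
            "entity": {
                "entityDescriptor": { "id": 10 }
            }
        }
    }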

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Planet DebianSteve McIntyre: If you can't stand the heat, get out of the kitchen...

Wow, we had a hot weekend in Cambridge. About 40 people turned up at our place for this year's OMGWTFBBQ. Last year we were huddling under the gazebos for shelter from torrential rain; this year we again had all the gazebos up, but this time to hide from the sun instead. We saw temperatures well into the 30s, which is silly for Cambridge at the end of August.

I think it's fair to say that everybody enjoyed themselves despite the ludicrous heat levels. We had folks from all over the UK; Lars and Soile even travelled all the way from Helsinki in Finland, and we helped Lars celebrate his birthday!

cake!

We had a selection of beers again from the nice folks at Milton Brewery:
is 3 firkins enough?

Lars made pancakes, Paul made bread, and people brought lots of nice food and drink with them too.

Many thanks to a number of awesome friendly companies for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!

Planet DebianJulien Danjou: The Art of PostgreSQL is out!

The Art of PostgreSQL is out!

You may remember that, a couple of years ago, I wrote about Mastering PostgreSQL, a fantastic book written by my friend Dimitri Fontaine.

Dimitri is a long-time PostgreSQL core developer — he wrote the extension support in PostgreSQL, no less. He is featured in my book Serious Python, where he gives advice on using databases and ORMs in Python.

Today, Dimitri comes back with the new version of this book, named The Art of PostgreSQL.

As a bonus, here's a picture of me and Dimitri having fun at a PostgreSQL meetup!

I love the motto of this book: Turn Thousands of Lines of Code into Simple Queries. I have spent my whole career working with code that talks to databases, and I can't count the number of times I've seen people write lengthy, slow code in their pet language rather than a single well-thought-out SQL query that would do a better job.
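
A trivial illustration of what I mean (my own toy example, not one from the book): instead of fetching every row and aggregating it in application code, let the database answer the actual question:

    -- one round-trip, no application-side loop
    SELECT channel, sum(amount) AS total
      FROM payments
     GROUP BY channel
     ORDER BY total DESC;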


This is exactly what this book is about.

That's why it's my favorite SQL book. I learned so many things from it. In many cases, I've been able to cut the size of the code I had to write in Python for a feature by a factor of ten. All I had to do was browse the book to discover the right PostgreSQL feature and write a single SQL query. The right query that does the job for me.

Less code, fewer bugs, more happiness!

The book also features interviews with great PostgreSQL users and developers — hey, no need to wonder where Dimitri got this idea, right? ;-)


I loved those interviews. What's better than reading Kris Jenkins explaining how Clojure and PostgreSQL play nice together, or Markus Winand (from the famous use-the-index-luke.com) talking about the relationship developers have with their database. :-)

Needless to say, you should get your hands on this right now. Dimitri has just launched an offer with a 15% discount on the book until the end of this month! You can also read the free chapter to get an idea of what you'll get.

Last thing: it's DRM-free and money-back guaranteed. You can get this book with your eyes closed.


Google AdsenseSimplifying our content policies for publishers

One of our top priorities is to sustain a healthy digital advertising ecosystem, one that works for everyone: users, advertisers and publishers. On a daily basis, teams of Google engineers, policy experts, and product managers combat and stop bad actors. Just last year, we removed 734,000 publishers and app developers from our ad network and ads from nearly 28 million pages that violated our publisher policies.

But we’re not just stopping bad actors. Just as critical to our mission is the work we do every day to help good publishers in our network succeed. One consistent piece of feedback we’ve heard from our publishers is that they want us to further simplify our policies, across products, so they are easier to understand and follow. That’s why we'll be simplifying the way our content policies are presented to publishers, and standardizing content policies across our publisher products.

A simplified publisher experience
In September, we’ll update the way our publisher content policies are presented with a clear outline of the types of content where advertising is not allowed or will be restricted.

Our Google Publisher Policies will outline the types of content that are not allowed to show ads through any of our publisher products. This includes policies against illegal content, dangerous or derogatory content, and sexually explicit content, among others.

Our Google Publisher Restrictions will detail the types of content, such as alcohol or tobacco, that don’t violate policy, but that may not be appealing for all advertisers. Publishers will not receive a policy violation for trying to monetize this content, but only some advertisers and advertising products—the ones that choose this kind of content—will bid on it. As a result, Google Ads will not appear on this content and this content will receive less advertising than non-restricted content will. 

The Google Publisher Policies and Google Publisher Restrictions will apply to all publishers, regardless of the products they use—AdSense, AdMob or Ad Manager.

These changes are the next step in our ongoing efforts to make it easier for publishers to navigate our policies so their businesses can continue to thrive with the help of our publisher products.


Posted by:
Scott Spencer, Director of Sustainable Ads


Planet DebianArturo Borrero González: Wikimania 2019 Stockholm summary

Wikimania 2019 logo

A couple of weeks ago I attended the Wikimania 2019 conference in Stockholm, Sweden. This is the general and global conference for the Wikimedia movement, in which people interested in free knowledge gather together for a few days. The event happens annually, and this was my first time attending such a conference. The Wikimania 2019 main program ran for 3 days, but we had 2 pre-conference days during which a hackathon was held.

The venue was an amazing building in the Stockholm University, Aula Magna.

The hackathon brought together technical contributors, such as developers, who are interested in a variety of technical challenges in the wiki movement. At the hackathon you can find people interested in wiki edit automation, research, anti-harassment tools and also infrastructure engineering and architecture, among other things.

My full-time job is on the Wikimedia Cloud Services team. We provide platforms and services for Wikimedia movement collaborators who want to perform technical tasks and contributions. Some examples of what we provide:

  • a public cloud service based on Openstack, AKA IaaS. We call this CloudVPS.
  • a PaaS product, based on Kubernetes and GridEngine. We call this Toolforge.
  • direct access to wiki databases in both SQL and XML format.
  • several other software products, like Quarry, PAWS, etc.

These services are widely used in the wiki community. About 40% of total edits to wiki projects come from software running on our platform. Some coworkers and I attended the hackathon to provide support related to these tools and services, and to introduce them to new contributors.

Talk

We had a session called Introduction to Wikimedia Cloud Services on the first day of the hackathon, and folks showed genuine interest in the things we offer. Some stuff I did during the hackathon included creating lots of Toolforge accounts, fixing issues in Cloud VPS projects, talking with many people about related technical topics, etc.

Once the hackathon ended, the main conference program started. I was amazed to see how vibrant the wiki movement is. Seeing people from all over the world sharing such a great mission and goals was really inspiring, and I truly felt grateful for being part of it. The conference is attended by many wiki enthusiasts, editors and other volunteers from many organizations and local wiki chapters. For the record, the number of paid staff from the Wikimedia Foundation is limited.

Honestly, until I attended this conference I was not aware of the scope and size of the movement and the variety of topics and approaches that involve free knowledge, the ultimate goal. And it is not a far-fetched mission: we are on a good track despite the many challenges :-)

After the conference, we had another week in Stockholm for a Technical Engagement team offsite.

CryptogramThe Myth of Consumer-Grade Security

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that's not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: "After all, we are not talking about protecting the nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications."

The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.

This wasn't true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn't have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military. Today, the predominant encryption algorithm for commercial applications -- Advanced Encryption Standard (AES) -- is approved by the National Security Agency (NSA) to secure information up to the level of Top Secret. The Department of Defense's classified analogs of the Internet -- Secret Internet Protocol Router Network (SIPRNet), Joint Worldwide Intelligence Communications System (JWICS) and probably others whose names aren't yet public -- use the same Internet protocols, software, and hardware that the rest of the world does, albeit with additional physical controls. And the NSA routinely assists in securing business and consumer systems, including helping Google defend itself from Chinese hackers in 2010.

Yes, there are some military applications that are different. The US nuclear system Barr mentions is one such example -- and it uses ancient computers and 8-inch floppy drives. But for pretty much everything that doesn't see active combat, it's modern laptops, iPhones, the same Internet everyone else uses, and the same cloud services.

This is also true for corporate applications. Corporations rarely use customized encryption to protect their operations. They also use the same types of computers, networks, and cloud services that the government and consumers use. Customized security is both more expensive because it is unique, and less secure because it's nonstandard and untested.

During the Cold War, the NSA had the dual mission of attacking Soviet computers and communications systems and defending domestic counterparts. It was possible to do both simultaneously only because the two systems were different at every level. Today, the entire world uses Internet protocols; iPhones and Android phones; and iMessage, WhatsApp and Signal to secure their chats. Consumer-grade encryption is the same as military-grade encryption, and consumer security is the same as national security.

Barr can't weaken consumer systems without also weakening commercial, government, and military systems. There's one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It's time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

This essay previously appeared on Lawfare.com.

LongNowThe Vineyard Gazette on Revive & Restore’s Heath Hen De-extinction Efforts

 The world’s last heath hen went extinct in Martha’s Vineyard in 01932. The Revive & Restore team recently paid a visit there to discuss their efforts to bring the species back.

Members of the Revive & Restore team next to a statue of Booming Ben, the last heath hen.

From the Vineyard Gazette:

Buried deep within the woods of the Manuel Correllus State Forest is a statue of Booming Ben, the world’s final heath hen. Once common all along the eastern seaboard, the species was hunted to near-extinction in the 1870s. Although a small number of the birds found refuge on Martha’s Vineyard, they officially disappeared in 1932 — with Booming Ben, the last of their kind, calling for female mates who were no longer there to hear him.

“There is no survivor, there is no future, there is no life to be recreated in this form again,” Gazette editor Henry Beetle Hough wrote. “We are looking upon the uttermost finality which can be written, glimpsing the darkness which will not know another ray of light. We are in touch with the reality of extinction.”

The statue memorializes that reality.

Since 2013, however, a group of cutting-edge researchers with the group Revive and Restore have been hard at work to bring back the heath hen as part of an ambitious avian de-extinction project. The project got started when Ryan Phelan, who co-founded Revive and Restore with her husband, scientist and publisher of the Whole Earth Catalogue, Stewart Brand, began to think broadly about the goals for their organization.

“We started by saying what’s the most wild idea possible?” Ms. Phelan said. “What’s the most audacious? That would be bringing back an extinct species.”

Read the piece in full here.

Planet DebianDaniel Silverstone: RFH: Naming things is hard

As with all things in computing, one of two problems always seems to raise its ugly head… We either have an off-by-one error, or we have a caching error, or we have a naming problem.

Lars and I have been working on an acceptance testing tool recently. You may have seen the soft launch announcement on Lars' blog. Sadly since that time we've discovered that Fable is an overloaded name in the domain of software quality assurance and we do not want to try and compete with Fable since (a) they were there first, and (b) accessibility is super-important and we don't want to detract from the work they're doing.

As such, this is a request for help. We need to name our tool usefully, since how can we make a git repository until we have a name? Previous incarnations of the tool were called Yarn and we chose Fable to carry on the sense of telling a story (the fundamental unit of testing in these systems is a scenario), but we are not wedded to the idea of continuing in the same vein.

If you have an idea for a name for our tool, please consider reading about it on the Fable website, and then either comment here, or send me an email, prod me on IRC, or indeed any of the various ways you have to find me.

Worse Than FailureTeleported Release

Matt works at an accounting firm, as a data engineer. He makes reports for people who don’t read said reports. Accounting firms specialize in different areas of accountancy, and Matt’s firm is a general firm with mid-size clients.

The CEO of the firm is a legacy from the last century. The most advanced technology on his desk is a business calculator and a pencil sharpener. He still doesn’t use a cellphone. But he does have a son, who is “tech savvy”, which gives the CEO a horrible idea of how things work.

Usually, the CEO's requests are pretty light, in that it's sorting Excel files or sorting the output of an existing report. Sometimes the requests are bizarre or utter nonsense. And, because the boss doesn't know what the technical folks are doing, some of the IT staff may be a bit lazy about following best practices.

This means that most of Matt’s morning is spent doing what is essentially Tier 1 support before he gets into doing his real job. Recently, there was a worse crunch, as actual support person Lucinda was out for materinity leave, and Jackie, the one other developer, was off on vacation on a foreign island with no Internet. Matt was in the middle of eating a delicious lunch of take-out lo mein when his phone rang. He sighed when he saw the number.

“Matt!” the CEO exclaimed. “Matt! We need to do a build of the flagship app! And a deploy!”

The app was rather large, and a build could take upwards of 45 minutes, depending on the day and how the IT gods were feeling. But the process was automated, the latest changes all got built and deployed each night. Anything approved was released within 24 hours. With everyone out of the office, there hadn’t been any approved changes for a few weeks.

Matt checked the Github to see if something went wrong with the automated build. Everything was fine.

“Okay, so I’m seeing that everything built on GitHub and everything is available in production,” Matt said.

“I want you to do a manual build, like you used to.”

“If I were to compile right now, it could take quite awhile, and redeploying runs the risk of taking our clients offline, and nothing would be any different.”

“Yes, but I want a build that has the changes which Jackie was working on before she left for vacation.”

Matt checked the commit history, and sure enough, Jackie hadn’t committed any changes since two weeks before leaving on vacation. “It doesn’t look like she pushed those changes to Github.”

“Githoob? I thought everything was automated. You told me the process was automated,” the CEO said.

“It’s kind of like…” Matt paused to think of an analogy that could explain this to a golden retriever. “Your dishwasher, you could put a timer on it to run it every night, but if you don’t load the dishwasher first, nothing gets cleaned.”

There was a long pause as the CEO failed to understand this. “I want Jackie’s front-page changes to be in the demo I’m about to do. This is for Initech, and there’s millions of dollars riding on their account.”

“Well,” Matt said, “Jackie hasn’t pushed- hasn’t loaded her metaphorical dishes into the dishwasher, so I can’t really build them.”

“I don’t understand, it’s on her computer. I thought these computers were on the cloud. Why am I spending all this money on clouds?”

“If Jackie doesn’t put it on the cloud, it’s not there. It’s uh… like a fax machine, and she hasn’t sent us the fax.”

“Can’t you get it off her laptop?”

“I think she took it home with her,” Matt said.

“So?”

“Have you ever seen Star Trek? Unless Scotty can teleport us to Jackie’s laptop, we can’t get at her files.”

The CEO locked up on that metaphor. “Can’t you just hack into it? I thought the NSA could do that.”

“No-” Matt paused. Maybe Matt could try and recreate the changes quickly? “How long before this meeting?” he asked.

“Twenty minutes.”

“Just to be clear, you want me to do a local build with files I don’t have by hacking them from a computer which may or may not be on and connected to the Internet, and then complete a build process which usually takes 45 minutes- at least- deploy to production, so you can do a demo in twenty minutes?”

“Why is that so difficult?” the CEO demanded.

“I can call Jackie, and if she answers, maybe we can figure something out.”

The CEO sighed. “Fine.”

Matt called Jackie. She didn’t answer. Matt left a voicemail and then went back to eating his now-cold lo mein.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Planet DebianMike Gabriel: Release of nx-libs 3.5.99.22 (Call for Testing: Keyboard auto-grab Support)

It has been a long time since I last blogged about nx-libs; however, there is a new release: nx-libs 3.5.99.22.

What is nx-libs?

The nx-libs team maintains software originally developed by NoMachine under the name nx-X11 (version 3), or shorter: NXv3. For years now, a small team of volunteers has been continually improving, fixing and maintaining the code base of NXv3 (after some major and radical cleanups). NXv3, aka x2goagent, has been the only graphical backend in X2Go [0], a remote desktop framework for Linux terminal servers, over the past years.

(Spoiler: in the near future, there will be two graphical backends for X2Go sessions, if you are curious... stay tuned...)

Credits

You may have noticed that I skipped announcing several releases of nx-libs. All interim releases should have had their own announcements, indeed, as each of them deserved one. So I am sorry, and I dearly apologize, for not mentioning all the details of each individual release. I am sorry for not giving credit to the team of developers around me who do pretty hard work on keeping this beast intact.

All the more, let me here and now especially give credit to Ulrich Sibiller, Mihai Moldovan and Mario Trangoni for keeping the torch burning and for actually having achieved awesome results in each of the recent nx-libs releases over the past year or so. Thanks, folks!!!

Luckily, Mihai Moldovan (X2Go Release Manager) wrote regular release announcements for every version of nx-libs that he pulled over to the X2Go Git site and the X2Go upstream-DEBs archive site [1]. Also a big thanks for this!

Changes for nx-libs 3.5.99.21 and 3.5.99.22

3.5.99.21

  • Ulrich Sibiller did a major memory leak, double-free, etc. hunt all over the code and fixed several of such issues. Most of them will be in nx-libs shipped with Debian 10.1. (The one that is not yet in there has only recently been discovered).
  • There was also work done on the reparenting code when switching between fullscreen and windowed desktop session mode.
  • Ulrich Sibiller also reworked the NX-specific part of the XKB integration and cleaned up Font path handling.

For a complete list of changes, see the 3.5.99.21 upstream release commit [2].

3.5.99.22

  • The nxagent DDX code now uses the SAFE_Xfree and SAFE_free macros recently introduced everywhere.
  • The NX splash screen code has been tidied up entirely; plus, with the nxagent option "-wr" you can now create a root window with a white background.
  • Keyboard Auto-Grab support (see below)
  • Fix a double-free situation in the RandR implementation that occurred on NX session resumption

For a complete list of changes, see the 3.5.99.22 upstream release commit [3].

The new Feature: Keyboard auto-grab Support (call for testing)

There is a new feature in nx-libs (aka nxagent, aka x2goagent) that people may find interesting. Ulrich Sibiller and I have been working on and off on a keyboard auto-grab feature for NX. See various discussions on nx-libs's issue tracker [4, 5, 6].

With keyboard auto-grab enabled (the toggle switch is CTRL+ALT+G, configurable via /etc/{nxagent,x2goagent}/keystrokes.cfg), you can now run e.g. an "i3" [7] (or "awesome" [8]) window manager nested inside an X2Go session while the local desktop environment is also an "i3" (or "awesome") window manager. I can hear some of you cheering now, in fact. Yes, it has finally become possible.

Before we had this keyboard auto-grab feature in NX, it was not possible to connect to an X2Go session running an i3 desktop from within an i3 window manager running on the local $DISPLAY. Keyboard input would never really end up in the X2Go session.

With keyboard auto-grab enabled, you can now nest "i3" (or "awesome") based desktops (local + remote via X2Go): nearly all keyboard events (except the NX keystrokes) end up in the X2Go session window. With auto-grab disabled, all keyboard events end up in the local $DISPLAY's i3 (or "awesome") desktop.

Here is a little command line LOVE, to play with:

Log into a local desktop session running the i3 window manager (if you have never touched i3, use awesome instead). If you don't know what tiling window managers are and how to use them... try them out first.

If you are in a local i3wm session, do this from one terminal:

sudo apt-get install nxagent
nxagent -ac :1

And from another terminal:

export DISPLAY=:1
STARTUP=i3 dbus-run-session /etc/X11/Xsession

(You could do this more than once... You can use STARTUP=awesome instead of STARTUP=i3, too.)

Have fun with nested tiling desktop environments tiled all over your screen. Use CTRL-ALT-G to toggle keyboard auto-grabbing for each NX session window individually. By default, auto-grab is disabled on startup of nxagent, so the local i3wm gets all the keyboard attention. Move the mouse over an nxagent + i3 window / tile and hit CTRL-ALT-G. Now the NX session window has all keyboard attention as long as the mouse pointer hovers above it.

And: please report any special and unexpected effects to the nx-libs issue tracker [9]. Thanks!

Have fun!!! Mike Gabriel (aka sunweaver)

References

Planet DebianMark Brown: Linux Audio Miniconference 2019

As in previous years we’re going to have an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year’s event will be held on Thursday October 31st in Lyon, France, the day after ELC-E. This will be held at the Lyon Convention Center (the ELC-E venue), generously sponsored by Intel.

As with previous years let’s pull together an agenda through a mailing list discussion – this announcement has been posted to alsa-devel as well, the most convenient thing would be to follow up to it. Of course if we can sort things out more quickly via the mailing list that’s even better!

If you’re planning to attend please fill out the form here.

This event will be covered by the same code of conduct as ELC-E.

Thanks again to Intel for supporting this event.

Krebs on SecurityCybersecurity Firm Imperva Discloses Breach

Imperva, a leading provider of Internet firewall services that help Web sites block malicious cyberattacks, alerted customers on Tuesday that a recent data breach exposed email addresses, scrambled passwords, API keys and SSL certificates for a subset of its firewall users.

Redwood Shores, Calif.-based Imperva sells technology and services designed to detect and block various types of malicious Web traffic, from denial-of-service attacks to digital probes aimed at undermining the security of Web-based software applications.

Image: Imperva

Earlier today, Imperva told customers that it learned on Aug. 20 about a security incident that exposed sensitive information for some users of Incapsula, the company’s cloud-based Web Application Firewall (WAF) product.

“On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017,” wrote Heli Erickson, director of analyst relations at Imperva.

“We want to be very clear that this data exposure is limited to our Cloud WAF product,” Erickson’s message continued. “While the situation remains under investigation, what we know today is that elements of our Incapsula customer database from 2017, including email addresses and hashed and salted passwords, and, for a subset of the Incapsula customers from 2017, API keys and customer-provided SSL certificates, were exposed.”

Companies that use the Incapsula WAF route all of their Web site traffic through the service, which scrubs the communications for any suspicious activity or attacks and then forwards the benign traffic on to its intended destination.

Rich Mogull, founder and vice president of product at Kansas City-based cloud security firm DisruptOps, said Imperva is among the top three Web-based firewall providers in business today.

According to Mogull, an attacker in possession of a customer’s API keys and SSL certificates could use that access to significantly undermine the security of traffic flowing to and from a customer’s various Web sites.

At a minimum, he said, an attacker in possession of these key assets could reduce the security of the WAF settings and exempt or “whitelist” from the WAF’s scrubbing technology any traffic coming from the attacker. A worst-case scenario could allow an attacker to intercept, view or modify traffic destined for an Incapsula client Web site, and even to divert all traffic for that site to or through a site owned by the attacker.

“Attackers could whitelist themselves and begin attacking the site without the WAF’s protection,” Mogull told KrebsOnSecurity. “They could modify any of the Incapsula security settings, and if they got [the target’s SSL] certificate, that can potentially expose traffic. For a security-as-a-service provider like Imperva, this is the kind of mistake that’s up there with their worst nightmare.”

Imperva urged all of its customers to take several steps that might mitigate the threat from the data exposure, such as changing passwords for user accounts at Incapsula, enabling multi-factor authentication, resetting API keys, and generating/uploading new SSL certificates.

Alissa Knight, a senior analyst at Aite Group, said the exposure of Incapsula users’ scrambled passwords and email addresses was almost incidental given that the intruders also made off with customer API keys and SSL certificates.

Knight said although we don’t yet know the cause of this incident, such breaches at cloud-based firms often come down to small but ultimately significant security failures on the part of the provider.

“The moral of the story here is that people need to be asking tough questions of software-as-a-service firms they rely upon, because those vendors are being trusted with the keys to the kingdom,” Knight said. “Even if the vendor in question is a cybersecurity company, it doesn’t necessarily mean they’re eating their own dog food.”

Planet DebianHolger Levsen: 20190827-cccamp

On my way home from CCCamp 2019

During the last week I've been swimming many times in 4 different lakes, enjoyed a great variety of talks, music, food, drinks and lots of nerdstuff. The small forest I put my tent in was illuminated by a disco ball. And almost best of all, until an hour ago, I spent the last 72h offline with friends.

I <3 cccamp.

Planet DebianMike Gabriel: Debian goes libjpeg-turbo 2.0.x [RFH]

I recently uploaded libjpeg-turbo 2.0.2-1~exp1 to Debian experimental. This has been the first upload of the 2.0.x release series of libjpeg-turbo.

After 3 further upload iterations (~exp4, that is), the package now builds on all but 3 of the architectures supported by Debian.

@all: Please Test

For those architectures where libjpeg-turbo 2.0.2-1~exp* is already available in Debian experimental, please start testing your applications on Debian testing/unstable systems with libjpeg-turbo 2.0.2-1~exp* installed from experimental. If you observe any peculiarities, please file bugs against src:libjpeg-turbo on the Debian BTS. Thanks!

Please note: the major 2.x release series does not introduce an SOVERSION bump, so applications don't have to be rebuilt against the newer libjpeg-turbo. Simply replace the installed libjpeg62-turbo binary package with the version from Debian experimental.
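For example, on a testing/unstable system with the experimental suite listed in APT's sources, something like this should be enough to pull in the newer runtime (a sketch; the exact version you get will vary):

sudo apt-get install -t experimental libjpeg62-turbo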

[RFH] FTBFS during Unit Tests

On the alpha, powerpc and sparc64 architectures, the builds [1] fail during unit tests:

301/302 Test #155: tjunittest-static-yuv-alloc .......................   Passed   60.08 sec
302/302 Test #156: tjunittest-static-yuv-nopad .......................   Passed   60.01 sec

99% tests passed, 2 tests failed out of 302

Total Test time (real) = 121.40 sec

The following tests FAILED:
     83 - djpeg-shared-3x2-float-prog-cmp (Failed)
    234 - djpeg-static-3x2-float-prog-cmp (Failed)
Errors while running CTest
make[1]: *** [Makefile:133: test] Error 8
make[1]: Leaving directory '/<<PKGBUILDDIR>>/obj-sparc64-linux-gnu'
dh_auto_test: cd obj-sparc64-linux-gnu && make -j8 test ARGS\+=-j8 returned exit code 2
make: *** [debian/rules:40: build-arch] Error 255

As I am not much of a porter, nor a JPEG adept, I'd appreciate some help from people with more porting and/or JPEG experience. If you feel called to work on this, please ping me on IRC (OFTC) so we can coordinate our research. The packaging Git of libjpeg-turbo has recently been migrated to Salsa [2].

References

Thanks in advance to anyone who chimes in,
Mike (aka sunweaver)

Planet DebianJonathan Dowland: Debian hiatus

Back in July I decided to take a (minimum) six-month hiatus from involvement in the Debian project. This is for a number of reasons, but I completely forgot to write about it publicly. So here we are.

I'm going to look at things again no sooner than January 2020 and decide whether or not (or how much) to pick it back up.

CryptogramThe Threat of Fake Academic Research

Interesting analysis of the possibility, feasibility, and efficacy of deliberately fake scientific research, something I had previously speculated about.

Planet DebianColin Watson: man-db 2.8.7

I’ve released man-db 2.8.7 (announcement, NEWS), and uploaded it to Debian unstable.

There are a few things of note that I wanted to talk about here. Firstly, I made some further improvements to the seccomp sandbox originally introduced in 2.8.0. I do still think it’s correct to try to confine subprocesses this way as a defence against malicious documents, but it’s also been a pretty rough ride for some users, especially those who use various kinds of VPNs or antivirus programs that install themselves using /etc/ld.so.preload and cause other programs to perform additional system calls. As well as a few specific tweaks, a recent discussion on LWN reminded me that it would be better to make seccomp return EPERM rather than raising SIGSYS, since that’s easier to handle gracefully: in particular, it fixes an odd corner case related to glibc’s nscd handling.
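To illustrate the difference, here is a minimal libseccomp sketch of the general technique (my own illustration, not man-db's actual filter code; the choice of socket(2) as the denied call is just for demonstration):

/* Deny one syscall with EPERM instead of the SIGSYS that SCMP_ACT_TRAP
 * would raise.  Build with: gcc sketch.c -lseccomp */
#include <errno.h>
#include <seccomp.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    /* Allow everything by default... */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    if (ctx == NULL)
        return 1;

    /* ...but make socket(2) fail with EPERM.  A preloaded library that
     * calls socket() now sees an ordinary error return instead of a
     * fatal signal it never expected. */
    if (seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(socket), 0) < 0
        || seccomp_load(ctx) < 0)
        return 1;

    if (socket(AF_INET, SOCK_STREAM, 0) < 0)
        perror("socket");   /* prints "socket: Operation not permitted" */

    seccomp_release(ctx);
    return 0;
}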

Secondly, there was a build failure on macOS that took a while to figure out, not least because I don’t have a macOS test system myself. In 2.8.6 I tried to make life easier for people on this platform with a CFLAGS tweak, but I made it a bit too general and accidentally took away configure’s ability to detect undefined symbols properly, which caused very confusing failures. More importantly, I hadn’t really thought through why this change was necessary and whether it was a good idea. man-db uses private shared libraries to keep its executable size down, and it passes -no-undefined to libtool to declare that those shared libraries have no undefined symbols after linking, which is necessary to build shared libraries on some platforms. But the CFLAGS tweak above directly contradicts this! So, instead of playing core wars with my own build system, I did some refactoring so that the assertion that man-db’s shared libraries have no undefined symbols after linking is actually true: this involved moving decompression code out of libman, and arranging for the code in libmandb to take the database path as a parameter rather than as a global variable (something I’ve meant to fix for ages anyway; 252d7cbc23, 036aa910ea, a97d977b0b). Lesson: don’t make build system changes you don’t quite understand.

Planet DebianRuss Allbery: Review: Space Opera

Review: Space Opera, by Catherynne M. Valente

Publisher: Saga
Copyright: 2018
ISBN: 1-4814-9751-0
Format: Kindle
Pages: 304

Life is not, as humans had come to think, rare. The universe is packed with it, bursting at the seams. The answer to the Fermi paradox is not that life on Earth is a flukish chance. It's that, until recently, everyone else was distracted by total galactic war.

Thankfully by the time the other intelligent inhabitants of the galaxy stumble across Earth the Sentience Wars are over. They have found a workable solution to the everlasting problem of who counts as people and who counts as meat, who is sufficiently sentient and self-aware to be allowed to join the galactic community and who needs to be quietly annihilated and never spoken of again. That solution is the Metagalactic Grand Prix, a musical extravaganza that is also the highest-rated entertainment in the galaxy. All the newly-discovered species has to do is not finish dead last.

An overwhelmingly adorable giant space flamingo appears simultaneously to every person on Earth to explain this, and also to reassure everyone that they don't need to agonize over which musical act to send to save their species. As their sponsors and the last new species to survive the Grand Prix, the Esca have made a list of Earth bands they think would be suitable. Sadly though, due to some misunderstandings about the tragically short lifespans of humans, every entry on the list is dead but one: Decibel Jones and the Absolute Zeroes. Or their surviving two members, at least.

Space Opera is unapologetically and explicitly The Hitchhiker's Guide to the Galaxy meets Eurovision. Decibel Jones and his bandmate Oort are the Arthur Dent of this story, whisked away in an impossible spaceship to an alien music festival where they're expected to sing for the survival of their planet, minus one band member and well past their prime. When they were at the height of their career, they were the sort of sequin-covered glam rock act that would fit right in to a Eurovision contest. Decibel Jones still wants to be that person; Oort, on the other hand, has a wife and kids and has cashed in the glitterpunk life for stability. Neither of them have any idea what to sing, assuming they even survive to the final round; sabotage is allowed in the rules (it's great for ratings).

I love the idea of Eurovision, one that it shares with the Olympics but delivers with less seriousness and therefore possibly more effectiveness. One way to avoid war is to build shared cultural ties through friendly competition, to laugh with each other and applaud each other, and to make a glorious show out of it. It's a great hook for a book. But this book has serious problems.

The first is that emulating The Hitchhiker's Guide to the Galaxy rarely ends well. Many people have tried, and I don't know of anyone who has succeeded. It sits near the top of many people's lists of the best humorous SF not because it's a foundational model for other people's work, but because Douglas Adams had a singular voice that is almost impossible to reproduce.

To be fair, Valente doesn't try that hard. She goes a different direction: she tries to stuff into the text of the book the written equivalent of the over-the-top, glitter-covered, hilariously excessive stage shows of unapologetic pop rock spectacle. The result... well, it's like an overstuffed couch upholstered in fuchsia and spangles, onto which have plopped the four members of a vaguely-remembered boy band attired in the most eye-wrenching shade of violet satin and sulking petulantly because you have failed to provide usable cans of silly string due to the unfortunate antics of your pet cat, Eunice (it's a long story involving an ex and a book collection), in an ocean-reef aquarium that was a birthday gift from your sister, thus provoking a frustrated glare across an Escher knot of brilliant yellow and now-empty hollow-sounding cans of propellant, when Joe, the cute blonde one who was always your favorite, asks you why your couch and its impossibly green rug is sitting in the middle of Grand Central Station, and you have to admit that you do not remember because the beginning of the sentence slipped into a simile singularity so long ago.

Valente always loves her descriptions and metaphors, but in Space Opera she takes this to a new level, one covered in garish, cheap plastic. Also, if you can get through the Esca's explanation of what's going on without wanting to strangle their entire civilization, you have a higher tolerance for weaponized cutesy condescension than I do.

That leads me back to Hitchhiker's Guide and the difficulties of humor based on bizarre aliens and ludicrous technology: it's not funny or effective unless someone is taking it seriously.

Valente includes, in an early chapter, the rules of the Metagalactic Grand Prix. Here's the first one:

The Grand Prix shall occur once per Standard Alumizar Year, which is hereby defined by how long it takes Aluno Secundus to drag its business around its morbidly obese star, get tired, have a nap, wake up cranky, yell at everyone for existing, turn around, go back around the other way, get lost, start crying, feel sorry for itself and give up on the whole business, and finally try to finish the rest of its orbit all in one go the night before it's due, which is to say, far longer than a year by almost anyone else's annoyed wristwatch.

This is, in isolation, perhaps moderately amusing, but it's the formal text of the rules of the foundational event of galactic politics. Eurovision does not take itself that seriously, but it does have rules, which you can read, and they don't sound like that, because this isn't how bureaucracies work. Even bureaucracies that put on ridiculous stage shows. This shouldn't have been the actual rules. It should have been the Hitchhiker's Guide entry for the rules, but this book doesn't seem to know the difference.

One of the things that makes Hitchhiker's Guide work is that much of what happens is impossible for Arthur Dent or the reader to take seriously, but to everyone else in the book it's just normal. The humor lies in the contrast.

In Space Opera, no one takes anything seriously, even when they should. The rules are a joke, the Esca think the whole thing is a lark, the representatives of galactic powers are annoying contestants on a cut-rate reality show, and the relentless drumbeat of more outrageous descriptions never stops. Even the angst is covered in glitter. Without that contrast, without the pause for Arthur to suddenly realize what it means for the planet to be destroyed, without Ford Prefect dryly explaining things in a way that almost makes sense, the attempted humor just piles on itself until it collapses under its own confusing weight. Valente has no characters capable of creating enough emotional space to breathe. Decibel Jones only does introspection by moping, Oort is single-note grumbling, and each alien species is more wildly fantastic than the last.

This book works best when Valente puts the plot aside and tells the stories of the previous Grands Prix. By that point in the book, I was somewhat acclimated to the over-enthusiastic descriptions and was able to read past them to appreciate some entertainingly creative alien designs. Those sections of the book felt like a group of friends read a dozen books on designing alien species, dropped acid, and then tried to write a Traveler supplement. A book with those sections and some better characters and less strained writing could have been a lot of fun.

Unfortunately, there is a plot, if a paper-thin one, and it involves tedious and unlikable characters. There were three people I truly liked in this book: Decibel's Nani (I'm going to remember Mr. Elmer of the Fudd) who appears only in quotes, Oort's cat, and Mira. Valente, beneath the overblown writing, does some lovely characterization of the band as a trio, but Mira is the anchor and the only character of the three who is interesting in her own right. If this book had been about her... well, there are still a lot of problems, but I would have enjoyed it more. Sadly, she appears mostly around the edges of other people's manic despair.

That brings me to a final complaint. The core of this book is musical performance, which means that Valente has set herself the challenging task of describing music and performance sufficiently well to give the reader some vague hint of what's good, what isn't, and why. This does not work even a little bit. Most of the alien music is described in terms of hyperspecific genres that the characters are assumed to have heard of and haven't, which was a nice bit of parody of musical writing but which doesn't do much to create a mental soundtrack. The rest is nonspecific superlatives. Even when a performance is successful, I had no idea why, or what would make the audience like one performance and not another. This would have been the one useful purpose of all that overwrought description.

Clearly some people liked this book well enough to nominate it for awards. Humor is unpredictable; I'm sure there are readers who thought Space Opera was hilarious. But I wanted to salvage about 10% of this book, three of the supporting characters, and a couple of the alien ideas, and transport them into a better book far away from the tedious deluge of words.

I am now inspired to re-read The Hitchhiker's Guide to the Galaxy, though, so there is that.

Rating: 3 out of 10

,

Chaotic IdealismHow I Live Now

Years ago, when I was a biomedical engineering major and I thought I was going to be employable, I lived in an apartment and had a car and did all those things non-disabled people do. And I was stressed out, really stressed out, living on the edge of independence and just teetering, trying to keep my balance.

Eventually I switched majors from BME to psychology–an easier program, and one that interested me.

The car didn’t last long, totaled thanks to my poor reflexes and lack of the sort of short-notice judgment that makes me a dangerous driver. My driver’s license ran out; now I just have a state ID. I moved closer to WSU, but my executive function was still bad, and it was hard for me to get to class. They sent a van across the street to pick me up. I forgot to study; they provided one of their testing rooms, distraction-free, so I would have somewhere away from the temptations of my apartment to study. They interceded with professors and got me extra time.

I was taking classes part-time, with intensive help from the department of disability services; I couldn’t sustain full-time work. If Wright State hadn’t been willing to go out of its way for me, I’d never have gotten a degree at all. I was diagnosed with dysthymia as well as episodic major depression, which explained why I never seemed to get my energy back after an episode.

I graduated. GPA 3.5, respectable. Dreaming of graduate school. Blew the practice GRE out of the water.

I tried to get a job. I worked with my job coach for more than a year. I wanted a graduate assistantship, but nobody wanted me. We looked at jobs that would let me use my education, but nobody was hiring. Eventually we branched out into more low-level work–hospital receptionist, dog kennel attendant, pharmacy technician. They were all part-time; by that point I knew better than to assume I could stick it out for a 40-hour work week.

The pharmacy tech job almost succeeded, but the boss couldn’t work with the assisted transport service that could only deliver me between the hours of 9 and 3–plus, they’d assured me it was part time, only to schedule me for 35 hours. I can only assume they hired “part-time” workers to avoid paying them benefits.

I signed up with Varsity Tutors to teach math, science, writing, and statistics. I enjoyed the work, especially when I got to use my writing ability to help someone communicate clearly, or made statistics understandable to someone unused to thinking about math. But it wasn’t steady work; you were competing with all the other tutors. You had to accept a new assignment within seconds, even before you knew what it was or whether you could teach that topic, because if you didn’t someone else would click on it first. Students paid a huge fee–$50 an hour or thereabouts–of which we only got about $10. Sometimes, when I grabbed a job that involved teaching something I myself hadn’t learned yet, I had to spend hours preparing for a one-hour session–and no, preparation hours aren’t paid.

I grew tired of cheating the customers; I’m not worth a $50-an-hour tutoring fee, and practically all of the money went to the company for doing nothing more than maintaining a Web service to match tutors and clients. And since I’d paid, out of my own pocket, for a tablet, Web cam, and internet connection, I hadn’t actually made any money anyway. I suppose I would have, if I’d stuck with it, but I just don’t like feeling so dishonest. It’s been more than a year since I last had contact with them, so I can say that. No more non-disclosure agreement. I’m sure they haven’t changed, though.

I was running out of money. My disability payments couldn’t pay for my rent. Eventually, a friend who was remodeling a house in a Cincinnati suburb offered me a rented room, within my means, and I accepted.

For a year, I lived in a room of a house undergoing remodeling. Eventually, I moved downstairs, into a finished basement room. College loan companies bombarded me with mail, demanding money I didn’t have. With the US government becoming increasingly unstable, I worried that if I even tried to work, I might lose Medicaid, and without a Medicaid buy-in available, I would have to choose between working and taking my medication (note: I cannot work if I am not taking my meds; in fact, I am in deadly danger if I do not take my meds). It didn’t help that my area has no particularly good public transport service, and the assisted transport service is–as always–unreliable and cannot be used to get to work.

Eventually I gave in. I applied for permanent disability discharge of my student loans, and was granted it. I feel dishonest–again–for not being able to predict, when I got my degree, that it wouldn’t make me employable. But there it is. The world doesn’t like to hire people who are different, or who need accommodations, or who can’t fit into the machinery of society.

But a person can’t just sit around. I do a lot of volunteer work now. I’m the primary researcher for ASAN’s disability day of mourning web site; I spend an hour or more every day monitoring the news, keeping records, and writing bios of disabled people murdered by their families and caregivers. I’ve kept up with my own Autism Memorial site, too, and the list is nearly 500 names long now. Seems like a lot, but my spreadsheet of disabled homicide victims in general is approaching five thousand.

Two days a week, I volunteer at the library. I put away books, straighten shelves, help patrons find things. The board of directors of the library fired all the pages years ago as a cost-cutting measure, so it’s volunteers like me that keep the books on the shelves while the employees are stuck manning the checkout desk or the book return. I find the work very meaningful, especially in the current political climate; libraries are wonderful, subversive places that teach a person to think on their own.

In the backyard of the house, I’m growing a garden. Gardening is new to me, but last year I had an overabundance of cherry tomatoes, and this year I’m growing tomatoes, peppers, cucumbers, carrots, sunflowers, and various herbs. I keep the lawn mowed and the bushes trimmed. The garden is a good thing, because lately my food stamps have been cut and I can’t really afford produce anymore.

My housemate’s girlfriend moved in with him last summer. She’s a sweet teacher with two guinea pigs and a love of stories. On Fridays, we drive for an hour to go play D&D with friends, and I bake cookies. I’ve learned to bake cookies over the last few years; at first it was just frozen cookie dough, then from scratch. I’ve gotten pretty good at it.

After my cat Tiny died of kidney failure, Christy got more vocal and demanding. She yells at me now when she wants attention, and climbs up on my bed to snuggle with me. She seems to think she needs to do the job of two cats. She’s getting older now, less able to climb to the top of the furniture or snatch a fly out of the air with her paws; but she still gets the kitty crazies, running around and skating on the rag rugs I made to keep the concrete floor from being quite so chilly.

I’m still myself–idealistic, protective, with a deep need to be useful. Living now is easier than it used to be when I had college loans; I just don’t buy anything I don’t absolutely need, help where I can, and let the rest go. I still have to deal with depression and with the executive dysfunction and weird brain of autism, but that’s a part of me, and I see no sense in looking down on myself just because I’m disabled.

I worry about the future. Just when it’s becoming crucial, our country’s dropping the ball on climate change. Our president is erratic, untrustworthy, and unethical. Authoritarianism looms large on the horizon. I do my best as a private citizen to help change things–with a focus on preserving democracy–but it’s still frightening, because disabled people are always the ones who get hurt first, right along with the poor and the minorities. I have quite a few deaths in ICE detainment in that database of mine, all of disabled immigrants. Why do people have to hate each other so much? Life is not a zero-sum game; if we help others, we ourselves benefit. We have so much to give; why are we refusing to share?

I find meaning in life from all the little things I do to make the world a little better, even if it’s just making cookies or showing a kid where to find the “Harry Potter” books. I used to think I might do something grand with my life, but now I don’t really think so. I think maybe a better world is made up of a lot of little people, all doing little things, all pushing in the right direction, until the sheer weight of numbers can move mountains.

Planet DebianAlberto García: The status of WebKitGTK in Debian

Like all other major browser engines, WebKit is a project that evolves very fast with releases every few weeks containing new features and security fixes.

WebKitGTK is available in Debian under the webkit2gtk name, and we are doing our best to provide the most up-to-date packages for as many users as possible.

I would like to give a quick summary of the status of WebKitGTK in Debian: what you can expect and where you can find the packages.

  • Debian unstable (sid): The most recent stable version of WebKitGTK (2.24.3 at the time of writing) is always available in Debian unstable, typically on the same day of the upstream release.
  • Debian testing (bullseye): If no new bugs are found, that same version will be available in Debian testing a few days later.
  • Debian stable (buster): WebKitGTK is covered by security support for the first time in Debian buster, so stable releases that contain security fixes will be made available through debian-security. The upstream dependencies policy guarantees that this will be possible during the buster lifetime. Apart from security updates, users of Debian buster will get newer packages during point releases.
  • Debian experimental: The most recent development version of WebKitGTK (2.25.4 at the time of writing) is always available in Debian experimental.

In addition to that, the most recent stable versions are also available as backports.

  • Debian stable (buster): Users can get the most recent stable releases of WebKitGTK from buster-backports, usually a couple of days after they are available in Debian testing.
  • Debian oldstable (stretch): While it remains possible, we are also providing backports for stretch using stretch-backports-sloppy. Due to older or missing dependencies some features may be disabled when compared to the packages in buster or testing.
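As an illustration, on a buster system with buster-backports enabled in APT's sources, upgrading the main runtime library could look like this (a sketch; which binary packages you need depends on what your applications link against):

sudo apt-get install -t buster-backports libwebkit2gtk-4.0-37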

You can also find a table with an overview of all available packages here.

One last thing: as explained on the release notes, users of i386 CPUs without SSE2 support will have problems with the packages available in Debian buster (webkit2gtk 2.24.2-1). This problem has already been corrected in the packages available in buster-backports or in the upcoming point release.

CryptogramDetecting Credit Card Skimmers

Modern credit card skimmers hidden in self-service gas pumps communicate via Bluetooth. There's now an app that can detect them:

The team from the University of California San Diego, who worked with other computer scientists from the University of Illinois, developed an app called Bluetana which not only scans and detects Bluetooth signals, but can actually differentiate those coming from legitimate devices -- like sensors, smartphones, or vehicle tracking hardware -- from card skimmers that are using the wireless protocol as a way to harvest stolen data. The full details of what criteria Bluetana uses to differentiate the two isn't being made public, but its algorithm takes into account metrics like signal strength and other telltale markers that were pulled from data based on scans made at 1,185 gas stations across six different states.

LongNowDavid Byrne Launches New Online Magazine, Reasons to Be Cheerful

In his Long Now talk earlier this summer, David Byrne announced that he would soon launch a new website called Reasons to Be Cheerful. The premise, Byrne said, was to document stories and projects that give cause for optimism in troubled times. He was after solutions-oriented efforts that provided tangible lessons that could be broadly utilized in different parts of the world.

“I didn’t want something that would only be applied to one culture,” Byrne said.

Reasons to Be Cheerful has now officially launched. Here is Byrne on the project from the press release:

It often seems as if the world is going straight to Hell. I wake up in the morning, I look at the paper, and I say to myself, “Oh no!” Often I’m depressed for half the day. I imagine some of you feel the same.

Recently, I realized this isn’t helping. Nothing changes when you’re numb. So, as a kind of remedy, and possibly as a kind of therapy, I started collecting good news. Not schmaltzy, feel-good news, but stuff that reminded me, “Hey, there’s positive stuff going on! People are solving problems and it’s making a difference!”

I began telling others about what I’d found.

Their responses were encouraging, so I created a website called Reasons to be Cheerful and started writing. Later on, I realized I wanted to make the endeavor a bit more formal. So we got a team together and began commissioning stories from other writers and redesigned the website. Today, we’re relaunching Reasons to be Cheerful as an ongoing editorial project.

We’re telling stories that reveal that there are, in fact, a surprising number of reasons to feel cheerful — that provide a more optimistic and, we believe, more accurate depiction of the world. We hope to balance out some of the amplified negativity and show that things might not be as bad as we think. Stop by whenever you need a reminder.

Learn More

  • Byrne also released a trailer for the website.
  • Watch David Byrne’s Long Now talk here.

Worse Than FailureCodeSOD: Checksum Yourself Before you Wrecksum Yourself

Mistakes happen. Errors crop up. Since we know this, we need to defend against it. When it comes to things like account numbers, we can make a rule about which numbers are valid by using a checksum. A simple checksum might be, "Add the digits together, and repeat until you get a single digit, which, after modulus with a constant, must be zero." This means that most simple data-entry errors will result in an invalid account number, but there's still a nice large pool of valid numbers to draw from.
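As a concrete illustration of such a rule, here is a minimal sketch of my own (in C, with 9 as the arbitrary constant; this is not code from the article):

/* Digit-sum checksum: repeatedly sum the digits until a single digit
 * remains, then require that digit to be divisible by a constant. */
#include <stdbool.h>
#include <stdio.h>

static int digit_sum(long n)
{
    int sum = 0;
    for (; n > 0; n /= 10)
        sum += (int)(n % 10);
    return sum;
}

static bool is_valid(long account, int constant)
{
    long n = account;
    while (n > 9)               /* repeat until one digit remains */
        n = digit_sum(n);
    return n % constant == 0;
}

int main(void)
{
    printf("%d\n", is_valid(1800, 9)); /* 1+8+0+0 = 9, 9 % 9 == 0: valid (1) */
    printf("%d\n", is_valid(1801, 9)); /* 1+8+0+1 = 10 -> 1+0 = 1: invalid (0) */
    return 0;
}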

James works for a company that deals with tax certificates, and thus needs to generate numbers which meet a similar checksum rule. Unfortunately for James, this is how his predecessor chose to implement it:

while (true)
{
    digits = "";
    for (int i = 0; i < certificateNumber.ToString().Length; i++)
    {
        int doubleDigit = Convert.ToInt32(certificateNumber.ToString().Substring(i, 1)) * 2;
        digits += (doubleDigit.ToString().Length > 1
            ? Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)) + Convert.ToInt32(doubleDigit.ToString().Substring(1, 1))
            : Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)));
    }
    int result = digits.ToString().Sum(c => c - '0');
    if ((result % 10) == 0)
        break;
    else
        certificateNumber++;
}

Whitespace added to make the ternary vaguely more readable.

We start by treating the number as a string, which allows us to access each digit individually, and as we loop, we'll grab a digit and double it. That, unfortunately, gives us a number, which is a big problem. There's absolutely no way to tell if a number is two digits long without turning it back into a string. Absolutely no way! So that's what we do. If the number is two digits, we'll split it back up and add those digits together.

Which again, gives us one of those pesky numbers. So once we've checked every digit, we'll convert that number back to a useful string, then Sum the characters in the string to produce a result. A result which, we hope, is divisible by 10. If not, we check the next number. Repeat and repeat until we get a valid result.

The worst part, though, is that you can see from the while loop that this is just dropped into a larger method. This isn't a single function which generates valid certificate numbers. This is a block that gets dropped in line. Similar, but slightly different blocks are dropped in when numbers need to be validated. There's no single isValidCertificate method.
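For contrast, here is a hedged sketch of what a single validation function could look like (in C for brevity, though the original is C#): the same rule, double every digit and sum the digits of each doubled value, done with plain arithmetic instead of string juggling.

#include <stdbool.h>

/* Returns true when doubling each digit, summing the digits of each
 * doubled value, and totalling the results gives a multiple of 10. */
static bool certificate_ok(long number)
{
    int total = 0;
    for (long n = number; n > 0; n /= 10) {
        int doubled = (int)(n % 10) * 2;
        /* doubled is at most 18, so summing its digits is the same as
         * subtracting 9 whenever it has two digits */
        total += doubled > 9 ? doubled - 9 : doubled;
    }
    return total % 10 == 0;
}

/* The original while (true) loop then collapses to:
 *     while (!certificate_ok(certificateNumber))
 *         certificateNumber++;
 */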


Planet DebianUtkarsh Gupta: Farewell, GSoC o/

Hello, there.

In open source, we feel strongly that to really do something well, you have to get a lot of people involved.

Guess Linus Torvalds got that right from the start.
While GSoC 2019 comes to an end, this project hasn’t. With GSoC, I started this project from scratch and I guess it won’t “die” at an early age.

Here’s a quick recap:

My GSoC project is to package a piece of software called Loomio.
A little about it: Loomio is decision-making software, designed to assist groups with the collaborative decision-making process.
It is a free software web application where users can initiate discussions and put up proposals.

In the span of the last 3 months, I worked on creating a package of Loomio for the Debian repositories. Loomio is a big, complex piece of software to package.
With over 484 directories and 4607 files in its code base, it has a huge number of Ruby and Node dependencies, along with a couple of fonts that it uses.
Of these, around 72 Ruby gems, 58 Node modules, 3 fonts, and 27 other packages (the reverse dependencies) needed work, including both packaged and unpackaged libraries.

Also, little did I know about the need for loomio-installer.
Thus a good amount of time went into that as well (which I also talked about in my first and second reports).


Work done so far!

At the time of writing this report, the following work has been done:

NEW packages

Packages that have been uploaded to the archive:

» ruby-ahoy-matey
» ruby-aws-partitions
» ruby-aws-sdk-core
» ruby-aws-sdk-kms
» ruby-aws-sdk-s3
» ruby-aws-sigv4
» ruby-cancancan
» ruby-data-uri
» ruby-geocoder
» ruby-google-cloud-core
» ruby-google-cloud-env
» ruby-inherited-resources
» ruby-maxitest
» ruby-safely-block
» ruby-terrapin
» ruby-memory-profiler
» ruby-devise-i18n
» ruby-discourse-diff
» ruby-discriminator
» ruby-doorkeeper-i18n
» ruby-friendly-id
» ruby-has-scope
» ruby-has-secure-token
» ruby-heroku-deflater
» ruby-i18n-spec
» ruby-iso
» ruby-omniauth-openid-connect
» ruby-paper-trail
» ruby-referer-parser
» ruby-user-agent-parser
» ruby-google-cloud-translate
» ruby-maxminddb
» ruby-omniauth-ultraauth

Packages that are yet to be uploaded:

» ruby-arbre
» ruby-paperclip
» ruby-ahoy-email
» ruby-ransack
» ruby-benchmark-memory
» ruby-ammeter
» ruby-rspec-tag-matchers
» ruby-formtastic
» ruby-formtastic-i18n
» ruby-rails-serve-static-assets
» ruby-activeadmin
» ruby-rails-12factor
» ruby-rails-stdout-logging
» loomio-installer

Updated packages

» rails
» ruby-devise
» ruby-globalid
» ruby-pg
» ruby-activerecord-import
» ruby-rack-oauth2
» ruby-rugged
» ruby-task-list
» gem2deb
» node-find-up
» node-matcher
» node-supports-color
» node-array-union
» node-dot-prop
» node-flush-write-stream
» node-irregular-plurals
» node-loud-rejection
» node-make-dir
» node-tmp
» node-strip-ansi


Work left!

Whilst it is clear how big and complex Loomio is, it was not humanly possible to complete the entire package of Loomio.
At the moment, the following tasks are remaining for this project to get close to completion:

» Debug loomio-installer.
» Check which node dependencies are not really needed.
» Package and update the needed dependencies for loomio.
» Package loomio.
» Fix autopkgtests (if humanly possible).
» Maintain it for life :D


Other Debian activities!

Debian is more than just my GSoC organisation to me.
As my NM profile says and I quote,

Debian has really been an amazing journey, an amazing place, and an amazing family!

With such lovely people and teams and with my DM hat on, I have been involved with a lot more than just GSoC. In the last 3 months, my activity within Debian (other than GSoC) can be summarized as follows.

Cloud Team

Since I’ve been interested in the work they do, I joined the team recently and currently helping in packaging image finder.

NEW packages

» python-flask-marshmallow
» python-marshmallow-sqlalchemy


Perl Team

With Gregor, Intrigeri, Yadd, Nodens, and Bremner being there, I learned Perl packaging and helped in maintaining the Perl modules.

NEW packages

» libdata-dumper-compact-perl
» libminion-backend-sqlite-perl
» libmoox-shorthas-perl
» libmu-perl

Updated packages

» libasync-interrupt-perl
» libbareword-filehandles-perl
» libcatalyst-manual-perl
» libdancer2-perl
» libdist-zilla-plugin-git-perl
» libdist-zilla-plugin-makemaker-awesome-perl
» libdist-zilla-plugin-ourpkgversion-perl
» libdomain-publicsuffix-perl
» libfile-find-object-rule-perl
» libfile-flock-retry-perl
» libgeoip2-perl
» libgraphics-colornames-www-perl
» libio-aio-perl
» libio-async-perl
» libmail-box-perl
» libmail-chimp3-perl
» libmath-clipper-perl
» libminion-perl
» libmojo-pg-perl
» libnet-amazon-s3-perl
» libnet-appliance-session-perl
» libnet-cli-interact-perl
» libnet-frame-perl
» libnetpacket-perl
» librinci-perl
» libperl-critic-policy-variables-prohibitlooponhash-perl
» libsah-schemas-rinci-perl
» libstrictures-perl
» libsisimai-perl
» libstring-tagged-perl
» libsystem-info-perl
» libtex-encode-perl
» libxxx-perl


Python Team

Since I only recently learned Python packaging, there are a couple of packages that I worked on which I haven’t pushed yet, but will later this month.

» python3-dotenv
» python3-phonenumbers
» django-phonenumber-field
» django-phone-verify
» Helping newbies (thanks to DC19 talk).


JavaScript Team

Super thanks to Xavier (yadd) and Praveen for being right there. Worked on the following things.

» Helping in webpack transition (bit).
» Helping in nodejs transition.
» Helping in complying pkg-js-tools in all packages.
» Packaging dependencies of ava.
» node-d3-request
» node-find-up
» node-matcher
» node-supports-color
» node-array-union
» node-dot-prop
» node-flush-write-stream
» node-irregular-plurals
» node-loud-rejection
» node-make-dir
» node-tmp
» node-strip-ansi


Golang Team

I joined the Golang team to mostly help in doing the GitLab stuff. Thus did the following things.

» gitlab-workhorse
» gitaly
» Upstream contribution to gitaly.


Ruby Team

This is where I started from. All thanks to Praveen, Abhijith, and Raju.
In the last 3 months, except for maintaining packages for Loomio, I did the following things.

» Helping in maintaining GitLab (one of the maintainers).
» Setting the fasttrack repo; announcements soon!
» Fixing gem2deb for adding d/upstream/metadata.
» Enabling Salsa CI for 1392 packages (yes, I broke salsa :/).
» Reviewing and sponsoring packages.
» Co-chairing the Ruby Team BoF.
» And others.


Others

» Part of DC19 Content Team (thanks to Antonio).
» Part of DC19 Bursary Team (thanks to Jonathan).
» Perl sprint (DebCamp).
» Newbie’s Perspective Towards Debian talk (Open day).
» Chairing Ruby Team BoF.
» Presenting my GSoC project.
» Part of DC19 Video Team.
» Talking about Debian elsewhere (cf: mail archive).
» DC21 Indian bid ;)
» Organising MiniDebConf Goa :D


Acknowledgement :)

Never forget your roots.

And I haven’t. The last 8 months with Debian have been super amazing. Nothing I’d like to change, even if I could. Every person here is a mentor to me.
But above all, there are a couple of people who helped me immensely.
Starting with Pirate Praveen, Rajudev, Abhijith, Sruthi, Gregor, Xavier, Intrigeri, Nodens, Holger, Antonio Terceiro, Kanashiro, Boutil, Georg, Sanyam, Sakshi, Jatin, and Samyak. And of course, my little brother, Aryan.
Sorry if I’m forgetting anyone. Thank y’all :)

NOTE: Sorry for making this extremely long; someone told me to put in all the crap I did in the last 90 days :P
Also, sorry if it gets too long on planet.d.o. :)

Until next time.
:wq for today.

,

Planet DebianRuss Allbery: Review: A Memory Called Empire

Review: A Memory Called Empire, by Arkady Martine

Series: Teixcalaan #1
Publisher: Tor
Copyright: March 2019
ISBN: 1-250-18645-5
Format: Kindle
Pages: 462

Mahit Dzmare grew up dreaming of Teixcalaan. She learned its language, read its stories, and even ventured some of her own poetry, in love with the partial and censored glimpses of its culture that were visible outside of the empire. From her home in Lsel Station, an independent mining station, Teixcalaan was a vast, lurking weight of history, drama, and military force. She dreamed of going there in person. She did not expect to be rushed to Teixcalaan as the new ambassador from Lsel Station, bearing a woefully out-of-date imago that she's barely begun to integrate, with no word from the previous ambassador and no indication of why Teixcalaan has suddenly demanded a replacement.

Lsel is small, precarious, and tightly managed, a station without a planet and with only the resources that it can maintain and mine for itself, but it does have a valuable secret. It cannot afford to lose vital skills to accident or age, and therefore has mastered the technology of recording people's personalities, memories, and skills using a device called an imago. The imago can then be implanted in the brain of another, giving them at first a companion in the back of their mind and, with time, a unification that grants them inherited skills and memory. Valuable expertise in piloting, mining, and every other field of importance need not be lost to death, but can be preserved through carefully tended imago lines and passed on to others who test as compatible.

Mahit has the imago of the previous ambassador to Teixcalaan, but it's a copy from five years after his appointment, and he was the first of his line. Yskandr Aghavn served another fifteen years before the loss of contact and Teixcalaan's emergency summons, never returning home to deposit another copy. Worse, the implantation had to be rushed due to Teixcalaan's demand. Rather than the normal six months of careful integration under active psychiatric supervision, Mahit has had only a month with her new imago, spent on a Teixcalaan ship without any Lsel support.

With only that assistance from home, Mahit's job is to navigate the complex bureaucracy and rich culture of an all-consuming interstellar empire to prevent the ruthlessly expansionist Teixcalaanli from deciding to absorb Lsel Station like they have so many other stations, planets, and cultures before them. Oh, and determine what happened to her predecessor, while keeping the imagos secret.

I love when my on-line circles light up with delight about a new novel, and it turns out to be just as good as everyone said it was.

A Memory Called Empire is a fascinating, twisty, complex political drama set primarily in the City at the heart of an empire, a city filled with people, computer-controlled services, factions, maneuvering, frighteningly unified city guards, automated defense mechanisms, unexpected allies, and untrustworthy offers. Martine weaves a culture that feels down to its bones like an empire at the height of its powers and confidence: glorious, sophisticated, deeply aware of its history, rich in poetry and convention, inward-looking, and alternately bemused by and contemptuous of anyone from outside what Teixcalaan defines as civilization, when Teixcalaan thinks of them at all.

But as good as the setting is (and it's superb, with a deep, lived-in feel), the strength of this book is its characters. Mahit was expecting to be the relatively insignificant ambassador of a small station, tasked with trade negotiations and routine approvals and given time to get her feet under her. But when it quickly becomes clear that Yskandr was involved in some complex machinations at the heart of the Teixcalaan government, she shows admirable skill for thinking on her feet, making fast decisions, and mixing thoughtful reserve and daring leaps of judgment.

Mahit is here alone from Lsel, but she's not without assistance. Teixcalaan has assigned her an asekreta, a cultural liaison who works for the Information Ministry. Her name is Three Seagrass, and she is the best part of this book. Mahit starts wisely suspicious of her, and Three Seagrass starts carefully and thoroughly professional. But as the complexities of Mahit's situation mount, she and Three Seagrass develop a complex and delightful friendship, one that slowly builds on cautious trust and crosses cultural boundaries without ignoring them. Three Seagrass's nearly-unflappable curiosity and guidance is a perfect complement to Mahit's reserve and calculated gambits, and then inverts beautifully later in the book when the politics Mahit uncovers start to shake Three Seagrass's sense of stability. Their friendship is the emotional heart of this story, full of delicate grace notes and never falling into stock patterns.

Martine also does some things with gender and sexuality that are remarkable in how smoothly they lie below the surface. Neither culture in this novel cares much about the gender configurations of sexual partnerships, which means A Memory Called Empire shares with Nicola Griffith novels an unmarked acceptance of same-sex relationships. It's also not eager to pair up characters or put romance at the center of the story, which I greatly appreciated. And I was delighted that the character who navigates hierarchy via emotional connection and tumbling into the beds of the politically influential is, for once, the man.

I am stunned that this is a first novel. Martine has masterful control over both the characters and plot, keeping me engrossed and fully engaged from the first chapter. Mahit's caution towards her possible allies and her discovery of the lay of the political land parallel the reader's discovery of the shape of the plot in a way that lets one absorb Teixcalaanli politics alongside her. Lsel is at the center of the story, but only as part of Teixcalaanli internal maneuvering. It is important to the empire but is not treated as significant or worthy of its own voice, which is a knife-sharp thrust of cultural characterization. And the shadow of Yskandr's prior actions is beautifully handled, leaving both the reader and Mahit wondering whether he was a brilliant strategic genius or in way over his head. Or perhaps both.

This is also a book about empire, colonization, and absorption, about what it's like to delight in the vastness of its culture and history while simultaneously fearful of drowning in it. I've never before read a book that captures the tension of being an ambassador to a larger and more powerful nation: the complex feelings of admiration and fear, and the need to both understand and respect and in some ways crave the culture while still holding oneself apart. Mahit is by turns isolated and accepted, and by turns craves acceptance and inclusion and is wary of it. It's a set of emotions that I rarely see in space opera.

This is one of the best science fiction novels I've read, one that I'll mention in the same breath as Ancillary Justice or Cyteen. It is a thoroughly satisfying story, one that lasted just as long as it should and left me feeling satiated, happy, and eager for the sequel. You will not regret reading this, and I expect to see it on a lot of award lists next year.

Followed by A Desolation Called Peace, which I've already pre-ordered.

Rating: 10 out of 10

Planet DebianAndrew Cater: Cambridge BBQ 2019 - 2

Another day with a garden full of people. A house full of coders, talkers, coffee drinkers and unexpected bread makers - including a huge fresh loaf. Playing "the DebConf card game" for the first time was confusing as anything and a lot of fun. The youngest person there turned out to be one of the toughest players.

Hotter than yesterday - 32 degrees as I've just driven back across country with the sun in my eyes. Sorry to leave everyone there for tomorrow's end of the BBQ, but there'll be another opportunity.

Thanks even more to Steve, Jo and everyone there - it's been a fantastic weekend.

Planet DebianAndrew Cater: Cambridge BBQ 2019

Usual friendly Debian family chaos: a garden full of people last night: lots of chat, lots of catching up and conviviality including a birthday cake. The house was also full: games of cards ensued last thing at night :) Highlights: home made cookies, chilli and cheese bread [and the company as always]. One of the hotter days of the year at 30 degrees.

Now folk are filtering in: the coffee machine is getting a workout and breakfast is happening. Lots more folk expected gradually as the morning progresses: it's 0955 UTC as I'm typing. Today is due to be hotter, apparently. Thanks to Steve and Jo for hosting, as always.

Planet DebianMolly de Blanc: Free software activities (July 2019)

Again, much belated with apologies.

Personal

  • Debian AH (the Anti-Harassment team) rebranded to the Debian Community Team (CT) after our sprint back in June. We had meetings, both following up on things that happened at the meeting and covering typical business. We created a draft of a new team mission statement, which was premiered, so to speak, at DebConf19.
  • While I did not attend in person, I participated remotely in the CT and Outreach BoFs at DC19. Special thanks to the video team for making this possible.
  • The Outreach team also had a meeting.
  • The OSI had its monthly meeting, and the Standing Committee also had a meeting.
  • The OSI Staffing Committee, of which I am a member, had a meeting.
  • I had a meeting with someone interested in working with the OSI.
  • I had weekly meetings with the General Manager of the OSI.
  • Another instance of someone being mean to me on the internet. I am almost losing count.

Professional

  • I learned a lot about the GNOME ecosystem, and the toolkit that is a necessary part of it, and parts of the project that organizations use even if they’re not using the GNOME desktop environment.
  • I had several fun meetings with people about the work we’re doing at GNOME.
  • I worked on fulfilling sponsorship benefits for GUADEC. This mostly means writing social media posts, blog posts, and working with an awesome volunteer to keep the web site updated.
  • I wrote a Friends of GNOME newsletter.
  • I wrote and published a Meet the GNOMEies interview.
  • I met with the Linux App Summit organizing team concerning sponsorships and fundraising for the event. The CFP is open and you should submit!

Planet DebianRuss Allbery: Review: The Calculating Stars

Review: The Calculating Stars, by Mary Robinette Kowal

Series: Lady Astronaut #1
Publisher: Tor
Copyright: July 2018
ISBN: 1-4668-6124-X
Format: Kindle
Pages: 429

Elma York is a (human) computer, working for the early space program in the National Advisory Committee for Aeronautics in 1952. She and her husband Nathaniel, one of the lead engineers, are on vacation in the Poconos when a massive meteorite hits the Atlantic Ocean just off the coast of Maryland, wiping out Washington D.C. and much of the eastern seaboard.

Elma and Nathaniel make it out of the mountains via their private plane (Elma served as a Women Airforce Service Pilot in World War II) to Wright-Patterson Air Force Base in Ohio, where the government is regrouping. The next few weeks are a chaos of refugees, arguments, and meetings, as Nathaniel attempts to convince the military that there's no way the meteorite could have been a Russian attack. It's in doing calculations to support his argument that Elma and her older brother, a meteorologist, realize that far more could be at stake. The meteorite may have kicked enough water vapor into the air to start runaway global warming, potentially leaving Earth with the climate of Venus. If this is true, humans need to get off the planet and somehow find a way to colonize Mars.

I was not a sympathetic audience for this plot. I'm all in favor of space exploration but highly dubious of colonization justifications. It's hard to imagine an event that would leave Earth less habitable than Mars already is, and Mars appears to be the best case in the solar system. We also know who would make it into such a colony (rich white people) and who would be left behind on Earth to die (everyone else), which gives these lifeboat scenarios a distinctly unappealing odor. To give her credit, Kowal postulates one of the few scenarios that might make living on Mars an attractive alternative, but I'm fairly sure the result would be the end of humanity. On this topic, I'm a pessimistic grinch.

I loved this book.

Some of that is because this book is not about the colonization. It's about the race to reach the Moon in an alternate history in which catastrophe has given that effort an international mandate and an urgency grounded in something other than great-power competition. It's also less about the engineering and the male pilots and more about the computers: Elma's world of brilliant women, many of them experienced WW2 transport pilots, stuffed into the restrictive constraints of 1950s gender roles. It's a fictionalization of Hidden Figures and Rise of the Rocket Girls, told from the perspective of a well-meaning Jewish woman who is a victim of sexist and religious discrimination and is dealing (unevenly) with her own racism.

But that's not the main reason why I loved this book. The surface plot is about gender roles, the space program, racism, and Elma's determination to be an astronaut. The secondary plot is about anxiety, about what it does to one's life and one's thought processes, and how to manage it and overcome it, and it's taut, suspenseful, tightly observed, and vividly empathetic. This is one of the best treatments of living with a mental illness that I've read.

Elma has clinical anxiety, although she isn't willing to admit it until well into the book. But once I knew to look for it, I saw it everywhere. The institutional sexism she faces makes the reader want to fight and rage, but Elma turns defensively inward and tries to avoid creating conflict. Her main anxiety trigger is being the center of the attention of strangers, fearing their judgment and their reactions. She masks it with southern politeness and deflection and the skill of smoothing over tense situations, until someone makes her angry. And until she finds something that she wants more than she wants to avoid her panic attacks: to be an astronaut, to see space, and to tell others that they can as well.

One of the strengths of this book is Kowal's ability to write a marriage, to hint at what Elma sees in Nathaniel around the extended work hours and quietness. They play silly bedroom games, they rely on each other without a second thought, and Nathaniel knows how anxious she is and is afraid for her and doesn't know what to do. He can't do much, since Elma has to find her own treatment and her own coping mechanisms and her own way of reframing her goals, but he's quietly and carefully supportive in ways that I thought were beautifully portrayed. His side of this story is told in glimmers and moments, and the reader has to do a lot of work to piece together what he's thinking, but he quietly became one of my favorite characters in this book.

I should warn that I read a lot into this book. I hit on the centrality of anxiety to Elma's experience about halfway through and read it backwards and forwards through the book, and I admit I may be doing a lot of heavy lifting for the author. The anxiety thread is subtle, which means there's a risk that I'm manufacturing some pieces of it. Other friends who have read the book didn't notice it the way that I did, so your mileage may vary. But as someone who has some tendencies towards anxiety myself, this spoke to me in ways that made it hard to read at times but glorious in the ending. Every point in the book where Elma got angry enough to push through her natural tendency not to make a fuss is wonderfully satisfying.

This book is set very much in its time, which means that it is full of casual, assumed institutional sexism. Elma fights it in places, but she more frequently endures it and works around it, which may not be the book that one is in the mood to read. This is a book about feminism, but it's a conditional and careful feminism that tactically cedes a lot of the cultural and conversational space.

There is also quite a lot of racism, to which Elma reacts like a well-intentioned (and somewhat anachronistic) white woman. There's a very fine line between the protagonist using some of their privilege to help others and a white savior narrative, and I'm not sure Kowal walks it successfully throughout the book. Like the sexism, the racism of the setting is deep and structural, Elma is not immune even when she thinks she's adjusting for it, and this book only pushes back against it around the edges. I appreciated the intent to show some of the complexity of intersectional oppression, but I think it lands a bit awkwardly.

But, those warnings aside, this is both a satisfying story of the early space program shifted even earlier to force less reliance on mechanical computers, and a tense and compelling story of navigating anxiety. It tackles the complex and difficult problems of conserving and carefully using one's own energy and fortitude, and of deciding what is worth getting angry about and fighting for. The first-person narrative voice was very effective for me, particularly once I started treating Elma as an unreliable narrator in denial about how much anxiety has shaped her life and started reading between the lines and looking for her coping strategies. I have nowhere near the anxiety issues that Elma has, but I felt seen by this book despite a protagonist who is apparently totally unlike me.

Although I would have ranked Record of a Spaceborn Few higher, The Calculating Stars fully deserves its Hugo, Nebula, and Locus Award wins. Highly recommended, and I will definitely read the sequel.

Followed by The Fated Sky.

Rating: 9 out of 10

,

Planet DebianDirk Eddelbuettel: RcppExamples 0.1.9

A new version of the RcppExamples package is now on CRAN.

The RcppExamples package provides a handful of short, concrete working examples showing how to set up basic R data structures in C++. It also provides a simple example for packaging with Rcpp.

This release brings a number of small fixes, including two from contributed pull requests (extra thanks for those!), and updates the package in a few spots. The NEWS extract follows:

Changes in RcppExamples version 0.1.9 (2019-08-24)

  • Extended DateExample to use more new Rcpp features

  • Do not print DataFrame result twice (Xikun Han in #3)

  • Missing parenthesis added in man page (Chris Muir in #5)

  • Rewrote StringVectorExample slightly to not run afoul of the -Wnoexcept-type warning for C++17-related name mangling changes

  • Updated NAMESPACE and RcppExports.cpp to add registration

  • Removed the no-longer-needed #define for new Datetime vectors

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSteinar H. Gunderson: Chess article

Last November (!), I was interviewed for a magazine article about computer chess and how it affects human play. Only a few short fragments remain of the hour-long discussion, but the article turned out to be very good nevertheless, and now it's freely available at last. Recommended Sunday read.

Planet DebianThomas Lange: New FAI.me feature

FAI.me, the build service for installation and cloud images, has a new feature. When building an installation image, you can enable automatic reboot or shutdown at the end of the installation in the advanced options. This was implemented at the request of users who use the service for their VM instances or for computers without any keyboard connected.

The FAI.me homepage.

FAI.me

Planet DebianDidier Raboud: miniDebConf19 Vaumarcus – Oct 25-27 2019 – Registration is open

The Vaumarcus miniDebConf19 is happening! Come see the fantastic view from the shores of Lake Neuchâtel, in Switzerland! We’re going to have two-and-a-half days of presentations and hacking in this marvelous venue and anybody interested in Debian development is welcome.

Registration is open

Registration is open now, and free, so go add your name and details on the Debian wiki: Vaumarcus/Registration

We’ll accept registrations until late, but don’t wait too long before making your travel plans! We have you covered with a lot of attendee information already: Vaumarcus.

Sponsors wanted

We’re looking for sponsors willing to help make this event possible, and to help make it easier for anyone interested to attend. We have not yet decided upon sponsor categories and benefits, but come talk to us already if you can help!

More hands wanted

Things are on track, but we need more help. Specifically, Content, Bar, Sponsoring and Attendee support would all benefit from more hands.

Get in touch

We gather on the #debian.ch channel on irc.debian.org and on the debian-switzerland@lists.debian.org list. For more private matters, talk to board@debian.ch!

Looking forward to seeing a lot of you in Vaumarcus!

(This was also sent to debian-devel-announce@l.d.o and debian-events-eu@l.d.o.)

Planet DebianJoachim Breitner: ICFP 2019

ICFP 2019 in Berlin ended yesterday, and it was – as always – a great pleasure. This year was particularly noteworthy for the quite affordable conference hotel and the absolutely amazing food during the coffee breaks.

Since I am no longer a proper academic, I unsurprisingly did not have real research to present. Luckily I found ways to not just be a passive participant this year:

  • At FARM, I presented Kaleidogen, a small game (or toy, some would say) of mine. The room was packed with people, so thanks for all your interest! If you missed it, you can soon see the recording or read the demo abstract.

  • At PLMW, the mentoring workshop for young researchers, I ran the “Social event” together with Niki Vazou. Like last year, we randomly grouped the students and held a little competition where they had to match program listings to languages and algorithms. This was great fun, and we even managed to resolve the sudden problem of two ties with an ad-hoc extra quiz.

  • During his “State of GHC” speech, Simon Peyton Jones asked me to speak about the GHC Proposal Process for a few slides.

  • And since that is not enough stage time, I secured two spots in local stand-up comedy open mics on Monday and Friday, and even dragged sizable crowds of ICFP participants to these venues. One was a boat, and the other one a pretty dodgy bar in Neukölln, so that alone was a memorable experience. And the host was visibly surprised when his joke “I couldn’t be a software developer – I can’t commit” was met by such a roaring response…

Anyway, ICFP is over; back to disappearing into the churn of everyday work. I hope to see you all next year.

Planet DebianRuss Allbery: Review: Thinking, Fast and Slow

Review: Thinking, Fast and Slow, by Daniel Kahneman

Publisher: Farrar, Straus and Giroux
Copyright: 2011
ISBN: 1-4299-6935-0
Format: Kindle
Pages: 448

Daniel Kahneman is an academic psychologist and the co-winner of the 2002 Nobel Memorial Prize in Economic Sciences for his foundational work on behavioral economics. With his long-time collaborator Amos Tversky, he developed prospect theory, a theory that describes how people choose between probabilistic alternatives involving risk. That collaboration is the subject of Michael Lewis's book The Undoing Project, which I have not yet read but almost certainly will.

This book is not only about Kahneman's own work, although there's a lot of that here. It's a general overview of cognitive biases and errors as explained through an inaccurate but useful simplification: modeling human thought processes as two competing systems with different priorities, advantages, and weaknesses. The book mostly focuses on the contrast between the fast, intuitive System One and the slower, systematic System Two, hence the title, but the last section of the book gets into hedonic psychology (the study of what makes experiences pleasant or unpleasant). That section introduces a separate, if similar, split between the experiencing self and the remembering self.

I read this book for the work book club, although I only got through about a third of it before we met to discuss it. For academic psychology, it's quite readable and jargon-free, but it's still not the sort of book that's easy to read quickly. Kahneman's standard pattern is to describe an oddity in thinking that he noticed, a theory about the possible cause, and the outcome of a set of small experiments he and others developed to test that theory. There are a lot of those small experiments, and all the betting games with various odds and different amounts of money blurred together unless I read slowly and carefully.

Those experiments also raise the elephant in the room, at least for me: how valid are they? Psychology is one of the fields facing a replication crisis. Researchers who try to reproduce famous experiments are able to do so only about half the time. On top of that, many of the experiments Kahneman references here felt artificial. In daily life, people spend very little time making bets of small amounts of money on outcomes with known odds. The bets are more likely to be for more complicated things such as well-being or happiness, and the odds of most real-world situations are endlessly murky. How much does that undermine Kahneman's conclusions? Kahneman himself takes the validity of this type of experiment for granted and seems uninterested in this question, at least in this book. He has a Nobel Prize and I don't, so I'm inclined to trust him, but it does give me some pause.

It didn't help that Kahneman cites the infamous marshmallow experiment approvingly and without caveats, which is a pet peeve of mine and means he fails my normal test for whether a popular psychology writer has taken a sufficiently thoughtful approach to analyzing the validity of experiments.

That caveat aside, this book is fascinating. One of the things that Kahneman does throughout, which is both entertaining and convincing, is show the reader one's brain making mistakes in real time. It's a similar experience to looking at optical illusions (indeed, Kahneman makes that comparison explicitly). Once told what's going on, you can see the right answer, but your brain is still determined to make an error.

Here's an example:

A bat and ball cost $1.10.
The bat costs one dollar more than the ball.
How much does the ball cost?

I've prepped you by talking about cognitive errors, so you will probably figure out that the answer is not 10 cents, but notice how much your brain wants the answer to be 10 cents, and how easy it is to be satisfied with that answer if you don't care that much about the problem, even though it's wrong. The book is full of small examples like this.
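
To spell out the algebra (my worked example, not Kahneman's text): if the ball costs x, the bat costs x + 1.00, so x + (x + 1.00) = 1.10 and x = 0.05. A quick check in Python, using integer cents to sidestep floating-point noise:

total_cents = 110        # bat + ball
difference_cents = 100   # the bat costs this much more than the ball
ball = (total_cents - difference_cents) // 2   # 5 cents, not the intuitive 10
bat = ball + difference_cents                  # 105 cents
assert ball + bat == total_cents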

Kahneman's explanation for the cognitive mistake in this example is the subject of the first part of the book: two-system thinking. System one is fast, intuitive, pattern-matching, and effortless. It's our default, the system we use to navigate most of our lives. System two is deliberate, slow, methodical, and more accurate, but it's effortful, to a degree that the effort can be detected in a laboratory by looking for telltale signs of concentration. System two applies systematic rules, such as the process for multiplying two-digit numbers together or solving math problems like the above example correctly, but it takes energy to do this, and humans have a limited amount of that energy. System two is therefore lazy; if system one comes up with a plausible answer, system two tends to accept it as good enough.

This in turn provides an explanation for a wealth of cognitive biases that Kahneman discusses in part two, including anchoring, availability, and framing. System one is bad at probability calculations and relies heavily on availability. For example, when asked how common something is, system one will attempt to recall an example of that thing. If an example comes readily to mind, system one will decide that it's common; if it takes a lot of effort to think of an example, system one will decide it's rare. This leads to endless mistakes, such as worrying about memorable "movie plot" threats such as terrorism while downplaying the risks of far more common events such as car accidents and influenza.

The third part of the book is about overconfidence, specifically the prevalent belief that our judgments about the world are more accurate than they are and that the role of chance is less than it actually is. This includes a wonderful personal anecdote from Kahneman's time in the Israeli military evaluating new recruits to determine what roles they would be suited for. Even after receiving clear evidence that their judgments were no better than random chance, everyone involved kept treating the interview process as if it had some validity. (I was pleased by the confirmation of my personal bias that interviewing is often a vast waste of everyone's time.)

One fascinating takeaway from this section is that experts are good at making specific observations of fact that an untrained person would miss, but are bad at weighing those facts intuitively to reach a conclusion. Keeping expert judgment of decision factors but replacing the final decision-making process with a simple algorithm can provide a significant improvement in the quality of judgments. One example Kahneman uses is the Apgar score, now widely used to determine whether a newborn is at risk of a medical problem.

The fourth part of the book discusses prospect theory, and this is where I got a bit lost in the endless small artificial gambles. However, the core idea is simple and quite fascinating: humans tend to make decisions based on the potential value of losses and gains, not the final outcome, and the way losses and gains are evaluated is not symmetric and not mathematical. Humans are loss-avoiding, willing to give up expected value to avoid something framed as a loss, and are willing to pay a premium for certainty. Intuition also breaks down at the extremes; people are very bad at correctly understanding odds like 1%, instead treating it like 0% or more than 5% depending on the framing.

I was impressed that Kahneman describes the decision-making model that preceded prospect theory, explains why it was more desirable because it was simpler and was only abandoned for prospect theory because prospect theory made meaningfully more accurate predictions, and then pivots to pointing out the places where prospect theory is clearly wrong and an even more complicated model would be needed. It's a lovely bit of intellectual rigor and honesty that too often is missing from both popularizations and from people talking about their own work.

Finally, the fifth section of the book is about the difference between life as experienced and life as it is remembered. This includes a fascinating ethical dilemma: the remembering self is highly sensitive to how unpleasant an experience was at its conclusion, but remarkably insensitive to the duration of pain. Experiments will indicate that someone will have a less negative memory of a painful event where the pain gradually decreased at the end, compared to an event where the pain was at its worst at the end. This is true even if the worst moment of pain was the same in both cases and the second event was shorter overall. How should we react to that in choosing medical interventions? The intuitive choice for pain reduction is to minimize the total length of time someone is in pain or reduce the worst moment of pain, both of which are correctly reported as less painful in the moment. But this is not the approach that will be remembered as less painful later. Which of those experiences is more "real"?

There's a lot of stuff in this book, and if you are someone who (unlike me) is capable of reading more than one book at a time, it may be a good book to read slowly in between other things. Reading it straight through, I got tired of the endless descriptions of experimental setup. But the two-system description resonated with me strongly; I recognized a lot of elements of my quick intuition (and my errors in judgment based on how easy it is to recall an example) in the system one description, and Kahneman's description of the laziness of system two was almost too on point. The later chapters were useful primarily as a source of interesting trivia (and perhaps a trick to improve my memory of unpleasant events), but I think being exposed to the two-system model would benefit everyone. It's a quick and convincing way to remember to be wary of whole classes of cognitive errors.

Overall, this was readable, only occasionally dense, and definitely thought-provoking, if quite long. Recommended if any of the topics I've mentioned sound interesting.

Rating: 7 out of 10

,

CryptogramFriday Squid Blogging: Vulnerabilities in Squid Server

It's always nice when I can combine squid and security:

Multiple versions of the Squid web proxy cache server built with Basic Authentication features are currently vulnerable to code execution and denial-of-service (DoS) attacks triggered by the exploitation of a heap buffer overflow security flaw.

The vulnerability, present in Squid 4.0.23 through 4.7, is caused by incorrect buffer management, which exposes vulnerable installations to "a heap overflow and possible remote code execution attack when processing HTTP Authentication credentials."

"When checking Basic Authentication with HttpHeader::getAuth, Squid uses a global buffer to store the decoded data," says MITRE's description of the vulnerability. "Squid does not check that the decoded length isn't greater than the buffer, leading to a heap-based buffer overflow with user controlled data."

The flaw was patched by the web proxy's development team with the release of Squid 4.8 on July 9.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianBastian Venthur: Introducing Noir

tl;dr

Noir is a drop-in replacement for Black (the uncompromising code formatter), with the default line length set to PEP-8's preferred 79 characters. If you want to use it, just replace black with noir in your requirements.txt and/or setup.py and you're good to go.

Black is a Python code formatter that reformats your code to make it more PEP-8 compliant. It implements a subset of PEP-8; most notably, it deliberately ignores PEP-8's suggestion for a line length of 79 characters and defaults to a length of 88. I find that decision and the reasoning behind it somewhat arbitrary. PEP-8 is a good standard, and there's a lot of value in having a style guide that is generally accepted and has a lot of tooling to support it.

When people ask to change Black's default line length to 79, the issue is usually closed with a reference to the reasoning in the README. But Black's developers are at least aware of this controversial decision, as the only option that allows you to configure the (otherwise uncompromising) code formatter is, in fact, the line length.

Apart from that, Black is a good formatter that's gaining more and more popularity. And, of course, the developers have every right to follow their own taste. However, since Black is licensed under the terms of the MIT license, I tried to see what needs to be done in order to fix the line length issue.

Step 1: Changing the Default

This is the easiest part. You only have to change the DEFAULT_LINE_LENGTH value in black.py from 88 to 79, and black works as expected. Bonus points for doing the same in black.vim and pyproject.toml, but not strictly necessary.
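
For illustration, the substantive change amounts to a single line, roughly as follows (the exact surrounding code in black.py may differ between releases):

# black.py, upstream default:
DEFAULT_LINE_LENGTH = 88

# noir:
DEFAULT_LINE_LENGTH = 79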

Step 2a: Fixing the Tests

Now comes the fun part. Black has an extensive test suite, and suddenly a lot of tests are failing because the fixtures that compare the unformatted input with the expected, formatted output were written with a line length of 88 characters in mind. To make it more interesting, the expected output comes in two forms: (1) as normal reformatted Python code (which is rather easy to fix) and (2) as a diff between the input and the expected output. The latter was really painful to fix -- although I'm very much used to reading diffs, I don't usually write them.

Step 2b: Fixing the Tests

After all fixtures were updated, some tests were still failing. It turned out that Black runs itself on its own source code as part of its test suite, making the tests fail if Black's code does not conform to Black's coding standards. While this is a genius idea, it meant that I had to reformat Black's code to match the new 79-character line length, generating a giant diff that is functionally unrelated to the fix I wanted to make but is now part of it anyway. This of course makes the whole patch horrible to maintain if you plan to follow along upstream's master branch.

Step 3: Publish

Since we already got this far, why not publish the fixed version of Black? To my surprise the name noir was still available on PyPI, so I renamed my version of Black to Noir and uploaded it there.

You can install it via:

$ pip install noir

Since I didn't change anything else, this is literally a drop-in replacement for Black. All you have to do is replace black with noir in your requirements.txt and/or setup.py and you're good to go. The script that executes Black is still called black and the server is still called blackd.

Outlook

While this was a fun exercise, the question remains what to do with it. I'll try to follow upstream and update my patch whenever a new version comes out. As new versions of Black are released only a handful of times a year, this might be feasible.

Depending on how painful it is to maintain the patch for the tests, I might drop the tests altogether, rely on upstream's tests passing on their side, and just maintain the trivial patch from Step 1: changing the DEFAULT_LINE_LENGTH. That can probably be automated somehow using GitHub Actions -- and I'll probably look into that at some point.

Best case scenario, of course, would be if Python changed its recommended line length to 88, so that I wouldn't have to maintain noir in the first place :)

Planet DebianIustin Pop: Aftershokz Aeropex first impressions

I couldn’t sleep one evening so I was randomly[1] browsing the internet. One thing led to another and I landed on a review of “bone-conducting” headphones, designed for safe listening to music or talking on the phone during sports.

I was intrigued. I’ve written before that proper music really motivates me when doing high-intensity efforts, so this seemed quite interesting. After reading more about it, and after finding that one can buy such things from local shops, I ordered a pair of Aftershokz Aeropex headphones.

To my surprise, they actually work as advertised. I’d say, they work despite the fancy company name :) There is a slight change to the tone of the sound (music) as compared to normal headphones, and the quality is not like one would expect from high-quality over-ear ones, but that’s beside the point - the kind of music that I’d like to listen to while pedalling up a hill doesn’t require very high fidelity[2].

And with regard to awareness of your surroundings, there is for sure some decrease, but I’d say it's minimal (especially if you don’t listen at high volume). There is no “closed bubble” effect at all as you get with normal (even open) headphones, and definitely not the one you get with in-ear ones. So I’d say this kind of headphone is reasonably safe, if you are careful.

So, first test: commute to work and back. On the way to work it was very windy, so the wind was mostly what I was hearing (especially during cross-winds), but it was still OK. Enjoyed the ride, nothing special.

On the return though… it was quite glorious. Normally (in Garmin speak) I get a small training effect: 0.8-1.0 aerobic, and much less anaerobic, around 0.5. It’s a very short commute, but I try to push as hard as I can. Today however, I got 1.3 aerobic, and 1.6 anaerobic, because I spent quite a bit of time standing on the uphills. Higher anaerobic than aerobic on my commute is very rare… Also the “intensity minutes” that I got for today were ~50% higher than on usual commute days. Max HR was not really changed, but the average HR was ~10bpm higher, which confirms I was able to motivate myself better. No Strava segment achievements though, since I was on a slow bike, but still, it felt much better than the same bike on other days.

I don’t know how the headphones feel when wearing them for a few hours at a time; they might be somewhat unpleasant, especially under the bike helmet, but on my short commute they were OK. But a 2-3-5 hour race is something entirely different.

Anyway, from this first quick test it seems this is an interesting technology. I guess I’ll have to see how it helps in a real effort. And if it doesn’t work well, I can blame the choice of music :)


  1. I was looking for updated Fenix 6 rumours. Either Garmin is playing a prank or it (the F6) will be quite cool; bigger screen, solar, more battery options, etc. etc.

  2. Rhythm/beat is very important, not so much good voice or high dynamic range. And when tired, most anything that is not soothing.

Valerie AuroraHow to avoid supporting sexual predators

[TW: child sex abuse]

Recently, I received an email from a computer security company asking for more information on why I refuse to work with them. My reason? The company was founded by a registered child sex offender who still serves as its CTO, which I found out during my standard client research process.

My first reaction was, “Do I really need to explain why I won’t work with you???” but as I write this, we’re at the part of the Jeffrey Epstein news cycle where we are learning about the people in computer science who supported Epstein—after Epstein pleaded guilty to two counts of “procuring prostitution with a child under 18,” registered as a sex offender, and paid restitution to dozens of victims. As someone who outed her own father as a serial child molester, I can tell you that it is quite common for people to support and help known sexual predators in this way.

I would like to share how I actively avoid supporting sexual predators, as someone who provides diversity and inclusion training, mostly to software companies:

  1. When a new client approaches me, I find the names of the CEO, CTO, COO, board members, and founders—usually on the “About Us” or “Who We Are” or “Founders” page of the company’s web site. Crunchbase and LinkedIn are also useful for this step.
  2. For each of the CEO, CTO, COO, board members, and/or founders, I search their name plus “allegations,” “sexism,” “sexual assault,” “sexual harassment,” and “women.” I do this for the company name too.
  3. If I find out any executives, board members, or founders have been credibly accused of sexual harassment or assault, I refuse to work with that company.
  4. I look up the funders of the company on Crunchbase. If any of their funders are listed on Sexism and Racism in Venture Capital, I give the company extra scrutiny.
  5. If the company agreed to take funding from a firm (or person) after knowing the lead partner(s) were sexual harassers or predators, I refuse to work with that company.

If you don’t have time to do this personally, I recommend hiring or contracting with someone to do it for you.

That’s just part of my research process (I search for other terms, such as “racism”). This has saved me from agreeing to help make money for a sexual predator or harasser many times. Specifically, I’ve turned down 13 out of 303 potential clients for this reason, or about 4% of clients who approached me. To be sure, it has also cost me money—I’d estimate at least $50,000—but I’d like to believe that my reputation and conscience are worth more than that. If you’re not in a position where you can say no to supporting a sexual predator, you have my sympathy and respect, and I hope you can find a way out sooner or later.

Your research process will look different depending on your situation, but the key elements will be:

  1. Assume that sexual predators exist in your field and you don’t know who all of them are.
  2. When you are asked to work with or support someone new, do research to find out if they are a sexual predator.
  3. When you find out someone is probably a sexual predator, refuse to support them.

What do I do if, say, the CEO has been credibly accused of sexual harassment or assault but the company has taken appropriate steps to make amends and heal the harm done to the victims? I don’t know, because I can’t remember a potential client who did that. I’ve had plenty that published a non-apology, forced victims to sign NDAs for trivial sums of money, or (very rarely) fired the CEO but allowed them to keep all or most of their equity, board seat, voting rights, etc. That’s not enough, because the CEO hasn’t shown remorse, made amends, or removed themselves from positions of power.

I don’t think all sexual predators should be ostracized completely, but I do think everyone has a moral responsibility not to help known sexual predators back into positions of power and influence without strong evidence of reform. Power and influence are privileges which should only be granted to people who are unlikely to abuse them, not rights which certain people “deserve” as long as they claim to have reformed. Someone with a history of sexually predatory behavior should be assumed to be dangerous unless exhaustively proven otherwise. One sign of complete reform is that the former sexual predator will themselves avoid and reject situations in which power and access would make sexual abuse easy to resume.

In this specific case, the CTO of this company maintains a public web site which briefly and vaguely mentions the harm done to victims of sex abuse—and then devotes the majority of the text to passionately advocating for the repeal of sex offender registry laws because of the incredible harm they do to the health and happiness of convicted sex offenders. So, no, I don’t think he has changed meaningfully, he is not a safe person to be around, he should not be the CTO of a computer security company, and I should not help him gain more wealth.

Don’t be the person helping the sexual predator insinuate themself back into a position with easy access to victims. If your first instinct is to feel sorry for the powerful and predatory, you need to do some serious work on your sense of empathy. Plenty of people have shared what it’s like to be the victim of sexual harassment and assault; go read their stories and try to imagine the suffering they’ve been through. Then compare that to the suffering of people who occasionally experience moderate consequences for sexually abusing people with less power than themselves. I hope you will adjust your empathy accordingly.

Sociological ImagesFamily Matters

The ‘power elite’ as we conceive it, also rests upon the similarity of its personnel, and their personal and official relations with one another, upon their social and psychological affinities. In order to grasp the personal and social basis of the power elite’s unity, we have first to remind ourselves of the facts of origin, career, and style of life of each of the types of circle whose members compose the power elite.

— C. Wright Mills. 1956. The Power Elite. Oxford University Press

President John F. Kennedy addresses the Prayer Breakfast in 1961. Wikimedia Commons.

A big question in political sociology is “what keeps leaders working together?” The drive to stay in public office and common business interests can encourage elites to cooperate, but politics is still messy. Different constituent groups and social movements demand that representatives support their interests, and the U.S. political system was originally designed to use this big, diverse set of factions to keep any single person or party from becoming too powerful.

Sociologists know that shared culture, or what Mills calls a “style of life,” is really important among elites. One of my favorite profiles of a style of life is Jeff Sharlet’s The Family, a look at how one religious fellowship has a big influence on the networks behind political power in the modern world. The book is a gripping case of embedded reporting that shows how this elite culture works. It also has a new documentary series:

When we talk about the religious right in politics, it is easy to jump to images of loud, pro-life protests and controversial speakers. What interests me about the Family is how the group has worked so hard to avoid this contentious approach. Instead, everything is geared toward simply getting newcomers to think of themselves as elites, bringing leaders together, and keeping them connected. A major theme in the first episode of the series is just how simple the theology is (“Jesus plus nothing”) and how quiet the group is, even drawing comparisons to the mafia.

Vipassana Meditation in Chiang Mai, Thailand. Source: Matteo, Flickr CC.

Sociologists see similar trends in other elite networks. In research on how mindfulness and meditation caught on in the corporate world, Jaime Kucinskas calls this “unobtrusive organizing.” Both the Family and the mindfulness movement show how leaders draw on core theological ideas in Christianity and Buddhism, but also modify those ideas to support their relationships in business and government. Rather than challenging those institutions, adapting and modifying these traditions creates new opportunities for elites to meet, mingle, and coordinate their work.

When we study politics and culture, it is easy to assume that core beliefs make people do things by giving them an agenda to follow. These cases are important because they show how that’s not always the point; sometimes core beliefs just shape how people do things in the halls of power.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramLicense Plate "NULL"

There was a DefCon talk by someone with the vanity plate "NULL." The California system assigned him every ticket with no license plate: $12,000.

Although the initial $12,000 worth of fines was removed, the private company that administers the database didn't fix the issue, and new NULL tickets are still showing up.

The unanswered question is: now that he has a way to get parking fines removed, can he park anywhere for free?

And this isn't the first time this sort of thing has happened. Wired has a roundup of people whose license plates read things like "NOPLATE," "NO TAG," and "XXXXXXX."

Planet DebianKai-Chung Yan: My Open-Source Activities from January to August 2019

Welcome, reader! This is an infrequently updated post series that logs my activities within open-source communities. I do not work on open source full-time, although I sincerely would love to. Therefore the posts may cover a ridiculously long period (even a whole year).

Debian & Google Summer of Code

Debian is a general-purpose Linux distribution that is widely used on the planet. I am a Debian Developer who works on packages related to Android SDK and the Java ecosystem.

I started a new package in an attempt to build the Android framework android.jar using the upstream build systems, involving Ninja, Soong and others. Since the beginning we have been writing our own (very simple) makefiles to build the binaries in AOSP, because their build logic tends to be simple and straightforward. That approach broke down when we worked on android.jar: building it requires digging into so much code that the makefiles became incredibly hard to maintain, which is why we still haven’t brought in any newer version since android-framework-23. This is problematic, as developers can’t build any apps that target Android 7+.

After a month of work, this package is finally done. Once all its dependencies are packaged, it will be ready to upload. This is where the students of Google Summer of Code (GSoC) come in!

This year’s GSoC projects related to Android SDK are:

Thanks to their hard work, we managed to upload these packages to Debian:

Voidbuilder

Voidbuilder is a simple program that mimics pbuilder but uses Docker and requires zero configuration. I have been using it privately and am quite satisfied.

I made some bugfixes and adopted Node.js 12 so that it can make use of the latest experimental ES Modules support. Versions 1.0.0 and 1.0.1 have been released.

Worse Than FailureError'd: One Size Fits All

"Multi-platform AND multi-gender! Who knew SSDs could be so accomodating?" Felipe C. wrote.


"This is a progress indicator from a certain Australian "Enterprise" ERP vendor. I suspect their sales guys use it to claim that their software updates over 1000% faster than their competitors," Erin D. writes.


Bruce W. writes, "I guess LinkedIn wants me to know that I'm not as popular as I think."


"According to Icinga's Round Trip Average calculation, one of our servers must have been teleported about a quarter of the way to the center of the Milky Way. The good news is that I have negative packet loss on that route. Guess the packets got bored on the way," Mike T. writes.


"From undefined to invalid, this bankruptcy site has it all...or is it nothing?" Pascal writes.


[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityBreach at Hy-Vee Supermarket Chain Tied to Sale of 5M+ Stolen Credit, Debit Cards

On Tuesday of this week, one of the more popular underground stores peddling credit and debit card data stolen from hacked merchants announced a blockbuster new sale: More than 5.3 million new accounts belonging to cardholders from 35 U.S. states. Multiple sources now tell KrebsOnSecurity that the card data came from compromised gas pumps, coffee shops and restaurants operated by Hy-Vee, an Iowa-based company that operates a chain of more than 245 supermarkets throughout the Midwestern United States.

Hy-Vee, based in Des Moines, announced on Aug. 14 it was investigating a data breach involving payment processing systems that handle transactions at some Hy-Vee fuel pumps, drive-thru coffee shops and restaurants.

The restaurants affected include Hy-Vee Market Grilles, Market Grille Expresses and Wahlburgers locations that the company owns and operates. Hy-Vee said it was too early to tell when the breach initially began or for how long intruders were inside their payment systems.

But typically, such breaches occur when cybercriminals manage to remotely install malicious software on a retailer’s card-processing systems. This type of point-of-sale malware is capable of copying data stored on a credit or debit card’s magnetic stripe when those cards are swiped at compromised payment terminals. This data can then be used to create counterfeit copies of the cards.

Hy-Vee said it believes the breach does not affect payment card terminals used at its grocery store checkout lanes, pharmacies or convenience stores, as these systems rely on a security technology designed to defeat card-skimming malware.

“These locations have different point-of-sale systems than those located at our grocery stores, drugstores and inside our convenience stores, which utilize point-to-point encryption technology for processing payment card transactions,” Hy-Vee said. “This encryption technology protects card data by making it unreadable. Based on our preliminary investigation, we believe payment card transactions that were swiped or inserted on these systems, which are utilized at our front-end checkout lanes, pharmacies, customer service counters, wine & spirits locations, floral departments, clinics and all other food service areas, as well as transactions processed through Aisles Online, are not involved.”

According to two sources who asked not to be identified for this story — including one at a major U.S. financial institution — the card data stolen from Hy-Vee is now being sold under the code name “Solar Energy,” at the infamous Joker’s Stash carding bazaar.

An ad at the Joker’s Stash carding site for “Solar Energy,” a batch of more than 5 million credit and debit cards sources say was stolen from customers of supermarket chain Hy-Vee.

Hy-Vee said the company’s investigation is continuing.

“We are aware of reports from payment processors and the card networks of payment data being offered for sale and are working with the payment card networks so that they can identify the cards and work with issuing banks to initiate heightened monitoring on accounts,” Hy-Vee spokesperson Tina Pothoff said.

The card account records apparently stolen from Hy-Vee, known as “dumps,” are being sold on Joker’s Stash for prices ranging from $17 to $35 apiece. Buyers typically receive a text file that includes all of their dumps. Those individual dump records — when encoded onto a new magnetic stripe on virtually anything the size of a credit card — can be used to fraudulently purchase merchandise in big box stores.

As noted in previous stories here, the organized cyberthieves involved in stealing card data from main street merchants have gradually moved down the food chain from big box retailers like Target and Home Depot to smaller but far more plentiful and probably less secure merchants (either by choice or because the larger stores became a harder target).

It’s really not worth spending time worrying about where your card number may have been breached, since it’s almost always impossible to say for sure and because it’s common for the same card to be breached at multiple establishments during the same time period.

Just remember that while consumers are not liable for fraudulent charges, it may still fall to you the consumer to spot and report any suspicious charges. So keep a close eye on your statements, and consider signing up for text message notifications of new charges if your card issuer offers this service. Most of these services also can be set to alert you if you’re about to miss an upcoming payment, so they can also be handy for avoiding late fees and other costly charges.

Rondam RamblingsFedex: three months and counting

It has now been three months since we shipped a package via Fedex that turned out to be undeliverable (we sent it signature-required, and the recipient, unbeknownst to us, had moved).  We expected that in a situation like that, the package would simply be returned to us, but it wasn't because we paid cash for the original shipment and (again, unbeknownst to us) the shipping cost doesn't include

CryptogramModifying a Tesla to Become a Surveillance Platform

From DefCon:

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car's built-in cameras­ -- the same dash and rearview cameras providing a 360-degree view used for Tesla's Autopilot and Sentry features­ -- into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically put an alert on the Tesla's display and the user's phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.

Worse Than FailureKeeping Busy

Djungarian Hamster Pearl White run wheel

In 1979, Argle was 18, happy to be working at a large firm specializing in aerospace equipment. There was plenty of opportunity to work with interesting technology and learn from dozens of more senior programmers—well, usually. But then came the day when Argle's boss summoned him to his cube for something rather different.

"This is a listing of the code we had prior to the last review," the boss said, pointing to a stack of printed Fortran code that was at least 6 inches thick. "This is what we have now." He gestured to a second printout that was slightly thicker. "I need you to read through this code and, in the old code, mark lines with 'WAS' where there was a change and 'IS' in the new listing to indicate what it was changed to."

Argle frowned at the daunting paper mountains. "I'm sorry, but, why do you need this exactly?"

"It's for FAA compliance," the boss said, waving his hand toward his cubicle's threshold. "Thanks!"

Weighed down with piles of code, Argle returned to his cube with a similarly sinking heart. At this place and time, he'd never even heard of UNIX, and his coworkers weren't likely to know anything about it, either. Their development computer had a TMS9900 CPU, the same one in the TI-99 home computer, and it ran its own proprietary OS from Texas Instruments. There was no diff command or anything like it. The closest analog was a file comparison program, but it only reported whether two files were identical or not.

Back at his cube, Argle stared at the printouts for a while, dreading the weeks of manual, mind-numbing dullness that loomed ahead of him. There was no way he'd avoid errors, no matter how careful he was. There was no way he'd complete this to every stakeholder's satisfaction. He was staring imminent failure in the face.

Was there a better way? If there weren't already a program for this kind of thing, could he write his own?

Argle had never heard of the Hunt–McIlroy algorithm, but he thought he might be able to do line comparisons between files, then hunt ahead in one file or the other until he re-synched again. He asked one of the senior programmers for the files' source code. Within one afternoon of tinkering, he'd written his very own diff program.
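
In modern terms, the idea he describes might look something like this minimal Python sketch (my reconstruction, not Argle's Fortran; the 50-line resync window is an arbitrary choice):

def naive_diff(old, new, window=50):
    # Compare line by line; on a mismatch, scan ahead in each file for
    # the nearest resynchronization point, printing WAS/IS markers.
    i = j = 0
    while i < len(old) and j < len(new):
        if old[i] == new[j]:
            i += 1
            j += 1
            continue
        for skip in range(1, window):
            if i + skip < len(old) and old[i + skip] == new[j]:
                for k in range(i, i + skip):    # lines removed from the old file
                    print("WAS -->", old[k])
                i += skip
                break
            if j + skip < len(new) and old[i] == new[j + skip]:
                for k in range(j, j + skip):    # lines added in the new file
                    print("IS  -->", new[k])
                j += skip
                break
        else:                                   # no resync point found: a changed line
            print("WAS -->", old[i])
            print("IS  -->", new[j])
            i += 1
            j += 1
    for line in old[i:]:                        # whatever is left at the end
        print("WAS -->", line)
    for line in new[j:]:
        print("IS  -->", line)

Hunt–McIlroy (the algorithm behind the real diff) finds a longest common subsequence instead, which copes far better with pathological inputs, but the scan-ahead version is exactly the sort of thing one could write in an afternoon.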

The next morning, Argle handed his boss 2 newly printed stacks of code, with "WAS -->" and "IS -->" printed neatly on all the relevant lines. As the boss began flipping through the pages, Argle smiled proudly, anticipating the pleasant surprise and glowing praise to come.

Quite to Argle's surprise, his boss fixed him with a red-faced, accusing glare. "Who said you could write a program?!"

Argle was speechless at first. "I was hired to program!" he finally blurted. "Besides, that's totally error-free! I know I couldn't have gotten everything correct by hand!"

The boss sighed. "I suppose not."

It wasn't until Argle was much older that his boss' reaction made any sense to him. The boss' goal hadn't been "compliance." He simply hadn't had anything constructive for Argle to do, and had thought he'd come up with a brilliant way to keep the new young hire busy and out of his hair for a few weeks.

Writer's note: Through the ages and across time, absolutely nothing has changed. In 2001, I worked at a (paid, thankfully) corporate internship where I was asked to manually browse through a huge network share and write down what every folder contained, all the way through thousands of files and sub-folders. Fortunately, I had heard of the dir command in DOS. Within 30 minutes, I proudly handed my boss the printout of the output—to his bemusement and dismay. —Ellis

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Cory DoctorowMy MMT Podcast appearance, part 2: monopoly, money, and the power of narrative


Last week, the Modern Monetary Theory Podcast ran part 1 of my interview with co-host Christian Reilly; they’ve just published the second and final half of our chat (MP3), where we talk about the link between corruption and monopoly, how to pitch monetary theory to people who want to abolish money altogether, and how stories shape the future.

If you’re new to MMT, here’s my brief summary of its underlying premises: “Governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.”

Cory DoctorowWhere to catch me at Burning Man!

This is my last day at my desk until Labor Day: tomorrow, we’re driving to Burning Man to get our annual dirtrave fix! If you’re heading to the playa, here’s three places and times you can find me:

Seating is always limited at these things (our living room is big, but it’s not that big!) so come by early!

I hope you have an amazing burn — we always do! This year I’m taking a break from working in the cafe pulling shots in favor of my first-ever Greeter shift, which I’m really looking forward to.

While we’re on the subject, there’s still time to sign up for the Liminal Labs Assassination Game!

Google AdsenseAdditional safeguards to protect the quality of our ad network

Supporting a healthy ads ecosystem that works for publishers, advertisers, and users continues to be a top priority in our effort to sustain a free and open web. As the ecosystem evolves, our ad systems and defenses must adapt as well. Today, we’d like to highlight some of our efforts to protect the quality of our ad network, and the benefits to our publishers and the advertising ecosystem. 


Last year, we introduced a site verification process in AdSense to provide additional safeguards before a publisher can serve ads. This feature allows us to provide more direct feedback to our publishers on the eligibility of their site, while allowing us to communicate issues sooner and lessen the likelihood of future violations. As an added benefit, confirming which websites a publisher intends to monetize allows us to reduce potential misuse of a publisher's ad code, such as when a bad actor tries to claim a website as their own, or when they use a legitimate publisher's ad code to serve ads on bad content in an attempt to demonetize the good website — each day, we now block more than 120 million ad requests with this feature. 


This year, we’re enhancing our defenses even more by improving the systems that identify potentially invalid traffic or high risk activities before ads are served. These defenses allow us to limit ad serving as needed to further protect our advertisers and users, while maximizing revenue opportunities for legitimate publishers. While most publishers will not notice any changes to their ad traffic, we are working on improving the experience for those that may be impacted, by providing more transparency around these actions. Publishers on AdSense and AdMob that are affected will soon be notified of these ad traffic restrictions directly in their Policy Center. This will allow them to understand why they may be experiencing reduced ad serving, and what steps they can take to resolve any issues and continue partnering with us.


We’re excited for what’s to come, and will continue to roll out improvements to these systems with all of our users in mind. Look out for future updates on our ongoing efforts to promote and sustain a healthy ads ecosystem.


Posted by: 
Andres Ferrate - Chief Advocate for Ad Traffic Quality

Krebs on SecurityForced Password Reset? Check Your Assumptions

Almost weekly now I hear from an indignant reader who suspects a data breach at a Web site they frequent that has just asked the reader to reset their password. Further investigation almost invariably reveals that the password reset demand was not the result of a breach but rather the site’s efforts to identify customers who are reusing passwords from other sites that have already been hacked.

But ironically, many companies taking these proactive steps soon discover that their explanation as to why they’re doing it can get misinterpreted as more evidence of lax security. This post attempts to unravel what’s going on here.

Over the weekend, a follower on Twitter included me in a tweet sent to California-based job search site Glassdoor, which had just sent him the following notice:

The Twitter follower expressed concern about this message, because it suggested to him that in order for Glassdoor to have done what it described, the company would have had to be storing its users’ passwords in plain text. I replied that this was in fact not an indication of storing passwords in plain text, and that many companies are now testing their users’ credentials against lists of hacked credentials that have been leaked and made available online.

The reality is Facebook, Netflix and a number of big-name companies are regularly combing through huge data leak troves for credentials that match those of their customers, and then forcing a password reset for those users. Some are even checking for password re-use on all new account signups.

The idea here is to stymie a massively pervasive problem facing all companies that do business online today: Namely, “credential-stuffing attacks,” in which attackers take millions or even billions of email addresses and corresponding cracked passwords from compromised databases and see how many of them work at other online properties.

So how does the defense against this daily deluge of credential stuffing work? A company employing this strategy will first extract from these leaked credential lists any email addresses that correspond to their current user base.

From there, the corresponding cracked (plain text) passwords are fed into the same process that the company relies upon when users log in: That is, the company feeds those plain text passwords through its own password “hashing” or scrambling routine.

Password hashing is designed to be a one-way function which scrambles a plain text password so that it produces a long string of numbers and letters. Not all hashing methods are created equal, and some of the most commonly used methods — MD5 and SHA-1, for example — can be far less secure than others, depending on how they’re implemented (more on that in a moment). Whatever the hashing method used, it’s the hashed output that gets stored, not the password itself.
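
To make that concrete, here is a minimal sketch using Node's built-in crypto module (the algorithm and input are illustrative only; real sites vary widely in what they use):

import { createHash } from "crypto";

// The same input always yields the same fixed-length digest,
// but the digest cannot feasibly be reversed into the password.
const digest = createHash("sha256").update("hunter2").digest("hex");

console.log(digest); // 64 hex characters, no matter how long the input was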

Back to the process: If a user’s plain text password from a hacked database matches the output of what a company would expect to see after running it through their own internal hashing process, that user is then prompted to change their password to something truly unique.

Now, password hashing methods can be made more secure by amending the password with what’s known as a “salt” — or random data added to the input of a hash function to guarantee a unique output. And many readers of the Twitter thread on Glassdoor’s approach reasoned that the company couldn’t have been doing what it described without also forgoing this additional layer of security.

My tweeted explanatory reply as to why Glassdoor was doing this was (in hindsight) incomplete and in any case not as clear as it should have been. Fortunately, Glassdoor’s chief information officer Anthony Moisant chimed in to the Twitter thread to explain that the salt is in fact added as part of the password testing procedure.

“In our [user] database, we’ve got three columns — username, salt value and scrypt hash,” Moisant explained in an interview with KrebsOnSecurity. “We apply the salt that’s stored in the database and the hash [function] to the plain text password, and that resulting value is then checked against the hash in the database we store. For whatever reason, some people have gotten it into their heads that there’s no possible way to do these checks if you salt, but that’s not true.”
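
In code, the check Moisant describes might look something like this minimal sketch, assuming Node's built-in scrypt and a hypothetical user record mirroring the three columns he names:

import { scryptSync, timingSafeEqual } from "crypto";

// Hypothetical record shape: username, salt value, and scrypt hash.
interface UserRecord {
    username: string;
    salt: Buffer;
    scryptHash: Buffer;
}

// True if a plain text password from a leaked list produces the same
// scrypt output as the hash already stored for this user.
function matchesLeakedPassword(user: UserRecord, leakedPassword: string): boolean {
    const candidate = scryptSync(leakedPassword, user.salt, user.scryptHash.length);
    return timingSafeEqual(candidate, user.scryptHash); // constant-time comparison
}

A true result means the user's current password appears in a public breach dump, and the site can force a reset without ever having stored the plain text password itself.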

CHECK YOUR ASSUMPTIONS

You — the user — can’t be expected to know or control what password hashing methods a given site uses, if indeed they use them at all. But you can control the quality of the passwords you pick.

I can’t stress this enough: Do not re-use passwords. And don’t recycle them either. Recycling involves rather lame attempts to make a reused password unique by simply adding a digit or changing the capitalization of certain characters. Crooks who specialize in password attacks are wise to this approach as well.

If you have trouble remembering complex passwords (and this describes most people), consider relying instead on password length, which is a far more important determinant of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember.
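
A minimal sketch of generating one with Node's crypto module (the word list is truncated here for illustration; a real list should contain thousands of entries):

import { randomInt } from "crypto";

const words = ["correct", "horse", "battery", "staple", "anvil", "orbit"];

// Four words drawn with a cryptographically secure random number generator.
const passphrase = Array.from({ length: 4 }, () => words[randomInt(words.length)]).join(" ");

console.log(passphrase);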

According to a recent blog entry by Microsoft group program manager Alex Weinert, none of the above advice about password complexity amounts to a hill of beans from the attacker’s standpoint.

Weinert’s post makes a compelling argument that as long as we’re stuck with passwords, taking full advantage of the most robust form of multi-factor authentication (MFA) offered by a site you frequent is the best way to deter attackers. Twofactorauth.org has a handy list of your options here, broken down by industry.

“Your password doesn’t matter, but MFA does,” Weinert wrote. “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

Glassdoor’s Moisant said the company doesn’t currently offer MFA for its users, but that it is planning to roll that out later this year to both consumer and business users.

Password managers also can be useful for those who feel encumbered by having to come up with passphrases or complex passwords. If you’re uncomfortable with entrusting a third-party service or application to handle this process for you, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop or screen or whatever, and b) that your password notebook is stored somewhere relatively secure, i.e. not in your purse or car, but something like a locked drawer or safe.

Although many readers will no doubt take me to task on that last bit of advice, as in all things security-related it's important not to let the perfect become the enemy of the good. Many people (think moms/dads/grandparents) can't be bothered to use password managers — even when you go through the trouble of setting them up on their behalf. Instead, without an easier, non-technical method, they will simply revert to reusing or recycling passwords.

CryptogramGoogle Finds 20-Year-Old Microsoft Windows Vulnerability

There's no indication that this vulnerability was ever used in the wild, but the code it was discovered in -- Microsoft's Text Services Framework -- has been around since Windows XP.

,

CryptogramSurveillance as a Condition for Humanitarian Aid

Excellent op-ed on the growing trend to tie humanitarian aid to surveillance.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don't apply for people who are starving.

Worse Than FailureCodeSOD: I'm Sooooooo Random, LOL

There are some blocks of code that require a preamble, and an explanation of the code and its flow. Often you need to provide some broader context.

Sometimes, you get some code like Wolf found, which needs no explanation:

export function generateRandomId(): string {
    counter++;
    return 'id' + counter;
}

I mean, I guess that's slightly better than this solution. Wolf found this because some code downstream was expecting random, unique IDs, and wasn't getting them.
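
For contrast, a minimal sketch of an ID generator that lives up to its name, assuming a Node environment with the built-in crypto module (the original's runtime isn't specified):

import { randomUUID } from "crypto";

// Actually random, collision-resistant IDs rather than a disguised counter.
export function generateRandomId(): string {
    return "id-" + randomUUID();
}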

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Cory DoctorowPodcast: A cycle of renewal, broken: How Big Tech and Big Media abuse copyright law to slay competition

In my latest podcast (MP3), I read my essay “A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition”, published today on EFF’s Deeplinks; it’s the latest in my ongoing series of case-studies of “adversarial interoperability,” where new services unseated the dominant companies by finding ways to plug into existing products against the wishes of those products’ manufacturers. This week’s installment recounts the history of cable TV, and explains how the legal system in place when cable was born was subsequently extinguished (with the help of the cable companies who benefitted from it!), meaning that no one can do to cable what cable once did to broadcasters.

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own “community antenna television” networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony’s Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were “capable of substantial noninfringing uses.”

MP3

Krebs on SecurityThe Rise of “Bulletproof” Residential Networks

Cybercrooks increasingly are anonymizing their malicious traffic by routing it through residential broadband and wireless data connections. Traditionally, those connections have been mainly hacked computers, mobile phones, or home routers. But this story is about so-called “bulletproof residential VPN services” that appear to be built by purchasing or otherwise acquiring discrete chunks of Internet addresses from some of the world’s largest ISPs and mobile data providers.

In late April 2019, KrebsOnSecurity received a tip from an online retailer who’d seen an unusual number of suspicious transactions originating from a series of Internet addresses assigned to a relatively new Internet provider based in Maryland called Residential Networking Solutions LLC.

Now, this in itself isn’t unusual; virtually every provider has the occasional customers who abuse their access for fraudulent purposes. But upon closer inspection, several factors caused me to look more carefully at this company, also known as “Resnet.”

An examination of the IP address ranges assigned to Resnet shows that it maintains an impressive stable of IP blocks — totaling almost 70,000 IPv4 addresses — many of which had until quite recently been assigned to someone else.

Most interestingly, about ten percent of those IPs — more than 7,000 of them — had until late 2018 been under the control of AT&T Mobility. Additionally, the WHOIS registration records for each of these mobile data blocks suggest Resnet has been somehow reselling data services for major mobile and broadband providers, including AT&T, Verizon, and Comcast Cable.

The WHOIS records for one of several networks associated with Residential Networking Solutions LLC.

Drilling down into the tracts of IPs assigned to Resnet’s core network indicates those 7,000+ mobile IP addresses under Resnet’s control were given the label “Service Provider Corporation” — mostly those beginning with IPs in the range 198.228.x.x.

An Internet search reveals this IP range is administered by the Wireless Data Service Provider Corporation (WDSPC), a non-profit formed in the 1990s to manage IP address ranges that could be handed out to various licensed mobile carriers in the United States.

Back when the WDSPC was first created, there were quite a few mobile wireless data companies. But today the vast majority of the IP space managed by the WDSPC is leased by AT&T Mobility and Verizon Wireless — which have gradually acquired most of their competing providers over the years.

A call to the WDSPC revealed the nonprofit hadn’t leased any new wireless data IP space in more than 10 years. That is, until the organization received a communication at the beginning of this year that it believed was from AT&T, which recommended Resnet as a customer who could occupy some of the company’s mobile data IP address blocks.

“I’m afraid we got duped,” said the person answering the phone at the WDSPC, while declining to elaborate on the precise nature of the alleged duping or the medium that was used to convey the recommendation.

AT&T declined to discuss its exact relationship with Resnet — or if indeed it ever had one to begin with. It responded to multiple questions about Resnet with a short statement that said, “We have taken steps to terminate this company’s services and have referred the matter to law enforcement.”

Why exactly AT&T would forward the matter to law enforcement remains unclear. But it’s not unheard of for hosting providers to forge certain documents in their quest for additional IP space, and anyone caught doing so via email, phone or fax could be charged with wire fraud, which is a federal offense that carries punishments of up to $500,000 in fines and as much as 20 years in prison.

WHAT IS RESNET?

The WHOIS registration records for Resnet’s main Web site, resnetworking[.]com, are hidden behind domain privacy protection. However, a cursory Internet search on that domain turned up plenty of references to it on Hackforums[.]net, a sprawling community that hosts a seemingly never-ending supply of up-and-coming hackers seeking affordable and anonymous ways to monetize various online moneymaking schemes.

One user in particular — a Hackforums member who goes by the nickname “Profitvolt” — has spent several years advertising resnetworking[.]com and a number of related sites and services, including “unlimited” AT&T 4G/LTE data services, and the immediate availability of more than 1 million residential IPs that he suggested were “perfect for botting, shoe buying.”

The Hackforums user “Profitvolt” advertising residential proxies.

Profitvolt advertises his mobile and residential data services as ideal for anyone who wishes to run “various bots,” or “advertising campaigns.” Those services are meant to provide anonymity when customers are doing things such as automating ad clicks on platforms like Google Adsense and Facebook; generating new PayPal accounts; sneaker bot activity; credential stuffing attacks; and different types of social media spam.

For readers unfamiliar with this term, “shoe botting” or “sneaker bots” refers to the use of automated bot programs and services that aid in the rapid acquisition of limited-release, highly sought-after designer shoes that can then be resold at a profit on secondary markets. All too often, it seems, the people who profit the most in this scheme are using multiple sets of compromised credentials from consumer accounts at online retailers, and/or stolen payment card data.

To say shoe botting has become a thorn in the side of online retailers and regular consumers alike would be a major understatement: A recent State of The Internet Security Report (PDF) from Akamai (an advertiser on this site) noted that such automated bot activity now accounts for almost half of the Internet bandwidth directed at online retailers. The prevalence of shoe botting also might help explain Footlocker’s recent $100 million investment in goat.com, the largest secondary shoe resale market on the Web.

In other discussion threads, Profitvolt advertises he can rent out an “unlimited number” of so-called “residential proxies,” a term that describes home or mobile Internet connections that can be used to anonymously relay Internet traffic for a variety of dodgy deals.

From a ne’er-do-well’s perspective, the beauty of routing one’s traffic through residential IPs is that few online businesses will bother to block malicious or suspicious activity emanating from them.

That’s because in general the pool of IP addresses assigned to residential or mobile wireless connections cycles intermittently from one user to the next, meaning that blacklisting one residential IP for abuse or malicious activity may only serve to then block legitimate traffic (and e-commerce) from the next user who gets assigned that same IP.

A BULLETPROOF PLAN?

In one early post on Hackforums, Profitvolt laments the untimely demise of various “bulletproof” hosting providers over the years, from the Russian Business Network and Atrivo/Intercage, to McColo, 3FN and Troyak, among others.

All of these Internet providers had one thing in common: They specialized in cultivating customers who used their networks for nefarious purposes — from operating botnets and spamming to hosting malware. They were known as “bulletproof” because they generally ignored abuse complaints, or else blamed any reported abuse on a reseller of their services.

In that Hackforums post, Profitvolt bemoans that “mediums which we use to distribute [are] locking us out and making life unnecessarily hard.”

“It’s still sketchy, so I am not going all out to reveal my plans, but currently I am starting off with a 32 GB RAM server with a 1 GB unmetered up-link in a Caribbean country,” Profitvolt told forum members, while asking in different Hackforums posts whether there are any other users from the dual-island Caribbean nation of Trinidad and Tobago on the forum.

“To be quite honest, the purpose of this is to test how far we can stretch the leniency before someone starts asking questions, or we start receiving emails,” Profitvolt continued.

Hackforums user Profitvolt says he plans to build his own “bulletproof” hosting network catering to fellow forum users who might want to rent his services for a variety of dodgy activities.

KrebsOnSecurity started asking questions of Resnet after stumbling upon several indications that this company was enabling different types of online abuse in bite-sized monthly packages. The site resnetworking[.]com appears normal enough on the surface, but a review of the customer packages advertised on it suggests the company has courted a very specific type of client.

“No bullshit, just proxies,” reads one (now hidden or removed) area of the site’s shopping cart. Other promotions advertise the use of residential proxies to promote “growth services” on multiple social media platforms including Craigslist, Facebook, Google, Instagram, Spotify, Soundcloud and Twitter.

Resnet also peers with or partners with several other interesting organizations, including:

- residential-network[.]com, also known as “IAPS Security Services” (formerly intl-alliance[.]com), which advertises the sale of residential VPNs and mobile 4G/IPv6 proxies aimed at helping customers avoid being blocked while automating different types of activity, from mass-creating social media and email accounts to bulk message sending on platforms like WhatsApp and Facebook.

- Laksh Cybersecurity and Defense LLC, which maintains Hexproxy[.]com, another residential proxy service that largely courts customers involved in shoe botting.

- Several chunks of IP space from a Russian provider variously known by the names “SERVERSGET” and “Men Danil Valentinovich,” which has been associated with numerous instances of hijacking vast swaths of IP addresses from other organizations quite recently.

Some of Profitvolt’s discussion threads on Hackforums.

WHO IS RESNET?

Resnetworking[.]com lists on its home page the contact phone number 202-643-8533. That number is tied to the registration records for several domains, including resnetworking[.]com, residentialvpn[.]info, and residentialvpn[.]org. All of those domains also have in their historic WHOIS records the name Joshua Powder and Residential Networking Solutions LLC.

Running a reverse WHOIS lookup via Domaintools.com on “Joshua Powder” turns up almost 60 domain names — most of them tied to the email address joshua.powder@gmail.com. Among those are resnetworking[.]info, resvpn[.]com/net/org/info, tobagospeaks[.]com, tthack[.]com and profitvolt[.]com. Recall that “Profitvolt” is the nickname of the Hackforums user advertising resnetworking[.]com.

The email address josh@tthack.com was used to register an account on the scammer-friendly site blackhatworld[.]com under the nickname “BulletProofWebHost.” Here’s a list of domains registered to this email address.

A search on the Joshua Powder and tthack email addresses at Hyas, a startup that specializes in combining data from a number of sources to provide attribution of cybercrime activity, further associates those to mafiacloud@gmail.com and to the phone number 868-360-9983, which is a mobile number assigned by Digicel Trinidad and Tobago Ltd. A full list of domains tied to that 868- number is here.

Hyas’s service also pointed to this post on the Facebook page of the Prince George’s County Economic Development Corporation in Maryland, which appears to include a 2017 photo of Mr. Powder posing with county officials.

‘A GLORIFIED SOLUTIONS PROVIDER’

Roughly three weeks ago, KrebsOnSecurity called the 202 number listed at the top of resnetworking[.]com. To my surprise, a man speaking in a lovely Caribbean-sounding accent answered the call and identified himself as Josh Powder. When I casually asked from where he’d acquired that accent, Powder said he was a native of New Jersey but allowed that he has family members who now live in Trinidad and Tobago.

Powder said Residential Networking Solutions LLC is “a normal co-location Internet provider” that has been in operation for about three years and employs some 65 people.

“You’re not the first person to call us about residential VPNs,” Powder said. “In the past, we did have clients that did host VPNs, but it’s something that’s been discontinued since 2017. All we are is a glorified solutions provider, and we broker and lease Internet lines from different companies.”

When asked about the various “botting” packages for sale on Resnetworking[.]com, Powder replied that the site hadn’t been updated in a while and that these were inactive offers that resulted from a now-discarded business model.

“When we started back in 2016, we were really inexperienced, and hired some SEO [search engine optimization] firms to do marketing,” he explained. “Eventually we realized that this was creating a shitstorm, because it started to make us look a specific way to certain people. So we had to really go through a process of remodeling. That process isn’t complete, and the entire web site is going to retire in about a week’s time.”

Powder maintains that his company does have a contract with AT&T to resell LTE and 4G data services, and that he has a similar arrangement with Sprint. He also suggested that one of the aforementioned companies which partnered with Resnet — IAPS Security Services — was responsible for much of the dodgy activity that previously brought his company abuse complaints and strange phone calls about VPN services.

“That guy reached out to us and he leased service from us and nearly got us into a lot of trouble,” Powder said. “He was doing a lot of illegal stuff, and I think there is an ongoing matter with him legally. That’s what has caused us to be more vigilant and really look at what we do and change it. It attracted too much nonsense.”

Interestingly, when one visits IAPS Security Services’ old domain — intl-alliance[.]com — it now forwards to resvpn[.]com, which is one of the domains registered to Joshua Powder.

Shortly after our conversation, the monthly packages I asked Powder about that were for sale on resnetworking[.]com disappeared from the site, or were hidden behind a login. Also, Resnet’s IPv6 prefixes (a la IAPS Security Services) were removed from the company’s list of addresses. At the same time, a large number of Profitvolt’s posts prior to 2018 were deleted from Hackforums.

EPILOGUE

It appears that the future of low-level abuse targeting some of the most popular Internet destinations is tied to the increasing willingness of the world’s biggest ISPs to resell discrete chunks of their address space to whomever is able to pay for them.

Earlier this week, I had a Skype conversation with an individual who responded to my requests for more information from residential-network[.]com, and this person told me that plenty of mobile and land-line ISPs are more than happy to sell huge amounts of IP addresses to just about anybody.

“Mobile providers also sell mass services,” the person who responded to my Skype request offered. “Rogers in Canada just opened a new package for unlimited 4G data lines and we’re currently in negotiations with them for that service as well. The UK also has 4G providers that have unlimited data lines as well.”

The person responding to my Skype messages said they bought most of their proxies from a reseller at customproxysolutions[.]com, which advertises “the world’s largest network of 4G LTE modems in the United States.”

He added that “Rogers in Canada has a special offer that if you buy more than 50 lines you get a reduced price lower than the $75 Canadian Dollar price tag that they would charge for fewer than 50 lines. So most mobile ISPs want to sell mass lines instead of single lines.”

It remains unclear how much of the Internet address space claimed by these various residential proxy and VPN networks has been acquired legally or through other means. But it seems that Resnet and its business associates are in fact on the cutting edge of what it means to be a bulletproof Internet provider today.

CryptogramInfluence Operations Kill Chain

Influence operations are elusive to define. The Rand Corp.'s definition is as good as any: "the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent." Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too -- laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on "Operation Infektion," a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from "information operations" to "influence operations," because the former is traditionally defined by the US Department of Defense in ways that don't really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society­ -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate, gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6) -- either journalists or other influencers -- who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their "we don't care" attitudes since the 2016 election, there's no guarantee that they will continue to remove these accounts -- especially since their profits depend on engagement and not accuracy.

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. Releasing stolen emails from Hillary Clinton's campaign chairman John Podesta and the Democratic National Committee, or documents from Emmanuel Macron's campaign in France, were both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media needs to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called "useful idiots." Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious. Although, since one major goal is to convince people that nothing can be trusted, rumors of involvement can be beneficial. Flat denial was Russia's tactic during the 2016 US presidential election; it embraced rumors of its involvement during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won't be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker's ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Deterring these attacks will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding "Democracy's Dilemma": how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise -- we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike with a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time -- especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn't fit every influence operation, it's important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition's Misinfosec Working Group proposed a "misinformation pyramid." The US Justice Department developed a "Malign Foreign Influence Campaign Cycle," with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there's no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.

Worse Than FailureLowest Bidder Squared


Initech was in dire straits. The website was dog slow, and the budget had been exceeded by a factor of five already trying to fix it. Korbin, today's submitter, was brought in to help in exchange for decent pay and an office in their facility.

He showed up only to find a boxed-up computer and a brand new flat-packed desk, also still in the box. The majority of the space was a video-recording studio that saw maybe 4-6 hours of use a week. After setting up his office, Korbin spent the next day and a half finding his way around the completely undocumented C# code. The third day, there was a carpenter in the studio area. Inexplicably, said carpenter decided he needed to contact-glue carpet to a set of huge risers ... indoors. At least a gallon of contact cement was involved. In minutes, Korbin got a raging headache, and he was essentially gassed out of the building for the rest of the day. Things were not off to a good start.

Upon asking around, Korbin quickly determined that the contractors originally responsible for coding the website had underbid the project by half, then subcontracted the whole thing out to a team in India to do the work on the cheap. The India team had then done the very same thing, subcontracting it out to the most cut-rate individuals they could find. Everything had been written in triplicate for some reason, making it impossible to determine what was actually powering the website and what was dead code. Furthermore, while this was a database-oriented site, there were no stored procedures, and none of the (sub)subcontractors seemed to understand how to use a JOIN command.

In an effort to tease apart what code was actually needed, Korbin turned on profiling. Only ... it was already on in the test version of the site. With a sudden ominous hunch, he checked the live site—and sure enough, profiling was running in production as well. He shut it off, and instantly, the whole site became more responsive.

The next fix was also pretty simple. The site had a bad habit of asking for information it already had, over and over, without any JOINs. Reducing the frequency of database hits improved performance again, bringing it to within an order of magnitude of what one might expect from a website.

While all this was going on, the leaderboard page had begun timing out. Sure enough, it was an N-squared solution: open database, fetch record, close database, repeat, then compare the two records, putting them in order and beginning again. With 500 members, it was doing 250,000 passes each time someone hit the page. Korbin scrapped the whole thing in favor of the site's first stored procedure, then cached it to call only once a day.
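
The shape of that fix, in a minimal sketch (the names and row shape are assumptions; the real version lived in a stored procedure):

// Hypothetical shape of a leaderboard row.
interface Row {
    member: string;
    score: number;
}

let cachedBoard: Row[] = [];
let cachedAt = 0;

// Fetch everything once, sort in O(n log n), and cache the result for a day,
// instead of 250,000 row-at-a-time database round trips per page view.
function leaderboard(fetchAll: () => Row[]): Row[] {
    const DAY_MS = 24 * 60 * 60 * 1000;
    if (cachedBoard.length === 0 || Date.now() - cachedAt > DAY_MS) {
        cachedBoard = fetchAll().sort((a, b) => b.score - a.score);
        cachedAt = Date.now();
    }
    return cachedBoard;
}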

The weeks went on, and the site began to take shape, finally getting more or less back on track. Thanks to the botched rollout, however, many of the company's endorsements had vanished, and backers were pulling out. The president got on the phone with some VIP about Facebook—because as we all know, the solution to any company's problem is the solution to every company's problems.

"Facebook was written in PHP. He told me it was the best thing out there. So we're going to completely redo the website in PHP," the president confidently announced at the next all-hands meeting. "I want to hear how long everyone thinks this will take to get done."

The only developers left at that point were Korbin and a junior kid just out of college, plus one contractor with some experience on the project.

"Two weeks. Maybe three," the kid replied.

They went around the table, and all the non-programmers chimed in with the 2-3 week assessment. Next to last came the experienced contractor. Korbin's jaw nearly dropped when he weighed in at 3-4 weeks.

"None of that is realistic!" Korbin proclaimed. "Even with the existing code as a road map, it's going to take 4-6 months to rewrite. And with the inevitable feature-creep and fixes for things found in testing, it is likely to take even longer."

Korbin was told the next day he could pick up his final check. Seven months later, he ran into the junior kid again, and asked how the rewrite went.

"It's still ongoing," he admitted.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

,

CryptogramFriday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we're told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There's also plenty of work to do with using the fins for dynamic control, which the researchers say will "reveal the superiority of the natural flying squid movement."

I can't find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSoftware Vulnerabilities in the Boeing 787

Boeing left its software unprotected, and researchers have analyzed it for vulnerabilities:

At the Black Hat security conference today in Las Vegas, Santamarta, a researcher for security firm IOActive, plans to present his findings, including the details of multiple serious security flaws in the code for a component of the 787 known as a Crew Information Service/Maintenance System. The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots. Santamarta says he found a slew of memory corruption vulnerabilities in that CIS/MS, and he claims that a hacker could use those flaws as a foothold inside a restricted part of a plane's network. An attacker could potentially pivot, Santamarta says, from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors. Boeing maintains that other security barriers in the 787's network architecture would make that progression impossible.

Santamarta admits that he doesn't have enough visibility into the 787's internals to know if those security barriers are circumventable. But he says his research nonetheless represents a significant step toward showing the possibility of an actual plane-hacking technique. "We don't have a 787 to test, so we can't assess the impact," Santamarta says. "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing denies that there's any problem:

In a statement, Boeing said it had investigated IOActive's claims and concluded that they don't represent any real threat of a cyberattack. "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system," the company's statement reads. "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

This being Black Hat and Las Vegas, I'll say it this way: I would bet money that Boeing is wrong. I don't have an opinion about whether or not it's lying.

Worse Than FailureError'd: What About the Fish?

"On the one hand, I don't want to know what the fish has to do with Boris Johnson's love life...but on the other hand I have to know!" Mark R. writes.


"Not sure if that's a new GDPR rule or the Slack Mailbot's weekend was just that much better then mine," Adam G. writes.


Connor W. wrote, "You know what, I think I'll just stay inside."


"It's great to see that an attempt at personalization was made, but whatever happened to 'trust but verify'?" writes Rob H.


"For a while, I thought that, maybe, I didn't actually know how to use my iPhone's alarm. Instead, I found that it just wasn't working right. So, I contacted Apple Support, and while they were initially skeptical that it was an iOS issue, this morning, I actually have proof!" Markus G. wrote.


Tim G. writes, "I guess that's better than an angry error message."


[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

CryptogramBypassing Apple FaceID's Liveness Detection Feature

Apple's FaceID has a liveness detection feature, which prevents someone from unlocking a victim's phone by putting it in front of his face while he's sleeping. That feature has been hacked:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim's FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim's face, the researchers demonstrated how they could bypass Apple's FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

LongNowAI analyzed 3.3 million scientific abstracts and discovered possible new materials

A new paper shows how AI can accelerate scientific discovery through analyzing millions of scientific abstracts. From the MIT Technology Review:

Natural-language processing has seen major advancements in recent years, thanks to the development of unsupervised machine-learning techniques that are really good at capturing the relationships between words. They count how often and how closely words are used in relation to one another, and map those relationships in a three-dimensional vector space. The patterns can then be used to predict basic analogies like “man is to king as woman is to queen,” or to construct sentences and power things like autocomplete and other predictive text systems.

A group of researchers have now used this technique to munch through 3.3 million scientific abstracts published between 1922 and 2018 in journals that would likely contain materials science research. The resulting word relationships captured fundamental knowledge within the field, including the structure of the periodic table and the way chemicals’ structures relate to their properties. The paper was published in Nature last week.

MIT Technology Review
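
The analogy trick the excerpt mentions falls out of simple vector arithmetic. A toy sketch, with made-up two-dimensional vectors (real embeddings have hundreds of dimensions learned from text statistics):

type Vec = number[];

const add = (a: Vec, b: Vec): Vec => a.map((x, i) => x + b[i]);
const sub = (a: Vec, b: Vec): Vec => a.map((x, i) => x - b[i]);

// Purely illustrative coordinates.
const man: Vec = [0.2, 0.9];
const woman: Vec = [0.8, 0.9];
const king: Vec = [0.2, 0.1];

// "man is to king as woman is to ?" -> king - man + woman
const guess = add(sub(king, man), woman);
console.log(guess); // ≈ [0.8, 0.1], roughly where "queen" would sit in this toy space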


Worse Than FailureCodeSOD: A Devil With a Date

Jim was adding a feature to the backend. This feature updated a few fields on an object, and then handed the object off as JSON to the front-end.

Adding the feature seemed pretty simple, but when Jim went to check out its behavior in the front-end, he got validation errors. Something in the data getting passed back by his web service was fighting with the front end.

On its surface, that seemed like a reasonable problem, but when looking into it, Jim discovered that it was the record_update_date field which was causing validation issues. The front-end displayed this as a read only field, so there was no reason to do any client-side validation in the first place, and that field was never sent to the backend, so there was even less than no reason to do validation.

Worse, the field had, at least to the eye, a valid date: 2019-07-29T00:00:00.000Z. Even weirder, if Jim changed the backend to just return 2019-07-29, everything worked. He dug into the validation code to see what might be wrong about it:

/**
 * Custom validation
 *
 * This is a callback function for ajv custom keywords
 *
 * @param  {object} wsFormat aiFormat property content
 * @param  {object} data Data (of element type) from document where validation is required
 * @param  {object} itemSchema Schema part from wsValidation keyword
 * @param  {string} dataPath Path to document element
 * @param  {object} parentData Data of parent object
 * @param  {string} key Property name
 * @param  {object} rootData Document data
 */
function wsFormatFunction(wsFormat, data, itemSchema, dataPath, parentData, key, rootData) {

    let valid;
    switch (aiFormat) {
        case 'date': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/;
            valid = regex.test(data);
            break;
        }
        case 'date-time': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3]\d[t\s](?:[0-2]\d:[0-5]\d:[0-5]\d|23:59:60)(?:\.\d+)?(?:z|[+-]\d\d:\d\d)$/i;
            valid = regex.test(data);
            break;
        }
        case 'time': {
            let regex = /^(0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]$/;
            valid = regex.test(data);
            break;
        }
        default: throw 'Unknown wsFormat: ' + wsFormat;
    }

    if (!valid) {
        wsFormatFunction['errors'] = wsFormatFunction['errors'] || [];

        wsFormatFunction['errors'].push({
            keyword: 'wsFormat',
            dataPath: dataPath,
            message: 'should match format "' + wsFormat + '"',
            schema: itemSchema,
            data: data
        });
    }

    return valid;
}

When it starts with “Custom validation” and it involves dates, you know you’re in for a bad time. Worse, it’s custom validation, dates, and regular expressions written by someone who clearly didn’t understand regular expressions.

Let’s take a peek at the branch which was causing Jim’s error, and examine the regex:

/^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/

It should start with four digits, followed by a dash, followed by a value between 0 and 1. Then another digit, then a dash, then a number between 0 and 3, then the time (optionally), then a final digit.

It’s obvious why Jim’s perfectly reasonable date wasn’t working: it needed to be 2019-07-2T00:00:00.000Z9. Or, if Jim just didn’t include the timestamp, not only would 2019-07-29 be a valid date, but so would 2019-19-39, which just so happens to be my birthday. Mark your calendars for the 39th of Undevigintiber.
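Presumably that stray \d was meant to sit immediately after [0-3], before the optional time group. A minimal corrected sketch (call it dateRegex; this is still only a format check, and it assumes midnight-UTC timestamps are the only time values this field ever carries):

//The day's second digit now precedes the optional time portion, and the dots
//in the timestamp are escaped so they match literal dots rather than any character
var dateRegex = /^\d\d\d\d-[0-1]\d-[0-3]\d(T00:00:00\.000Z)?$/;

dateRegex.test('2019-07-29T00:00:00.000Z'); // true
dateRegex.test('2019-07-29');               // true
dateRegex.test('2019-19-39');               // still true: character classes can't range-check months and days

Which is why Undevigintiber would still sneak through: validating real calendar dates is a job for a date parser, not a regular expression.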

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

CryptogramSide-Channel Attack against Electronic Locks

Several high-security electronic locks are vulnerable to side-channel attacks involving power monitoring.

Cory DoctorowMy appearance on the MMT podcast

I’ve been following the Modern Monetary Theory debate for about 18 months, and I’m largely a convert: governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services. That means the government could guarantee every unemployed person a job (say, working on the Green New Deal), and it also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.

I was delighted to be invited onto the MMT Podcast to discuss the ways that MMT dovetails with the fight against monopoly and inequality, and how science-fiction storytelling can bring complicated technical subjects (like adversarial interoperability) to life.

We talked so long that they’ve split it into two episodes, the first of which is now live (MP3).

Krebs on SecurityMeet Bluetana, the Scourge of Pump Skimmers

“Bluetana,” a new mobile app that looks for Bluetooth-based payment card skimmers hidden inside gas pumps, is helping police and state employees more rapidly and accurately locate compromised fuel stations across the nation, a study released this week suggests. Data collected in the course of the investigation also reveals some fascinating details that may help explain why these pump skimmers are so lucrative and ubiquitous.

The new app, now being used by agencies in several states, is the brainchild of computer scientists from the University of California San Diego and the University of Illinois Urbana-Champaign, who say they developed the software in tandem with technical input from the U.S. Secret Service (the federal agency most commonly called in to investigate pump skimming rings).

The Bluetooth pump skimmer scanner app ‘Bluetana’ in action.

Gas pumps are a perennial target of skimmer thieves for several reasons. They are usually unattended, and in too many cases a handful of master keys will open a great many pumps at a variety of filling stations.

The skimming devices can then be attached to electronics inside the pumps in a matter of seconds, and because they’re also wired to the pump’s internal power supply the skimmers can operate indefinitely without the need of short-lived batteries.

And increasingly, these pump skimmers are fashioned to relay stolen card data and PINs via Bluetooth wireless technology, meaning the thieves who install them can periodically download stolen card data just by pulling up to a compromised pump and remotely connecting to it from a Bluetooth-enabled mobile device or laptop.

According to the study, some 44 volunteers  — mostly law enforcement officials and state employees — were equipped with Bluetana over a year-long experiment to test the effectiveness of the scanning app.

The researchers said their volunteers collected Bluetooth scans at 1,185 gas stations across six states, and that Bluetana detected a total of 64 skimmers across four of those states. All of the skimmers were later collected by law enforcement, including two that were reportedly missed in manual safety inspections of the pumps six months earlier.

While several other Android-based apps designed to find pump skimmers are already available, the researchers said Bluetana was developed with an eye toward eliminating false-positives that some of these other apps can fail to distinguish.

“Bluetooth technology used in these skimmers are also used for legitimate products commonly seen at and near gas stations such as speed-limit signs, weather sensors and fleet tracking systems,” said Nishant Bhaskar, UC San Diego Ph.D. student and principal author of the study. “These products can be mistaken for skimmers by existing detection apps.”

BLACK MARKET VALUE

The fuel skimmer study also helps explain how quickly these hidden devices can generate huge profits for the organized gangs that typically deploy them. The researchers found that the skimmers their app detected collected data from roughly 20-25 payment cards each day — evenly distributed between debit and credit cards (although they note estimates from payment fraud prevention companies and the Secret Service that put the average figure closer to 50-100 cards daily per compromised machine).

The academics also studied court documents which revealed that skimmer scammers often are only able to “cashout” stolen cards — either through selling them on the black market or using them for fraudulent purchases — a little less than half of the time. This can result from the skimmers sometimes incorrectly reading card data, daily withdrawal limits, or fraud alerts at the issuing bank.

“Based on the prior figures, we estimate the range of per-day revenue from a skimmer is $4,253 (25 cards per day, cashout of $362 per card, and 47% cashout success rate), and our high end estimate is $63,638 (100 cards per day, $1,354 cashout per card, and cashout success rate of 47%),” the study notes.
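The arithmetic behind those figures is easy to sanity-check (plain JavaScript, numbers straight from the quote above):

//Low end: 25 cards/day at a $362 cashout per card and a 47% cashout success rate
25 * 362 * 0.47;   // => 4253.5, i.e. ~$4,253 per day
//High end: 100 cards/day at a $1,354 cashout per card, same success rate
100 * 1354 * 0.47; // => 63638, i.e. $63,638 per day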

Not a bad haul either way, considering these skimmers typically cost about $25 to produce.

Those earnings estimates assume an even distribution of credit and debit card use among customers of a compromised pump: The more customers pay with a debit card, the more profitable the whole criminal scheme may become. Armed with your PIN and debit card data, skimmer thieves or those who purchase stolen cards can clone your card and pull money out of your account at an ATM.

“Availability of a PIN code with a stolen debit card, in particular, can increase its value five-fold on the black market,” the researchers wrote.

This highlights a warning that KrebsOnSecurity has relayed to readers in many previous stories on pump skimming attacks: Using a debit card at the pump can be way riskier than paying with cash or a credit card.

The black market value, impact to consumers and banks, and liability associated with different types of card fraud.

And as the above graphic from the report illustrates, there are different legal protections for fraudulent transactions on debit vs. credit cards. With a credit card, your maximum loss on any transactions you report as fraud is $50; with a debit card, that $50 cap applies only if you report the fraud within two days of the unauthorized transaction. After that, the maximum consumer liability can increase to $500 up to 60 days out, and to an unlimited amount after 60 days.

In practice, your bank or debit card issuer may still waive additional liabilities, and many do. But even then, having your checking account emptied of cash while your bank sorts out the situation can still be a huge hassle and create secondary problems (bounced checks, for instance).

Interestingly, this advice against using debit cards at the pump often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

For all its skimmer-skewering prowess, Bluetana will not be released to the public. The researchers said the primary reason for this is highlighted in the core findings of the study.

“There are many legitimate devices near gas stations that look exactly like skimmers do in Bluetooth scans,” said UCSD Assistant Professor Aaron Schulman, in an email to KrebsOnSecurity. “Flagging suspicious devices in Bluetana is only a way of notifying inspectors that they need to gather more data around the gas station to determine if the Bluetooth transmissions appear to be emanating from a device inside of the pumps. If it does, they can then open the pump door and confirm that the signal strength rises, and begin their visual inspection for the skimmer.”

One of the best tips for avoiding fuel card skimmers is to favor filling stations that have updated security features, such as custom keys for each pump, better compartmentalization of individual components within the machine, and tamper protections that physically shut down a pump if the machine is improperly accessed.

How can you spot a gas station with these updated features, you ask? As noted in last summer’s story, How to Avoid Card Skimmers at the Pumps, these newer-model machines typically feature a horizontal card acceptance slot along with a raised metallic keypad. In contrast, older, less secure pumps usually have a vertical card reader and a flat, membrane-based keypad.

Newer, more tamper-resistant fuel pumps include pump-specific key locks, raised metallic keypads, and horizontal card readers.

The researchers will present their work on Bluetana later today at the USENIX Security 2019 conference in Santa Clara, Calif. A copy of their paper is available here (PDF).

If you enjoyed this story, check out my series on all things skimmer-related: All About Skimmers. Looking for more information on fuel pump skimming? Have a look at some of these stories.

CryptogramExploiting GDPR to Get Private Information

A researcher abused the GDPR to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company -- especially tech ones -- they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

  • a UK hotel chain that shared a complete record of his partner's overnight stays

  • two UK rail companies that provided records of all the journeys she had taken with them over several years

  • a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

CryptogramAttorney General Barr and Encryption

Last month, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how -- an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity" and not "nuclear launch codes." This is true, but it ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been a National Security Agency operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications -- data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law enforcement access -- than it is to allow access at the cost of security. Barr is wrong, it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

Worse Than FailureCodeSOD: A Loop in the String

Robert was browsing through a little JavaScript used at his organization, and found this gem of type conversion.

//use only for small numbers
function StringToInteger (str) {
    var int = -1;
    for (var i=0; i<=100; i++) {
        if (i+"" == str) {
            int = i;
            break;
        }
    }
    return int;
}

So, this takes our input str, which is presumably a string, and it starts counting from 0 to 100. i+"" coerces the integer value to a string, which we compare against our string. If it’s a match, we’ll store that value and break out of the loop.

Obviously, this has a glaring flaw: the 100 is hardcoded. So what we really need to do is add a search_low and search_high parameter, so we can write the for loop as i = search_low; i <= search_high; i++ instead. Because that’s the only glaring flaw in this code. I can’t think of any possible better way of converting strings to integers. Not a one.
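For the record, a non-snarky sketch of the built-in way, keeping the original's -1 sentinel for unparseable input (whether any caller secretly relies on the 0-100 clamp is anyone's guess):

//use for numbers of any size
function stringToInteger(str) {
    var int = parseInt(str, 10); //always pass the radix; bare parseInt has historical surprises
    return isNaN(int) ? -1 : int;
}

Note that parseInt is more permissive than the loop: it accepts negatives, values above 100, leading whitespace, and trailing garbage like "42abc", all of which the original would have mapped to -1.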

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

CryptogramPhone Pharming for Ad Fraud

Interesting article on people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high -- here's an article that places losses between $6.5 and $19 billion annually -- and something companies like Google and Facebook would prefer remain unresearched.

Krebs on SecurityPatch Tuesday, August 2019 Edition

Most Microsoft Windows (ab)users probably welcome the monthly ritual of applying security updates about as much as they look forward to going to the dentist: It always seems like you were there just yesterday, and you never quite know how it’s all going to turn out. Fortunately, this month’s patch batch from Redmond is mercifully light, at least compared to last month.

Okay, maybe a trip to the dentist’s office is still preferable. In any case, today is the second Tuesday of the month, which means it’s once again Patch Tuesday (or — depending on your setup and when you’re reading this post — Reboot Wednesday). Microsoft today released patches to fix some 93 vulnerabilities in Windows and related software, 35 of which affect various Server versions of Windows, and another 70 that apply to the Windows 10 operating system.

Although there don’t appear to be any zero-day vulnerabilities fixed this month — i.e. those that get exploited by cybercriminals before an official patch is available — there are several issues that merit attention.

Chief among those are patches to address four moderately terrifying flaws in Microsoft’s Remote Desktop Service, a feature which allows users to remotely access and administer a Windows computer as if they were actually seated in front of the remote computer. Security vendor Qualys says two of these weaknesses can be exploited remotely without any authentication or user interaction.

“According to Microsoft, at least two of these vulnerabilities (CVE-2019-1181 and CVE-2019-1182) can be considered ‘wormable’ and [can be equated] to BlueKeep,” referring to a dangerous bug patched earlier this year that Microsoft warned could be used to spread another WannaCry-like ransomware outbreak. “It is highly likely that at least one of these vulnerabilities will be quickly weaponized, and patching should be prioritized for all Windows systems.”

Fortunately, Remote Desktop is disabled by default in Windows 10, and as such these flaws are more likely to be a threat for enterprises that have enabled the application for various purposes. For those keeping score, this is the fourth time in 2019 Microsoft has had to fix critical security issues with its Remote Desktop service.

For all you Microsoft Edge and Internet Exploiter Explorer users, Microsoft has issued the usual panoply of updates for flaws that could be exploited to install malware after a user merely visits a hacked or booby-trapped Web site. Other equally serious flaws patched in Windows this month could be used to compromise the operating system just by convincing the user to open a malicious file (regardless of which browser the user is running).

As crazy as it may seem, this is the second month in a row that Adobe hasn’t issued a security update for its Flash Player browser plugin, which is bundled in IE/Edge and Chrome (although now hobbled by default in Chrome). However, Adobe did release important updates for its Acrobat and free PDF reader products.

If the tone of this post sounds a wee bit cantankerous, it might be because at least one of the updates I installed last month totally hosed my Windows 10 machine. I consider myself an equal OS abuser, and maintain multiple computers powered by a variety of operating systems, including Windows, Linux and MacOS.

Nevertheless, it is frustrating when being diligent about applying patches introduces so many unfixable problems that you’re forced to completely reinstall the OS and all of the programs that ride on top of it. On the bright side, my newly-refreshed Windows computer is a bit more responsive than it was before crash hell.

So, three words of advice. First off, don’t let Microsoft decide when to apply patches and reboot your computer. On the one hand, it’s nice Microsoft gives us a predictable schedule when it’s going to release patches. On the other, Windows 10 will by default download and install patches whenever it pleases, and then reboot the computer.

Unless you change that setting. Here’s a tutorial on how to do that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Secondly, it doesn’t hurt to wait a few days to apply updates.  Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

Finally, please have some kind of system for backing up your files before applying any updates. You can use third-party software for this, or just the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule: at least three copies of your data, on two different types of media, with one copy stored off-site. Thankfully, I’m vigilant about backing up my files.

And, as ever, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Cory DoctorowPodcast: Interoperability and Privacy: Squaring the Circle

In my latest podcast (MP3), I read my essay “Interoperability and Privacy: Squaring the Circle,” published today on EFF’s Deeplinks; it’s another in the series of “adversarial interoperability” explainers, this one focused on how privacy and adversarial interoperability relate to each other.

Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that’s not a standard interoperable component as a competitive advantage, something that no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service or infringing a patent or reverse-engineering a copyright lock or even more nebulous claims like “tortious interference with contract.”

In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that’s not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge.

Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company’s existing products or services, without seeking permission to do so.

MP3

Worse Than FailureCodeSOD: Nullable Knowledge

You’ve got a decimal value- maybe. It could be nothing at all, and you need to handle that null gracefully. Fortunately for you, C# has “nullable types”, which make this task easy.

Ian P’s co-worker made this straightforward application of nullable types.

public static decimal ValidateDecimal(decimal? value)
{
if (value == null) return 0;
decimal returnValue = 0;
Decimal.TryParse(value.ToString(), out returnValue);
return returnValue;
}

The lack of indentation was in the original.

The obvious facepalm is the Decimal.TryParse call. If our decimal has a value, we could just return it, but no, instead, we convert it to a string then convert that string back into a Decimal.

But the real problem here is someone who doesn’t understand what .NET’s nullable types offer. For starters, one could make the argument that value.HasValue is more readable than value == null, though that’s clearly debatable. That’s not really the problem though.

The purpose of ValidateDecimal is to return the input value, unless the input value was null, in which case we want to return 0. Nullable types have a lovely GetValueOrDefault() method, which returns the value, or a reasonable default. What is the default for any built in numeric type?

0.

This method doesn’t need to exist, it’s already built in to the decimal? type. Of course, the built-in method almost certainly doesn’t do a string conversion to get its value, so the one with a string is better, is it knot?
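To spell that out, here is the entire method once GetValueOrDefault does the work -- a minimal sketch, assuming callers really do want 0 for null, as the original implies:

public static decimal ValidateDecimal(decimal? value)
{
    // Returns the wrapped value, or default(decimal) -- that is, 0 -- when value is null
    return value.GetValueOrDefault();
}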

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecuritySEC Investigating Data Leak at First American Financial Corp.

The U.S. Securities and Exchange Commission (SEC) is investigating a security failure on the Web site of real estate title insurance giant First American Financial Corp. that exposed more than 885 million personal and financial records tied to mortgage deals going back to 2003, KrebsOnSecurity has learned.

First American Financial Corp.

In May, KrebsOnSecurity broke the news that the Web site for Santa Ana, Calif.-based First American [NYSE:FAF] exposed some 885 million documents related to real estate closings over the past 16 years, including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts and drivers license images. No authentication was required to view the documents.

The initial tip on that story came from Ben Shoval, a real estate developer based in Seattle. Shoval said he recently received a letter from the SEC’s enforcement division which stated the agency was investigating the data exposure to determine if First American had violated federal securities laws.

In its letter, the SEC asked Shoval to preserve and share any documents or evidence he had related to the data exposure.

“This investigation is a non-public, fact-finding inquiry,” the letter explained. “The investigation does not mean that we have concluded that anyone has violated the law.”

The SEC declined to comment for this story.

Word of the SEC investigation comes weeks after regulators in New York said they were investigating the company in what could turn out to be the first test of the state’s strict new cybersecurity regulation, which requires financial companies to periodically audit and report on how they protect sensitive data, and provides for fines in cases where violations were reckless or willful. First American also is now the target of a class action lawsuit that alleges it “failed to implement even rudimentary security measures.”

First American has issued a series of statements over the past few months that seem to downplay the severity of the data exposure, which the company said was the result of a “design defect” in its Web site.

On June 18, First American said a review of system logs by an outside forensic firm, “based on guidance from the company, identified 484 files that likely were accessed by individuals without authorization. The company has reviewed 211 of these files to date and determined that only 14 (or 6.6%) of those files contain non-public personal information. The company is in the process of notifying the affected consumers and will offer them complimentary credit monitoring services.”

In a statement on July 16, First American said its now-completed investigation identified just 32 consumers whose non-public personal information likely was accessed without authorization.

“These 32 consumers have been notified and offered complimentary credit monitoring services,” the company said.

First American has not responded to questions about how long this “design defect” persisted on its site, how far back it maintained access logs, or how far back in those access logs the company’s review extended.

Updated, Aug. 13, 8:40 a.m.: Added “no comment” from the SEC.

CryptogramEvaluating the NSA's Telephony Metadata Program

Interesting analysis: "Examining the Anomalies, Explaining the Value: Should the USA FREEDOM Act's Metadata Program be Extended?" by Susan Landau and Asaf Lubin.

Abstract: The telephony metadata program, which was authorized under Section 215 of the PATRIOT Act, remains one of the most controversial programs launched by the U.S. Intelligence Community (IC) in the wake of the 9/11 attacks. Under the program major U.S. carriers were ordered to provide NSA with daily Call Detail Records (CDRs) for all communications to, from, or within the United States. The Snowden disclosures and the public controversy that followed led Congress in 2015 to end bulk collection and amend the CDR authorities with the adoption of the USA FREEDOM Act (UFA).

For a time, the new program seemed to be functioning well. Nonetheless, three issues emerged around the program. The first concern was over high numbers: in both 2016 and 2017, the Foreign Intelligence Surveillance Court issued 40 orders for collection, but the NSA collected hundreds of millions of CDRs, and the agency provided little clarification for the high numbers. The second emerged in June 2018 when the NSA announced the purging of three years' worth of CDR records for "technical irregularities." Finally, in March 2019 it was reported that the NSA had decided to completely abandon the program and not seek its renewal as it is due to sunset in late 2019.

This paper sheds significant light on all three of these concerns. First, we carefully analyze the numbers, showing how forty orders might lead to the collection of several million CDRs, thus offering a model to assist in understanding Intelligence Community transparency reporting across its surveillance programs. Second, we show how the architecture of modern telephone communications might cause collection errors that fit the reported reasons for the 2018 purge. Finally, we show how changes in the terrorist threat environment as well as in the technology and communication methods they employ -- in particular the deployment of asynchronous encrypted IP-based communications -- has made the telephony metadata program far less beneficial over time. We further provide policy recommendations for Congress to increase effective intelligence oversight.

Worse Than FailureInternship of Things

Mindy was pretty excited to start her internship with Initech's Internet-of-Things division. She'd been hearing at every job fair how IoT was still going to be blowing up in a few years, and how important it would be for her career to have some background in it.

It was a pretty standard internship. Mindy went to meetings, shadowed developers, did some light-but-heavily-supervised changes to the website for controlling your thermostat/camera/refrigerator all in one device.

As part of testing, Mindy created a customer account on the QA environment for the site. She chucked a junk password at it, only to get a message: "Your password must be at least 8 characters long, contain at least three digits, not in sequence, four symbols, at least one space, and end with a letter, and not be more than 10 characters."

"Um, that's quite the password rule," Mindy said to her mentor, Bob.

"Well, you know how it is, most people use one password for every site, and we don't want them to do that here. That way, when our database leaks again, it minimizes the harm."

"Right, but it's not like you're storing the passwords anyway, right?" Mindy said. She knew that even leaked hashes could be dangerous, but good salting/hashing would go a long way.

"Of course we are," Bob said. "We're selling web connected thermostats to what can be charitably called 'twelve-o-clock flashers'. You know what those are, right? Every clock in their house is flashing twelve?" Bob sneered. "They can't figure out the site, so we often have to log into their account to fix the things they break."

A few days later, Initech was ready to push a firmware update to all of the Model Q baby monitor cameras. Mindy was invited to watch the process so she could understand their workflow. It started off pretty reasonable: their CI/CD system had a verified build, signed off, ready to deploy.

"So, we've got a deployment farm running in the cloud," Bob explained. "There are thousands of these devices, right? So we start by putting the binary up in an S3 bucket." Bob typed a few commands to upload the binary. "What's really important for our process is that it follows this naming convention. Because the next thing we're going to do is spin up a half dozen EC2 instances- virtual servers in the cloud."

A few more commands later, and then Bob had six sessions open to cloud servers in tmux. "Now, these servers are 'clean instances', so the very first thing I have to do is upload our SSH keys." Bob ran an ssh-copy-id command to copy the SSH key from his computer up to the six cloud VMs.

"Wait, you're using your personal SSH keys?"

"No, that'd be crazy!" Bob said. "There's one global key for every one of our Model Q cameras. We've all got a copy of it on our laptops."

"All… the developers?"

"Everybody on the team," Bob said. "Developers to management."

"On their laptops?"

"Well, we were worried about storing something so sensitive on the network."

Bob continued the process, which involved launching a script that would query a webservice to see which Model Q cameras were online, then sshing into them, having them curl down the latest firmware, and then self-update. "For the first few days, we leave all six VMs running, but once most of them have gotten the update, we'll just leave one cloud service running," Bob explained. "Helps us manage costs."

It's safe to say Mindy learned a lot during her internship. Mostly, she learned, "don't buy anything from Initech."

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

CryptogramFriday Squid Blogging: Sinuous Asperoteuthis Mangoldae Squid

Great video of the Sinuous Asperoteuthis Mangoldae Squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityiNSYNQ Ransom Attack Began With Phishing Email

A ransomware outbreak that hit QuickBooks cloud hosting firm iNSYNQ in mid-July appears to have started with an email phishing attack that snared an employee working in sales for the company, KrebsOnSecurity has learned. It also looks like the intruders spent roughly ten days rooting around iNSYNQ’s internal network to properly stage things before unleashing the ransomware. iNSYNQ ultimately declined to pay the ransom demand, and it is still working to completely restore customer access to files.

Some of this detail came in a virtual “town hall” meeting held August 8, in which iNSYNQ chief executive Elliot Luchansky briefed customers on how it all went down, and what the company is doing to prevent such outages in the future.

A great many of iNSYNQ’s customers are accountants, and when the company took its network offline on July 16 in response to the ransomware outbreak, some of those customers took to social media to complain that iNSYNQ was stonewalling them.

“We could definitely have been better prepared, and it’s totally unacceptable,” Luchansky told customers. “I take full responsibility for this. People waiting ridiculous amounts of time for a response is unacceptable.”

By way of explaining iNSYNQ’s initial reluctance to share information about the particulars of the attack early on, Luchansky told customers the company had to assume the intruders were watching and listening to everything iNSYNQ was doing to recover operations and data in the wake of the ransomware outbreak.

“That was done strategically for a good reason,” he said. “There were human beings involved with [carrying out] this attack in real time, and we had to assume they were monitoring everything we could say. And that posed risks based on what we did say publicly while the ransom negotiations were going on. It could have been used in a way that would have exposed customers even more. That put us in a really tough bind, because transparency is something we take very seriously. But we decided it was in our customers’ best interests to not do that.”

A paid ad that comes up prominently when one searches for “insynq” in Google.

Luchansky did not say how much the intruders were demanding, but he mentioned two key factors that informed the company’s decision not to pay up.

“It was a very substantial amount, but we had the money wired and were ready to pay it in cryptocurrency in the case that it made sense to do so,” he told customers. “But we also understood [that paying] would put a target on our heads in the future, and even if we actually received the decryption key, that wasn’t really the main issue here. Because of the quick reaction we had, we were able to contain the encryption part” to roughly 50 percent of customer systems, he said.

Luchansky said the intruders seeded its internal network with MegaCortex, a potent new ransomware strain first spotted just a couple of months ago that is being used in targeted attacks on enterprises. He said the attack appears to have been carefully planned out in advance and executed “with human intervention all the way through.”

“They decided they were coming after us,” he said. “It’s one thing to prepare for these sorts of events but it’s an entirely different experience to deal with first hand.”

According to an analysis of MegaCortex published this week by Accenture iDefense, the crooks behind this ransomware strain are targeting businesses — not home users — and demanding ransom payments in the range of two to 600 bitcoins, which is roughly $20,000 to $5.8 million.

“We are working for profit,” reads the ransom note left behind by the latest version of MegaCortex. “The core of this criminal business is to give back your valuable data in the original form (for ransom of course).”

A portion of the ransom note left behind by the latest version of MegaCortex. Image: Accenture iDefense.

Luchansky did not mention in the town hall meeting exactly when the initial phishing attack was thought to have occurred, noting that iNSYNQ is still working with California-based CrowdStrike to gain a more complete picture of the attack.

But Alex Holden, founder of Milwaukee-based cyber intelligence firm Hold Security, showed KrebsOnSecurity information obtained from monitoring dark web communications which suggested the problem started on July 6, after an employee in iNSYNQ’s sales division fell for a targeted phishing email.

“This shows that even after the initial infection, if companies act promptly they can still detect and stop the ransomware,” Holden said. “For these infections hackers take sometimes days, weeks, or even months to encrypt your data.”

iNSYNQ did not respond to requests for comment on Hold Security’s findings.

Asked whether the company had backups of customer data and — if so — why iNSYNQ decided not to restore from those, Luchansky said there were backups but that some of those were also infected.

“The backup system is backing up the primary system, and that by definition entails some level of integration,” Luchansky explained. “The way our system was architected, the malware had spread into the backups as well, at least a little bit. So [by] just turning the backups back on, there was a good chance the virus would then start to spread through the backup system more. So we had to treat the backups similarly to how we were treating the primary systems.”

Luchansky said their backup system has since been overhauled, and that if a similar attack happened in the future it would take days instead of weeks to recover. However, he declined to get into specifics about exactly what had changed, which is too bad because in every ransomware attack story I’ve written this seems to be the detail most readers are interested in and arguing about.

The CEO added that iNSYNQ also will be partnering with a company that helps firms detect and block targeted phishing attacks, and that it envisioned being able to offer this to its customers at a discounted rate. It wasn’t clear from Luchansky’s responses to questions whether the cloud hosting firm was also considering any kind of employee anti-phishing education and/or testing service.

Luchansky said iNSYNQ was able to restore access to more than 90 percent of customer files by Aug. 2 — roughly two weeks after the ransomware outbreak — and that the company would be offering customers a two month credit as a result of the outage.

Sociological ImagesData Science Needs Social Science

What do college graduates do with a sociology major? We just got an updated look from Phil Cohen this week:

These are all great career fields for our students, but as I was reading the list I realized there is a huge industry missing: data science and analytics. From Netflix to national policy, many interesting and lucrative jobs today are focused on properly observing, understanding, and trying to predict human behavior. With more sociology graduate programs training their students in computational social science, there is a big opportunity to bring those skills to teaching undergraduates as well.

Of course, data science has its challenges. Social scientists have observed that the booming field has some big problems with bias and inequality, but this is sociology’s bread and butter! When we talk about these issues, we usually go straight to very important conversations about equity, inclusion, and justice, and rightfully so; it is easy to design algorithms that seem like they make better decisions, but really just develop their own biases from watching us.

We can also tackle these questions by talking about research methods–another place where sociologists shine! We spend a lot of time thinking about whether our methods for observing people are valid and reliable. Are we just watching talk, or action? Do people change when researchers watch them? Once we get good measures and a strong analytic approach, can we do a better job explaining how and why bias happens to prevent it in the future?

Sociologists are well-positioned to help make sense of big questions in data science, and the field needs them. According to a recent industry report, only 5% of data scientists come out of the social sciences! While other areas of study may provide more of the technical skills to work in analytics, there is only so much that the technology can do before companies and research centers need to start making sense of social behavior. 

Source: Burtch Works Executive Recruiting. 2018. “Salaries of Data Scientists.” Emphasis mine.

So, if students or parents start up the refrain of “what can you do with a sociology major” this fall, consider showing them the social side of data science!

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureError'd: Intentionally Obtuse

"Normally I do pretty well on the Super Quiz, but then they decided to do it in Latin," writes Mike S.

 

"Uh oh, this month's AWS costs are going to be so much higher than last month's!" Ben H. writes.

 

Amanda C. wrote, "Oh, neat, Azure has some recommendations...wait...no...'just kidding' I guess?"

 

"Here I never thought that SQL Server log space could go negative, and yet, here we are," Michael writes.

 

"I love the form factor on this motherboard, but I'm not sure what case to buy with it," Todd C. writes, "Perhaps, if it isn't working, I can just give it a little kick?"

 

Maarten C. writes, "Next time, I'll name my spreadsheets with dog puns...maybe that'll make things less ruff."

 

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

CryptogramSupply-Chain Attack against the Electron Development Platform

Electron is a cross-platform development system for many popular communications apps, including Skype, Slack, and WhatsApp. Security vulnerabilities in the update system allow someone to silently inject malicious code into applications. From a news article:

At the BSides LV security conference on Tuesday, Pavel Tsakalidis demonstrated a tool he created called BEEMKA, a Python-based tool that allows someone to unpack Electron ASAR archive files and inject new code into Electron's JavaScript libraries and built-in Chrome browser extensions. The vulnerability is not part of the applications themselves but of the underlying Electron framework -- and that vulnerability allows malicious activities to be hidden within processes that appear to be benign. Tsakalidis said that he had contacted Electron about the vulnerability but that he had gotten no response -- and the vulnerability remains.

While making these changes required administrator access on Linux and MacOS, it only requires local access on Windows. Those modifications can create new event-based "features" that can access the file system, activate a Web cam, and exfiltrate information from systems using the functionality of trusted applications -- including user credentials and sensitive data. In his demonstration, Tsakalidis showed a backdoored version of Microsoft Visual Studio Code that sent the contents of every code tab opened to a remote website.

Basically, the Electron ASAR files aren't signed or encrypted, so modifying them is easy.

Note that this attack requires local access to the computer, which means that an attacker that could do this could do much more damaging things as well. But once an app has been modified, it can be distributed to other users. It's not a big deal attack, but it's a vulnerability that should be closed.

CryptogramAT&T Employees Took Bribes to Unlock Smartphones

This wasn't a small operation:

A Pakistani man bribed AT&T call-center employees to install malware and unauthorized hardware as part of a scheme to fraudulently unlock cell phones, according to the US Department of Justice. Muhammad Fahd, 34, was extradited from Hong Kong to the US on Friday and is being detained pending trial.

An indictment alleges that "Fahd recruited and paid AT&T insiders to use their computer credentials and access to disable AT&T's proprietary locking software that prevented ineligible phones from being removed from AT&T's network," a DOJ announcement yesterday said. "The scheme resulted in millions of phones being removed from AT&T service and/or payment plans, costing the company millions of dollars. Fahd allegedly paid the insiders hundreds of thousands of dollars -- paying one co-conspirator $428,500 over the five-year scheme."

In all, AT&T insiders received more than $1 million in bribes from Fahd and his co-conspirators, who fraudulently unlocked more than 2 million cell phones, the government alleged. Three former AT&T customer service reps from a call center in Bothell, Washington, already pleaded guilty and agreed to pay the money back to AT&T.

Worse Than FailureCodeSOD: Swimming Downstream

When Java added their streams API, they brought the power and richness of functional programming styles to the JVM, if we ignore all the other non-Java JVM languages that already did this. Snark aside, streams were a great addition to the language, especially if we want to use them absolutely wrong.

Like this code Miles found.

See, every object in the application needs to have a unique identifier. So, for every object, there’s a method much like this one:

/**
     * Get next priceId
     *
     * @return next priceId
     */
    public String createPriceId() {
        List<String> ids = this.prices.stream().map(m -> m.getOfferPriceId()).collect(Collectors.toList());
        for (Integer i = 0; i < ids.size(); i++) {
            ids.set(i, ids.get(i).split("PR")[1]);
        }
        try {
            List<Integer> intIds = ids.stream().map(id -> Integer.parseInt(id)).collect(Collectors.toList());
            Integer max = intIds.stream().mapToInt(id -> id).max().orElse(0);
            return "PR" + (max + 1);
        } catch (Exception e) {
            return "PR" + 1;
        }
    }

The point of a stream is that you can build a processing pipeline: starting with a list, you can perform a series of operations but only touch each item in the stream once. That, of course, isn’t what we do here.

First, we map the prices to extract the offerPriceId and convert it into a list. Now, this list is a set of strings, so we iterate across that list of IDs, to break the "PR" prefix off. Then, we’ll map that list of IDs again, to parse the strings into integers. Then, we’ll cycle across that new list one more time, to find the max value. Then we can return a new ID.

And if anything goes wrong in this process, we won’t complain. We just return an ID that’s likely incorrect: "PR1". That’ll probably cause an error later, right? They can deal with it then.

Everything here is just wrong. This is the wrong way to use streams- the whole point is this could have been a single chain of function calls that only needed to iterate across the input data once. It’s also the wrong way to handle exceptions. And it’s also the wrong way to generate IDs.

Worse, a largely copy/pasted version of this code, with the names and prefixes changed, exists in nearly every model class. And these are database model classes, according to Miles, so one has to wonder if there might be a better way to generate IDs…
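For contrast, here is a sketch of the single-pass pipeline the streams API is built for (same assumed model fields as the original; the max-plus-one scheme itself remains a bad idea when the database can hand out sequence values):

public String createPriceId() {
    int max = this.prices.stream()
            .map(m -> m.getOfferPriceId())   // "PR42"
            .map(id -> id.substring(2))      // strip the "PR" prefix -> "42"
            .mapToInt(Integer::parseInt)     // 42
            .max()                           // highest existing ID, if any
            .orElse(0);                      // empty list: numbering starts at PR1
    return "PR" + (max + 1);
}

Each element is touched once, and a malformed ID now fails loudly with a NumberFormatException instead of silently minting a duplicate "PR1".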

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Valerie AuroraGoth fashion tips for Ehlers-Danlos Syndrome

A woman wearing a dramatic black hooded jacket typing on a laptop
Skingraft hoodie, INC shirt, Fisherman’s Wharf fingerless gloves

My ideal style could perhaps be best described as “goth chic”—a lot of structured black somewhere on the border between couture and business casual—but because I have Ehlers-Danlos Syndrome, I more often end up wearing “sport goth”: a lot of stretchy black layers in washable fabrics with flat shoes. With great effort, I’ve nudged my style back towards “goth chic,” at least on good days. Enough people have asked me about my gear that I figured I’d share what I’ve learned with other EDS goths (or people who just like being comfortable and also wearing a lot of black).

Here are the constraints I’m operating under:

  • Flat shoes with thin soles to prevent ankle sprains and foot and back pain
  • Stretchy/soft shoes without pressure points to prevent blisters on soft skin
  • Can’t show sweat because POTS causes excessive sweating, also I walk a lot
  • Layers because POTS, walking, and San Francisco weather means I need to adjust my temperature a lot
  • Little or no weight on shoulders due to hypermobile shoulders
  • No tight clothes on abdomen due to pain (many EDS folks don’t have this problem but I do)
  • Soft fabric only touching skin due to sensitive easily irritated skin
  • Warm wrists to prevent hands from losing circulation due to Reynaud’s or POTS

On the other hand, I have a few things that make fashion easier for me. For starters, I can afford a four-figure annual clothing budget. I still shop a lot at thrift stores, discount stores like Ross, or discount versions of more expensive stores like Nordstrom Rack but I can afford a few expensive pieces at full price. Many of the items on this page can be found used on Poshmark, eBay, and other online used clothing marketplaces. I also recommend doing the math for “cost per wear” to figure out if you would save money if you wore a more expensive but more durable piece for a longer period of time. I usually keep clothing and shoes for several years and repair as necessary.

I currently fit within the “standard” size ranges of most clothing and shoe brands, but many of the brands I recommend here have a wider range of sizes. I’ve included the size range where relevant.

Finally, as a cis woman with an extremely femme body type, I can wear a wide range of masculine and feminine styles without being hassled in public for being gender-nonconforming (I still get hassled in public for being a woman, yay). Most of the links here are to women’s styles, but many brands also have men’s styles. (None of these brands have unisex styles that I know of.)

Shoes and socks

Shoes are my favorite part of fashion! I spend much more money on shoes than I used to because more expensive shoes are less likely to give me blisters. If I resole/reheel/polish them regularly, they can last for several years instead of a few months, so they cost the same per wear. Functional shoes are notoriously hard for EDS people to find, so the less often I have to search for new shoes, the better. I nearly always wear my shoes until they can no longer be repaired. If this post does nothing other than convince you that it is economical and wise to spend more money on shoes, I have succeeded.

Woman wearing two coats and holding two rolling bags
Via Spiga trench, Mossimo hoodie, VANELi flats, Aimee Kestenberg rolling laptop bag, Travelpro rolling bag

Smartwool black socks – My poor tender feet need cushiony socks that don’t sag or rub. Smartwool socks are expensive but last forever, and you can get them in 100% black so that you can wash them with your black clothes without covering them in little white balls. I wear mostly the men’s Walk Light Crew and City Slicker, with occasional women’s Hide and Seek No Show.

Skechers Cleo flats – These are a line of flats in a stretchy sweater-like material. The heel can be a little scratchy, but I sewed ribbon over the seam and it was fine. The BOBS line of Skechers is also extremely comfortable. Sizes 5 – 11.

VANELi flats – The sportier versions of these shoes are obscenely comfortable and also on the higher end of fashion. I wore my first pair until they had holes in the soles, and then I kept wearing them another year. I’m currently wearing out this pair. You can get them majorly discounted at DSW and similar places. Sizes 5 – 12.

Stuart Weitzman 5050 boots – These over-the-knee boots are the crown jewel of any EDS goth wardrobe. First, they are almost totally flat and roomy in the toe. Second, the elastic in the boot shaft acts like compression socks, helping with POTS. Third, they look amazing. Charlize Theron wore them in “Atomic Blonde” while performing martial arts. Angelina Jolie wears these in real life. The downside is the price, but there is absolutely no reason to pay full price. I regularly find them in Saks Off 5th for 30% off. Also, they last forever: with reheeling, my first pair lasted around three years of heavy use. Stuart Weitzman makes several other flat boots with elastic shafts which are also worth checking out, but they have been making the 5050 for around 25 years so this style should always be available. Sizes 4 – 12, runs about a half size large.

Pants/leggings/skirts

A woman wearing black leggings and a black sweater
Patty Boutik sweater, Demobaza leggings, VANELi flats

Satina high-waisted leggings – I wear these extremely cheap leggings probably five days a week under skirts or dresses. Available in two size ranges, S – L and XL – XXXL. If you can wear tight clothing, you might want to check out the Spanx line of leggings (e.g. the Moto Legging), which I would totally wear if I could.

Toad & Co. Women’s Chaka skirt – I wear this skirt probably three days a week. Ridiculously comfortable and only middling expensive. Sizes XS – L.

NYDJ jeans/leggings – These are pushing it for me in terms of tightness, but I can wear them if I’m standing or walking most of the day. Expensive, but they look professional and last forever. Sizes 00 – 28, including petites, and they run at least a size large.

Demobaza leggings – The leggings made mostly of stretch material are amazingly comfortable, but also obscenely expensive. They also last forever. Sizes XS – L.

Tops

Patty Boutik – This strange little label makes comfortable tops with long long sleeves and long long bodies, and it keeps making the same styles for years. Unfortunately, they tend to sell out of the solid black versions of my favorite tops on a regular basis. I order two or three of my favorite styles whenever they are in stock as they are reasonably cheap. I’ve been wearing the 3/4 sleeve boat neck shirt at least once a week for about 5 years now. Sizes XS – XL, tend to run a size small.

14th and Union – This label makes very simple pieces out of the most comfortable fabrics I’ve ever worn for not very much money. I wear this turtleneck long sleeve tee about once a week. I also like their skirts. Sizes XS to XL, standard and petite.

Macy’s INC – This label is a reliable source of stretchy black clothing at Macy’s prices. It often edges towards club wear but keeps the simplicity I prefer.

Coats

Mossimo hoodie – Ugh, I love this thing. It’s the perfect cheap fashion staple. I often wear it underneath other coats. Not sure about sizes since it is only available on resale sites.

Skingraft Royal Hoodie – A vastly more expensive version of the black hoodie, but still comfortable, stretchy, and washable. And oh so dramatic. Sizes XS – L.

3/4 length hooded black trench coat – Really any brand will do, but I’ve most recently worn out a Calvin Klein and am currently wearing a Via Spiga.

Accessories

A woman wearing all black with a fanny pack
Mossimo hoodie, Toad & Co. skirt, T Tahari fanny pack, Satina leggings, VANELi flats

Fingerless gloves – The cheaper, the better! I buy these from the tourist shops at Fisherman’s Wharf in San Francisco for under $10. I am considering these gloves from Demobaza.

Medline folding cane – Another cheap fashion staple for the EDS goth! Sturdy, adjustable, folding black cane with clean sleek lines.

T Tahari Logo Fanny Pack – I stopped being able to carry a purse right about the time fanny packs came back into style! Ross currently has an entire fanny pack section, most of which are under $13. If I’m using a backpack or the rolling laptop bag, I usually keep my wallet, phone, keys, and lipstick in the fanny pack for easy access.

Duluth Child’s Pack, Envelope style – A bit expensive, but another simple fashion staple. I used to carry the larger roll-top canvas backpack until I realized I was packing it full of stuff and aggravating my shoulders. The child’s pack barely fits a small laptop and a few accessories.

Aimee Kestenberg rolling laptop bag – For the days when I need more than I can fit in my tiny backpack and fanny pack. It has a strap to fit on to the handle of a rolling luggage bag, which is great for air travel.

Apple Watch – The easiest way to diagnose POTS! (Look up “poor man’s tilt table test.”) A great way to track your heart rate and your exercise, two things I am very focused on as someone with EDS. When your first watch band wears out, go ahead and buy a random cheap one off the Internet.

Those are my EDS goth fashion tips! If you have more, please share them in the comments.

,

Krebs on SecurityWho Owns Your Wireless Service? Crooks Do.

Incessantly annoying and fraudulent robocalls. Corrupt wireless company employees taking hundreds of thousands of dollars in bribes to unlock and hijack mobile phone service. Wireless providers selling real-time customer location data, despite repeated promises to the contrary. A noticeable uptick in SIM-swapping attacks that lead to multi-million dollar cyberheists.

If you are somehow under the impression that you — the customer — are in control over the security, privacy and integrity of your mobile phone service, think again. And you’d be forgiven if you assumed the major wireless carriers or federal regulators had their hands firmly on the wheel.

No, a series of recent court cases and unfortunate developments highlight the sad reality that the wireless industry today has all but ceded control over this vital national resource to cybercriminals, scammers, corrupt employees and plain old corporate greed.

On Tuesday, Google announced that an unceasing deluge of automated robocalls had doomed a feature of its Google Voice service that sends transcripts of voicemails via text message.

Google said “certain carriers” are blocking the delivery of these messages because all too often the transcripts resulted from unsolicited robocalls, and that as a result the feature would be discontinued by Aug. 9. This is especially rich given that one big reason people use Google Voice in the first place is to screen unwanted communications from robocalls, mainly because the major wireless carriers have shown themselves incapable or else unwilling to do much to stem the tide of robocalls targeting their customers.

AT&T in particular has had a rough month. In July, the Electronic Frontier Foundation (EFF) filed a class action lawsuit on behalf of AT&T customers in California to stop the telecom giant and two data location aggregators from allowing numerous entities — including bounty hunters, car dealerships, landlords and stalkers — to access wireless customers’ real-time locations without authorization.

And on Monday, the U.S. Justice Department revealed that a Pakistani man was arrested and extradited to the United States to face charges of bribing numerous AT&T call-center employees to install malicious software and unauthorized hardware as part of a scheme to fraudulently unlock cell phones.

Ars Technica reports the scam resulted in millions of phones being removed from AT&T service and/or payment plans, and that the accused allegedly paid insiders hundreds of thousands of dollars to assist in the process.

We should all probably be thankful that the defendant in this case wasn’t using his considerable access to aid criminals who specialize in conducting unauthorized SIM swaps, an extraordinarily invasive form of fraud in which scammers bribe or trick employees at mobile phone stores into seizing control of the target’s phone number and diverting all texts and phone calls to the attacker’s mobile device.

Late last month, a federal judge in New York rejected a request by AT&T to dismiss a $224 million lawsuit over a SIM-swapping incident that led to $24 million in stolen cryptocurrency.

The defendant in that case, 21-year-old Manhattan resident Nicholas Truglia, is alleged to have stolen more than $80 million from victims of SIM swapping, but he is only one of many individuals involved in this incredibly easy, increasingly common and lucrative scheme. The plaintiff in that case alleges that he was SIM-swapped on two different occasions, both allegedly involving crooked or else clueless employees at AT&T wireless stores.

And let’s not forget about all the times various hackers figured out ways to remotely use a carrier’s own internal systems for looking up personal and account information on wireless subscribers.

So what the fresh hell is going on here? And is there any hope that lawmakers or regulators will do anything about these persistent problems? Gigi Sohn, a distinguished fellow at the Georgetown Institute for Technology Law and Policy, said the answer — at least in this administration — is probably a big “no.”

“The takeaway here is the complete and total abdication of any oversight of the mobile wireless industry,” Sohn told KrebsOnSecurity. “Our enforcement agencies aren’t doing anything on these topics right now, and we have a complete and total breakdown of oversight of these incredibly powerful and important companies.”

Aaron Mackey, a staff attorney at the EFF, said that on the location data-sharing issue, federal law already bars the wireless carriers from sharing this with third parties without the expressed consent of consumers.

“What we’ve seen is the Federal Communications Commission (FCC) is well aware of this ongoing behavior about location data sales,” Mackey said. “The FCC has said it’s under investigation, but there has been no public action taken yet and this has been going on for more than a year. The major wireless carriers are not only violating federal law, but they’re also putting people in harm’s way. There are countless stories of folks being able to pretend to be law enforcement and gaining access to information they can use to assault and harass people based on the carriers making location data available to a host of third parties.”

On the issue of illegal SIM swaps, Wired recently ran a column pointing to a solution that many carriers in Africa have implemented which makes it much more difficult for SIM swap thieves to ply their craft.

“The carrier would set up a system to let the bank query phone records for any recent SIM swaps associated with a bank account before they carried out a money transfer,” wrote Wired’s Andy Greenberg in April. “If a SIM swap had occurred in, say, the last two or three days, the transfer would be blocked. Because SIM swap victims can typically see within minutes that their phone has been disabled, that window of time let them report the crime before fraudsters could take advantage.”

For its part, AT&T says it is now offering a solution to help diminish the fallout from unauthorized SIM swaps, and that the company is planning on publishing a consumer blog on this soon. Here are some excerpts from what they sent on that front:

“Our AT&T Authentication and Verification Service, or AAVS. AAVS offers a new method to help businesses determine that you are, in fact, you,” AT&T said in a statement. “This is how it works. If a business or company builds the AAVS capability into its website or mobile app, it can automatically connect with us when you attempt to log-in. Through that connection, the number and the phone are matched to confirm the log-in. If it detects something fishy, like the SIM card not in the right device, the transaction won’t go through without further authorization.”

“It’s like an automatic background check on your phone’s history, but with no personal information changing hands, and it all happens in a flash without you knowing. Think about how you do business with companies on your mobile device now. You typically log into an online account or a mobile app using a password or fingerprint. Some tasks might require you to receive a PIN from your institution for additional security, but once you have access, you complete your transactions. With AAVS, the process is more secure, and nothing changes for you. By creating an additional layer of security without adding any steps for the consumer, we can take larger strides in helping businesses and their customers better protect their data and prevent fraud. Even if it is designed to go unnoticed, we want you to know that extra layer of protection exists. In fact, we’re offering it to dozens of financial institutions.”

“We are working with several leading banks to roll out this service to protect their customers accessing online accounts and mobile apps in the coming months, with more to follow. By directly working with those banks, we can help to better protect your information.”

In terms of combating the deluge of robocalls, Sohn says we already have a workable approach to arresting these nuisance calls: It’s an authentication procedure known as “SHAKEN/STIR,” and it is premised on the idea that every phone has a certificate of authenticity attached to it that can be used to validate if the call is indeed originating from the number it appears to be calling from.

Under a SHAKEN/STIR regime, anyone who is spoofing their number (and most of these robocalls are spoofed to appear as though they come from a number that is in the same prefix as yours) gets automatically blocked.

“The FCC could make the carriers provide robocall apps for free to customers, but they’re not,” Sohn said. “The carriers instead are turning around and charging customers extra for this service. There was a fairly strong anti-robocalls bill that passed the House, but it’s now stuck in the legislative graveyard that is the Senate.”

AT&T said it and the other major carriers in the US are adopting SHAKEN/STIR and do not plan to charge for it. The company said it is working on building this feature into its Call Protect app, which is free and is meant to help customers block unwanted calls.

What about the prospects of any kind of major overhaul to the privacy laws in this country that might give consumers more say over who can access their private data and what recourse they may have when companies entrusted with that information screw up?

Sohn said there are few signs that anyone in Congress is seriously championing consumer privacy as a major legislative issue. Most of the nascent efforts to bring privacy laws in the United States into the 21st century, she said, are interminably bogged down on two sticky issues: federal preemption of stronger state laws, and the ability of consumers to bring a private right of civil action in the courts against companies that violate those provisions.

“It’s way past time we had a federal privacy bill,” Sohn said. “Companies like Facebook and others are practically begging for some type of regulatory framework on consumer privacy, yet this congress can’t manage to put something together. To me it’s incredible we don’t even have a discussion draft yet. There’s not even a bill that’s being discussed and debated. That is really pitiful, and the closer we get to elections, the less likely it becomes because nobody wants to do anything that upsets their corporate contributions. And, frankly, that’s shameful.”

Update, Aug. 8, 2:05 p.m. ET: Added statements and responses from AT&T.

CryptogramBrazilian Cell Phone Hack

I know there's a lot of politics associated with this story, but concentrate on the cybersecurity aspect for a moment. The cell phones of a thousand Brazilians, including senior government officials, were hacked -- seemingly by actors much less sophisticated than rival governments.

Brazil's federal police arrested four people for allegedly hacking 1,000 cellphones belonging to various government officials, including that of President Jair Bolsonaro.

Police detective João Vianey Xavier Filho said the group hacked into the messaging apps of around 1,000 different cellphone numbers, but provided little additional information at a news conference in Brasilia on Wednesday. Cellphones used by Bolsonaro were among those attacked by the group, the justice ministry said in a statement on Thursday, adding that the president was informed of the security breach.

[...]

In the court order determining the arrest of the four suspects, Judge Vallisney de Souza Oliveira wrote that the hackers had accessed Moro's Telegram messaging app, along with those of two judges and two federal police officers.

When I say that smartphone security equals national security, this is the kind of thing I am talking about.

Worse Than FailureCodeSOD: Seven First Dates

Your programming language is PHP, whose native timestamps are seconds since the epoch; the timestamps in this application, though, arrive as milliseconds since the epoch. Your database, on the other hand, represents datetimes as seconds past the epoch. Now, your database driver certainly has methods to handle this, but can you really trust that?

Nancy found some code which simply needs to check: for the past week, how many customers arrived each day?

$customerCount = array();
$result2 = array();
$result3 = array();
$result4 = array();

$min = 20;
$max = 80;

for ( $i = $date; $i < $date + $days7 + $day; $i += $day ) {

	$first_datetime = date('Y-m-d H:i',substr($i - $day,0,-3));
	$second_datetime = date('Y-m-d H:i',substr($i,0,-3));

	$sql = $mydb ->prepare("SELECT 
								COUNT(DISTINCT Customer.ID) 'Customer'
				            FROM Customer
				                WHERE Timestamp BETWEEN %s AND %s",$first_datetime,$second_datetime);
	$output = $mydb->get_row($sql);
	array_push( $customerCount, $output->Customer == null ? 0 : $output->Customer);
}

array_push( $result4, $customerCount );
array_push( $result4, $result2 );
array_push( $result4, $result3 );

return $result4;

If you have a number of milliseconds and you wish to convert it to seconds, you might do something silly and divide by 1,000, but here we have a more obvious solution: substr the last three digits off to create our $first_datetime and $second_datetime.
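For the record, the silly approach is one line of integer arithmetic; here is a minimal sketch for contrast (the variable names are mine, not from the original code):

$milliseconds = 1565280000000;            // e.g. a JavaScript-style timestamp
$seconds = intdiv($milliseconds, 1000);   // numeric conversion, no string surgery
$formatted = date('Y-m-d H:i', $seconds); // ready for the SQL BETWEEN clause

The substr version only works because decimal notation happens to park the milliseconds in the last three characters; hand it a value under 1,000 and it quietly falls apart.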

Using those substr-built datetimes, we can prepare a separate query for each day, looping across them to populate $customerCount.

Once we’ve collected all the counts in $customerCount, we then push that into $result4. And then we push the empty $result2 into $result4, followed by the equally empty $result3, at which point we can finally return $result4.

There’s no $result1, but it looks like $customerCount was a renamed version of that, just by the sequence of declarations. And then $min and $max are initialized but never used, and from that, it’s very easy to understand what happened here.

The original developer copied some sample code from a tutorial, but they didn’t understand it. They knew they had a goal, and they knew that their goal was similar to the tutorial, so they just blundered about changing things until they got the results they expected.

Nancy threw all this out and replaced it with a GROUP BY query.
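Her replacement presumably looked something like the sketch below: one query, one round trip, with the database doing the bucketing. This assumes $mydb is a WordPress-style wpdb wrapper (the prepare()/get_row() calls above suggest as much) and that Timestamp is a real datetime column:

$first = date('Y-m-d H:i', intdiv($date, 1000));          // window start, milliseconds to seconds
$last  = date('Y-m-d H:i', intdiv($date + $days7, 1000)); // window end, seven days later

$sql = $mydb->prepare(
	"SELECT DATE(Timestamp) AS Day,
	        COUNT(DISTINCT Customer.ID) AS Customers
	 FROM Customer
	 WHERE Timestamp BETWEEN %s AND %s
	 GROUP BY DATE(Timestamp)
	 ORDER BY Day",
	$first, $last);
$rows = $mydb->get_results($sql); // one row per day that had customers

The only wrinkle is that days with zero customers simply don’t appear in the result, so the caller has to fill in those gaps, which is a fair trade for collapsing seven queries into one.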


,

Cory DoctorowPodcast: “IBM PC Compatible”: how adversarial interoperability saved PCs from monopolization

In my latest podcast (MP3), I read my essay “IBM PC Compatible”: how adversarial interoperability saved PCs from monopolization, published today on EFF’s Deeplinks; it’s another installment in my series about “adversarial interoperability,” and the role it has historically played in keeping tech open and competitive. This time, I relate the origin story of the “PC compatible” computer, with help from Tom Jennings (inventor of FidoNet!) who played a key role in the story.

All that changed in 1981, when IBM entered the PC market with its first personal computer, which quickly became the de facto standard for PC hardware. There are many reasons that IBM came to dominate the fragmented PC market: they had the name recognition (“No one ever got fired for buying IBM,” as the saying went) and the manufacturing experience to produce reliable products.

Equally important was IBM’s departure from its usual business practice of pursuing advantage by manufacturing entire systems, down to the subcomponents. Instead, IBM decided to go with an “open” design that incorporated the same commodity parts that the existing PC vendors were using, including MS-DOS and Intel’s 8086 chip. To accompany this open hardware, IBM published exhaustive technical documentation that covered every pin on every chip, every way that programmers could interact with IBM’s firmware (analogous to today’s “APIs”), as well as all the non-standard specifications for its proprietary ROM chip, which included things like the addresses where IBM had stored the fonts it bundled with the system.

Once IBM’s PC became the standard, rival hardware manufacturers realized that they had to create systems that were compatible with IBM’s systems. The software vendors were tired of supporting a lot of idiosyncratic hardware configurations, and IT managers didn’t want to have to juggle multiple versions of the software they relied on. Unless non-IBM PCs could run software optimized for IBM’s systems, the market for those systems would dwindle and wither.

MP3

,

Cory DoctorowPaul Di Filippo on Radicalized: “Upton-Sinclairish muckraking, and Dickensian-Hugonian ashcan realism”

I was incredibly gratified and excited to read Paul Di Filippo’s Locus review of my latest book, Radicalized; Di Filippo is a superb writer, one of the original Mirrorshades cyberpunks, and an insightful literary critic, so when I read his superlative-laden review of my book today, it was an absolute thrill (I haven’t been this excited about a review since Bruce Sterling reviewed Walkaway).


There’s so much to be delighted by in this review, not least a comparison to Rod Serling (!). Below, a couple paras of especial note.

His latest, a collection of four novellas, subtitled “Four Tales of Our Present Moment”, fits the template perfectly, and extends his vision further into a realm where impassioned advocacy, Upton-Sinclairish muckraking, and Dickensian-Hugonian ashcan realism drives a kind of partisan or Cassandran science fiction seen before mostly during the post-WWII atomic bomb panic (think On the Beach) and 1960s New Wave-Age of Aquarius agitation (think Bug Jack Barron). Those earlier troubled eras resonate with our current quandary, but the “present moment” under Doctorow’s microscope – or is that a sniper’s crosshairs? – has its own unique features that he seeks to elucidate. These stories walk a razor’s edge between literature and propaganda, aesthetics and bludgeoning, subtlety and stridency, rant and revelation. The only guaranteed outcome after reading is that no one can be indifferent to them…

…The Radicalized collection strikes me in some sense as an episode of a primo TV anthology series – Night Gallery in the classical mode, or maybe in a more modern version, Philip K. Dick’s Electric Dreams. It gives us polymath Cory Doctorow as talented Rod Serling – himself both a dreamer and a social crusader – telling us that he’s going to show us, as vividly as he can, several nightmares or future hells, but that somehow the human spirit and soul will emerge intact and even triumphant.


Paul Di Filippo Reviews Radicalized by Cory Doctorow [Paul Di Filippo/Locus]

,

Cory DoctorowHoustonites! Come see Hank Green and me in conversation tomorrow night!

Hank Green and I are doing a double act tomorrow night, July 31, as part of the tour for the paperback of his debut novel, An Absolutely Remarkable Thing. It’s a ticketed event (admission includes a copy of Hank’s book), and we’re presenting at 7PM at Spring Forest Middle School in association with Blue Willow Bookshop. Hope to see you there!

,

Cory DoctorowPodcast: Adblocking: How About Nah?

In my latest podcast (MP3), I read my essay Adblocking: How About Nah?, published last week on EFF’s Deeplinks; it’s the latest installment in my series about “adversarial interoperability,” and the role it has historically played in keeping tech open and competitive, and how that role is changing now that yesterday’s scrappy startups have become today’s bloated incumbents, determined to prevent anyone from disrupting them the way they disrupted tech in their early days.

At the height of the pop-up wars, it seemed like there was no end in sight: the future of the Web would be one where humans adapted to pop-ups, then pop-ups found new, obnoxious ways to command humans’ attention, which would wane, until pop-ups got even more obnoxious.

But that’s not how it happened. Instead, browser vendors (beginning with Opera) started to ship on-by-default pop-up blockers. What’s more, users—who hated pop-up ads—started to choose browsers that blocked pop-ups, marginalizing holdouts like Microsoft’s Internet Explorer, until they, too, added pop-up blockers.

Chances are, those blockers are in your browser today. But here’s a funny thing: if you turn them off, you won’t see a million pop-up ads that have been lurking unseen for all these years.

Because once pop-up ads became invisible by default to an ever-larger swathe of Internet users, advertisers stopped demanding that publishers serve pop-up ads. The point of pop-ups was to get people’s attention, but something that is never seen in the first place can’t possibly do that.

MP3

Rondam RamblingsFedex: when it absolutely, positively has to get stuck in the system for over two months

I have seen some pretty serious corporate bureaucratic dysfunction over the years, but I think this one takes the cake: on May 23, we shipped a package via Fedex from California to Colorado.  The package required a signature.  It turned out that the person we sent it to had moved, and so was not able to sign for the package, and so it was not delivered. Now, the package has our return address on

,

Cory DoctorowPodcast: Adversarial Interoperability is Judo for Network Effects

In my latest podcast (MP3), I read my essay SAMBA versus SMB: Adversarial Interoperability is Judo for Network Effects, published last week on EFF’s Deeplinks; it’s a further exploration of the idea of “adversarial interoperability” and the role it has played in fighting monopolies and preserving competition, and how we could use it to restore competition today.

In tech, “network effects” can be a powerful force to maintain market dominance: if everyone is using Facebook, then your Facebook replacement doesn’t just have to be better than Facebook, it has to be so much better than Facebook that it’s worth using, even though all the people you want to talk to are still on Facebook. That’s a tall order.

Adversarial interoperability is judo for network effects, using incumbents’ dominance against them. To see how that works, let’s look at a historical example of adversarial interoperability’s role in helping to unseat a monopolist’s dominance.

The first skirmishes of the PC wars were fought with incompatible file formats and even data-storage formats: Apple users couldn’t open files made by Microsoft users, and vice-versa. Even when file formats were (more or less) harmonized, there was still the problem of storage media: the SCSI drive you plugged into your Mac needed a special add-on and flaky driver software to work on your Windows machine; the ZIP cartridge you formatted for your PC wouldn’t play nice with Macs.

MP3

,

Cory DoctorowAppearance on the Jim Rutt Podcast

Jim Rutt — former chairman of the Santa Fe Institute and ex-Network Solutions CEO — just launched his new podcast, and included me in the first season! (MP3) It was a characteristically wide-ranging, interdisciplinary kind of interview, covering competition and adversarial interoperability, technological self-determination and human rights, conspiracy theories and corruption. There’s a full transcript here.

,

Cory DoctorowPodcast: Occupy Gotham

In my latest podcast (MP3), I read my essay Occupy Gotham, published in Detective Comics: 80 Years of Batman, commemorating the 1000th issue of Batman comics. It’s an essay about the serious hard problem of trusting billionaires to solve your problems, given the likelihood that billionaires are the cause of your problems.

A thousand issues have gone by, nearly 80 years have passed, and Batman still hasn’t cleaned up Gotham. If the formal definition of insanity is trying the same thing and expecting a different outcome, then Bruce Wayne belongs in a group therapy session in Arkham Asylum. Seriously, get that guy some Cognitive Behavioral Therapy before he gets into some *serious* trouble.

As Upton Sinclair wrote in his limited run of *Batman: Class War*[1], “It’s impossible to get a man to understand something when his paycheck depends on his not understanding it.”

Gotham is a city riven by inequality. In 1939, that prospect had a very different valence than it has in 2018. Back in 1939, the wealth of the world’s elites had been seriously eroded, first by the Great War, then by the Great Crash and the interwar Great Depression, and what was left of those vast fortunes was being incinerated on the bonfire of WWII. Billionaire plutocrats were a curious relic of a nostalgic time before the intrinsic instability of extreme wealth inequality plunged the world into conflict.

MP3

,

Cory DoctorowI appeared on Nanowrimo’s awesome Write-Minded podcast to talk about Radicalized

It turned out really well!

Today’s dystopian fiction seems to be closer to reality than the dystopian fiction of the past. Brooke and Grant explore this new reality with Cory Doctorow, whose socially conscientious science fiction novels delve into topics of political consequence. From the ways in which anxieties fuel science fiction writers to how fiction has the power to change the way we think and operate in the world, today’s episode emphasizes the importance of dystopian fiction for its capacity to shed light on what is true, and what might happen, ideally, as Cory suggests, so that we might fix things before it’s too late.

,

Cory DoctorowWhere to catch me at San Diego Comic-Con!

I’m headed back to San Diego for Comic-Con next weekend, and you can catch me on Friday, Saturday and Sunday:

Friday, 5PM: Signing in AA04

Saturday, 5PM: Panel: Writing: Craft, Community, and Crossover (with James Killen, Seanan McGuire, Charlie Jane Anders, Annalee Newitz, and Sarah Gailey), Room 23ABC

Sunday, 10AM: Signing and giveaway for Radicalized, Tor Booth, #2701.

I hope to see you there!

,

Sam VargheseThe Rise and Fall of the Tamil Tigers is full of errors

How many mistakes should one accept in a book before it is pulled from sale? In the normal course, when a book is accepted for publication by a recognised publishing company, there are experienced editors who go through the text, correct it and ensure that there are no major bloopers.

Then there are fact-checkers who ensure that what is stated within the book is, at least, mostly aligned with public versions of events from reliable sources.

In the case of The Rise and Fall of the Tamil Tigers, a third-rate book that is being sold by some outlets online, neither of these exercises has been carried out. And it shows.

If the author, Damian Tangram, had voiced his views or even put the entire book online as a free offering, that would be fine. He is entitled to his opinion. But when he is trying to trick people into buying what is a very poor-quality book, then warnings are in order.

Here are just a few of the screw-ups in the first 14 pages (the book is 375 pages!):

In the foreword, the words “Civil War” are capitalised. This is incorrect and would be right only if the civil war were exclusive to Sri Lanka. This is not the case; there are numerous civil wars occurring around the world.

Next, the foreword claims the war started in 1985. This, again, is incorrect. It began in July 1983. The next claim is that this war “had its origins in the post-war political exploitation of socially divisive policies.” Really? Post-war means after the war – this conflict must be the first in the world to begin after it was over!

There is a further line indicating that the author does not know how to measure time: “After spanning three decades…” A decade is 10 years, three decades would be 30 years. The war lasted a little less than 26 years – July 23, 1983 to May 19, 2009.

Again, in the foreword, the author claims that the Liberation Tigers of Tamil Eelam “grew from being a small despot insurgency to the most dangerous and effective terrorist organizations the world has ever seen.” The LTTE was started by Velupillai Pirapaharan in the 1970s. By 1983, it was already a well-organised fighting force. Further, the English is all wonky here, the word should be “organization”, not the plural “organizations”.

And this is just the first paragraph of the book!

The second paragraph of the foreword claims about the year 2006: “Just when things could not be worse Sri Lanka was plunged into all-out war.” The war started much earlier and was in a brief hiatus. The final effort to eliminate the LTTE began on April 25, 2006. And a comma would be handy there.

Then again, the book claims in the foreword that the only person who refused to compromise in the conflict had been Pirapaharan. This is incorrect as the government was also equally stubborn until 2002.

To go on, the foreword says the book gives “an example of how a terrorist organisation like the LTTE can proliferate and spread its murderous ambitions”. The book suffers from numerous generalisations of this kind, all of which are standout examples of malapropism. And one’s ambitions grow, one does not “spread ambitions”.

Again, and we are still in the foreword, the book says the LTTE “was a force that lasted for more than twenty-five years…” Given that it took shape in the 1970s, this is again incorrect.

Next, there is a section titled “About this Book”. Again, misplaced capitalisation of the word “Book”. The author says he visited Sri Lanka for the first time in 1989 soon after he “met and married wife….” Great use of butler English, that. Additionally, he could not have married his wife; the woman in question became his wife only after he married her.

That year, he claims the “most frightening organization” was the JVP or Janata Vimukti Peramuna or People’s Liberation Front. Two years later, when he returned for a visit, the JVP had been defeated but “the enemy to peace was the LTTE”. This is incorrect as the LTTE did not offer any let-up while the JVP was engaging the Sri Lankan army.

Of the Tigers he says, “the power that they had acquired over those short years had turned them into a mythical unstoppable force.” This is incorrect; the Tigers became a force to be reckoned with many years earlier. They did not undergo any major evolution between 1989 and 1991.

The author’s only connection to Sri Lanka is through marrying a Sri Lankan woman. This, plus his visits, he claims give him a “close connection” to the island!

So we go on: “I returned to Sri Lankan several times…” The word is Lanka, not Lankan. More proof of a lack of editing, if any is needed by now.

“Lives were being lost; freedoms restricted and the economy being crushed under a financial burden.” The use of that semi-colon illustrates Tangram’s level of ignorance of English. Factually, this is all stating the bleeding obvious as all these fallouts of the war had begun much earlier.

The author claims that one generation started the war, a second continued to fight and a third was about to grow up and be thrown into a conflict. How three generations can come and go in the space of 26 years is a mystery and more evidence that this man just flings words about and hopes that they make sense.

More in this same section: “To know Sri Lanka without war was once an impossible dream…” Rubbish, I lived in Sri Lanka from 1957 till 1972 and I knew peace most of the time.

Ending this section is another screw-up: “I returned to Sri Lanka in 2012, after the war had ended, to witness the one thing I had not seen in over 25 years: Peace.” Leaving aside the wrong capitalisation of the word “peace”, since the author’s first visit was in 1989, how does 2012 make it “over 25 years”? By any calculation, that comes to 23 years. This is a ruse used throughout the book to give the impression that the author has a long connection to Sri Lanka when in reality he is just an opportunist trying to turn some bogus observations about a conflict he knows nothing about into a cash cow.

And so far I have covered hardly three full pages!!!

Let’s have a brief look at Ch-1 (one presumes that means Chapter 1) which is titled “Understanding Sri Lanka” with a sub-heading “Introduction Understanding Sri Lanka: The impossible puzzle”. (If it is impossible as claimed, how does the author claim he can explain it?)

So we begin: “…there is very little information being proliferated into the general media about the nation of Sri Lanka.” The author obviously does not own a dictionary and is unaware how the word “proliferated” should be used.

There are several strange conglomerations of words which mean nothing; for example, take this: “Without referring to a map most people would struggle to name any other city than Colombo. Even the name of the island may reflect some kind of echo of when it changed from being called Ceylon to when it became Sri Lanka.” Apart from all the missing punctuation, and the mixing up of the order of words, what the hell does this mean? Echo?

On the next page, the book says: “At the bottom corner of India is the small teardrop-shaped island of Sri Lankan.” That sentence could have done without the last “n”. Once again, no editor. Only Tangram the great.

The word Sinhalese is spelt that way; there is nobody who spells it “Singhalese”. But since the author is unable to read Sinhala, the local language, he makes errors of this kind over and over again. Again, common convention for the usage of numbers in print dictates that one to nine be spelt out and any higher number be used as a figure. The author is blissfully unaware of this too.

The percentage of Sinhalese-speakers is given as “about 70%” when the actual figure is 74.9%. And then in another illustration of his sloppiness, the author writes “The next largest groups are the Tamils who make up about 15% of the population.” The Tamils are not a single group, being made up of plantation Tamils who were brought in by the British from India to work in the tea estates (4.2%) and the local Tamils (11.2%) who have been there much longer.

He then refers to a group whom he calls Burgers – which is something sold in a fast-food outlet. The Sri Lankan ethnic group is called Burghers, who are the product of inter-marriages between Sinhalese and Portuguese, British or Dutch invaders. There is a reference made to a group of indigenous people, whom the author calls “Vedthas.” Later, on the same page, he calls these people Veddhas. This is not the first time that it is clear that he could not be bothered to spell-check this bogus tome.

There’s more: the “Singhalese” (the author’s spelling) are claimed to be of “Arian” origin. The word is Aryan. Then there is a claim that the Veddhas are related to the “Australian Indigenous Aborigines”. One has yet to hear of any non-Indigenous Aborigines. Redundant words are one thing at which Tangram excels.

There is reference to some king of Sri Lanka known as King Dutigama. The man’s name was Dutugemunu. But then what’s the difference, eh? We might as well have called him Charlie Chaplin!

Referring to the religious groups in Sri Lanka, Tangram writes: “Hinduism also has a long history in Sri Lanka with Kovils…” The word is temples, unless one is writing in the vernacular. He claims Buddhists make up 80%; the correct figure is 70.2%.

Then referring to the Bo tree under which Gautama Buddha is claimed to have found enlightenment, Tangram claims it is more than 2000 years old and the oldest cultivated tree alive today. He does not know about the Bristlecone pine trees that date back more than 4700 years. Or the redwoods that carbon dating has shown to be more than 3000 years old.

This brings me to page 14 and I have crossed 1500 words! The entire book would probably take me a week to cover. But this number of errors should serve to prove my point: this book should not be sold. It is a fraud on the public.

,

Cory DoctorowSteering with the Windshield Wipers

In my latest podcast (MP3), I read my May Locus column: Steering with the Windshield Wipers. It makes the argument that much of the dysfunction of tech regulation — from botched anti-sex-trafficking laws to the EU’s plan to impose mass surveillance and censorship to root out copyright infringement — are the result of trying to jury-rig tools to fix the problems of monopolies, without using anti-monopoly laws, because they have been systematically gutted for 40 years.

A lack of competition rewards bullies, and bullies have insatiable appetites. If your kid is starving because they keep getting beaten up for their lunch money, you can’t solve the problem by giving them more lunch money – the bullies will take that money too. Likewise: in the wildly unequal Borkean inferno we all inhabit, giving artists more copyright will just enrich the companies that control the markets we sell our works into – the media companies, who will demand that we sign over those rights as a condition of their patronage. Of course, these companies will be subsequently menaced and expropriated by the internet distribution companies. And while the media companies are reluctant to share their bounties with us artists, they reliably expect us to share their pain – a bad quarter often means canceled projects, late payments, and lower advances.

And yet, when a lack of competition creates inequities, we do not, by and large, reach for pro-competitive answers. We are the fallen descendants of a lost civilization, destroyed by Robert Bork in the 1970s, and we have forgotten that once we had a mighty tool for correcting our problems in the form of pro-competitive, antitrust enforcement: the power to block mergers, to break up conglomerates, to regulate anticompetitive conduct in the marketplace.

But just because we know where to find the copyright lever, it doesn’t follow that yanking on it hard enough will make it do the work of antitrust law.

MP3

,

Rondam RamblingsThe Trouble with Many Worlds

Ten years ago I wrote an essay entitled "The Trouble with Shadow Photons" describing a problem with the dramatic narrative of what is commonly called the "many-worlds" interpretation of quantum mechanics (but which was originally and IMHO more appropriately called the "relative state" interpretation) as presented by David Deutsch in his (otherwise excellent) book, "The Fabric of Reality."  At the

,

Sam VargheseWhatever happened to the ABC’s story of the century?

In the first three weeks of June last year, the ABC’s Sarah Ferguson presented a three-part saga on the channel’s Four Corners program, which the ABC claimed was the “story of the century”.

It was a rehashing of all the claims against US President Donald Trump, which the American TV stations had gone over with a fine-toothed comb but which Ferguson seemed convinced still had something hidden for her to uncover.

At the time, a special counsel, former FBI chief Robert Mueller, was conducting an investigation into claims that Trump colluded with Russia to win the presidential election.

Earlier this year, Mueller announced the results of his probe: zilch. Zero. Nada. Nothing. A big cipher.

Given that Ferguson echoed all the same claims by interviewing a number of rather dubious individuals, one would think that it was time for a mea culpa – that is, if one had even a semblance of integrity, a shred of honesty in one’s being.

But Ferguson seems to have disappeared off the face of the earth. The ABC has been silent about it too. Given that she and her entourage spent the better part of six weeks traipsing the streets and corridors of power in the US and the UK, considerable funds would have been spent.

This, by an organisation that is always weeping about its budget cuts. One would think that such a publicly-funded organisation would be a little more circumspect and not allow anyone to indulge in such an exercise of vanity.

If Ferguson had unearthed even one morsel of truth, one titbit of information that the American media had not found, then one would not be writing this. But she did nothing of the sort; she just raked over all the old bones.

One hears Ferguson is now preparing a program on the antics that the government indulged in last year by dumping its leader, Malcolm Turnbull. This issue has also been done to death, and there has already been a two-part investigation by Sky News presenter David Speers, a fine reporter. There has been one book published, by the former political aide Niki Savva, and more are due.

It looks like Ferguson will again be acting in the manner of a dog that returns to its own vomit. She appears to have cultivated considerable skill in this art.

,

Sam VargheseThe Rise and Fall of the Tamil Tigers is a third-rate book. Don’t waste your money buying it

How do you evaluate a book before buying? If one is buying from a traditional bookshop, one can at least scan some pages. The art master in my secondary school told his students of a method he had: read page 15 or 16, then flip to page 150 and read that. If the book interests you, then buy it.

But when it’s online buying, what happens? Not every book you buy is from a known author and many online booksellers do not offer the chance to flip through even a few pages. At times, this ends with the buyer getting a dud.

One book I bought recently proved to be a dud. I am interested in the outcome of the civil war in Sri Lanka, where I grew up. Given that, I picked up the first book about the ending of the war, written in 2011 by Australian Gordon Weiss, a former UN official. This is an excellent account of the whole conflict, one that also gives a considerable portion of the history of the island and the events that led to the rise of tensions between the Sinhalese and the Tamils.

Prior to that, I picked up a number of other books, including the only biography of the Tamil leader, Velupillai Pirapaharan. Many of the books I picked up are written by Indians and thus the standard of English is not as good as that in Weiss’s book. But the material in all books is of a uniformly high standard.

Recently, I bought a book titled The Rise and Fall of the Tamil Tigers that gave its publication date as 2018 and claimed to tell the story of the war in its entirety. The reason I bought it was to see if it bridged the gap between 2011, when Weiss’s book was published, and 2018, when the current book came out.

But it turned out to be a scam. I am not sure why the bookseller, The Book Depository, stocks this volume, given its shocking quality.

The blurb about the book runs thus: “This book offers an accurate and easy to follow explanation of how the Tamil Tigers, who are officially known as the Liberation Tigers of Tamil Eelam (LTTE), was defeated. Who were the major players in this conflict? What were the critical strategic decisions that worked? What were the strategic mistakes and their consequences? What actually happened on the battlefield? How did Sri Lanka become the only nation in modern history to completely defeat a terrorist organisation? The mind-blowing events of the Sri Lankan civil war are documented in this book to show the truth of how the LTTE terrorist organisation was defeated. The defeat of a terrorist organisation on the battlefield was so unprecedented that it has rewritten the narrative in the fight against terrorism.”

Nothing could be further from the truth.

The book is published by the author himself, an Australian named Damian Tangram, who appears to have no connection to Sri Lanka apart from the fact that he is married to a Sri Lankan woman.

It is extremely badly written, has obviously not been edited and has not even been subjected to a spell-checker before being printed. This can be gauged by the fact that the same word is spelt in different ways on the same page.

Capital letters are spewed all over the pages and even an eighth-grade student would not write rubbish of this kind.

In many parts of the book, government propaganda is republished verbatim and it all looks like a cheap attempt to make some money by taking up a subject that would be of interest, and then producing a low-grade tome.

Some of the sources it quotes are highly dubious, one of them being a Singapore-based so-called terrorism expert Rohan Gunaratne who has been unmasked as a fraud on many occasions.

The reactions of the book sellers — I bought it through Abe Books, which groups together a number of sellers from whom one can choose; I chose The Book Depository — were quite disconcerting. When the abysmal quality of the book was brought to their notice, both thought I wanted my money back. I wanted them to remove it from sale so that nobody else would get cheated the way I was.

After some back and forth, and both companies refusing to understand that the book is a fraud, I gave up.

MELong-term Device Use

It seems to me that Android phones have recently passed the stage where hardware advances are well ahead of software bloat. This is the point that desktop PCs passed about 15 years ago and laptops passed about 8 years ago. For just over 15 years I’ve been avoiding buying desktop PCs: the hardware that organisations I work for throw out is good enough that I don’t need to. For the last 8 years I’ve been avoiding buying new laptops, instead buying refurbished or second hand ones which are more than adequate for my needs. Now it seems that Android phones have reached the same stage of development.

3 years ago I purchased my last phone, a Nexus 6P [1]. Then 18 months ago I got a Huawei Mate 9 as a warranty replacement [2] (I had swapped phones with my wife so the phone I was using which broke was less than a year old). The Nexus 6P had been working quite well for me until it stopped booting, but I was happy to have something a little newer and faster to replace it at no extra cost.

Prior to the Nexus 6P I had a Samsung Galaxy Note 3 for 1 year 9 months which was a personal record for owning a phone and not wanting to replace it. I was quite happy with the Note 3 until the day I fell on top of it and cracked the screen (it would have been ok if I had just dropped it). While the Note 3 still has my personal record for continuous phone use, the Nexus 6P/Huawei Mate 9 have the record for going without paying for a new phone.

A few days ago when browsing the Kogan web site I saw a refurbished Mate 10 Pro on sale for about $380. That’s not much money (I usually have spent $500+ on each phone), and while the Mate 9 is still going strong, the Mate 10 is a little faster and has more RAM. The extra RAM is important to me as I have problems with Android killing apps when I don’t want it to. Also the IP67 protection will be a handy feature. So that phone should be delivered to me soon.

Some phones are getting ridiculously expensive nowadays (who wants to walk around with a $1000+ Pixel?) but it seems that the slightly lower end models are more than adequate and the older versions are still good.

Cost Summary

If I can buy a refurbished or old model phone every 2 years for under $400, that will make using a phone cost about $0.50 per day. The Nexus 6P cost me $704 in June 2016, which means that for the past 3 years my phone cost was about $0.62 per day.

It seems that laptops tend to last me about 4 years [3], and I don’t need high-end models (I even used one from a rubbish pile for a while). The last laptops I bought cost me $289 for a Thinkpad X1 Carbon [4] and $306 for the Thinkpad T420 [5]. That makes laptops about $0.20 per day.

In May 2014 I bought a Samsung Galaxy Note 10.1 2014 edition tablet for $579. It is still working very well for me today: apart from only having 32G of internal storage space and an OS update preventing Android apps from writing to the micro SD card (so I have to use USB to copy TV shows onto it), there’s nothing more I need from a tablet. Strangely, I even get good battery life out of it; I can use it for a couple of hours without the battery running out. Battery life isn’t nearly as good as when it was new, but it’s still OK for my needs. As Samsung stopped providing security updates I can’t use the tablet as an SSH client, but now that my primary laptop is a small and light model that’s less of an issue. Currently that tablet has cost me just over $0.30 per day and it’s still working well.

Currently it seems that my hardware expense for the foreseeable future is likely to be about $1 per day: 20 cents for laptop, 30 cents for tablet, and 50 cents for phone. Adding the $20 per month pre-paid plan with Aldi Mobile brings the overall expense to about $1.66 per day.
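For anyone who wants to check the arithmetic, here is a trivial PHP sketch that amortises purchase price over expected years of service; the prices are the ones quoted above, and the lifetimes are my reading of this post rather than exact figures:

$devices = [
	'phone'  => ['price' => 380,       'years' => 2], // refurbished Mate 10 Pro
	'laptop' => ['price' => 289 + 306, 'years' => 8], // X1 Carbon + T420 combined
	'tablet' => ['price' => 579,       'years' => 5], // Galaxy Note 10.1 2014
];
$hardware = 0;
foreach ($devices as $name => $d) {
	$perDay = $d['price'] / ($d['years'] * 365); // amortised cost per day
	$hardware += $perDay;
	printf("%s: $%.2f/day\n", $name, $perDay);
}
$service = 20 * 12 / 365; // $20/month pre-paid plan, per day
printf("hardware $%.2f/day + service $%.2f/day = $%.2f/day\n",
	$hardware, $service, $hardware + $service);

The result lands within a few cents of the rounded figures above.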

Saving Money

A laptop is very important to me, but the amounts of money that I’m spending don’t reflect that. It seems, though, that I don’t have any option for spending more on a laptop (the Thinkpad X1 Carbon I have now is just great and there’s no real option for getting more utility by spending more). I also don’t have any option to spend less on a tablet: 5 years is a great lifetime for a device that is practically impossible to repair (repair would cost a significant portion of the replacement cost).

I hope that the Mate 10 can last at least 2 years which will make it a new record for low cost of ownership of a phone for me. If app vendors can refrain from making their bloated software take 50% more RAM in the next 2 years that should be achievable.

The surprising thing I learned while writing this post is that my mobile phone expense is the largest of all my expenses related to mobile computing. Given that I want to get good reception in remote areas (needs to be Telstra or another company that uses their network) and that I need at least 3GB of data transfer per month it doesn’t seem that I have any options for reducing that cost.

,

Sam VargheseMethinks Israel Folau is acting like a hypocrite

The case of Israel Folau has been a polarising one in Australia with some supporting the rugby union player’s airing of his Christian beliefs and others loudly opposed. In the end, it turns out that Folau may be guilty of one of the sins of which he accuses others: hypocrisy.

Last year, Folau made a post on Instagram saying adulterers, drunkards, fornicators, homosexuals and the like would all go to hell if they did not repent and come to Jesus. In this, he was merely stating what the Bible says about these kinds of people. He was cautioned about such posts by his employer, Rugby Australia. Whether he signed any agreement about not putting up similar posts in the future is unknown.

A second similar post this year resulted in a fairly big outcry among the media and those who champion the gay cause. Folau had a number of meetings with his employers and was finally shown the door. He was on a four-year $4 million contract so he has lost a considerable amount of cash. The Australian team has lost a lot too, as he was by far the best player and the World Cup rugby tournament is in September this year. The main sponsor of the team is Qantas and the chief executive, Alan Joyce, is gay. There have been accusations that Joyce has been a pivotal force in pushing for Folau’s sacking.

Soon after this, Folau announced that he was suing Rugby Australia and sought to raise $3 million for defending himself. His campaign on GoFundMe had reached about $750,000 when it was pulled down by the site. But the Christian lobby started another fund for Folau and it has now raised well beyond a million dollars.

Now Folau has the right to hold his own religious beliefs. He is also free to state them openly. But in this he goes against the very Bible he quotes, for Christians are told to practise their faith quietly, and not in the manner of the scribes and Pharisees of Jesus’ time, people who took great care to show outwardly that they were religious – though in private they were as worldly and non-religious as anyone else. In short, they were hypocrites.

Christians were told to behave in this manner and promised that their God would reward them openly. In other words, a Christian is expected to influence others not by talking loudly and flaunting his/her faith, but by impressing others by one’s behaviour and attitudes. Folau’s flaunting of his faith appears to go against this admonishment.

Then again, Folau’s seeking of money to fund his court case is a very worldly act. First of all, Christians are told not to go to court against their neighbours but rather to settle things peacefully. Even if Folau decided to violate this teaching, he has plenty of properties and did not need to take money from others. If he wanted a court battle, then he could have used his own money. This is un-Christian in the extreme.

Folau’s supporters cite the admonishment by Jesus that his followers should go to all corners of the world and preach the gospel. That is an admonishment given to pastors and leaders of the flock. The rest of us are told to influence others by the way we live.

Folau is the one who set himself up as one who acts according to the Christian faith and left himself open to be judged by the same creed. If all his actions had been in keeping with the faith, then one would have no quarrel with him. But when one chooses Christianity when it is convenient, and goes the way of the world when it suits, then there is only one word to describe it: hypocrisy.

Hypocrites were one category of people who attracted a huge amount of criticism from Jesus Christ during his earthly sojourn. Israel Folau should muse on this.