Planet Russell


Cryptogram: More on Law Enforcement Backdoor Demands

The Carnegie Endowment for International Peace and Princeton University's Center for Information Technology Policy convened an Encryption Working Group to attempt progress on the "going dark" debate. They have released their report: "Moving the Encryption Policy Conversation Forward."

The main contribution seems to be that attempts to backdoor devices like smartphones shouldn't also backdoor communications systems:

Conclusion: There will be no single approach for requests for lawful access that can be applied to every technology or means of communication. More work is necessary, such as that initiated in this paper, to separate the debate into its component parts, examine risks and benefits in greater granularity, and seek better data to inform the debate. Based on our attempt to do this for one particular area, the working group believes that some forms of access to encrypted information, such as access to data at rest on mobile phones, should be further discussed. If we cannot have a constructive dialogue in that easiest of cases, then there is likely none to be had with respect to any of the other areas. Other forms of access to encrypted information, including encrypted data-in-motion, may not offer an achievable balance of risk vs. benefit, and as such are not worth pursuing and should not be the subject of policy changes, at least for now. We believe that to be productive, any approach must separate the issue into its component parts.

I don't believe that backdoor access to encrypted data at rest offers "an achievable balance of risk vs. benefit" either, but I agree that the two aspects should be treated independently.

Cryptogram: On Cybersecurity Insurance

Good paper on cybersecurity insurance: both the history and the promise for the future. From the conclusion:

Policy makers have long held high hopes for cyber insurance as a tool for improving security. Unfortunately, the available evidence so far should give policymakers pause. Cyber insurance appears to be a weak form of governance at present. Insurers writing cyber insurance focus more on organisational procedures than technical controls, rarely include basic security procedures in contracts, and offer discounts that only offer a marginal incentive to invest in security. However, the cost of external response services is covered, which suggests insurers believe ex-post responses to be more effective than ex-ante mitigation. (Alternatively, they can more easily translate the costs associated with ex-post responses into manageable claims.)

The private governance role of cyber insurance is limited by market dynamics. Competitive pressures drive a race-to-the-bottom in risk assessment standards and prevent insurers including security procedures in contracts. Policy interventions, such as minimum risk assessment standards, could solve this collective action problem. Policy-holders and brokers could also drive this change by looking to insurers who conduct rigorous assessments. Doing otherwise ensures adverse selection and moral hazard will increase costs for firms with responsible security postures. Moving toward standardised risk assessment via proposal forms or external scans supports the actuarial base in the long-term. But there is a danger policyholders will succumb to Goodhart's law by internalising these metrics and optimising the metric rather than minimising risk. This is particularly likely given these assessments are constructed by private actors with their own incentives. Search-light effects may drive the scores towards being based on what can be measured, not what is important.

EDITED TO ADD (9/11): BoingBoing post.

Worse Than Failure: CodeSOD: ImAlNumb?

I think it’s fair to say that C, as a language, has never had a particularly great story for working with text. Individual characters are okay, but strings are a nightmare. The need to support unicode has only made that story a little more fraught, especially as older code now suddenly needs to support extended characters. And by “older” I mean, “wchar was added in 1995, which is practically yesterday in C time”.

Lexie inherited some older code. It was not designed to support unicode, which is certainly a problem in 2019, and it’s the problem Lexie was tasked with fixing. But it had an… interesting approach to deciding if a character was alphanumeric.

Now, if we limit ourselves to ASCII, there are a variety of ways we could do this check. We could convert the character to a number and do a simple check: characters 48–57 are numeric, while 65–90 and 97–122 cover the alphabetic characters. But that’s a conditional expression: six comparison operations! So maybe we should be more clever. There is a built-in library function, isalnum, which might be more optimized, and is available on Lexie’s platform. But we’re dedicated to really doing some serious premature optimization, so there has to be a better way.
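For contrast, here is the boring version (a sketch, not from Lexie’s codebase; it assumes plain ASCII input, just as the table below does):

#include <stdbool.h>

/* The six-comparison check described above: digits, then uppercase,
   then lowercase. isalnum() from <ctype.h> would also do the job. */
static bool is_alnum_ascii(unsigned char c)
{
    return (c >= '0' && c <= '9') ||
           (c >= 'A' && c <= 'Z') ||
           (c >= 'a' && c <= 'z');
}

Instead, the inherited code took this approach: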

bool isalnumCache[256] =
{false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, false, false, false, false, false, false,
false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, false, false, false, false, false,
false,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true,  true, true, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false,
false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false, false};

This is a lookup table: convert your character to an integer, then use it to index the array. This is fast. It’s also error prone, and this block does incorrectly identify a non-alphanumeric as an alphanumeric: the fourth row has one true too many, so character 123 (“{”) passes the check. It also 100% fails if you are dealing with wchar_t, which is how Lexie ended up looking at this block in the first place.
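A table like this is easy to sanity-check mechanically. A minimal sketch (assuming the table above is compiled in alongside it) that diffs the table against the C library in the default locale, and would flag the stray true at index 123:

#include <ctype.h>
#include <stdio.h>
#include <stdbool.h>

extern bool isalnumCache[256];  /* the lookup table shown above */

int main(void)
{
    for (int i = 0; i < 256; i++) {
        /* isalnum() accepts values representable as unsigned char */
        bool lib_says = isalnum(i) != 0;
        if (isalnumCache[i] != lib_says)
            printf("mismatch at %d (0x%02x)\n", i, i);
    }
    return 0;
}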


Planet Debian: Benjamin Mako Hill: How Discord moderators build innovative solutions to problems of scale with the past as a guide

Introducing new technology into a workplace is often disruptive, but what if your work was also completely mediated by technology? This is exactly the case for the teams of volunteer moderators who work to regulate content and protect online communities from harm. What happens when the social media platforms these communities rely on change completely? How do moderation teams overcome the challenges caused by new technological environments? How do they do so while managing a “brand new” community with tens of thousands of users?

For a new study that will be published in CSCW in November, we interviewed 14 moderators of 8 “subreddit” communities from the social media aggregation and discussion platform Reddit to answer these questions. We chose these communities because each had recently adopted the chat platform Discord to support real-time conversation in their community. This expansion into Discord introduced a range of challenges—especially for the moderation teams of large communities.

We found that moderation teams of large communities improvised their own creative solutions to challenges they faced by building bots on top of Discord’s API. This was not too shocking given that APIs and bots are frequently cited as tools that allow innovation and experimentation when scaling up digital work. What did surprise us, however, was how important moderators’ past experiences were in guiding the way they used bots. In the largest communities that faced the biggest challenges, moderators relied on bots to reproduce the tools they had used on Reddit. The moderators would often go so far as to give their bots the names of moderator tools available on Reddit. Our findings suggest that support for user-driven innovation is important not only in that it allows users to explore new technological possibilities but also in that it allows users to mine their past experiences to introduce old systems into new environments.

What Challenges Emerged in Discord?

Discord’s text channels allow for more natural, in-the-moment conversations compared to Reddit. This social aspect also made moderation work much more difficult in Discord. One moderator explained:

“It’s kind of rough because if you miss it, it’s really hard to go back to something that happened eight hours ago and the conversation moved on and be like ‘hey, don’t do that.’ ”

Moderators we spoke to found that the work of managing their communities was made even more difficult by their community’s size:

“On the day to day of running 65,000 people, it’s literally like running a small city…We have people that are actively online and chatting that are larger than a city…So it’s like, that’s a lot to actually keep track of and run and manage.”

The moderators of large communities repeatedly told us that the tools provided to moderators on Discord were insufficient. For example, they pointed out that tools like Discord’s Audit Log were inadequate for keeping track of the tens of thousands of members of their communities. Discord also lacks automated moderation tools like Reddit’s Automoderator and Modmail, leaving moderators on Discord with few tools to scale their work and manage communications with community members.

How Did Moderation Teams Overcome These Challenges?

The moderation teams we talked with adapted to these challenges through innovative uses of Discord’s API toolkit. Like many social media platforms, Discord offers a public API where users can develop apps that interact with the platform through a Discord “bot.” We found that these bots play a critical role in helping moderation teams manage Discord communities with large populations.

Guided by their experience with tools like Automoderator on Reddit, moderators working on Discord built bots with similar functionality to solve the problems associated with scaled content and Discord’s fast-paced chat affordances. These bots would search for regular expressions and URLs that go against the community’s rules:

“It makes it so that rather than having to watch every single channel all of the time for this sort of thing or rely on users to tell us when someone is basically running amuck, posting derogatory terms and terrible things that Discord wouldn’t catch itself…so it makes it that we don’t have to watch every channel.”

Bots were also used to replace Discord’s Audit Log feature with what moderators often referred to as “Mod logs”—another term borrowed from Reddit. Moderators send a command to a bot, like “!warn username”, to record that a member of the community has been warned for breaking a rule; the bot automatically stores this information in a private text channel in Discord. This helps organize information about community members, and it can be instantly recalled with another command to the bot to help inform future moderation actions.

Finally, moderators also used Discord’s API to develop bots that functioned virtually identically to Reddit’s Modmail tool. Moderators are limited in their availability to answer questions from members of their community, but tools like “Modmail” help moderation teams manage this problem by mediating communication with community members through a bot:

“So instead of having somebody DM a moderator specifically and then having to talk…indirectly with the team, a [text] channel is made for that specific question and everybody can see that and comment on that. And then whoever’s online responds to the community member through the bot, but everybody else is able to see what is being responded.”

The tools created with Discord’s API — customizable automated content moderation, Mod logs, and a Modmail system — all resembled moderation tools on Reddit. They even bear their names! Over and over, we found that moderation teams created and used bots to transform aspects of Discord (turning text channels into Mod logs and Modmail, for instance) so that they resembled the same tools the teams were using to moderate their communities on Reddit.

What Does This Mean for Online Communities?

We think that the experience of the moderators we interviewed points to a potentially important, overlooked source of value for groups navigating technological change: the combination of users’ past experience and their ability to redesign and reconfigure their technological environments. Our work suggests that the value of innovation platforms like APIs and bots is not only that they allow the discovery of “new” things. Their value also flows from the fact that they allow the re-creation of the things that communities already know can solve their problems and that they already know how to use.


Both this blog post and the paper it describes are collaborative work by Charles Kiene, Jialun “Aaron” Jiang, and Benjamin Mako Hill. For more details, check out the full 23-page paper. The work will be presented in Austin, Texas at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW’19) in November 2019. The work was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). If you have questions or comments about this study, contact Charles Kiene at ckiene [at] uw [dot] edu.


Planet Debian: Markus Koschany: My Free Software Activities in August 2019

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Misc

  • I fixed two minor CVEs in binaryen, a compiler and toolchain infrastructure library for WebAssembly, by packaging the latest upstream release.

Debian LTS

This was my 42nd month as a paid contributor and I have been paid to work 21.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 12.08.2019 until 18.08.2019 and from 09.09.2019 until 10.09.2019 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in kde4libs, apache2, nodejs-mysql, pdfresurrect, nginx, mongodb, nova, radare2, flask, bundler, giflib, ansible, zabbix, salt, imapfilter, opensc and sqlite3.
  • DLA-1886-2. Issued a regression update for openjdk-7. The regression was caused by the removal of several classes in rt.jar by upstream. Since Debian never shipped the SunEC security provider, SSL connections based on elliptic-curve algorithms could no longer be established. The problem was solved by building sunec.jar and its native library libsunec.so from source. An update of the nss source package was required too, which resolved a five-year-old bug (#750400).
  • DLA-1900-1. Issued a security update for apache2 fixing two CVEs; three more CVEs did not affect the version in Jessie.
  • DLA-1914-1. Issued a security update for icedtea-web fixing three CVEs.
  • I have been working on a backport of opensc, a set of libraries and utilities to access smart cards that support cryptographic operations, from Stretch, which will fix more than a dozen CVEs.

ELTS

Extended Long Term Support (ELTS) is a project led by Freexian to further extend the lifetime of Debian releases. It is not an official Debian project, but all Debian users benefit from it without cost. The current ELTS release is Debian 7 “Wheezy”. This was my fifteenth month; I was assigned 15 hours of ELTS work, of which I used 10.

  • I was in charge of our ELTS frontdesk from 26.08.2019 until 01.09.2019 and I marked CVEs in dovecot, libcommons-compress-java, clamav, ghostscript and gosa as end-of-life because security support for those packages has ended in Wheezy. There were no new issues for supported packages. All in all this was a rather unspectacular week.
  • ELA-156-1. Issued a security update for linux fixing nine CVEs.
  • ELA-154-2. Issued a regression update for openjdk-7 and nss because the removed classes in rt.jar caused the same issues in Wheezy too.

Thanks for reading and see you next time.

Krebs on Security: Patch Tuesday, September 2019 Edition

Microsoft today issued security updates to plug some 80 security holes in various flavors of its Windows operating systems and related software. The software giant assigned a “critical” rating to almost a quarter of those vulnerabilities, meaning they could be used by malware or miscreants to hijack vulnerable systems with little or no interaction on the part of the user.

Two of the bugs quashed in this month’s patch batch (CVE-2019-1214 and CVE-2019-1215) involve vulnerabilities in all supported versions of Windows that have already been exploited in the wild. Both are known as “privilege escalation” flaws in that they allow an attacker to assume the all-powerful administrator status on a targeted system. Exploits for these types of weaknesses are often deployed along with other attacks that don’t require administrative rights.

September also marks the fourth time this year Microsoft has fixed critical bugs in its Remote Desktop Protocol (RDP) feature, with four critical flaws being patched in the service. According to security vendor Qualys, these Remote Desktop flaws were discovered in a code review by Microsoft, and in order to exploit them an attacker would have to trick a user into connecting to a malicious or hacked RDP server.

Microsoft also fixed another critical vulnerability in the way Windows handles link files ending in “.lnk” that could be used to launch malware on a vulnerable system if a user were to open a removable drive or access a shared folder with a booby-trapped .lnk file on it.

Shortcut files — or those ending in the “.lnk” extension — are Windows files that link easy-to-recognize icons to specific executable programs, and are typically placed on the user’s Desktop or Start Menu. It’s perhaps worth noting that poisoned .lnk files were one of the four known exploits bundled with Stuxnet, a multi-million dollar cyber weapon that American and Israeli intelligence services used to derail Iran’s nuclear enrichment plans roughly a decade ago.

In last month’s Microsoft patch dispatch, I ruefully lamented the utter hose job inflicted on my Windows 10 system by the July round of security updates from Redmond. Many readers responded by saying that one or another of the updates released by Microsoft in August similarly caused reboot loops or issues with Windows repeatedly crashing.

As there do not appear to be any patch-now-or-be-compromised-tomorrow flaws in the September patch rollup, it’s probably safe to say most Windows end-users would benefit from waiting a few days to apply these fixes. 

Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

The trouble is, Windows 10 by default will install patches and reboot your computer whenever it likes. Here’s a tutorial on how to undo that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Most importantly, please have some kind of system for backing up your files before applying any updates. You can use third-party software to do this, or just rely on the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule.

Finally, Adobe fixed two critical bugs in its Flash Player browser plugin, which is bundled in Microsoft’s IE/Edge and Chrome (although now hobbled by default in Chrome). Firefox forces users with the Flash add-on installed to click in order to play Flash content; instructions for disabling or removing Flash from Firefox are here. Adobe will stop supporting Flash at the end of 2020.

As always, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Cory Doctorow: Charles de Lint on Radicalized

I’ve been a Charles de Lint fan since I was a kid (see photographic evidence, above, of a 13-year-old me attending one of Charles’s signings at Bakka Books in 1984!), and so I was absolutely delighted to read his kind words in his books column in Fantasy and Science Fiction for my latest book, Radicalized. This book has received a lot of critical acclaim (“among my favorite things I’ve read so far this year”), but to get such a positive notice from Charles is wonderful on a whole different level.

The stories, like “The Masque of the Red Death,” are all set in a very near future. They tackle immigration and poverty, police corruption and brutality, the U.S. health care system and the big pharma companies. None of this is particularly cheerful fodder. The difference is that each of the other three stories give us characters we can really care about, and allow for at least the presence of some hopefulness.

“Unauthorized Bread” takes something we already have and projects it into the future. You’ve heard of Juicero? It’s a Wi-Fi juicer that only lets you use the proprietary pre-chopped produce packs that you have to buy from the company. Produce you already have at home? It doesn’t work because it doesn’t carry the required codes that will let the machine do its work.

In the story, a young woman named Salima discovers that her toaster won’t work, so she goes through the usual steps one does when electronics stop working. Unplug. Reset to factory settings. Finally…

“There was a touchscreen option on the toaster to call support but that wasn’t working, so she used the fridge to look up the number and call it.”

I loved that line.

Books To Look For [Charles de Lint/F&SF]

Planet Debian: Erich Schubert: Altmetrics of a Retraction Notice

As pointed out by RetractionWatch, AltMetrics even tracks the metrics of retraction notices.

This retraction notice has an AltMetric score of 9 as I write, and it will grow with every mention on blogs (such as this) and Twitter. Even worse, just one blog post and one tweet by Retraction Watch were enough to put the retraction notice “In the top 25% of all research outputs”.

In my opinion, this shows how unreliable these altmetrics are. They are based on the false assumption that Twitter and blogs are central to (or at least representative of) academic importance and attention. But given the very low usage rates of these media by academics, this does not appear to work well, except for a few high-profile papers.

Existing citation indexes, with all their drawbacks, may still be more useful.

Planet Debian: Jonathan McDowell: Making xinput set-button-map permanent

Since 2006 I’ve been buying a Logitech Trackman Marble (or, as Amazon calls it, a USB Marble Mouse) for both my home and work setups (they don’t die, I just seem to lose them somehow). It’s got a solid feel to it, helps me avoid RSI twinges, and when I’m thinking I can take the ball out and play with it. It has 4 buttons, but I find the small one on the right inconvenient to use so I treat it as a 3 button device (the lack of scroll wheel functionality doesn’t generally annoy me). Problem is, the small leftmost button defaults to “Back” rather than “Middle button”. You can fix this with xinput:

xinput set-button-map "Logitech USB Trackball" 1 8 3 4 5 6 7 2 9

but remembering to do that every boot is annoying. I could put it in a script, but a better approach is to drop the following in /usr/share/X11/xorg.conf.d/50-marblemouse.conf (the fact it’s in /usr/share instead of /etc or ~ is what meant it took me so long to figure out how I’d done it on my laptop for my new machine):

Section "InputClass"
    Identifier      "Marble Mouse"
    MatchProduct    "Logitech USB Trackball"
    MatchIsPointer  "on"
    MatchDevicePath "/dev/input/event*"
    Driver          "evdev"
    Option          "SendCoreEvents" "true"

    #  Physical buttons come from the mouse as:
    #     Big:   1 3
    #     Small: 8 9
    #
    # This makes left small button (8) into the middle, and puts
    #  scrolling on the right small button (9).
    #
    Option "Buttons"            "9"
    Option "ButtonMapping"      "1 8 3 4 5 6 7 2 9"
    Option "EmulateWheel"       "true"
    Option "EmulateWheelButton" "9"

EndSection
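Once X has been restarted with that snippet in place, the remapping can be confirmed with xinput’s read-only counterpart of set-button-map (assuming the device name matches):

xinput get-button-map "Logitech USB Trackball"
# should print: 1 8 3 4 5 6 7 2 9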

This post exists solely for the purpose of reminding future me how I did this on my Debian setup (given that it’s taken me way too long to figure out how I did it 2+ years ago) and apparently original credit goes to Ubuntu for their Logitech Marblemouse USB page.

Worse Than Failure: Death by Consumption


The task was simple: change an AMQ consumer to insert data into a new Oracle database instead of an old MS-SQL database. It sounded like the perfect task for the new intern, Rodger; Rodger was fresh out of a boot camp and ready for the real world, if he could only get a little experience under his belt. The kid was as bright as they come, but boot camp only does so much, after all.

But there are always complications. The existing service was installed on the old app servers, which weren't set up to work with the new corporate app deployment tool. The fix? Uninstall the service on the old app servers and install it on the new ones. Okay, simple enough, if not well suited to the intern.

Rodger got permissions to set up the service on his local machine so he could test his install scripts, and a senior engineer got an uninstall script working as well, so they could seamlessly switch over to the new machines. They flipped the service; deployment day came, and everything went smoothly. The business kicked off their process, and the consumer service picked up their message and inserted the data correctly into the new database.

The next week, the business kicked off their process again. After the weekend, the owners of the old database realized that the data was inserted into the old database and not the new database. They promptly asked how this had happened. Rodger and his senior engineer friend checked the queue; it correctly had two consumers set up, pointing at the new database. Just to be sure, they also checked the old servers to make sure the service was correctly uninstalled and removed by tech services. All clear.

Hours later, the senior engineer refreshed the queue monitor and saw the queue now had three consumers despite the new setup having only two servers. But how? They checked all three servers—two new and one old—and found no sign of a rogue process.

By that point, Rodger was online for his shift, so the senior engineer headed over to talk to him. "Say, Rodger, any chance one of your installs duplicated itself or inserted itself twice into the consumer list?"

"No way!" Rodger replied. "Here, look, you can see my script, I'll run it again locally to show you."

Running it locally ... with dawning horror, the senior engineer realized what had happened. Rodger had the install script, but not the uninstall—meaning he had a copy still running on his local developer laptop, connected to the production queue, but with the old config for some reason. Every time he turned on his computer, hey presto, the service started up.

The moral of the story: always give the intern the destructive task, not the constructive one. That can't go wrong, right?


Cory Doctorow: Podcast: DRM Broke Its Promise

In my latest podcast (MP3), I read my new Locus column, DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the customers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, nothing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any longer, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM servers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.

MP3


Planet Debian: Iain R. Learmonth: Spoofing commits to repositories on GitHub

The following has already been reported to GitHub via HackerOne. Someone from GitHub has closed the report as “informative” but told me that it’s a known low-risk issue. As such, while they haven’t explicitly said so, I figure they don’t mind me blogging about it.

Check out this commit in torvalds’ linux.git on GitHub. In case this is fixed, here’s a screenshot of what I see when I look at this link:

GitHub page showing a commit in torvalds/linux with the commit message add super evil code

How did this get past review? It didn’t. You can spoof commits in any repo on GitHub due to the way they handle forks of repositories internally. Instead of copying repositories when forks occur, the objects in the git repository are shared and only the refs are stored per-repository. (GitHub tell me that private repositories are handled differently to avoid private objects leaking out this way. I didn’t verify this but I have no reason to suspect it is not true.)

To reproduce this:

  1. Fork a repository
  2. Push a commit to your fork
  3. Put your commit ref on the end of:
https://github.com/[parent]/[repo]/commit/

That’s all there is to it. You can also add .diff or .patch to the end of the URL and those URLs work too, in the namespace of the parent.

The situation that worries me relates to distribution packaging. Debian has a policy that deltas to packages in the stable repository should be as small as possible, targeting fixes by backporting patches from newer releases.

If you get a bug report on your Debian package with a link to a commit on GitHub, you had better double check that this commit really did come from the upstream author and hasn’t been spoofed in this way. Even if it shows it was authored by the upstream’s GitHub account or email address, this still isn’t proof because this is easily spoofed in git too.

The best defence against being caught out by this is probably signed commits, but if the upstream is not doing that, you can clone the repository from GitHub and check to see that the commit is on a branch that exists in the upstream repository. If the commit is in another fork, the upstream repo won’t have a ref for a branch that contains that commit.
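One way to run that check (commands sketched here; substitute the real upstream URL and the commit hash from the bug report):

git clone https://github.com/[parent]/[repo].git
cd [repo]
git branch -r --contains [commit]
# No output means no upstream branch contains the commit,
# i.e. it only exists in some fork sharing the object store.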

Krebs on Security: Secret Service Investigates Breach at U.S. Govt IT Contractor

The U.S. Secret Service is investigating a breach at a Virginia-based government technology contractor that saw access to several of its systems put up for sale in the cybercrime underground, KrebsOnSecurity has learned. The contractor claims the access being auctioned off was to old test systems that do not have direct connections to its government partner networks.

In mid-August, a member of a popular Russian-language cybercrime forum offered to sell access to the internal network of a U.S. government IT contractor that does business with more than 20 federal agencies, including several branches of the military. The seller bragged that he had access to email correspondence and credentials needed to view databases of the client agencies, and set the opening price at six bitcoins (~USD $60,000).

A review of the screenshots posted to the cybercrime forum as evidence of the unauthorized access revealed several Internet addresses tied to systems at the U.S. Department of Transportation, the National Institutes of Health (NIH), and U.S. Citizenship and Immigration Services (USCIS), a component of the U.S. Department of Homeland Security that manages the nation’s naturalization and immigration system.

Other domains and Internet addresses included in those screenshots pointed to Miracle Systems LLC, an Arlington, Va. based IT contractor that states on its site that it serves 20+ federal agencies as a prime contractor, including the aforementioned agencies.

In an interview with KrebsOnSecurity, Miracle Systems CEO Sandesh Sharda confirmed that the auction concerned credentials and databases managed by his company, and that an investigating agent from the Secret Service was in his firm’s offices at that very moment looking into the matter.

But he maintained that the purloined data shown in the screenshots was years-old and mapped only to internal test systems that were never connected to its government agency clients.

“The Secret Service came to us and said they’re looking into the issue,” Sharda said. “But it was all old stuff [that was] in our own internal test environment, and it is no longer valid.”

Still, Sharda did acknowledge information shared by Wisconsin-based security firm Hold Security, which alerted KrebsOnSecurity to this incident, indicating that at least eight of its internal systems had been compromised on three separate occasions between November 2018 and July 2019 by Emotet, a malware strain usually distributed via malware-laced email attachments that typically is used to deploy other malicious software.

The Department of Homeland Security did not respond to requests for comment, nor did the Department of Transportation. A spokesperson for the NIH said the agency had investigated the activity and found it was not compromised by the incident.

“As is the case for all agencies of the Federal Government, the NIH is constantly under threat of cyber-attack,” NIH spokesperson Julius Patterson said. “The NIH has a comprehensive security program that is continuously monitoring and responding to security events, and cyber-related incidents are reported to the Department of Homeland Security through the HHS Computer Security Incident Response Center.”

One of several screenshots offered by the dark web seller as proof of access to a federal IT contractor later identified as Arlington, Va. based Miracle Systems. Image: Hold Security.

The dust-up involving Miracle Systems comes amid much hand-wringing among U.S. federal agencies about how best to beef up and ensure security at a slew of private companies that manage federal IT contracts and handle government data.

For years, federal agencies had few options to hold private contractors to the same security standards to which they must adhere — beyond perhaps restricting how federal dollars are spent. But recent updates to federal acquisition regulations allow agencies to extend those same rules to vendors, enforce specific security requirements, and even kill contracts that are found to be in violation of specific security clauses.

In July, DHS’s Customs and Border Protection (CBP) suspended all federal contracts with Perceptics, a contractor that sells license-plate scanners and other border control equipment, after data collected by the company was made available for download on the dark web. CBP later said the breach was the result of a federal contractor copying data onto its corporate network, which was subsequently compromised.

For its part, the Department of Defense recently issued long-awaited cybersecurity standards for contractors who work with the Pentagon’s sensitive data.

“This problem is not necessarily a tier-one supply level,” DOD Chief Information Officer Dana Deasy told the Senate Armed Services Committee earlier this year. “It’s down when you get to the tier-three and the tier-four” subcontractors.

Planet Debian: Ben Hutchings: Distribution kernels at Linux Plumbers Conference 2019

I'm attending the Linux Plumbers Conference in Lisbon from Monday to Wednesday this week. This morning I followed the "Distribution kernels" track, organised by Laura Abbott.

I took notes, included below, mostly with a view to what could be relevant to Debian. Other people took notes in Etherpad. There should also be video recordings available at some point.

Upstream 1st: Tools and workflows for multi kernel version juggling of short term fixes, long term support, board enablement and features with the upstream kernel

Speaker: Bruce Ashfield, working on Yocto at Xilinx.

Details: https://linuxplumbersconf.org/event/4/contributions/467/

Yocto's kernel build recipes need to support multiple active kernel versions (3+ supported streams), multiple architectures, and many different boards. Many patches are required for hardware and other feature support including -rt and aufs.

Goals for maintenance:

  • Changes w.r.t. upstream are visible as discrete patches, so rebased rather than merged
  • Common feature set and configuration
  • Different feature enablements
  • Use as few custom tools as possible

Other distributions have similar goals but very few tools in common. So there is a lot of duplicated effort.

Supporting developers, distro builds and end users is challenging. E.g. developers complained about Yocto having separate git repos for different kernel versions, as this led to them needing more disk space.

Yocto solution:

  • Config fragments, patch tracking repo, generated tree(s)
  • Branched repository with all patches applied
  • Custom change management tools

Using Yocto to build a distro and maintain a kernel tree

Speaker: Senthil Rajaram & Anatoly ? from Microsoft.

Details: https://linuxplumbersconf.org/event/4/contributions/469/

Microsoft chose Yocto as build tool for maintaining Linux distros for different internal customers. Wanted to use a single kernel branch for different products but it was difficult to support all hardware this way.

Maintaining config fragments and a sensible inheritance tree is difficult (?). It might be helpful to put config fragments upstream.

Laura Abbott said that the upstream kconfig system had some support for fragments now, and asked what sort of config fragments would be useful. There seemed to be consensus on adding fragments for specific applications and use cases like "what Docker needs".
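For illustration, a kconfig fragment is just a list of options; a hypothetical "what Docker needs" fragment might look like this (option list illustrative, not exhaustive), merged onto a base configuration with the kernel's existing scripts/kconfig/merge_config.sh:

# docker.config -- hypothetical application fragment
CONFIG_NAMESPACES=y
CONFIG_CGROUPS=y
CONFIG_MEMCG=y
CONFIG_VETH=y
CONFIG_BRIDGE=y
CONFIG_OVERLAY_FS=y

# applied with: scripts/kconfig/merge_config.sh .config docker.config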

Kernel build should be decoupled from image build, to reduce unnecessary rebuilding.

Initramfs is unpacked from cpio, which doesn't support SELinux. So they build an initramfs into the kernel, and add a separate initramfs containing a squashfs image which the initramfs code will switch to.

Making it easier for distros to package kernel source

Speaker: Don Zickus, working on RHEL at Red Hat.

Details: https://linuxplumbersconf.org/event/4/contributions/466/

Fedora/RHEL approach:

  • Makefile includes Makefile.distro
  • Other distro stuff goes under distro sub-directory (merge or copy)
  • Add targets like fedora-configs, fedora-srpm

Lots of discussion about whether config can be shared upstream, but no agreement on that.

Kyle McMartin(?): Everyone does the hierarchical config layout - like generic, x86, x86-64 - can we at least put this upstream?

Monitoring and Stabilizing the In-Kernel ABI

Speaker: Matthias Männich, working on Android kernel at Google.

Details: https://linuxplumbersconf.org/event/4/contributions/468/

Why does Android need it?

  • Decouple kernel vs module development
  • Provide single ABI/API for vendor modules
  • Reduce fragmentation (multiple kernel versions for same Android version; one kernel per device)

Project Treble made most of Android user-space independent of device. Now they want to make the kernel and in-tree modules independent too. For each kernel version and architecture there should be a single ABI. Currently they accept one ABI bump per year. Requires single kernel configuration and toolchain. (Vendors would still be allowed to change configuration so long as it didn't change ABI - presumably to enable additional drivers.)

ABI stability is scoped - i.e. they include/exclude which symbols need to be stable.

ABI is compared using libabigail, not genksyms. (Looks like they were using it for libraries already, so now using it for kernel too.)

Q: How can we ignore compatible struct extensions with libabigail?

A: (from Dodji Seketeli, main author) You can add specific "suppressions" for such additions.
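For reference, libabigail suppressions are INI-style stanzas; a sketch of such a rule (the struct name is hypothetical) that tolerates members being appended at the end of a type:

[suppress_type]
  # hypothetical vendor-private struct; allow growth at the end only
  name = vendor_priv_data
  has_data_member_inserted_at = end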

KernelCI applied to distributions

Speaker: Guillaume Tucker from Collabora.

Details: https://linuxplumbersconf.org/event/4/contributions/470/

Can KernelCI be used to build distro kernels?

KernelCI currently builds arbitrary branch with in-tree defconfig or small config fragment.

Improvements needed:

  • Preparation steps to apply patches, generate config
  • Package result
  • Track OS image version that kernel should be installed in

Some in audience questioned whether building a package was necessary.

Possible further improvements:

  • Enable testing based on user-space changes
  • Product-oriented features, like running installer

Should KernelCI be used to build distro kernels?

Seems like a pretty close match. Adding support for different use-cases is healthy for KernelCI project. It will help distro kernels stay close to upstream, and distro vendors will then want to contribute to KernelCI.

Discussion

Someone pointed out that this is not only useful for distributions. Distro kernels are sometimes used in embedded systems, and the system builders also want to check for regressions on their specific hardware.

Q: (from Takashi Iwai) How long does testing typically take? SUSE's full automated tests take ~1 week.

A: A few hours to build, depending on system load, and up to 12 hours to complete boot tests.

Automatically testing distribution kernel packages

Speaker: Alice Ferrazzi of Gentoo.

Details: https://linuxplumbersconf.org/event/4/contributions/471/

Gentoo wants to provide safe, tested kernel packages. Currently testing gentoo-sources and derived packages. gentoo-sources combines upstream kernel source and "genpatches", which contains patches for bug fixes and target-specific features.

Testing multiple kernel configurations - allyesconfig, defconfig, other reasonable configurations. Building with different toolchains.

Tests are implemented using buildbot. Kernel is installed on top of a Gentoo image and then booted in QEMU.

Generalising for discussion:

  • Jenkins vs buildbot vs other
  • Beyond boot testing, like LTP and kselftest
  • LAVA integration
  • Supporting other configurations
  • Any other Gentoo or meta-distro topic

Don Zickus talked briefly about Red Hat's experience. They eventually settled on Gitlab CI for RHEL.

Some discussion of what test suites to run, and whether they are reliable. Varying opinions on LTP.

There is some useful scripting for different test suites at https://github.com/linaro/test-definitions.

Tim Bird talked about his experience testing with Fuego. A lot of the test definitions there aren't reusable. kselftest currently is hard to integrate because tests are supposed to follow TAP13 protocol for reporting but not all of them do!

Distros and Syzkaller - Why bother?

Speaker: George Kennedy, working on virtualisation at Oracle.

Details: https://linuxplumbersconf.org/event/4/contributions/473/

Which distros are using syzkaller? Apparently Google uses it for Android, ChromeOS, and internal kernels.

Oracle is using syzkaller as part of CI for Oracle Linux. "syz-manager" schedules jobs on dedicated servers. There is a cron job that automatically creates bug reports based on crashes triggered by syzkaller.

Google's syzbot currently runs syzkaller on GCE. Planning to also run on QEMU with a wider range of emulated devices.

How to make syzkaller part of distro release process? Need to rebuild the distro kernel with config changes to make syzkaller work better (KASAN, KCOV, etc.) and then install kernel in test VM image.
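The config changes mentioned amount to a fragment along these lines (drawn from syzkaller's setup documentation; the exact set varies by kernel version):

# coverage collection and bug detection for syzkaller
CONFIG_KCOV=y
CONFIG_KCOV_INSTRUMENT_ALL=y
CONFIG_KASAN=y
CONFIG_KASAN_INLINE=y
CONFIG_DEBUG_INFO=y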

How to correlate crashes detected on distro kernel with those known and fixed upstream?

Example of benefit: syzkaller found regression in rds_sendmsg, fixed upstream and backported into the distro, but then regressed in Oracle Linux. It turned out that patches to upgrade rds had undone the fix.

syzkaller can generate test cases that fail to build on old kernel versions due to symbols missing from UAPI headers. How to avoid this?

Q: How often does this catch bugs in the distro kernel?

A: It doesn't often catch new bugs but does catch missing fixes and regressions.

Q: Is anyone checking the syzkaller test cases against backported fixes?

A: Yes [but it wasn't clear who or when]

Google has public database of reproducers for all the crashes found by syzbot.

Wish list:

  • Syzkaller repo tag indicating which version is suitable for a given kernel version's UAPI
  • tarball of syzbot reproducers

Other possible types of fuzzing (mostly concentrated on KVM):

  • They fuzz MSRs, control & debug regs with "nano-VM"
  • Missing QEMU and PCI fuzzing
  • Intel and AMD virtualisation work differently, and AMD may not be covered well
  • Missing support for other architectures than x86

Cryptogram: NotPetya

Worse Than Failure: CodeSOD: Making a Nest

Tiffany started the code review with an apology. "I only did this to stay in style with the existing code, because it's either that or we rewrite the whole thing from scratch."

Jim J, who was running the code review, nodded. Before Tiffany, this application had been designed from the ground up by Armando. Armando had gone to a tech conference, learned about F# and how all those exciting functional features were available in C#, and returned jabbering about “immutable data” and “functors” and “metaprogramming”, having decided that he was now a functional programmer who just happened to work in C#.

Some struggling object-oriented developers use dictionaries for everything. As a struggling functional programmer, Armando used tuples for everything. And these tuples would get deeply nested. Sometimes, you needed to flatten them back out.

Tiffany had contributed this method to do that:

public static Result<Tuple<T1, T2, T3, T4, T5>> FlatternTupleResult<T1, T2, T3, T4, T5>(
    Result<Tuple<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>, T5>> tuple)
{
    return tuple.Map(x => new Tuple<T1, T2, T3, T4, T5>(
        x.Item1.Item1.Item1.Item1,
        x.Item1.Item1.Item1.Item2,
        x.Item1.Item1.Item2,
        x.Item1.Item2,
        x.Item2));
}

It's safe to say that deeply nested generics are a super clear code smell, and this line: Result<Tuple<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>, T5>> tuple downright reeks. Tuples in tuples in tuples.

Tiffany cringed at the code she had written, but this method lived in the TaskResultHelper class, and lived alongside methods with these signatures:

public static Result<Tuple<T1, T2, T3, T4>> FlatternTupleResult<T1, T2, T3, T4>(Result<Tuple<Tuple<Tuple<T1, T2>, T3>, T4>> tuple)
public static Result<Tuple<T1, T2, T3>> FlatternTupleResult<T1, T2, T3>(Result<Tuple<Tuple<T1, T2>, T3>> tuple)

"This does fit in with the way the application currently works," Jim admitted. "I'm sorry."


Cory Doctorow: Come see me in Santa Cruz, San Francisco, Toronto and Maine!

I’m about to leave for a couple of weeks’ worth of lectures, public events and teaching, and you can catch me in many places: Santa Cruz (in conversation with XKCD’s Randall Munroe); San Francisco (for EFF’s Pioneer Awards); Toronto (for Word on the Street, Seeding Utopias and Resisting Dystopias and 6 Degrees); Newry, ME (Maine Library Association) and Portland, ME (in conversation with James Patrick Kelly).

Here’s the full itinerary:

Santa Cruz, September 11, 7PM: Bookshop Santa Cruz Presents an Evening with Randall Munroe, Santa Cruz Bible Church, 440 Frederick St, Santa Cruz, CA 95062

San Francisco, September 12, 6PM: EFF Pioneer Awards, with Adam Savage, William Gibson, danah boyd, and Oakland Privacy; Delancey Street Town Hall, 600 Embarcadero St., San Francisco, California, 94107

Houston and beyond, September 13-22: The Writing Excuses Cruise (sorry, sold out!)

Toronto, September 22: Word on the Street:

Toronto, September 23, 6PM-8PM: Cory Doctorow in Discussion: Seeding Utopias & Resisting Dystopias , with Jim Munroe, Madeline Ashby and Emily Macrae; Oakwood Village Library & Arts Centre, 341 Oakwood Avenue, Toronto, ON M6E 2W1

Toronto, September 24: 360: How to Make Sense at the 6 Degrees Conference, with Aude Favre, Ryan McMahon and Nanjala Nyabola, Art Gallery of Ontario.

Newry, ME, September 30: Keynote for the Maine Library Association Annual Conference, Sunday River Resort, Newry, ME

Portland, ME, September 30, 6:30PM-8PM: In Conversation With James Patrick Kelly, Main Library, Rines Auditorium.

I hope you can make it!

,

Sam Varghese: Serena Williams loses another Grand Slam final

Serena Williams has fallen flat on her face again in her bid to equal Margaret Court’s record of 24 Grand Slam titles. This time Williams’ loss was to Canadian teenager Bianca Andreescu – and what makes it better is that she lost in straight sets, 6-3, 7-5.

Andreescu, 19, is a raw hand at the game; she has never played in the main draw of the US Open before. Last year, ranked 208, she was beaten in the first round by Olga Danilovic.

Williams has now lost four Grand Slam finals in pursuit of 24 wins: Angelique Kerber defeated her at Wimbledon in 2018, Naomi Osaka defeated her in the last US Open and Simona Halep accounted for Williams at Wimbledon this year. In all those finals, Williams was unable to win more than four games in any set. And now Andreescu has sent her packing.

Williams appears to be obsessed with being the winner of the most Grand Slams before she quits the game. But after returning from maternity leave, she has shown an inability to cope with the pressure of a final. Her last win was at the Australian Open in 2017, when she beat her sister, Venus, 6-4, 6-4.

Unlike many other players, Williams is obsessed with herself. Not for her the low-profile attitude cultivated by the likes of Roger Federer or Steffi Graf. The German woman, who dominated tennis for many years, was a great example for others.

In 1988, Graf thrashed Russian Natasha Zvereva 6-0, 6-0 in the final of the French Open in 34 minutes – the shortest and most one-sided Grand Slam final on record. And Zvereva had beaten the great Martina Navratilova en route to the final!

Yet Graf was low-key at the presentation. She did not lord it over Zvereva, who was in tears, and she did not indulge in triumphalism. She was graciousness personified. One shudders to think of the way Williams would have carried on in such a situation.

Williams is precisely the opposite. When she wins, it is because she played well. And when she loses, it is all because she did not play well. Her opponent only gets some reluctant praise.

It is time for Williams to do some serious soul-searching and consider whether it is time to bow out. This constant search for a 24th title — and I’m sure she will look for a 25th after that to be atop the winners’ list — is getting a little tiresome.

There is a time in life for everything as it says in the Biblical book of Ecclesiastes. Williams has had a good run but now her obsession with another win is getting on people’s nerves. There is much more to women’s tennis than Serena Williams – and it is time that she realised it as well and retired.

Planet Debian: Dirk Eddelbuettel: pinp 0.0.8: Bugfix

A new release of our pinp package is now on CRAN. pinp allows for snazzier one or two column Markdown-based pdf vignettes, and is now used by a few packages. A screenshot of the package vignette can be seen below. Additional screenshots are at the pinp page.

pinp vignette
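For anyone who has not used the package, a vignette opts in through its YAML front matter in the usual rmarkdown way (title and author here are placeholders):

---
title: "An Example Vignette"
author: "Jane Doe"
output: pinp::pinp
---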

This release was spurred by one of those "CRAN package xyz" emails I received yesterday: processing of pinp-using vignettes was breaking at CRAN under the newest TeX Live release present on Debian testing as well as recent Fedora. The rticles package (which uses the PNAS style directly) apparently has a similar issue with PNAS.

Kurt was as usual extremely helpful in debugging, and we narrowed this down to an interaction with newer versions of the titlesec LaTeX package. So for now we did two things: upgrade our code reusing the PNAS class to their newest version of the PNAS class (as suggested by Norbert, whom I also roped in), but also copy in an older version of titlesec.sty (plus a support file). In the meantime, we are also looking into titlesec directly as Javier offered help—all this was a really decent example of open source firing on all cylinders. It is refreshing.

Because of the move to a newer PNAS version (which seems to clearly help with the occasionally odd formatting of floating blocks near the document end) I may have trampled on earlier extension pull requests. I will reach out to the authors of the PRs to work towards a better process with cleaner diffs, a process I should probably have set up earlier.

The NEWS entry for this release follows.

Changes in pinp version 0.0.8 (2019-09-08)

  • Two erroneous 'Provides' were removed from the pinp class.

  • The upquote package is now used to ensure actual (non-fancy) quotes in verbatim mode (Dirk fixing #75)

  • The underlying PNAS style was updated to the most recent v1.44 version of 2018-05-06 to avoid issues with newer TeXLive (Dirk in #79 fixing #77 and #78)

  • The new PNAS code brings some changes, e.g. watermark is no longer an option, but typesetting of paragraphs seems greatly improved. We may have stomped on an existing behavior; if you see something amiss, please file an issue.

  • However, it also conflicts with the current TeX Live version of titlesec, so for now we copy in titlesec.sty (and a support file) from a prior version, just as we do for pinp.cls and jss.bst.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the pinp page. For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianShirish Agarwal: Depression, Harappa, Space program

I had been depressed ever since the election results came out. Like many others, I was expecting Congress to come into power, but it didn’t. With that came one bad thing after another, whether in politics (Jammu and Kashmir, Assam), both of which from my POV are inhumane, not just on citizenship but simply on a human level. How people can behave like this with each other is beyond me. On the economic front, the less said the better. We are in the midst of a prolonged recession and I don’t see things turning for the better any time soon. But since we have to come to terms with it and somehow live day-to-day, we are living. Because of the web, I came to know there are so many countries where similar things are happening right now, whether it is Britain (Brexit), South Africa, or Brazil. In fact, the West Papua situation is similar in many ways to what happened in Kashmir. Of course each region has its own complexities, but it can safely be said that such events are happening all over, and in every one of them ‘The Other’ is demonised in one way or another.

One question I have often asked, and have had no clear answer to: if Germany had known that Israel would become as big and strong as it is now, would they have done what they did? Had they known that Einstein, a Jew, would go on to change the face of science? Would America have been as great without Einstein? I was flabbergasted when I saw ‘The Red Sea Diving Resort‘, which is based on a real-life Mossad operation, as shared in the pictures after the movie.

Even amid such darkness, I do see some hope. One good thing has been the rise of independent media. While the mainstream media has become completely ridiculous and, instead of questioning the Government, is toeing its line, independent media is trying to do what mainstream media should have been doing all along. I won’t say much more about this, otherwise the whole blog post would be about independent media only. Maybe some other day 🙂

Harappan Civilization

One of the more interesting things I have been following in videos has been the gamification of evolution. There is a game called ‘Ancestors, the Humankind Odyssey‘. While sadly the game is only on the Epic Games Store, I have been following the gameplay as shared by GameRiotArmy. While almost all the people who are playing the game have divorced it from their personal beliefs because of the whole evolution and natural selection vs. creationism debate, the game itself feeds on the evolution and natural selection bits. The game is open-world in nature. My only quibble is that it should have started with the big bang, but then it would probably have been too long a game. I am sure that for many people the full gameplay, once the game is complete, will run to at least 20-30 episodes.

The Harappan bit comes in with some threads that came up on Twitter. While looking into it, I saw this as well. I think most of the libraries for it are already in Debian. The papers they are presenting can be found at this link for those interested. What is interesting is that the ancient DNA they found is supposed to be Dravidian. As can be seen from the Atlantic piece, it is pretty political in nature, hence the researchers are just trying to do their job. It does make for some interesting reading though.

Space, Chandrayaan 2 and Mars-sim

As far as space is concerned, it has been an eventful week. India crash-landed the Chandrayaan 2 lander. While it is too early to say what went wrong, and we are waiting for the scientists to confirm it, the mission came to the fore for the wrong reasons. The images of Mr. Modi and how he reacted before and after became the story, rather than what Chandrayaan 2 will be doing. It also came out that ISRO scientists’ salaries have been cut, which is saddening. I had already written before about how I had spoken to some ISRO scientists about merchandise, and how they shared that merchandising only happens in Gujarat. It really seems sad.

The only thing we know as of today is that we lost communications when the lander was two and a half kilometres above the surface of the moon. I do hope there are lots of sensors which captured data, though I also understand they can’t put in many, probably due to problems like cross-talk as well as power issues. I do hope that the lander is able to communicate with the orbiter and that the rover soon starts on its wheels. Even if it does not, there is a lot the orbiter will be able to do, as shared in this Twitter thread (I shared the unroll from threadreaderapp). Although I do hope it does start talking and takes baby steps.

As far as mars-sim is concerned, a game I am helping with in my spare time, it is going to take a lot of time. We are hoping Kotlin comes soon. I am thankful to the Java team, and hopefully the packages which are in NEW come into the Debian archive soonish so we have Kotlin in Debian. I know this will help with the update to Gradle as well, which is the reason that Kotlin is coming in.

Planet DebianAndrew Cater: Chasing around installing CD images for Buster 10.1 ...

and having great fun, as ever, making a few mistakes and contributing mayhem and entropy to the CD release process. Buster 10.1 point update just released, thanks to RattusRattus, Sledge and Isy and Schweer (amongst others).

Waiting on the Stretch point release to try all over again... I'd much rather be in Cambridge, but hey, you can't have everything.

Planet DebianDebian GSoC Kotlin project blog: Beginning of the end.

Work done.

Hey all, since the last post we have come a long way in packaging Kotlin 1.3.30. I am glad to announce that Kotlin 1.3.30's dependencies are completely packaged, and only refining work on intellij-community-java (the source package of the IntelliJ-related jars that Kotlin depends on) and on Kotlin itself remains.

I have roughly packaged Kotlin (the debian folder is pretty much done) and have pushed it here. Also, the bootstrap package can be found here.

The links to all the dependencies of Kotlin 1.3.30 can be found in my previous blog pages, but I'll list them here for the reader's convenience.

1.->java-compatibility-1.0.1 -> https://github.com/JetBrains/intellij-deps-java-compatibility (DONE: here)
2.->jps-model -> https://github.com/JetBrains/intellij-community/tree/master/jps (DONE: here)
3.->intellij-core -> https://github.com/JetBrains/intellij-community/tree/183.5153 (DONE: here)
4.->streamex-0.6.7 -> https://github.com/amaembo/streamex/tree/streamex-0.6.7 (DONE: here)
5.->guava-25.1 -> https://github.com/google/guava/tree/v25.1 (DONE: Used guava-19 from libguava-java)
6.->lz4-java -> https://github.com/lz4/lz4-java/blob/1.3.0/build.xml(DONE:here)
7.->libjna-java & libjna-platform-java recompiled in jdk 8. -> https://salsa.debian.org/java-team/libjna-java (DONE : commit)
8.->liboro-java recompiled in jdk8 -> https://salsa.debian.org/java-team/liboro-java (DONE : commit)
9.->picocontainer-1.3 refining -> https://salsa.debian.org/java-team/libpicocontainer-1-java (DONE: here)
10.->platform-api -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
11.->util -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
12.->platform-impl -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
13.->extensions -> https://github.com/JetBrains/intellij-community/tree/183.5153/platform (DONE: here)
14.->jengeleman:shadow:4.0.3 --> https://github.com/johnrengelman/shadow (DONE)
15.->trove4j 1.x -> https://github.com/JetBrains/intellij-deps-trove4j (DONE)
16.->proguard:6.0.3 in jdk8 (DONE: released as libproguard-java 6.0.3-2)
17.->io.javaslang:2.0.6 --> https://github.com/vavr-io/vavr/tree/javaslang-v2.0.6 (DONE)
18.->jline 3.0.3 --> https://github.com/jline/jline3/tree/jline-3.3.1 (DONE)
19.->protobuf-2.6.1 in jdk8 (DONE)
20.->com.jcabi:jcabi-aether:1.0 -> the file that requires this is commented out;can be seen here and here
21.->org.sonatype.aether:aether-api:1.13.1 -> the file that requires this is commented out;can be seen here and here

Important Notes.

It should be noted that at this point in time, 8th September 2019, the kotlin package only aims to package the jars generated by the ":dist" task of the kotlin build scripts. This task builds the kotlin home. So that's all we have: we don't have the kotlin-gradle-plugins or kotlinx or anything that isn't part of the kotlin home.

It can be noted that the kotlin bootstrap package has the kotlin-gradle-plugin, kotlinx and kotlin-dsl jars. The eventual plan is to build kotlin-gradle-plugins and kotlinx from the Kotlin source itself, and to build kotlin-dsl from the Gradle source using Kotlin as a dependency for Gradle. After we do that, we can get rid of the kotlin bootstrap package.

It should also be noted that this kotlin package, as of 8th September 2019, may not be perfect and might contain a ton of bugs, for two reasons: partly because I have ignored some code that depended on jcabi-aether (mentioned above with a link to the commit), and mostly because the platform-api.jar and platform-impl.jar from intellij-community-idea are not the same as their upstream counterparts, but contain only the minimum files required to make Kotlin compile without errors. I did this because they would have needed new dependencies packaged, and at this time it didn't look like it was worth it.

Work left to be done.

Now I believe most of the building blocks of packaging Kotlin are done, and what's left is to remove this pesky bootstrap. I believe this can be counted as the completion of my GSoC (which officially ended on August 26). The tasks left are as follows:

Major Tasks.

  1. Make kotlin build using just openjdk-11-jdk; right now it builds with both openjdk-8-jdk and openjdk-11-jdk.
  2. Build kotlin-gradle-plugins.
  3. Build kotlinx.
  4. Build kotlindsl from gradle.
  5. Do 2,3 and 4 and make kotlin build without bootstrap.

Things that will help the kotlin effort.

  1. refine intellij-community-idea and do its copyright file properly.
  2. import kotlin 1.3.30 into a new debian-java-maintainers repository.
  3. move kotlin changes (now maintained as git commits) to quilt patches. link to kotlin -> here.
  4. do kotlin's copyright file.
  5. refine kotlin.

Authors Notes.

Hey guys, it's been a wonderful ride so far. I hope to keep doing this and maintain Kotlin in Debian. I am only a final-year student and my career fair starts on October 17th 2019, so I have to prepare for coding interviews and start searching for jobs. Until late November 2019 I'll therefore only be taking on the smaller tasks. Please note that I won't be doing this as fast as I used to, since I am going to be a little busy during this period. I hope I can land a job that lets me keep doing this :).

I would love to take this section to thank _hc, ebourg, andrewsh and seamlik for helping and mentoring me through all this.

So if any of you want to help please kindly take on any of these tasks.

!!NOTE-ping me if you want to build kotlin in your system and are stuck!!

You can find me as m36 or m36[m] on #debian-mobile and #debian-java in OFTC.

I'll try to maintain this blog and post the major updates.

,

Planet DebianDima Kogan: Are planes crashing any less than they used to?

Recently, I've been spending more of my hiking time looking for old plane crashes in the mountains. And I've been looking for data that helps me do that, for instance the last post. A question that came up in conversation is: "are crashes getting more rare?" And since I now have several datasets at my disposal, I can very easily come up with a crude answer.

The last post describes how to map the available NTSB reports describing aviation incidents. I was only using the post-1982 reports in that project, but here let's also look at the older reports. Today I can download both from their site:

$ wget https://app.ntsb.gov/avdata/Access/avall.zip
$ unzip avall.zip    # <------- Post 1982

$ wget https://app.ntsb.gov/avdata/PRE1982.zip
$ unzip PRE1982.zip  # <------- Pre 1982

I import the relevant parts of each of these into sqlite:

$ ( mdb-schema avall.mdb sqlite -T events;
    echo "BEGIN;";
    mdb-export -I sqlite avall.mdb events;
    echo "COMMIT;";
  ) | sqlite3 post1982.sqlite

$ ( mdb-schema PRE1982.MDB sqlite -T tblFirstHalf;
    echo "BEGIN;";
    mdb-export -I sqlite PRE1982.MDB tblFirstHalf;
    echo "COMMIT;";
  ) | sqlite3 pre1982.sqlite

And then I pull out the incident dates, and make a histogram:

$ cat <(sqlite3 pre1982.sqlite 'select DATE_OCCURRENCE from tblFirstHalf') \
      <(sqlite3 post1982.sqlite 'select ev_date from events') |
  perl -pe 's{^../../(..) .*}{$1 + (($1<40)? 2000: 1900)}e'   |
  feedgnuplot --histo 0 --binwidth 1 --xmin 1960 --xlabel Year \
              --title 'NTSB-reported incident counts by year'

ntsb-histogram-by-year.svg

I guess by that metric everything is getting safer. This clearly just counts NTSB incidents, and I don't do any filtering by the severity of the incident (not all reports describe crashes), but it's close enough. The NTSB only deals with civilian incidents in the USA, and only after the early 1960s, it looks like. Any info about the military?
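As an aside, the same year-binning can be done directly in Python instead of the perl one-liner; a minimal sketch, assuming the sqlite files and column names from above:

import sqlite3
from collections import Counter

# Pull the incident dates (formatted MM/DD/YY...) out of both
# databases and bin them by year, mirroring the perl one-liner above
def years(db, table, column):
    with sqlite3.connect(db) as conn:
        for (d,) in conn.execute(f"SELECT {column} FROM {table}"):
            s = str(d)
            if len(s) >= 8 and s[6:8].isdigit():
                yy = int(s[6:8])
                yield yy + (2000 if yy < 40 else 1900)

counts = Counter(years("pre1982.sqlite", "tblFirstHalf", "DATE_OCCURRENCE"))
counts.update(years("post1982.sqlite", "events", "ev_date"))
for year in sorted(counts):
    print(year, counts[year])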

At one point I went through "Historic Aircraft Wrecks of Los Angeles County" by G. Pat Macha, and listed all the incidents described in that book. The histogram of that dataset looks like this:

macha-la-histogram-by-year.svg

Aaand there are a few internet resources that list significant incidents in Southern California. For instance:

I visualize that dataset:

$ < [abc].htm perl -nE '/^ \s* 19(\d\d) | \d\d \s*(?:\s|-|\/)\s* \d\d \s*(?:\s|-|\/)\s* (\d\d)[^\d]/x || next; $y = 1900+($1 or $2); say $y unless $y==1910' |
  feedgnuplot --histo 0 --binwidth 5

carcomm-by-year.svg

So what did we learn? I guess overall crashes are becoming more rare. And there was a glut of military incidents in the 1940s and 1950s in Southern California (not surprising given all the military bases and aircraft construction facilities here at that time). And by one metric there were lots of incidents in the late 1970s/early 1980s, but they were much more interesting to this "carcomm" person than they were to Pat Macha.

CryptogramMassive iPhone Hack Targets Uyghurs

China is being blamed for a massive surveillance operation that targeted Uyghur Muslims. This story broke in waves, the first wave being about the iPhone.

Earlier this year, Google's Project Zero found a series of websites that have been using zero-day vulnerabilities to indiscriminately install malware on iPhones that would visit the site. (The vulnerabilities were patched in iOS 12.1.4, released on February 7.)

Earlier this year Google's Threat Analysis Group (TAG) discovered a small collection of hacked websites. The hacked sites were being used in indiscriminate watering hole attacks against their visitors, using iPhone 0-day.

There was no target discrimination; simply visiting the hacked site was enough for the exploit server to attack your device, and if it was successful, install a monitoring implant. We estimate that these sites receive thousands of visitors per week.

TAG was able to collect five separate, complete and unique iPhone exploit chains, covering almost every version from iOS 10 through to the latest version of iOS 12. This indicated a group making a sustained effort to hack the users of iPhones in certain communities over a period of at least two years.

Four more news stories.

This upends pretty much everything we know about iPhone hacking. We believed that it was hard. We believed that effective zero-day exploits cost $2M or $3M, and were used sparingly by governments only against high-value targets. We believed that if an exploit was used too frequently, it would be quickly discovered and patched.

None of that is true here. This operation used fourteen zero-day exploits. It used them indiscriminately. And it remained undetected for two years. (I waited before posting this because I wanted to see if someone would rebut this story, or explain it somehow.)

Google's announcement left out details, like the URLs of the sites delivering the malware. That omission meant that we had no idea who was behind the attack, although the speculation was that it was a nation-state.

Subsequent reporting added that malware targeting Android phones and the Windows operating system was also delivered by those websites. And then that the websites were targeted at Uyghurs. Which leads us all to blame China.

So now this is a story of a large, expensive, indiscriminate, Chinese-run surveillance operation against an ethnic minority in their country. And the politics will overshadow the tech. But the tech is still really impressive.

EDITED TO ADD: New data on the value of smartphone exploits:

According to the company, starting today, a zero-click (no user interaction) exploit chain for Android can get hackers and security researchers up to $2.5 million in rewards. A similar exploit chain impacting iOS is worth only $2 million.

EDITED TO ADD (9/6): Apple disputes some of the claims Google made about the extent of the vulnerabilities and the attack.

EDITED TO ADD (9/7): More on Apple's pushbacks.

Valerie AuroraWhy you shouldn’t trust people who support sexual predators

[CW: mention of child sexual abuse]

Should you trust people who support sexual predators? My answer is no. Here’s why:

Anyone who is ethically flexible enough to justify knowingly supporting a sexual predator is ethically flexible enough to justify harming the people who trust and support them.

This week’s news provides a useful case study.

After writing about how to avoid supporting sexual predators, I talked to some of the 250 people who signed a letter of support for Joi Ito to remain as head of MIT Media Lab. They signed this letter between August 26th and September 6th, when they were aware of the initial revelations that Ito and the ML had taken about $2 million from Jeffrey Epstein after his 2008 conviction for child sex offenses.

Here’s the dilemma these signatories were facing: Ito was powerful, and charming, and had inspired loyalty and support in them. The letter says, “We have experienced first-hand Joi’s integrity, and stand in testament to his overwhelmingly positive influence on our lives—and sincerely hope he remains our visionary director for many years to come.” When given evidence that Ito had knowingly supported a convicted serial child rapist, they chose to believe that there was some as-yet unknown explanation which would square with their image of Ito as a person of integrity and ethics. Others viewed taking Epstein’s money as some kind of moral imperative: the money was available, they could do good with it, no one was preventing them from taking it. They denied that Epstein accrued any advantage from the donations. Finally, many of the signatories also depend on Ito for a living; after all, as Upton Sinclair says, it is difficult to get a person to understand something when their salary depends upon their not understanding it.

These 250 people expected their public pledge of loyalty to be rewarded. Instead, on September 6th, we all learned that Ito and other ML staff had been deliberately covering up Epstein’s role in about $8 million in donations to the ML, in contravention of MIT’s explicit disqualification of Epstein as a donor. The article is filled with horrifying details, but most damning of all: Epstein visited the ML in 2015 to meet with Ito in person (a privilege accorded to him for his financial support). The women on the ML staff offered to help two extremely young women accompanying Epstein escape, fearing they were trafficked.

Ito knew Epstein was almost certainly still committing rape after 2008.

Needless to say, this is not what the signatories of the letter of support expected. Less than 24 hours after this news broke, the number of signatories had dropped from 250 to 228, and this disclaimer was added: “This petition was drafted by students on August 26th, 2019, and signed by members of the broader Media Lab community in the days that followed, to show their support for Joi and his apology. Given when community members added their names to this petition, their signatures should not be read as continued support of Joi staying on as Media Lab Director following the most recent revelations in the September 6th New Yorker article by Ronan Farrow.”

What happened? This is a phenomenon I’ve seen before, from my time working in the Linux kernel community. It’s this: Every nasty horror show of an abuser is surrounded by a ring of charming enablers who mediate between the abuser and the rest of the world. They make the abuser’s actions more palatable, smooth over the disagreements, invent explanations: the abuser can’t help it, the abuser needs help, the abuser is doing more good than harm, the abuse isn’t real abuse, we’ll always have an abuser so might as well stick with the abuser we know, etc. And around the immediate circle of enablers is a wider circle of dozens and hundreds of kind, trusting, supportive people who believe, in spite of all the evidence, that keeping the abuser and their enablers in power is ethically justified, in some way they aren’t privileged to understand. They don’t fully understand why, but they trust the people in power and keep working on faith.

That first level of charming enabler surrounding the abuser is doing that work with full knowledge of how terrible the abuser is, and they are rationalizing their decision in some way. It might be pure self-interest, it might be in service of some supposed greater goal, it might be a deep psychological need to believe that the abuser can be reformed. Whatever it is, it is a rationalization, and they are daily acting in a way that the surrounding circle of kind, trusting people would consider wildly unethical.

Here’s the key: you can’t trust anyone in that inner circle of enablers. They are people who are ethically flexible enough to rationalize supporting an abuser. They can easily rationalize screwing over the kind people who trust them, as Ito did with the 250 signatories of a letter that said, “We are here for you, we support you, we will forever be grateful for your impact on our lives.” His supporters are finding out the hard way that this kind of devotion and love is only one-way.

I am lucky enough to be in a position where I can refuse to knowingly support sexual predators. I also refuse to associate with people who support sexual predators because I know I can’t trust them to act ethically. I encourage you to join me.

Planet DebianAndreas Metzler: exim update

Testing users might want to manually pull the latest (4.92.1-3) upload of Exim from sid instead of waiting for regular migration to testing. It fixes a nasty vulnerability.

,

CryptogramFriday Squid Blogging: Squid Perfume

It's not perfume for squids. Nor is it perfume made from squids. It's a perfume called Squid, "inspired by life in the sea."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianIustin Pop: Nationalpark Bike Marathon 2019

This is a longer story… but I think it’s interesting, nonetheless.

The previous 6 months

Due to my long-running foot injury, my training was sporadic at the end of 2018, and by February I realised I would have to stop most if not all training in order to have a chance of recovery. I knew that meant no bike races this year, and I was fine with that. Well, I had to be.

The only compromise was that I wanted to do one race, the NBM short route, since that is really easy (both technique and endurance), and even untrained should be fine.

So my fitness (well, CTL) went down and down and down, and my weight didn’t improve either.

As April and May went by, my foot was getting better, but training on the indoor trainer was still causing problems and moving my recovery backwards, so easy times.

By June things were looking better - I was even able to do a couple of slow runs! July started even better, but trainer sessions were still a no-go. Only in early August could I reliably do a short trainer session without issues. The good side was that since around June I could bike to work and back without problems, but that's a short commute.

But, I felt confident I could do ~50Km on a bike with some uphill, so I registered for the race.

In August I could also restart trainer sessions, and to my pleasant surprise, even harder ones. So, I started preparing for the race in the last 2 weeks before it :( Well, better than nothing.

Overall, my CTL went from 62-65 in August 2018, to 6 (six!!) in early May, then started increasing in June, reaching a peak of 23 on the day before the race. That’s almost three times lower… so my expectations for the race were low. Especially as the longest ride I did in these 6 months was about one hour, whereas the race is double that.

The race week

Things were going quite well. I also started doing some of the Zwift Academy workouts, more precisely 1 and 2 which are at low power, and everything good.

On Wednesday however, I did workout number 3, which has two “as hard as possible” intervals. Which are exactly the ones that my foot can’t yet do, so it caused some pain, and some concern about the race.

Then we went to Scuol, and I didn’t feel very well on Thursday as I was driving. I thought some exercise would help, so I went for a short run, which reminded me that I had made my foot worse the previous day, and I got even more concerned.

On Friday morning, instead of better, I felt terrible. All I wanted was to go back to bed and sleep the whole day, but I knew that would mean no race tomorrow. I thought - maybe some light walking would be better for me than lying in bed… At least I didn’t have a runny nose or a cough, but this definitely felt like a cold.

We went up with the gondola, walked ~2Km, got back down, and I was feeling at least not worse. All this time, I was overdressed and feeling cold, while everybody else was in t-shirts.

A bit of rest in the afternoon helped; I went and picked up my race number and felt better. After dinner, as I was preparing my stuff for the next morning, I started feeling a bit better about doing the race. “Did not start” was now out of the question, but whether it would be a DNF was not clear yet.

Race (day)

Thankfully the race doesn’t start early for this specific route, so the morning was relatively relaxed. But then of course I was a few minutes late, so I hurried on my bike to the train station, only to realise I was among the early people. Loading the bike, getting on the bus (the train station in Scuol is off-line for repairs), a long bus ride to the start point, and then… 2 hours of waiting. And to think I thought I was 5 minutes late :)

I spent the two hours just reading email and browsing the internet (and posting a selfie on FB), and then finally it was on.

And I was surprised how “on” the race was from the first 100 meters. Despite repeated announcements in those two hours that the first 2-3 km do not matter since they’re through the S-chanf village, people started going very hard as soon as there was some space.

So I find myself going 40km/h (forty!!!) on a mountain bike on relatively flat gravel road. This sounds unbelievable, right? But the data says:

  • race started at 1’660m altitude
  • after the first 4.4km, I was at 1’650m, with a total altitude gain of 37m (and thus a total descent of 47m); thus, not flat-flat, but not downhill
  • over this 4.4km, my average speed was 32.5km/h, and that includes starting from zero speed, and in the block (average speed for the first minute was 20km/h)

While 32.5km/h on an MTB sounds cool, the sad part was that I knew this was unsustainable, both from the pure speed point of view, and from the heart rate point of view. I was already at 148bpm after 2½ minutes, but then at minute 6 it went over 160bpm and stayed that way. That is above my LTHR (estimated by various Garmin devices), so I was dipping into reserves. VeloViewer estimates power output here was 300-370W in these initial minutes, which is again above my FTP, so…

But, it was fun. Then at km 4.5 came a bit of a climb (800m long, 50m of altitude, ~6.3%), after which it became mostly flow on gravel. And for the next hour, until the single long climb (towards Guarda), it was the best ride I had this year, and one of the best segments in races in general. Yes, there are a few short climbs here and there (e.g. a 10% one over ~700m, another ~11% one over 300m or so), but in general it’s a slowly descending route from ~1700m altitude to almost 1400m (plus add in another ~120m gained), so ~420m of descent over ~22km. This means that, despite the short climbs, the average speed is still good - a bit more than 25km/h - which made this a very, very nice segment. No foot pain, no exertion, mean heart rate 152bpm, which is fine. Estimated power is a bit high (mean 231W, NP: 271W ← this is strange, too high); I’d really like to have a power meter on my MTB as well.
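As an aside, NP coming out well above mean power is expected on surgy terrain: normalized power (in the standard Coggan/TrainingPeaks definition, as I understand it) is a fourth-power mean over a 30-second rolling average, so the hard bursts dominate. A quick sketch of that formula, for 1Hz power samples:

# Normalized Power: 30s rolling average of the power samples, then
# the fourth root of the mean fourth power (needs >= 30 samples)
def normalized_power(watts):
    window = 30
    rolling = [sum(watts[i:i + window]) / window
               for i in range(len(watts) - window + 1)]
    return (sum(p ** 4 for p in rolling) / len(rolling)) ** 0.25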

Then, after about an hour, the climb towards Guarda starts. It’s an easy climb for a fit person, but as I said my fitness was worse this year, and my weight was not good either. Data for this segment:

  • it took me 33m:48s
  • 281m altitude gained
  • 4.7km length
  • mean HR: 145bpm
  • mean cadence: 75rpm

I remember stopping to drink once, and maybe another time to rest for about half a minute, but I’m not sure. In total I stopped for 33s during this half hour.

Then slowly descending on nice roads towards the next small climb to Ftan, then another short climb (thankfully, I was pretty dead at this point) of 780m distance, 7m36s including almost a minute stop, then down, another tiny climb, then down for the finish.

At the finish, knowing that there’s a final climb after you descend into Scuol itself and before the finish, I gathered all my reserves to do the climb standing. Alas, it was a bit longer than I thought; I think I managed to do 75-80% of it standing, but then sat down. Nevertheless, a good short climb:

  • 22m altitude over 245m distance, done in 1m02s
  • mean grade 8.8%, max grade 13.9%
  • mean HR 161bpm, max HR 167bpm which actually was my max for this race
  • mean speed 14.0km/h
  • estimated mean power 433W, NP: 499W; seems optimistic, but I’ll take it :)

Not bad, not bad. I was pretty happy about being able to push this hard, for an entire minute, at the end of the race. Yay for 1m power?

And obligatory picture, which also shows the grade pretty well:

Final climb! And beating my PR by ~9%

I don’t know how the photographer managed to do it, but having those other people in the picture makes it look much better :)

Comparison with last year

Let’s first look at official race results:

  • 2018: 2h11m37s
  • 2019: 2h22m13s

That’s 8% slower. Honestly, I thought I would do much worse, given my lack of training. Or does a 2.5× lower CTL only result in an 8% time loss?

Honestly, I don’t think so. I think what saved me this year was that—since I couldn’t do bike rides—I did much more cross-training, as in core exercises. Nothing fancy, just planks, push-ups, yoga, etc., but it helped significantly. If my foot is fine and I can do both next year, I’ll be in a much better position.

And this is why the sub-title of this post is “Fitness has many meanings”. I really need to diversify my training in general, but I was thinking in a somewhat theoretical way about it; this race showed it quite practically.

If I look at Strava data, it gives an even more clear picture:

  • on the 1-hour flat segment I was telling you about, which I really loved, I got a PR, beating the previous year by 1 minute; Strava estimates 250W for this hour, which is what my FTP was last year;
  • on all the climbs I was slower than last year, as expected, but on the longer climbs significantly so; and at times I was slower than even in 2016, when I did the next-longer route.

And I just realised: of the 10½m I took longer this year, 6½m were lost on the Guarda climb :)

So yes, you can’t discount fitness, but leg fitness is not everything, and TrainingPeaks, it seems, can’t show overall fitness.

At least I did beat my PR on the finishing climb (1m12s vs. 1m19s last year), because I had left aside those final reserves for it.

Next steps

Assuming I’m successful at dealing with my foot issue, and that early next year I can restart consistent training, I’m not concerned. I need to put in regular sessions, and I also need to put in long sessions. The path to success here is clear; it all depends on willpower.

Oh, and losing ~10kg of fat wouldn’t be bad, like at all.

Cory DoctorowTalking RADICALIZED and MAKERS on Writers Voice

The Writers Voice podcast just published their interview with me about Radicalized; as a bonus, they include my decade-old interview about Makers in the recording!

MP3

CryptogramThe Doghouse: Crown Sterling

A decade ago, the Doghouse was a regular feature in both my email newsletter Crypto-Gram and my blog. In it, I would call out particularly egregious -- and amusing -- examples of cryptographic "snake oil."

I dropped it both because it stopped being fun and because almost everyone converged on standard cryptographic libraries, which meant standard non-snake-oil cryptography. But every so often, a new company comes along that is so ridiculous, so nonsensical, so bizarre, that there is nothing to do but call it out.

Crown Sterling is complete and utter snake oil. The company sells "TIME AI," "the world's first dynamic 'non-factor' based quantum AI encryption software," "utilizing multi-dimensional encryption technology, including time, music's infinite variability, artificial intelligence, and most notably mathematical constancies to generate entangled key pairs." Those sentence fragments tick three of my snake-oil warning signs -- from 1999! -- right there: pseudo-math gobbledygook (warning sign #1), new mathematics (warning sign #2), and extreme cluelessness (warning sign #4).

More: "In March of 2019, Grant identified the first Infinite Prime Number prediction pattern, where the discovery was published on Cornell University's www.arXiv.org titled: 'Accurate and Infinite Prime Number Prediction from Novel Quasi-Prime Analytical Methodology.' The paper was co-authored by Physicist and Number Theorist Talal Ghannam PhD. The discovery challenges today's current encryption framework by enabling the accurate prediction of prime numbers." Note the attempt to leverage Cornell's reputation, even though the preprint server is not peer-reviewed and allows anyone to upload anything. (That should be another warning sign: undeserved appeals to authority.) PhD student Mark Carney took the time to refute it. Most of it is wrong, and what's right isn't new.

I first encountered the company earlier this year. In January, Tom Yemington from the company emailed me, asking to talk. "The founder and CEO, Robert Grant is a successful healthcare CEO and amateur mathematician that has discovered a method for cracking asymmetric encryption methods that are based on the difficulty of finding the prime factors of a large quasi-prime numbers. Thankfully the newly discovered math also provides us with much a stronger approach to encryption based on entangled-pairs of keys." Sounds like complete snake-oil, right? I responded as I usually do when companies contact me, which is to tell them that I'm too busy.

In April, a colleague at IBM suggested I talk with the company. I poked around at the website, and sent back: "That screams 'snake oil.' Bet you a gazillion dollars they have absolutely nothing of value -- and that none of their tech people have any cryptography expertise." But I thought this might be an amusing conversation to have. I wrote back to Yemington. I never heard back -- LinkedIn suggests he left in April -- and forgot about the company completely until it surfaced at Black Hat this year.

Robert Grant, president of Crown Sterling, gave a sponsored talk: "The 2019 Discovery of Quasi-Prime Numbers: What Does This Mean For Encryption?" I didn't see it, but it was widely criticized and heckled. Black Hat was so embarrassed that it removed the presentation from the conference website. (Parts of it remain on the Internet. Here's a short video from the company, if you want to laugh along with everyone else at terms like "infinite wave conjugations" and "quantum AI encryption." Or you can read the company's press release about what happened at Black Hat, or Grant's Twitter feed.)

Grant has no cryptographic credentials. His bio -- on the website of something called the "Resonance Science Foundation" -- is all over the place: "He holds several patents in the fields of photonics, electromagnetism, genetic combinatorics, DNA and phenotypic expression, and cybernetic implant technologies. Mr. Grant published and confirmed the existence of quasi-prime numbers (a new classification of prime numbers) and their infinite pattern inherent to icositetragonal geometry."

Grant's bio on the Crown Sterling website contains this sentence, absolutely beautiful in its nonsensical use of mathematical terms: "He has multiple publications in unified mathematics and physics related to his discoveries of quasi-prime numbers (a new classification for prime numbers), the world's first predictive algorithm determining infinite prime numbers, and a unification wave-based theory connecting and correlating fundamental mathematical constants such as Pi, Euler, Alpha, Gamma and Phi." (Quasi-primes are real, and they're not new. They're numbers with only large prime factors, like RSA moduli.)

Near as I can tell, Grant's coauthor is the mathematician of the company: "Talal Ghannam -- a physicist who has self-published a book called The Mystery of Numbers: Revealed through their Digital Root as well as a comic book called The Chronicles of Maroof the Knight: The Byzantine." Nothing about cryptography.

There seems to be another technical person. Ars Technica writes: "Alan Green (who, according to the Resonance Foundation website, is a research team member and adjunct faculty for the Resonance Academy) is a consultant to the Crown Sterling team, according to a company spokesperson. Until earlier this month, Green -- a musician who was 'musical director for Davy Jones of The Monkees' -- was listed on the Crown Sterling website as Director of Cryptography. Green has written books and a musical about hidden codes in the sonnets of William Shakespeare."

None of these people have demonstrated any cryptographic credentials. No papers, no research, no nothing. (And, no, self-publishing doesn't count.)

After the Black Hat talk, Grant -- and maybe some of those others -- sat down with Ars Technica and spun more snake oil. They claimed that the patterns they found in prime numbers allows them to break RSA. They're not publishing their results "because Crown Sterling's team felt it would be irresponsible to disclose discoveries that would break encryption." (Snake-oil warning sign #7: unsubstantiated claims.) They also claim to have "some very, very strong advisors to the company" who are "experts in the field of cryptography, truly experts." The only one they name is Larry Ponemon, who is a privacy researcher and not a cryptographer at all.

Enough of this. All of us can create ciphers that we cannot break ourselves, which means that amateur cryptographers regularly produce amateur cryptography. These guys are amateurs. Their math is amateurish. Their claims are nonsensical. Run away. Run, far, far, away.

But be careful how loudly you laugh when you do. Not only is the company ridiculous, it's litigious as well. It has sued ten unnamed "John Doe" defendants for booing the Black Hat talk. (It also sued Black Hat, which may have more merit. The company paid $115K to have its talk presented amongst actual peer-reviewed talks. For Black Hat to remove its nonsense may very well be a breach of contract.)

Maybe Crown Sterling can file a meritless lawsuit against me instead for this post. I'm sure it would think it'd result in all sorts of positive press coverage. (Although any press is good press, so maybe it's right.) But if I can prevent others from getting taken in by this stuff, it would be a good thing.

Planet DebianReproducible Builds: Reproducible Builds in August 2019

Welcome to the August 2019 report from the Reproducible Builds project!

In these monthly reports we outline the most important things that have happened in the world of Reproducible Builds and what we have been up to.

As a quick recap of our project: whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed to end users or systems as precompiled binaries. The motivation behind the reproducible builds effort is to ensure zero changes have been introduced during these compilation processes. This is achieved by ensuring identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was changed or even compromised.
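In practice, verifying that consensus can be as simple as comparing checksums of independently-built artifacts. A minimal illustrative sketch (not any project’s actual tooling):

import hashlib
import sys

# Hash a build artifact so two independent rebuilds can be compared
# byte-for-byte
def sha256(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

first, second = sys.argv[1], sys.argv[2]
print("reproducible" if sha256(first) == sha256(second) else "differs")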

In this month’s report, we cover:

  • Media coverage & eventsWebmin, CCCamp, etc.
  • Distribution workThe first fully-reproducible package sets, openSUSE update, etc
  • Upstream newslibfaketime updates, gzip, ensuring good definitions, etc.
  • Software developmentMore work on diffoscope, new variations in our testing framework, etc.
  • Misc newsFrom our mailing list, etc.
  • Getting in touchHow to contribute, etc

If you are interested in contributing to our project, please visit our Contribute page on our website.


Media coverage & events

A backdoor was found in Webmin, a popular web-based application used by sysadmins to remotely manage Unix-based systems. Whilst more details can be found on upstream’s dedicated exploit page, it appears that the build toolchain was compromised. Especially of note is that the exploit “did not show up in any Git diffs” and thus would not have been found via an audit of the source code. The backdoor would allow a remote attacker to execute arbitrary commands with superuser privileges on the machine running Webmin. Once a machine is compromised, an attacker could then use it to launch attacks on other systems managed through Webmin or indeed any other connected system. Techniques such as reproducible builds can help detect exactly these kinds of attacks that can lie dormant for years. (LWN comments)

In a talk titled There and Back Again, Reproducibly! Holger Levsen and Vagrant Cascadian presented at the 2019 edition of the Linux Developer Conference in São Paulo, Brazil on Reproducible Builds.

LWN posted and hosted an interesting summary and discussion on Hardening the file utility for Debian. In July, Chris Lamb had cross-posted his reply to the “Re: file(1) now with seccomp support enabled” thread, originally started on the debian-devel mailing list. In this post, Chris refers to our strip-nondeterminism tool not being able to accommodate the additional security hardening in file(1) and the changes made to the tool in order to fix this issue which was causing a huge number of regressions in our testing framework.

The Chaos Communication Camp — an international, five-day open-air event for hackers that provides a relaxed atmosphere for free exchange of technical, social, and political ideas — hosted its 2019 edition, where there were many discussions and meet-ups at least partly related to Reproducible Builds. This included the titular Reproducible Builds Meetup session, which was attended by around twenty-five people, half of whom were new to the project, as well as a session dedicated to all Arch Linux related issues.


Distribution work

In Debian, the first “package sets” — ie. defined subsets of the entire archive — have become 100% reproducible, including the so-called “essential” set for the bullseye distribution on the amd64 and armhf architectures. This is thanks to work by Chris Lamb on bash, readline and other low-level libraries and tools. Perl still has issues on i386 and arm64, however.

Dmitry Shachnev filed a bug report against the debhelper utility that speaks to issues around using the date from the debian/changelog file as the source for the SOURCE_DATE_EPOCH environment variable, as this can lead to non-intuitive results when a package is automatically rebuilt via so-called binary (NB. not “source”) NMUs. A related issue was later filed against qtbase5-dev by Helmut Grohne, as this exact issue led to a problem with co-installability across architectures.
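For background, SOURCE_DATE_EPOCH is the convention by which build tools are handed a deterministic timestamp instead of reading the clock. A minimal illustrative sketch of a build step honouring it (not debhelper’s actual code):

import os
import time

# Prefer the externally-supplied deterministic timestamp; fall back
# to the current time only when SOURCE_DATE_EPOCH is not set
build_time = int(os.environ.get("SOURCE_DATE_EPOCH", time.time()))
stamp = time.strftime("%Y-%m-%d %H:%M:%S", time.gmtime(build_time))
print(f"Generated on {stamp} UTC")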

Lastly, 115 reviews of Debian packages were added, 45 were updated and 244 were removed this month, appreciably adding to our knowledge about identified issues. Many issue types were updated by Chris Lamb, including embeds_build_data_via_node_preamble, embeds_build_data_via_node_rollup, captures_build_path_in_beam_cma_cmt_files, captures_varying_number_of_build_path_directory_components (discussed later), timezone_specific_files_due_to_haskell_devscripts, etc.

Bernhard M. Wiedemann posted his monthly Reproducible Builds status update for the openSUSE distribution. New issues were found from enabling Link Time Optimization (LTO) in this distribution’s Tumbleweed branch. This affected, for example, nvme-cli as well as perl-XML-Parser and pcc with packaging issues.


Upstream news


Software development

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. In August we wrote a large number of such patches, including:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues. It is run countless times a day on our testing infrastructure and is essential for identifying fixes and causes of non-deterministic behaviour.
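A typical invocation compares two builds of the same artifact and renders any differences as an HTML report; a sketch (the package file names here are hypothetical):

import subprocess

# Compare two builds of the same package and write an HTML report.
# diffoscope exits non-zero when the inputs differ, so we don't
# treat that as an error.
subprocess.run(
    ["diffoscope", "--html", "report.html",
     "first-build/foo_1.0_amd64.deb", "second-build/foo_1.0_amd64.deb"],
    check=False,
)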

This month, Chris Lamb made the following changes:

  • Improvements:
    • Don’t fall back to an unhelpful raw hexdump when, for example, readelf(1) reports a minor issue in a section in an ELF binary. For example, when the .frames section is of the NOBITS type, its contents are apparently “unreliable” and thus readelf(1) returns 1. (#58, #931962)
    • Include either standard error or standard output (not just the latter) when an external command fails. []
  • Bug fixes:
    • Skip calls to unsquashfs when we are neither root nor running under fakeroot. (#63)
    • Ensure that all of our artificially-created subprocess.CalledProcessError instances have output instances that are bytes objects, not str. []
    • Correct a reference to parser.diff; diff in this context is a Python function in the module. []
    • Avoid a possible traceback caused by a str/bytes type confusion when handling the output of failing external commands. []
  • Testsuite improvements:

    • Test for 4.4 in the output of squashfs -version, even though the Debian package version is 1:4.3+git190823-1. []
    • Apply a patch from László Böszörményi to update the squashfs test output and additionally bump the required version for the test itself. (#62 & #935684)
    • Add the wabt Debian package to the test-dependencies so that we run the WebAssembly tests on our continuous integration platform, etc. []
  • Improve debugging:
    • Add the containing module name to the (eg.) “Using StaticLibFile for ...” debugging messages. []
    • Strip off trailing “original size modulo 2^32 671” (etc.) from gzip compressed data as this is just a symptom of the contents itself changing that will be reflected elsewhere. (#61)
    • Avoid a lack of space between “... with return code 1” and “Standard output”. []
    • Improve debugging output when instantiating our Comparator object types. []
    • Add a literal “eg.” to the comment on stripping “original size modulo...” text to emphasise that the actual numbers are not fixed. []
  • Internal code improvements:
    • No need to parse the section group from the class name; we can pass it via type built-in kwargs argument. []
    • Add support to Difference.from_command_exc and friends to ignore specific returncodes from the called program and treat them as “no” difference. []
    • Simplify parsing of optional command_args argument to Difference.from_command_exc. []
    • Set long_description_content_type to text/x-rst to appease the PyPI.org linter. []
    • Reposition a comment regarding an exception within the indented block to match Python code convention. []

In addition, Mattia Rizzolo made the following changes:

  • Now that we install wabt, expect its tools to be available. []
  • Bump the Debian backport check. []

Lastly, Vagrant Cascadian updated diffoscope to versions 120, 121 and 122 in the GNU Guix distribution.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. This month, Chris Lamb made the following changes.

  • Add support for enabling and disabling specific normalizers via the command line. (#10)
  • Drop accidentally-committed warning emitted on every fixture-based test. []
  • Reintroduce the .ar normalizer [] but disable it by default so that it can be enabled with --normalizers=+ar or similar; see the sketch after this list. (#3)
  • In verbose mode, print the normalizers that strip-nondeterminism will apply. []
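As a usage note, these toggles can also be driven from a script; a minimal sketch using the command-line options described in the entries above (the file name is hypothetical):

import subprocess

# Normalize a static library, explicitly enabling the off-by-default
# ar normalizer; verbose mode prints which normalizers are applied
subprocess.run(
    ["strip-nondeterminism", "--verbose", "--normalizers=+ar", "libfoo.a"],
    check=True,
)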

In addition, there was some movement on an issue in the Archive::Zip Perl module that strip-nondeterminism uses regarding the lack of support for bzip compression that was originally filed in 2016 by Andrew Ayer.

Test framework

We operate a comprehensive Jenkins-based testing framework that powers tests.reproducible-builds.org.

This month Vagrant Cascadian suggested, and subsequently implemented, that we additionally test with varying build directories: different string lengths (eg. /path/to/123 vs /path/to/123456) as well as a varying number of directory components (eg. /path/to/dir vs. /path/to/parent/subdir). Curiously, whilst it was a priori believed that this was rather unlikely to yield differences, Chris Lamb has managed to identify approximately twenty packages that are affected by this issue.

It was also noticed that our testing of the Coreboot free software firmware fails to build the toolchain since we switched to building on the Debian buster distribution. The last successful build was on August 7th but all newer builds have failed.

In addition, the following code changes were performed in the last month:

  • Chris Lamb: Ensure that the size of the log for the second build in HTML pages is also correctly formatted (eg. “12KB” vs “12345”). []

  • Holger Levsen:

  • Mathieu Parent: Update the contact details for the Debian PHP Group. []

  • Mattia Rizzolo:

The usual node maintenance was performed by Holger Levsen [][] and Vagrant Cascadian [].


Misc news

There was yet more effort put into our website this month, including misc copyediting by Chris Lamb [], Mathieu Parent referencing his fix for php-pear [] and Vagrant Cascadian updating a link to his homepage [].

On our mailing list this month Santiago Torres Arias started a Setting up a MS-hosted rebuilder with in-toto metadata thread regarding Microsoft’s interest in setting up a rebuilder for Debian packages touching on issues of transparency logs and the integration of in-toto by the Secure Systems Lab at New York University. In addition, Lars Wirzenius continued conversation regarding various questions about reproducible builds and their bearing on building a distributed continuous integration system.

Lastly, in a thread titled Reproducible Builds technical introduction tutorial Jathan asked whether anyone had some “easy” Reproducible Builds tutorials in slides, video or written document format.


Getting in touch

If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. Alternatively, you can get in touch with us via:



This month’s report was written by Bernhard M. Wiedemann, Chris Lamb, Eli Schwartz, Holger Levsen, Jelle van der Waa, Mathieu Parent and Vagrant Cascadian. It was subsequently reviewed by a bunch of Reproducible Builds folks on IRC and the mailing list.

CryptogramDefault Password for GPS Trackers

Many GPS trackers are shipped with the default password 123456. Many users don't change them.

We just need to eliminate default passwords. This is an easy win.

Worse Than FailureError'd: Does Your Child Say "WTF" at Home?

Abby wrote, "I'm tempted to tell the school that my child mostly speaks Sanskrit."

 

"First of all, I have 58,199 rewards points, so I'm a little bit past joining, second, I'm pretty sure Bing Rewards was rebranded as Microsoft Rewards a while ago, and third...SERPBubbleXL...wat?" writes Zander.

 

"I guess, for T-Mobile, time really is money," Greg writes.

 

Hans K. wrote, "I guess it's sort of fitting, but in a quiz about Generics in Java, I was left a little bit confused."

 

"Wait, so if I do, um, nothing, am I allowed to make further changes or any new appointment?" Jeff K. writes.

 

Soumya wrote, "Yeah...I'm not a big fan of Starbucks' reward program..."

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

LongNowWhat a Prehistoric Monument Reveals about the Value of Maintenance

Members of Long Now London chalking the White Horse of Uffington, a 3000-year-old prehistoric hill figure in England. Photo by Peter Landers.

Imagine, if you will, that you could travel back in time three thousand years to the late Bronze Age, with a bird’s eye view of a hill near the present-day village of Uffington, in Oxfordshire, England. From that vantage, you’d see the unmistakable outlines of a white horse etched into the hillside. It is enormous — roughly the size of a football field — and visible from 20 miles away.

Now, fast forward. Bounding through the millennia, you’d see groups of people arrive from nearby villages at regular intervals, making their way up the hill to partake in good old-fashioned maintenance. Using hammers and buckets of chalk, they scour the hillside to ensure the giant pictogram is not obscured. Without this regular maintenance, the hill figure would not last more than twenty years before becoming entirely eroded and overgrown. After the work is done, a festival is held.

Entire civilizations rise and fall. The White Horse of Uffington remains. Scribes and historians make occasional note of the hill figure, such as in the Welsh Red Book of Hergest in 01382 (“Near to the town of Abinton there is a mountain with a figure of a stallion upon it, and it is white. Nothing grows upon it.”) or by the Oxford archivist Francis Wise in 01736 (“The ceremony of scouring the Horse, from time immemorial, has been solemnized by a numerous concourse of people from all the villages roundabout.”). Easily recognizable by air, the horse is temporarily hidden by turf during World War II to confuse Luftwaffe pilots during bombing raids. Today, the National Trust preserves the site, overseeing a regular act of maintenance 3,000 years in the making.

Long Now London chalking the White Horse. Photo by Peter Landers.

Earlier this summer, members of Long Now London took a field trip to Uffington to participate in the time-honored ceremony. Christopher Daniel, the lead organizer of Long Now London, says the idea to chalk the White Horse came from a conversation with Sarah Davis of Longplayer about the maintenance of art, places and meaning across generations and millennia.

“Sitting there, performing the same task as people in 01819, 00819 and around 800 BCE, it is hard not to consider the types and quantities of meaning and ceremony that may have been attached to those actions in those times,” Daniel says.

The White Horse of Uffington in 01937. Photo by Paul Nash.

Researchers still do not know why the horse was made. Archaeologist David Miles, who was able to date the horse to the late Bronze Age using a technique called optical stimulated luminescence, told The Smithsonian that the figure of the horse might be related to early Celtic art, where horses are depicted pulling the chariot of the sun across the sky. From the bottom of the Uffington hill, the sun appears to rise behind the horse.

“From the start the horse would have required regular upkeep to stay visible,” Emily Cleaver writes in The Smithsonian. “It might seem strange that the horse’s creators chose such an unstable form for their monument, but archaeologists believe this could have been intentional. A chalk hill figure requires a social group to maintain it, and it could be that today’s cleaning is an echo of an early ritual gathering that was part of the horse’s original function.”

In her lecture at Long Now earlier this summer, Monica L. Smith, an archaeologist at UCLA, highlighted the importance of ritual sites like Stonehenge and Göbekli Tepe in the eventual formation of cities.

“The first move towards getting people into larger and larger groups was probably something that was a ritual impetus,” she said. “The idea of coming together and gathering with a bunch of strangers was something that is evident in the earliest physical ritual structures that we have in the world today.”

Photo by Peter Landers.

For Christopher Daniel, the visit to Uffington underscored that there are different approaches to making things last. “The White Horse requires rather more regular maintenance than somewhere like Stonehenge,” he said. “But thankfully the required techniques and materials are smaller, simpler and much closer to hand.”

Though it requires considerably fewer resources to maintain, and is more symbolic than functional, the Uffington White Horse nonetheless offers a lesson in maintaining the infrastructure of cities today. “As humans, we are historically biased against maintenance,” Smith said in her Long Now lecture. “And yet that is exactly what infrastructure needs.”

The Golden Gate Bridge in San Francisco. Photo by Rich Niewiroski Jr.

When infrastructure becomes symbolic to a built environment, it is more likely to be maintained. Smith gave the example of San Francisco’s Golden Gate Bridge to illustrate this point. Much like the White Horse, the Golden Gate Bridge undergoes a willing and regular form of maintenance. “Somewhere between five to ten thousand gallons of paint a year, and thirty painters, are dedicated to keeping the Golden Gate Bridge golden,” Smith said.

Photos by Peter Landers.

For members of Long Now London, chalking the White Horse revealed that participating in acts of maintenance can be deeply meaningful. “It felt at once both quite ordinary and utterly sublime,” Daniel said. “The physical activity itself is in many ways straightforward. It is the context and history that elevate those actions into what we found to be a profound experience. It was also interesting to realize that on some level it does not matter why we do this. What matters most is that it is done.”

Daniel hopes Long Now London will carry out this “secular pilgrimage” every year. 

“Many of the oldest protected routes across Europe are routes of pilgrimage,” he says. “They were stamped out over centuries by people carrying or searching for meaning. I want the horse chalking to carry meaning across both time and space. If even just a few of us go to the horse each year with this intent, it becomes a tradition. Once something becomes a tradition, it attracts meaning, year by year, generation by generation. On this first visit to the horse, one member brought his kids. A couple of other members said they want to bring theirs in the future. This relatively simple act becomes something we do together—something we remember as much for the communal spirit as for the activity itself. In so doing, we layer new meaning onto old as we bash new chalk into old.”



Worse Than FailureCodeSOD: Give Your Date a Workout

Bob E inherited a site which helps amateur sports clubs plan their recurring workouts and practices during the season. To do this, given the start date of the season and the number of weeks in it, the site needs to figure out all of the days in that range.

function GenWorkoutDates()
{
   global $SeasonStartDate, $WorkoutDate, $WeeksInSeason;

   $TempDate = explode("/", $SeasonStartDate);

   for ($i = 1; $i <= $WeeksInSeason; $i++)
   {
     for ($j = 1; $j <= 7; $j++)
     {
       $MonthName = substr("JanFebMarAprMayJunJulAugSepOctNovDec", $TempDate[0] * 3 - 3, 3);

       $WorkoutDate[$i][$j] = $MonthName . " " . $TempDate[1] . "  ";
       $TempDate[1] += 1;

       switch ( $TempDate[0] )
       {
         case 9:
         case 4:
         case 6:
         case 11:
           $DaysInMonth = 30;
           break;

         case 2:
           $DaysInMonth = 28;

           switch ( $TempDate[2] )
           {
             case 2012:
             case 2016:
             case 2020:
               $DaysInMonth = 29;
               break;

             default:
               $DaysInMonth = 28;
               break;
           }

           break;

         default:
           $DaysInMonth = 31;
           break;
       }

       if ($TempDate[1] > $DaysInMonth)
       {
         $TempDate[1] = 1;
         $TempDate[0] += 1;
         if ($TempDate[0] > 12)
         {
           $TempDate[0] = 1;
           $TempDate[2] += 1;
         }
       }
     }
   }
}

I do enjoy that PHP’s string-splitting function is called explode. That’s not a WTF. More functions should be called explode.

This method of figuring out the month name, though:

$MonthName = substr("JanFebMarAprMayJunJulAugSepOctNovDec", $TempDate[0] * 3 - 3, 3);

I want to hate it, but I’m impressed with it.

From there, we have lovely hard-coded leap years, the “Thirty days has September…” poem implemented as a switch statement, and then that lovely rollover calculation for the end of a month (and the end of the year).

“I’m not a PHP developer,” Bob writes. “But I know how to use Google.” After some googling, he replaced this block of code with a 6-line version that uses built-in date handling functions.
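Bob’s six-line replacement isn’t reproduced in the submission, but the idea is straightforward: let the standard library own the calendar math instead of hand-rolling month lengths and leap years. What follows is a rough sketch of that approach, written in Python for brevity (the names are mine, not Bob’s); Bob’s actual fix used PHP’s built-in date handling.

from datetime import date, timedelta

def gen_workout_dates(season_start, weeks_in_season):
    # Build a week-indexed table of "Mon DD" labels; the date arithmetic
    # handles month lengths, leap years, and year rollover for us.
    return [
        [(season_start + timedelta(weeks=week, days=day)).strftime("%b %d")
         for day in range(7)]
        for week in range(weeks_in_season)
    ]

# e.g. gen_workout_dates(date(2019, 9, 2), 16)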



Cory DoctorowCritical essays (including mine) discuss Toronto’s plan to let Google build a surveillance-based “smart city” along its waterfront

Sidewalk Labs is Google’s sister company that sells “smart city” technology; its showcase partner is Toronto, my hometown, where it has made a creepy shitshow out of its freshman outing, from the mass resignations of its privacy advisors to the underhanded way it snuck in the right to take over most of the lakeshore without further consultations (something the company straight up lied about after they were outed). Unsurprisingly, the city, the province, the country, and the company are all being sued over the plan.

Toronto Life has run a great, large package of short essays by proponents and critics of the project, from Sidewalk Labs CEO Dan Doctoroff (no, really, that’s his name) to former privacy commissioner Ann Cavoukian (who evinces an unfortunate belief in data-deidentification) to city councillor and former Greenpeace campaigner Gord Perks to urban guru Richard Florida to me.

I wrote about the prospect that a city could be organized around the principle that people are sensors, not things to be sensed — that is, imagine an internet of things that doesn’t relegate the humans it notionally serves to the status of “thing.”

Our cities are necessarily complex, and they benefit from sensing and control. From census tracts to John Snow’s 19th-century map of central London cholera infections, we have been gathering telemetry on the performance of our cities in order to tune and optimize them for hundreds of years. As cities advance, they demand ever-higher degrees of sensing and actuating. But smart cities have to be built by cities themselves, democratically controlled and publicly owned. Reinventing company towns with high-tech fillips is not a path to a brighter future. It’s a new form of digital feudalism.

Humans are excellent sensors. We’re spectacular at deciding what we want for dinner, which seat on the subway we prefer, which restaurants we’re likely to enjoy and which strangers we want to talk to at parties. What if people were the things that smart cities were designed to serve, rather than the data that smart cities lived to process? Here’s how that could work. Imagine someone ripped all the surveillance out of Android and all the anti-user controls out of iOS and left behind nothing on your phone but code that serves you, not manufacturers or advertisers. It could still collect data—where you are, who you talk to, what you say—but it would be a roach motel for that data, which would check in to your device but not check out. It wouldn’t be available to third parties without your ongoing consent.

A phone that knows about you—but doesn’t tell anyone what it knows about you—would be your interface to a better smart city. The city’s systems could stream data to your device, which could pick the relevant elements out of the torrent: the nearest public restroom, whether the next bus has a seat for you, where to get a great sandwich.


A smart city should serve its users, not mine their data
[Cory Doctorow/Toronto Life]

The Sidewalk Wars [Toronto Life]

(Image: Cryteria, CC-BY, modified)

Planet DebianTim Retout: PA Consulting

In early October, I will be saying goodbye to my colleagues at CV-Library after 7.5 years, and joining PA Consulting in London as a Principal Consultant.

Over the course of my time at CV-Library I have got married, had a child, and moved from Southampton to Bedford. I am happy to have played a part in the growth of CV-Library as a leading recruitment brand in the UK, especially helping to make the site more reliable - I can tell more than a few war stories.

Most of all I will remember the people. I still have much to learn about management, but working with such an excellent team, the years passed very quickly. I am grateful to everyone, and wish them all every future success.

CryptogramCredit Card Privacy

Good article in the Washington Post on all the surveillance associated with credit card use.

Worse Than FailureCodeSOD: UnINTentional Errors

Data type conversions are one of those areas where we have rich, well-supported, well-documented features built into most languages. Thus, we also have endless attempts by people to re-implement them. Or worse, to wrap a built-in method in a way which makes everything less clear.

Mindy encountered this.

/* For converting (KEY_MSG_INPUT) to int format. */
public static int numberToIntFormat(String value) {
  int returnValue = -1;    	
  if (!StringUtil.isNullOrEmpty(value)) {
    try {
      int temp = Integer.parseInt(value);
      if (temp > 0) {
        returnValue = temp;
      }
    } catch (NumberFormatException e) {}
  }    	
  return returnValue;
}

The isNullOrEmpty check is arguably pointless here. Any invalid input string, including null or empty ones, would cause parseInt to throw a NumberFormatException, which we’re already catching. Of course, we’re catching and ignoring it.

That’s assuming that StringUtil.isNullOrEmpty does what we think it does, since while there are third party Java libraries that offer that functionality, it’s not a built-in class (and do we really think the culprit here was using libraries?). Who knows what it actually does.

And another highlight: note the if (temp > 0) check. This is a problem. The downstream code handles negative numbers, and -1 is a perfectly reasonable value, which means that when this method takes -10 and returns -1, what it has actually done is pass incorrect but valid-looking data back up the chain. And since any errors were swallowed, no one knows whether this was intentional or not.

This method wasn’t called in any context relating to KEY_MSG_INPUT, but it was called everywhere, as it’s one of those utility methods that finds new uses any time someone wants to convert a string into a number. Due to its use in pretty much every module, fixing this is considered a "high risk" change, and has been scheduled for sometime in the 2020s.
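Should a fix ever get scheduled, its underlying shape is language-agnostic: return something that distinguishes "no value" from every valid integer, rather than overloading -1. Here is a minimal sketch of that idea, in Python for brevity (parse_int is a hypothetical name; the real fix would be in Java, perhaps returning an OptionalInt):

from typing import Optional

def parse_int(value) -> Optional[int]:
    """Parse a string to int, or return None when it isn't a number."""
    try:
        return int(value)
    except (TypeError, ValueError):
        # None signals "no value", so every int -- including -1 -- remains
        # a valid, unambiguous result for callers.
        return None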


Krebs on Security‘Satori’ IoT Botnet Operator Pleads Guilty

A 21-year-old man from Vancouver, Wash. has pleaded guilty to federal hacking charges tied to his role in operating the “Satori” botnet, a crime machine powered by hacked Internet of Things (IoT) devices that was built to conduct massive denial-of-service attacks targeting Internet service providers, online gaming platforms and Web hosting companies.

Kenneth “Nexus-Zeta” Schuchman, in an undated photo.

Kenneth Currin Schuchman pleaded guilty to one count of aiding and abetting computer intrusions. Between July 2017 and October 2018, Schuchman was part of a conspiracy with at least two other unnamed individuals to develop and use Satori in large scale online attacks designed to flood their targets with so much junk Internet traffic that the targets became unreachable by legitimate visitors.

According to his plea agreement, Schuchman — who went by the online aliases “Nexus” and “Nexus-Zeta” — worked with at least two other individuals to build and use the Satori botnet, which harnessed the collective bandwidth of approximately 100,000 hacked IoT devices by exploiting vulnerabilities in various wireless routers, digital video recorders, Internet-connected security cameras, and fiber-optic networking devices.

Satori was originally based on the leaked source code for Mirai, a powerful IoT botnet that first appeared in the summer of 2016 and was responsible for some of the largest denial-of-service attacks ever recorded (including a 620 Gbps attack that took KrebsOnSecurity offline for almost four days).

Throughout 2017 and into 2018, Schuchman worked with his co-conspirators — who used the nicknames “Vamp” and “Drake” — to further develop Satori by identifying and exploiting additional security flaws in other IoT systems.

Schuchman and his accomplices gave their IoT botnets new monikers with almost every improvement, rechristening their creations with names including “Okiru” and “Masuta,” and infecting up to 700,000 compromised systems.

The plea agreement states that the object of the conspiracy was to sell access to their botnets to those who wished to rent them for launching attacks against others, although it’s not clear to what extent Schuchman and his alleged co-conspirators succeeded in this regard.

Even after he was indicted in connection with his activities in August 2018, Schuchman created a new botnet variant while on supervised release. At the time, Schuchman and Drake had something of a falling out, and Schuchman later acknowledged using information gleaned by prosecutors to identify Drake’s home address for the purposes of “swatting” him.

Swatting involves making false reports of a potentially violent incident — usually a phony hostage situation, bomb threat or murder — to prompt a heavily-armed police response to the target’s location. According to his plea agreement, the swatting that Schuchman set in motion in October 2018 resulted in “a substantial law enforcement response at Drake’s residence.”

As noted in a September 2018 story, Schuchman was not exactly skilled in the art of obscuring his real identity online. For one thing, the domain name used as a control server to synchronize the activities of the Satori botnet was registered to the email address nexuczeta1337@gmail.com. That domain name was originally registered to a “ZetaSec Inc.” and to a “Kenny Schuchman” in Vancouver, Wash.

People who operate IoT-based botnets maintain and build up their pool of infected IoT systems by constantly scanning the Internet for other vulnerable systems. Schuchman’s plea agreement states that when he received abuse complaints related to his scanning activities, he responded in his father’s identity.

“Schuchman frequently used identification devices belonging to his father to further the criminal scheme,” the plea agreement explains.

While Schuchman may be the first person to plead guilty in connection with Satori and its progeny, he appears to be hardly the most culpable. Multiple sources tell KrebsOnSecurity that Schuchman’s co-conspirator Vamp is a U.K. resident who was principally responsible for coding the Satori botnet, and as a minor was involved in the 2015 hack against U.K. phone and broadband provider TalkTalk.

Multiple sources also say Vamp was principally responsible for the 2016 massive denial-of-service attack that swamped Dyn — a company that provides core Internet services for a host of big-name Web sites. On October 21, 2016, an attack by a Mirai-based IoT botnet variant overwhelmed Dyn’s infrastructure, causing outages at a number of top Internet destinations, including Twitter, Spotify, Reddit and others.

The investigation into Schuchman and his alleged co-conspirators is being run out of the FBI field office in Alaska, spearheaded by some of the same agents who helped track down and ultimately secure guilty pleas from the original co-authors of the Mirai botnet.

It remains to be seen what kind of punishment a federal judge will hand down for Schuchman, who reportedly has been diagnosed with Asperger Syndrome and autism. The maximum penalty for the single criminal count to which he’s pleaded guilty is 10 years in prison and fines of up to $250,000.

However, it seems likely his sentencing will fall well short of that maximum: Schuchman’s plea deal states that he agreed to a recommended sentence “at the low end of the guideline range as calculated and adopted by the court.”

Cory DoctorowPodcast: Barlow’s Legacy

Even though I’m at Burning Man, I’ve snuck out an extra scheduled podcast episode (MP3): Barlow’s Legacy is my contribution to the Duke Law and Tech Review’s special edition, THE PAST AND FUTURE OF THE INTERNET: Symposium for John Perry Barlow:

“Who controls the past controls the future; who controls the present controls the past.”1

And now we are come to the great techlash, long overdue and desperately needed. With the techlash comes the political contest to assemble the narrative of What Just Happened and How We Got Here, because “Who controls the past controls the future. Who controls the present controls the past.” Barlow is a key figure in that narrative, and so defining his legacy is key to the project of seizing the future.

As we contest over that legacy, I will here set out my view on it. It’s an insider’s view: I met Barlow first through his writing, and then as a teenager on The WELL, and then at a dinner in London with Electronic Frontier Foundation (EFF) attorney Cindy Cohn (now the executive director of EFF), and then I worked with him, on and off, for more than a decade, through my work with EFF. He lectured to my students at USC, and wrote the introduction to one of my essay collections, and hung out with me at Burning Man, and we spoke on so many bills together, and I wrote him into one of my novels as a character, an act that he blessed. I emceed events where he spoke and sat with him in his hospital room as he lay dying. I make no claim to being Barlow’s best or closest friend, but I count myself mightily privileged to have been a friend, a colleague, and a protege of his.

There is a story today about “cyber-utopians” told as part of the techlash: Once, there were people who believed that the internet would automatically be a force for good. They told us all to connect to one another and fended off anyone who sought to rein in the power of the technology industry, naively ushering in an era of mass surveillance, monopolism, manipulation, even genocide. These people may have been well-intentioned, but they were smart enough that they should have known better, and if they hadn’t been so unforgivably naive (and, possibly, secretly in the pay of the future monopolists) we might not be in such dire shape today.

MP3


Planet DebianCandy Tsai: Beyond Outreachy: Final Interviews and Internship Review

The last few weeks (week 11 – week 13) of Outreachy were probably the hardest weeks. I had to do 3 informational interviews with the goal of getting a better picture of the open source/free software industry.

The thought of talking to someone I don’t even know just overwhelms me, so this assignment left me scared to death. Pressing that “Send Email” button to these interviewees required me to summon up all of my courage, but it was totally worth it. I really appreciate them taking the time to chat with me.

On the other hand, it’s hard to believe the internship is coming to an end! The good news is that I will be sticking around Debci after this.

Informational Interviews

The theme for week 11 was “Making connections”, so I had to reach out to 3 people beyond my network for an informational interview. I’d rather just call it an informational chat so it doesn’t sound too scary. My goal was to get a better sense of how companies involved with open source survive and how others work remotely. My criteria for the interviewees were really simple, though people who matched them were not so easy to find:

  • Lives in Taiwan
  • Works remotely
  • Their company is dedicated to open source/free software

At last I was really lucky to have them for my final assignment:

  • Andrew Lee: also part of the Debian community, has been working on open source for more than 20 years in Taiwan, works at Collabora, an open source consulting company
  • James Tien: works at Automattic, a company known for working on WordPress, link to his blog here, it’s in Chinese
  • Gordon Tai: works at Ververica, a company known for working on Apache Flink

A big thanks to them, and to terceiro who guided me through this. During my search, it was hard to find someone working for a local company here in Taiwan who fulfilled my criteria.

I have organized and summarized what I learned below:

Staying in Open Source

  • Passion is needed for coding and open source; you have to really enjoy it to stay for the long run
  • Opportunities come unexpectedly; you never know when or how they will come to you
  • Write “code”

Remote work

  • People can still sense your ups and downs through your chat messages and facial expressions in video calls
  • Communication is much more important than the actual code itself; sometimes you spend more time talking things through than writing code
  • You can use a Pomodoro timer to help you focus, or try working different hours
  • Try working in different environments: a café, under a tree, in the forest, beside the ocean, etc.
  • Exercise, exercise, exercise!

The points above are quite general, but it was the stories and experiences behind them that made these conversations special. That part is for you to discover by doing your own informational interviews!

Internship Review

Last but not least, here’s a wrap-up of my internship in Q&A format. I hope this helps anyone who wants to participate in future rounds get a better picture of what an Outreachy internship with Debian’s Debci is like.

What communication skills have you learned during the internship?

Asking questions and leaving comments. Since I am not a user of Debci, I started with absolutely zero knowledge. I even had to write a blog post to clarify what all the terminology meant, so that I could come back to it if I forgot in the future. I asked lots of questions, and luckily my mentors were really patient. As we only had a video chat once per week, we discussed things mostly through comments on the merge request or issue. Sometimes I find it hard to convey my thoughts with just words (or images), so this was really good practice.

What technical skills have you learned during the internship?

I only started writing Ruby because of this internship. I also wrote my first Vagrantfile. In general, I think getting familiar with the code base was the best part.

How did your Outreachy mentor help you along the way?

My mentor reviewed my code thoroughly and guided me through the whole internship. We did pair programming sessions, and those were really helpful.

What was one amazing thing that happened during the internship?

The informational interviews were pretty horrifying and at the same time amazing. It had never really occurred to me that people would take the time to talk to someone they don’t know. I am really grateful for their time. Their personal stories were really inspiring and motivating too.

How did Outreachy help you feel more confident in making open source and free software contributions?

In my opinion, Outreachy’s initial contribution phase is really important. It kind of forces candidates to at least reach out and take the first step. Even if you don’t get accepted in the end, you still went from 0 to 1. That is when you find out that the community is actually pretty welcoming to newcomers. So for me, it wasn’t so much about becoming more confident as about becoming less scared.

What parts of your project did you complete?

I added a self-service section where people can request their own tests through the Debci UI without fumbling with curl commands. I also added a Vagrantfile so that future newcomers can set up the project more easily. I hope it works for them, because I’ve only tested it on my own computer. We’ll see.

What are the next steps for you to complete the project?

I’m sticking around, at least until I finish the parts that I started, because I think it was fun, and people have actually made some requests related to this work. It’s always exciting to see that what you are building is wanted by the users.

I really appreciate the opportunity that Outreachy offers to interns! If you have read this far, you are probably interested in Outreachy. Please do apply if you are interested, or recommend it to others!

Cory DoctorowThey told us DRM would give us more for less, but they lied

My latest Locus Magazine column is DRM Broke Its Promise, which recalls the days when digital rights management was pitched to us as a way to enable exciting new markets where we’d all save big by only buying the rights we needed (like the low-cost right to read a book for an hour-long plane ride), but instead (unsurprisingly) everything got more expensive and less capable.

For 40 years, University of Chicago-style market orthodoxy has promised widespread prosperity as a natural consequence of turning everything into unfettered, unregulated, monopolistic businesses. For 40 years, everyone except the paymasters who bankrolled the University of Chicago’s priesthood have gotten poorer.

Today, DRM stands as a perfect example of everything terrible about monopolies, surveillance, and shareholder capitalism.

The established religion of markets once told us that we must abandon the idea of owning things, that this was an old fashioned idea from the world of grubby atoms. In the futuristic digital realm, no one would own things, we would only license them, and thus be relieved of the terrible burden of ownership.

They were telling the truth. We don’t own things anymore. This summer, Microsoft shut down its ebook store, and in so doing, deactivated its DRM servers, rendering every book the company had sold inert, unreadable. To make up for this, Microsoft sent refunds to the customers it could find, but obviously this is a poor replacement for the books themselves. When I was a bookseller in Toronto, nothing that happened would ever result in me breaking into your house to take back the books I’d sold you, and if I did, the fact that I left you a refund wouldn’t have made up for the theft. Not all the books Microsoft is confiscating are even for sale any longer, and some of the people whose books they’re stealing made extensive annotations that will go up in smoke.

What’s more, this isn’t even the first time an electronic bookseller has done this. Walmart announced that it was shutting off its DRM ebooks in 2008 (but stopped after a threat from the FTC). It’s not even the first time Microsoft has done this: in 2004, Microsoft created a line of music players tied to its music store that it called (I’m not making this up) “Plays for Sure.” In 2008, it shut the DRM servers down, and the Plays for Sure titles its customers had bought became Never Plays Ever Again titles.

We gave up on owning things – property now being the exclusive purview of transhuman immortal colony organisms called corporations – and we were promised flexibility and bargains. We got price-gouging and brittleness.

DRM Broke Its Promise [Locus/Cory Doctorow]

(Image: Cryteria, CC-BY, modified)


Krebs on SecuritySpam In your Calendar? Here’s What to Do.

Many spam trends are cyclical: Spammers tend to switch tactics when one method of hijacking your time and attention stops working. But periodically they circle back to old tricks, and few spam trends are as perennial as calendar spam, in which invitations to click on dodgy links show up unbidden in your digital calendar application from Apple, Google and Microsoft. Here’s a brief primer on what you can do about it.

Image: Reddit

Over the past few weeks, a good number of readers have written in to say they feared their calendar app or email account was hacked after noticing a spammy event had been added to their calendars.

The truth is, all that a spammer needs to add an unwelcome appointment to your calendar is the email address tied to your calendar account. That’s because the calendar applications from Apple, Google and Microsoft are set by default to accept calendar invites from anyone.

Calendar invites from spammers run the gamut from ads for porn or pharmacy sites, to claims of an unexpected financial windfall or “free” items of value, to outright phishing attacks and malware lures. The important thing is that you don’t click on any links embedded in these appointments. And resist the temptation to respond to such invitations by selecting “yes,” “no,” or “maybe,” as doing so may only serve to guarantee you more calendar spam.

Fortunately, there are a few simple steps you can take that should help minimize this nuisance. To stop events from being automatically added to your Google calendar:

-Open the Calendar application, and click the gear icon to get to the Calendar Settings page.
-Under “Event Settings,” change the default setting to “No, only show invitations to which I have responded.”

To prevent events from automatically being added to your Microsoft Outlook calendar, click the gear icon in the upper right corner of Outlook to open the settings menu, and then scroll down and select “View all Outlook settings.” From there:

-Click “Calendar,” then “Events from email.”
-Change the default setting for each type of reservation to “Only show event summaries in email.”

For Apple calendar users, log in to your iCloud.com account, and select Calendar.

-Click the gear icon in the lower left corner of the Calendar application, and select “Preferences.”
-Click the “Advanced” tab at the top of the box that appears.
-Change the default setting to “Email to [your email here].”

Making these changes will mean that any events your email provider previously added to your calendar automatically by scanning your inbox for certain types of messages from common events — such as making hotel, dining, plane or train reservations, or paying recurring bills — may no longer be added for you. Spammy calendar invitations may still show up via email; in the event they do, make sure to mark the missives as spam.

Have you experienced a spike in calendar spam of late? Or maybe you have another suggestion for blocking it? If so, sound off in the comments below.

Planet DebianNorbert Preining: Debian Activities of the last few months

I haven’t written about specific Debian activities in recent times, but I haven’t been lazy. In fact I have been very active with a lot of new packages I am contributing to.

TeX and Friends

Lots of updates since we first released TeX Live 2019 for Debian, too many to actually mention. We also have bumped the binary package with backports of fixes for dvipdfmx and other programs. Another item that is still pending is the separation of dvisvgm into a separate package (currently in the NEW queue). Biber has been updated to match the version of biblatex shipped in the TeX Live packages.

Calibre

Calibre development is continuing as usual, with lots of activity around getting Calibre ready for Python3. To prepare for this move, I have taken over the Python mechanize package, which had not been updated for many years. It is already possible to build a Calibre package for Python3, but unfortunately practically all external plugins are still based on Python2 and thus fail with Python3. As a consequence I will keep Calibre at the Python2 version for the time being, and hope that Calibre officially switches to Python3, which would trigger a conversion of the plugins too, before Bullseye (the next Debian release) is released with its aim of getting rid of Python2.

Cinnamon

The packages of Cinnamon 4.0 I have prepared together with the Cinnamon Team have been uploaded to sid, and I have uploaded packages of Cinnamon 4.2 to experimental. We plan to move the 4.2 packages to sid after the 4.0 packages have entered testing.

Onedrive

Onedrive didn’t make it into the buster release, in particular because the release masters weren’t happy with an upgrade request I made to get a new version (scheduled to enter testing 1 day after the freeze day!) with loads of fixes into buster. So I decided to remove onedrive from buster altogether: better nothing than something broken. It is a bit of a pain for me, but users are advised to get the source code from GitHub and install a self-compiled version; this is definitely safer.


All in all quite a lot of work. Enjoy.

Worse Than FailureCodeSOD: Boxing with the InTern

A few years ago, Naomi did an internship with Initech. Before her first day, she was very clear on what her responsibilities would be: she'd be on a team modernizing an older product called "Gem" (no relation to Ruby libraries).

By the time her first day rolled around, however, Initech had new priorities. A collection of fires had broken out in some hyperspecific internal enterprise tool, and everyone was running around and screaming about the apocalypse while dealing with that. Everyone except Naomi, because nobody had any time to bring the intern up to speed on this disaster. Instead, she was given a new priority: just maintain Gem. And no, she wouldn't have a mentor. For the next six months, Naomi was the Gem support team.

"Start by looking at the code quality metrics," was the advice she was given.

It was bad advice. First, while Initech had installed an automated code review tool in their source control system, they weren't using the tool. It had started crashing instead of outputting a report six years ago. Nobody had noticed, or perhaps nobody had cared. Or maybe they just didn't like getting bad news, because once Naomi had the tool running again, the report was full of bad news.

A huge mass of the code consisted of reimplemented copies of the standard library, "tuned for performance", which meant that instead of a sensible implementation it was a pile of 4,000-line functions wrapped around massive switch statements. The linter didn't catch that they were parsing XML with regular expressions, but Naomi spotted that and wisely decided not to touch that bit.

What she did find, and fix, was this pattern:

private Boolean isSided;
// dozens more properties

public GemGeometryEntryPoint(GemGeometryEntryPoint gemGeometryEntryPoint) {
    this.isSided = gemGeometryEntryPoint.isSided == null
        ? null
        : new Boolean(gemGeometryEntryPoint.isSided);
    // and so on, for those dozens of properties
}

Java has two boolean types: the Boolean reference type and the boolean primitive type. A boolean is not a full-fledged object, and thus is smaller in memory and faster to allocate. A Boolean is a full class implementation, with all the overhead that entails. A Java developer will generally need both, since if you want a list of boolean values, you need to "box" the primitives into Boolean objects.

I say generally need both, because Naomi's predecessors decided that worrying about boxing was complicated, so they only used the reference types. There wasn't a boolean or an int to be found, just Booleans and Integers. Maybe they just thought "primitive" meant "legacy"?

You can't unbox a null: handing a null Boolean to new Boolean(…) triggers an automatic unboxing that throws a NullPointerException. Thus, the ternary check in the code above. At no point did anyone think that "hey, we're doing a null check on pretty much every variable access" meant that there was something wrong in the code.

The bright side to this whole thing was that the unit tests were exemplary. A few hours with sed meant that Naomi was able to switch most everything to primitive types, confirm that she hadn't introduced any regressions in the process, and even demonstrate that using primitives greatly improved performance, as it cut down on heap memory allocations. The downside was that replacing all those ternaries with lines like this.isSided = gemGeometryEntryPoint.isSided didn't look nearly as impressive.

Of course, changing that many lines of code in a single commit triggered some alarms, which precipitated a mini-crisis, as no one knew what to do when the intern submitted a 15,000-line commit.

Naomi adds: "Maybe null was supposed to represent FILE_NOT_FOUND?"



Planet DebianJunichi Uekawa: I have an issue remembering where I took notes.

I have an issue remembering where I took notes. In the past it was all in emacs. Now it's somewhere in one of the web services.

Planet DebianSean Whitton: Debian Policy call for participation -- September 2019

There hasn’t been much activity lately, but no shortage of interesting and hopefully-accessible Debian Policy work. Do write to debian-policy@lists.debian.org if you’d like to participate but are struggling to figure out how.

Consensus has been reached and help is needed to write a patch:

#425523 Describe error unwind when unpacking a package fails

#452393 Clarify difference between required and important priorities

#582109 document triggers where appropriate

#592610 Clarify when Conflicts + Replaces et al are appropriate

#682347 mark ‘editor’ virtual package name as obsolete

#685506 copyright-format: new Files-Excluded field

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#770440 policy should mention systemd timers

#823256 Update maintscript arguments with dpkg >= 1.18.5

#905453 Policy does not include a section on NEWS.Debian files

#907051 Say much more about vendoring of libraries

Wording proposed, awaiting review from anyone and/or seconds by DDs:

#786470 [copyright-format] Add an optional “License-Grant” field

#919507 Policy contains no mention of /var/run/reboot-required

#920692 Packages must not install files or directories into /var/cache

#922654 Section 9.1.2 points to a wrong FHS section?

Krebs on SecurityFeds Allege Adconion Employees Hijacked IP Addresses for Spamming

Federal prosecutors in California have filed criminal charges against four employees of Adconion Direct, an email advertising firm, alleging they unlawfully hijacked vast swaths of Internet addresses and used them in large-scale spam campaigns. KrebsOnSecurity has learned that the charges are likely just the opening salvo in a much larger, ongoing federal investigation into the company’s commercial email practices.

Prior to its acquisition, Adconion offered digital advertising solutions to some of the world’s biggest companies, including Adidas, AT&T, Fidelity, Honda, Kohl’s and T-Mobile. Amobee, the Redwood City, Calif. online ad firm that acquired Adconion in 2014, bills itself as the world’s leading independent advertising platform. The CEO of Amobee is Kim Perell, formerly CEO of Adconion.

In October 2018, prosecutors in the Southern District of California named four Adconion employees — Jacob Bychak, Mark Manoogian, Petr Pacas, and Mohammed Abdul Qayyum — in a ten-count indictment on charges of conspiracy, wire fraud, and electronic mail fraud. All four men have pleaded not guilty to the charges, which stem from a grand jury indictment handed down in June 2017.

‘COMPANY A’

The indictment and other court filings in this case refer to the employer of the four men only as “Company A.” However, LinkedIn profiles under the names of three of the accused show they each work(ed) for Adconion and/or Amobee.

Mark Manoogian is an attorney whose LinkedIn profile states that he is director of legal and business affairs at Amobee, and formerly was senior business development manager at Adconion Direct; Bychak is listed as director of operations at Adconion Direct; Qayyum’s LinkedIn page lists him as manager of technical operations at Adconion. A statement of facts filed by the government indicates Petr Pacas was at one point director of operations at Company A (Adconion).

According to the indictment, between December 2010 and September 2014 the defendants engaged in a conspiracy to identify or pay to identify blocks of Internet Protocol (IP) addresses that were registered to others but which were otherwise inactive.

The government alleges the men sent forged letters to an Internet hosting firm claiming they had been authorized by the registrants of the inactive IP addresses to use that space for their own purposes.

“Members of the conspiracy would use the fraudulently acquired IP addresses to send commercial email (‘spam’) messages,” the government charged.

HOSTING IN THE WIND

Prosecutors say the accused were able to spam from the purloined IP address blocks after tricking the owner of Hostwinds, an Oklahoma-based Internet hosting firm, into routing the fraudulently obtained IP addresses on their behalf.

Hostwinds owner Peter Holden was the subject of a 2015 KrebsOnSecurity story titled, “Like Cutting Off a Limb to Save the Body,” which described how he’d initially built a lucrative business catering mainly to spammers, only to later have a change of heart and aggressively work to keep spammers off of his network.

That a case of such potential import for the digital marketing industry has escaped any media attention for so long is unusual but not surprising given what’s at stake for the companies involved and for the government’s ongoing investigations.

Adconion’s parent Amobee manages ad campaigns for some of the world’s top brands, and has every reason not to call attention to charges that some of its key employees may have been involved in criminal activity.

Meanwhile, prosecutors are busy following up on evidence supplied by several cooperating witnesses in this and a related grand jury investigation, including a confidential informant who received information from an Adconion employee about the company’s internal operations.

THE BIGGER PICTURE

According to a memo jointly filed by the defendants, “this case spun off from a larger ongoing investigation into the commercial email practices of Company A.” Ironically, this memo appears to be the only one of several dozen documents related to the indictment that mentions Adconion by name (albeit only in a series of footnote references).

Prosecutors allege the four men bought hijacked IP address blocks from another man tied to this case who was charged separately. This individual, Daniel Dye, has a history of working with others to hijack IP addresses for use by spammers.

For many years, Dye was a system administrator for Optinrealbig, a Colorado company that relentlessly pimped all manner of junk email, from mortgage leads and adult-related services to counterfeit products and Viagra.

Optinrealbig’s CEO was the spam king Scott Richter, who later changed the name of the company to Media Breakaway after being successfully sued for spamming by AOL, Microsoft, MySpace, and the New York Attorney General’s Office, among others. In 2008, this author penned a column for The Washington Post detailing how Media Breakaway had hijacked tens of thousands of IP addresses from a defunct San Francisco company for use in its spamming operations.

Dye has been charged with violations of the CAN-SPAM Act. A review of the documents in his case suggest Dye accepted a guilty plea agreement in connection with the IP address thefts and is cooperating with the government’s ongoing investigation into Adconion’s email marketing practices, although the plea agreement itself remains under seal.

Lawyers for the four defendants in this case have asserted in court filings that the government’s confidential informant is an employee of Spamhaus.org, an organization that many Internet service providers around the world rely upon to help identify and block sources of malware and spam.

Interestingly, in 2014 Spamhaus was sued by Blackstar Media LLC, a bulk email marketing company and subsidiary of Adconion. Blackstar’s owners sued Spamhaus for defamation after Spamhaus included them at the top of its list of the Top 10 world’s worst spammers. Blackstar later dropped the lawsuit and agreed to pay Spamhaus’ legal costs.

Representatives for Spamhaus declined to comment for this story. Responding to questions about the indictment of Adconion employees, Amobee’s parent company SingTel referred comments to Amobee, which issued a brief statement saying, “Amobee has fully cooperated with the government’s investigation of this 2017 matter which pertains to alleged activities that occurred years prior to Amobee’s acquisition of the company.”

ONE OF THE LARGEST SPAMMERS IN HISTORY?

It appears the government has been investigating Adconion’s email practices since at least 2015, and possibly as early as 2013. The very first result in an online search for the words “Adconion” and “spam” returns a Microsoft Powerpoint document that was presented alongside this talk at an ARIN meeting in October 2016. ARIN stands for the American Registry for Internet Numbers, and it handles IP addresses allocations for entities in the United States, Canada and parts of the Caribbean.

As the screenshot above shows, that Powerpoint deck was originally named “Adconion – Arin,” but the file has since been renamed. That is, unless one downloads the file and looks at the metadata attached to it, which shows the original filename and that it was created in 2015 by someone at the U.S. Department of Justice.

Slide #8 in that Powerpoint document references a case example of an unnamed company (again, “Company A”), which the presenter said was “alleged to be one of the largest spammers in history,” that had hijacked “hundreds of thousands of IP addresses.”

A slide from an ARIN presentation in 2016 that referenced Adconion.

There are fewer than four billion IPv4 addresses available for use, but the vast majority of them have already been allocated. In recent years, this global shortage has turned IP addresses into a commodity wherein each IP can fetch between $15 and $25 on the open market.

The dearth of available IP addresses has created boom times for those engaged in the acquisition and sale of IP address blocks. It also has emboldened scammers and spammers who specialize in absconding with and spamming from dormant IP address blocks without permission from the rightful owners.

In May, KrebsOnSecurity broke the news that Amir Golestan — the owner of a prominent Charleston, S.C. tech company called Micfo LLC — had been indicted on criminal charges of fraudulently obtaining more than 735,000 IP addresses from ARIN and reselling the space to others.

KrebsOnSecurity has since learned that for several years prior to 2014, Adconion was one of Golestan’s biggest clients. More on that in an upcoming story.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.9.700.2.0


A new RcppArmadillo release based on a new Armadillo upstream release arrived on CRAN, and will get to Debian shortly. It brings continued improvements for sparse matrices and a few other things; see below for more details. I also appear to have skipped blogging about the preceding 0.9.600.4.0 release (which was actually extra-rigorous with an unprecedented number of reverse-depends runs) so I included its changes (with very nice sparse matrix improvements) as well.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language, and is widely used by (currently) 656 other packages on CRAN.

Changes in RcppArmadillo version 0.9.700.2.0 (2019-09-01)

  • Upgraded to Armadillo release 9.700.2 (Gangster Democracy)

    • faster handling of cubes by vectorise()

    • faster handling of sparse matrices by nonzeros()

    • faster row-wise index_min() and index_max()

    • expanded join_rows() and join_cols() to handle joining up to 4 matrices

    • expanded .save() and .load() to allow storing sparse matrices in CSV format

    • added randperm() to generate a vector with random permutation of a sequence of integers

  • Expanded the list of known good gcc and clang versions in configure.ac

Changes in RcppArmadillo version 0.9.600.4.0 (2019-07-14)

  • Upgraded to Armadillo release 9.600.4 (Napa Invasion)

    • faster handling of sparse submatrices

    • faster handling of sparse diagonal views

    • faster handling of sparse matrices by symmatu() and symmatl()

    • faster handling of sparse matrices by join_cols()

    • expanded clamp() to handle sparse matrices

    • added .clean() to replace elements below a threshold with zeros

Courtesy of CRANberries, there is a diffstat report relative to the previous release. More detailed information is on the RcppArmadillo page. Questions, comments, etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Carter: Free Software Activities (2019-08)

Ah, springtime at last. Over the last month I caught up a bit with my Debian packaging work after the Buster freeze, release, and subsequent DebConf. There is still a bit to catch up on (mostly kpmcore and partitionmanager, which are waiting on new kdelibs, and a few bugs). Other than that I made two new videos, and I’m busy with renovations at home this week, so my home office is packed up and in the garage. I’m hoping that it will be done towards the end of next week; until then I’ll have little screen time for anything that’s not work work.

2019-08-01: Review package hipercontracer (1.4.4-1) (mentors.debian.net request) (needs some work).

2019-08-01: Upload package bundlewrap (3.6.2-1) to debian unstable.

2019-08-01: Upload package gnome-shell-extension-dash-to-panel (20-1) to debian unstable.

2019-08-01: Accept MR!2 for gamemode, for new upstream version (1.4-1).

2019-08-02: Upload package gnome-shell-extension-workspaces-to-dock (51-1) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-hide-activities (0.00~git20131024.1.6574986-2) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-trash (0.2.0-git20161122.ad29112-2) to debian unstable.

2019-08-04: Upload package toot (0.22.0-1) to debian unstable.

2019-08-05: Upload package gamemode (gamemode-1.4.1+git20190722.4ecac89-1) to debian unstable.

2019-08-05: Upload package calamares-settings-debian (10.0.24-2) to debian unstable.

2019-08-05: Upload package python3-flask-restful (0.3.7-3) to debian unstable.

2019-08-05: Upload package python3-aniso8601 (7.0.0-2) to debian unstable.

2019-08-06: Upload package gamemode (1.5~git20190722.4ecac89-1) to debian unstable.

2019-08-06: Sponsor package assaultcube (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-06: Sponsor package assaultcube-data (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-07: Request more info on Debian bug #825185 (“Please which tasks should be installed at a default installation of the blend”).

2019-08-07: Close debian bug #689022 in desktop-base (“lxde: Debian wallpaper distorted on 4:3 monitor”).

2019-08-07: Close debian bug #680583 in desktop-base (“please demote librsvg2-common to Recommends”).

2019-08-07: Comment on debian bug #931875 in gnome-shell-extension-multi-monitors (“Error loading extension”) to temporarily avoid autorm.

2019-08-07: File bug (multimedia-devel)

2019-08-07: Upload package python3-grapefruit (0.1~a3+dfsg-7) to debian unstable (Closes: #926414).

2019-08-07: Comment on debian bug #933997 in gamemode (“gamemode isn’t automatically activated for rise of the tomb raider”).

2019-08-07: Sponsor package assaultcube-data (1.2.0.2.1-2) for debian unstable (e-mail request).

2019-08-08: Upload package calamares (3.2.12-1) to debian unstable.

2019-08-08: Close debian bug #32673 in aalib (“open /dev/vcsa* write-only”).

2019-08-08: Upload package tanglet (1.5.4-1) to debian unstable.

2019-08-08: Upload package tmux-theme-jimeh (0+git20190430-1b1b809-1) to debian unstable (Closes: #933222).

2019-08-08: Close debian bug #927219 (“amdgpu graphics fail to be configured”).

2019-08-08: Close debian bugs #861065 and #861067 (For creating nextstep task and live media).

2019-08-10: Sponsor package scons (3.1.1-1) for debian unstable (mentors.debian.org request) (Closes RFS: #932817).

2019-08-10: Sponsor package fractgen (2.1.7-1) for debian unstable (mentors.debian.net request).

2019-08-10: Sponsor package bitwise (0.33-1) for debian unstable (mentors.debian.net request). (Closes RFS: #934022).

2019-08-10: Review package python-pyspike (0.6.0-1) (mentors.debian.net request) (needs some additional work).

2019-08-10: Upload package connectagram (1.2.10-1) to debian unstable.

2019-08-11: Review package bitwise (0.40-1) (mentors.debian.net request) (need some further work).

2019-08-11: Sponsor package sane-backends (1.0.28-1~experimental1) to debian experimental (mentors.debian.net request).

2019-08-11: Review package hcloud-python (1.4.0-1) (mentors.debian.net).

2019-08-13: Review package bitwise (0.40-1) (e-mail request) (needs some further work).

2019-08-15: Sponsor package bitwise (0.40-1) for debian unstable (email request).

2019-08-19: Upload package calamares-settings-debian (10.0.20-1+deb10u1) to debian buster (CVE-2019-13179).

2019-08-19: Upload package gnome-shell-extension-dash-to-panel (21-1) to debian unstable.

2019-08-19: Upload package flask-restful (0.3.7-4) to debian unstable.

2019-08-20: Upload package python3-grapefruit (0.1~a3+dfsg-8) to debian unstable (Closes: #934599).

2019-08-20: Sponsor package runescape (0.6-1) for debian unstable (mentors.debian.net request).

2019-08-20: Review package ukui-menu (1.1.12-1) (needs some more work) (mentors.debian.net request).

2019-08-20: File ITP #935178 for bcachefs-tools.

2019-08-21: Fix two typos in bcachefs-tools (Github bcachefs-tools PR: #20).

2019-08-25: Published Debian Package of the Day video #60: 5 Fonts (highvoltage.tv / YouTube).

2019-08-26: Upload new upstream release of speedtest-cli (2.1.2-1) to debian unstable (Closes: #934768).

2019-08-26: Upload new package gnome-shell-extension-draw-on-your-screen to NEW for debian unstable. (ITP: #925518)

2019-08-27: File upstream bug for btfs so that the python2 dependency can be dropped from the Debian package (BTFS: #53).

2019-08-28: Published Debian Package Management #4: Maintainer Scripts (highvoltage.tv / YouTube).

2019-08-28: File upstream feature request in Calamares unpackfs module to help speed up installations (Calamares: #1229).

2019-08-28: File upstream request at smlinux/rtl8723de driver for license clarification (RTL8723DE: #49).

Planet DebianMike Gabriel: My Work on Debian LTS/ELTS (August 2019)

In August 2019, I have worked on the Debian LTS project for 24 hours (of 24.75 hours planned) and on the Debian ELTS project for another 2 hours (of 12 hours planned) as a paid contributor.

LTS Work

  • Upload fusiondirectory 1.0.8.2-5+deb8u2 to jessie-security (1 CVE, DLA 1875-1 [1])
  • Upload gosa 2.7.4+reloaded2+deb8u4 to jessie-security (1 CVE, DLA 1876-1 [2])
  • Upload gosa 2.7.4+reloaded2+deb8u5 to jessie-security (1 CVE, DLA 1905-1 [3])
  • Upload libav 6:11.12-1~deb8u8 to jessie-security (5 CVEs, DLA 1907-1 [4])
  • Investigate CVE-2019-13627 (libgcrypt20). The upstream patch applies and the build succeeds, but some tests fail. More work required on this.
  • Triage 14 packages with my LTS frontdesk hat on during the last week of August
  • Do a second pair of eyes review on changes uploaded with dovecot 1:2.2.13-12~deb8u7
  • File a merge request against security-tracker [5], add --minor option to contact-maintainers script.

ELTS Work

  • Investigate CVE-2019-13627 (libgcrypt11). More work is needed to assess whether libgcrypt11 in wheezy is affected by this CVE.

References

Planet DebianJulien Danjou: Dependencies Handling in Python

Dependencies Handling in Python

Dependencies are a nightmare for many people. Some even argue they are technical debt. Managing the list of libraries your software depends on is a horrible experience. Updating them — automatically? — sounds like a delirium.

Stick with me here as I am going to help you get a better grasp on something that you cannot, in practice, get rid of — unless you're incredibly rich and talented and can live without the code of others.

First, we need to be clear about something regarding dependencies: there are two types of them. Donald Stufft wrote about the subject, years ago, better than I could. To make it simple, there are two types of code packages that depend on external code: applications and libraries.

Libraries Dependencies

Python libraries should specify their dependencies in a generic way. A library should not require exactly requests 2.1.5: that does not make sense. If every library out there needed a different version of requests, they could not be used at the same time.

Libraries need to declare dependencies based on ranges of version numbers. Requiring requests>=2 is correct. Requiring requests>=1,<2 is also correct if you know that requests 2.x does not work with the library. The problem that your version range specification is solving is the API compatibility issue between your code and your dependencies — nothing else. That's a good reason for libraries to use Semantic Versioning whenever possible.

Therefore, dependencies should be written in setup.py as something like:

from setuptools import setup

setup(
    name="MyLibrary",
    version="1.0",
    install_requires=[
        "requests",
    ],
    # ...
)

This way, it is easy for any application to use the library and co-exist with others.
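
If you know that only a particular major series of a dependency is compatible, the same install_requires field can carry a range instead of a bare name. Here is a minimal sketch (the package name is illustrative, not from the original article):

from setuptools import setup

setup(
    name="MyLibrary",
    version="1.0",
    # An abstract range, not a pin: any requests 2.x release satisfies it.
    install_requires=[
        "requests>=2,<3",
    ],
)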

Applications Dependencies

An application is just a particular case of a library. Applications are not intended to be reused (imported) by other libraries or applications — though nothing would prevent it in practice.

In the end, that means that you should specify the dependencies the same way that you would do for a library in the application's setup.py.

The main difference is that an application is usually deployed in production to provide its service. Deployments need to be reproducible. For that, you can't solely rely on setup.py: the requested ranges of the dependencies are too broad. You're at the mercy of random version changes at any time when re-deploying your application.

You, therefore, need a different version management mechanism to handle deployment than just setup.py.

pipenv has an excellent section recapping this in its documentation. It splits dependency types into abstract and concrete dependencies: abstract dependencies are based on ranges (e.g., libraries) whereas concrete dependencies are specified with precise versions (e.g., application deployments) — as we've just seen here.
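
The distinction is easy to see programmatically. Here is a minimal sketch using the packaging library, which pip itself builds on (the version numbers are illustrative):

from packaging.requirements import Requirement

abstract = Requirement("requests>=2,<3")    # a range: fit for a library
concrete = Requirement("requests==2.22.0")  # a pin: fit for a deployment

for version in ("2.22.0", "2.23.0"):
    # SpecifierSet supports membership tests against version strings.
    print(version,
          "satisfies the range:", version in abstract.specifier,
          "| satisfies the pin:", version in concrete.specifier)

Running this shows that 2.23.0 still satisfies the range but not the pin — exactly the difference between a library's declaration and a deployment's.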

Handling Deployment

The requirements.txt file has been used to solve application deployment reproducibility for a long time now. Its format is usually something like:

requests==3.1.5
foobar==2.0

Each library is pinned down to the micro version. That makes sure each of your deployments installs the same version of every dependency. Using a requirements.txt is a simple solution and a first step toward reproducible deployments. However, it's not enough.

Indeed, while you can specify which version of requests you want, if requests depends on urllib3, pip might install urllib3 2.1 or urllib3 2.2. You can't know which one will be installed, so your deployment is not 100% reproducible.

Of course, you could duplicate all of requests' dependencies yourself in your requirements.txt, but that would be madness!

An application dependency tree can be quite deep and complex sometimes.

There are various hacks available to fix this limitation, but the real saviors here are pipenv and poetry. The way they solve it is similar to many package managers in other programming languages. They generate a lock file that contains the list of all installed dependencies (and their own dependencies, etc.) with their version numbers. That makes sure the deployment is 100% reproducible.
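
Whichever tool generates your pins, you can also sanity-check a deployed environment against them at start-up. Here is a minimal sketch using pkg_resources from setuptools; the file name and function name are my own, not something the tools provide:

import pkg_resources

def check_pins(path="requirements.txt"):
    """Report any installed package that drifted from its pinned version."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            # Skip blank lines and comments in the requirements file.
            if not line or line.startswith("#"):
                continue
            try:
                pkg_resources.require(line)
            except (pkg_resources.VersionConflict,
                    pkg_resources.DistributionNotFound) as exc:
                print("Deployment drift:", exc)

check_pins()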

Check out their documentation on how to set up and use them!

Handling Dependencies Updates

Now that you have a lock file that makes your deployment reproducible in a snap, you have another problem: how do you make sure that your dependencies are up to date? There is a real security concern here, but also bug fixes and optimizations that you might miss by staying behind.

If your project is hosted on GitHub, Dependabot is an excellent solution to this issue. Enabling this application on your repository automatically creates pull requests whenever a new version of a library listed in your lock file is available. For example, if you've deployed your application with redis 3.3.6, Dependabot will create a pull request updating to redis 3.3.7 as soon as it gets released. Furthermore, Dependabot supports requirements.txt, pipenv, and poetry!

Dependabot updating jinja2 for you

Automatic Deployment Update

You're almost there. You have a bot that is letting you know that a new version of a library your project needs is available.

Once the pull request is created, your continuous integration system kicks in, builds your project, and runs the tests. If everything works fine, the pull request is ready to be merged. But are you really needed in this process?

Unless you have a particular and personal aversion to specific version numbers —"Gosh, I hate versions that end with a 3. It's always bad luck."— or unless you have zero automated testing, you, the human, are useless. The merge can be fully automatic.

This is where Mergify comes into play. Mergify is a GitHub application that lets you define precise rules about how to merge your pull requests. Here's a rule that I use in every project:

pull_request_rules:
  - name: automatic merge from dependabot
    conditions:
      - author~=^dependabot(|-preview)\[bot\]$
      - label!=work-in-progress
      - "status-success=ci/circleci: pep8"
      - "status-success=ci/circleci: py37"
    actions:
      merge:
        method: merge

Mergify reports when the rule fully matches

As soon as your continuous integration system passes, Mergify merges the pull request for you.


You can then automatically trigger your deployment hooks to update your production deployment and get the new library version installed right away. This leaves your application always up-to-date with newer libraries and not lagging behind several years of releases.

If anything goes wrong, you're still able to revert the commit from Dependabot — which you can also automate if you wish with a Mergify rule.

Beyond

This is, to me, the current state of the art of the dependency management lifecycle. And while this applies exceptionally well to Python, it can be applied to many other languages that use a similar pattern — such as Node and npm.

Worse Than FailureClassic WTF: Hyperlink 2.0

It's Labor Day in the US, where we celebrate the workers of the world by having a barbecue. Speaking of work, in these days of web frameworks and miles of unnecessary JavaScript to do basic things on the web, let's look back at a simpler time, where we still used server-side code and miles of unnecessary JavaScript to do basic things on the web. Original. --Remy

For those of you who haven't upgraded to Web 2.0 yet, today's submission from Daniel is a perfect example of what you're missing out on. Since the beginning of the Web (the "1.0 days"), website owners have always wanted to know who was visiting their website, how often, and when. Back then, this was accomplished by recording each website "hit" in a log file and running a report on the log later.

But the problem with this method in Web 2.0 is that people don't use logs anymore; they use blogs, and everyone knows that blogs are a pretty stupid way of tracking web traffic. Fortunately, Daniel's colleagues developed an elegant, clever, and -- most importantly -- "AJAX" way of solving this problem. Instead of being coded in HTML pages, all hyperlinks are assigned a numeric identifier and kept in a database table. This identifier is then used on the HTML pages within an anchor tag:

<a href="Javascript: followLink(124);">View Products</a>

When the user clicks on the hyperlink, the followLink() Javascript function is executed and the following occur:

  • a translucent layer (DIV) is placed over the entire page, causing it to appear "grayed out", and ...
  • a "please wait" layer is placed on top of that, with an animated pendulum swinging back and forth, then ...
  • the XmlHttpRequest object is used to call the "GetHyperlink" web service which, in turn ...
  • opens a connection to the database server to ...
  • log the request in the RequestedHyperlinks table and ...
  • retrieves the URL from the Hyperlinks table, then ...
  • returns it to the client script, which then ...
  • sets the window.location property to the URL retrieved, which causes ...
  • the user to be redirected to the appropriate page

Now that's two-point-ohey.


Planet DebianRuss Allbery: rra-c-util 8.0

This is a roll-up of a lot of changes to my utility package for C (and, increasingly, for Perl). It's been more than a year since the last release, so it's long overdue.

Most of the changes in this release are to the Perl test libraries and accompanying tests. Test::RRA now must be imported before Test::More so that it can handle the absence of Test::More (such as on Red Hat systems with perl but not perl-core installed). The is_file_contents function in Test::RRA now handles Windows and other systems without a diff program. And there are more minor improvements to the various tests written in Perl.

The Autoconf probe RRA_LIB_KRB5_OPTIONAL now correctly handles the case where Kerberos libraries are not available but libcom_err is, rather than incorrectly believing that Kerberos libraries were present.

As of this release, rra-c-util now tests the Perl test programs that it includes, which requires it to build and test a dummy Perl module. This means the build system now requires Perl 5.6.2 and the Module::Build module.

You can get the latest version from the rra-c-util distribution page.

,

Planet DebianThorsten Alteholz: My Debian Activities in August 2019

FTP master

This month the numbers went up again and I accepted 389 packages and rejected 43. The overall number of packages that got accepted was 460.

Debian LTS

This was my sixty-second month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 21.75h. During that time I did LTS uploads of:

  • [DLA 1887-1] freetype security update for one CVE
  • [DLA 1889-1] python3.4 security update for one CVE
  • [DLA 1893-1] cups security update for two CVEs
  • [DLA 1895-1] libmspack security update for one CVE
  • [DLA 1894-1] libapache2-mod-auth-openidc security update for one CVE
  • [DLA 1897-1] tiff security update for one CVE
  • [DLA 1902-1] djvulibre security update for four CVEs
  • [DLA 1904-1] libextractor security update for one CVE
  • [DLA 1906-1] python2.7 security update for one CVE

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the fifteenth ELTS month.

During my allocated time I uploaded:

  • ELA-155-1 of cups
  • ELA-157-1 of djvulibre
  • ELA-158-1 of python2.7

I spent some time working on tiff3, only to find that the affected features are not yet available.

I also did some days of frontdesk duties.

Other stuff

This month I uploaded new packages of …

I also uploaded new upstream versions of …

I improved packaging of …

On my Go challenge I uploaded golang-github-gin-contrib-static, golang-github-gin-contrib-cors, golang-github-yourbasic-graph, golang-github-cnf-structhash, golang-github-deanthompson-ginpprof, golang-github-jarcoal-httpmock, golang-github-gin-contrib-gzip, golang-github-mcuadros-go-gin-prometheus, golang-github-abdullin-seq, golang-github-centurylinkcloud-clc-sdk, golang-github-ziutek-mymysql, golang-github-terra-farm-udnssdk, golang-github-ensighten-udnssdk, golang-github-sethvargo-go-fastly.

I again reuploaded some go packages (golang-github-go-xorm-core, golang-github-jarcoal-httpmock, golang-github-mcuadros-go-gin-prometheus, golang-github-deanthompson-ginpprof, golang-github-gin-contrib-cors, golang-github-gin-contrib-gzip, golang-github-gin-contrib-static, golang-github-cyberdelia-heroku-go, golang-github-corpix-uarand, golang-github-cnf-structhash, golang-github-rs-zerolog, golang-gopkg-ldap.v3, golang-github-yourbasic-graph, golang-github-ovh-go-ovh) that would not migrate due to having been binary uploads before.

I also sponsored the following packages: golang-github-jesseduffield-gocui, printrun, cura-engine, theme-d, theme-d-gnome.

The DOPOM package for this month was gengetopt.

LongNowThe Amazon is not the Earth’s Lungs

An aerial view of forest fire of the Amazon taken with a drone is seen from an Indigenous territory in the state of Mato Grosso, in Brazil, August 23, 2019, obtained by Reuters on August 25, 2019. Marizilda Cruppe/Amnesty International/Handout via REUTERS.

In the wake of the troubling reports about fires in Brazil’s Amazon rainforest, much misinformation spread across social media. On Facebook posts and news reports, the Amazon was described as being the “lungs of the Earth.” Peter Brannen, writing in The Atlantic, details why that isn’t the case—not to downplay the impact of the fires, but to educate audiences on how the various systems of our planet interact:

The Amazon is a vast, ineffable, vital, living wonder. It does not, however, supply the planet with 20 percent of its oxygen.

As the biochemist Nick Lane wrote in his 2003 book Oxygen, “Even the most foolhardy destruction of world forests could hardly dint our oxygen supply, though in other respects such short-sighted idiocy is an unspeakable tragedy.”

The Amazon produces about 6 percent of the oxygen currently being made by photosynthetic organisms alive on the planet today. But surprisingly, this is not where most of our oxygen comes from. In fact, from a broader Earth-system perspective, in which the biosphere not only creates but also consumes free oxygen, the Amazon’s contribution to our planet’s unusual abundance of the stuff is more or less zero. This is not a pedantic detail. Geology provides a strange picture of how the world works that helps illuminate just how bizarre and unprecedented the ongoing human experiment on the planet really is. Contrary to almost every popular account, Earth maintains an unusual surfeit of free oxygen—an incredibly reactive gas that does not want to be in the atmosphere—largely due not to living, breathing trees, but to the existence, underground, of fossil fuels.

Read Brannen’s piece in full here.

Planet DebianPetter Reinholdtsen: Norwegian movies that might be legal to share on the Internet

While working on identifying and counting movies that can be legally shared on the Internet, I also looked at the Norwegian movies listed in IMDb. So far I have identified 54 candidates published before 1940 that might no longer be protected by Norwegian copyright law. Of these, only 29 are available at least in part from the Norwegian National Library. It can be assumed that the remaining 25 movies are lost. It seems most useful to identify the copyright status of the movies that are not lost. To verify that a movie is really no longer protected, one needs to verify the list of copyright holders and figure out if and when they died. I've been able to identify some of them, but for some it is hard to figure out when they died.

This is the list of 29 movies both available from the library and possibly no longer protected by copyright law. The year range (1909-1979 on the first line) is year of publication and last year with copyright protection.

1909-1979 ( 70 year) NSB Bergensbanen 1909 - http://www.imdb.com/title/tt0347601/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons likfærd - http://www.imdb.com/title/tt9299304/
1910-1980 ( 70 year) Bjørnstjerne Bjørnsons begravelse - http://www.imdb.com/title/tt9299300/
1912-1998 ( 86 year) Roald Amundsens Sydpolsferd (1910-1912) - http://www.imdb.com/title/tt9237500/
1913-2006 ( 93 year) Roald Amundsen på sydpolen - http://www.imdb.com/title/tt0347886/
1917-1987 ( 70 year) Fanden i nøtten - http://www.imdb.com/title/tt0346964/
1919-2018 ( 99 year) Historien om en gut - http://www.imdb.com/title/tt0010259/
1920-1990 ( 70 year) Kaksen på Øverland - http://www.imdb.com/title/tt0011361/
1923-1993 ( 70 year) Norge - en skildring i 6 akter - http://www.imdb.com/title/tt0014319/
1925-1997 ( 72 year) Roald Amundsen - Ellsworths flyveekspedition 1925 - http://www.imdb.com/title/tt0016295/
1925-1995 ( 70 year) En verdensreise, eller Da knold og tott vaskede negrene hvite med 13 sæpen - http://www.imdb.com/title/tt1018948/
1926-1996 ( 70 year) Luftskibet 'Norge's flugt over polhavet - http://www.imdb.com/title/tt0017090/
1926-1996 ( 70 year) Med 'Maud' over Polhavet - http://www.imdb.com/title/tt0017129/
1927-1997 ( 70 year) Den store sultan - http://www.imdb.com/title/tt1017997/
1928-1998 ( 70 year) Noahs ark - http://www.imdb.com/title/tt1018917/
1928-1998 ( 70 year) Skjæbnen - http://www.imdb.com/title/tt1002652/
1928-1998 ( 70 year) Chefens cigarett - http://www.imdb.com/title/tt1019896/
1929-1999 ( 70 year) Se Norge - http://www.imdb.com/title/tt0020378/
1929-1999 ( 70 year) Fra Chr. Michelsen til Kronprins Olav og Prinsesse Martha - http://www.imdb.com/title/tt0019899/
1930-2000 ( 70 year) Mot ukjent land - http://www.imdb.com/title/tt0021158/
1930-2000 ( 70 year) Det er natt - http://www.imdb.com/title/tt1017904/
1930-2000 ( 70 year) Over Besseggen på motorcykel - http://www.imdb.com/title/tt0347721/
1931-2001 ( 70 year) Glimt fra New York og den Norske koloni - http://www.imdb.com/title/tt0021913/
1932-2007 ( 75 year) En glad gutt - http://www.imdb.com/title/tt0022946/
1934-2004 ( 70 year) Den lystige radio-trio - http://www.imdb.com/title/tt1002628/
1935-2005 ( 70 year) Kronprinsparets reise i Nord Norge - http://www.imdb.com/title/tt0268411/
1935-2005 ( 70 year) Stormangrep - http://www.imdb.com/title/tt1017998/
1936-2006 ( 70 year) En fargesymfoni i blått - http://www.imdb.com/title/tt1002762/
1939-2009 ( 70 year) Til Vesterheimen - http://www.imdb.com/title/tt0032036/

To be sure which of these can be legally shared on the Internet, in addition to verifying that the list of right holders is complete, one needs to verify the death years of these persons:
Bjørnstjerne Bjørnson (dead 1910) - http://www.imdb.com/name/nm0085085/
Gustav Adolf Olsen (missing death year) - http://www.imdb.com/name/nm0647652/
Gustav Lund (missing death year) - http://www.imdb.com/name/nm0526168/
John W. Brunius (dead 1937) - http://www.imdb.com/name/nm0116307/
Ola Cornelius (missing death year) - http://www.imdb.com/name/nm1227236/
Oskar Omdal (dead 1927) - http://www.imdb.com/name/nm3116241/
Paul Berge (missing death year) - http://www.imdb.com/name/nm0074006/
Peter Lykke-Seest (dead 1948) - http://www.imdb.com/name/nm0528064/
Roald Amundsen (dead 1928) - https://www.imdb.com/name/nm0025468/
Sverre Halvorsen (dead 1936) - http://www.imdb.com/name/nm1299757/
Thomas W. Schwartz (missing death year) - http://www.imdb.com/name/nm2616250/

Perhaps you can help me figure out the death years of those missing one, or the right holders where they are missing in IMDb? It would be nice to have a definite list of Norwegian movies that are legal to share on the Internet.

This is the list of 25 movies not available from the library and possibly no longer protected by copyright law:

1907-2009 (102 year) Fiskerlivets farer - http://www.imdb.com/title/tt0121288/
1912-2018 (106 year) Historien om en moder - http://www.imdb.com/title/tt0382852/
1912-2002 ( 90 year) Anny - en gatepiges roman - http://www.imdb.com/title/tt0002026/
1916-1986 ( 70 year) The Mother Who Paid - http://www.imdb.com/title/tt3619226/
1917-2018 (101 year) En vinternat - http://www.imdb.com/title/tt0008740/
1917-2018 (101 year) Unge hjerter - http://www.imdb.com/title/tt0008719/
1917-2018 (101 year) De forældreløse - http://www.imdb.com/title/tt0007972/
1918-2018 (100 year) Vor tids helte - http://www.imdb.com/title/tt0009769/
1918-2018 (100 year) Lodsens datter - http://www.imdb.com/title/tt0009314/
1919-2018 ( 99 year) Æresgjesten - http://www.imdb.com/title/tt0010939/
1921-2006 ( 85 year) Det nye år? - http://www.imdb.com/title/tt0347686/
1921-1991 ( 70 year) Under Polarkredsens himmel - http://www.imdb.com/title/tt0012789/
1923-1993 ( 70 year) Nordenfor polarcirkelen - http://www.imdb.com/title/tt0014318/
1925-1995 ( 70 year) Med 'Stavangerfjord' til Nordkap - http://www.imdb.com/title/tt0016098/
1926-1996 ( 70 year) Over Atlanterhavet og gjennem Amerika - http://www.imdb.com/title/tt0017241/
1926-1996 ( 70 year) Hallo! Amerika! - http://www.imdb.com/title/tt0016945/
1926-1996 ( 70 year) Tigeren Teodors triumf - http://www.imdb.com/title/tt1008052/
1927-1997 ( 70 year) Rød sultan - http://www.imdb.com/title/tt1017979/
1927-1997 ( 70 year) Søndagsfiskeren Flag - http://www.imdb.com/title/tt1018002/
1930-2000 ( 70 year) Ro-ro til fiskeskjær - http://www.imdb.com/title/tt1017973/
1933-2003 ( 70 year) I kongens klær - http://www.imdb.com/title/tt0024164/
1934-2004 ( 70 year) Eventyret om de tre bukkene bruse - http://www.imdb.com/title/tt1007963/
1934-2004 ( 70 year) Pål sine høner - http://www.imdb.com/title/tt1017966/
1937-2007 ( 70 year) Et mesterverk - http://www.imdb.com/title/tt1019937/
1938-2008 ( 70 year) En Harmony - http://www.imdb.com/title/tt1007975/

Several of these movies completely lack right holder information in IMDb and elsewhere. Without access to a copy of the movie, it is often impossible to get the list of people involved in making the movie, making it impossible to figure out the correct copyright status.

Not listed here are the movies still protected by copyright law. Their copyright terms vary from 79 to 144 years, according to the information I have available so far. One of the non-lost movies might change status next year: Mustads Mono from 1920. The next one might be Hvor isbjørnen ferdes from 1935, in 2024.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianPaul Wise: FLOSS Activities August 2019

Changes

Issues

Review

Administration

  • Debian: restart dead stunnels
  • Debian wiki: unblacklist IP addresses, whitelist email addresses, whitelist email domains

Communication

Sponsors

The purple-discord, libgoogle-protocolbuffers-perl work was sponsored by my employer. All other work was done on a volunteer basis.

Planet DebianRuss Allbery: C TAP Harness 4.5

Peter Paris requested that C TAP Harness support being built as C++ code. I've not been a big fan of doing this with pure C code, since I find some of the requirements of C++ mildly irritating, but Peter's initial patch also fixed one type error in a malloc call, uncovered by the C++ rule that the return value of malloc must be cast. It turned out to be a mostly harmless error, since the code was allocating a larger struct than it needed, but it's still evidence that there's some potential here for catching bugs.

That said, adding an explicit cast to every malloc isn't likely to catch bugs. That's just having to repeat oneself in every allocation, and you're nearly as likely to repeat yourself incorrectly.

However, if one is willing to use a macro instead of malloc directly, this is fixable, and I'm willing to do that since I was already using a macro for allocation to do error handling. So I've modified the code to pass in the type of object to allocate instead of the size, and then used a macro to add the return cast. This makes for somewhat cleaner code and also makes it possible to build the code as pure C++. I also added some functions to the TAP generator library, bcalloc_type and breallocarray_type, that take the same approach. (I didn't remove the old functions, to maintain backward compatibility.)

I'm reasonably happy with the results, although it's a bit of a hassle and I'm not sure if I'm going to go to the trouble in all of my other C packages. But I'm at least considering it. (Of course, I'm also considering rewriting them all in Rust, and considering my profound lack of time to do either of these things.)

You can get the latest release from the C TAP Harness distribution page.

,

Planet DebianSylvain Beucler: Debian LTS and ELTS - August 2019

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

Yes, that changed since last month, as I was offered the chance to work on ELTS :)

In August, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 21.75h for LTS (out of 30 max) and 14h for ELTS (max).

Interestingly, I was able to share some of the work between LTS and ELTS, as I worked on vim and tomcat for both suites.

LTS - Jessie

  • squirrelmail: CVE-2019-12970: locate patch, refresh previous fix with new upstream-blessed version, security upload
  • vim: CVE-2017-11109, CVE-2017-17087, CVE-2019-12735: analyze and reproduce issues (one of them not fully exploitable), fix new and postponed issues, security upload
  • tomcat8: improve past patch to fix the test suite, report and refresh test certificates
  • tomcat8: CVE-2016-5388, CVE-2018-8014, CVE-2019-0221: requalify old not-affected issue, fix new and postponed issues, security upload

Documentation:

  • wiki: document good upload/test practices (pbuilder and lintian+debdiff+piuparts); request for comments
  • www.debian.org: import missing DLA-1810 (tomcat7/CVE-2019-0221)
  • freeimage: update dla-needed.txt status

ELTS - Wheezy

  • Get acquainted with the new procedures and setup build/test environments
  • vim: CVE-2017-17087, CVE-2019-12735: analyze and reproduce issues (one of them not fully exploitable), fix new and pending issues, security upload
  • tomcat7: CVE-2016-5388: requalify old not-affected issue, security upload

Documentation:

  • raise concern about missing dependency in our list of supported packages
  • user documentation: doc fix apt-key list -> apt-key finger
  • triage: mark a few CVE as EOL, fix-up missing fixed versions in data/ELA/list (not automated anymore following the oldoldstable -> oldoldold(!)stable switch)

While not part of Debian strictly speaking, ELTS strives for the same level of transparency, see in particular the Git repositories: https://salsa.debian.org/freexian-team/extended-lts

Sam VargheseAustralian politicians are in it for the money

Australian politicians are in the game for one thing: money. Most of them are so incompetent that they would not be paid even half of what they earn were they to try for jobs in the private sector.

That’s why former members of the Victorian state parliament, who were voted out at the last election in 2018, are struggling to find jobs.

Apparently, some have been told by recruitment agencies that they “don’t know where to fit you”, according to a news report from the Melbourne tabloid Herald Sun.

People who enter politics in Australia are paid well, far above what the private sector pays, unless one is very high up in the hierarchy.

Politicians get where they are by doing favours for people in high places and moving up the greasy pole.

They get all kinds of fancy allowances and benefits. They have no scruples about taking from the public purse whenever they can without getting caught.

They are the worst kind of scum.

Australia is a highly over-governed place, with three levels of government: the national parliament, the parliaments in the different states and territories and the local governments.

At each level there is plenty of scope for fattening one’s own lamb. There are a handful of people who have some kind of vocation for public service; the rest are out to grab whatever they can before they are voted out.

Nobody should have any pity for people of this kind given what they do when they are in office. About the only thing they do is to prepare things so that they will have a job here, there or anywhere when they finally get thrown out of politics.

Some get lanced so early in their political lives that they are unprepared. Perhaps they should be put to work as garbage collectors. But one doubts they would have the physical and mental fortitude to get through such a job.

Planet DebianChris Lamb: Free software activities in August 2019

Here is my monthly update covering most of what I have been doing in the free software world during August 2019 (previous month):

  • Opened pull requests to make the build reproducible for Mozilla's Bleach [...] and the re2c regular expression library [...].

Tails

For the Tails privacy-oriented operating system, I made a number of updates as part of the pkg-privacy-tools team in Debian:

  • onionshare:

    • Package new upstream version 2.1. [...]
    • Correct spelling, format and syntax errors in manpage.
    • Update debian/copyright; socks.py no longer in upstream.
    • Misc updates:
      • Drop "ancient" X-Python3-Version specifier (satisfied in oldoldstable).
      • Move to debhelper compatibility level 12 and use the debhelper-compat virtual package, dropping debian/compat.
    • debian/watch: Ignore dev releases and move to version 4 format.
  • monkeysphere:

    • Prevent an FTBFS by updating the tests to accommodate an updated GnuPG in stretch now producing different output. (#934034).

    • I also filed a "proposed update" to actually update the package in the stretch distribution. (#934775)

  • onioncircuits: Update continuous integration tests to the Python 3.x version of Dogtail. (#935174)

  • seahorse-nautilus: (Almost) no-change upload to unstable to ensure migration to the testing distribution as binaries were uploaded with previous 3.11.92-3 release. [...]

  • obfs4proxy: Move to using the debian-compat virtual package, level 12. [...]


Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure that no flaws have been introduced during this compilation process, by promising that identical results are always generated from a given source, thus allowing multiple third parties to come to a consensus on whether a build was compromised.

The initiative is proud to be a member project of the Software Freedom Conservancy, a not-for-profit 501(c)(3) charity focused on ethical technology and user freedom.

Conservancy acts as a corporate umbrella, allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month:


I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

Improvements:

  • Don't fall back to an unhelpful raw hexdump when, for example, readelf(1) reports a minor issue in a section of an ELF binary. For example, when the .frames section is of the NOBITS type, its contents are apparently "unreliable" and thus readelf(1) returns 1. (#58, #931962)
  • Include either standard error or standard output (not just the latter) when an external command fails. [...]

Bug fixes:

  • Skip calls to unsquashfs when we are neither root nor running under fakeroot. (#63)
  • Ensure that all of our artificially-created subprocess.CalledProcessError instances have output instances that are bytes objects, not str. [...]
  • Correct a reference to parser.diff; diff in this context is a Python function in the module. [...]
  • Avoid a possible traceback caused by a str/bytes type confusion when handling the output of failing external commands. [...]

Testsuite improvements:

  • Test for 4.4 in the output of squashfs -version, even though the Debian package version is 1:4.3+git190823-1. [...]
  • Apply a patch from László Böszörményi to update the squashfs test output and additionally bump the required version for the test itself. (#62 & #935684)
  • Add the wabt Debian package to the test-dependencies so that we run the WebAssembly tests on our continuous integration platform, etc. [...]

Improve debugging:

  • Add the containing module name to the (eg.) Using StaticLibFile for ... debugging messages. [...]
  • Strip off trailing "original size modulo 2^32 671" (etc.) from gzip compressed data as this is just a symptom of the contents itself changing that will be reflected elsewhere. (#61)
  • Avoid a lack of space between "... with return code 1" and "Standard output". [...]
  • Improve debugging output when instantiating our Comparator object types. [...]
  • Add a literal "eg." to the comment on stripping "original size modulo..." text to emphasise that the actual numbers are not fixed. [...]

Internal code improvements:

  • No need to parse the section group from the class name; we can pass it via the type built-in's kwargs argument. [...]
  • Add support to Difference.from_command_exc and friends to ignore specific returncodes from the called program and treat them as "no" difference. [...]
  • Simplify parsing of optional command_args argument to Difference.from_command_exc. [...]
  • Set long_description_content_type to text/x-rst to appease the PyPI.org linter. [...]
  • Reposition a comment regarding an exception within the indented block to match Python code convention. [...]


strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Add support for enabling and disabling specific normalizers via the command line. (#10)
  • Drop accidentally-committed warning emitted on every fixture-based test. [...]
  • Reintroduce the .ar normalizer [...] but disable it by default so that it can be enabled with --normalizers=+ar or similar. (#3)
  • In verbose mode, print the normalizers that strip-nondeterminism will apply. [...]

Debian

Lintian

More hacking on the Lintian static analysis tool for Debian packages, including uploading versions 2.17.0, 2.18.0 and 2.19.0:

New features:

Bug fixes:

Other:


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 hours on its sister Extended LTS project.

  • Frontdesk duties, responding to user/developer questions, reviewing others' packages, participating in mailing list discussions, etc.

  • Investigated and triaged cent, clamav, enigmail, freeradius, ghostscript, libcrypto++, musl, open-cobol, pango1.0, php5, python-django, python-werkzeug, radare2, salt, subversion, suricata, u-boot, xtrlock & yara.

  • Updated our lts-cve-triage.py script to correct undefined reference to colored when standard output is not a terminal [...] and address a number of flake8 issues [...].

  • Worked through a number of iterations of a comprehensive patch to xtrlock to address an issue whereby multitouch events (such as on a tablet or many modern laptops) are not correctly locked. Whilst the issue was originally filed by a user as #830726, I was able to reproduce it whilst triaging issues for this package. I thus requested and was granted my first CVE number (CVE-2016-10894) and hope to upload a patched version early next month.

  • Issued DLA 1896-1 to fix a remote arbitrary code execution vulnerability in commons-beanutils, a set of tools and utilities for manipulating JavaBeans.

  • Issued DLA 1872-1 for the Django web development framework correcting two denial of service vulnerabilities and requiring a backport of upstream's patch series. I also fixed these issues in the buster distribution as well as an SQL injection possibility and potential memory exhaustion issues.

You can find out more about the project in the following video:


Debian uploads


FTP Team

As a Debian FTP assistant I ACCEPTed 28 packages: bitshuffle, golang-github-abdullin-seq, golang-github-centurylinkcloud-clc-sdk, golang-github-cnf-structhash, golang-github-deanthompson-ginpprof, golang-github-ensighten-udnssdk, golang-github-gin-contrib-cors, golang-github-gin-contrib-gzip, golang-github-gin-contrib-static, golang-github-hansrodtang-randomcolor, golang-github-jarcoal-httpmock, golang-github-mcuadros-go-gin-prometheus, golang-github-mitchellh-go-linereader, golang-github-nesv-go-dynect, golang-github-sethvargo-go-fastly, golang-github-terra-farm-udnssdk, golang-github-yourbasic-graph, golang-github-ziutek-mymysql, golang-gopkg-go-playground-colors.v1, gulkan, kdeplasma-applets-xrdesktop, libcds, libinputsynth, openvr, parfive, transip, znc & znc-push.

,

CryptogramFriday Squid Blogging: Why Mexican Jumbo Squid Populations Have Declined

A group of scientists conclude that it's shifting weather patterns and ocean conditions.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDWhat does it mean to become a TED Fellow?

TED Fellows celebrate the 10-year anniversary of the program at TEDSummit: A Community Beyond Borders, July 22, 2019 in Edinburgh, Scotland. (Photo: Ryan Lash / TED)

Every year, TED begins a new search looking for the brightest thinkers and innovators to be part of the TED Fellows program. With nearly 500 visionaries representing 300 different disciplines, these extraordinary individuals are making waves, disrupting the status quo and creating real impact.

Through a rigorous application process, we narrow down our candidate pool of thousands to just 20 exceptional people. (Trust us, this is not easy to do.) You may be wondering what makes for a good application (read more about that here), but just as importantly: What exactly does it mean to be a TED Fellow? Yes, you’ll work hand-in-hand with the Fellows team to give a TED Talk on stage, but being a Fellow is so much more than that. Here’s what happens once you get that call.

1. You instantly have a built-in support system.

Once selected, Fellows become part of our active global community. They are connected to a diverse network of other Fellows who they can lean on for support, resources and more. To get a better sense of who these people are (fishing cat conservationists! space environmentalists! police captains!), take a closer look at our class of 2019 Fellows, who represent 12 countries across four continents. Their common denominator? They are looking to address today’s most complex challenges and collaborate with others — which could include you.

2. You can participate in TED’s coaching and mentorship program.

To help Fellows achieve an even greater impact with their work, they are given the opportunity to participate in a one-of-a-kind coaching and mentoring initiative. Collaboration with a world-class coach or mentor helps Fellows maximize effectiveness in their professional and personal lives and make the most of the fellowship.

The coaches and mentors who support the program are some of the world’s most effective and intuitive individuals, each inspired by the TED mission. Fellows have reported breakthroughs in financial planning, organizational effectiveness, confidence and interpersonal relationships thanks to coaches and mentors. Head here to learn more about this initiative. 

3. You’ll receive public relations guidance and professional development opportunities, curated through workshops and webinars. 

Have you published exciting new research or launched a groundbreaking project? We partner with a dedicated PR agency to provide PR training and valuable media opportunities with top tier publications to help spread your ideas beyond the TED stage. The TED Fellows program has been recognized by PR News for our “PR for Fellows” program.

In addition, there are vast opportunities for Fellows to hone their skills and build new ones through invigorating workshops and webinars that we arrange throughout the year. We also maintain a Fellows Blog, where we continue to spotlight Fellows long after they give their talks.

***

Over the last decade, our program has helped Fellows impact the lives of more than 180 million people. Success and innovation like this doesn’t happen in a vacuum — it’s sparked by bringing Fellows together and giving them this kind of support. If this sounds like a community you want to join, apply to become a TED Fellow by August 27, 2019 11:59pm UTC.

Sociological ImagesSurviving Student Debt

Recent estimates indicate that roughly 45 million students in the United States have incurred student loans during college. Democratic candidates like Senators Elizabeth Warren and Bernie Sanders have proposed legislation to relieve or cancel this debt burden. Sociologist Tressie McMillan Cottom’s congressional testimony on behalf of Warren’s student loan relief plan last April reveals the importance of sociological perspectives on the debt crisis. Sociologists have recently documented the conditions driving student loan debt and its impacts across race and gender.

College debt is the new black.
Photo Credit: Mike Rastiello, Flickr CC

In recent decades, students have enrolled in universities at increasing rates due to the “education gospel,” where college credentials are touted as public goods and career necessities, encouraging students to seek credit. At the same time, student loan debt has rapidly increased, urging students to ask whether the risks of loan debt during early adulthood outweigh the reward of a college degree. Student loan risks include economic hardship, mental health problems, and delayed adult transitions such as starting a family. Individual debt has also led to disparate impacts among students of color, who are more likely to hail from low-income families. Recent evidence suggests that Black students are more likely to drop out of college due to debt and return home after incurring more debt than their white peers. Racial disparities in student loan debt continue into their mid-thirties and impact the white-Black racial wealth gap.

Photo Credit: Kirstie Warner, Flickr CC

Other work reveals gendered disparities in student debt. One survey found that while women were more likely to incur debt than their male peers, men with higher levels of student debt were more likely to drop out of college than women with similar amounts of debt. The authors suggest that women’s labor market opportunities — often more likely to require college degrees than men’s — may account for these differences. McMillan Cottom’s interviews with 109 students from for-profit colleges uncovers how Black, low-income women in particular bear the burden of student loans. For many of these women, the rewards of college credentials outweigh the risks of high student loan debt.

Amber Joy is a PhD candidate in the Department of Sociology at the University of Minnesota. Her current research interests include punishment, policing, victimization, youth, and the intersections of race, gender, and sexuality. Her dissertation explores youth responses to sexual violence within youth correctional facilities.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityPhishers are Angling for Your Cloud Providers

Many companies are now outsourcing their marketing efforts to cloud-based Customer Relationship Management (CRM) providers. But when accounts at those CRM providers get hacked or phished, the results can be damaging for both the client’s brand and their customers. Here’s a look at a recent CRM-based phishing campaign that targeted customers of Fortune 500 construction equipment vendor United Rentals.

Stamford, Ct.-based United Rentals [NYSE:URI] is the world’s largest equipment rental company, with some 18,000 employees and earnings of approximately $4 billion in 2018. On August 21, multiple United Rental customers reported receiving invoice emails with booby-trapped links that led to a malware download for anyone who clicked.

While phony invoices are a common malware lure, this particular campaign sent users to a page on United Rentals’ own Web site (unitedrentals.com).

A screen shot of the malicious email that spoofed United Rentals.

In a notice to customers, the company said the unauthorized messages were not sent by United Rentals. One source who had at least two employees fall for the scheme forwarded KrebsOnSecurity a response from UR’s privacy division, which blamed the incident on a third-party advertising partner.

“Based on current knowledge, we believe that an unauthorized party gained access to a vendor platform United Rentals uses in connection with designing and executing email campaigns,” the response read.

“The unauthorized party was able to send a phishing email that appears to be from United Rentals through this platform,” the reply continued. “The phishing email contained links to a purported invoice that, if clicked on, could deliver malware to the recipient’s system. While our investigation is continuing, we currently have no reason to believe that there was unauthorized access to the United Rentals systems used by customers, or to any internal United Rentals systems.”

United Rentals told KrebsOnSecurity that its investigation so far reveals no compromise of its internal systems.

“At this point, we believe this to be an email phishing incident in which an unauthorized third party used a third-party system to generate an email campaign to deliver what we believe to be a banking trojan,” said Dan Higgins, UR’s chief information officer.

United Rentals would not name the third party marketing firm thought to be involved, but passive DNS lookups on the UR subdomain referenced in the phishing email (used by UR for marketing since 2014 and visible in the screenshot above as “wVw.unitedrentals.com”) points to Pardot, an email marketing division of cloud CRM giant Salesforce.

Companies that use cloud-based CRMs sometimes will dedicate a domain or subdomain they own specifically for use by their CRM provider, allowing the CRM to send emails that appear to come directly from the client’s own domains. However, in such setups the content that gets promoted through the client’s domain is actually hosted on the cloud CRM provider’s systems.

Salesforce told KrebsOnSecurity that this was not a compromise of Pardot, but of a Pardot customer account that was not using multi-factor authentication.

“UR uses a third party marketing agency that utilizes the Pardot platform,” said Salesforce spokesman Bradford Burns. “The third party marketing agency is who was compromised, not a Pardot employee.”

This attack comes on the heels of another targeted phishing campaign leveraging Pardot that was documented earlier this month by Netskope, a cloud security firm. Netskope’s Ashwin Vamshi said users of cloud CRM platforms have a high level of trust in the software because they view the data and associated links as internal, even though they are hosted in the cloud.

“A large number of enterprises provide their vendors and partners access to their CRM for uploading documents such as invoices, purchase orders, etc. (and often these happen as automated workflows),” Vamshi wrote. “The enterprise has no control over the vendor or partner device and, more importantly, over the files being uploaded from them. In many cases, vendor- or partner-uploaded files carry with them a high level of implicit trust.”

Cybercriminals increasingly are targeting cloud CRM providers because compromised accounts on these systems can be leveraged to conduct extremely targeted and convincing phishing attacks. According to the most recent stats (PDF) from the Anti-Phishing Working Group, software-as-a-service providers (including CRM and Webmail providers) were the most-targeted industry sector in the first quarter of 2019, accounting for 36 percent of all phishing attacks.

Image: APWG

Update, 2:55 p.m. ET: Added comments and responses from Salesforce.

Planet DebianDimitri John Ledkov: How to disable TLS 1.0 and TLS 1.1 on Ubuntu

Example of website that only supports TLS v1.0, which is rejected by the client

Overview

TLS v1.3 is the latest standard for secure communication over the internet. It is widely supported by desktops, servers and mobile phones. Recently, Ubuntu 18.04 LTS received an OpenSSL 1.1.1 update, bringing the ability to establish TLS v1.3 connections on the latest Ubuntu LTS release. The Qualys SSL Labs Pulse report shows more than 15% adoption of TLS v1.3. It really is time to migrate away from TLS v1.0 and TLS v1.1.

As announced on the 15th of October 2018, Apple, Google, and Microsoft will disable TLS v1.0 and TLS v1.1 support by default and thus require TLS v1.2 to be supported by all clients and servers. Similarly, Ubuntu 20.04 LTS will also require TLS v1.2 as the minimum TLS version.

To prepare for the move to TLS v1.2, it is a good idea to disable TLS v1.0 and TLS v1.1 on your local systems and start observing and reporting any websites, systems and applications that do not support TLS v1.2.

How to disable TLS v1.0 and TLS v1.1 in Google Chrome on Ubuntu

  1. Create policy directory
    sudo mkdir -p /etc/opt/chrome/policies/managed
  2. Create /etc/opt/chrome/policies/managed/mintlsver.json with
    {
        "SSLVersionMin" : "tls1.2"
    }

How to disable TLS v1.0 and TLS v1.1 in Firefox on Ubuntu

  1. Navigate to about:config in the URL bar
  2. Search for security.tls.version.min setting
  3. Set it to 3, which stands for a minimum of TLS v1.2

How to disable TLS v1.0 and TLS v1.1 in OpenSSL

  1. Edit /etc/ssl/openssl.cnf
  2. After oid_section stanza add
    # System default
    openssl_conf = default_conf
  3. At the end of the file add
    [default_conf]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    MinProtocol = TLSv1.2
    CipherString = DEFAULT@SECLEVEL=2
  4.  Save the file

How to disable TLS v1.0 and TLS v1.1 in GnuTLS

  1. Create config directory
    sudo mkdir -p /etc/gnutls/
  2. Create /etc/gnutls/default-priorities with
    SYSTEM=SECURE192:-VERS-ALL:+VERS-TLS1.3:+VERS-TLS1.2 

After performing the above tasks, most common applications will use TLS v1.2+.
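
To confirm the new floor from a client's point of view, you can attempt a handshake with an explicit minimum version. Here is a minimal sketch using Python's standard library (requires Python 3.7+; the host name is just an example):

import socket
import ssl

def supports_tls12_or_newer(host, port=443):
    context = ssl.create_default_context()
    # Refuse to negotiate anything older than TLS v1.2.
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(host, "negotiated", tls.version())
                return True
    except ssl.SSLError as exc:
        print(host, "failed the TLS v1.2+ handshake:", exc)
        return False

supports_tls12_or_newer("www.example.com")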

I have set these defaults on my systems, and I occasionally hit websites that only support TLS v1.0 and I report them. Have you found any websites and systems you use that do not support TLS v1.2 yet?

Planet DebianJonathan Dowland: PhD Stage 1 Progression Report

As promised, here's the report I wrote for my PhD Stage 1 progression in the hope that it is useful or interesting to someone. I've made some very small modifications to the submitted copy in order to remove some personal information.

I'll reiterate something from when I published my proposal:

A document produced for one institution's expectations might not be directly applicable to another. … You don't have any idea whether it has been judged to be particularly good or bad one by those who received it (you can make your own judgements).

CryptogramAttacking the Intel Secure Enclave

Interesting paper by Michael Schwarz, Samuel Weiser, and Daniel Gruss. The upshot is that both Intel and AMD have assumed that trusted enclaves will run only trustworthy code. Of course, that's not true. And there are no security mechanisms that can deal with malicious enclaves, because the designers couldn't imagine that they would be necessary. The results are predictable.

The paper: "Practical Enclave Malware with Intel SGX."

Abstract: Modern CPU architectures offer strong isolation guarantees towards user applications in the form of enclaves. For instance, Intel's threat model for SGX assumes fully trusted enclaves, yet there is an ongoing debate on whether this threat model is realistic. In particular, it is unclear to what extent enclave malware could harm a system. In this work, we practically demonstrate the first enclave malware which fully and stealthily impersonates its host application. Together with poorly-deployed application isolation on personal computers, such malware can not only steal or encrypt documents for extortion, but also act on the user's behalf, e.g., sending phishing emails or mounting denial-of-service attacks. Our SGX-ROP attack uses new TSX-based memory-disclosure primitive and a write-anything-anywhere primitive to construct a code-reuse attack from within an enclave which is then inadvertently executed by the host application. With SGX-ROP, we bypass ASLR, stack canaries, and address sanitizer. We demonstrate that instead of protecting users from harm, SGX currently poses a security threat, facilitating so-called super-malware with ready-to-hit exploits. With our results, we seek to demystify the enclave malware threat and lay solid ground for future research on and defense against enclave malware.

Worse Than FailureError'd: Resistant to Change

Tom H. writes, "They got rid of their old, outdated fax machine, but updating their website? Yeah, that might take a while."

 

"In casinos, they say the house always wins. In this case, when I wanted to cash in my winnings, I gambled and lost against Windows 7 Professional," Michelle M. wrote.

 

Martin writes, "Wow! It's great to see Apple is going the extra mile by protecting my own privacy from myself!"

 

"Yes, Amazon Photos, with my mouse clicks, I will fix you," wrote Amos B.

 

"When searches go wrong at AliExpress they want you to know these three things," Erwan R. wrote.

 

Chris A. writes, "It's like Authy is saying 'I have no idea what you just did, but, on the bright side, there weren`t any errors!'"

 


,

Krebs on SecurityRansomware Bites Dental Data Backup Firm

PerCSoft, a Wisconsin-based company that manages a remote data backup service relied upon by hundreds of dental offices across the country, is struggling to restore access to client systems after falling victim to a ransomware attack.

West Allis, Wis.-based PerCSoft is a cloud management provider for Digital Dental Record (DDR), which operates an online data backup service called DDS Safe that archives medical records, charts, insurance documents and other personal information for various dental offices across the United States.

The ransomware attack hit PerCSoft on the morning of Monday, Aug. 26, and encrypted dental records for some — but not all — of the practices that rely on DDS Safe.

PerCSoft did not respond to requests for comment. But Brenna Sadler, director of communications for the Wisconsin Dental Association, said the ransomware encrypted files for approximately 400 dental practices, and that somewhere between 80-100 of those clients have now had their files restored.

Sadler said she did not know whether PerCSoft and/or DDR had paid the ransom demand, what ransomware strain was involved, or how much the attackers had demanded.

But updates to PerCSoft’s Facebook page and statements published by both PerCSoft and DDR suggest someone may have paid up: The statements note that both companies worked with a third party software company and were able to obtain a decryptor to help clients regain access to files that were locked by the ransomware.

Update: Several sources are now reporting that PerCSoft did pay the ransom, although it is not clear how much was paid. One member of a private Facebook group dedicated to IT professionals serving the dental industry shared the following screenshot, which is purportedly from a conversation between PerCSoft and an affected dental office, indicating the cloud provider was planning to pay the ransom:

Another image shared by members of that Facebook group indicates the ransomware that attacked PerCSoft is an extremely advanced and fairly recent strain known variously as REvil and Sodinokibi.

Original story:

However, some affected dental offices have reported that the decryptor did not work to unlock at least some of the files encrypted by the ransomware. Meanwhile, several affected dentistry practices said they feared they might be unable to process payroll payments this week as a result of the attack.

Cloud data and backup services are a prime target of cybercriminals who deploy ransomware. In July, attackers hit QuickBooks cloud hosting firm iNSYNQ, holding data hostage for many of the company’s clients. In February, cloud payroll data provider Apex Human Capital Management was knocked offline for three days following a ransomware infestation.

On Christmas Eve 2018, cloud hosting provider Dataresolution.net took its systems offline in response to a ransomware outbreak on its internal networks. The company was adamant that it would not pay the ransom demand, but it ended up taking several weeks for customers to fully regain access to their data.

The FBI and multiple security firms have advised victims not to pay any ransom demands, as doing so just encourages the attackers and in any case may not result in actually regaining access to encrypted files. In practice, however, many cybersecurity consulting firms are quietly advising their customers that paying up is the fastest route back to business-as-usual.

It remains unclear whether PerCSoft or DDR — or perhaps their insurance provider — paid the ransom demand in this attack. But new reporting from independent news outlet ProPublica this week sheds light on another possible explanation why so many victims are simply coughing up the money: Their insurance providers will cover the cost — minus a deductible that is usually far less than the total ransom demanded by the attackers.

More to the point, ProPublica found, such attacks may be great for business if you’re in the insurance industry.

“More often than not, paying the ransom is a lot cheaper for insurers than the loss of revenue they have to cover otherwise,” said Minhee Cho, public relations director of ProPublica, in an email to KrebsOnSecurity. “But, by rewarding hackers, these companies have created a perverted cycle that encourages more ransomware attacks, which in turn frighten more businesses and government agencies into buying policies.”

“In fact, it seems hackers are specifically extorting American companies that they know have cyber insurance,” Cho continued. “After one small insurer highlighted the names of some of its cyber policyholders on its website, three of them were attacked by ransomware.”

Read the full ProPublica piece here. And if you haven’t already done so, check out this outstanding related reporting by ProPublica from earlier this year on how security firms that help companies respond to ransomware attacks also may be enabling and emboldening attackers.

Planet DebianDirk Eddelbuettel: anytime 0.3.6

A fresh and very exciting release of the anytime package is arriving on CRAN right now. This is the seventeenth release, and it comes pretty much exactly one month after the preceding 0.3.5 release.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.
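
For readers who want to see what format-free parsing feels like, here is a rough Python analogue using dateutil. anytime itself is an R package, so this sketch only illustrates the idea of heuristic, format-string-free conversion, not anytime's API:

from dateutil import parser

# No format strings: the parser guesses the layout, which is the same
# convenience anytime brings to R's Date and POSIXct types.
for s in ["2019-08-29", "2019-7-5", "Aug 29, 2019", "20190829"]:
    print(s, "->", parser.parse(s))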

This release updates a number of things (see below for details). For users, maybe the most important change is that we now also convert dates with single-digit days, i.e. a not-quite ISO input like “2019-7-5” passes. This required adding %e as a day format; I had overlooked this detail in the (copious) Boost date_time documentation. Another nice change is that we now use standard S3 dispatching rather than a manual approach, as we probably should have for a long time :-) but better late than never. The code change was actually rather minimal and done in a few minutes. Another change is a further extended use of unit testing via the excellent tinytest package, which remains a joy to use. We also expanded the introductory pdf vignette; the benchmark comparisons we included look pretty decent for anytime, which still combines ease of use and versatility with performance.

Lastly, a somewhat sad “lowlight”. We submitted the package to the Journal of Open Source Software, who then told us within days of the unworthiness of anytime for lack of research focus. Needless to say, we disagree. So here is a plea: If you use anytime in a research setting, would you mind adding to this very issue ticket and saying so? This may permit us a somewhat more emphatic, data-driven riposte to the editors. Many thanks in advance for considering this.

The full list of changes follows.

Changes in anytime version 0.3.6 (2019-08-29)

  • Added, and then removed, required file for JOSS; added 'unworthy' badge as we earned a desk reject (cf #1605 there).

  • Renamed internal helper function format() to fmt() to avoid clashes with base::format() (Dirk in #104).

  • Use S3 dispatch and generics for key functions (Dirk in #106).

  • Continued to tweak tests as we find some of the rhub platform to behave strangely (Dirk via commits as well as #107).

  • Added %e format for single-digit day parsing by Boost (Dirk addressing at least #24, #70 and #99).

  • Expanded and updated vignette with benchmark comparisons.

  • Updated unit tests using tinytest which remains a pleasure to use; versioned Suggests: is now '>= 1.0.0'.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker off the GitHub repo can be used for questions and comments.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramAI Emotion-Detection Arms Race

Voice systems are increasingly using AI techniques to determine emotion. A new paper describes an AI-based countermeasure to mask emotion in spoken words.

Their method for masking emotion involves collecting speech, analyzing it, and extracting emotional features from the raw signal. Next, an AI program trains on this signal and replaces the emotional indicators in speech, flattening them. Finally, a voice synthesizer re-generates the normalized speech using the AI's outputs, which gets sent to the cloud. The researchers say that this method reduced emotional identification by 96 percent in an experiment, although speech recognition accuracy decreased, with a word error rate of 35 percent.
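
As a toy illustration only: one of the simplest emotional indicators one could flatten is loudness variation. A minimal numpy sketch of that single step follows; the paper's actual pipeline extracts learned emotional features and re-synthesizes the voice, which this does not attempt.

import numpy as np

def flatten_energy(signal, frame_len=1024):
    """Rescale each frame of `signal` to the utterance's median RMS energy."""
    frames = [signal[i:i + frame_len] for i in range(0, len(signal), frame_len)]
    rms = np.array([np.sqrt(np.mean(f ** 2)) + 1e-12 for f in frames])
    target = np.median(rms)
    return np.concatenate([f * (target / r) for f, r in zip(frames, rms)])

# Usage sketch: `audio` would be a float array of samples read from a WAV file.
# masked = flatten_energy(audio)  # then hand `masked` to the voice assistant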

Academic paper.

Worse Than FailureCodeSOD: Bassackwards Compatibility

A long time ago, you built a web service. It was long enough ago that you chose XML as your serialization format. It worked fine, but before long, customers started saying that they’d really like to use JSON, so now you need to expose a slightly different, JSON-powered version of your API. To make it easy, you release a JSON client developers can drop into their front-ends.

Conor is one of those developers, and while examining the requests the client sent, he discovered a unique way of making your XML web-service JSON-friendly.

{"fetch":"<fetch version='1.0'><entity><entityDescriptor id='10'/>…<loadsMoreXML/></entity></fetch>"}

Simplicity itself!
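
To make the contrast concrete, here is a sketch (in Python, with the second payload's field names invented for illustration) of what this client actually sends versus what a JSON-native request for the same query could look like:

import json

# What the "JSON" client actually sends: the old XML payload,
# stuffed into a single JSON string field.
legacy = {"fetch": "<fetch version='1.0'><entity>"
                   "<entityDescriptor id='10'/></entity></fetch>"}

# What a JSON-native version of the same request could look like
# (field names are hypothetical).
native = {"fetch": {"version": "1.0",
                    "entity": {"entityDescriptor": {"id": 10}}}}

print(json.dumps(legacy))
print(json.dumps(native, indent=2))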

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Planet DebianSteve McIntyre: If you can't stand the heat, get out of the kitchen...

Wow, we had a hot weekend in Cambridge. About 40 people turned up to our place for this year's OMGWTFBBQ. Last year we were huddling under the gazebos for shelter from torrential rain; this year we again had all the gazebos up, but this time to hide from the sun instead. We saw temperatures well into the 30s, which is silly for Cambridge at the end of August.

I think it's fair to say that everybody enjoyed themselves despite the ludicrous heat levels. We had folks from all over the UK, and Lars and Soile travelled all the way from Helsinki in Finland so that Lars could celebrate his birthday with us!

cake!

We had a selection of beers again from the nice folks at Milton Brewery:
is 3 firkins enough?

Lars made pancakes, Paul made bread, and people brought lots of nice food and drink with them too.

Many thanks to a number of awesome friendly companies for again sponsoring the important refreshments for the weekend. It's hungry/thirsty work celebrating like this!

Planet DebianJulien Danjou: The Art of PostgreSQL is out!


If you remember well, a couple of years ago, I wrote about Mastering PostgreSQL, a fantastic book written by my friend Dimitri Fontaine.

Dimitri is a long-time PostgreSQL core developer — for example, he wrote the extension support in PostgreSQL — no less. He is featured in my book Serious Python, where he advises on using databases and ORM in Python.

Today, Dimitri comes back with the new version of this book, named The Art of PostgreSQL.

As a bonus, here's a picture of me and Dimitri having fun at a PostgreSQL meetup!

I love the motto of this book: Turn Thousands of Lines of Code into Simple Queries. I have spent all my career working with code that talks to databases, and I can't count the number of times where I've seen people write lengthy, slow code in their pet language rather than a single well-thought SQL query which would do a better job.

The Art of PostgreSQL is out!

This is exactly what this book is about.

That's why it's my favorite SQL book. I learned so many things from it. In many cases, I've been able to cut the size of the Python code I had to write for a feature by a factor of ten. All I had to do was browse the book to discover the right PostgreSQL feature and write a single SQL query. The right query that does the job for me.

Less code, fewer bugs, more happiness!
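
As a hypothetical before-and-after in that spirit, consider counting tracks per album with Python and psycopg2 against an invented "tracks" table; this is a sketch of the pattern the book teaches, not an excerpt from it.

import psycopg2

def top_albums_in_python(cur, n=5):
    # The long way: drag every row into the application and aggregate there.
    cur.execute("SELECT album_id FROM tracks")
    counts = {}
    for (album_id,) in cur.fetchall():
        counts[album_id] = counts.get(album_id, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])[:n]

def top_albums_in_sql(cur, n=5):
    # The single-query way: let PostgreSQL aggregate, sort and limit.
    cur.execute("""
        SELECT album_id, count(*) AS n_tracks
          FROM tracks
      GROUP BY album_id
      ORDER BY n_tracks DESC
         LIMIT %s""", (n,))
    return cur.fetchall()

# Usage sketch, with an invented connection string:
# with psycopg2.connect("dbname=music") as conn:
#     print(top_albums_in_sql(conn.cursor()))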

The book also features interviews with great PostgreSQL users and developers — hey, no need to wonder where Dimitri got this idea, right? ;-)


I loved those interviews. What's better than reading Kris Jenkins explaining how Clojure and PostgreSQL play nice together, or Markus Winand (from the famous use-the-index-luke.com) talking about the relationship developers have with their database. :-)

Needless to say, you should get your hands on this right now. Dimitri has a launch offer running: a 15% discount on the book until the end of this month! You can also read the free chapter to get an idea of what you'll get.

Last thing: it's DRM-free and money-back guaranteed. You can get this book with your eyes closed.


Google AdsenseSimplifying our content policies for publishers

One of our top priorities is to sustain a healthy digital advertising ecosystem, one that works for everyone: users, advertisers and publishers. On a daily basis, teams of Google engineers, policy experts, and product managers combat and stop bad actors. Just last year, we removed 734,000 publishers and app developers from our ad network and ads from nearly 28 million pages that violated our publisher policies.

But we’re not just stopping bad actors. Just as critical to our mission is the work we do every day to help good publishers in our network succeed. One consistent piece of feedback we’ve heard from our publishers is that they want us to further simplify our policies, across products, so they are easier to understand and follow. That’s why we'll be simplifying the way our content policies are presented to publishers, and standardizing content policies across our publisher products.

A simplified publisher experience
In September, we’ll update the way our publisher content policies are presented with a clear outline of the types of content where advertising is not allowed or will be restricted.

Our Google Publisher Policies will outline the types of content that are not allowed to show ads through any of our publisher products. This includes policies against illegal content, dangerous or derogatory content, and sexually explicit content, among others.

Our Google Publisher Restrictions will detail the types of content, such as alcohol or tobacco, that don’t violate policy, but that may not be appealing for all advertisers. Publishers will not receive a policy violation for trying to monetize this content, but only some advertisers and advertising products—the ones that choose this kind of content—will bid on it. As a result, Google Ads will not appear on this content and this content will receive less advertising than non-restricted content will. 

The Google Publisher Policies and Google Publisher Restrictions will apply to all publishers, regardless of the products they use—AdSense, AdMob or Ad Manager.

These changes are the next step in our ongoing efforts to make it easier for publishers to navigate our policies so their businesses can continue to thrive with the help of our publisher products.


Posted by:
Scott Spencer, Director of Sustainable Ads


Planet DebianArturo Borrero González: Wikimania 2019 Stockholm summary

Wikimania 2019 logo

A couple of weeks ago I attended the Wikimania 2019 conference in Stockholm, Sweden. This is the general and global conference for the Wikimedia movement, in which people interested in free knowledge gather together for a few days. The event happens annually, and this was my first time attending such a conference. The Wikimania 2019 main program ran for 3 days, preceded by 2 pre-conference days in which a hackathon was held.

The venue was an amazing building at Stockholm University: the Aula Magna.

The hackathon brought together technical contributors, such as developers, who are interested in a variety of technical challenges in the wiki movement. At the hackathon you can find people interested in wiki-edit automation, research, anti-harassment tools, and also infrastructure engineering and architecture, among other things.

My full time job is at the Wikimedia Cloud Services team. We provide platforms and services for wikimedia movement collaborators who want to perform technical tasks and contributions. Some examples of what we provide:

  • a public cloud service based on OpenStack, AKA IaaS. We call this CloudVPS.
  • a PaaS product, based on Kubernetes and GridEngine. We call this Toolforge.
  • direct access to wiki databases in both SQL and XML format.
  • several other software products, like Quarry, PAWS, etc.

These services are widely used in the wiki community. About 40% of total edits to wiki projects come from software running in our platform. Some coworkers and myself attended the hackathon to provide support related to these tools and services, and to introduce them to new contributors.

Talk

We had a session/talk called Introduction to Wikimedia Cloud Services on the first day of the hackathon, and folks showed genuine interest in the things we offer. Some stuff I did during the hackathon included creating lots of Toolforge accounts, fixing issues in Cloud VPS projects, and talking with many people about related technical topics.

Once the hackathon ended, the main conference program started. I was amazed to see how vibrant the wiki movement is. Seeing people from all over the world sharing such a great mission and goals was really inspiring, and I truly felt grateful for being part of it. The conference draws many wiki enthusiasts, editors and other volunteers from many organizations and local wiki chapters. For the record, the number of paid staff from the Wikimedia Foundation is limited.

Honestly, until I attended this conference I was not aware of the scope and size of the movement and the variety of topics and approaches that involve free knowledge, the ultimate goal. It is not a far-fetched mission: we are on a good track despite the many challenges :-)

After the conference, we had another week in Stockholm for a Technical Engagement team offsite.

CryptogramThe Myth of Consumer-Grade Security

The Department of Justice wants access to encrypted consumer devices but promises not to infiltrate business products or affect critical infrastructure. Yet that's not possible, because there is no longer any difference between those categories of devices. Consumer devices are critical infrastructure. They affect national security. And it would be foolish to weaken them, even at the request of law enforcement.

In his keynote address at the International Conference on Cybersecurity, Attorney General William Barr argued that companies should weaken encryption systems to gain access to consumer devices for criminal investigations. Barr repeated a common fallacy about a difference between military-grade encryption and consumer encryption: "After all, we are not talking about protecting the nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications."

The thing is, that distinction between military and consumer products largely doesn't exist. All of those "consumer products" Barr wants access to are used by government officials -- heads of state, legislators, judges, military commanders and everyone else -- worldwide. They're used by election officials, police at all levels, nuclear power plant operators, CEOs and human rights activists. They're critical to national security as well as personal security.

This wasn't true during much of the Cold War. Before the Internet revolution, military-grade electronics were different from consumer-grade. Military contracts drove innovation in many areas, and those sectors got the cool new stuff first. That started to change in the 1980s, when consumer electronics started to become the place where innovation happened. The military responded by creating a category of military hardware called COTS: commercial off-the-shelf technology. More consumer products became approved for military applications. Today, pretty much everything that doesn't have to be hardened for battle is COTS and is the exact same product purchased by consumers. And a lot of battle-hardened technologies are the same computer hardware and software products as the commercial items, but in sturdier packaging.

Through the mid-1990s, there was a difference between military-grade encryption and consumer-grade encryption. Laws regulated encryption as a munition and limited what could legally be exported only to key lengths that were easily breakable. That changed with the rise of Internet commerce, because the needs of commercial applications more closely mirrored the needs of the military. Today, the predominant encryption algorithm for commercial applications -- Advanced Encryption Standard (AES) -- is approved by the National Security Agency (NSA) to secure information up to the level of Top Secret. The Department of Defense's classified analogs of the Internet -- Secret Internet Protocol Router Network (SIPRNet), Joint Worldwide Intelligence Communications System (JWICS) and probably others whose names aren't yet public -- use the same Internet protocols, software, and hardware that the rest of the world does, albeit with additional physical controls. And the NSA routinely assists in securing business and consumer systems, including helping Google defend itself from Chinese hackers in 2010.
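
To make that concrete: the AES approved for Top Secret material is the very same primitive any consumer application can reach through a stock library. A minimal sketch with the Python cryptography package, assuming only that the package is installed:

# The same AES-256-GCM used in "military-grade" systems, available to
# any consumer application via a standard library (pip install cryptography).
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # AES-256
nonce = os.urandom(12)                      # 96-bit nonce; never reuse per key
ciphertext = AESGCM(key).encrypt(nonce, b"the same crypto for everyone", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)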

Yes, there are some military applications that are different. The US nuclear system Barr mentions is one such example -- and it uses ancient computers and 8-inch floppy drives. But for pretty much everything that doesn't see active combat, it's modern laptops, iPhones, the same Internet everyone else uses, and the same cloud services.

This is also true for corporate applications. Corporations rarely use customized encryption to protect their operations. They also use the same types of computers, networks, and cloud services that the government and consumers use. Customized security is both more expensive because it is unique, and less secure because it's nonstandard and untested.

During the Cold War, the NSA had the dual mission of attacking Soviet computers and communications systems and defending domestic counterparts. It was possible to do both simultaneously only because the two systems were different at every level. Today, the entire world uses Internet protocols; iPhones and Android phones; and iMessage, WhatsApp and Signal to secure their chats. Consumer-grade encryption is the same as military-grade encryption, and consumer security is the same as national security.

Barr can't weaken consumer systems without also weakening commercial, government, and military systems. There's one world, one network, and one answer. As a matter of policy, the nation has to decide which takes precedence: offense or defense. If security is deliberately weakened, it will be weakened for everybody. And if security is strengthened, it is strengthened for everybody. It's time to accept the fact that these systems are too critical to society to weaken. Everyone will be more secure with stronger encryption, even if it means the bad guys get to use that encryption as well.

This essay previously appeared on Lawfare.com.

LongNowThe Vineyard Gazette on Revive & Restore’s Heath Hen De-extinction Efforts

The world’s last heath hen went extinct on Martha’s Vineyard in 01932. The Revive & Restore team recently paid a visit there to discuss their efforts to bring the species back.

Members of the Revive & Restore team next to a statue of Booming Ben, the last heath hen.

From the Vineyard Gazette:

Buried deep within the woods of the Manuel Correllus State Forest is a statue of Booming Ben, the world’s final heath hen. Once common all along the eastern seaboard, the species was hunted to near-extinction in the 1870s. Although a small number of the birds found refuge on Martha’s Vineyard, they officially disappeared in 1932 — with Booming Ben, the last of their kind, calling for female mates who were no longer there to hear him.

“There is no survivor, there is no future, there is no life to be recreated in this form again,” Gazette editor Henry Beetle Hough wrote. “We are looking upon the uttermost finality which can be written, glimpsing the darkness which will not know another ray of light. We are in touch with the reality of extinction.”

The statue memorializes that reality.

Since 2013, however, a group of cutting-edge researchers with the group Revive and Restore have been hard at work to bring back the heath hen as part of an ambitious avian de-extinction project. The project got started when Ryan Phelan, who co-founded Revive and Restore with her husband, scientist and publisher of the Whole Earth Catalogue, Stewart Brand, began to think broadly about the goals for their organization.

“We started by saying what’s the most wild idea possible?” Ms. Phelan said. “What’s the most audacious? That would be bringing back an extinct species.”

Read the piece in full here.

Planet DebianDaniel Silverstone: RFH: Naming things is hard

As with all things in computing, one of two problems always seem to raise their ugly heads… We either have an off-by-one error, or we have a caching error, or we have a naming problem.

Lars and I have been working on an acceptance testing tool recently. You may have seen the soft launch announcement on Lars' blog. Sadly since that time we've discovered that Fable is an overloaded name in the domain of software quality assurance and we do not want to try and compete with Fable since (a) they were there first, and (b) accessibility is super-important and we don't want to detract from the work they're doing.

As such, this is a request for help. We need to name our tool usefully, since how can we make a git repository until we have a name? Previous incarnations of the tool were called Yarn and we chose Fable to carry on the sense of telling a story (the fundamental unit of testing in these systems is a scenario), but we are not wedded to the idea of continuing in the same vein.

If you have an idea for a name for our tool, please consider reading about it on the Fable website, and then either comment here, or send me an email, prod me on IRC, or indeed any of the various ways you have to find me.

Worse Than FailureTeleported Release

Matt works at an accounting firm, as a data engineer. He makes reports for people who don’t read said reports. Accounting firms specialize in different areas of accountancy, and Matt’s firm is a general firm with mid-size clients.

The CEO of the firm is a legacy from the last century. The most advanced technology on his desk is a business calculator and a pencil sharpener. He still doesn’t use a cellphone. But he does have a son, who is “tech savvy”, which gives the CEO a horrible idea of how things work.

Usually, the CEO’s requests are pretty light: sorting Excel files or re-sorting the output of an existing report. Sometimes the requests are bizarre or utter nonsense. And, because the boss doesn’t know what the technical folks are doing, some of the IT staff may be a bit lazy about following best practices.

This means that most of Matt’s morning is spent doing what is essentially Tier 1 support before he gets to his real job. Recently, the crunch was worse than usual, as actual support person Lucinda was out on maternity leave, and Jackie, the one other developer, was off on vacation on a foreign island with no Internet. Matt was in the middle of eating a delicious lunch of take-out lo mein when his phone rang. He sighed when he saw the number.

“Matt!” the CEO exclaimed. “Matt! We need to do a build of the flagship app! And a deploy!”

The app was rather large, and a build could take upwards of 45 minutes, depending on the day and how the IT gods were feeling. But the process was automated: the latest changes all got built and deployed each night. Anything approved was released within 24 hours. With everyone out of the office, there hadn’t been any approved changes for a few weeks.

Matt checked GitHub to see if something had gone wrong with the automated build. Everything was fine.

“Okay, so I’m seeing that everything built on GitHub and everything is available in production,” Matt said.

“I want you to do a manual build, like you used to.”

“If I were to compile right now, it could take quite a while, and redeploying runs the risk of taking our clients offline, and nothing would be any different.”

“Yes, but I want a build that has the changes which Jackie was working on before she left for vacation.”

Matt checked the commit history, and sure enough, Jackie hadn’t committed any changes since two weeks before leaving on vacation. “It doesn’t look like she pushed those changes to GitHub.”

“Githoob? I thought everything was automated. You told me the process was automated,” the CEO said.

“It’s kind of like…” Matt paused to think of an analogy that could explain this to a golden retriever. “Your dishwasher, you could put a timer on it to run it every night, but if you don’t load the dishwasher first, nothing gets cleaned.”

There was a long pause as the CEO failed to understand this. “I want Jackie’s front-page changes to be in the demo I’m about to do. This is for Initech, and there’s millions of dollars riding on their account.”

“Well,” Matt said, “Jackie hasn’t pushed- hasn’t loaded her metaphorical dishes into the dishwasher, so I can’t really build them.”

“I don’t understand, it’s on her computer. I thought these computers were on the cloud. Why am I spending all this money on clouds?”

“If Jackie doesn’t put it on the cloud, it’s not there. It’s uh… like a fax machine, and she hasn’t sent us the fax.”

“Can’t you get it off her laptop?”

“I think she took it home with her,” Matt said.

“So?”

“Have you ever seen Star Trek? Unless Scotty can teleport us to Jackie’s laptop, we can’t get at her files.”

The CEO locked up on that metaphor. “Can’t you just hack into it? I thought the NSA could do that.”

“No-” Matt paused. Maybe Matt could try and recreate the changes quickly? “How long before this meeting?” he asked.

“Twenty minutes.”

“Just to be clear, you want me to do a local build with files I don’t have by hacking them from a computer which may or may not be on and connected to the Internet, and then complete a build process which usually takes 45 minutes- at least- deploy to production, so you can do a demo in twenty minutes?”

“Why is that so difficult?” the CEO demanded.

“I can call Jackie, and if she answers, maybe we can figure something out.”

The CEO sighed. “Fine.”

Matt called Jackie. She didn’t answer. Matt left a voicemail and then went back to eating his now-cold lo mein.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Planet DebianMike Gabriel: Release of nx-libs 3.5.99.22 (Call for Testing: Keyboard auto-grab Support)

Long time not blogged about, however, there is a new release of nx-libs: nx-libs 3.5.99.22.

What is nx-libs?

The nx-libs team maintains software originally developed by NoMachine under the name nx-X11 (version 3), or shorter: NXv3. For years now, a small team of volunteers has been continually improving, fixing and maintaining the code base of NXv3 (after some major and radical cleanups). NXv3, aka x2goagent, has been the only graphical backend in X2Go [0], a remote desktop framework for Linux terminal servers, over the past years.

(Spoiler: in the near future, there will be two graphical backends for X2Go sessions, if you got curious... stay tuned...).

Credits

You may have noticed that I skipped announcing several releases of nx-libs. All interim releases should have had their own announcements, indeed, as each of them deserved one. So I am sorry and I dearly apologize for not mentioning all the details of each individual release. I am sorry for not giving credit to the team of developers around me who do pretty hard work on keeping this beast intact.

All the more, let me here and now give special credit to Ulrich Sibiller, Mihai Moldovan and Mario Trangoni for keeping the torch burning and for having achieved awesome results in each of the recent nx-libs releases over the past year or so. Thanks, folks!!!

Luckily, Mihai Moldovan (X2Go Release Manager) wrote regular release announcements for every version of nx-libs that he pulled over to the X2Go Git site and the X2Go upstream-DEBs archive site [1]. Also a big thanks for this!

Changes for nx-libs 3.5.99.21 and 3.5.99.22

3.5.99.21

  • Ulrich Sibiller did a major hunt for memory leaks, double-frees and the like all over the code, and fixed several such issues. Most of them will be in the nx-libs shipped with Debian 10.1. (The one that is not yet in there was only discovered recently.)
  • There was also work done on the reparenting code when switching between fullscreen and windowed desktop session mode.
  • Ulrich Sibiller also reworked the NX-specific part of the XKB integration and cleaned up Font path handling.

For a complete list of changes, see the 3.5.99.21 upstream release commit [2].

3.5.99.22

  • The nxagent DDX code now uses the SAFE_Xfree and SAFE_free macros recently introduced everywhere.
  • The NX splash screen code had been tidied up entirely, plus: with nxagent option "-wr" you can now create a root window with white background.
  • Keyboard Auto-Grab support (see below)
  • Fix a double-free situation in the RandR implementation that occurred on NX session resumption

For a complete list of changes, see the 3.5.99.22 upstream release commit [3].

The new Feature: Keyboard auto-grab Support ( call for testing )

There is a new feature in nx-libs (aka nxagent, aka x2goagent) that people may find interesting. Ulrich Sibiller and I have been working on and off on a keyboard auto-grab feature for NX. See various discussions on nx-libs's issue tracker [4, 5, 6].

With keyboard auto-grab enabled (the toggle switch is CTRL+ALT+G, configurable via /etc/{nxagent,x2goagent}/keystrokes.cfg), you can now run e.g. an "i3" [7] (or "awesome" [8]) window manager nested inside X2Go sessions with the local desktop environment also being an "i3" (or "awesome") window manager. I hear some of you cheering now, in fact. Yes, it has finally become possible.

Before we had this keyboard auto-grab feature in NX, it was not possible to connect to an X2Go session running an i3 desktop from within an i3 window manager running on the local $DISPLAY. Keyboard input would never really end up in the X2Go session.

With keyboard auto-grab enabled, you can now nest "i3" (or "awesome") based desktops (local + remote via X2Go). If keyboard auto-grab is enabled, nearly all keyboard events (except the NX keystrokes) end up in the X2Go session window. With auto-grab disabled, all keyboard events end up in the local $DISPLAY's i3 (or "awesome") desktop.
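
For the curious, a feature like this rests on the ordinary X11 keyboard-grab mechanism: while a grab is active, the server delivers all key events to the grabbing client regardless of input focus. Here is a conceptual sketch with python-xlib, not nxagent's actual implementation:

# While the grab below is active, all keyboard input reaches this client,
# which is exactly why a nested session needs a toggle to take and release it.
from Xlib import X, display

d = display.Display()
root = d.screen().root
root.grab_keyboard(True, X.GrabModeAsync, X.GrabModeAsync, X.CurrentTime)
d.flush()
# ... key events are now delivered to us regardless of focus ...
d.ungrab_keyboard(X.CurrentTime)
d.flush()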

Here is a little command line LOVE, to play with:

Log into a local desktop session running the i3 window manager (if you have never touched i3, use awesome). If you don't know what tiling window managers are and how to use them... try them out first.

If you are in a local i3wm session, do this from one terminal:

sudo apt-get install nxagent
nxagent -ac :1

And from another terminal:

export DISPLAY=:1
STARTUP=i3 dbus-run-session /etc/X11/Xsession

(You could do this more than once... You can use STARTUP=awesome instead of STARTUP=i3, too.)

Have fun with nested tiling desktop environments tiled all over your screen. Use CTRL-ALT-G to toggle keyboard auto-grabbing for each NX session window individually. By default, auto-grab is disabled on startup of nxagent, so the local i3wm gets all the keyboard attention. Move the mouse over an nxagent + i3 window / tile and hit CTRL-ALT-G. Now the NX session window has all keyboard attention as long as the mouse pointer hovers above it.

And: please report any special and unexpected effects to the nx-libs issue tracker [9]. Thanks!

Have fun!!! Mike Gabriel (aka sunweaver)

References

Planet DebianMark Brown: Linux Audio Miniconference 2019

As in previous years we’re going to have an audio miniconference so we can get together and talk through issues, especially design decisions, face to face. This year’s event will be held on Thursday October 31st in Lyon, France, the day after ELC-E. This will be held at the Lyon Convention Center (the ELC-E venue), generously sponsored by Intel.

As with previous years let’s pull together an agenda through a mailing list discussion – this announcement has been posted to alsa-devel as well, and the most convenient thing would be to follow up there. Of course if we can sort things out more quickly via the mailing list that’s even better!

If you’re planning to attend please fill out the form here.

This event will be covered by the same code of conduct as ELC-E.

Thanks again to Intel for supporting this event.

Krebs on SecurityCybersecurity Firm Imperva Discloses Breach

Imperva, a leading provider of Internet firewall services that help Web sites block malicious cyberattacks, alerted customers on Tuesday that a recent data breach exposed email addresses, scrambled passwords, API keys and SSL certificates for a subset of its firewall users.

Redwood Shores, Calif.-based Imperva sells technology and services designed to detect and block various types of malicious Web traffic, from denial-of-service attacks to digital probes aimed at undermining the security of Web-based software applications.

Image: Imperva

Earlier today, Imperva told customers that it learned on Aug. 20 about a security incident that exposed sensitive information for some users of Incapsula, the company’s cloud-based Web Application Firewall (WAF) product.

“On August 20, 2019, we learned from a third party of a data exposure that impacts a subset of customers of our Cloud WAF product who had accounts through September 15, 2017,” wrote Heli Erickson, director of analyst relations at Imperva.

“We want to be very clear that this data exposure is limited to our Cloud WAF product,” Erickson’s message continued. “While the situation remains under investigation, what we know today is that elements of our Incapsula customer database from 2017, including email addresses and hashed and salted passwords, and, for a subset of the Incapsula customers from 2017, API keys and customer-provided SSL certificates, were exposed.”

Companies that use the Incapsula WAF route all of their Web site traffic through the service, which scrubs the communications for any suspicious activity or attacks and then forwards the benign traffic on to its intended destination.

Rich Mogull, founder and vice president of product at Kansas City-based cloud security firm DisruptOps, said Imperva is among the top three Web-based firewall providers in business today.

According to Mogull, an attacker in possession of a customer’s API keys and SSL certificates could use that access to significantly undermine the security of traffic flowing to and from a customer’s various Web sites.

At a minimum, he said, an attacker in possession of these key assets could reduce the security of the WAF settings and exempt or “whitelist” from the WAF’s scrubbing technology any traffic coming from the attacker. A worst-case scenario could allow an attacker to intercept, view or modify traffic destined for an Incapsula client Web site, and even to divert all traffic for that site to or through a site owned by the attacker.

“Attackers could whitelist themselves and begin attacking the site without the WAF’s protection,” Mogull told KrebsOnSecurity. “They could modify any of the Incapsula security settings, and if they got [the target’s SSL] certificate, that can potentially expose traffic. For a security-as-a-service provider like Imperva, this is the kind of mistake that’s up there with their worst nightmare.”

Imperva urged all of its customers to take several steps that might mitigate the threat from the data exposure, such as changing passwords for user accounts at Incapsula, enabling multi-factor authentication, resetting API keys, and generating/uploading new SSL certificates.

Alissa Knight, a senior analyst at Aite Group, said the exposure of Incapsula users’ scrambled passwords and email addresses was almost incidental given that the intruders also made off with customer API keys and SSL certificates.

Knight said although we don’t yet know the cause of this incident, such breaches at cloud-based firms often come down to small but ultimately significant security failures on the part of the provider.

“The moral of the story here is that people need to be asking tough questions of software-as-a-service firms they rely upon, because those vendors are being trusted with the keys to the kingdom,” Knight said. “Even if the vendor in question is a cybersecurity company, it doesn’t necessarily mean they’re eating their own dog food.”

Planet DebianHolger Levsen: 20190827-cccamp

On my way home from CCCamp 2019

During the last week I've been swimming many times in 4 different lakes, and enjoyed a great variety of talks, music, food, drinks and lots of nerdstuff. The small forest I put my tent in was illuminated by a disco ball. And almost best of all: until an hour ago, I had spent the last 72h offline with friends.

I <3 cccamp.

Planet DebianMike Gabriel: Debian goes libjpeg-turbo 2.0.x [RFH]

I recently uploaded libjpeg-turbo 2.0.2-1~exp1 to Debian experimental. This has been the first upload of the 2.0.x release series of libjpeg-turbo.

After 3 further upload iterations (~exp4, that is), the package now builds on nearly all architectures supported by Debian (all except 3).

@all: Please Test

For those architectures where libjpeg-turbo 2.0.2-1~exp* is already available in Debian experimental, please start testing your applications on Debian testing/unstable systems with libjpeg-turbo 2.0.2-1~exp* installed from experimental. If you observe any peculiarities, please file bugs against src:libjpeg-turbo on the Debian BTS. Thanks!

Please note: the major 2.x release series does not introduce an SOVERSION bump, so applications don't have to be rebuilt against the newer libjpeg-turbo. Simply replace the installed libjpeg62-turbo bin:pkg with the version from Debian experimental.

[RFH] FTBFS during Unit Tests

On the alpha, powerpc and sparc64 architectures, the builds [1] fail during unit tests:

301/302 Test #155: tjunittest-static-yuv-alloc .......................   Passed   60.08 sec
302/302 Test #156: tjunittest-static-yuv-nopad .......................   Passed   60.01 sec

99% tests passed, 2 tests failed out of 302

Total Test time (real) = 121.40 sec

The following tests FAILED:
     83 - djpeg-shared-3x2-float-prog-cmp (Failed)
    234 - djpeg-static-3x2-float-prog-cmp (Failed)
Errors while running CTest
make[1]: *** [Makefile:133: test] Error 8
make[1]: Leaving directory '/<<PKGBUILDDIR>>/obj-sparc64-linux-gnu'
dh_auto_test: cd obj-sparc64-linux-gnu && make -j8 test ARGS\+=-j8 returned exit code 2
make: *** [debian/rules:40: build-arch] Error 255

As I am not much of a porter, nor a JPEG adept, I'd appreciate some help from people with more porting and/or JPEG experience. If you feel called to work on this, please ping me on IRC (OFTC) so we can coordinate our research. The packaging Git of libjpeg-turbo has recently been migrated to Salsa [2].

References

Thanks in advance to anyone who chimes in,
Mike (aka sunweaver)

Planet DebianJonathan Dowland: Debian hiatus

Back in July I decided to take a (minimum) six-month hiatus from involvement in the Debian project. This is for a number of reasons, but I completely forgot to write about it publicly. So here we are.

I'm going to look at things again no sooner than January 2020 and decide whether or not (or how much) to pick it back up.

CryptogramThe Threat of Fake Academic Research

Interesting analysis of the possibility, feasibility, and efficacy of deliberately fake scientific research, something I had previously speculated about.

Worse Than FailureCodeSOD: Yesterday's Enterprise

I bumped into a few co-workers (and a few readers- that was a treat!) at Abstractions last week. My old co-workers informed me that the mainframe system, which had been “going away any day now” since about 1999, had finally gone away, as of this year.

A big part of my work at that job had been about running systems in parallel with the mainframe in some fashion, which meant I made a bunch of “datapump” applications which fed data into or pulled data out of the mainframe. Enterprise organizations often don’t know what their business processes are: the software which encodes the process is older than most anyone working in the organization, and it must work that way, because that’s the process (even though no one knows why).

Robert used to work for a company which offers an “enterprise” product, and since they know that their customers don’t actually know what they’re doing, this product can run in parallel with their existing systems. Of course, running in parallel means that you need to synchronize with the existing system.

So, for example, there were two systems. One we’ll call CQ and one we’ll call FP. Let’s say FP has the latest data. We need a method which updates CQ based on the state of FP. This is that method.

private boolean updateCQAttrFromFPAttrValue(CQRecordAttribute cqAttr, String cqtype,
        Attribute fpAttr)
        throws Exception
    {
        AppLogService.debug("Invoking " + this.getClass().getName()
            + "->updateCSAttrFromFPAttrValue()");

        String csAttrName = cqAttr.getName();
        String csAttrtype = cqAttr.getAttrType();
        String str = avt.getFPAttributeValueAsString(fpAttr);
        if (str == null)
            return false;

        if (csAttrtype.compareTo(CQConstants.CQ_SHORT_STRING_TYPE) != 0
            || csAttrtype.compareTo(CQConstants.CQ_MULTILINE_STRING_TYPE) != 0)
        {
            String csStrValue = cqAttr.getStringValue();
            if (str == null) {
                return false;
            }
            if (csStrValue != null) {
                if (str.compareTo(csStrValue) == 0) // No need to update. Still
                // same values
                {
                    return false;
                }
            }
            cqAttr.setStringValue(str);
            AppLogService.debug("CQ Attribute Name- " + csAttrName + ", Type- "
                + csAttrtype + ", Value- " + str);
            AppLogService.debug("Exiting " + this.getClass().getName()
                + "->updateCSAttrFromFPAttrValue()");
            return true;
        }
        if (csAttrtype.compareTo(CQConstants.CQ_SHORT_STRING_TYPE) == 0) {
            AttributeChoice_type0 choicetype = fpAttr
                .getAttributeChoice_type0();
            if (choicetype.getCheckBox() != null) {
                boolean val = choicetype.getCheckBox().getValue();

                if (val) {
                    str = "1";
                }

                if (str.equals(cqAttr.getStringValue())) {
                    return false;
                }

                cqAttr.setStringValue(str);

                AppLogService.debug("CS Attribute Name- " + csAttrName
                    + ", Type- " + csAttrtype + ", Value- " + str);
                AppLogService.debug("Exiting " + this.getClass().getName()
                    + "->updateCQAttrFromFPAttrValue()");
                return true;
            }
            return false;
        }
        if (csAttrtype.compareTo(CQConstants.CQ_DATE_TYPE) == 0) {
            AttributeChoice_type0 choicetype = fpAttr
                .getAttributeChoice_type0();
            if (choicetype.getDate() != null) {
                Calendar cald = choicetype.getDate().getValue();
                if (cald == null) {
                    return false;
                } else {
                    SimpleDateFormat fmt = new SimpleDateFormat(template
                        .getAdapterdateformat());
                    cqAttr.setStringValue(fmt.format(cald.getTime()));
                }
                AppLogService.debug("CS Attribute Name- " + csAttrName
                    + ", Type- " + csAttrtype + ", Value- " + str);
                AppLogService.debug("Exiting " + this.getClass().getName()
                    + "->updateCSAttrFromFPAttrValue()");
                return true;
            }
            return false;
        }

        AppLogService.debug("Exiting " + this.getClass().getName()
            + "->updateCSAttrFromFPAttrValue()");
        return false;
    }

For starters, I have to say that the method name is a thing of beauty: updateCQAttrFromFPAttrValue. It’s meaningful if you know the organizational jargon, but utterly opaque to everyone else in the world. Of course, this is the last time the code is clear even to those folks, as the very first line is a log message which outputs the wrong method name: updateCSAttrFromFPAttrValue. After that, all of our cqAttr properties get stuffed into csAttr variables.

And the fifth line: String str = avt.getFPAttributeValueAsString(fpAttr);

avt stands for “attribute value translator”, and yes, everything is string-ly typed, because of course it is.

That gets us five lines in, and it’s all downhill from there. Judging from all the getCheckBox() calls, we’re interacting with UI components directly, and pretty much every logging message outputs the wrong method name (except the rare case where it doesn’t).
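
For contrast, here is a hedged sketch of the same update-and-report-change logic with the null check done once and the per-type conversion pulled into a dispatch table. It is Python for brevity, and the attribute object and type names are hypothetical stand-ins for CQRecordAttribute and its string-ly typed constants:

from typing import Callable, Optional

# Hypothetical per-type converters; real code would reformat dates here.
CONVERTERS: dict[str, Callable[[str], str]] = {
    "SHORT_STRING": lambda v: "1" if v.lower() in ("true", "on") else v,
    "DATE":         lambda v: v,
}

def sync_attr(cq_attr, attr_type: str, fp_value: Optional[str]) -> bool:
    """Copy fp_value onto cq_attr; report whether anything changed."""
    if fp_value is None:
        return False
    new_value = CONVERTERS.get(attr_type, lambda v: v)(fp_value)
    if new_value == cq_attr.get_string_value():
        return False            # already in sync, nothing to do
    cq_attr.set_string_value(new_value)
    return True

Every branch of the original collapses into one code path, and any logging can name the method it is actually in.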

And as ugly and awful as this code is, it’s strangely familiar. Oh, I’ve never seen this particular bit of code before. But I have seen the code my old job wrote to keep the mainframe in sync with the Oracle ERP and the home-grown Access databases and internally developed dashboards and… it all looked pretty much like this.

The code you see here? This is the code that runs the world. This is what gets invoices processed, credit cards billed, inventory shipped, factories staffed, and hazardous materials accounted for.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet DebianColin Watson: man-db 2.8.7

I’ve released man-db 2.8.7 (announcement, NEWS), and uploaded it to Debian unstable.

There are a few things of note that I wanted to talk about here. Firstly, I made some further improvements to the seccomp sandbox originally introduced in 2.8.0. I do still think it’s correct to try to confine subprocesses this way as a defence against malicious documents, but it’s also been a pretty rough ride for some users, especially those who use various kinds of VPNs or antivirus programs that install themselves using /etc/ld.so.preload and cause other programs to perform additional system calls. As well as a few specific tweaks, a recent discussion on LWN reminded me that it would be better to make seccomp return EPERM rather than raising SIGSYS, since that’s easier to handle gracefully: in particular, it fixes an odd corner case related to glibc’s nscd handling.
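
The distinction is easy to demonstrate with the libseccomp Python bindings; this is a sketch of the mechanism only, since man-db builds its filter in C:

# "Deny by returning an error" vs "deny by killing the process": a filter
# that answers EPERM lets a preloaded library's stray socket() call fail
# gracefully instead of taking the whole process down with SIGSYS.
import errno
import seccomp

f = seccomp.SyscallFilter(defaction=seccomp.ALLOW)
# Instead of seccomp.KILL (which would raise SIGSYS), fail the call softly:
f.add_rule(seccomp.ERRNO(errno.EPERM), "socket")
f.load()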

Secondly, there was a build failure on macOS that took a while to figure out, not least because I don’t have a macOS test system myself. In 2.8.6 I tried to make life easier for people on this platform with a CFLAGS tweak, but I made it a bit too general and accidentally took away configure’s ability to detect undefined symbols properly, which caused very confusing failures. More importantly, I hadn’t really thought through why this change was necessary and whether it was a good idea. man-db uses private shared libraries to keep its executable size down, and it passes -no-undefined to libtool to declare that those shared libraries have no undefined symbols after linking, which is necessary to build shared libraries on some platforms. But the CFLAGS tweak above directly contradicts this! So, instead of playing core wars with my own build system, I did some refactoring so that the assertion that man-db’s shared libraries have no undefined symbols after linking is actually true: this involved moving decompression code out of libman, and arranging for the code in libmandb to take the database path as a parameter rather than as a global variable (something I’ve meant to fix for ages anyway; 252d7cbc23, 036aa910ea, a97d977b0b). Lesson: don’t make build system changes you don’t quite understand.

Planet DebianRuss Allbery: Review: Space Opera

Review: Space Opera, by Catherynne M. Valente

Publisher: Saga
Copyright: 2018
ISBN: 1-4814-9751-0
Format: Kindle
Pages: 304

Life is not, as humans had come to think, rare. The universe is packed with it, bursting at the seams. The answer to the Fermi paradox is not that life on Earth is a flukish chance. It's that, until recently, everyone else was distracted by total galactic war.

Thankfully by the time the other intelligent inhabitants of the galaxy stumble across Earth the Sentience Wars are over. They have found a workable solution to the everlasting problem of who counts as people and who counts as meat, who is sufficiently sentient and self-aware to be allowed to join the galactic community and who needs to be quietly annihilated and never spoken of again. That solution is the Metagalactic Grand Prix, a musical extravaganza that is also the highest-rated entertainment in the galaxy. All the newly-discovered species has to do is not finish dead last.

An overwhelmingly adorable giant space flamingo appears simultaneously to every person on Earth to explain this, and also to reassure everyone that they don't need to agonize over which musical act to send to save their species. As their sponsors and the last new species to survive the Grand Prix, the Esca have made a list of Earth bands they think would be suitable. Sadly though, due to some misunderstandings about the tragically short lifespans of humans, every entry on the list is dead but one: Decibel Jones and the Absolute Zeroes. Or their two surviving members, at least.

Space Opera is unapologetically and explicitly The Hitchhiker's Guide to the Galaxy meets Eurovision. Decibel Jones and his bandmate Oort are the Arthur Dent of this story, whisked away in an impossible spaceship to an alien music festival where they're expected to sing for the survival of their planet, minus one band member and well past their prime. When they were at the height of their career, they were the sort of sequin-covered glam rock act that would fit right in to a Eurovision contest. Decibel Jones still wants to be that person; Oort, on the other hand, has a wife and kids and has cashed in the glitterpunk life for stability. Neither of them have any idea what to sing, assuming they even survive to the final round; sabotage is allowed in the rules (it's great for ratings).

I love the idea of Eurovision, one that it shares with the Olympics but delivers with less seriousness and therefore possibly more effectiveness. One way to avoid war is to build shared cultural ties through friendly competition, to laugh with each other and applaud each other, and to make a glorious show out of it. It's a great hook for a book. But this book has serious problems.

The first is that emulating The Hitchhiker's Guide to the Galaxy rarely ends well. Many people have tried, and I don't know of anyone who has succeeded. It sits near the top of many people's lists of the best humorous SF not because it's a foundational model for other people's work, but because Douglas Adams had a singular voice that is almost impossible to reproduce.

To be fair, Valente doesn't try that hard. She goes a different direction: she tries to stuff into the text of the book the written equivalent of the over-the-top, glitter-covered, hilariously excessive stage shows of unapologetic pop rock spectacle. The result... well, it's like an overstuffed couch upholstered in fuchsia and spangles, onto which have plopped the four members of a vaguely-remembered boy band attired in the most eye-wrenching shade of violet satin and sulking petulantly because you have failed to provide usable cans of silly string due to the unfortunate antics of your pet cat, Eunice (it's a long story involving an ex and a book collection), in an ocean-reef aquarium that was a birthday gift from your sister, thus provoking a frustrated glare across an Escher knot of brilliant yellow and now-empty hollow-sounding cans of propellant, when Joe, the cute blonde one who was always your favorite, asks you why your couch and its impossibly green rug is sitting in the middle of Grand Central Station, and you have to admit that you do not remember because the beginning of the sentence slipped into a simile singularity so long ago.

Valente always loves her descriptions and metaphors, but in Space Opera she takes this to a new level, one covered in garish, cheap plastic. Also, if you can get through the Esca's explanation of what's going on without wanting to strangle their entire civilization, you have a higher tolerance for weaponized cutesy condescension than I do.

That leads me back to Hitchhiker's Guide and the difficulties of humor based on bizarre aliens and ludicrous technology: it's not funny or effective unless someone is taking it seriously.

Valente includes, in an early chapter, the rules of the Metagalactic Grand Prix. Here's the first one:

The Grand Prix shall occur once per Standard Alumizar Year, which is hereby defined by how long it takes Aluno Secundus to drag its business around its morbidly obese star, get tired, have a nap, wake up cranky, yell at everyone for existing, turn around, go back around the other way, get lost, start crying, feel sorry for itself and give up on the whole business, and finally try to finish the rest of its orbit all in one go the night before it's due, which is to say, far longer than a year by almost anyone else's annoyed wristwatch.

This is, in isolation, perhaps moderately amusing, but it's the formal text of the rules of the foundational event of galactic politics. Eurovision does not take itself that seriously, but it does have rules, which you can read, and they don't sound like that, because this isn't how bureaucracies work. Even bureaucracies that put on ridiculous stage shows. This shouldn't have been the actual rules. It should have been the Hitchhiker's Guide entry for the rules, but this book doesn't seem to know the difference.

One of the things that makes Hitchhiker's Guide work is that much of what happens is impossible for Arthur Dent or the reader to take seriously, but to everyone else in the book it's just normal. The humor lies in the contrast.

In Space Opera, no one takes anything seriously, even when they should. The rules are a joke, the Esca think the whole thing is a lark, the representatives of galactic powers are annoying contestants on a cut-rate reality show, and the relentless drumbeat of more outrageous descriptions never stops. Even the angst is covered in glitter. Without that contrast, without the pause for Arthur to suddenly realize what it means for the planet to be destroyed, without Ford Prefect dryly explaining things in a way that almost makes sense, the attempted humor just piles on itself until it collapses under its own confusing weight. Valente has no characters capable of creating enough emotional space to breathe. Decibel Jones only does introspection by moping, Oort is single-note grumbling, and each alien species is more wildly fantastic than the last.

This book works best when Valente puts the plot aside and tells the stories of the previous Grands Prix. By that point in the book, I was somewhat acclimated to the over-enthusiastic descriptions and was able to read past them to appreciate some entertainingly creative alien designs. Those sections of the book felt like a group of friends read a dozen books on designing alien species, dropped acid, and then tried to write a Traveller supplement. A book with those sections and some better characters and less strained writing could have been a lot of fun.

Unfortunately, there is a plot, if a paper-thin one, and it involves tedious and unlikable characters. There were three people I truly liked in this book: Decibel's Nani (I'm going to remember Mr. Elmer of the Fudd) who appears only in quotes, Oort's cat, and Mira. Valente, beneath the overblown writing, does some lovely characterization of the band as a trio, but Mira is the anchor and the only character of the three who is interesting in her own right. If this book had been about her... well, there are still a lot of problems, but I would have enjoyed it more. Sadly, she appears mostly around the edges of other people's manic despair.

That brings me to a final complaint. The core of this book is musical performance, which means that Valente has set herself the challenging task of describing music and performance sufficiently well to give the reader some vague hint of what's good, what isn't, and why. This does not work even a little bit. Most of the alien music is described in terms of hyperspecific genres that the characters are assumed to have heard of and haven't, which was a nice bit of parody of musical writing but which doesn't do much to create a mental soundtrack. The rest is nonspecific superlatives. Even when a performance is successful, I had no idea why, or what would make the audience like one performance and not another. This would have been the one useful purpose of all that overwrought description.

Clearly some people liked this book well enough to nominate it for awards. Humor is unpredictable; I'm sure there are readers who thought Space Opera was hilarious. But I wanted to salvage about 10% of this book, three of the supporting characters, and a couple of the alien ideas, and transport them into a better book far away from the tedious deluge of words.

I am now inspired to re-read The Hitchhiker's Guide to the Galaxy, though, so there is that.

Rating: 3 out of 10

,

Chaotic IdealismHow I Live Now

Years ago, when I was a biomedical engineering major and I thought I was going to be employable, I lived in an apartment and had a car and did all those things non-disabled people do. And I was stressed out, really stressed out, living on the edge of independence and just teetering, trying to keep my balance.

Eventually I switched majors from BME to psychology–an easier program, and one that interested me.

The car didn’t last long, totaled thanks to the poor reflexes and lack of short-notice judgment that make me a dangerous driver. My driver’s license ran out; now I just have a state ID. I moved closer to WSU, but my executive function was still bad, and it was hard for me to get to class. The university’s disability services sent a van across the street to pick me up. I forgot to study; they provided one of their testing rooms, distraction-free, so I would have somewhere away from the temptations of my apartment to study. They interceded with professors and got me extra time.

I was taking classes part-time, with intensive help from the department of disability services; I couldn’t sustain full-time work. If Wright State hadn’t been willing to go out of its way for me, I’d never have gotten a degree at all. I was diagnosed with dysthymia as well as episodic major depression, which explained why I never seemed to get my energy back after an episode.

I graduated. GPA 3.5, respectable. Dreaming of graduate school. Blew the practice GRE out of the water.

I tried to get a job. I worked with my job coach for more than a year. I wanted a graduate assistantship, but nobody wanted me. We looked at jobs that would let me use my education, but nobody was hiring. Eventually we branched out into more low-level work–hospital receptionist, dog kennel attendant, pharmacy technician. They were all part-time; by that point I knew better than to assume I could stick it out for a 40-hour work week.

The pharmacy tech job almost succeeded, but the boss couldn’t work with the assisted transport service that could only deliver me between the hours of 9 and 3–plus, they’d assured me it was part time, only to schedule me for 35 hours. I can only assume they hired “part-time” workers to avoid paying them benefits.

I signed up with Varsity Tutors to teach math, science, writing, and statistics. I enjoyed the work, especially when I got to use my writing ability to help someone communicate clearly, or made statistics understandable to someone unused to thinking about math. But it wasn’t steady work; you were competing with all the other tutors. You had to accept a new assignment within seconds, even before you knew what it was or whether you could teach that topic, because if you didn’t someone else would click on it first. Students paid a huge fee–$50 an hour or thereabouts–of which we only got about $10. Sometimes, when I grabbed a job that involved teaching something I myself hadn’t learned yet, I had to spend hours preparing for a one-hour session–and no, preparation hours aren’t paid.

I grew tired of cheating the customers; I’m not worth a $50-an-hour tutoring fee, and practically all of the money went to the company for doing nothing more than maintaining a Web service to match tutors and clients. And since I’d paid, out of my own pocket, for a tablet, Web cam, and internet connection, I hadn’t actually made any money anyway. I suppose I would have, if I’d stuck with it, but I just don’t like feeling so dishonest. It’s been more than a year since I last had contact with them, so I can say that. No more non-disclosure agreement. I’m sure they haven’t changed, though.

I was running out of money. My disability payments couldn’t pay for my rent. Eventually, a friend who was remodeling a house in a Cincinnati suburb offered me a rented room, within my means, and I accepted.

For a year, I lived in a room of a house undergoing remodeling. Eventually, I moved downstairs, into a finished basement room. College loan companies bombarded me with mail, demanding money I didn’t have. With the US government becoming increasingly unstable, I worried that if I even tried to work, I might lose Medicaid, and without a Medicaid buy-in available, I would have to choose between working and taking my medication (note: I cannot work if I am not taking my meds; in fact, I am in deadly danger if I do not take my meds). It didn’t help that my area has no particularly good public transport service, and the assisted transport service is–as always–unreliable and cannot be used to get to work.

Eventually I gave in. I applied for permanent disability discharge of my student loans, and was granted it. I feel dishonest–again–for not being able to predict, when I got my degree, that it wouldn’t make me employable. But there it is. The world doesn’t like to hire people who are different, or who need accommodations, or who can’t fit into the machinery of society.

But a person can’t just sit around. I do a lot of volunteer work now. I’m the primary researcher for ASAN’s disability day of mourning web site; I spend an hour or more every day monitoring the news, keeping records, and writing bios of disabled people murdered by their families and caregivers. I’ve kept up with my own Autism Memorial site, too, and the list is nearly 500 names long now. Seems like a lot, but my spreadsheet of disabled homicide victims in general is approaching five thousand.

Two days a week, I volunteer at the library. I put away books, straighten shelves, help patrons find things. The board of directors of the library fired all the pages years ago as a cost-cutting measure, so it’s volunteers like me that keep the books on the shelves while the employees are stuck manning the checkout desk or the book return. I find the work very meaningful, especially in the current political climate; libraries are wonderful, subversive places that teach a person to think on their own.

In the backyard of the house, I’m growing a garden. Gardening is new to me, but last year I had an overabundance of cherry tomatoes, and this year I’m growing tomatoes, peppers, cucumbers, carrots, sunflowers, and various herbs. I keep the lawn mowed and the bushes trimmed. The garden is a good thing, because lately my food stamps have been cut and I can’t really afford produce anymore.

My housemate’s girlfriend moved in with him last summer. She’s a sweet teacher with two guinea pigs and a love of stories. On Fridays, we drive for an hour to go play D&D with friends, and I bake cookies. I’ve learned to bake cookies over the last few years; at first it was just frozen cookie dough, then from scratch. I’ve gotten pretty good at it.

After my cat Tiny died of kidney failure, Christy got more vocal and demanding. She yells at me now when she wants attention, and climbs up on my bed to snuggle with me. She seems to think she needs to do the job of two cats. She’s getting older now, less able to climb to the top of the furniture or snatch a fly out of the air with her paws; but she still gets the kitty crazies, running around and skating on the rag rugs I made to keep the concrete floor from being quite so chilly.

I’m still myself–idealistic, protective, with a deep need to be useful. Living now is easier than it used to be when I had college loans; I just don’t buy anything I don’t absolutely need, help where I can, and let the rest go. I still have to deal with depression and with the executive dysfunction and weird brain of autism, but that’s a part of me, and I see no sense in looking down on myself just because I’m disabled.

I worry about the future. Just when it’s becoming crucial, our country’s dropping the ball on climate change. Our president is erratic, untrustworthy, and unethical. Authoritarianism looms large on the horizon. I do my best as a private citizen to help change things–with a focus on preserving democracy–but it’s still frightening, because disabled people are always the ones who get hurt first, right along with the poor and the minorities. I have quite a few deaths in ICE detainment in that database of mine, all of disabled immigrants. Why do people have to hate each other so much? Life is not a zero-sum game; if we help others, we ourselves benefit. We have so much to give; why are we refusing to share?

I find meaning in life from all the little things I do to make the world a little better, even if it’s just making cookies or showing a kid where to find the “Harry Potter” books. I used to think I might do something grand with my life, but now I don’t really think so. I think maybe a better world is made up of a lot of little people, all doing little things, all pushing in the right direction, until the sheer weight of numbers can move mountains.

Planet DebianAlberto García: The status of WebKitGTK in Debian

Like all other major browser engines, WebKit is a project that evolves very fast with releases every few weeks containing new features and security fixes.

WebKitGTK is available in Debian under the webkit2gtk name, and we are doing our best to provide the most up-to-date packages for as many users as possible.

I would like to give a quick summary of the status of WebKitGTK in Debian: what you can expect and where you can find the packages.

  • Debian unstable (sid): The most recent stable version of WebKitGTK (2.24.3 at the time of writing) is always available in Debian unstable, typically on the same day as the upstream release.
  • Debian testing (bullseye): If no new bugs are found, that same version will be available in Debian testing a few days later.
  • Debian stable (buster): WebKitGTK is covered by security support for the first time in Debian buster, so stable releases that contain security fixes will be made available through debian-security. The upstream dependencies policy guarantees that this will be possible during the buster lifetime. Apart from security updates, users of Debian buster will get newer packages during point releases.
  • Debian experimental: The most recent development version of WebKitGTK (2.25.4 at the time of writing) is always available in Debian experimental.

In addition to that, the most recent stable versions are also available as backports.

  • Debian stable (buster): Users can get the most recent stable releases of WebKitGTK from buster-backports, usually a couple of days after they are available in Debian testing.
  • Debian oldstable (stretch): Where possible, we also provide backports for stretch using stretch-backports-sloppy. Due to older or missing dependencies, some features may be disabled compared to the packages in buster or testing.

You can also find a table with an overview of all available packages here.

One last thing: as explained in the release notes, users of i386 CPUs without SSE2 support will have problems with the packages available in Debian buster (webkit2gtk 2.24.2-1). This problem has already been corrected in the packages available in buster-backports or in the upcoming point release.

CryptogramDetecting Credit Card Skimmers

Modern credit card skimmers hidden in self-service gas pumps communicate via Bluetooth. There's now an app that can detect them:

The team from the University of California San Diego, who worked with other computer scientists from the University of Illinois, developed an app called Bluetana which not only scans and detects Bluetooth signals, but can actually differentiate those coming from legitimate devices -- like sensors, smartphones, or vehicle tracking hardware -- from card skimmers that are using the wireless protocol as a way to harvest stolen data. The full details of what criteria Bluetana uses to differentiate the two isn't being made public, but its algorithm takes into account metrics like signal strength and other telltale markers that were pulled from data based on scans made at 1,185 gas stations across six different states.

LongNowDavid Byrne Launches New Online Magazine, Reasons to Be Cheerful

In his Long Now talk earlier this summer, David Byrne announced that he would soon launch a new website called Reasons to Be Cheerful. The premise, Byrne said, was to document stories and projects that give cause for optimism in troubled times. He was after solutions-oriented efforts that provided tangible lessons that could be broadly utilized in different parts of the world.

“I didn’t want something that would only be applied to one culture,” Byrne said.

Reasons to Be Cheerful has now officially launched. Here is Byrne on the project from the press release:

It often seems as if the world is going straight to Hell. I wake up in the morning, I look at the paper, and I say to myself, “Oh no!” Often I’m depressed for half the day. I imagine some of you feel the same.

Recently, I realized this isn’t helping. Nothing changes when you’re numb. So, as a kind of remedy, and possibly as a kind of therapy, I started collecting good news. Not schmaltzy, feel-good news, but stuff that reminded me, “Hey, there’s positive stuff going on! People are solving problems and it’s making a difference!”

I began telling others about what I’d found.

Their responses were encouraging, so I created a website called Reasons to be Cheerful and started writing. Later on, I realized I wanted to make the endeavor a bit more formal. So we got a team together and began commissioning stories from other writers and redesigned the website. Today, we’re relaunching Reasons to be Cheerful as an ongoing editorial project.

We’re telling stories that reveal that there are, in fact, a surprising number of reasons to feel cheerful — that provide a more optimistic and, we believe, more accurate depiction of the world. We hope to balance out some of the amplified negativity and show that things might not be as bad as we think. Stop by whenever you need a reminder.

Learn More

  • Byrne also released a trailer for the website, which you can watch below:
  • Watch David Byrne’s Long Now talk here.

Worse Than FailureCodeSOD: Checksum Yourself Before you Wrecksum Yourself

Mistakes happen. Errors crop up. Since we know this, we need to defend against it. When it comes to things like account numbers, we can make a rule about which numbers are valid by using a checksum. A simple checksum might be, "Add the digits together, and repeat until you get a single digit, which, after modulus with a constant, must be zero." This means that most simple data-entry errors will result in an invalid account number, but there's still a nice large pool of valid numbers to draw from.
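
To make that rule concrete, here is a minimal sketch of such a checksum in C#. It is purely illustrative: the method name and the modulus parameter are assumptions for the example, not anything from the article.

static bool IsValidAccountNumber(long number, int modulus)
{
    // Repeatedly sum the digits until a single digit remains.
    long n = number;
    while (n >= 10)
    {
        long sum = 0;
        for (long m = n; m > 0; m /= 10)
            sum += m % 10;   // peel off and add each digit arithmetically
        n = sum;
    }
    // The surviving digit, after modulus with a constant, must be zero.
    return n % modulus == 0;
}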

James works for a company that deals with tax certificates, and thus needs to generate numbers which meet a similar checksum rule. Unfortunately for James, this is how his predecessor chose to implement it:

while (true)
{
    digits = "";
    for (int i = 0; i < certificateNumber.ToString().Length; i++)
    {
        int doubleDigit = Convert.ToInt32(certificateNumber.ToString().Substring(i, 1)) * 2;
        digits += (doubleDigit.ToString().Length > 1
                    ? Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)) + Convert.ToInt32(doubleDigit.ToString().Substring(1, 1))
                    : Convert.ToInt32(doubleDigit.ToString().Substring(0, 1)));
    }
    int result = digits.ToString().Sum(c => c - '0');
    if ((result % 10) == 0)
        break;
    else
        certificateNumber++;
}

Whitespace added to make the ternary vaguely more readable.

We start by treating the number as a string, which allows us to access each digit individually, and as we loop, we'll grab a digit and double it. That, unfortunately, gives us a number, which is a big problem. There's absolutely no way to tell if a number is two digits long without turning it back into a string. Absolutely no way! So that's what we do. If the number is two digits, we'll split it back up and add those digits together.

Which again, gives us one of those pesky numbers. So once we've checked every digit, we'll convert that number back to a useful string, then Sum the characters in the string to produce a result. A result which, we hope, is divisible by 10. If not, we check the next number. Repeat and repeat until we get a valid result.
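
For contrast, the entire search can be done with plain arithmetic and no string round-trips at all. The following is only a sketch of the same doubling-and-summing logic, not the actual fix from James's codebase:

static long NextValidCertificateNumber(long certificateNumber)
{
    while (true)
    {
        long sum = 0;
        for (long n = certificateNumber; n > 0; n /= 10)
        {
            int doubled = (int)(n % 10) * 2;
            // For a two-digit double (10..18), summing its digits
            // is the same as subtracting 9.
            sum += doubled > 9 ? doubled - 9 : doubled;
        }
        if (sum % 10 == 0)
            return certificateNumber;
        certificateNumber++;
    }
}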

The worst part, though, is that you can see from the while loop that this is just dropped into a larger method. This isn't a single function which generates valid certificate numbers. This is a block that gets dropped in line. Similar, but slightly different blocks are dropped in when numbers need to be validated. There's no single isValidCertificate method.


Planet DebianUtkarsh Gupta: Farewell, GSoC o/

Hello, there.

In open source, we feel strongly that to really do something well, you have to get a lot of people involved.

Guess Linus Torvalds got that right from the start.
While GSoC 2019 comes to an end, this project hasn’t. With GSoC, I started this project from scratch and I guess this won’t “die” at an early age.

Here’s a quick recap:

My GSoC project is to package a software called Loomio.
A little about it, Loomio is a decision-making software, designed to assist groups with the collaborative decision-making process.
It is a free software web-application, where users can initiate discussions and put up proposals.

In the span of the last 3 months, I worked on creating a package of Loomio for the Debian repositories. Loomio is a big, complex piece of software to package.
With 484 directories and 4607 files in its code base, it has a huge number of Ruby and Node dependencies, along with a couple of fonts that it uses.
Of these, around 72 Ruby gems, 58 Node modules, 3 fonts, and 27 other packages (the reverse dependencies) needed work, both packaged and unpackaged libraries.

Also, little did I know about the need for a loomio-installer.
Thus a good amount of time went into that as well (which I also talked about in my first and second report).


Work done so far!

At the time of writing this report, the following work has been done:

NEW packages

Packages that have been uploaded to the archive:

» ruby-ahoy-matey
» ruby-aws-partitions
» ruby-aws-sdk-core
» ruby-aws-sdk-kms
» ruby-aws-sdk-s3
» ruby-aws-sigv4
» ruby-cancancan
» ruby-data-uri
» ruby-geocoder
» ruby-google-cloud-core
» ruby-google-cloud-env
» ruby-inherited-resources
» ruby-maxitest
» ruby-safely-block
» ruby-terrapin
» ruby-memory-profiler
» ruby-devise-i18n
» ruby-discourse-diff
» ruby-discriminator
» ruby-doorkeeper-i18n
» ruby-friendly-id
» ruby-has-scope
» ruby-has-secure-token
» ruby-heroku-deflater
» ruby-i18n-spec
» ruby-iso
» ruby-omniauth-openid-connect
» ruby-paper-trail
» ruby-referer-parser
» ruby-user-agent-parser
» ruby-google-cloud-translate
» ruby-maxminddb
» ruby-omniauth-ultraauth

Packages that are yet to be uploaded:

» ruby-arbre
» ruby-paperclip
» ruby-ahoy-email
» ruby-ransack
» ruby-benchmark-memory
» ruby-ammeter
» ruby-rspec-tag-matchers
» ruby-formtastic
» ruby-formtastic-i18n
» ruby-rails-serve-static-assets
» ruby-activeadmin
» ruby-rails-12factor
» ruby-rails-stdout-logging
» loomio-installer

Updated packages

» rails
» ruby-devise
» ruby-globalid
» ruby-pg
» ruby-activerecord-import
» ruby-rack-oauth2
» ruby-rugged
» ruby-task-list
» gem2deb
» node-find-up
» node-matcher
» node-supports-color
» node-array-union
» node-dot-prop
» node-flush-write-stream
» node-irregular-plurals
» node-loud-rejection
» node-make-dir
» node-tmp
» node-strip-ansi


Work left!

Whilst it is clear how big and complex Loomio is, it was not humanly possible to complete the entire packaging of Loomio.
At the moment, the following tasks remain for this project to get close to completion:

» Debug loomio-installer.
» Check which node dependencies are not really needed.
» Package and update the needed dependencies for loomio.
» Package loomio.
» Fix autopkgtests (if humanly possible).
» Maintain it for life :D


Other Debian activites!

Debian is more than just my GSoC organisation to me.
As my NM profile says and I quote,

Debian has really been an amazing journey, an amazing place, and an amazing family!

With such lovely people and teams and with my DM hat on, I have been involved with a lot more than just GSoC. In the last 3 months, my activity within Debian (other than GSoC) can be summarized as follows.

Cloud Team

Since I’ve been interested in the work they do, I joined the team recently and currently helping in packaging image finder.

NEW packages

» python-flask-marshmallow
» python-marshmallow-sqlalchemy


Perl Team

With Gregor, Intrigeri, Yadd, Nodens, and Bremner being there, I learned Perl packaging and helped in maintaining the Perl modules.

NEW packages

» libdata-dumper-compact-perl
» libminion-backend-sqlite-perl
» libmoox-shorthas-perl
» libmu-perl

Updated packages

» libasync-interrupt-perl
» libbareword-filehandles-perl
» libcatalyst-manual-perl
» libdancer2-perl
» libdist-zilla-plugin-git-perl
» libdist-zilla-plugin-makemaker-awesome-perl
» libdist-zilla-plugin-ourpkgversion-perl
» libdomain-publicsuffix-perl
» libfile-find-object-rule-perl
» libfile-flock-retry-perl
» libgeoip2-perl
» libgraphics-colornames-www-perl
» libio-aio-perl
» libio-async-perl
» libmail-box-perl
» libmail-chimp3-perl
» libmath-clipper-perl
» libminion-perl
» libmojo-pg-perl
» libnet-amazon-s3-perl
» libnet-appliance-session-perl
» libnet-cli-interact-perl
» libnet-frame-perl
» libnetpacket-perl
» librinci-perl
» libperl-critic-policy-variables-prohibitlooponhash-perl
» libsah-schemas-rinci-perl
» libstrictures-perl
» libsisimai-perl
» libstring-tagged-perl
» libsystem-info-perl
» libtex-encode-perl
» libxxx-perl


Python Team

I recently learned Python packaging; there are a couple of packages I worked on that I haven’t pushed yet, but plan to push them later this month.

» python3-dotenv
» python3-phonenumbers
» django-phonenumber-field
» django-phone-verify
» Helping newbies (thanks to DC19 talk).


JavaScript Team

Super thanks to Xavier (yadd) and Praveen for being right there. Worked on the following things.

» Helping in webpack transition (bit).
» Helping in nodejs transition.
» Helping in making all packages comply with pkg-js-tools.
» Packaging dependencies of ava.
» node-d3-request
» node-find-up
» node-matcher
» node-supports-color
» node-array-union
» node-dot-prop
» node-flush-write-stream
» node-irregular-plurals
» node-loud-rejection
» node-make-dir
» node-tmp
» node-strip-ansi


Golang Team

I joined the Golang team mostly to help with the GitLab stuff, and thus did the following things.

» gitlab-workhorse
» gitaly
» Upstream contribution to gitaly.


Ruby Team

This is where I started from. All thanks to Praveen, Abhijith, and Raju.
In the last 3 months, apart from maintaining packages for Loomio, I did the following things.

» Helping in maintaining GitLab (one of the maintainers).
» Setting up the fasttrack repo; announcements soon!
» Fixing gem2deb for adding d/upstream/metadata.
» Enabling Salsa CI for 1392 packages (yes, I broke salsa :/).
» Reviewing and sponsoring packages.
» Co-chairing the Ruby Team BoF.
» And others.


Others

» Part of DC19 Content Team (thanks to Antonio).
» Part of DC19 Bursary Team (thanks to Jonathan).
» Perl sprint (DebCamp).
» Newbie’s Perspective Towards Debian talk (Open day).
» Chairing Ruby Team BoF.
» Presenting my GSoC project.
» Part of DC19 Video Team.
» Talking about Debian elsewhere (cf: mail archive).
» DC21 Indian bid ;)
» Organising MiniDebConf Goa :D


Acknowledgement :)

Never forget your roots.

And I haven’t. The last 8 months with Debian have been super amazing. Nothing I’d like to change, even if I could. Every person here is a mentor to me.
But above all, there are a couple of people who helped me immensely.
Starting with Pirate Praveen, Rajudev, Abhijith, Sruthi, Gregor, Xavier, Intrigeri, Nodens, Holger, Antonio Terceiro, Kanashiro, Boutil, Georg, Sanyam, Sakshi, Jatin, and Samyak. And of course, my little brother, Aryan.
Sorry if I’m forgetting anyone. Thank y’all :)

NOTE: Sorry for making this extremely long; someone told me to put in all the crap I did in the last 90 days :P
Also, sorry if it gets too long on planet.d.o. :)

Until next time.
:wq for today.

,

Planet DebianRuss Allbery: Review: A Memory Called Empire

Review: A Memory Called Empire, by Arkady Martine

Series: Teixcalaan #1
Publisher: Tor
Copyright: March 2019
ISBN: 1-250-18645-5
Format: Kindle
Pages: 462

Mahit Dzmare grew up dreaming of Teixcalaan. She learned its language, read its stories, and even ventured some of her own poetry, in love with the partial and censored glimpses of its culture that were visible outside of the empire. From her home in Lsel Station, an independent mining station, Teixcalaan was a vast, lurking weight of history, drama, and military force. She dreamed of going there in person. She did not expect to be rushed to Teixcalaan as the new ambassador from Lsel Station, bearing a woefully out-of-date imago that she's barely begun to integrate, with no word from the previous ambassador and no indication of why Teixcalaan has suddenly demanded a replacement.

Lsel is small, precarious, and tightly managed, a station without a planet and with only the resources that it can maintain and mine for itself, but it does have a valuable secret. It cannot afford to lose vital skills to accident or age, and therefore has mastered the technology of recording people's personalities, memories, and skills using a device called an imago. The imago can then be implanted in the brain of another, giving them at first a companion in the back of their mind and, with time, a unification that grants them inherited skills and memory. Valuable expertise in piloting, mining, and every other field of importance need not be lost to death, but can be preserved through carefully tended imago lines and passed on to others who test as compatible.

Mahit has the imago of the previous ambassador to Teixcalaan, but it's a copy from five years after his appointment, and he was the first of his line. Yskandr Aghavn served another fifteen years before the loss of contact and Teixcalaan's emergency summons, never returning home to deposit another copy. Worse, the implantation had to be rushed due to Teixcalaan's demand. Rather than the normal six months of careful integration under active psychiatric supervision, Mahit has had only a month with her new imago, spent on a Teixcalaan ship without any Lsel support.

With only that assistance from home, Mahit's job is to navigate the complex bureaucracy and rich culture of an all-consuming interstellar empire to prevent the ruthlessly expansionist Teixcalaanli from deciding to absorb Lsel Station like they have so many other stations, planets, and cultures before them. Oh, and determine what happened to her predecessor, while keeping the imagos secret.

I love when my on-line circles light up with delight about a new novel, and it turns out to be just as good as everyone said it was.

A Memory Called Empire is a fascinating, twisty, complex political drama set primarily in the City at the heart of an empire, a city filled with people, computer-controlled services, factions, maneuvering, frighteningly unified city guards, automated defense mechanisms, unexpected allies, and untrustworthy offers. Martine weaves a culture that feels down to its bones like an empire at the height of its powers and confidence: glorious, sophisticated, deeply aware of its history, rich in poetry and convention, inward-looking, and alternately bemused by and contemptuous of anyone from outside what Teixcalaan defines as civilization, when Teixcalaan thinks of them at all.

But as good as the setting is (and it's superb, with a deep, lived-in feel), the strength of this book is its characters. Mahit was expecting to be the relatively insignificant ambassador of a small station, tasked with trade negotiations and routine approvals and given time to get her feet under her. But when it quickly becomes clear that Yskandr was involved in some complex machinations at the heart of the Teixcalaan government, she shows admirable skill for thinking on her feet, making fast decisions, and mixing thoughtful reserve and daring leaps of judgment.

Mahit is here alone from Lsel, but she's not without assistance. Teixcalaan has assigned her an asekreta, a cultural liaison who works for the Information Ministry. Her name is Three Seagrass, and she is the best part of this book. Mahit starts wisely suspicious of her, and Three Seagrass starts carefully and thoroughly professional. But as the complexities of Mahit's situation mount, she and Three Seagrass develop a complex and delightful friendship, one that slowly builds on cautious trust and crosses cultural boundaries without ignoring them. Three Seagrass's nearly-unflappable curiosity and guidance is a perfect complement to Mahit's reserve and calculated gambits, and then inverts beautifully later in the book when the politics Mahit uncovers start to shake Three Seagrass's sense of stability. Their friendship is the emotional heart of this story, full of delicate grace notes and never falling into stock patterns.

Martine also does some things with gender and sexuality that are remarkable in how smoothly they lie below the surface. Neither culture in this novel cares much about the gender configurations of sexual partnerships, which means A Memory Called Empire shares with Nicola Griffith novels an unmarked acceptance of same-sex relationships. It's also not eager to pair up characters or put romance at the center of the story, which I greatly appreciated. And I was delighted that the character who navigates hierarchy via emotional connection and tumbling into the beds of the politically influential is, for once, the man.

I am stunned that this is a first novel. Martine has masterful control over both the characters and plot, keeping me engrossed and fully engaged from the first chapter. Mahit's caution towards her possible allies and her discovery of the lay of the political land parallel the reader's discovery of the shape of the plot in a way that lets one absorb Teixcalaanli politics alongside her. Lsel is at the center of the story, but only as part of Teixcalaanli internal maneuvering. It is important to the empire but is not treated as significant or worthy of its own voice, which is a knife-sharp thrust of cultural characterization. And the shadow of Yskandr's prior actions is beautifully handled, leaving both the reader and Mahit wondering whether he was a brilliant strategic genius or in way over his head. Or perhaps both.

This is also a book about empire, colonization, and absorption, about what it's like to delight in the vastness of its culture and history while simultaneously fearful of drowning in it. I've never before read a book that captures the tension of being an ambassador to a larger and more powerful nation: the complex feelings of admiration and fear, and the need to both understand and respect and in some ways crave the culture while still holding oneself apart. Mahit is by turns isolated and accepted, and by turns craves acceptance and inclusion and is wary of it. It's a set of emotions that I rarely see in space opera.

This is one of the best science fiction novels I've read, one that I'll mention in the same breath as Ancillary Justice or Cyteen. It is a thoroughly satisfying story, one that lasted just as long as it should and left me feeling satiated, happy, and eager for the sequel. You will not regret reading this, and I expect to see it on a lot of award lists next year.

Followed by A Desolation Called Peace, which I've already pre-ordered.

Rating: 10 out of 10

Planet DebianAndrew Cater: Cambridge BBQ 2019 - 2

Another day with a garden full of people. A house full of coders, talkers, coffee drinkers and unexpected bread makers - including a huge fresh loaf. Playing "the DebConf card game" for the first time was confusing as anything and a lot of fun. The youngest person there turned out to be one of the toughest players.

Hotter than yesterday - 32 degrees as I've just driven back across country with the sun in my eyes. Sorry to leave everyone there for tomorrow's end of BBQ, but there'll be another opportunity.

Thanks even more to Steve, Jo and everyone there - it's been a fantastic weekend.

Planet DebianAndrew Cater: Cambridge BBQ 2019

Usual friendly Debian family chaos: a garden full of people last night: lots of chat, lots of catching up and conviviality including a birthday cake. The house was also full: games of cards ensued last thing at night :) Highlights: home made cookies, chilli and cheese bread [and the company as always]. One of the hotter days of the year at 30 degrees.

Now folk are filtering in: coffee machine is getting a workout and breakfast is happening. Lots more folk expected gradually as the morning progresses: it's 0955 UTC as I'm typing. Today is due to be hotter, apparently. Thanks to Steve and Jo for hosting, as always.

,

CryptogramFriday Squid Blogging: Vulnerabilities in Squid Server

It's always nice when I can combine squid and security:

Multiple versions of the Squid web proxy cache server built with Basic Authentication features are currently vulnerable to code execution and denial-of-service (DoS) attacks triggered by the exploitation of a heap buffer overflow security flaw.

The vulnerability present in Squid 4.0.23 through 4.7 is caused by incorrect buffer management which renders vulnerable installations to "a heap overflow and possible remote code execution attack when processing HTTP Authentication credentials."

"When checking Basic Authentication with HttpHeader::getAuth, Squid uses a global buffer to store the decoded data," says MITRE's description of the vulnerability. "Squid does not check that the decoded length isn't greater than the buffer, leading to a heap-based buffer overflow with user controlled data."

The flaw was patched by the web proxy's development team with the release of Squid 4.8 on July 9.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Valerie AuroraHow to avoid supporting sexual predators

[TW: child sex abuse]

Recently, I received an email from a computer security company asking for more information on why I refuse to work with them. My reason? The company was founded by a registered child sex offender who still serves as its CTO, which I found out during my standard client research process.

My first reaction was, “Do I really need to explain why I won’t work with you???” but as I write this, we’re at the part of the Jeffrey Epstein news cycle where we are learning about the people in computer science who supported Epstein—after Epstein pleaded guilty to two counts of “procuring prostitution with a child under 18,” registered as a sex offender, and paid restitution to dozens of victims. As someone who outed her own father as a serial child molester, I can tell you that it is quite common for people to support and help known sexual predators in this way.

I would like to share how I actively avoid supporting sexual predators, as someone who provides diversity and inclusion training, mostly to software companies:

  1. When a new client approaches me, I find the names of the CEO, CTO, COO, board members, and founders—usually on the “About Us” or “Who We Are” or “Founders” page of the company’s web site. Crunchbase and LinkedIn are also useful for this step.
  2. For each of the CEO, CTO, COO, board members, and/or founders, I search their name plus “allegations,” “sexism,” “sexual assault,” “sexual harassment,” and “women.” I do this for the company name too.
  3. If I find out any executives, board members, or founders have been credibly accused of sexual harassment or assault, I refuse to work with that company.
  4. I look up the funders of the company on Crunchbase. If any of their funders are listed on Sexism and Racism in Venture Capital, I give the company extra scrutiny.
  5. If the company agreed to take funding from a firm (or person) after knowing the lead partner(s) were sexual harassers or predators, I refuse to work with that company.

If you don’t have time to do this personally, I recommend hiring or contracting with someone to do it for you.

That’s just part of my research process (I search for other terms, such as “racism”). This has saved me from agreeing to help make money for a sexual predator or harasser many times. Specifically, I’ve turned down 13 out of 303 potential clients for this reason, or about 4% of clients who approached me. To be sure, it has also cost me money—I’d estimate at least $50,000—but I’d like to believe that my reputation and conscience are worth more than that. If you’re not in a position where you can say no to supporting a sexual predator, you have my sympathy and respect, and I hope you can find a way out sooner or later.

Your research process will look different depending on your situation, but the key elements will be:

  1. Assume that sexual predators exist in your field and you don’t know who all of them are.
  2. When you are asked to work with or support someone new, do research to find out if they are a sexual predator.
  3. When you find out someone is probably a sexual predator, refuse to support them.

What do I do if, say, the CEO has been credibly accused of sexual harassment or assault but the company has taken appropriate steps to make amends and heal the harm done to the victims? I don’t know, because I can’t remember a potential client who did that. I’ve had plenty that published a non-apology, forced victims to sign NDAs for trivial sums of money, or (very rarely) fired the CEO but allowed them to keep all or most of their equity, board seat, voting rights, etc. That’s not enough, because the CEO hasn’t shown remorse, made amends, or removed themselves from positions of power.

I don’t think all sexual predators should be ostracized completely, but I do think everyone has a moral responsibility not to help known sexual predators back into positions of power and influence without strong evidence of reform. Power and influence are privileges which should only be granted to people who are unlikely to abuse them, not rights which certain people “deserve” as long as they claim to have reformed. Someone with a history of sexually predatory behavior should be assumed to be dangerous unless exhaustively proven otherwise. One sign of complete reform is that the former sexual predator will themselves avoid and reject situations in which power and access would make sexual abuse easy to resume.

In this specific case, the CTO of this company maintains a public web site which briefly and vaguely mentions the harm done to victims of sex abuse—and then devotes the majority of the text to passionately advocating for the repeal of sex offender registry laws because of the incredible harm they do to the health and happiness of convicted sex offenders. So, no, I don’t think he has changed meaningfully, he is not a safe person to be around, he should not be the CTO of a computer security company, and I should not help him gain more wealth.

Don’t be the person helping the sexual predator insinuate themself back into a position with easy access to victims. If your first instinct is to feel sorry for the powerful and predatory, you need to do some serious work on your sense of empathy. Plenty of people have shared what it’s like to be the victim of sexual harassment and assault; go read their stories and try to imagine the suffering they’ve been through. Then compare that to the suffering of people who occasionally experience moderate consequences for sexually abusing people with less power than themselves. I hope you will adjust your empathy accordingly.

Sociological ImagesFamily Matters

The ‘power elite’ as we conceive it, also rests upon the similarity of its personnel, and their personal and official relations with one another, upon their social and psychological affinities. In order to grasp the personal and social basis of the power elite’s unity, we have first to remind ourselves of the facts of origin, career, and style of life of each of the types of circle whose members compose the power elite.

— C. Wright Mills. 1956. The Power Elite. Oxford University Press

President John F. Kennedy addresses the Prayer Breakfast in 1961. Wikimedia Commons.

A big question in political sociology is “what keeps leaders working together?” The drive to stay in public office and common business interests can encourage elites to cooperate, but politics is still messy. Different constituent groups and social movements demand that representatives support their interests, and the U.S. political system was originally designed to use this big, diverse set of factions to keep any single person or party from becoming too powerful.

Sociologists know that shared culture, or what Mills calls a “style of life,” is really important among elites. One of my favorite profiles of a style of life is Jeff Sharlet’s The Family, a look at how one religious fellowship has a big influence on the networks behind political power in the modern world. The book is a gripping case of embedded reporting that shows how this elite culture works. It also has a new documentary series:

When we talk about the religious right in politics, it is easy to jump to images of loud, pro-life protests and controversial speakers. What interests me about the Family is how the group has worked so hard to avoid this contentious approach. Instead, everything is geared toward simply getting newcomers to think of themselves as elites, bringing leaders together, and keeping them connected. A major theme in the first episode of the series is just how simple the theology is (“Jesus plus nothing”) and how quiet the group is, even drawing comparisons to the mafia.

Vipassana Meditation in Chiang Mai, Thailand. Source: Matteo, Flickr CC.

Sociologists see similar trends in other elite networks. In research on how mindfulness and meditation caught on in the corporate world, Jaime Kucinskas calls this “unobtrusive organizing.” Both the Family and the mindfulness movement show how leaders draw on core theological ideas in Christianity and Buddhism, but also modify those ideas to support their relationships in business and government. Rather than challenging those institutions, adapting and modifying these traditions creates new opportunities for elites to meet, mingle, and coordinate their work.

When we study politics and culture, it is easy to assume that core beliefs make people do things by giving them an agenda to follow. These cases are important because they show how that’s not always the point; sometimes core beliefs just shape how people do things in the halls of power.

Evan Stewart is an assistant professor of sociology at University of Massachusetts Boston. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramLicense Plate "NULL"

There was a DefCon talk by someone with the vanity plate "NULL." The California system assigned him every ticket with no license plate: $12,000.

Although the initial $12,000-worth of fines were removed, the private company that administers the database didn't fix the issue and new NULL tickets are still showing up.

The unanswered question is: now that he has a way to get parking fines removed, can he park anywhere for free?

And this isn't the first time this sort of thing has happened. Wired has a roundup of people whose license plates read things like "NOPLATE," "NO TAG," and "XXXXXXX."

Worse Than FailureError'd: One Size Fits All

"Multi-platform AND multi-gender! Who knew SSDs could be so accomodating?" Felipe C. wrote.

 

"This is a progress indicator from a certain Australian "Enterprise" ERP vendor. I suspect their sales guys use it to claim that their software updates over 1000% faster than their competitors," Erin D. writes.

 

Bruce W. writes, "I guess LinkedIn wants me to know that I'm not as popular as I think."

"According to Icinga's Round Trip Average calculation, one of our servers must have been teleported about a quarter of the way to the center of the Milky Way. The good news is that I have negative packet loss on that route. Guess the packets got bored on the way," Mike T. writes.

 

"From undefined to invalid, this bankruptcy site has it all...or is it nothing?" Pascal writes.

 

 


,

Krebs on SecurityBreach at Hy-Vee Supermarket Chain Tied to Sale of 5M+ Stolen Credit, Debit Cards

On Tuesday of this week, one of the more popular underground stores peddling credit and debit card data stolen from hacked merchants announced a blockbuster new sale: More than 5.3 million new accounts belonging to cardholders from 35 U.S. states. Multiple sources now tell KrebsOnSecurity that the card data came from compromised gas pumps, coffee shops and restaurants operated by Hy-Vee, an Iowa-based company that operates a chain of more than 245 supermarkets throughout the Midwestern United States.

Hy-Vee, based in Des Moines, announced on Aug. 14 it was investigating a data breach involving payment processing systems that handle transactions at some Hy-Vee fuel pumps, drive-thru coffee shops and restaurants.

The restaurants affected include Hy-Vee Market Grilles, Market Grille Expresses and Wahlburgers locations that the company owns and operates. Hy-Vee said it was too early to tell when the breach initially began or for how long intruders were inside their payment systems.

But typically, such breaches occur when cybercriminals manage to remotely install malicious software on a retailer’s card-processing systems. This type of point-of-sale malware is capable of copying data stored on a credit or debit card’s magnetic stripe when those cards are swiped at compromised payment terminals. This data can then be used to create counterfeit copies of the cards.

Hy-Vee said it believes the breach does not affect payment card terminals used at its grocery store checkout lanes, pharmacies or convenience stores, as these systems rely on a security technology designed to defeat card-skimming malware.

“These locations have different point-of-sale systems than those located at our grocery stores, drugstores and inside our convenience stores, which utilize point-to-point encryption technology for processing payment card transactions,” Hy-Vee said. “This encryption technology protects card data by making it unreadable. Based on our preliminary investigation, we believe payment card transactions that were swiped or inserted on these systems, which are utilized at our front-end checkout lanes, pharmacies, customer service counters, wine & spirits locations, floral departments, clinics and all other food service areas, as well as transactions processed through Aisles Online, are not involved.”

According to two sources who asked not to be identified for this story — including one at a major U.S. financial institution — the card data stolen from Hy-Vee is now being sold under the code name “Solar Energy,” at the infamous Joker’s Stash carding bazaar.

An ad at the Joker’s Stash carding site for “Solar Energy,” a batch of more than 5 million credit and debit cards sources say was stolen from customers of supermarket chain Hy-Vee.

Hy-Vee said the company’s investigation is continuing.

“We are aware of reports from payment processors and the card networks of payment data being offered for sale and are working with the payment card networks so that they can identify the cards and work with issuing banks to initiate heightened monitoring on accounts,” Hy-Vee spokesperson Tina Pothoff said.

The card account records sold by Joker’s Stash, known as “dumps,” apparently stolen from Hy-Vee are being sold for prices ranging from $17 to $35 apiece. Buyers typically receive a text file that includes all of their dumps. Those individual dumps records — when encoded onto a new magnetic stripe on virtually anything the size of a credit card — can be used to purchase stolen merchandise in big box stores.

As noted in previous stories here, the organized cyberthieves involved in stealing card data from main street merchants have gradually moved down the food chain from big box retailers like Target and Home Depot to smaller but far more plentiful and probably less secure merchants (either by choice or because the larger stores became a harder target).

It’s really not worth spending time worrying about where your card number may have been breached, since it’s almost always impossible to say for sure and because it’s common for the same card to be breached at multiple establishments during the same time period.

Just remember that while consumers are not liable for fraudulent charges, it may still fall to you the consumer to spot and report any suspicious charges. So keep a close eye on your statements, and consider signing up for text message notifications of new charges if your card issuer offers this service. Most of these services also can be set to alert you if you’re about to miss an upcoming payment, so they can also be handy for avoiding late fees and other costly charges.

Rondam RamblingsFedex: three months and counting

It has now been three months since we shipped a package via Fedex that turned out to be undeliverable (we sent it signature-required, and the recipient, unbeknownst to us, had moved).  We expected that in a situation like that, the package would simply be returned to us, but it wasn't because we paid cash for the original shipment and (again, unbeknownst to us) the shipping cost doesn't include

CryptogramModifying a Tesla to Become a Surveillance Platform

From DefCon:

At the Defcon hacker conference today, security researcher Truman Kain debuted what he calls the Surveillance Detection Scout. The DIY computer fits into the middle console of a Tesla Model S or Model 3, plugs into its dashboard USB port, and turns the car's built-in cameras­ -- the same dash and rearview cameras providing a 360-degree view used for Tesla's Autopilot and Sentry features­ -- into a system that spots, tracks, and stores license plates and faces over time. The tool uses open source image recognition software to automatically put an alert on the Tesla's display and the user's phone if it repeatedly sees the same license plate. When the car is parked, it can track nearby faces to see which ones repeatedly appear. Kain says the intent is to offer a warning that someone might be preparing to steal the car, tamper with it, or break into the driver's nearby home.

Worse Than FailureKeeping Busy

In 1979, Argle was 18, happy to be working at a large firm specializing in aerospace equipment. There was plenty of opportunity to work with interesting technology and learn from dozens of more senior programmers—well, usually. But then came the day when Argle's boss summoned him to his cube for something rather different.

"This is a listing of the code we had prior to the last review," the boss said, pointing to a stack of printed Fortran code that was at least 6 inches thick. "This is what we have now." He gestured to a second printout that was slightly thicker. "I need you to read through this code and, in the old code, mark lines with 'WAS' where there was a change and 'IS' in the new listing to indicate what it was changed to."

Argle frowned at the daunting paper mountains. "I'm sorry, but why do you need this, exactly?"

"It's for FAA compliance," the boss said, waving his hand toward his cubicle's threshold. "Thanks!"

Weighed down with piles of code, Argle returned to his cube with a similarly sinking heart. At this place and time, he'd never even heard of UNIX, and his coworkers weren't likely to know anything about it, either. Their development computer had a TMS9900 CPU, the same one in the TI-99 home computer, and it ran its own proprietary OS from Texas Instruments. There was no diff command or anything like it. The closest analog was a file comparison program, but it only reported whether two files were identical or not.

Back at his cube, Argle stared at the printouts for a while, dreading the weeks of manual, mind-numbing dullness that loomed ahead of him. There was no way he'd avoid errors, no matter how careful he was. There was no way he'd complete this to every stakeholder's satisfaction. He was staring imminent failure in the face.

Was there a better way? If there weren't already a program for this kind of thing, could he write his own?

Argle had never heard of the Hunt–McIlroy algorithm, but he thought he might be able to do line comparisons between files, then hunt ahead in one file or the other until he re-synched again. He asked one of the senior programmers for the files' source code. Within one afternoon of tinkering, he'd written his very own diff program.
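
For the curious, the heart of such a program is small. Here is a minimal sketch of the scan-ahead idea in TypeScript (a modern reconstruction for illustration only; Argle's version ran on a TMS9900-era toolchain, and the real Hunt–McIlroy algorithm is smarter, computing a longest common subsequence):

// Walk two listings in parallel; on a mismatch, hunt ahead in both
// files for the nearest point where they agree again, marking the
// skipped-over lines along the way.
function markChanges(oldLines: string[], newLines: string[]): string[] {
  const output: string[] = [];
  let i = 0, j = 0;
  while (i < oldLines.length || j < newLines.length) {
    if (i < oldLines.length && j < newLines.length && oldLines[i] === newLines[j]) {
      output.push('        ' + oldLines[i]);
      i++; j++;
      continue;
    }
    // Search for the nearest re-sync point by increasing total distance.
    let syncI = oldLines.length, syncJ = newLines.length;
    outer:
    for (let dist = 1; dist < (oldLines.length - i) + (newLines.length - j); dist++) {
      for (let di = 0; di <= dist; di++) {
        const dj = dist - di;
        if (i + di < oldLines.length && j + dj < newLines.length &&
            oldLines[i + di] === newLines[j + dj]) {
          syncI = i + di; syncJ = j + dj;
          break outer;
        }
      }
    }
    while (i < syncI) output.push('WAS --> ' + oldLines[i++]);
    while (j < syncJ) output.push('IS  --> ' + newLines[j++]);
  }
  return output;
}

(This sketch merges both markings into one listing; producing the two separately annotated printouts the boss asked for is a small variation.)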

The next morning, Argle handed his boss 2 newly printed stacks of code, with "WAS -->" and "IS -->" printed neatly on all the relevant lines. As the boss began flipping through the pages, Argle smiled proudly, anticipating the pleasant surprise and glowing praise to come.

Quite to Argle's surprise, his boss fixed him with a red-faced, accusing glare. "Who said you could write a program?!"

Argle was speechless at first. "I was hired to program!" he finally blurted. "Besides, that's totally error-free! I know I couldn't have gotten everything correct by hand!"

The boss sighed. "I suppose not."

It wasn't until Argle was much older that his boss' reaction made any sense to him. The boss' goal hadn't been "compliance." He simply hadn't had anything constructive for Argle to do, and had thought he'd come up with a brilliant way to keep the new young hire busy and out of his hair for a few weeks.

Writer's note: Through the ages and across time, absolutely nothing has changed. In 2001, I worked at a (paid, thankfully) corporate internship where I was asked to manually browse through a huge network share and write down what every folder contained, all the way through thousands of files and sub-folders. Fortunately, I had heard of the dir command in DOS. Within 30 minutes, I proudly handed my boss the printout of the output—to his bemusement and dismay. —Ellis


Cory DoctorowMy MMT Podcast appearance, part 2: monopoly, money, and the power of narrative


Last week, the Modern Monetary Theory Podcast ran part 1 of my interview with co-host Christian Reilly; they’ve just published the second and final part of our chat (MP3), where we talk about the link between corruption and monopoly, how to pitch monetary theory to people who want to abolish money altogether, and how stories shape the future.

If you’re new to MMT, here’s my brief summary of its underlying premises: “Governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.”

Cory DoctorowWhere to catch me at Burning Man!

This is my last day at my desk until Labor Day: tomorrow, we’re driving to Burning Man to get our annual dirtrave fix! If you’re heading to the playa, here’s three places and times you can find me:

Seating is always limited at these things (our living room is big, but it’s not that big!) so come by early!

I hope you have an amazing burn — we always do! This year I’m taking a break from working in the cafe pulling shots in favor of my first-ever Greeter shift, which I’m really looking forward to.

While we’re on the subject, there’s still time to sign up for the Liminal Labs Assassination Game!

Google AdsenseAdditional safeguards to protect the quality of our ad network

Supporting a healthy ads ecosystem that works for publishers, advertisers, and users continues to be a top priority in our effort to sustain a free and open web. As the ecosystem evolves, our ad systems and defenses must adapt as well. Today, we’d like to highlight some of our efforts to protect the quality of our ad network, and the benefits to our publishers and the advertising ecosystem. 


Last year, we introduced a site verification process in AdSense to provide additional safeguards before a publisher can serve ads. This feature allows us to provide more direct feedback to our publishers on the eligibility of their site, while allowing us to communicate issues sooner and lessen the likelihood of future violations. As an added benefit, confirming which websites a publisher intends to monetize allows us to reduce potential misuse of a publisher's ad code, such as when a bad actor tries to claim a website as their own, or when they use a legitimate publisher's ad code to serve ads on bad content in an attempt to demonetize the good website — each day, we now block more than 120 million ad requests with this feature. 


This year, we’re enhancing our defenses even more by improving the systems that identify potentially invalid traffic or high risk activities before ads are served. These defenses allow us to limit ad serving as needed to further protect our advertisers and users, while maximizing revenue opportunities for legitimate publishers. While most publishers will not notice any changes to their ad traffic, we are working on improving the experience for those that may be impacted, by providing more transparency around these actions. Publishers on AdSense and AdMob that are affected will soon be notified of these ad traffic restrictions directly in their Policy Center. This will allow them to understand why they may be experiencing reduced ad serving, and what steps they can take to resolve any issues and continue partnering with us.


We’re excited for what’s to come, and will continue to roll out improvements to these systems with all of our users in mind. Look out for future updates on our ongoing efforts to promote and sustain a healthy ads ecosystem.


Posted by Andres Ferrate, Chief Advocate for Ad Traffic Quality

Krebs on SecurityForced Password Reset? Check Your Assumptions

Almost weekly now I hear from an indignant reader who suspects a data breach at a Web site they frequent that has just asked the reader to reset their password. Further investigation almost invariably reveals that the password reset demand was not the result of a breach but rather the site’s efforts to identify customers who are reusing passwords from other sites that have already been hacked.

But ironically, many companies taking these proactive steps soon discover that their explanation as to why they’re doing it can get misinterpreted as more evidence of lax security. This post attempts to unravel what’s going on here.

Over the weekend, a follower on Twitter included me in a tweet sent to California-based job search site Glassdoor, which had just sent him the following notice:

The Twitter follower expressed concern about this message, because it suggested to him that in order for Glassdoor to have done what it described, the company would have had to be storing its users’ passwords in plain text. I replied that this was in fact not an indication of storing passwords in plain text, and that many companies are now testing their users’ credentials against lists of hacked credentials that have been leaked and made available online.

The reality is Facebook, Netflix and a number of big-name companies are regularly combing through huge data leak troves for credentials that match those of their customers, and then forcing a password reset for those users. Some are even checking for password re-use on all new account signups.

The idea here is to stymie a massively pervasive problem facing all companies that do business online today: Namely, “credential-stuffing attacks,” in which attackers take millions or even billions of email addresses and corresponding cracked passwords from compromised databases and see how many of them work at other online properties.

So how does the defense against this daily deluge of credential stuffing work? A company employing this strategy will first extract from these leaked credential lists any email addresses that correspond to their current user base.

From there, the corresponding cracked (plain text) passwords are fed into the same process that the company relies upon when users log in: That is, the company feeds those plain text passwords through its own password “hashing” or scrambling routine.

Password hashing is designed to be a one-way function which scrambles a plain text password so that it produces a long string of numbers and letters. Not all hashing methods are created equal, and some of the most commonly used methods — MD5 and SHA-1, for example — can be far less secure than others, depending on how they’re implemented (more on that in a moment). Whatever the hashing method used, it’s the hashed output that gets stored, not the password itself.

Back to the process: If a user’s plain text password from a hacked database matches the output of what a company would expect to see after running it through their own internal hashing process, that user is then prompted to change their password to something truly unique.

Now, password hashing methods can be made more secure by amending the password with what’s known as a “salt” — or random data added to the input of a hash function to guarantee a unique output. And many readers of the Twitter thread on Glassdoor’s approach reasoned that the company couldn’t have been doing what it described without also forgoing this additional layer of security.

My tweeted explanatory reply as to why Glassdoor was doing this was (in hindsight) incomplete and in any case not as clear as it should have been. Fortunately, Glassdoor’s chief information officer Anthony Moisant chimed in to the Twitter thread to explain that the salt is in fact added as part of the password testing procedure.

“In our [user] database, we’ve got three columns — username, salt value and scrypt hash,” Moisant explained in an interview with KrebsOnSecurity. “We apply the salt that’s stored in the database and the hash [function] to the plain text password, and that resulting value is then checked against the hash in the database we store. For whatever reason, some people have gotten it into their heads that there’s no possible way to do these checks if you salt, but that’s not true.”
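
In code, the check Moisant describes looks something like the following sketch (Node.js TypeScript; the record shape, column names, and scrypt output length are illustrative assumptions, not Glassdoor's actual schema):

import { scryptSync, timingSafeEqual } from "crypto";

// Hypothetical shape of the three columns Moisant describes.
interface UserRecord {
  username: string;
  salt: Buffer;          // per-user random salt, stored with the hash
  passwordHash: Buffer;  // scrypt(password, salt) computed at signup
}

// Feed a leaked plain text password through the same salt-and-hash
// routine used at login, then compare against the stored hash.
function leakedPasswordMatches(user: UserRecord, leaked: string): boolean {
  const candidate = scryptSync(leaked, user.salt, user.passwordHash.length);
  return timingSafeEqual(candidate, user.passwordHash);
}

If the comparison succeeds, the site can force a reset for that account; at no point does any password need to be stored in plain text.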

CHECK YOUR ASSUMPTIONS

You — the user — can’t be expected to know or control what password hashing methods a given site uses, if indeed they use them at all. But you can control the quality of the passwords you pick.

I can’t stress this enough: Do not re-use passwords. And don’t recycle them either. Recycling involves rather lame attempts to make a reused password unique by simply adding a digit or changing the capitalization of certain characters. Crooks who specialize in password attacks are wise to this approach as well.

If you have trouble remembering complex passwords (and this describes most people), consider relying instead on password length, which is a far more important determiner of whether a given password can be cracked by available tools in any timeframe that might be reasonably useful to an attacker.

In that vein, it’s safer and wiser to focus on picking passphrases instead of passwords. Passphrases are collections of multiple (ideally unrelated) words mushed together. Passphrases are not only generally more secure, they also have the added benefit of being easier to remember.
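
Some rough arithmetic (mine, not Krebs's) shows why length wins, assuming every character or word is chosen at random:

// Guess-space sizes under truly random selection.
const chars = 95;   // printable ASCII characters
const words = 7776; // a Diceware-style word list

console.log(Math.pow(chars, 8)); // 8-character password: ~6.6e15 possibilities
console.log(Math.pow(words, 5)); // 5-word passphrase: ~2.8e19, over 4,000 times larger

The five-word phrase is also the one you stand a chance of remembering.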

According to a recent blog entry by Microsoft group program manager Alex Weinert, none of the above advice about password complexity amounts to a hill of beans from the attacker’s standpoint.

Weinert’s post makes a compelling argument that as long as we’re stuck with passwords, taking full advantage of the most robust form of multi-factor authentication (MFA) offered by a site you frequent is the best way to deter attackers. Twofactorauth.org has a handy list of your options here, broken down by industry.

“Your password doesn’t matter, but MFA does,” Weinert wrote. “Based on our studies, your account is more than 99.9% less likely to be compromised if you use MFA.”

Glassdoor’s Moisant said the company doesn’t currently offer MFA for its users, but that it is planning to roll that out later this year to both consumer and business users.

Password managers also can be useful for those who feel encumbered by having to come up with passphrases or complex passwords. If you’re uncomfortable with entrusting a third-party service or application to handle this process for you, there’s absolutely nothing wrong with writing down your passwords, provided a) you do not store them in a file on your computer or taped to your laptop or screen or whatever, and b) that your password notebook is stored somewhere relatively secure, i.e. not in your purse or car, but something like a locked drawer or safe.

Although many readers will no doubt take me to task on that last bit of advice, as in all things security related it’s important not to let the perfect become the enemy of the good. Many people (think moms/dads/grandparents) can’t be bothered to use password managers  — even when you go through the trouble of setting them up on their behalf. Instead, without an easier, non-technical method they will simply revert to reusing or recycling passwords.

CryptogramGoogle Finds 20-Year-Old Microsoft Windows Vulnerability

There's no indication that this vulnerability was ever used in the wild, but the code it was discovered in -- Microsoft's Text Services Framework -- has been around since Windows XP.


CryptogramSurveillance as a Condition for Humanitarian Aid

Excellent op-ed on the growing trend to tie humanitarian aid to surveillance.

Despite the best intentions, the decision to deploy technology like biometrics is built on a number of unproven assumptions, such as, technology solutions can fix deeply embedded political problems. And that auditing for fraud requires entire populations to be tracked using their personal data. And that experimental technologies will work as planned in a chaotic conflict setting. And last, that the ethics of consent don't apply for people who are starving.

Worse Than FailureCodeSOD: I'm Sooooooo Random, LOL

There are some blocks of code that require a preamble, and an explanation of the code and its flow. Often you need to provide some broader context.

Sometimes, you get some code like Wolf found, which needs no explanation:

export function generateRandomId(): string {
    counter++;
    return 'id' + counter;
}

I mean, I guess that's slightly better than this solution. Wolf found this because some code downstream was expecting random, unique IDs, and wasn't getting them.
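
For contrast, here is a minimal sketch of what the downstream code was presumably expecting, using Node's built-in randomUUID (an assumed fix for illustration, not Wolf's actual remediation):

import { randomUUID } from "crypto";

// Actually random and collision-resistant, unlike a shared counter.
export function generateRandomId(): string {
  return "id" + randomUUID();
}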


Cory DoctorowPodcast: A cycle of renewal, broken: How Big Tech and Big Media abuse copyright law to slay competition

In my latest podcast (MP3), I read my essay “A Cycle of Renewal, Broken: How Big Tech and Big Media Abuse Copyright Law to Slay Competition”, published today on EFF’s Deeplinks; it’s the latest in my ongoing series of case-studies of “adversarial interoperability,” where new services unseated the dominant companies by finding ways to plug into existing products against the wishes of those products’ manufacturers. This week’s installment recounts the history of cable TV, and explains how the legal system in place when cable was born was subsequently extinguished (with the help of the cable companies who benefitted from it!) meaning that no one can do to cable what cable once did to broadcasters.

In 1950, a television salesman named Robert Tarlton put together a consortium of TV merchants in the town of Lansford, Pennsylvania to erect an antenna tall enough to pull down signals from Philadelphia, about 90 miles to the southeast. The antenna connected to a web of cables that the consortium strung up and down the streets of Lansford, bringing big-city TV to their customers — and making TV ownership for Lansfordites far more attractive. Though hobbyists had been jury-rigging their own “community antenna television” networks since 1948, no one had ever tried to go into business with such an operation. The first commercial cable TV company was born.

The rise of cable over the following years kicked off decades of political controversy over whether the cable operators should be allowed to stay in business, seeing as they were retransmitting broadcast signals without payment or permission and collecting money for the service. Broadcasters took a dim view of people using their signals without permission, which is a little rich, given that the broadcasting industry itself owed its existence to the ability to play sound recordings over the air without permission or payment.

The FCC brokered a series of compromises in the years that followed, coming up with complex rules governing which signals a cable operator could retransmit, which ones they must retransmit, and how much all this would cost. The end result was a second way to get TV, one that made peace with—and grew alongside—broadcasters, eventually coming to dominate how we get TV in our homes.

By 1976, cable and broadcasters joined forces to fight a new technology: home video recorders, starting with Sony’s Betamax recorders. In the eyes of the cable operators, broadcasters, and movie studios, these were as illegitimate as the playing of records over the air had been, or as retransmitting those broadcasts over cable had been. Lawsuits over the VCR continued for the next eight years. In 1984, the Supreme Court finally weighed in, legalizing the VCR, and finding that new technologies were not illegal under copyright law if they were “capable of substantial noninfringing uses.”

MP3

Krebs on SecurityThe Rise of “Bulletproof” Residential Networks

Cybercrooks increasingly are anonymizing their malicious traffic by routing it through residential broadband and wireless data connections. Traditionally, those connections have been mainly hacked computers, mobile phones, or home routers. But this story is about so-called “bulletproof residential VPN services” that appear to be built by purchasing or otherwise acquiring discrete chunks of Internet addresses from some of the world’s largest ISPs and mobile data providers.

In late April 2019, KrebsOnSecurity received a tip from an online retailer who’d seen an unusual number of suspicious transactions originating from a series of Internet addresses assigned to a relatively new Internet provider based in Maryland called Residential Networking Solutions LLC.

Now, this in itself isn’t unusual; virtually every provider has the occasional customers who abuse their access for fraudulent purposes. But upon closer inspection, several factors caused me to look more carefully at this company, also known as “Resnet.”

An examination of the IP address ranges assigned to Resnet shows that it maintains an impressive stable of IP blocks — totaling almost 70,000 IPv4 addresses — many of which had until quite recently been assigned to someone else.

Most interestingly, about ten percent of those IPs — more than 7,000 of them — had until late 2018 been under the control of AT&T Mobility. Additionally, the WHOIS registration records for each of these mobile data blocks suggest Resnet has been somehow reselling data services for major mobile and broadband providers, including AT&T, Verizon, and Comcast Cable.

The WHOIS records for one of several networks associated with Residential Networking Solutions LLC.

Drilling down into the tracts of IPs assigned to Resnet’s core network indicates those 7,000+ mobile IP addresses under Resnet’s control were given the label  “Service Provider Corporation” — mostly those beginning with IPs in the range 198.228.x.x.

An Internet search reveals this IP range is administered by the Wireless Data Service Provider Corporation (WDSPC), a non-profit formed in the 1990s to manage IP address ranges that could be handed out to various licensed mobile carriers in the United States.

Back when the WDSPC was first created, there were quite a few mobile wireless data companies. But today the vast majority of the IP space managed by the WDSPC is leased by AT&T Mobility and Verizon Wireless — which have gradually acquired most of their competing providers over the years.

A call to the WDSPC revealed the nonprofit hadn’t leased any new wireless data IP space in more than 10 years. That is, until the organization received a communication at the beginning of this year that it believed was from AT&T, which recommended Resnet as a customer who could occupy some of the company’s mobile data IP address blocks.

“I’m afraid we got duped,” said the person answering the phone at the WDSPC, while declining to elaborate on the precise nature of the alleged duping or the medium that was used to convey the recommendation.

AT&T declined to discuss its exact relationship with Resnet  — or if indeed it ever had one to begin with. It responded to multiple questions about Resnet with a short statement that said, “We have taken steps to terminate this company’s services and have referred the matter to law enforcement.”

Why exactly AT&T would forward the matter to law enforcement remains unclear. But it’s not unheard of for hosting providers to forge certain documents in their quest for additional IP space, and anyone caught doing so via email, phone or fax could be charged with wire fraud, which is a federal offense that carries punishments of up to $500,000 in fines and as much as 20 years in prison.

WHAT IS RESNET?

The WHOIS registration records for Resnet’s main Web site, resnetworking[.]com, are hidden behind domain privacy protection. However, a cursory Internet search on that domain turned up plenty of references to it on Hackforums[.]net, a sprawling community that hosts a seemingly never-ending supply of up-and-coming hackers seeking affordable and anonymous ways to monetize various online moneymaking schemes.

One user in particular — a Hackforums member who goes by the nickname “Profitvolt” — has spent several years advertising resnetworking[.]com and a number of related sites and services, including “unlimited” AT&T 4G/LTE data services, and the immediate availability of more than 1 million residential IPs that he suggested were “perfect for botting, shoe buying.”

The Hackforums user “Profitvolt” advertising residential proxies.

Profitvolt advertises his mobile and residential data services as ideal for anyone who wishes to run “various bots,” or “advertising campaigns.” Those services are meant to provide anonymity when customers are doing things such as automating ad clicks on platforms like Google Adsense and Facebook; generating new PayPal accounts; sneaker bot activity; credential stuffing attacks; and different types of social media spam.

For readers unfamiliar with this term, “shoe botting” or “sneaker bots” refers to the use of automated bot programs and services that aid in the rapid acquisition of limited-release, highly sought-after designer shoes that can then be resold at a profit on secondary markets. All too often, it seems, the people who profit the most in this scheme are using multiple sets of compromised credentials from consumer accounts at online retailers, and/or stolen payment card data.

To say shoe botting has become a thorn in the side of online retailers and regular consumers alike would be a major understatement: A recent State of The Internet Security Report (PDF) from Akamai (an advertiser on this site) noted that such automated bot activity now accounts for almost half of the Internet bandwidth directed at online retailers. The prevalence of shoe botting also might help explain Footlocker’s recent $100 million investment in goat.com, the largest secondary shoe resale market on the Web.

In other discussion threads, Profitvolt advertises he can rent out an “unlimited number” of so-called “residential proxies,” a term that describes home or mobile Internet connections that can be used to anonymously relay Internet traffic for a variety of dodgy deals.

From a ne’er-do-well’s perspective, the beauty of routing one’s traffic through residential IPs is that few online businesses will bother to block malicious or suspicious activity emanating from them.

That’s because in general the pool of IP addresses assigned to residential or mobile wireless connections cycles intermittently from one user to the next, meaning that blacklisting one residential IP for abuse or malicious activity may only serve to then block legitimate traffic (and e-commerce) from the next user who gets assigned that same IP.

A BULLETPROOF PLAN?

In one early post on Hackforums, Profitvolt laments the untimely demise of various “bulletproof” hosting providers over the years, from the Russian Business Network and Atrivo/Intercage, to McColo, 3FN and Troyak, among others.

All of these Internet providers had one thing in common: They specialized in cultivating customers who used their networks for nefarious purposes — from operating botnets and spamming to hosting malware. They were known as “bulletproof” because they generally ignored abuse complaints, or else blamed any reported abuse on a reseller of their services.

In that Hackforums post, Profitvolt bemoans that “mediums which we use to distribute [are] locking us out and making life unnecessarily hard.”

“It’s still sketchy, so I am not going all out to reveal my plans, but currently I am starting off with a 32 GB RAM server with a 1 GB unmetered up-link in a Caribbean country,” Profitvolt told forum members, while asking in different Hackforums posts whether there are any other users from the dual-island Caribbean nation of Trinidad and Tobago on the forum.

“To be quite honest, the purpose of this is to test how far we can stretch the leniency before someone starts asking questions, or we start receiving emails,” Profitvolt continued.

Hackforums user Profitvolt says he plans to build his own “bulletproof” hosting network catering to fellow forum users who might want to rent his services for a variety of dodgy activities.

KrebsOnSecurity started asking questions of Resnet after stumbling upon several indications that this company was enabling different types of online abuse in bite-sized monthly packages. The site resnetworking[.]com appears normal enough on the surface, but a review of the customer packages advertised on it suggests the company has courted a very specific type of client.

“No bullshit, just proxies,” reads one (now hidden or removed) area of the site’s shopping cart. Other promotions advertise the use of residential proxies to promote “growth services” on multiple social media platforms including Craigslist, Facebook, Google, Instagram, Spotify, Soundcloud and Twitter.

Resnet also peers with or partners with several other interesting organizations, including:

residential-network[.]com, also known as “IAPS Security Services” (formerly intl-alliance[.]com), which advertises the sale of residential VPNs and mobile 4G/IPv6 proxies aimed at helping customers avoid being blocked while automating different types of activity, from mass-creating social media and email accounts to bulk message sending on platforms like WhatsApp and Facebook.

Laksh Cybersecurity and Defense LLC, which maintains Hexproxy[.]com, another residential proxy service that largely courts customers involved in shoe botting.

Several chunks of IP space from a Russian provider variously known by the names “SERVERSGET” and “Men Danil Valentinovich,” which has been associated with numerous instances of hijacking vast swaths of IP addresses from other organizations quite recently.

Some of Profitvolt’s discussion threads on Hackforums.

WHO IS RESNET?

Resnetworking[.]com lists on its home page the contact phone number 202-643-8533. That number is tied to the registration records for several domains, including resnetworking[.]com, residentialvpn[.]info, and residentialvpn[.]org. All of those domains also have in their historic WHOIS records the name Joshua Powder and Residential Networking Solutions LLC.

Running a reverse WHOIS lookup via Domaintools.com on “Joshua Powder” turns up almost 60 domain names — most of them tied to the email address joshua.powder@gmail.com. Among those are resnetworking[.]info, resvpn[.]com/net/org/info, tobagospeaks[.]com, tthack[.]com and profitvolt[.]com. Recall that “Profitvolt” is the nickname of the Hackforums user advertising resnetworking[.]com.

The email address josh@tthack.com was used to register an account on the scammer-friendly site blackhatworld[.]com under the nickname “BulletProofWebHost.” Here’s a list of domains registered to this email address.

A search on the Joshua Powder and tthack email addresses at Hyas, a startup that specializes in combining data from a number of sources to provide attribution of cybercrime activity, further associates those to mafiacloud@gmail.com and to the phone number 868-360-9983, which is a mobile number assigned by Digicel Trinidad and Tobago Ltd. A full list of domains tied to that 868- number is here.

Hyas’s service also pointed to this post on the Facebook page of the Prince George’s County Economic Development Corporation in Maryland, which appears to include a 2017 photo of Mr. Powder posing with county officials.

‘A GLORIFIED SOLUTIONS PROVIDER’

Roughly three weeks ago, KrebsOnSecurity called the 202 number listed at the top of resnetworking[.]com. To my surprise, a man speaking in a lovely Caribbean-sounding accent answered the call and identified himself as Josh Powder. When I casually asked from where he’d acquired that accent, Powder said he was a native of New Jersey but allowed that he has family members who now live in Trinidad and Tobago.

Powder said Residential Networking Solutions LLC is “a normal co-location Internet provider” that has been in operation for about three years and employs some 65 people.

“You’re not the first person to call us about residential VPNs,” Powder said. “In the past, we did have clients that did host VPNs, but it’s something that’s been discontinued since 2017. All we are is a glorified solutions provider, and we broker and lease Internet lines from different companies.”

When asked about the various “botting” packages for sale on Resnetworking[.]com, Powder replied that the site hadn’t been updated in a while and that these were inactive offers that resulted from a now-discarded business model.

“When we started back in 2016, we were really inexperienced, and hired some SEO [search engine optimization] firms to do marketing,” he explained. “Eventually we realized that this was creating a shitstorm, because it started to make us look a specific way to certain people. So we had to really go through a process of remodeling. That process isn’t complete, and the entire web site is going to retire in about a week’s time.”

Powder maintains that his company does have a contract with AT&T to resell LTE and 4G data services, and that he has a similar arrangement with Sprint. He also suggested that one of the aforementioned companies which partnered with Resnet — IAPS Security Services — was responsible for much of the dodgy activity that previously brought his company abuse complaints and strange phone calls about VPN services.

“That guy reached out to us and he leased service from us and nearly got us into a lot of trouble,” Powder said. “He was doing a lot of illegal stuff, and I think there is an ongoing matter with him legally. That’s what has caused us to be more vigilant and really look at what we do and change it. It attracted too much nonsense.”

Interestingly, when one visits IAPS Security Services’ old domain — intl-alliance[.]com — it now forwards to resvpn[.]com, which is one of the domains registered to Joshua Powder.

Shortly after our conversation, the monthly packages I asked Powder about that were for sale on resnetworking[.]com disappeared from the site, or were hidden behind a login. Also, Resnet’s IPv6 prefixes (a la IAPS Security Services) were removed from the company’s list of addresses. At the same time, a large number of Profitvolt’s posts prior to 2018 were deleted from Hackforums.

EPILOGUE

It appears that the future of low-level abuse targeting some of the most popular Internet destinations is tied to the increasing willingness of the world’s biggest ISPs to resell discrete chunks of their address space to whoever is able to pay for them.

Earlier this week, I had a Skype conversation with an individual who responded to my requests for more information from residential-network[.]com, and this person told me that plenty of mobile and land-line ISPs are more than happy to sell huge amounts of IP addresses to just about anybody.

“Mobile providers also sell mass services,” the person who responded to my Skype request offered. “Rogers in Canada just opened a new package for unlimited 4G data lines and we’re currently in negotiations with them for that service as well. The UK also has 4G providers that have unlimited data lines as well.”

The person responding to my Skype messages said they bought most of their proxies from a reseller at customproxysolutions[.]com, which advertises “the world’s largest network of 4G LTE modems in the United States.”

He added that “Rogers in Canada has a special offer that if you buy more than 50 lines you get a reduced price lower than the $75 Canadian Dollar price tag that they would charge for fewer than 50 lines. So most mobile ISPs want to sell mass lines instead of single lines.”

It remains unclear how much of the Internet address space claimed by these various residential proxy and VPN networks has been acquired legally or through other means. But it seems that Resnet and its business associates are in fact on the cutting edge of what it means to be a bulletproof Internet provider today.

CryptogramInfluence Operations Kill Chain

Influence operations are elusive to define. The Rand Corp.'s definition is as good as any: "the collection of tactical information about an adversary as well as the dissemination of propaganda in pursuit of a competitive advantage over an opponent." Basically, we know it when we see it, from bots controlled by the Russian Internet Research Agency to Saudi attempts to plant fake stories and manipulate political debate. These operations have been run by Iran against the United States, Russia against Ukraine, China against Taiwan, and probably lots more besides.

Since the 2016 US presidential election, there has been an endless series of ideas about how countries can defend themselves. It's time to pull those together into a comprehensive approach to defending the public sphere and the institutions of democracy.

Influence operations don't come out of nowhere. They exploit a series of predictable weaknesses -- and fixing those holes should be the first step in fighting them. In cybersecurity, this is known as a "kill chain." That can work in fighting influence operations, too -- laying out the steps of an attack and building the taxonomy of countermeasures.

In an exploratory blog post, I first laid out a straw man information operations kill chain. I started with the seven commandments, or steps, laid out in a 2018 New York Times opinion video series on "Operation Infektion," a 1980s Russian disinformation campaign. The information landscape has changed since the 1980s, and these operations have changed as well. Based on my own research and feedback from that initial attempt, I have modified those steps to bring them into the present day. I have also changed the name from "information operations" to "influence operations," because the former is traditionally defined by the US Department of Defense in ways that don't really suit these sorts of attacks.

Step 1: Find the cracks in the fabric of society -- the social, demographic, economic, and ethnic divisions. For campaigns that just try to weaken collective trust in government's institutions, lots of cracks will do. But for influence operations that are more directly focused on a particular policy outcome, only those related to that issue will be effective.

Countermeasures: There will always be open disagreements in a democratic society, but one defense is to shore up the institutions that make that society possible. Elsewhere I have written about the "common political knowledge" necessary for democracies to function. That shared knowledge has to be strengthened, thereby making it harder to exploit the inevitable cracks. It needs to be made unacceptable -- or at least costly -- for domestic actors to use these same disinformation techniques in their own rhetoric and political maneuvering, and to highlight and encourage cooperation when politicians honestly work across party lines. The public must learn to become reflexively suspicious of information that makes them angry at fellow citizens. These cracks can't be entirely sealed, as they emerge from the diversity that makes democracies strong, but they can be made harder to exploit. Much of the work in "norms" falls here, although this is essentially an unfixable problem. This makes the countermeasures in the later steps even more important.

Step 2: Build audiences, either by directly controlling a platform (like RT) or by cultivating relationships with people who will be receptive to those narratives. In 2016, this consisted of creating social media accounts run either by human operatives or automatically by bots, making them seem legitimate, gathering followers. In the years following, this has gotten subtler. As social media companies have gotten better at deleting these accounts, two separate tactics have emerged. The first is microtargeting, where influence accounts join existing social circles and only engage with a few different people. The other is influencer influencing, where these accounts only try to affect a few proxies (see step 6) -- either journalists or other influencers -- who can carry their message for them.

Countermeasures: This is where social media companies have made all the difference. By allowing groups of like-minded people to find and talk to each other, these companies have given propagandists the ability to find audiences who are receptive to their messages. Social media companies need to detect and delete accounts belonging to propagandists as well as bots and groups run by those propagandists. Troll farms exhibit particular behaviors that the platforms need to be able to recognize. It would be best to delete accounts early, before those accounts have the time to establish themselves.

This might involve normally competitive companies working together, since operations and account names often cross platforms, and cross-platform visibility is an important tool for identifying them. Taking down accounts as early as possible is important, because it takes time to establish the legitimacy and reach of any one account. The NSA and US Cyber Command worked with the FBI and social media companies to take down Russian propaganda accounts during the 2018 midterm elections. It may be necessary to pass laws requiring Internet companies to do this. While many social networking companies have reversed their "we don't care" attitudes since the 2016 election, there's no guarantee that they will continue to remove these accounts -- especially since their profits depend on engagement and not accuracy.

Step 3: Seed distortion by creating alternative narratives. In the 1980s, this was a single "big lie," but today it is more about many contradictory alternative truths -- a "firehose of falsehood" -- that distort the political debate. These can be fake or heavily slanted news stories, extremist blog posts, fake stories on real-looking websites, deepfake videos, and so on.

Countermeasures: Fake news and propaganda are viruses; they spread through otherwise healthy populations. Fake news has to be identified and labeled as such by social media companies and others, including recognizing and identifying manipulated videos known as deepfakes. Facebook is already making moves in this direction. Educators need to teach better digital literacy, as Finland is doing. All of this will help people recognize propaganda campaigns when they occur, so they can inoculate themselves against their effects. This alone cannot solve the problem, as much sharing of fake news is about social signaling, and those who share it care more about how it demonstrates their core beliefs than whether or not it is true. Still, it is part of the solution.

Step 4: Wrap those narratives in kernels of truth. A core of fact makes falsehoods more believable and helps them spread. The releases of stolen emails from Hillary Clinton's campaign chairman John Podesta and the Democratic National Committee, and of documents from Emmanuel Macron's campaign in France, were both examples of that kernel of truth. Releasing stolen emails with a few deliberate falsehoods embedded among them is an even more effective tactic.

Countermeasures: Defenses involve exposing the untruths and distortions, but this is also complicated to put into practice. Fake news sows confusion just by being there. Psychologists have demonstrated that an inadvertent effect of debunking a piece of fake news is to amplify the message of that debunked story. Hence, it is essential to replace the fake news with accurate narratives that counter the propaganda. That kernel of truth is part of a larger true narrative. The media needs to learn skepticism about the chain of information and to exercise caution in how they approach debunked stories.

Step 5: Conceal your hand. Make it seem as if the stories came from somewhere else.

Countermeasures: Here the answer is attribution, attribution, attribution. The quicker an influence operation can be pinned on an attacker, the easier it is to defend against it. This will require efforts by both the social media platforms and the intelligence community, not just to detect influence operations and expose them but also to be able to attribute attacks. Social media companies need to be more transparent about how their algorithms work and make source publications more obvious for online articles. Even small measures like the Honest Ads Act, requiring transparency in online political ads, will help. Where companies lack business incentives to do this, regulation will be the only answer.

Step 6: Cultivate proxies who believe and amplify the narratives. Traditionally, these people have been called "useful idiots." Encourage them to take action outside of the Internet, like holding political rallies, and to adopt positions even more extreme than they would otherwise.

Countermeasures: We can mitigate the influence of people who disseminate harmful information, even if they are unaware they are amplifying deliberate propaganda. This does not mean that the government needs to regulate speech; corporate platforms already employ a variety of systems to amplify and diminish particular speakers and messages. Additionally, the antidote to the ignorant people who repeat and amplify propaganda messages is other influencers who respond with the truth -- in the words of one report, we must "make the truth louder." Of course, there will always be true believers for whom no amount of fact-checking or counter-speech will suffice; this is not intended for them. Focus instead on persuading the persuadable.

Step 7: Deny involvement in the propaganda campaign, even if the truth is obvious. That said, since one major goal is to convince people that nothing can be trusted, rumors of involvement can themselves be beneficial. Flat denial was Russia's tactic during the 2016 US presidential election; it employed the rumor approach during the 2018 midterm elections.

Countermeasures: When attack attribution relies on secret evidence, it is easy for the attacker to deny involvement. Public attribution of information attacks must be accompanied by convincing evidence. This will be difficult when attribution involves classified intelligence information, but there is no alternative. Trusting the government without evidence, as the NSA's Rob Joyce recommended in a 2016 talk, is not enough. Governments will have to disclose.

Step 8: Play the long game. Strive for long-term impact over immediate effects. Engage in multiple operations; most won't be successful, but some will.

Countermeasures: Counterattacks can disrupt the attacker's ability to maintain influence operations, as US Cyber Command did during the 2018 midterm elections. The NSA's new policy of "persistent engagement" (see the article by, and interview with, US Cyber Command Commander Paul Nakasone here) is a strategy to achieve this. So are targeted sanctions and indicting individuals involved in these operations. While there is little hope of bringing them to the United States to stand trial, the possibility of not being able to travel internationally for fear of being arrested will lead some people to refuse to do this kind of work. More generally, we need to better encourage both politicians and social media companies to think beyond the next election cycle or quarterly earnings report.

Permeating all of this is the importance of deterrence. Deterring influence operations will require a different theory. It will require, as the political scientist Henry Farrell and I have postulated, thinking of democracy itself as an information system and understanding "Democracy's Dilemma": how the very tools of a free and open society can be subverted to attack that society. We need to adjust our theories of deterrence to the realities of the information age and the democratization of attackers. If we can mitigate the effectiveness of influence operations, if we can publicly attribute, if we can respond either diplomatically or otherwise -- we can deter these attacks from nation-states.

None of these defensive actions is sufficient on its own. Steps overlap and in some cases can be skipped. Steps can be conducted simultaneously or out of order. A single operation can span multiple targets or be an amalgamation of multiple attacks by multiple actors. Unlike a cyberattack, disrupting an influence operation will require more than disrupting any particular step. It will require a coordinated effort between government, Internet platforms, the media, and others.

Also, this model is not static, of course. Influence operations have already evolved since the 2016 election and will continue to evolve over time -- especially as countermeasures are deployed and attackers figure out how to evade them. We need to be prepared for wholly different kinds of influence operations during the 2020 US presidential election. The goal of this kill chain is to be general enough to encompass a panoply of tactics but specific enough to illuminate countermeasures. But even if this particular model doesn't fit every influence operation, it's important to start somewhere.

Others have worked on similar ideas. Anthony Soules, a former NSA employee who now leads cybersecurity strategy for Amgen, presented this concept at a private event. Clint Watts of the Alliance for Securing Democracy is thinking along these lines as well. The Credibility Coalition's Misinfosec Working Group proposed a "misinformation pyramid." The US Justice Department developed a "Malign Foreign Influence Campaign Cycle," with associated countermeasures.

The threat from influence operations is real and important, and it deserves more study. At the same time, there's no reason to panic. Just as overly optimistic technologists were wrong that the Internet was the single technology that was going to overthrow dictators and liberate the planet, so pessimists are also probably wrong that it is going to empower dictators and destroy democracy. If we deploy countermeasures across the entire kill chain, we can defend ourselves from these attacks.

But Russian interference in the 2016 presidential election shows not just that such actions are possible but also that they're surprisingly inexpensive to run. As these tactics continue to be democratized, more people will attempt them. And as more people, and multiple parties, conduct influence operations, they will increasingly be seen as how the game of politics is played in the information age. This means that the line will increasingly blur between influence operations and politics as usual, and that domestic influencers will be using them as part of campaigning. Defending democracy against foreign influence also necessitates making our own political debate healthier.

This essay previously appeared in Foreign Policy.

Worse Than FailureLowest Bidder Squared

Initech was in dire straits. The website was dog slow, and the budget had been exceeded by a factor of five already trying to fix it. Korbin, today's submitter, was brought in to help in exchange for decent pay and an office in their facility.

He showed up only to find a boxed-up computer and a brand new flat-packed desk, also still in the box. The majority of the space was a video-recording studio that saw maybe 4-6 hours of use a week. After setting up his office, Korbin spent the next day and a half finding his way around the completely undocumented C# code. The third day, there was a carpenter in the studio area. Inexplicably, said carpenter decided he needed to contact-glue carpet to a set of huge risers ... indoors. At least a gallon of contact cement was involved. In minutes, Korbin got a raging headache, and he was essentially gassed out of the building for the rest of the day. Things were not off to a good start.

Upon asking around, Korbin quickly determined that the contractors originally responsible for coding the website had underbid the project by half, then subcontracted the whole thing out to a team in India to do the work on the cheap. The India team had then done the very same thing, subcontracting it out to the most cut-rate individuals they could find. Everything had been written in triplicate for some reason, making it impossible to determine what was actually powering the website and what was dead code. Furthermore, while this was a database-oriented site, there were no stored procedures, and none of the (sub)subcontractors seemed to understand how to use a JOIN command.

In an effort to tease apart what code was actually needed, Korbin turned on profiling. Only ... it was already on in the test version of the site. With a sudden ominous hunch, he checked the live site—and sure enough, profiling was running in production as well. He shut it off, and instantly, the whole site became more responsive.

The next fix was also pretty simple. The site had a bad habit of asking for information it already had, over and over, without any JOINs. Reducing the frequency of database hits improved performance again, bringing it to within an order of magnitude of what one might expect from a website.

While all this was going on, the leaderboard page had begun timing out. Sure enough, it was an N-squared solution: open database, fetch record, close database, repeat, then compare the two records, putting them in order and beginning again. With 500 members, it was doing 250,000 passes each time someone hit the page. Korbin scrapped the whole thing in favor of the site's first stored procedure, then cached it to call only once a day.
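
To make the orders of magnitude concrete, here is a sketch of the before and after in TypeScript (the member shape, table, and column names are invented; the article doesn't show the site's actual schema):

interface Member { name: string; score: number; }

// Before: the O(n-squared) pattern. Each comparison in the original
// additionally cost a full open-fetch-close round trip to the database.
function leaderboardSlow(members: Member[]): Member[] {
  const result = [...members];
  for (let pass = 0; pass < result.length; pass++) {   // 500 passes...
    for (let k = 0; k < result.length - 1; k++) {      // ...of ~500 comparisons each
      if (result[k].score < result[k + 1].score) {
        [result[k], result[k + 1]] = [result[k + 1], result[k]];
      }
    }
  }
  return result;
}

// After: one sorted fetch, e.g. a stored procedure wrapping
//   SELECT name, score FROM members ORDER BY score DESC;
// with the result cached once a day.
function leaderboardFast(members: Member[]): Member[] {
  return [...members].sort((a, b) => b.score - a.score);
}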

The weeks went on, and the site began to take shape, finally getting more or less back on track. Thanks to the botched rollout, however, many of the company's endorsements had vanished, and backers were pulling out. The president got on the phone with some VIP about Facebook—because as we all know, the solution to any company's problem is the solution to every company's problems.

"Facebook was written in PHP. He told me it was the best thing out there. So we're going to completely redo the website in PHP," the president confidently announced at the next all-hands meeting. "I want to hear how long everyone thinks this will take to get done."

The only developers left at that point were Korbin and a junior kid just out of college, with one contractor with some experience on the project.

"Two weeks. Maybe three," the kid replied.

They went around the table, and all the non-programmers chimed in with the 2-3 week assessment. Next to last came the experienced contractor. Korbin's jaw nearly dropped when he weighed in at 3-4 weeks.

"None of that is realistic!" Korbin proclaimed. "Even with the existing code as a road map, it's going to take 4-6 months to rewrite. And with the inevitable feature-creep and fixes for things found in testing, it is likely to take even longer."

Korbin was told the next day he could pick up his final check. Seven months later, he ran into the junior kid again, and asked how the rewrite went.

"It's still ongoing," he admitted.


CryptogramFriday Squid Blogging: Robot Squid Propulsion

Interesting research:

The squid robot is powered primarily by compressed air, which it stores in a cylinder in its nose (do squids have noses?). The fins and arms are controlled by pneumatic actuators. When the robot wants to move through the water, it opens a valve to release a modest amount of compressed air; releasing the air all at once generates enough thrust to fire the robot squid completely out of the water.

The jumping that you see at the end of the video is preliminary work; we're told that the robot squid can travel between 10 and 20 meters by jumping, whereas using its jet underwater will take it just 10 meters. At the moment, the squid can only fire its jet once, but the researchers plan to replace the compressed air with something a bit denser, like liquid CO2, which will allow for extended operation and multiple jumps. There's also plenty of work to do with using the fins for dynamic control, which the researchers say will "reveal the superiority of the natural flying squid movement."

I can't find the paper online.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSoftware Vulnerabilities in the Boeing 787

Boeing left its software unprotected, and researchers have analyzed it for vulnerabilities:

At the Black Hat security conference today in Las Vegas, Santamarta, a researcher for security firm IOActive, plans to present his findings, including the details of multiple serious security flaws in the code for a component of the 787 known as a Crew Information Service/Maintenance System. The CIS/MS is responsible for applications like maintenance systems and the so-called electronic flight bag, a collection of navigation documents and manuals used by pilots. Santamarta says he found a slew of memory corruption vulnerabilities in that CIS/MS, and he claims that a hacker could use those flaws as a foothold inside a restricted part of a plane's network. An attacker could potentially pivot, Santamarta says, from the in-flight entertainment system to the CIS/MS to send commands to far more sensitive components that control the plane's safety-critical systems, including its engine, brakes, and sensors. Boeing maintains that other security barriers in the 787's network architecture would make that progression impossible.

Santamarta admits that he doesn't have enough visibility into the 787's internals to know if those security barriers are circumventable. But he says his research nonetheless represents a significant step toward showing the possibility of an actual plane-hacking technique. "We don't have a 787 to test, so we can't assess the impact," Santamarta says. "We're not saying it's doomsday, or that we can take a plane down. But we can say: This shouldn't happen."

Boeing denies that there's any problem:

In a statement, Boeing said it had investigated IOActive's claims and concluded that they don't represent any real threat of a cyberattack. "IOActive's scenarios cannot affect any critical or essential airplane system and do not describe a way for remote attackers to access important 787 systems like the avionics system," the company's statement reads. "IOActive reviewed only one part of the 787 network using rudimentary tools, and had no access to the larger system or working environments. IOActive chose to ignore our verified results and limitations in its research, and instead made provocative statements as if they had access to and analyzed the working system. While we appreciate responsible engagement from independent cybersecurity researchers, we're disappointed in IOActive's irresponsible presentation."

This being Black Hat and Las Vegas, I'll say it this way: I would bet money that Boeing is wrong. I don't have an opinion about whether or not it's lying.

Worse Than FailureError'd: What About the Fish?

"On the one hand, I don't want to know what the fish has to do with Boris Johnson's love life...but on the other hand I have to know!" Mark R. writes.

 

"Not sure if that's a new GDPR rule or the Slack Mailbot's weekend was just that much better then mine," Adam G. writes.

 

Connor W. wrote, "You know what, I think I'll just stay inside."

 

"It's great to see that an attempt at personalization was made, but whatever happened to 'trust but verify'?" writes Rob H.

 

"For a while, I thought that, maybe, I didn't actually know how to use my iPhone's alarm. Instead, I found that it just wasn't working right. So, I contacted Apple Support, and while they were initially skeptical that it was an iOS issue, this morning, I actually have proof!" Markus G. wrote.

 

Tim G. writes, "I guess that's better than an angry error message."

 

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

CryptogramBypassing Apple FaceID's Liveness Detection Feature

Apple's FaceID has a liveness detection feature, which prevents someone from unlocking a victim's phone by putting it in front of his face while he's sleeping. That feature has been hacked:

Researchers on Wednesday during Black Hat USA 2019 demonstrated an attack that allowed them to bypass a victim's FaceID and log into their phone simply by putting a pair of modified glasses on their face. By merely placing tape carefully over the lenses of a pair of glasses and placing them on the victim's face, the researchers demonstrated how they could bypass Apple's FaceID in a specific scenario. The attack itself is difficult, given the bad actor would need to figure out how to put the glasses on an unconscious victim without waking them up.

LongNowAI analyzed 3.3 million scientific abstracts and discovered possible new materials

A new paper shows how AI can accelerate scientific discovery through analyzing millions of scientific abstracts. From the MIT Technology Review:

Natural-language processing has seen major advancements in recent years, thanks to the development of unsupervised machine-learning techniques that are really good at capturing the relationships between words. They count how often and how closely words are used in relation to one another, and map those relationships in a three-dimensional vector space. The patterns can then be used to predict basic analogies like “man is to king as woman is to queen,” or to construct sentences and power things like autocomplete and other predictive text systems.

A group of researchers have now used this technique to munch through 3.3 million scientific abstracts published between 1922 and 2018 in journals that would likely contain materials science research. The resulting word relationships captured fundamental knowledge within the field, including the structure of the periodic table and the way chemicals’ structures relate to their properties. The paper was published in Nature last week.

MIT Technology Review
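
The analogy machinery described in the first paragraph of that quote is just vector arithmetic plus a nearest-neighbor search. Here is a minimal JavaScript sketch of the idea, using made-up three-dimensional toy vectors for readability (the values are invented for illustration; production embeddings typically have a few hundred dimensions):

// Hypothetical toy word vectors, for illustration only.
const vectors = {
  man:   [0.9, 0.1, 0.2],
  woman: [0.9, 0.9, 0.2],
  king:  [0.2, 0.1, 0.9],
  queen: [0.2, 0.9, 0.9],
  apple: [0.1, 0.2, 0.1],
};

const add = (a, b) => a.map((x, i) => x + b[i]);
const sub = (a, b) => a.map((x, i) => x - b[i]);
const dot = (a, b) => a.reduce((s, x, i) => s + x * b[i], 0);
const cosine = (a, b) => dot(a, b) / Math.sqrt(dot(a, a) * dot(b, b));

// "man is to king as woman is to ?" becomes king - man + woman,
// then find the closest remaining word by cosine similarity.
const target = add(sub(vectors.king, vectors.man), vectors.woman);
const skip = new Set(["king", "man", "woman"]);
const [bestWord] = Object.entries(vectors)
  .filter(([word]) => !skip.has(word))
  .reduce((a, b) => (cosine(b[1], target) > cosine(a[1], target) ? b : a));

console.log(bestWord); // "queen"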


Worse Than FailureCodeSOD: A Devil With a Date

Jim was adding a feature to the backend. This feature updated a few fields on an object, and then handed the object off as JSON to the front-end.

Adding the feature seemed pretty simple, but when Jim went to check out its behavior in the front-end, he got validation errors. Something in the data getting passed back by his web service was fighting with the front end.

On its surface, that seemed like a reasonable problem, but when looking into it, Jim discovered that it was the record_update_date field which was causing validation issues. The front-end displayed this as a read only field, so there was no reason to do any client-side validation in the first place, and that field was never sent to the backend, so there was even less than no reason to do validation.

Worse, the field had, at least to the eye, a valid date: 2019-07-29T00:00:00.000Z. Even weirder, if Jim changed the backend to just return 2019-07-29, everything worked. He dug into the validation code to see what might be wrong about it:

/**
 * Custom validation
 *
 * This is a callback function for ajv custom keywords
 *
 * @param  {object} wsFormat aiFormat property content
 * @param  {object} data Data (of element type) from document where validation is required
 * @param  {object} itemSchema Schema part from wsValidation keyword
 * @param  {string} dataPath Path to document element
 * @param  {object} parentData Data of parent object
 * @param  {string} key Property name
 * @param  {object} rootData Document data
 */
function wsFormatFunction(wsFormat, data, itemSchema, dataPath, parentData, key, rootData) {

    let valid;
    switch (wsFormat) {
        case 'date': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/;
            valid = regex.test(data);
            break;
        }
        case 'date-time': {
            let regex = /^\d\d\d\d-[0-1]\d-[0-3]\d[t\s](?:[0-2]\d:[0-5]\d:[0-5]\d|23:59:60)(?:\.\d+)?(?:z|[+-]\d\d:\d\d)$/i;
            valid = regex.test(data);
            break;
        }
        case 'time': {
            let regex = /^(0[0-9]|1[0-9]|2[0-3]):[0-5][0-9]:[0-5][0-9]$/;
            valid = regex.test(data);
            break;
        }
        default: throw 'Unknown wsFormat: ' + wsFormat;
    }

    if (!valid) {
        wsFormatFunction['errors'] = wsFormatFunction['errors'] || [];

        wsFormatFunction['errors'].push({
            keyword: 'wsFormat',
            dataPath: dataPath,
            message: 'should match format "' + wsFormat + '"',
            schema: itemSchema,
            data: data
        });
    }

    return valid;
}

When it starts with “Custom validation” and it involves dates, you know you’re in for a bad time. Worse, it’s custom validation, dates, and regular expressions written by someone who clearly didn’t understand regular expressions.

Let’s take a peek at the branch which was causing Jim’s error, and examine the regex:

/^\d\d\d\d-[0-1]\d-[0-3](T00:00:00.000Z)?\d$/

It should start with four digits, followed by a dash, followed by a value between 0 and 1. Then another digit, then a dash, then a number between 0 and 3, then the time (optionally), then a final digit.

It’s obvious why Jim’s perfectly reasonable date wasn’t working: it needed to be 2019-07-2T00:00:00.000Z9. Or, if Jim just didn’t include the timestamp, not only would 2019-07-29 be a valid date, but so would 2019-19-39, which just so happens to be my birthday. Mark your calendars for the 39th of Undevigintiber.
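For the record, the immediate bug is just that misplaced final \d. Here is a minimal sketch of a saner branch, assuming the intent was “YYYY-MM-DD with an optional midnight timestamp” (the regex alone still only checks the string’s shape, so a round-trip through Date is added to reject calendar impossibilities like 2019-19-39):

// The day’s second digit moved *before* the optional timestamp, so both
// "2019-07-29" and "2019-07-29T00:00:00.000Z" match.
const dateRegex = /^\d{4}-[0-1]\d-[0-3]\d(T00:00:00\.000Z)?$/;

function isValidDate(str) {
  if (!dateRegex.test(str)) return false;
  // Round-trip the date part: out-of-range months/days either parse to
  // NaN or normalize to a different day, so they are rejected here.
  const parsed = new Date(str.slice(0, 10) + "T00:00:00.000Z");
  return !Number.isNaN(parsed.getTime()) &&
    parsed.toISOString().slice(0, 10) === str.slice(0, 10);
}

console.log(isValidDate("2019-07-29T00:00:00.000Z")); // true
console.log(isValidDate("2019-07-29"));               // true
console.log(isValidDate("2019-19-39"));               // false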

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

CryptogramSide-Channel Attack against Electronic Locks

Several high-security electronic locks are vulnerable to side-channel attacks involving power monitoring.

Cory DoctorowMy appearance on the MMT podcast

I’ve been following the Modern Monetary Theory debate for about 18 months, and I’m largely a convert: governments spend money into existence and tax it out of existence, and government deficit spending is only inflationary if it’s bidding against the private sector for goods or services, which means that the government could guarantee every unemployed person a job (say, working on the Green New Deal), and which also means that every unemployed person and every unfilled social services role is a political choice, not an economic necessity.

I was delighted to be invited onto the MMT Podcast to discuss the ways that MMT dovetails with the fight against monopoly and inequality, and how science-fiction storytelling can bring complicated technical subjects (like adversarial interoperability) to life.

We talked so long that they’ve split it into two episodes, the first of which is now live (MP3).

Krebs on SecurityMeet Bluetana, the Scourge of Pump Skimmers

“Bluetana,” a new mobile app that looks for Bluetooth-based payment card skimmers hidden inside gas pumps, is helping police and state employees more rapidly and accurately locate compromised fuel stations across the nation, a study released this week suggests. Data collected in the course of the investigation also reveals some fascinating details that may help explain why these pump skimmers are so lucrative and ubiquitous.

The new app, now being used by agencies in several states, is the brainchild of computer scientists from the University of California San Diego and the University of Illinois Urbana-Champaign, who say they developed the software in tandem with technical input from the U.S. Secret Service (the federal agency most commonly called in to investigate pump skimming rings).

The Bluetooth pump skimmer scanner app ‘Bluetana’ in action.

Gas pumps are a perennial target of skimmer thieves for several reasons. They are usually unattended, and in too many cases a handful of master keys will open a great many pumps at a variety of filling stations.

The skimming devices can then be attached to electronics inside the pumps in a matter of seconds, and because they’re also wired to the pump’s internal power supply the skimmers can operate indefinitely without the need of short-lived batteries.

And increasingly, these pump skimmers are fashioned to relay stolen card data and PINs via Bluetooth wireless technology, meaning the thieves who install them can periodically download stolen card data just by pulling up to a compromised pump and remotely connecting to it from a Bluetooth-enabled mobile device or laptop.

According to the study, some 44 volunteers  — mostly law enforcement officials and state employees — were equipped with Bluetana over a year-long experiment to test the effectiveness of the scanning app.

The researchers said their volunteers collected Bluetooth scans at 1,185 gas stations across six states, and that Bluetana detected a total of 64 skimmers across four of those states. All of the skimmers were later collected by law enforcement, including two that were reportedly missed in manual safety inspections of the pumps six months earlier.

While several other Android-based apps designed to find pump skimmers are already available, the researchers said Bluetana was developed with an eye toward eliminating the false positives that some of these other apps fail to distinguish from actual skimmers.

“The Bluetooth technology used in these skimmers is also used for legitimate products commonly seen at and near gas stations, such as speed-limit signs, weather sensors and fleet tracking systems,” said Nishant Bhaskar, UC San Diego Ph.D. student and principal author of the study. “These products can be mistaken for skimmers by existing detection apps.”

BLACK MARKET VALUE

The fuel skimmer study also helps explain how quickly these hidden devices can generate huge profits for the organized gangs that typically deploy them. The researchers found that the skimmers detected by their app collected data from roughly 20-25 payment cards each day — evenly distributed between debit and credit cards (although they note estimates from payment fraud prevention companies and the Secret Service that put the average figure closer to 50-100 cards daily per compromised machine).

The academics also studied court documents which revealed that skimmer scammers often are only able to “cashout” stolen cards — either through selling them on the black market or using them for fraudulent purchases — a little less than half of the time. This can result from the skimmers sometimes incorrectly reading card data, daily withdrawal limits, or fraud alerts at the issuing bank.

“Based on the prior figures, we estimate the range of per-day revenue from a skimmer is $4,253 (25 cards per day, cashout of $362 per card, and 47% cashout success rate), and our high end estimate is $63,638 (100 cards per day, $1,354 cashout per card, and cashout success rate of 47%),” the study notes.

Not a bad haul either way, considering these skimmers typically cost about $25 to produce.
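
Those per-day estimates are simply cards per day × cashout per card × cashout success rate. A quick back-of-the-envelope check in JavaScript, with the numbers exactly as quoted from the study:

// Per-day skimmer revenue = cards/day * cashout per card * success rate.
const revenue = (cardsPerDay, cashoutPerCard, successRate) =>
  cardsPerDay * cashoutPerCard * successRate;

console.log(revenue(25, 362, 0.47));   // 4253.5 -> the ~$4,253 low end
console.log(revenue(100, 1354, 0.47)); // 63638  -> the $63,638 high end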

Those earnings estimates assume an even distribution of credit and debit card use among customers of a compromised pump: The more customers pay with a debit card, the more profitable the whole criminal scheme may become. Armed with your PIN and debit card data, skimmer thieves or those who purchase stolen cards can clone your card and pull money out of your account at an ATM.

“Availability of a PIN code with a stolen debit card in particular, can increase its value five-fold on the black market,” the researchers wrote.

This highlights a warning that KrebsOnSecurity has relayed to readers in many previous stories on pump skimming attacks: Using a debit card at the pump can be way riskier than paying with cash or a credit card.

The black market value, impact to consumers and banks, and liability associated with different types of card fraud.

And as the above graphic from the report illustrates, there are different legal protections for fraudulent transactions on debit vs. credit cards. With a credit card, your maximum loss on any transactions you report as fraud is $50; with a debit card, that $50 cap applies only if you report the fraud within two days of the unauthorized transaction. After that, the maximum consumer liability can increase to $500 within 60 days, and to an unlimited amount after 60 days.

In practice, your bank or debit card issuer may still waive additional liabilities, and many do. But even then, having your checking account emptied of cash while your bank sorts out the situation can still be a huge hassle and create secondary problems (bounced checks, for instance).

Interestingly, this advice against using debit cards at the pump often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

For all its skimmer-skewering prowess, Bluetana will not be released to the public. The researchers said the primary reason for this is highlighted in the core findings of the study.

“There are many legitimate devices near gas stations that look exactly like skimmers do in Bluetooth scans,” said UCSD Assistant Professor Aaron Schulman, in an email to KrebsOnSecurity. “Flagging suspicious devices in Bluetana is only a way of notifying inspectors that they need to gather more data around the gas station to determine if the Bluetooth transmissions appear to be emanating from a device inside of the pumps. If it does, they can then open the pump door and confirm that the signal strength rises, and begin their visual inspection for the skimmer.”

One of the best tips for avoiding fuel card skimmers is to favor filling stations that have updated security features, such as custom keys for each pump, better compartmentalization of individual components within the machine, and tamper protections that physically shut down a pump if the machine is improperly accessed.

How can you spot a gas station with these updated features, you ask? As noted in last summer’s story, How to Avoid Card Skimmers at the Pumps, these newer-model machines typically feature a horizontal card acceptance slot along with a raised metallic keypad. In contrast, older, less secure pumps usually have a vertical card reader and a flat, membrane-based keypad.

Newer, more tamper-resistant fuel pumps include pump-specific key locks, raised metallic keypads, and horizontal card readers.

The researchers will present their work on Bluetana later today at the USENIX Security 2019 conference in Santa Clara, Calif. A copy of their paper is available here (PDF).

If you enjoyed this story, check out my series on all things skimmer-related: All About Skimmers. Looking for more information on fuel pump skimming? Have a look at some of these stories.

CryptogramExploiting GDPR to Get Private Information

A researcher abused the GDPR to get information on his fiancee:

It is one of the first tests of its kind to exploit the EU's General Data Protection Regulation (GDPR), which came into force in May 2018. The law shortened the time organisations had to respond to data requests, added new types of information they have to provide, and increased the potential penalty for non-compliance.

"Generally if it was an extremely large company -- especially tech ones -- they tended to do really well," he told the BBC.

"Small companies tended to ignore me.

"But the kind of mid-sized businesses that knew about GDPR, but maybe didn't have much of a specialised process [to handle requests], failed."

He declined to identify the organisations that had mishandled the requests, but said they had included:

  • a UK hotel chain that shared a complete record of his partner's overnight stays

  • two UK rail companies that provided records of all the journeys she had taken with them over several years

  • a US-based educational company that handed over her high school grades, mother's maiden name and the results of a criminal background check survey.

CryptogramAttorney General Barr and Encryption

Last month, Attorney General William Barr gave a major speech on encryption policy -- what is commonly known as "going dark." Speaking at Fordham University in New York, he admitted that adding backdoors decreases security but argued that it is worth it.

Some hold this view dogmatically, claiming that it is technologically impossible to provide lawful access without weakening security against unlawful access. But, in the world of cybersecurity, we do not deal in absolute guarantees but in relative risks. All systems fall short of optimality and have some residual risk of vulnerability -- a point which the tech community acknowledges when they propose that law enforcement can satisfy its requirements by exploiting vulnerabilities in their products. The real question is whether the residual risk of vulnerability resulting from incorporating a lawful access mechanism is materially greater than those already in the unmodified product. The Department does not believe this can be demonstrated.

Moreover, even if there was, in theory, a slight risk differential, its significance should not be judged solely by the extent to which it falls short of theoretical optimality. Particularly with respect to encryption marketed to consumers, the significance of the risk should be assessed based on its practical effect on consumer cybersecurity, as well as its relation to the net risks that offering the product poses for society. After all, we are not talking about protecting the Nation's nuclear launch codes. Nor are we necessarily talking about the customized encryption used by large business enterprises to protect their operations. We are talking about consumer products and services such as messaging, smart phones, e-mail, and voice and data applications. If one already has an effective level of security -- say, by way of illustration, one that protects against 99 percent of foreseeable threats -- is it reasonable to incur massive further costs to move slightly closer to optimality and attain a 99.5 percent level of protection? A company would not make that expenditure; nor should society. Here, some argue that, to achieve at best a slight incremental improvement in security, it is worth imposing a massive cost on society in the form of degraded safety. This is untenable. If the choice is between a world where we can achieve a 99 percent assurance against cyber threats to consumers, while still providing law enforcement 80 percent of the access it might seek; or a world, on the other hand, where we have boosted our cybersecurity to 99.5 percent but at a cost of reducing law enforcements [sic] access to zero percent -- the choice for society is clear.

I think this is a major change in government position. Previously, the FBI, the Justice Department and so on had claimed that backdoors for law enforcement could be added without any loss of security. They maintained that technologists just need to figure out how -- an approach we have derisively named "nerd harder."

With this change, we can finally have a sensible policy conversation. Yes, adding a backdoor increases our collective security because it allows law enforcement to eavesdrop on the bad guys. But adding that backdoor also decreases our collective security because the bad guys can eavesdrop on everyone. This is exactly the policy debate we should be having -- not the fake one about whether or not we can have both security and surveillance.

Barr makes the point that this is about "consumer cybersecurity" and not "nuclear launch codes." This is true, but it ignores the huge amount of national security-related communications between those two poles. The same consumer communications and computing devices are used by our lawmakers, CEOs, legislators, law enforcement officers, nuclear power plant operators, election officials and so on. There's no longer a difference between consumer tech and government tech -- it's all the same tech.

Barr also says:

Further, the burden is not as onerous as some make it out to be. I served for many years as the general counsel of a large telecommunications concern. During my tenure, we dealt with these issues and lived through the passage and implementation of CALEA -- the Communications Assistance for Law Enforcement Act. CALEA imposes a statutory duty on telecommunications carriers to maintain the capability to provide lawful access to communications over their facilities. Companies bear the cost of compliance but have some flexibility in how they achieve it, and the system has by and large worked. I therefore reserve a heavy dose of skepticism for those who claim that maintaining a mechanism for lawful access would impose an unreasonable burden on tech firms -- especially the big ones. It is absurd to think that we would preserve lawful access by mandating that physical telecommunications facilities be accessible to law enforcement for the purpose of obtaining content, while allowing tech providers to block law enforcement from obtaining that very content.

That telecommunications company was GTE -- which became Verizon. Barr conveniently ignores that CALEA-enabled phone switches were used to spy on government officials in Greece in 2003 -- which seems to have been a National Security Agency operation -- and on a variety of people in Italy in 2006. Moreover, in 2012 every CALEA-enabled switch sold to the Defense Department had security vulnerabilities. (I wrote about all this, and more, in 2013.)

The final thing I noticed about the speech is that it is not about iPhones and data at rest. It is about communications -- data in transit. The "going dark" debate has bounced back and forth between those two aspects for decades. It seems to be bouncing once again.

I hope that Barr's latest speech signals that we can finally move on from the fake security vs. privacy debate, and to the real security vs. security debate. I know where I stand on that: As computers continue to permeate every aspect of our lives, society, and critical infrastructure, it is much more important to ensure that they are secure from everybody -- even at the cost of law enforcement access -- than it is to allow access at the cost of security. Barr is wrong: it kind of is like these systems are protecting nuclear launch codes.

This essay previously appeared on Lawfare.com.

Worse Than FailureCodeSOD: A Loop in the String

Robert was browsing through a little JavaScript used at his organization, and found this gem of type conversion.

//use only for small numbers
function StringToInteger (str) {
    var int = -1;
    for (var i=0; i<=100; i++) {
        if (i+"" == str) {
            int = i;
            break;
        }
    }
    return int;
}

So, this takes our input str, which is presumably a string, and it starts counting from 0 to 100. i+"" coerces the integer value to a string, which we compare against our string. If it’s a match, we’ll store that value and break out of the loop.

Obviously, this has a glaring flaw: the 100 is hardcoded. So what we really need to do is add a search_low and search_high parameter, so we can write the for loop as i = search_low; i <= search_high; i++ instead. Because that’s the only glaring flaw in this code. I can’t think of any possible better way of converting strings to integers. Not a one.
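(The boring fix, for completeness: a sketch using the standard parser with an explicit radix, keeping the original's -1 sentinel. It is not a drop-in replacement: parseInt tolerates leading whitespace and trailing junk that the loop's exact string comparison rejected.)

// Parse in base 10; fall back to the original's -1 sentinel on failure.
function stringToInteger(str) {
  const n = Number.parseInt(str, 10);
  return Number.isNaN(n) ? -1 : n;
}

console.log(stringToInteger("42"));   // 42
console.log(stringToInteger("oops")); // -1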

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

CryptogramPhone Pharming for Ad Fraud

Interesting article on people using banks of smartphones to commit ad fraud for profit.

No one knows how prevalent ad fraud is on the Internet. I believe it is surprisingly high -- here's an article that places losses between $6.5 and $19 billion annually -- and something companies like Google and Facebook would prefer remain unresearched.

Krebs on SecurityPatch Tuesday, August 2019 Edition

Most Microsoft Windows (ab)users probably welcome the monthly ritual of applying security updates about as much as they look forward to going to the dentist: It always seems like you were there just yesterday, and you never quite know how it’s all going to turn out. Fortunately, this month’s patch batch from Redmond is mercifully light, at least compared to last month.

Okay, maybe a trip to the dentist’s office is still preferable. In any case, today is the second Tuesday of the month, which means it’s once again Patch Tuesday (or — depending on your setup and when you’re reading this post — Reboot Wednesday). Microsoft today released patches to fix some 93 vulnerabilities in Windows and related software, 35 of which affect various Server versions of Windows, and another 70 that apply to the Windows 10 operating system.

Although there don’t appear to be any zero-day vulnerabilities fixed this month — i.e. those that get exploited by cybercriminals before an official patch is available — there are several issues that merit attention.

Chief among those are patches to address four moderately terrifying flaws in Microsoft’s Remote Desktop Service, a feature which allows users to remotely access and administer a Windows computer as if they were actually seated in front of the remote computer. Security vendor Qualys says two of these weaknesses can be exploited remotely without any authentication or user interaction.

“According to Microsoft, at least two of these vulnerabilities (CVE-2019-1181 and CVE-2019-1182) can be considered ‘wormable’ and [can be equated] to BlueKeep,” referring to a dangerous bug patched earlier this year that Microsoft warned could be used to spread another WannaCry-like ransomware outbreak. “It is highly likely that at least one of these vulnerabilities will be quickly weaponized, and patching should be prioritized for all Windows systems.”

Fortunately, Remote Desktop is disabled by default in Windows 10, and as such these flaws are more likely to be a threat for enterprises that have enabled the application for various purposes. For those keeping score, this is the fourth time in 2019 Microsoft has had to fix critical security issues with its Remote Desktop service.

For all you Microsoft Edge and Internet Exploiter Explorer users, Microsoft has issued the usual panoply of updates for flaws that could be exploited to install malware after a user merely visits a hacked or booby-trapped Web site. Other equally serious flaws patched in Windows this month could be used to compromise the operating system just by convincing the user to open a malicious file (regardless of which browser the user is running).

As crazy as it may seem, this is the second month in a row that Adobe hasn’t issued a security update for its Flash Player browser plugin, which is bundled in IE/Edge and Chrome (although now hobbled by default in Chrome). However, Adobe did release important updates for its Acrobat and free PDF reader products.

If the tone of this post sounds a wee bit cantankerous, it might be because at least one of the updates I installed last month totally hosed my Windows 10 machine. I consider myself an equal OS abuser, and maintain multiple computers powered by a variety of operating systems, including Windows, Linux and MacOS.

Nevertheless, it is frustrating when being diligent about applying patches introduces so many unfixable problems that you’re forced to completely reinstall the OS and all of the programs that ride on top of it. On the bright side, my newly-refreshed Windows computer is a bit more responsive than it was before crash hell.

So, three pieces of advice. First off, don’t let Microsoft decide when to apply patches and reboot your computer. On the one hand, it’s nice Microsoft gives us a predictable schedule for when it’s going to release patches. On the other, Windows 10 will by default download and install patches whenever it pleases, and then reboot the computer.

Unless you change that setting. Here’s a tutorial on how to do that. For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

Secondly, it doesn’t hurt to wait a few days to apply updates.  Very often fixes released on Patch Tuesday have glitches that cause problems for an indeterminate number of Windows systems. When this happens, Microsoft then patches their patches to minimize the same problems for users who haven’t yet applied the updates, but it sometimes takes a few days for Redmond to iron out the kinks.

Finally, please have some kind of system for backing up your files before applying any updates. You can use third-party software for this, or just the options built into Windows 10. At some level, it doesn’t matter. Just make sure you’re backing up your files, preferably following the 3-2-1 backup rule. Thankfully, I’m vigilant about backing up my files.

And, as ever, if you experience any problems installing any of these patches this month, please feel free to leave a comment about it below; there’s a good chance other readers have experienced the same and may even chime in here with some helpful tips.

Cory DoctorowPodcast: Interoperability and Privacy: Squaring the Circle

In my latest podcast (MP3), I read my essay “Interoperability and Privacy: Squaring the Circle, published today on EFF’s Deeplinks; it’s another in the series of “adversarial interoperability” explainers, this one focused on how privacy and adversarial interoperability relate to each other.

Even if we do manage to impose interoperability on Facebook in ways that allow for meaningful competition, in the absence of robust anti-monopoly rules, the ecosystem that grows up around that new standard is likely to view everything that’s not a standard interoperable component as a competitive advantage, something that no competitor should be allowed to make incursions upon, on pain of a lawsuit for violating terms of service or infringing a patent or reverse-engineering a copyright lock or even more nebulous claims like “tortious interference with contract.”

In other words, the risk of trusting competition to an interoperability mandate is that it will create a new ecosystem where everything that’s not forbidden is mandatory, freezing in place the current situation, in which Facebook and the other giants dominate and new entrants are faced with onerous compliance burdens that make it more difficult to start a new service, and limit those new services to interoperating in ways that are carefully designed to prevent any kind of competitive challenge.

Standards should be the floor on interoperability, but adversarial interoperability should be the ceiling. Adversarial interoperability takes place when a new company designs a product or service that works with another company’s existing products or services, without seeking permission to do so.

MP3

Worse Than FailureCodeSOD: Nullable Knowledge

You’ve got a decimal value, maybe. It could be nothing at all, and you need to handle that null gracefully. Fortunately for you, C# has “nullable types”, which make this task easy.

Ian P’s co-worker made this straightforward application of nullable types.

public static decimal ValidateDecimal(decimal? value)
{
if (value == null) return 0;
decimal returnValue = 0;
Decimal.TryParse(value.ToString(), out returnValue);
return returnValue;
}

The lack of indentation was in the original.

The obvious facepalm is the Decimal.TryParse call. If our decimal has a value, we could just return it, but no, instead, we convert it to a string then convert that string back into a Decimal.

But the real problem here is someone who doesn’t understand what .NET’s nullable types offer. For starters, one could make the argument that value.HasValue is more readable than value == null, though that’s clearly debatable. That’s not really the problem though.

The purpose of ValidateDecimal is to return the input value, unless the input value was null, in which case we want to return 0. Nullable types have a lovely GetValueOrDefault() method, which returns the value, or a reasonable default. What is the default for any built in numeric type?

0.

This method doesn’t need to exist, it’s already built in to the decimal? type. Of course, the built-in method almost certainly doesn’t do a string conversion to get its value, so the one with a string is better, is it knot?
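
(For completeness, a minimal sketch of the built-in approach, in C# to match the sample above; the whole method collapses to a single call:)

public static decimal ValidateDecimal(decimal? value)
{
    // GetValueOrDefault() returns the wrapped value, or default(decimal),
    // i.e. 0, when the nullable is empty. No strings involved.
    return value.GetValueOrDefault();
}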

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecuritySEC Investigating Data Leak at First American Financial Corp.

The U.S. Securities and Exchange Commission (SEC) is investigating a security failure on the Web site of real estate title insurance giant First American Financial Corp. that exposed more than 885 million personal and financial records tied to mortgage deals going back to 2003, KrebsOnSecurity has learned.

First American Financial Corp.

In May, KrebsOnSecurity broke the news that the Web site for Santa Ana, Calif.-based First American [NYSE:FAF] exposed some 885 million documents related to real estate closings over the past 16 years, including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts and driver’s license images. No authentication was required to view the documents.

The initial tip on that story came from Ben Shoval, a real estate developer based in Seattle. Shoval said he recently received a letter from the SEC’s enforcement division which stated the agency was investigating the data exposure to determine if First American had violated federal securities laws.

In its letter, the SEC asked Shoval to preserve and share any documents or evidence he had related to the data exposure.

“This investigation is a non-public, fact-finding inquiry,” the letter explained. “The investigation does not mean that we have concluded that anyone has violated the law.”

The SEC declined to comment for this story.

Word of the SEC investigation comes weeks after regulators in New York said they were investigating the company in what could turn out to be the first test of the state’s strict new cybersecurity regulation, which requires financial companies to periodically audit and report on how they protect sensitive data, and provides for fines in cases where violations were reckless or willful. First American also is now the target of a class action lawsuit that alleges it “failed to implement even rudimentary security measures.”

First American has issued a series of statements over the past few months that seem to downplay the severity of the data exposure, which the company said was the result of a “design defect” in its Web site.

On June 18, First American said a review of system logs by an outside forensic firm, “based on guidance from the company, identified 484 files that likely were accessed by individuals without authorization. The company has reviewed 211 of these files to date and determined that only 14 (or 6.6%) of those files contain non-public personal information. The company is in the process of notifying the affected consumers and will offer them complimentary credit monitoring services.”

In a statement on July 16, First American said its now-completed investigation identified just 32 consumers whose non-public personal information likely was accessed without authorization.

“These 32 consumers have been notified and offered complimentary credit monitoring services,” the company said.

First American has not responded to questions about how long this “design defect” persisted on its site, how far back it maintained access logs, or how far back in those access logs the company’s review extended.

Updated, Aug. 13, 8:40 a.m.: Added “no comment” from the SEC.

CryptogramEvaluating the NSA's Telephony Metadata Program

Interesting analysis: "Examining the Anomalies, Explaining the Value: Should the USA FREEDOM Act's Metadata Program be Extended?" by Susan Landau and Asaf Lubin.

Abstract: The telephony metadata program which was authorized under Section 215 of the PATRIOT Act, remains one of the most controversial programs launched by the U.S. Intelligence Community (IC) in the wake of the 9/11 attacks. Under the program major U.S. carriers were ordered to provide NSA with daily Call Detail Records (CDRs) for all communications to, from, or within the United States. The Snowden disclosures and the public controversy that followed led Congress in 2015 to end bulk collection and amend the CDR authorities with the adoption of the USA FREEDOM Act (UFA).

For a time, the new program seemed to be functioning well. Nonetheless, three issues emerged around the program. The first concern was over high numbers: in both 2016 and 2017, the Foreign Intelligence Surveillance Court issued 40 orders for collection, but the NSA collected hundreds of millions of CDRs, and the agency provided little clarification for the high numbers. The second emerged in June 2018 when the NSA announced the purging of three years' worth of CDR records for "technical irregularities." Finally, in March 2019 it was reported that the NSA had decided to completely abandon the program and not seek its renewal as it is due to sunset in late 2019.

This paper sheds significant light on all three of these concerns. First, we carefully analyze the numbers, showing how forty orders might lead to the collection of several million CDRs, thus offering a model to assist in understanding Intelligence Community transparency reporting across its surveillance programs. Second, we show how the architecture of modern telephone communications might cause collection errors that fit the reported reasons for the 2018 purge. Finally, we show how changes in the terrorist threat environment as well as in the technology and communication methods they employ -- in particular the deployment of asynchronous encrypted IP-based communications -- has made the telephony metadata program far less beneficial over time. We further provide policy recommendations for Congress to increase effective intelligence oversight.

Worse Than FailureInternship of Things

Mindy was pretty excited to start her internship with Initech's Internet-of-Things division. She'd been hearing at every job fair how IoT was still going to be blowing up in a few years, and how important it would be for her career to have some background in it.

It was a pretty standard internship. Mindy went to meetings, shadowed developers, did some light-but-heavily-supervised changes to the website for controlling your thermostat/camera/refrigerator all in one device.

As part of testing, Mindy created a customer account on the QA environment for the site. She chucked a junk password at it, only to get a message: "Your password must be at least 8 characters long, contain at least three digits, not in sequence, four symbols, at least one space, and end with a letter, and not be more than 10 characters."

"Um, that's quite the password rule," Mindy said to her mentor, Bob.

"Well, you know how it is, most people use one password for every site, and we don't want them to do that here. That way, when our database leaks again, it minimizes the harm."

"Right, but it's not like you're storing the passwords anyway, right?" Mindy said. She knew that even leaked hashes could be dangerous, but good salting/hashing would go a long way.

"Of course we are," Bob said. "We're selling web connected thermostats to what can be charitably called 'twelve-o-clock flashers'. You know what those are, right? Every clock in their house is flashing twelve?" Bob sneered. "They can't figure out the site, so we often have to log into their account to fix the things they break."

A few days later, Initech was ready to push a firmware update to all of the Model Q baby monitor cameras. Mindy was invited to watch the process so she could understand their workflow. It started off pretty reasonable: their CI/CD system had a verified build, signed off, ready to deploy.

"So, we've got a deployment farm running in the cloud," Bob explained. "There are thousands of these devices, right? So we start by putting the binary up in an S3 bucket." Bob typed a few commands to upload the binary. "What's really important for our process is that it follows this naming convention. Because the next thing we're going to do is spin up a half dozen EC2 instances- virtual servers in the cloud."

A few more commands later, and then Bob had six sessions open to cloud servers in tmux. "Now, these servers are 'clean instances', so the very first thing I have to do is upload our SSH keys." Bob ran an ssh-copy-id command to copy the SSH key from his computer up to the six cloud VMs.

"Wait, you're using your personal SSH keys?"

"No, that'd be crazy!" Bob said. "There's one global key for every one of our Model Q cameras. We've all got a copy of it on our laptops."

"All… the developers?"

"Everybody on the team," Bob said. "Developers to management."

"On their laptops?"

"Well, we were worried about storing something so sensitive on the network."

Bob continued the process, which involved launching a script that would query a webservice to see which Model Q cameras were online, then sshing into them, having them curl down the latest firmware, and then self-update. "For the first few days, we leave all six VMs running, but once most of them have gotten the update, we'll just leave one cloud service running," Bob explained. "Helps us manage costs."

It's safe to say Mindy learned a lot during her internship. Mostly, she learned, "don't buy anything from Initech."

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Valerie AuroraGoth fashion tips for Ehlers-Danlos Syndrome

A woman wearing a dramatic black hooded jacket typing on a laptop
Skingraft hoodie, INC shirt, Fisherman’s Wharf fingerless gloves

My ideal style could perhaps be best described as “goth chic”—a lot of structured black somewhere on the border between couture and business casual—but because I have Ehlers-Danlos Syndrome, I more often end up wearing “sport goth”: a lot of stretchy black layers in washable fabrics with flat shoes. With great effort, I’ve nudged my style back towards “goth chic,” at least on good days. Enough people have asked me about my gear that I figured I’d share what I’ve learned with other EDS goths (or people who just like being comfortable and also wearing a lot of black).

Here are the constraints I’m operating under:

  • Flat shoes with thin soles to prevent ankle sprains and foot and back pain
  • Stretchy/soft shoes without pressure points to prevent blisters on soft skin
  • Can’t show sweat because POTS causes excessive sweating, also I walk a lot
  • Layers because POTS, walking, and San Francisco weather means I need to adjust my temperature a lot
  • Little or no weight on shoulders due to hypermobile shoulders
  • No tight clothes on abdomen due to pain (many EDS folks don’t have this problem but I do)
  • Soft fabric only touching skin due to sensitive easily irritated skin
  • Warm wrists to prevent hands from losing circulation due to Reynaud’s or POTS

On the other hand, I have a few things that make fashion easier for me. For starters, I can afford a four-figure annual clothing budget. I still shop a lot at thrift stores, discount stores like Ross, or discount versions of more expensive stores like Nordstrom Rack but I can afford a few expensive pieces at full price. Many of the items on this page can be found used on Poshmark, eBay, and other online used clothing marketplaces. I also recommend doing the math for “cost per wear” to figure out if you would save money if you wore a more expensive but more durable piece for a longer period of time. I usually keep clothing and shoes for several years and repair as necessary.

I currently fit within the “standard” size ranges of most clothing and shoe brands, but many of the brands I recommend here have a wider range of sizes. I’ve included the size range where relevant.

Finally, as a cis woman with an extremely femme body type, I can wear a wide range of masculine and feminine styles without being hassled in public for being gender-nonconforming (I still get hassled in public for being a woman, yay). Most of the links here are to women’s styles, but many brands also have men’s styles. (None of these brands have unisex styles that I know of.)

Shoes and socks

Shoes are my favorite part of fashion! I spend much more money on shoes than I used to because more expensive shoes are less likely to give me blisters. If I resole/reheel/polish them regularly, they can last for several years instead of a few months, so they cost the same per wear. Functional shoes are notoriously hard for EDS people to find, so the less often I have to search for new shoes, the better. I nearly always wear my shoes until they can no longer be repaired. If this post does nothing other than convince you that it is economical and wise to spend more money on shoes, I have succeeded.

Woman wearing two coats and holding two rolling bags
Via Spiga trench, Mossimo hoodie, VANELi flats, Aimee Kestenberg rolling laptop bag, Travelpro rolling bag

Smartwool black socks – My poor tender feet need cushiony socks that don’t sag or rub. Smartwool socks are expensive but last forever, and you can get them in 100% black so that you can wash them with your black clothes without covering them in little white balls. I wear mostly the men’s Walk Light Crew and City Slicker, with occasional women’s Hide and Seek No Show.

Skechers Cleo flats – These are a line of flats in a stretchy sweater-like material. The heel can be a little scratchy, but I sewed ribbon over the seam and it was fine. The BOBS line of Skechers is also extremely comfortable. Sizes 5 – 11.

VANELi flats – The sportier versions of these shoes are obscenely comfortable and also on the higher end of fashion. I wore my first pair until they had holes in the soles, and then I kept wearing them another year. I’m currently wearing out this pair. You can get them majorly discounted at DSW and similar places. Sizes 5 – 12.

Stuart Weitzman 5050 boots – These over-the-knee boots are the crown jewel of any EDS goth wardrobe. First, they are almost totally flat and roomy in the toe. Second, the elastic in the boot shaft acts like compression socks, helping with POTS. Third, they look amazing. Charlize Theron wore them in “Atomic Blonde” while performing martial arts. Angelina Jolie wears these in real life. The downside is the price, but there is absolutely no reason to pay full price. I regularly find them in Saks Off 5th for 30% off. Also, they last forever: with reheeling, my first pair lasted around three years of heavy use. Stuart Weitzman makes several other flat boots with elastic shafts which are also worth checking out, but they have been making the 5050 for around 25 years so this style should always be available. Sizes 4 – 12, runs about a half size large.

Pants/leggings/skirts

A woman wearing black leggings and a black sweater
Patty Boutik sweater, Demobaza leggings, VANELi flats

Satina high-waisted leggings – I wear these extremely cheap leggings probably five days a week under skirts or dresses. Available in two sizes, S – L and XL – XXXL. If you can wear tight clothing, you might want to check out the Spanx line of leggings (e.g. the Moto Legging) which I would totally wear if I could.

Toad & Co. Women’s Chaka skirt – I wear this skirt probably three days a week. Ridiculously comfortable and only middling expensive. Sizes XS – L.

NYDJ jeans/leggings – These are pushing it for me in terms of tightness, but I can wear them if I’m standing or walking most of the day. Expensive, but they look professional and last forever. Sizes 00 – 28, including petites, and  they run at least a size large.

Demobaza leggings – The leggings made mostly of stretch material are amazingly comfortable, but also obscenely expensive. They also last forever. Sizes XS – L.

Tops

Patty Boutik – This strange little label makes comfortable tops with long long sleeves and long long bodies, and it keeps making the same styles for years. Unfortunately, they tend to sell out of the solid black versions of my favorite tops on a regular basis. I order two or three of my favorite styles whenever they are in stock as they are reasonably cheap. I’ve been wearing the 3/4 sleeve boat neck shirt at least once a week for about 5 years now. Sizes XS – XL, tend to run a size small.

14th and Union – This label makes very simple pieces out of the most comfortable fabrics I’ve ever worn for not very much money. I wear this turtleneck long sleeve tee about once a week. I also like their skirts. Sizes XS to XL, standard and petite.

Macy’s INC – This label is a reliable source of stretchy black clothing at Macy’s prices. It often edges towards club wear but keeps the simplicity I prefer.

Coats

Mossimo hoodie – Ugh, I love this thing. It’s the perfect cheap fashion staple. I often wear it underneath other coats. Not sure about sizes since it is only available on resale sites.

Skingraft Royal Hoodie – A vastly more expensive version of the black hoodie, but still comfortable, stretchy, and washable. And oh so dramatic. Sizes XS – L.

3/4 length hooded black trench coat – Really any brand will do, but I’ve mostly recently worn out a Calvin Klein and am currently wearing a Via Spiga.

Accessories

A woman wearing all black with a fanny pack
Mossimo hoodie, Toad & Co. skirt, T Tahari fanny pack, Satina leggings, VANELi flats

Fingerless gloves – The cheaper, the better! I buy these from the tourist shops at Fisherman’s Wharf in San Francisco for under $10. I am considering these gloves from Demobaza.

Medline folding cane – Another cheap fashion staple for the EDS goth! Sturdy, adjustable, folding black cane with clean sleek lines.

T Tahari Logo Fanny Pack – I stopped being able to carry a purse right about the time fanny packs came back into style! Ross currently has an entire fanny pack section, most of which are under $13. If I’m using a backpack or the rolling laptop bag, I usually keep my wallet, phone, keys, and lipstick in the fanny pack for easy access.

Duluth Child’s Pack, Envelope style – A bit expensive, but another simple fashion staple. I used to carry the larger roll-top canvas backpack until I realized I was packing it full of stuff and aggravating my shoulders. The child’s pack barely fits a small laptop and a few accessories.

Aimee Kestenberg rolling laptop bag – For the days when I need more than I can fit in my tiny backpack and fanny pack. It has a strap to fit on to the handle of a rolling luggage bag, which is great for air travel.

Apple Watch – The easiest way to diagnose POTS! (Look up “poor man’s tilt table test.”) A great way to track your heart rate and your exercise, two things I am very focused on as someone with EDS. When your first watch band wears out, go ahead and buy a random cheap one off the Internet.

Those are my EDS goth fashion tips! If you have more, please share them in the comments.

,

Rondam RamblingsFedex: when it absolutely, positively has to get stuck in the system for over two months

I have seen some pretty serious corporate bureaucratic dysfunction over the years, but I think this one takes the cake: on May 23, we shipped a package via Fedex from California to Colorado.  The package required a signature.  It turned out that the person we sent it to had moved, and so was not able to sign for the package, and so it was not delivered. Now, the package has our return address on

,

LongNowMariana Mazzucato on the Economics Behind the Apollo Moon Landing

Getting to the moon and back again required unprecedented innovation across different sectors of the United States economy. Economist Mariana Mazzucato on the economics behind the Apollo 11 moon landing.

From the Long Now Seminar, “Rethinking Value” by Mariana Mazzucato. Watch the full talk here.

,

LongNowNeal Stephenson on the Ending of Game of Thrones

Author Neal Stephenson discusses the controversial ending to Game of Thrones and why endings are generally so hard to nail in works of fiction.

From the Neal Stephenson Conversation at the Interval, “Fall, or Dodge in Hell.” Watch the full video here.

,

LongNowBrian Eno’s Soundtrack for the Apollo 11 Moon Landing

50 years ago, the Apollo 11 moon landing was televised live to some 600 million viewers back on planet Earth. One of them was future Long Now co-founder Brian Eno, then 21. He found himself underwhelmed by what he saw. 

Footage from the television transmission of the moon landing.

Surely, there was more gravitas to the experience than the grainy, black and white footage suggested. In the months that followed, the same few seconds of Neil Armstrong’s small steps played on an endless loop on TV as anchors and journalists offered their analysis and patriotic platitudes as a soundtrack. The experts, he later wrote, “[obscured] the grandeur and strangeness of the event with a patina of down-to-earth chatter.”

In 01983, Eno decided to add his own soundtrack to the momentous event. His ninth solo album, Apollo: Atmospheres and Soundtracks was produced to accompany a documentary, Apollo, that consisted solely of 35mm footage from the Apollo 11 mission, with no narration. The first iteration of the film was too experimental for most audiences; it was recut with commentary from Apollo astronauts when it was eventually re-released as For All Mankind in 01989. 

The remastered and extended edition of Brian Eno’s Apollo album will be released on July 19.

This year, on the occasion of the moon landing’s 50th anniversary, Eno has revisited the Apollo project. He reunited with original producers Daniel Lanois and Roger Eno to remaster the album and record 11 new instrumental compositions. The album, Apollo: Extended Edition, will be released on July 19. A new music video for the album’s most well-known track, “An Ending (Ascent)” has also been released with visuals from a 02016 Earth overview.

A new music video for Brian Eno’s “An Ending (Ascent).”

To celebrate the album’s release and the moon landing anniversary, Long Now will be hosting a Brian Eno album listening event at The Interval on the evenings of July 23, 24, 30, and 31. 

The album will be played on our Meyer Sound System, accompanied by footage of the Apollo missions as well as a special mini menu of cocktails inspired by the album. Tickets are $20 and are expected to go quickly. 

The Apollo missions have always been a point of inspiration for Long Now over the years, both for the Big Here perspective they provided as well as for the long-term thinking they utilized. Below are links to some of our Apollo-related blog posts and articles:

,

Sam VargheseThe Rise and Fall of the Tamil Tigers is full of errors

How many mistakes should one accept in a book before it is pulled from sale? In the normal course, when a book is accepted for publication by a recognised publishing company, there are experienced editors who go through the text, correct it and ensure that there are no major bloopers.

Then there are fact-checkers who ensure that what is stated within the book is, at least, mostly aligned with public versions of events from reliable sources.

In the case of The Rise and Fall of the Tamil Tigers, a third-rate book that is being sold by some outlets online, neither of these exercises has been carried out. And it shows.

If the author, Damian Tangram, had voiced his views or even put the entire book online as a free offering, that would be fine. He is entitled to his opinion. But when he is trying to trick people into buying what is a very poor-quality book, then warnings are in order.

Here are just a few of the screw-ups in the first 14 pages (the book is 375 pages!):

In the foreword, the words “Civil War” are capitalised. This is incorrect; capitals would be justified only if this were the proper name of a conflict exclusive to Sri Lanka. It is not; there are numerous civil wars occurring around the world.

Next, the foreword claims the war started in 1985. This, again, is incorrect. It began in July 1983. The next claim is that this war “had its origins in the post-war political exploitation of socially divisive policies.” Really? Post-war means after the war – this conflict must be the first in the world to begin after it was over!

There is a further line indicating that the author does not know how to measure time: “After spanning three decades…” A decade is 10 years; three decades would be 30 years. The war lasted a little less than 26 years, from July 23, 1983 to May 19, 2009.

Again, in the foreword, the author claims that the Liberation Tigers of Tamil Eelam “grew from being a small despot insurgency to the most dangerous and effective terrorist organizations the world has ever seen.” The LTTE was started by Velupillai Pirapaharan in the 1970s. By 1983, it was already a well-organised fighting force. Further, the English is wonky here; the word should be “organization”, not the plural “organizations”.

And this is just the first paragraph of the book!

The second paragraph of the foreword claims, of the year 2006: “Just when things could not be worse Sri Lanka was plunged into all-out war.” The war started much earlier and had merely been in a brief hiatus. The final effort to eliminate the LTTE began on April 25, 2006. And a comma would be handy there.

Then again, the book claims in the foreword that the only person who refused to compromise in the conflict was Pirapaharan. This is incorrect, as the government was equally stubborn until 2002.

To go on, the foreword says the book gives “an example of how a terrorist organisation like the LTTE can proliferate and spread its murderous ambitions”. The book suffers from numerous constructions of this kind, all of them standout examples of malapropism. And one’s ambitions grow; one does not “spread” them.

Again, and we are still in the foreword, the book says the LTTE “was a force that lasted for more than twenty-five years…” Given that it took shape in the 1970s, this is again incorrect.

Next, there is a section titled “About this Book”. Again, misplaced capitalisation of the word “Book”. The author says he visited Sri Lanka for the first time in 1989 soon after he “met and married wife….” Great use of butler English, that. Additionally, he could not have married his wife; the woman in question became his wife only after he married her.

That year, he claims, the “most frightening organization” was the JVP, or Janata Vimukti Peramuna (People’s Liberation Front). Two years later, when he returned for a visit, the JVP had been defeated but “the enemy to peace was the LTTE”. This is incorrect, as the LTTE did not offer any let-up while the JVP was engaging the Sri Lankan army.

Of the Tigers he says, “the power that they had acquired over those short years had turned them into a mythical unstoppable force.” This is incorrect; the Tigers became a force to be reckoned with many years earlier. They did not undergo any major evolution between 1989 and 1991.

The author’s only connection to Sri Lanka is through marrying a Sri Lankan woman. This, plus his visits, he claims, gives him a “close connection” to the island!

So we go on: “I returned to Sri Lankan several times…” The word is Lanka, not Lankan. More proof of a lack of editing, if any is needed by now.

“Lives were being lost; freedoms restricted and the economy being crushed under a financial burden.” The use of that semi-colon illustrates Tangram’s level of ignorance of English. Factually, this is all stating the bleeding obvious, as these consequences of the war had set in much earlier.

The author claims that one generation started the war, a second continued to fight and a third was about to grow up and be thrown into a conflict. How three generations can come and go in the space of 26 years is a mystery and more evidence that this man just flings words about and hopes that they make sense.

More in this same section: “To know Sri Lanka without war was once an impossible dream…” Rubbish: I lived in Sri Lanka from 1957 till 1972, and I knew peace most of the time.

Ending this section is another screw-up: “I returned to Sri Lanka in 2012, after the war had ended, to witness the one thing I had not seen in over 25 years: Peace.” Leaving aside the wrong capitalisation of the word “peace”, since the author’s first visit was in 1989, how does 2012 make it “over 25 years”? By any calculation, that comes to 23 years. This is a ruse used throughout the book to give the impression that the author has a long connection to Sri Lanka when in reality he is just an opportunist trying to turn some bogus observations about a conflict he knows nothing about into a cash cow.
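The arithmetic, at least, is easy to check. Here is a minimal Python sketch using only the dates already cited in this review (nothing below comes from the book itself):

    from datetime import date

    # Duration of the war: July 23, 1983 to May 19, 2009
    war_years = (date(2009, 5, 19) - date(1983, 7, 23)).days / 365.25
    print(round(war_years, 1))  # ~25.8 -- "a little less than 26 years"

    # The author's claimed span: first visit in 1989, return in 2012
    print(2012 - 1989)  # 23 -- not "over 25 years"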

And so far I have covered hardly three full pages!

Let’s have a brief look at Ch-1 (one presumes that means Chapter 1), which is titled “Understanding Sri Lanka” with the sub-heading “Introduction Understanding Sri Lanka: The impossible puzzle”. (If the puzzle is impossible, as claimed, how can the author explain it?)

So we begin: “…there is very little information being proliferated into the general media about the nation of Sri Lanka.” The author obviously does not own a dictionary and is unaware of how the word “proliferated” should be used.

There are several strange conglomerations of words which mean nothing; for example, take this: “Without referring to a map most people would struggle to name any other city than Colombo. Even the name of the island may reflect some kind of echo of when it changed from being called Ceylon to when it became Sri Lanka.” Apart from the missing punctuation and the mangled word order, what the hell does this mean? Echo?

On the next page, the book says: “At the bottom corner of India is the small teardrop-shaped island of Sri Lankan.” That sentence could have done without the last “n”. Once again, no editor. Only Tangram the great.

The word Sinhalese is spelt that way; nobody spells it “Singhalese”. But since the author is unable to read Sinhala, the local language, he makes errors of this kind over and over again. Again, the common convention for numbers in print dictates that one to nine be spelt out and higher numbers be given as figures. The author is blissfully unaware of this too.

The percentage of Sinhalese speakers is given as “about 70%” when the actual figure is 74.9%. And then, in another illustration of his sloppiness, the author writes: “The next largest groups are the Tamils who make up about 15% of the population.” The Tamils are not a single group: they comprise the plantation Tamils (4.2%), brought in by the British from India to work the tea estates, and the local Tamils (11.2%), who have been there much longer.

He then refers to a group he calls Burgers – which is something sold in a fast-food outlet. The Sri Lankan ethnic group is called the Burghers, the product of inter-marriages between the Sinhalese and the Portuguese, British or Dutch invaders. There is a reference to a group of indigenous people, whom the author calls “Vedthas.” Later, on the same page, he calls these people Veddhas. Not for the first time, it is clear that he could not be bothered to spell-check this bogus tome.

There’s more: the “Singhalese” (the author’s spelling) are claimed to be of “Arian” origin. The word is Aryan. Then there is a claim that the Veddhas are related to the “Australian Indigenous Aborigines”. One has yet to hear of any non-Indigenous Aborigines. Redundant words are one thing at which Tangram excels.

There is reference to some king of Sri Lanka known as King Dutigama. The man’s name was Dutugemunu. But then what’s the difference, eh? We might as well have called him Charlie Chaplin!

Referring to the religious groups in Sri Lanka, Tangram writes: “Hinduism also has a long history in Sri Lanka with Kovils…” The word is temples, unless one is writing in the vernacular. He claims Buddhists make up 80%; the correct figure is 70.2%.

Then, referring to the Bo tree under which Gautama Buddha is said to have attained enlightenment, Tangram claims it is more than 2000 years old and the oldest cultivated tree alive today. He does not know about the bristlecone pines that date back more than 4700 years, or the redwoods that carbon dating has shown to be more than 3000 years old.

This brings me to page 14, and I have already crossed 1500 words! The entire book would probably take me a week to cover. But this number of errors should serve to prove my point: this book should not be sold. It is a fraud on the public.

,

LongNowThe Global Tree Restoration Potential

Earlier this month, a study appeared in Science which found that a global reforestation effort could capture 205 gigatonnes of carbon over the next 40 to 100 years, roughly two thirds of the carbon humans have added to the atmosphere since the industrial revolution:

The restoration of trees remains among the most effective strategies for climate change mitigation. We mapped the global potential tree coverage to show that 4.4 billion hectares of canopy cover could exist under the current climate. Excluding existing trees and agricultural and urban areas, we found that there is room for an extra 0.9 billion hectares of canopy cover, which could store 205 gigatonnes of carbon in areas that would naturally support woodlands and forests. This highlights global tree restoration as our most effective climate change solution to date. However, climate change will alter this potential tree coverage. We estimate that if we cannot deviate from the current trajectory, the global potential canopy cover may shrink by ~223 million hectares by 2050, with the vast majority of losses occurring in the tropics. Our results highlight the opportunity of climate change mitigation through global tree restoration but also the urgent need for action.

Via Science.

Scientific American unpacked the study and its potential implications:

The study team analyzed almost 80,000 satellite photo measurements of tree cover worldwide and combined them with enormous global databases about soil and climate conditions, evaluating one hectare at a time. The exercise generated a detailed map of how many trees the earth could naturally support—where forests grow now and where they could grow, outside of areas such as deserts and savannahs that support very few or no trees. The team then subtracted existing forests and also urban areas and land used for agriculture. That left 0.9 billion hectares that could be forested but have not been. If those spaces were filled with trees that already flourish nearby, the new growth could store 205 gigatons of carbon by the time the forests mature.

After 40 to 100 years, of course, the storage rate would flatten as forest growth levels off—but the researchers say the 205 gigatons would be maintained as old trees die and new ones grow. There would be “a bank of excess carbon that is no longer in the atmosphere,” Crowther says.

Via Scientific American.
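As a back-of-the-envelope check on the study’s figures, here is a minimal Python sketch. The 44/12 ratio for converting carbon to CO2 is standard molecular-weight arithmetic; the ~300 GtC estimate of historical anthropogenic emissions used for the “two thirds” framing is my own rough assumption, not a number from the study:

    # Figures quoted from the Science abstract above
    carbon_gt = 205   # gigatonnes of carbon stored once the new forests mature
    area_bha = 0.9    # billion hectares of restorable canopy cover

    # Implied storage density: 205e9 tonnes over 0.9e9 hectares
    print(round(carbon_gt / area_bha))   # ~228 tonnes of carbon per hectare

    # Expressed as CO2 rather than carbon (44/12 ~= 3.67)
    print(round(carbon_gt * 44 / 12))    # ~752 gigatonnes of CO2

    # "Two thirds" framing, assuming ~300 GtC emitted since industrialisation
    print(round(carbon_gt / 300, 2))     # ~0.68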