Planet Russell


Planet Debian - Andrew Cater: Debian 10.5 media testing process started 202008011245 - post 1 of several.

The media testing process has started slightly late. There will be a _long_ testing process over much of the day: the final media image releases are likely to be at about 0200-0300 UTC tomorrow.

Just settling in for a long day of testing: as ever, it's good to be chatting with my Debian colleagues in Cambridge and with Schweer in Germany. It's going to be a hot one - 30 Celsius (at least) and high humidity for all of us.

Planet Debian - Andrew Cater: Debian 10.5 Buster point release 20200801 - all of the fixes :)

The point release is happening today for Debian Buster 10.5. This is an important release because it incorporates all the recent security fixes from the latest GRUB / Secure Boot "Boothole" security problems.

Behind the scenes, there has been a lot of work to get this right: a release subject to an embargo to allow all the Linux releases to co-ordinate this as far as possible, lots of consistent effort, lots of cooperation - the very best of Free/Libre/Open Source working together.

This time around, the Secure Boot shims are signed with a different upstream key: in due course, when the old, insecure code is revoked to plug the security hole, older media may be deny-listed. All the updates for all the affected packages (listed in https://www.debian.org/security/2020-GRUB-UEFI-SecureBoot/) are included in this release.

This has been a major wake-up call: the work behind the scenes means that each affected Linux distribution will be in a much better position going forward, and working together is always good.

Planet Debian - Utkarsh Gupta: FOSS Activities in July 2020

Here’s my (tenth) monthly update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 17th month of contributing to Debian. I became a DM in late March last year and a DD last Christmas! \o/

Well, this month I didn’t do as much Debian stuff as I usually do; however, I did a lot of things related to Debian (indirectly, via GSoC)!

Anyway, here are the following things I did this month:

Uploads and bug fixes:

Other $things:

  • Mentoring for newcomers.
  • FTP Trainee reviewing.
  • Moderation of -project mailing list.
  • Sponsored php-twig for William; ruby-growl, ruby-xmpp4r, and ruby-uniform-notifier for Cocoa; sup-mail for Iain; and node-markdown-it for Sakshi.

GSoC Phase 2, Part 2!

In May, I got selected as a Google Summer of Code student for Debian again! \o/
I am working on the Upstream-Downstream Cooperation in Ruby project.

The first three blogs can be found here:

Also, I log daily updates at gsocwithutkarsh2102.tk.

Whilst the daily updates are available at the above site, I’ll break down the important parts of the latter half of the second month here:

  • Marc Andre, very kindly, helped in fixing the specs that were failing earlier this month. Well, the problem was with the specs themselves, though I’m still not sure how. Anyway…
  • Finished documentation of the second cop and marked the PR as ready to be reviewed.
  • David reviewed and suggested some really good changes and I fixed/tweaked that PR as per his suggestion to finally finish the last bits of the second cop, RelativeRequireToLib.
  • Merged the PR upon two approvals and released it as v0.2.0! 💖
  • We had our next weekly meeting where we discussed the next steps and the things that are supposed to be done for the next set of cops.
  • Introduced rubocop-packaging to the outer world and requested other upstream projects to use it! It is being used by 13 other projects already! 😭💖
  • Started to work on packaging-style-guide but I didn’t push anything to the public repository yet.
  • Worked on refactoring the cops_documentation Rake task which was broken by the new auto-corrector API. Opened PR #7 for it. It’ll be merged after the next RuboCop release as it uses CopsDocumentationGenerator class from the master branch.
  • Whilst working on autoprefixer-rails, I found something unusual. The second cop shouldn’t really report offenses if the require_relative calls are from lib to lib itself. This is a false-positive. Opened issue #8 for the same.
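The cop itself is a RuboCop extension written in Ruby, so this is not its real implementation; purely to illustrate the path logic behind RelativeRequireToLib and the issue #8 false positive, here is a sketch in Python with hypothetical file names:

```python
from pathlib import PurePosixPath

def offense(requiring_file, target):
    """Illustration of the RelativeRequireToLib idea (the real cop is
    Ruby/RuboCop): flag require_relative calls that reach into lib/
    from outside lib/. Calls from lib/ into lib/ itself are fine;
    reporting those was the false positive described in issue #8."""
    src = PurePosixPath(requiring_file)
    dest = (src.parent / target).with_suffix(".rb")
    parts = []
    for part in dest.parts:  # normalize ".." components
        if part == "..":
            if parts:
                parts.pop()
        else:
            parts.append(part)
    dest = PurePosixPath(*parts)
    return dest.parts[:1] == ("lib",) and src.parts[:1] != ("lib",)

assert offense("spec/foo_spec.rb", "../lib/foo")    # spec -> lib: offense
assert not offense("lib/foo/bar.rb", "../version")  # lib -> lib: fine
```

The second assertion is exactly the autoprefixer-rails situation: a require_relative that starts in lib and stays in lib should not be reported.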

Whilst working on rubocop-packaging, I contributed to more Ruby projects, refactoring their libraries a little and mostly fixing RuboCop issues and the issues that the Packaging extension reports as “offensive”.
Following are the PRs that I raised:


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the Jessie release (+2 years after LTS support).

This was my tenth month as a paid Debian LTS contributor and my first as a paid Debian ELTS contributor.
I was assigned 25.25 hours for LTS and 13.25 hours for ELTS and worked on the following things:

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

Other (E)LTS Work:

  • Did my LTS frontdesk duty from 29th June to 5th July.
  • Triaged qemu, firefox-esr, wordpress, libmediainfo, squirrelmail, xen, openjpeg2, samba, and ldb.
  • Marked CVE-2020-15395/libmediainfo as no-dsa for Jessie.
  • Marked CVE-2020-13754/qemu as no-dsa/intrusive for Stretch and Jessie.
  • Marked CVE-2020-12829/qemu as no-dsa for Jessie.
  • Marked CVE-2020-10756/qemu as not-affected for Jessie.
  • Marked CVE-2020-13253/qemu as postponed for Jessie.
  • Dropped squirrelmail and xen for Stretch LTS.
  • Added notes for tomcat8, shiro, and cacti to take care of the Stretch issues.
  • Emailed team@security.d.o and debian-lts@l.d.o regarding possible clashes.
  • Maintenance of LTS Survey on the self-hosted LimeSurvey instance. Received 1765 (just wow!) responses.
  • Attended the fourth LTS meeting. MOM here.
  • General discussion on LTS private and public mailing list.

Other(s)

Sometimes it gets hard to fit work into a particular category.
That’s why I’m putting all of those things under this one.
It includes the following two sub-categories.

Personal:

This month I did the following things:

  • Released v0.2.0 of rubocop-packaging on RubyGems! 💯
    It’s open-sourced and the repository is here.
    Bug reports and pull requests are welcomed! 😉
  • Released v0.1.0 of get_root on RubyGems! 💖
    It’s open-sourced and the repository is here.
  • Wrote max-word-frequency, my Rails C1M2 programming assignment.
    And made it much neater & cleaner!
  • Refactored my lts-dla and elts-ela scripts entirely and wrote them in Ruby so that there are no issues and no false-positives! 🚀
    Check lts-dla here and elts-ela here.
  • And finally, built my first Rails (mini) web-application! 🤗
    The repository is here. This was also a programming assignment (C1M3).
    And furthermore, hosted it at Heroku.

Open Source:

Again, this contains all the things that I couldn’t categorize earlier.
Opened several issues and PRs:

  • Issue #8273 against rubocop, reporting a false-positive auto-correct for Style/WhileUntilModifier.
  • Issue #615 against http reporting a weird behavior of a flaky test.
  • PR #3791 for rubygems/bundler to remove redundant bundler/setup require call from spec_helper generated by bundle gem.
  • Issue #3831 against rubygems, reporting a traceback of undefined method, rubyforge_project=.
  • Issue #238 against nheko asking for enhancement in showing the font name in the very font itself.
  • PR #2307 for puma to constrain rake-compiler to v0.9.4.
  • And finally, I joined the Cucumber organization! \o/

Thank you for sticking along for so long :)

Until next time.
:wq for today.

Planet Debian - Paul Wise: FLOSS Activities July 2020

Focus

This month I didn't have any particular focus. I just worked on issues in my info bubble.

Changes

Issues

Review

Administration

  • Debian wiki: unblock IP addresses, approve accounts, reset email addresses

Communication

Sponsors

The purple-discord, ifenslave and psqlodbc work was sponsored by my employer. All other work was done on a volunteer basis.

Planet Debian - Junichi Uekawa: August and feels like it finally.

It’s August, and it finally feels like it. July didn’t feel like July; it felt like June because it rained so much. This is summer.


Planet Debian - Ben Hutchings: Debian LTS work, July 2020

I was assigned 20 hours of work by Freexian's Debian LTS initiative, but only worked 5 hours this month and returned the remainder to the pool.

Now that Debian 9 'stretch' has entered LTS, the stretch-backports suite will be closed and no longer updated. However, some stretch users rely on the newer kernel version provided there. I prepared to add Linux 4.19 to the stretch-security suite, alongside the standard package of Linux 4.9. I also prepared to update the firmware-nonfree package so that firmware needed by drivers in Linux 4.19 will also be available in stretch's non-free section. Both these updates will be based on the packages in stretch-backports, but needed some changes to avoid conflicts or regressions for users that continue using Linux 4.9 or older non-Debian kernel versions. I will upload these after the Debian 10 'buster' point release.

Planet Debian - Chris Lamb: Free software activities in July 2020

Here is my monthly update covering what I have been doing in the free and open source software world during July 2020 (previous month):

  • Opened a pull request to make the build reproducible in PyERFA, a set of Python bindings for various astronomy-related utilities (#45), as well as one for the PeachPy assembler to make the output of codegen/x86_64.py reproducible (#108).
  • As part of being on the board of directors of the Open Source Initiative and Software in the Public Interest I attended their respective monthly meetings and participated in various licensing and other discussions occurring on the internet, as well as the usual internal discussions regarding logistics and policy etc. This month, it was SPI's Annual General Meeting and the OSI has been running a number of remote strategy sessions for the board.

  • Fixed an issue in my tickle-me-email library that implements Getting Things Done (GTD)-like behaviours in IMAP inboxes to ensure that all messages have a unique Message-Id header. [...]

  • Reviewed and merged even more changes by Pavel Dolecek into my Strava Enhancement Suite, a Chrome extension to improve the user experience on the Strava athletic tracker.

  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub and use the Travis CI continuous integration platform, to fix a compatibility issue with the latest version of mk-build-deps. [...][...]
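The Message-Id fix in tickle-me-email above can be sketched in a few lines; this is only an illustration of the idea using Python's standard email module (the domain is a placeholder, and the library's actual implementation may differ):

```python
from email import message_from_string
from email.utils import make_msgid

def ensure_message_id(raw, domain="example.com"):
    """Return the parsed message, adding a Message-Id header if absent.
    Sketch only: the domain is a placeholder, not tickle-me-email's."""
    msg = message_from_string(raw)
    if msg.get("Message-Id") is None:
        msg["Message-Id"] = make_msgid(domain=domain)
    return msg

# A message without a Message-Id gets one; an existing header is kept.
msg = ensure_message_id("Subject: tickle\n\nRemember this later.\n")
assert msg["Message-Id"] is not None
```

A unique Message-Id matters for GTD-style IMAP workflows because it is the only stable handle for re-filing or snoozing the same message later.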

For Lintian, the static analysis tool for Debian packages:

  • Update the regular expression to search for all the released versions in a .changes file. [...]

  • Avoid false-positives when matching sensible-utils utilities such as i3-sensible-pager. (#966022)

  • Rename the send-patch tag to patch-not-forwarded-upstream. [...]

  • Drop reminders from 26 tags that false-positives should be reported to Lintian as this is implicit in all our tags. [...]
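Lintian itself is written in Perl, but the idea behind the first change above, matching every released version in a .changes file rather than just the first, can be sketched with a hypothetical pattern (not Lintian's actual regex):

```python
import re

# Hypothetical Debian-version pattern, not Lintian's actual regex.
VERSION_RE = re.compile(r"\(([0-9][A-Za-z0-9.+:~-]*)\)")

# A multi-version upload lists several released versions in its
# Changes field; all of them should be found, not only the first.
changes_field = """\
 foo (2.0-1) unstable; urgency=medium
 .
 foo (1.9-3) unstable; urgency=medium
"""
versions = VERSION_RE.findall(changes_field)
assert versions == ["2.0-1", "1.9-3"]
```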


§


Reproducible Builds

One of the original promises of open source software is that distributed peer review and transparency of process results in enhanced end-user security. However, whilst anyone may inspect the source code of free and open source software for malicious flaws, almost all software today is distributed as pre-compiled binaries. This allows nefarious third-parties to compromise systems by injecting malicious code into ostensibly secure software during the various compilation and distribution processes.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
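That consensus step can be sketched in a few lines; this is illustrative only, since in practice the project compares complete build artifacts (for example with diffoscope) rather than toy byte strings:

```python
import hashlib
from collections import Counter

def majority_digest(artifacts):
    """Given the bytes produced by several independent rebuilders,
    return the most common SHA-256 digest and its vote count."""
    digests = [hashlib.sha256(a).hexdigest() for a in artifacts]
    digest, votes = Counter(digests).most_common(1)[0]
    return digest, votes

# Three independent rebuilders agree; one (perhaps compromised) differs.
builds = [b"deterministic output"] * 3 + [b"tampered output"]
digest, votes = majority_digest(builds)
assert votes == 3
assert digest == hashlib.sha256(b"deterministic output").hexdigest()
```

The point is that a reproducible build makes the honest digest something any third party can recompute, so a tampered binary stands out as the odd one out.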

The project is proud to be a member project of the Software Freedom Conservancy. Conservancy acts as a corporate umbrella allowing projects to operate as non-profit initiatives without managing their own corporate structure. If you like the work of the Conservancy or the Reproducible Builds project, please consider becoming an official supporter.

This month, I:


§


Diffoscope

Elsewhere in our tooling, I made the following changes to diffoscope, including preparing and uploading versions 150, 151, 152, 153 & 154 to Debian:

  • New features:

    • Add support for flash-optimised F2FS filesystems. (#207)
    • Don't require zipnote(1) to determine differences in a .zip file as we can use libarchive. [...]
    • Allow --profile as a synonym for --profile=-. [...]
    • Increase the minimum length of the output of strings(1) to eight characters to avoid unnecessary diff noise. [...]
    • Drop some legacy argument styles: --exclude-directory-metadata and --no-exclude-directory-metadata have been replaced with --exclude-directory-metadata={yes,no}. [...]
  • Bug fixes:

    • Pass the absolute path when extracting members from SquashFS images as we run the command with working directory in a temporary directory. (#189)
    • Correct adding a comment when we cannot extract a filesystem due to missing libguestfs module. [...]
    • Don't crash when listing entries in archives if they don't have a listed size such as hardlinks in ISO images. (#188)
  • Output improvements:

    • Strip off the file offset prefix from xxd(1) and show bytes in groups of 4. [...]
    • Don't emit "javap not found in path" if javap is available on the path but did not result in an actual difference. [...]
    • Fix "... not available in path" messages when looking for Java decompilers that used the Python class name instead of the command. [...]
  • Logging improvements:

    • Add a bit more debugging info when launching libguestfs. [...]
    • Reduce the --debug log noise by truncating the has_some_content messages. [...]
    • Fix the compare_files log message when the file does not have a literal name. [...]
  • Codebase improvements:

    • Rewrite and rename exit_if_paths_do_not_exist to not check files multiple times. [...][...]
    • Add an add_comment helper method; don't mess with our internal list directly. [...]
    • Replace some simple usages of str.format with Python 'f-strings' [...] and make it easier to navigate to the main.py entry point [...].
    • In the RData comparator, always explicitly return None in the failure case as we return a non-None value in the success one. [...]
    • Tidy some imports [...][...][...] and don't alias a variable when we don't end up using it, using _ instead. [...]
    • Clarify the use of a separate NullChanges quasi-file to represent missing data in the Debian package comparator [...] and clarify use of a 'null' diff in order to remember an exit code. [...]
  • Misc:


§


Debian

In Debian, I made the following uploads this month:


§


Debian LTS

This month I have worked 18 hours on Debian Long Term Support (LTS) and 12 for the Extended LTS project. This included:

You can find out more about the project via the following video:

Krebs on Security - Three Charged in July 15 Twitter Compromise

Three individuals have been charged for their alleged roles in the July 15 hack on Twitter, an incident that resulted in Twitter profiles for some of the world’s most recognizable celebrities, executives and public figures sending out tweets advertising a bitcoin scam.

Amazon CEO Jeff Bezos’s Twitter account on the afternoon of July 15.

Nima “Rolex” Fazeli, a 22-year-old from Orlando, Fla., was charged in a criminal complaint in Northern California with aiding and abetting intentional access to a protected computer.

Mason “Chaewon” Sheppard, a 19-year-old from Bognor Regis, U.K., also was charged in California with conspiracy to commit wire fraud, money laundering and unauthorized access to a computer.

A U.S. Justice Department statement on the matter does not name the third defendant charged in the case, saying juvenile proceedings in federal court are sealed to protect the identity of the youth. But an NBC News affiliate in Tampa reported today that authorities had arrested 17-year-old Graham Clark as the alleged mastermind of the hack.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

WFLA.com said Clark was hit with 30 felony charges, including organized fraud, communications fraud, one count of fraudulent use of personal information with over $100,000 or 30 or more victims, 10 counts of fraudulent use of personal information and one count of access to a computer or electronic device without authority. Clark’s arrest report is available here (PDF). A statement from prosecutors in Florida says Clark will be charged as an adult.

On Thursday, Twitter released more details about how the hack went down, saying the intruders “targeted a small number of employees through a phone spear phishing attack,” that “relies on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems.”

By targeting specific Twitter employees, the perpetrators were able to gain access to internal Twitter tools. From there, Twitter said, the attackers targeted 130 Twitter accounts, tweeting from 45 of them, accessing the direct messages of 36 accounts, and downloading the Twitter data of seven.

Among the accounts compromised were Democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, former President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

The hacked Twitter accounts were made to send tweets suggesting they were giving away bitcoin, and that anyone who sent bitcoin to a specified account would be sent back double the amount they gave. All told, the bitcoin accounts associated with the scam received more than 400 transfers totaling more than $100,000.

Sheppard’s alleged alias Chaewon was mentioned twice in stories here since the July 15 incident. On July 16, KrebsOnSecurity wrote that just before the Twitter hack took place, a member of the social media account hacking forum OGUsers named Chaewon advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any Twitter account.

On July 17, The New York Times ran a story that featured interviews with several people involved in the attack, who told The Times they weren’t responsible for the Twitter bitcoin scam and had only brokered the purchase of accounts from the Twitter hacker — who they referred to only as “Kirk.”

One of the people interviewed by The Times used the alias “Ever So Anxious,” and said he was a 19-year-old from the U.K. In my follow-up story on July 22, it emerged that Ever So Anxious was in fact Chaewon.

The person who shared that information was the principal subject of my July 16 post, which followed clues from tweets sent from one of the accounts claimed during the Twitter compromise back to a 21-year-old from the U.K. who uses the nickname PlugWalkJoe.

That individual shared a series of screenshots showing he had been in communications with Chaewon/Ever So Anxious just prior to the Twitter hack, and had asked him to secure several desirable Twitter usernames from the Twitter hacker. He added that Chaewon/Ever So Anxious also was known as “Mason.”

The negotiations over highly-prized Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams. PlugWalkJoe is pictured here chatting with Ever So Anxious/Chaewon/Mason using his Discord username “Beyond Insane.”

On July 22, KrebsOnSecurity interviewed Sheppard/Mason/Chaewon, who confirmed that PlugWalkJoe had indeed asked him to ask Kirk to change the profile picture and display name for a specific Twitter account on July 15. He acknowledged that while he did act as a “middleman” between Kirk and others seeking to claim desirable Twitter usernames, he had nothing to do with the hijacking of the VIP Twitter accounts for the bitcoin scam that same day.

“Encountering Kirk was the worst mistake I’ve ever made due to the fact it has put me in issues I had nothing to do with,” he said. “If I knew Kirk was going to do what he did, or if even from the start if I knew he was a hacker posing as a rep I would not have wanted to be a middleman.”

Another individual who told The Times he worked with Ever So Anxious/Chaewon/Mason in communicating with Kirk said he went by the nickname “lol.” On July 22, KrebsOnSecurity identified lol as a young man who went to high school in Danville, Calif.

Federal investigators did not mention lol by his nickname or his real name, but the charging document against Sheppard says that on July 21 federal agents executed a search warrant at a residence in Northern California to question a juvenile who assisted Kirk and Chaewon in selling access to Twitter accounts. According to that document, the juvenile and Chaewon had discussed turning themselves in to authorities after the Twitter hack became publicly known.

Cryptogram - Twitter Hacker Arrested

A 17-year-old Florida boy was arrested and charged with last week's Twitter hack.

News articles. Boing Boing post. Florida state attorney press release.

This is a developing story. Post any additional news in the comments.

Cryptogram - Friday Squid Blogging: Squid Proteins for a Better Face Mask

Researchers are synthesizing squid proteins to create a face mask that better survives cleaning. (And you thought there was no connection between squid and COVID-19.) The military thinks this might have applications for self-healing robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cryptogram - Data and Goliath Book Placement

Notice the copy of Data and Goliath just behind the head of Maine Senator Angus King.

Screenshot of MSNBC interview with Angus King

This demonstrates the importance of a vibrant color and a large font.

Kevin Rudd - ABC: Closing The Gap, AUSMIN & Public Health

E&OE TRANSCRIPT
TELEVISION INTERVIEW

ABC NEWS CHANNEL
AFTERNOON BRIEFING
31 JULY 2020

Patricia Karvelas
My next guest this afternoon is the former prime minister Kevin Rudd. He’s the man that delivered the historic Apology to the stolen generations and launched the original Close the Gap targets. Of course, yesterday, there was a big revamp of the Close the Gap so we thought it was a good idea to talk to the man originally responsible. Kevin Rudd, welcome.

Kevin Rudd
Good to be with you, Patricia.

Patricia Karvelas
Prime Minister Scott Morrison said there had been a failure to partner with Indigenous people to develop and deliver the 2008 targets. Is that something you regret?

Kevin Rudd
Oh, Prime Minister Morrison is always out to differentiate himself from what previous Labor governments have done. We worked closely with Indigenous leaders at the time through minister Jenny Macklin in framing those Closing the Gap targets. The bottom line is: we delivered the National Apology; we established a Closing the Gap framework, which we thought should be measurable; and on top of that, Patricia, what we also did was negotiate the first-ever commonwealth-state agreement in 2008-9 over the following 10-year period, which had Closing the Gap targets as the basis for the funding commitments by the commonwealth and the states. Those things have been sustained into the future. If the Indigenous leadership of Australia have decided that it’s time to refresh the targets then I support Pat Turner’s leadership and I support what Indigenous leaders have done.

Patricia Karvelas
She’s got a seat at the table though. I remember, you know, I covered it extensively at the time. But she has got a point and they have a point that they now have a seat at the table in a different partnership model than was delivered originally.

Kevin Rudd
Well, as you know, the realities back in 2007 were radically different. Back then there was a huge partisan fight over whether we should have a National Apology. We had people like Peter Dutton and Tony Abbott threatening not to participate in the Apology. So it was a highly partisan environment back then. So these things evolve over time. The Apology remains in place. The national statement each year on the anniversary of the Apology remains in place on progress in achieving Closing the Gap, our successes and our failures. But yes, I welcome any advance that’s been made. But here’s the rub, Patricia: why have there been challenges in delivering on previous Closing the Gap targets? In large part it’s because in the 2014 budget, the first year after the current Coalition government took office, as you know, someone who’s covered the area extensively, they pulled out half a billion dollars worth of funding. Now you’re not going to achieve targets, if simultaneously you gut the funding capacity to act in these areas. That’s what essentially happened over the last five-to-six years.

Patricia Karvelas
That’s absolutely part of the story. But is it all of the story? I mean, if you look at failure to deliver on these targets, it’s been very disappointing for Aboriginal Australians. But I think for Australians who wanted to see the gap closed because it’s the right thing to do; it’s the kind of country they want to live in. There are other reasons aren’t there, that the gap hasn’t been closed? Isn’t one of the reasons that it’s lacked Aboriginal authority and ownership, that it’s been a top-down approach?

Kevin Rudd
Well, I welcome the statement by Pat Turner in bringing Indigenous leadership to the table with these new targets for the future. I’m fully supportive of that. You’re looking at someone who has stood for a lifetime in empowerment of Indigenous organisations. As I said, realities change over time, and I welcome what will happen in the future. But the bottom line is, Patricia, with or without Indigenous leadership from the ground up, nothing will happen in the absence of physical resources as well. And that is a critical part of the equation as I think you’ve just agreed with me. And we can have as many notional targets as we like, but if on day two you, as it were, disembowel the funding arrangements, which is what happened under the current government, guess what: nothing happens. And I note that when these new targets were announced yesterday that Ken Wyatt and the Prime Minister were silent on the question of future funding commitments by the commonwealth. So our Closing the Gap targets, yes, they weren’t all realised. We were on track to achieve two of the six targets that we set. We made some progress on another two. And we were kind of flatlining when it came to the remaining two. But I make no apology for measurement, Patricia, because unless you measure things, guess what? They never happen. And so I’m all for actually an annual report card on success and failure. That’s why I did it in the first place, and without apology.

Patricia Karvelas
I want to move on just to another story that was big this week. What did you make of this week’s AUSMIN talks and the Foreign Minister’s emphasis on Australia taking what is an independent position here, particularly with our relationship with China, was that significant?

Kevin Rudd
Well, whacko! The Australian Foreign Minister says we should have an independent foreign policy! Hold the front page! I mean, for God’s sake.

Patricia Karvelas
Well, it was in the AUSMIN framework. I mean, it wasn’t just a statement to the media, do you think?

Kevin Rudd
Yeah, yeah, but you know, the function of the national government of Australia is to run the foreign policy of Australia, an independent foreign policy. And if the conservatives have recently discovered this principle is a good one, well, I welcome them to the table. That’s been our view for about the last hundred years that the Australian Labor Party has been engaged in the foreign policy debates of this country. But why did she say that? That’s the more important question, I think, Patricia. I think the Australian Government, both Morrison and the Foreign Minister, looked at Secretary of State Pompeo’s speech at the Nixon Library a week or so ago, when he effectively called for a Second Cold War against China and, within that, called for the overthrow of the Chinese Communist Party. Even for the current Australian conservative government, that looked like a bridge too far, and I think they basically took fright at what they were walking into. And my judgment is: it’s very important to separate out our national interests from those of the United States; secondly, understand what a combined allied strategy could and should be on China, as opposed to finding yourself wrapped up either in President Trump’s re-election strategy or Secretary of State Pompeo’s interest in securing the Republican nomination in 2024. These are quite separate political matters as opposed to national strategy.

Patricia Karvelas
Just on COVID, before I let you go, the Queensland Government has declared all of Greater Sydney as a COVID-19 hotspot and the state’s border will be closed to people travelling from that region from 1am on Saturday. Is that the right decision?

Kevin Rudd
Well, absolutely right. I mean, Premier Palaszczuk has faced like every premier, Daniel Andrews and Gladys Berejiklian, very hard public policy decisions. But what Premier Palaszczuk has done — and I’ve been here in Queensland for the last three and a half months now, observing this on a daily basis — is that she has taken the Chief Medical Officer’s advice day-in, day-out and acted accordingly. She’s come under enormous attack within Queensland, led initially by the Murdoch media, followed up by Frecklington, the leader of the LNP, saying ‘open the borders’. In fact, I think Frecklington called for the borders to be opened some 60 or 70 separate times, but to give Palaszczuk her due, she’s just stood her ground and said ‘my job is to give effect to the Chief Medical Officer’s advice, despite all the political clamour to the contrary’. So as she did then and as she does now, I think that’s right in terms of the public health and wellbeing of your average Queenslanders, including me.

Patricia Karvelas
Including you. And now you are very much a long-standing Queenslander being there for that long. Kevin Rudd, thank you so much for joining us this afternoon.

Kevin Rudd
Still from Queensland. Here to help. Bye.

Patricia Karvelas
Always. That’s the former prime minister Kevin Rudd, joining me to talk about yesterday’s Closing the Gap announcement, defending his government’s legacy there but also, of course, talking about the failure to deliver on those targets. He made particularly pointed comments about the withdrawal of Indigenous affairs funding under the Abbott Government, which he says was responsible for the failure to deliver at the rate that was expected; it’s obviously been a disappointing journey, not quite as planned. Now, a whole bunch of new targets.

The post ABC: Closing The Gap, AUSMIN & Public Health appeared first on Kevin Rudd.

Planet Debian - Jonathan Carter: Free Software Activities for 2020-07

Here are my uploads for the month of July, which is just a part of my free software activities, I’ll try to catch up on the rest in upcoming posts. I haven’t indulged in online conferences much over the last few months, but this month I attended the virtual editions of Guadec 2020 and HOPE 2020. HOPE isn’t something I knew about before and I enjoyed it a lot, you can find their videos on archive.org.

Debian Uploads

2020-07-05: Sponsor backport gamemode-1.5.1-5 for Debian buster-backports.

2020-07-06: Sponsor package piper (0.5.1-1) for Debian unstable (mentors.debian.net request).

2020-07-14: Upload package speedtest-cli (2.0.2-1+deb10u1) to Debian buster (Closes: #940165, #965116).

2020-07-15: Upload package calamares (3.2.27-1) to Debian unstable.

2020-07-15: Merge MR#1 for gnome-shell-extension-dash-to-panel.

2020-07-15: Upload package gnome-shell-extension-dash-to-panel (38-1) to Debian unstable.

2020-07-15: Upload package gnome-shell-extension-disconnect-wifi (25-1) to Debian unstable.

2020-07-15: Upload package gnome-shell-extension-draw-on-your-screen (6.1-1) to Debian unstable.

2020-07-15: Upload package xabacus (8.2.8-1) to Debian unstable.

2020-07-15: Upload package s-tui (1.0.2-1) to Debian unstable.

2020-07-15: Upload package calamares-settings-debian (10.0.2-1+deb10u2) to Debian buster (Closes: #934503, #934504).

2020-07-15: Upload package calamares-settings-debian (10.0.2-1+deb10u3) to Debian buster (Closes: #959541, #965117).

2020-07-15: Upload package calamares-settings-debian (11.0.2-1) to Debian unstable.

2020-07-19: Upload package bluefish (2.2.11+svn-r8872-1) to Debian unstable (Closes: #593413, #593427, #692284, #730543, #857330, #892502, #951143).

2020-07-19: Upload package bundlewrap (4.0.0-1) to Debian unstable.

2020-07-20: Upload package bluefish (2.2.11+svn-r8872-2) to Debian unstable (Closes: #965332).

2020-07-22: Upload package calamares (3.2.27-1~bpo10+1) to Debian buster-backports.

2020-07-24: Upload package bluefish (2.2.11+svn-r8872-3) to Debian unstable (Closes: #965944).

Worse Than FailureError'd: Please Reboot Faster, I Can't Wait Any Longer

"Saw this at a German gas station along the highway. The reboot screen at the pedestal just kept animating the hourglass," writes Robin G.

 

"Somewhere, I imagine there's a large number of children asking why their new bean bag is making them feel hot and numb," Will N. wrote.

 

Joel B. writes, "I came across these 'deals' on the Microsoft Canada store. Normally I'd question it, but based on my experiences with Windows, I bet, to them, the math checks out."

 

Kyle H. wrote, "Truly, nothing but the best quality strip_zeroes will be accepted."

 

"My Nan is going to be thrilled at the special discount on these masks!" Paul R. wrote.

 

Paul G. writes, "I know it seemed like the hours were passing more slowly, and thanks to Apple, I now know why."

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

Planet DebianFrançois Marier: Extending GPG key expiry

Extending the expiry on a GPG key is not very hard, but it's easy to forget a step. Here's how I did my last expiry bump.

Update the expiry on the main key and the subkey:

gpg --edit-key KEYID
> expire
> key 1
> expire
> save

Upload the updated key to the keyservers:

gpg --export KEYID | curl -T - https://keys.openpgp.org
gpg --keyserver keyring.debian.org --send-keys KEYID

Planet DebianReproducible Builds (diffoscope): diffoscope 154 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 154. This version includes the following changes:

[ Chris Lamb ]

* Add support for F2FS filesystems.
  (Closes: reproducible-builds/diffoscope#207)
* Allow "--profile" as a synonym for "--profile=-".
* Add an add_comment helper method so we don't mess with our _comments list
  directly.
* Add missing bullet point in a previous changelog entry.
* Use "human-readable" over unhyphenated version.
* Add a bit more debugging around launching guestfs.
* Profile the launch of guestfs filesystems.
* Correct adding a comment when we cannot extract a filesystem due to missing
  guestfs module.

You can find out more by visiting the project homepage.

,

Planet DebianRussell Coker: Links July 2020

iMore has an insightful article about Apple’s transition to the ARM instruction set for new Mac desktops and laptops [1]. I’d still like to see them do something for the server side.

Umair Haque wrote an insightful article about How the American Idiot Made America Unlivable [2]. We are witnessing the destruction of a once great nation.

Chris Lamb wrote an interesting blog post about comedy shows with the laugh tracks edited out [3]. He then compares that to social media with the like count hidden, which is an interesting perspective. I’m not going to watch TV shows edited in that way (I’ve enjoyed BBT in spite of all the bad things about it) and I’m not going to try and hide like counts on social media. But it’s interesting to consider these things.

Cory Doctorow wrote an interesting Locus article suggesting that we could have full employment by a transition to renewable energy and methods for cleaning up the climate problems we are too late to prevent [4]. That seems plausible, but I think we should still get a Universal Basic Income.

The Thinking Shop has posters and decks of cards with logical fallacies and cognitive biases [5]. Every company should put some of these in meeting rooms. They also have free PDFs so you can download and print your own posters.

gayhomophobe.com [6] is a site that lists powerful homophobic people who hurt GLBT people but then turned out to be gay. It’s presented in an amusing manner; people who hurt others deserve to be mocked.

Wired has an insightful article about the shutdown of Backpage [7]. The owners of Backpage weren’t nice people and they did some stupid things which seem bad (like editing posts to remove terms like “lolita”). But they also worked well with police to find criminals. The opposition to what Backpage were doing conflates sex trafficking, child prostitution, and legal consenting adult sex work. Taking down Backpage seems to be a bad thing for the victims of sex trafficking, for consenting adult sex workers, and for society in general.

Cloudflare has an interesting blog post about short-lived certificates for ssh access [8]. Instead of having users’ ssh keys stored on servers, each user has to connect to an SSO server to obtain a temporary key before connecting, so revoking an account is easy.

CryptogramFake Stories in Real News Sites

Fireeye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they've posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.


Kevin RuddAIIA: The NT’s Global Opportunities and Challenges

REMARKS AT THE LAUNCH OF THE
NORTHERN TERRITORY BRANCH OF THE
AUSTRALIAN INSTITUTE OF INTERNATIONAL AFFAIRS

 

 

Image: POIS Tom Gibson/ADF

 

The post AIIA: The NT’s Global Opportunities and Challenges appeared first on Kevin Rudd.

Krebs on SecurityIs Your Chip Card Secure? Much Depends on Where You Bank

Chip-based credit and debit cards are designed to make it infeasible for skimming devices or malware to clone your card when you pay for something by dipping the chip instead of swiping the stripe. But a recent series of malware attacks on U.S.-based merchants suggests thieves are exploiting weaknesses in how certain financial institutions have implemented the technology to sidestep key chip card security features and effectively create usable, counterfeit cards.

A chip-based credit card. Image: Wikipedia.

Traditional payment cards encode cardholder account data in plain text on a magnetic stripe, which can be read and recorded by skimming devices or malicious software surreptitiously installed in payment terminals. That data can then be encoded onto anything else with a magnetic stripe and used to place fraudulent transactions.

Newer, chip-based cards employ a technology known as EMV that encrypts the account data stored in the chip. The technology causes a unique encryption key — referred to as a token or “cryptogram” — to be generated each time the chip card interacts with a chip-capable payment terminal.

Virtually all chip-based cards still have much of the same data that’s stored in the chip encoded on a magnetic stripe on the back of the card. This is largely for reasons of backward compatibility since many merchants — particularly those in the United States — still have not fully implemented chip card readers. This dual functionality also allows cardholders to swipe the stripe if for some reason the card’s chip or a merchant’s EMV-enabled terminal has malfunctioned.

But there are important differences between the cardholder data stored on EMV chips versus magnetic stripes. One of those is a component in the chip known as an integrated circuit card verification value or “iCVV” for short — also known as a “dynamic CVV.”

The iCVV differs from the card verification value (CVV) stored on the physical magnetic stripe, and protects against the copying of magnetic-stripe data from the chip and the use of that data to create counterfeit magnetic stripe cards. Both the iCVV and CVV values are unrelated to the three-digit security code that is visibly printed on the back of a card, which is used mainly for e-commerce transactions or for card verification over the phone.

The appeal of the EMV approach is that even if a skimmer or malware manages to intercept the transaction information when a chip card is dipped, the data is only valid for that one transaction and should not allow thieves to conduct fraudulent payments with it going forward.

However, for EMV’s security protections to work, the back-end systems deployed by card-issuing financial institutions are supposed to check that when a chip card is dipped into a chip reader, only the iCVV is presented; and conversely, that only the CVV is presented when the card is swiped. If somehow these do not align for a given transaction type, the financial institution is supposed to decline the transaction.
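
That dip-versus-swipe consistency check can be sketched in a few lines. This is only an illustration of the rule described above; the function and field names are hypothetical and not taken from any real issuer's back end:

```python
# Illustrative sketch of the issuer-side rule: a chip (dipped) transaction
# must present the iCVV, and a magstripe (swiped) transaction the CVV.
# Any mismatch between entry mode and verification value gets declined.

def should_decline(entry_mode: str, presented: str,
                   icvv_on_file: str, cvv_on_file: str) -> bool:
    """Return True if the transaction should be declined."""
    if entry_mode == "chip":
        return presented != icvv_on_file
    if entry_mode == "swipe":
        return presented != cvv_on_file
    return True  # unrecognized entry mode: decline by default

# A counterfeit magstripe card built from intercepted EMV data carries the
# iCVV, so swiping it presents the wrong value and should be declined:
assert should_decline("swipe", presented="111", icvv_on_file="111", cvv_on_file="222")
# A genuine dip presents the iCVV and passes:
assert not should_decline("chip", presented="111", icvv_on_file="111", cvv_on_file="222")
```

An issuer that skips this cross-check, treating any correct-looking verification value as valid regardless of entry mode, is exactly the kind of institution for which cloned magstripe copies of chip cards keep working.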

The trouble is that not all financial institutions have properly set up their systems this way. Unsurprisingly, thieves have known about this weakness for years. In 2017, I wrote about the increasing prevalence of “shimmers,” high-tech card skimming devices made to intercept data from chip card transactions.

A close-up of a shimmer found on a Canadian ATM. Source: RCMP.

More recently, researchers at Cyber R&D Labs published a paper detailing how they tested 11 chip card implementations from 10 different banks in Europe and the U.S. The researchers found they could harvest data from four of them and create cloned magnetic stripe cards that were successfully used to place transactions.

There are now strong indications the same method detailed by Cyber R&D Labs is being used by point-of-sale (POS) malware to capture EMV transaction data that can then be resold and used to fabricate magnetic stripe copies of chip-based cards.

Earlier this month, the world’s largest payment card network Visa released a security alert regarding a recent merchant compromise in which known POS malware families were apparently modified to target EMV chip-enabled POS terminals.

“The implementation of secure acceptance technology, such as EMV® Chip, significantly reduced the usability of the payment account data by threat actors as the available data only included personal account number (PAN), integrated circuit card verification value (iCVV) and expiration date,” Visa wrote. “Thus, provided iCVV is validated properly, the risk of counterfeit fraud was minimal. Additionally, many of the merchant locations employed point-to-point encryption (P2PE) which encrypted the PAN data and further reduced the risk to the payment accounts processed as EMV® Chip.”

Visa did not name the merchant in question, but something similar seems to have happened at Key Food Stores Co-Operative Inc., a supermarket chain in the northeastern United States. Key Food initially disclosed a card breach in March 2020, but two weeks ago updated its advisory to clarify that EMV transaction data also was intercepted.

“The POS devices at the store locations involved were EMV enabled,” Key Food explained. “For EMV transactions at these locations, we believe only the card number and expiration date would have been found by the malware (but not the cardholder name or internal verification code).”

While Key Food’s statement may be technically accurate, it glosses over the reality that the stolen EMV data could still be used by fraudsters to create magnetic stripe versions of EMV cards presented at the compromised store registers in cases where the card-issuing bank hadn’t implemented EMV correctly.

Earlier today, fraud intelligence firm Gemini Advisory released a blog post with more information on recent merchant compromises — including Key Food — in which EMV transaction data was stolen and ended up for sale in underground shops that cater to card thieves.

“The payment cards stolen during this breach were offered for sale in the dark web,” Gemini explained. “Shortly after discovering this breach, several financial institutions confirmed that the cards compromised in this breach were all processed as EMV and did not rely on the magstripe as a fallback.”

Gemini says it has verified that another recent breach — at a liquor store in Georgia — also resulted in compromised EMV transaction data showing up for sale at dark web stores that sell stolen card data. As both Gemini and Visa have noted, in both cases proper iCVV verification from banks should render this intercepted EMV data useless to crooks.

Gemini determined that due to the sheer number of stores affected, it’s extremely unlikely the thieves involved in these breaches intercepted the EMV data using physically installed EMV card shimmers.

“Given the extreme impracticality of this tactic, they likely used a different technique to remotely breach POS systems to collect enough EMV data to perform EMV-Bypass Cloning,” the company wrote.

Stas Alforov, Gemini’s director of research and development, said financial institutions that aren’t performing these checks risk losing the ability to notice when those cards are used for fraud.

That’s because many banks that have issued chip-based cards may assume that as long as those cards are used for chip transactions, there is virtually no risk that the cards will be cloned and sold in the underground. Hence, when these institutions are looking for patterns in fraudulent transactions to determine which merchants might be compromised by POS malware, they may completely discount any chip-based payments and focus only on those merchants at which a customer has swiped their card.

“The card networks are catching on to the fact that there’s a lot more EMV-based breaches happening right now,” Alforov said. “The larger card issuers like Chase or Bank of America are indeed checking [for a mismatch between the iCVV and CVV], and will kick back transactions that don’t match. But that is clearly not the case with some smaller institutions.”

For better or worse, we don’t know which financial institutions have failed to properly implement the EMV standard. That’s why it always pays to keep a close eye on your monthly statements, and report any unauthorized transactions immediately. If your institution lets you receive transaction alerts via text message, this can be a near real-time way to keep an eye out for such activity.

CryptogramImages in Eye Reflections

In Japan, a cyberstalker located his victim by enhancing the reflections in her eye, and using that information to establish a location.

Reminds me of the image enhancement scene in Blade Runner. That was science fiction, but now image resolution is so good that we have to worry about it.

LongNowThe Digital Librarian as Essential Worker

Michelle Swanson, an Oregon-based educator and educational consultant, has written a blog post on the Internet Archive on the increased importance of digital librarians during the pandemic:

With public library buildings closed due to the global pandemic, teachers, students, and lovers of books everywhere have increasingly turned to online resources for access to information. But as anyone who has ever turned up 2.3 million (mostly unrelated) results from a Google search knows, skillfully navigating the Internet is not as easy as it seems. This is especially true when conducting serious research that requires finding and reviewing older books, journals and other sources that may be out of print or otherwise inaccessible.

Enter the digital librarian.

Michelle Swanson, “Digital Librarians – Now More Essential Than Ever” from the Internet Archive.

Kevin Kelly writes (in New Rules for the New Economy and in The Inevitable) about how an information economy flips the relative valuation of questions and answers — how search makes useless answers nearly free and useful questions even more precious than before, and knowing how to reliably produce useful questions even more precious still.

But much of our knowledge and outboard memory is still resistant to or incompatible with web search algorithms — databases spread across both analog and digital, with unindexed objects or idiosyncratic cataloging systems. Just as having map directions on your phone does not outdo a local guide, it helps to have people intimate with a library who can navigate the weird specifics. And just as scientific illustrators still exist to mostly leave out the irrelevant and make a paper clear as day (which cameras cannot do, as of 02020), a librarian is a sharp instrument that cuts straight through the extraneous info to what’s important.

Knowing what to enter in a search is one thing; knowing when it won’t come up in search and where to look amidst an analog collection is another skill entirely. Both are necessary at a time when libraries cannot receive (as many) scholars in the flesh, and what Penn State Prof Rich Doyle calls the “infoquake” online — the too-much-all-at-once-ness of it all — demands an ever-sharper reason just to stay afloat.

Learn More

  • Watch Internet Archive founder Brewster Kahle’s 02011 Long Now talk, “Universal Access to All Knowledge.”

Worse Than FailureCodeSOD: A Variation on Nulls

Submitter “NotAThingThatHappens” stumbled across a “unique” way to check for nulls in C#.

Now, there are already a few perfectly good ways to check for nulls: variable is null, for example, or using nullable types specifically. But “NotAThingThatHappens” found this approach:

if(((object)someObjThatMayBeNull) is var _)
{
    //object is null, react somehow
} 
else
{ 
  UseTheObjectInAMethod(someObjThatMayBeNull);
}

What I hate most about this is how cleverly it exploits the C# syntax to work.

Normally, the _ is a discard. It’s meant to be used for things like tuple unpacking, or in cases where you have an out parameter but don’t actually care about the output- foo(out _) just discards the output data.

But _ is also a perfectly valid identifier. So var _ creates a variable _, and the type of that variable is inferred from context- in this case, from the type of the expression it’s matched against, someObjThatMayBeNull. This variable is scoped to the if block, so we don’t have to worry about it leaking into our namespace, but since it’s never initialized, it’s going to choose the appropriate default value for its type- and for reference types, that default is null. By casting explicitly to object, we guarantee that our type is a reference type, so this makes sure that we don’t get weird behavior on value types, like integers.

So really, this is just an awkward way of saying someObjThatMayBeNull is null.

NotAThingThatHappens adds:

The code never made it to production… but I was surprised that the compiler allowed this.
It’s stupid, but it WORKS!

It’s definitely stupid, it definitely works, I’m definitely glad it’s not in your codebase.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Sam VargheseHistory lessons at a late stage of life

In 1987, I got a job in Dubai, to work for a newspaper named Khaleej (Gulf) Times. I was chosen because the interviewer was a jolly Briton who came down to Bombay to do the interview on 12 June.

Malcolm Payne, the first editor of the newspaper that had been started in 1978 by Iranian brothers named Galadari, told me that he had always wanted to come and pick some people to work at the paper. By then he had been pushed out of the editorship by the politics of both Pakistani and Indian journalists who worked there.

For some strange reason, he took a liking to me. At the end of about 45 minutes of what was a much more robust conversation than I had ever experienced in earlier job interviews, which were normally tense affairs, Payne told me, “You’re a good bugger, Samuel. I’ll see you in Dubai.”

I took it with a pinch of salt. Anyway, I reckoned that I would know in a matter of months whether he was pulling my leg or not. I was more focused on my upcoming wedding, which was to be scheduled shortly.

But, Payne turned out to be a man of his word. In September, I got a telegram from Dubai asking me to send copies of my passport in order that a visa could be obtained for me to work in Dubai. I had mixed emotions: on the one hand, I was happy that a chance to get out of the grinding poverty I lived in had presented itself. At the same time, I was worried about leaving my sickly mother in India; by then, she had been a widow for a few months and I was her only son.

When my mother-in-law-to-be heard about the job opportunity, she insisted that the wedding should be held before I left for Dubai. Probably she thought that once I went to the Persian Gulf, I would begin to look for another woman.

The wedding was duly fixed for 19 October and I was to leave for Dubai on 3 November.

After I landed in Dubai, I learnt about the tension that exists between most Indians and Pakistanis as a result of the partition of the subcontinent in 1947. Pakistanis are bitter because they feel that they were forced to leave for a country that turned out to be a basket case, subsisting only because of aid from the US, and Indians feel that the Pakistanis had been the ones to force Britain, then the colonial ruler, to split the country.

Never did this enmity come to the fore more than when India and Pakistan sent their cricket teams to the UAE — Dubai is part of this country — to play in a tournament organised there by some businessman from Sharjah.

Of course, the whole raison d’etre for the tournament was the Indo-Pakistan enmity; pitting teams that had a history of this sort against each other was like staging a proxy war. What’s more, there were both expatriate Indians and Pakistanis in large numbers waiting eagerly to buy tickets and pour into what was literally a coliseum.

The other teams who were invited — sometimes there was a three-way contest, at others a four-way fight — were just there to make up the numbers.

And the organisers always prayed for an India-Pakistan final.

A year before I arrived in Dubai, a Pakistani batsman known as Javed Miandad had taken his team to victory by hitting a six off the last ball; the contests were limited to 50 overs a side. He was showered with gifts by rich Pakistanis and one even gifted him some land. Such was the euphoria a victory in the former desert generated.

Having been born and raised in Sri Lanka, I knew nothing of the history of India. My parents did not clue me in either. I learnt all about the grisly history of the subcontinent after I landed in Dubai.

That enmity resulted in several other incidents worth telling, which I shall relate soon.

,

Planet DebianNorbert Preining: KDE/Plasma Status Update 2020-07-30

Only a short update on the current status of my KDE/Plasma packages for Debian sid and testing:

  • Frameworks 5.72
  • Plasma 5.19.4
  • Apps 20.04.3
  • Digikam 7.0.0
  • Ark CVE-2020-16116 fixed in version 20.04.3-1~np2

Hope that helps a few people. See this post for how to set up archives.

Enjoy.

Planet DebianDima Kogan: An awk corner case?

So even after years and years of experience, core tools still find ways to surprise me. Today I tried to do some timestamp comparisons with mawk (vnl-filter, to be more precise), and ran into a detail of the language that made it not work. Not a bug, I guess, since both mawk and gawk are affected. I'll claim "language design flaw", however.

Let's say I'm processing data with unix timestamps in it (seconds since the epoch). gawk and recent versions of mawk have strftime() for that:

$ date
Wed Jul 29 15:31:13 PDT 2020

$ date +"%s"
1596061880

$ date +"%s" | mawk '{print strftime("%H",$1)}'
15

And let's say I want to do something conditional on them. I want only data after 9:00 each day:

$ date +"%s" | mawk 'strftime("%H",$1) >= 9 {print "Yep. After 9:00"}'

That's right. No output. But it is 15:31 now, and I confirmed above that strftime() reports the right time, so it should know that it's after 9:00, but it doesn't. What gives?

As we know, awk (and perl after it) treat numbers and strings containing numbers similarly: 5+5 and "5"+5 both work the same, which is really convenient. This can only work if it can be inferred from context whether we want a number or a string; it knows that addition takes two numbers, so it knows to convert "5" into a number in the example above.

But what if an operator is ambiguous? Then it picks a meaning based on some internal logic that I don't want to be familiar with. And apparently awk implements string comparisons with the same < and > operators as numerical comparisons, creating the ambiguity I hit today. strftime() returns strings, and you get silent, incorrect behavior that then demands debugging. How to fix? By telling awk to treat the output of strftime() as a number:

$ date +"%s" | mawk '0+strftime("%H",$1) >= 9 {print "Yep. After 9:00"}'

Yep. After 9:00

With the benefit of hindsight, they really should not have reused any operators for both number and string operations. Then these ambiguities wouldn't occur, and people wouldn't be grumbling into their blogs decades after these decisions were made.
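
For what it's worth, the lexicographic trap itself is easy to demonstrate in a language that keeps strings and numbers separate. This Python snippet is just an illustration of the comparison semantics discussed above, not part of the original awk session:

```python
# awk silently compared strftime()'s string result against 9 as strings.
# A string/string comparison is lexicographic, which is exactly why
# "15" sorts before "9".

hour = "15"                 # what strftime("%H", ...) effectively hands back

print(hour >= "9")          # False: "1" sorts before "9" lexicographically
print(int(hour) >= 9)       # True: the numeric comparison we actually wanted

# Unlike awk, Python refuses to guess when the types are mixed:
try:
    hour >= 9
except TypeError:
    print("mixed str/int comparison raises TypeError")
```

The 0+strftime(...) trick above is awk's idiomatic equivalent of the explicit int() conversion here.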

Krebs on SecurityHere’s Why Credit Card Fraud is Still a Thing

Most of the civilized world years ago shifted to requiring computer chips in payment cards that make it far more expensive and difficult for thieves to clone and use them for fraud. One notable exception is the United States, which is still lurching toward this goal. Here’s a look at the havoc that lag has wrought, as seen through the purchasing patterns at one of the underground’s biggest stolen card shops that was hacked last year.

In October 2019, someone hacked BriansClub, a popular stolen card bazaar that uses this author’s likeness and name in its marketing. Whoever compromised the shop siphoned data on millions of card accounts that were acquired over four years through various illicit means from legitimate, hacked businesses around the globe — but mostly from U.S. merchants. That database was leaked to KrebsOnSecurity, which in turn shared it with multiple sources that help fight payment card fraud.

An ad for BriansClub has been using my name and likeness for years to peddle millions of stolen credit cards.

Among the recipients was Damon McCoy, an associate professor at New York University’s Tandon School of Engineering [full disclosure: NYU has been a longtime advertiser on this blog]. McCoy’s work in probing the credit card systems used by some of the world’s biggest purveyors of junk email greatly enriched the data that informed my 2014 book Spam Nation, and I wanted to make sure he and his colleagues had a crack at the BriansClub data as well.

McCoy and fellow NYU researchers found BriansClub earned close to $104 million in gross revenue from 2015 to early 2019, and listed over 19 million unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments.

“What surprised me most was there are still a lot of people swiping their cards for transactions here,” McCoy said.

In 2015, the major credit card associations instituted new rules that made it riskier and potentially more expensive for U.S. merchants to continue allowing customers to swipe the stripe instead of dip the chip. Complicating this transition was the fact that many card-issuing U.S. banks took years to replace their customer card stocks with chip-enabled cards, and countless retailers dragged their feet in updating their payment terminals to accept chip-based cards.

Indeed, three years later the U.S. Federal Reserve estimated (PDF) that 43.3 percent of in-person card payments were still being processed by reading the magnetic stripe instead of the chip. This might not have been such a big deal if payment terminals at many of those merchants weren’t also compromised with malicious software that copied the data when customers swiped their cards.

Following the 2015 liability shift, more than 84 percent of the non-chip cards advertised by BriansClub were sold, versus just 35 percent of chip-based cards during the same time period.

“All cards without a chip were in much higher demand,” McCoy said.

Perhaps surprisingly, McCoy and his fellow NYU researchers found BriansClub customers purchased only 40% of its overall inventory. But what they did buy supports the notion that crooks generally gravitate toward cards issued by financial institutions that are perceived as having fewer or more lax protections against fraud.

Source: NYU.

While the top 10 largest card issuers in the United States accounted for nearly half of the accounts put up for sale at BriansClub, only 32 percent of those accounts were sold — and at roughly half the median price of those issued by small- and medium-sized institutions.

In contrast, more than half of the stolen cards issued by small and medium-sized institutions were purchased from the fraud shop. This was true even though by the end of 2018, 91 percent of cards for sale from medium-sized institutions were chip-based, and 89 percent from smaller banks and credit unions. Nearly all cards issued by the top ten largest U.S. card issuers (98 percent) were chip-enabled by that time.

REGION LOCK

The researchers found BriansClub customers strongly preferred cards issued by financial institutions in specific regions of the United States, specifically Colorado, Nevada, and South Carolina.

“For whatever reason, those regions were perceived as having lower anti-fraud systems or those that were not as effective,” McCoy said.

Cards compromised from merchants in South Carolina were in especially high demand, with fraudsters willing to spend twice as much per capita on those cards as on cards from any other state — roughly $1 per resident.

That sales trend also was reflected in the support tickets filed by BriansClub customers, who frequently were informed that cards tied to the southeastern United States were less likely to be restricted for use outside of the region.

Image: NYU.

McCoy said the lack of region locking also made stolen cards issued by banks in China something of a hot commodity, even though these cards demanded much higher prices (often more than $100 per account): The NYU researchers found virtually all available Chinese cards were sold soon after they were put up for sale. Ditto for the relatively few corporate and business cards for sale.

A lack of region locks may also have caused card thieves to gravitate toward buying up as many cards as they could from USAA, a savings bank that caters to active and former military service members and their immediate families. More than 83 percent of the available USAA cards were sold between 2015 and 2019, the researchers found.

Although Visa cards made up more than half of accounts put up for sale (12.1 million), just 36 percent were sold. MasterCards were the second most-plentiful (3.72 million), and yet more than 54 percent of them sold.

American Express and Discover, which unlike Visa and MasterCard are so-called “closed loop” networks that do not rely on third-party financial institutions to issue cards and manage fraud on them, saw 28.8 percent and 33 percent of their stolen cards purchased, respectively.

PREPAIDS

Some people concerned about the scourge of debit and credit card fraud opt to purchase prepaid cards, which generally enjoy the same cardholder protections against fraudulent transactions. But the NYU team found compromised prepaid accounts were purchased at a far higher rate than regular debit and credit cards.

Several factors may be at play here. For starters, relatively few prepaid cards for sale were chip-based. McCoy said there was some data to suggest many of these prepaids were issued to people collecting government benefits such as unemployment and food assistance. Specifically, the “service code” information associated with these prepaid cards indicated that many were restricted for use at places like liquor stores and casinos.

“This was a pretty sad finding, because if you don’t have a bank this is probably how you get your wages,” McCoy said. “These cards were disproportionately targeted. The unfortunate and striking thing was the sheer demand and lack of [chip] support for prepaid cards. Also, these cards were likely more attractive to fraudsters because [the issuer’s] anti-fraud countermeasures weren’t up to par, possibly because they know less about their customers and their typical purchase history.”

PROFITS

The NYU researchers estimate BriansClub pulled in approximately $24 million in profit over four years. They calculated this number by taking the more than $100 million in total sales and subtracting commissions paid to card thieves who supplied the shop with fresh goods, as well as the price of cards that were refunded to buyers. BriansClub, like many other stolen card shops, offers refunds on certain purchases if the buyer can demonstrate the cards were no longer active at the time of purchase.

On average, BriansClub paid suppliers commissions ranging from 50-60 percent of the total value of the cards sold. Card-not-present (CNP) accounts — or those stolen from online retailers and purchased by fraudsters principally for use in defrauding other online merchants — fetched a much steeper supplier commission of 80 percent, but mainly because these cards were in such high demand and low supply.

The NYU team found card-not-present sales accounted for just 7 percent of all revenue, even though card thieves clearly now have much higher incentives to target online merchants.

A story here last year observed that this exact supply and demand tug-of-war had helped to significantly increase prices for card-not-present accounts across multiple stolen credit card shops in the underground. Not long ago, the price of CNP accounts was less than half that of card-present accounts. These days, those prices are roughly equivalent.

One likely reason for that shift is that the United States is the last of the G20 nations to fully transition to more secure chip-based payment cards. Every other country that made the chip card transition long ago saw the same dynamic: as it became harder for thieves to counterfeit physical cards, the fraud didn't go away but instead shifted to online merchants.

The same progression is happening now in the United States, only the demand for stolen CNP data still far outstrips supply. Which might explain why we’ve seen such a huge uptick over the past few years in e-commerce sites getting hacked.

“Everyone points to this displacement effect from card-present to card-not-present fraud,” McCoy said. “But if the supply isn’t there, there’s only so much room for that displacement to occur.”

No doubt the epidemic of card fraud has benefited mightily from hacked retail chains — particularly restaurants — that still allow customers to swipe chip-based cards. But as we’ll see in a post to be published tomorrow, new research suggests thieves are starting to deploy ingenious methods for converting card data from certain compromised chip-based transactions into physical counterfeit cards.

A copy of the NYU research paper is available here (PDF).

LongNowThe Unexpected Influence of Cosmic Rays on DNA

Samuel Velasco/Quanta Magazine

Living in a world with multiple spatiotemporal scales, the very small and fast can often drive the future of the very large and slow: Microscopic genetic mutations change macroscopic anatomy. Undetectably small variations in local climate change global weather patterns (the infamous “butterfly effect”).

And now, one more example comes from a new theory about why DNA on modern Earth only twists in one of two possible directions:

Our spirals might all trace back to an unexpected influence from cosmic rays. Cosmic ray showers, like DNA strands, have handedness. Physical events typically break right as often as they break left, but some of the particles in cosmic ray showers tap into one of nature’s rare exceptions. When the high energy protons in cosmic rays slam into the atmosphere, they produce particles called pions, and the rapid decay of pions is governed by the weak force — the only fundamental force with a known mirror asymmetry.

Millions if not billions of cosmic ray strikes could be required to yield one additional free electron in a [right-handed] strand, depending on the event’s energy. But if those electrons changed letters in the organisms’ genetic codes, those tweaks may have added up. Over perhaps a million years…cosmic rays might have accelerated the evolution of our earliest ancestors, letting them out-compete their [left-handed] rivals.

In other words, properties of the subatomic world seem to have conferred a benefit to the potential for innovation among right-handed nucleic acids, and a “talent” for generating useful copying errors led to the entrenched monopoly we observe today.

But that isn’t the whole story. Read more at Quanta.

Planet DebianEnrico Zini: Building and packaging a sysroot

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

After having had some success with a sysroot in having a Qt5 cross-build environment that includes QtWebEngine, the next step is packaging the sysroot so it can be available both to build the cross-build environment, and to do cross-development with it.

The result is this Debian source package which takes a Raspberry Pi OS disk image, provisions it in-place, extracts its contents, and packages them.

Yes. You may want to reread the last paragraph.

It works directly in the disk image to avoid a nasty filesystem issue on emulated 32bit Linux over a 64bit mounted filesystem.

This feels like the most surreal Debian package I've ever created, and this saga looks like one of the hairiest yaks I've ever shaved.

Integrating this monster codebase, full of bundled code and hacks, into a streamlined production and deployment system has been for me a full stack nightmare, and I have a renewed and growing respect for the people in the Qt/KDE team in Debian, who manage to stay on top of this mess, so that it all just works when we need it.

Worse Than FailureCodeSOD: True if Documented

“Comments are important,” is one of those good rules that often gets misapplied. No one wants to see a method called addOneToSet and a comment that tells us Adds one item to the set.

Still, a lot of our IDEs and other tooling encourage these kinds of comments. You drop a /// or /* before a method or member, and you get an auto-stubbed-out template that yields a passable, if useless, comment.

Scott Curtis thinks that is where this particular comment originated, but over time it decayed into incoherent nonsense:

///<summary> True to use quote value </summary>
///
///<value> True if false, false if not </value>
private readonly bool _mUseQuoteValue;

True if false, false if not. Or, worded a little differently, documentation makes code less clear, clearer if not.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianDirk Eddelbuettel: Installing and Running Ubuntu on a 2015-ish MacBook Air

So a few months ago kiddo one dropped an apparently fairly large cup of coffee onto her one and only trusted computer. With a few months (then) to graduation (which by now happened), and with the apparent “genius bar” verdict of “it’s a goner”, a new one was ordered. As it turns out this supposedly dead one coped well enough with the coffee so that after a few weeks of drying it booted again. But given the newer one, its apparent age and whatnot, it was deemed surplus. So I poked around a little on the interwebs and concluded that yes, this could work.

Fast forward a few months: I finally got hold of it and had some time to play with it. First, a bootable usbstick was prepared, and as the machine’s content was really (really, and check again: really) no longer needed, I got to keep it for good.

tl;dr It works just fine. It is a little heavier than I thought (and isn’t “air” supposed to be weightless?) The ergonomics seem quite nice. The keyboard is decent. Screen resolution on this pre-retina simple Air is so-so at 1440 pixels. But battery life seems ok and e.g. the camera is way better than what I have in my trusted Lenovo X1 or at my desktop. So just as a zoom client it may make a lot of sense; otherwise just walking around with it as a quick portable machine seems perfect (especially as my Lenovo X1 still (ahem) suffers from one broken key I really need to fix…).

Below are some lightly edited notes from the installation. Initial steps were quick: maybe an hour or less? Customizing a machine takes longer than I remembered; this took a few minutes here and there, quite a few times, but always incrementally.

Initial Steps

  • Download of Ubuntu 20.04 LTS image: took a few moments, even on broadband; feels slower than the normal (fast!) Ubuntu package updates, maybe a lesser CDN or bad luck

  • Startup Disk Creator using a so-far unused 8gb usb drive

  • Plug into USB, recycle power, press “Option” on macOS keyboard: voila

  • After a quick hunch… no to ‘live/test only’ and yes to install, whole disk

  • install easy, very few questions, somehow skips wifi

  • so activate wifi manually — and everything pretty much works

Customization

  • First deal with the ‘fn’ and ‘ctrl’ key swap. Installed git and followed this github repo, which worked just fine. Yay. First (manual) Linux kernel module build needed in … half a decade? Longer?

  • Fire up firefox, go to ‘download chrome’, install chrome. Sign in. Turn on syncing. Sign into Pushbullet and Momentum.

  • syncthing which is excellent. Initially via apt, later from their PPA. Spend some time remembering how to set up the mutual handshakes between devices. Now syncing desktop/server, lenovo x1 laptop, android phone and this new laptop

  • keepassx via apt and set up using Sync/ folder. Now all (encrypted) passwords synced.

  • Discovered synergy is no longer really free, so after a quick search found and installed barrier (via apt) to have one keyboard/mouse from desktop reach laptop.

  • Added emacs via apt, so far ‘empty’: no config files yet

  • Added ssh via apt, need to propagate keys to github and gitlab

  • Added R via add-apt-repository --yes "ppa:marutter/rrutter4.0" and add-apt-repository --yes "ppa:c2d4u.team/c2d4u4.0+". Added littler and then RStudio

  • Added wajig (apt frontend) and byobu, both via apt

  • Created ssh key, shipped it to server and github + gitlab

  • Cloned (not-public) ‘dotfiles’ repo and linked some dotfiles in

  • Cloned git repo for nord-theme for gnome terminal and installed it; also added it to RStudio via this repo

  • Emacs installed, activated dotfiles, then incrementally installed a few elpa-* packages and a few via M-x package-install, including nord-theme, of course

  • Installed JetBrains Mono font from my own local package; activated for Gnome Terminal and Emacs

  • Install gnome-tweak-tool via apt, adjusted a few settings

  • Ran gsettings set org.gnome.desktop.wm.preferences focus-mode 'sloppy'

  • Set up camera following this useful GH repo

  • At some point also added slack and zoom, because, well, it is 2020

  • STILL TODO:

    • docker
    • bother with email setup?,
    • maybe atom/code/…?


Planet DebianChris Lamb: Pop culture matters

Many people labour under the assumption that pop culture is trivial and useless while only 'high' art can grant us genuine and eternal knowledge about the world. Given that we have a finite time on this planet, we are all permitted to enjoy pop culture up to a certain point, but we should always minimise our interaction with it, and consume more moral and intellectual instruction wherever possible.

Or so the theory goes. What these people do not realise is that pop and mass culture can often provide more information about the world, humanity in general and — what is even more important — ourselves.

This is not quite the debate around whether high art is artistically better, simply that pop culture can be equally informative. Jeremy Bentham argued in the 1820s that "prejudice apart, the game of push-pin is of equal value with the arts and sciences of music and poetry", that it didn't matter where our pleasures come from. (John Stuart Mill, Bentham's intellectual rival, disagreed.) This fundamental question of philosophical utilitarianism will not be resolved here.

However, what might begin to be resolved is our instinctive push-back against pop culture. We all share an automatic impulse to disregard things we do not like and to pretend they do not exist, but this wishful thinking does not mean that these cultural products do not continue to exist when we aren't thinking about them and, more to our point, continue to influence others and even ourselves.

Take, for example, the recent trend for 'millennial pink'. With its empty consumerism, faux nostalgia, reductive stereotyping of age cohorts, objectively ugly æsthetics and tedious misogyny (photographed with Rose Gold iPhones), the very combination appears to have been deliberately designed to annoy me, curiously providing circumstantial evidence in favour of intelligent design. But if I were to immediately dismiss millennial pink and any of the other countless cultural trends I dislike simply because I find them disagreeable, I would be willingly keeping myself blind to their underlying ideology, their significance and their effect on society at large. If I had any ethical or political reservations I might choose not to engage with them economically or to avoid advertising them to others, but that is a different question altogether.

Even if we can't notice this pattern within ourselves we can first observe it in others. We can all recall moments where someone has brushed off a casual reference to pop culture, be it Tiger King, TikTok, team sports or Taylor Swift; if you can't, simply look for the abrupt change of tone and the slightly-too-quick dismissal. I am not suggesting you attempt to dissuade others or even to point out this mental tic, but merely seeing it in action can be highly illustrative in its own way.

In summary, we can simultaneously say that pop culture is not worthy of our time relative to other pursuits while still consuming however much of it we want; but deliberately dismissing pop culture does not mean that a lot of other people are not interacting with it, nor that it is undeserving of any inquiry. And if that doesn't convince you: just like the once-unavoidable millennial pink, simply sticking our collective heads in the sand will not make wider societal-level ugliness disappear anytime soon.

Anyway, that's a very long way of justifying why I plan to re-watch TNG.

Planet DebianDirk Eddelbuettel: ttdo 0.0.6: Bugfix

A bugfix release of our (still small) ttdo package arrived on CRAN overnight. As introduced last fall, the ttdo package extends the most excellent (and very minimal / zero depends) unit testing package tinytest by Mark van der Loo with the very clever and well-done diffobj package by Brodie Gaslam to give us test results with visual diffs:

ttdo screenshot

This release corrects a minor editing error spotted by the ever-vigilant John Blischak.

The NEWS entry follows.

Changes in ttdo version 0.0.6 (2020-07-27)

  • Correct a minor editing mistake spotted by John Blischak.

CRANberries provides the usual summary of changes to the previous version. Please use the GitHub repo and its issues for any questions.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJonathan Carter: Free Software Activities for 2020-06

Hmm, this is the latest I’ve posted my monthly updates yet (late by nearly a month!). June was crazy on the incoming side, and at the same time I just wasn’t that productive. In theory, lockdown means that I spend less time in traffic, in shops or with friends and have more time to do stuff; in practice I go to bed later and later and waste more time watching tv shows and playing mobile games. A cycle that I have at least broken free from since June.

Debian Package Uploads

2020-06-04: Upload package btfs (2.21-1) to Debian unstable.

2020-06-04: Upload package gnome-shell-extension-disconnect-wifi (24-1) to Debian unstable.

2020-06-18: Sponsor package gamemode (1.5.1-5) for Debian unstable (Games team request).

2020-06-21: Upload package calamares (3.2.26-1) to Debian unstable.

2020-06-21: Upload package s-tui (1.0.1-1) to Debian unstable.

2020-06-29: Sponsor package libinih (48-1~bpo10+1) for Debian buster-backports.

2020-06-30: Upload package calamares (3.2.26-1~bpo10+1) to Debian buster-backports.

2020-06-30: Upload package toot (0.27.0-1) to Debian unstable.

2020-06-30: Upload package calamares (3.2.26.1-1) to Debian unstable.

Planet DebianSteve Kemp: I'm a bit of a git (hacker?)

Sometimes I enjoy reading the source code to projects I like, use, or am about to install for the first time. This was something I used to do on a very regular basis, looking for security issues to report. Nowadays I don't have so much free time, but I still like to inspect the source code to new applications I install, and every now and again I'll find the time to look at the source to random projects.

Reading code is good. Reading code is educational.

One application I've looked at multiple times is redis, which is a great example of clean and well-written code. That said, when reading the redis codebase I couldn't help noticing that there were a reasonably large number of typos/spelling mistakes in the comments, so I submitted a pull-request:

Sadly that particular pull-request didn't receive too much attention, although a previous one updating the configuration file was accepted. I was recently reminded of these pull-requests when I was doing some other work. So I figured I'd have a quick scan of a couple of other utilities.

In the past I'd just note spelling mistakes when I came across them; usually I'd be opening each file in a project one by one and reading them from top to bottom. (Sometimes I'd just open files in emacs and run "M-x ispell-comments-and-strings", but more often I'd just notice them with my eyes.) It did strike me that if I were to do this in a more serious fashion it would be good to automate it.

So this time round I hacked up a simple "dump comments" utility, which would scan named files and output the contents of any comments (be they single-line or multi-line). Once I'd done that I could spell-check easily:

 $ go run dump-comments.go *.c > comments
 $ aspell -c comments

Anyway the upshot of that was a pull-request against git:

We'll see if that makes its way live sometime. In case I get interested in doing this again I've updated my sysbox-utility collection to have a comments sub-command. That's a little more robust and reliable than my previous hack:

$ sysbox comments -pretty=true $(find . -name '*.c')
..
..

The comments sub-command has support for:

  • Single-line comments, for C++ (and modern C), as prefixed with //.
  • Multi-line comments, for C, as between /* and */.
  • Single-line comments, for shell, as prefixed with #.
  • Lua comments, both single-line (prefixed with --) and multiline between --[[ and --]].

Adding new support would be trivial, I just need a start and end pattern to search against. Pull-requests welcome:

Rondam RamblingsThe insidious problem of racism

Take a moment to seriously think about what is wrong with racism.  If you're like most people, your answer will probably be that racism is bad because it's a form of prejudice, and prejudice is bad.  This is not wrong, but it misses a much deeper, more insidious issue.  The real problem with racism is that it can be (and usually is) rationalized, and those rationalizations can turn into

CryptogramSurvey of Supply Chain Attacks

The Atlantic Council has released a report that looks at the history of computer supply chain attacks.

Key trends from their summary:

  1. Deep Impact from State Actors: There were at least 27 different state attacks against the software supply chain including from Russia, China, North Korea, and Iran as well as India, Egypt, the United States, and Vietnam. States have targeted software supply chains with great effect, as the majority of cases surveyed here resulted, or could have resulted, in remote code execution. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

  2. Abusing Trust in Code Signing: These attacks undermine public key cryptography and certificates used to ensure the integrity of code. Overcoming these protections is a critical step to enabling everything from simple alterations of open-source code to complex nation-state espionage campaigns. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

  3. Hijacking Software Updates: 27% of these attacks targeted software updates to insert malicious code against sometimes millions of targets. These attacks are generally carried out by extremely capable actors and poison updates from legitimate vendors. Examples: Flame, CCleaner 1 & 2, NotPetya, and Adobe pwdum7v71.

  4. Poisoning Open-Source Code: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to common examples. Attacks targeted some of the most widely used open source tools on the internet. Examples: Cdorked/Darkleech, RubyGems Backdoor, Colourama, and JavaScript 2018 Backdoor.

  5. Targeting App Stores: 22% of these attacks targeted app stores like the Google Play Store, Apple's App Store, and other third-party app hubs to spread malware to mobile devices. Some attacks even targeted developer tools, meaning every app later built using that tool was potentially compromised. Examples: ExpensiveWall, BankBot, Gooligan, Sandworm's Android attack, and XcodeGhost.

Recommendations included in the report. The entirely open and freely available dataset is here.

Worse Than FailureCodeSOD: Underscoring the Comma

Andrea writes to confess some sins, though I'm not sure who the real sinner is. To understand the sins, we have to talk a little bit about C/C++ macros.

Andrea was working on some software to control a dot-matrix display from an embedded device. Send an array of bytes to it, and the correct bits on the display light up. Now, if you're building something like this, you want an easy way to "remember" the proper sequences. So you might want to do something like:

uint8_t glyph0[] = {'0', 0x0E, 0x11, 0x0E, 0};
uint8_t glyph1[] = {'1', 0x09, 0x1F, 0x01, 0};

And so on. And heck, you might want to go so far as to have a lookup array, so you might have a const uint8_t *const glyphs[] = {glyph0, glyph1…}. Now, you could just hardcode those definitions, but wouldn't it be cool to use macros to automate that a bit, as your definitions might change?

Andrea went with a style known as X macros, which let you specify one pattern of data which can be re-used by redefining X. So, for example, I could do something like:

#define MY_ITEMS \
  X(a, 5) \
  X(b, 6) \
  X(c, 7)
  
#define X(name, value) int name = value;
MY_ITEMS
#undef X

This would generate:

int a = 5;
int b = 6;
int c = 7;

But I could re-use this, later:

#define X(name, data) name, 
int items[] = { MY_ITEMS nullptr};
#undef X

This would generate, in theory, something like: int items[] = {a,b,c,nullptr};

We are recycling the MY_ITEMS macro, and we're changing its behavior by altering the X macro that it invokes. This can, in practice, result in much more readable and maintainable code, especially code where you need to have parallel lists of items. It's also one of those things that the first time you see it, it's… surprising.

Now, this is all great, and it means that Andrea could potentially have a nice little macro system for defining arrays of bytes and a lookup array pointing to those arrays. There's just one problem.

Specifically, if you tried to write a macro like this:

#define GLYPH_DEFS \
  X(glyph0, {'0', 0x0E, 0x11, 0x0E, 0})

It wouldn't work. It doesn't matter what you actually define X to do; the preprocessor isn't aware of the C/C++ syntax. So it doesn't say "oh, that second comma is inside of an array initializer, I'll ignore it", it says, "Oh, they're trying to pass more than two parameters to the macro X."

So, you need some way to define an array initializer that doesn't use commas. If macros got you into this situation, macros can get you right back out. Here is Andrea's solution:

#define _ ,  // Sorry.
#define GLYPH_DEFS \
	X(glyph0, { '0' _ 0x0E _ 0x11 _ 0x0E _ 0 } ) \
	X(glyph1, { '1' _ 0x09 _ 0x1F _ 0x01 _ 0 }) \
	X(glyph2, { '2' _ 0x13 _ 0x15 _ 0x09 _ 0 }) \
	X(glyph3, { '3' _ 0x15 _ 0x15 _ 0x0A _ 0 }) \
	X(glyph4, { '4' _ 0x18 _ 0x04 _ 0x1F _ 0 }) \
	X(glyph5, { '5' _ 0x1D _ 0x15 _ 0x12 _ 0 }) \
	X(glyph6, { '6' _ 0x0E _ 0x15 _ 0x03 _ 0 }) \
	X(glyph7, { '7' _ 0x10 _ 0x13 _ 0x0C _ 0 }) \
	X(glyph8, { '8' _ 0x0A _ 0x15 _ 0x0A _ 0 }) \
	X(glyph9, { '9' _ 0x08 _ 0x14 _ 0x0F _ 0 }) \
	X(glyphA, { 'A' _ 0x0F _ 0x14 _ 0x0F _ 0 }) \
	X(glyphB, { 'B' _ 0x1F _ 0x15 _ 0x0A _ 0 }) \
	X(glyphC, { 'C' _ 0x0E _ 0x11 _ 0x11 _ 0 }) \
	X(glyphD, { 'D' _ 0x1F _ 0x11 _ 0x0E _ 0 }) \
	X(glyphE, { 'E' _ 0x1F _ 0x15 _ 0x15 _ 0 }) \
	X(glyphF, { 'F' _ 0x1F _ 0x14 _ 0x14 _ 0 }) \

#define X(name, data) const uint8_t name [] = data ;
GLYPH_DEFS
#undef X

#define X(name, data) name _
const uint8_t *const glyphs[] = { GLYPH_DEFS nullptr };
#undef X
#undef _

So, when processing the X macro, we pass it a pile of _s, which aren't commas, so it doesn't complain. Then we expand the _ macro and voila: we have syntactically valid array initializers. If Andrea ever changes the list of glyphs, adding or removing any, the macro will automatically sync the declaration of the individual arrays and their pointers over in the glyphs array.

Andrea adds:

The scope of this definition is limited to this data structure, in which the X macros are used, and it is #undef'd just after that. However, with all the stories of #define abuse on this site, I feel I still need to atone.
The testing sketch works perfectly.

Honestly, all sins are forgiven. There isn't a true WTF here, beyond "the C preprocessor is TRWTF". It's a weird, clever hack, and it's interesting to see this technique in use.

That said, as you might note: this was a testing sketch, just to prove a concept. Instead of getting clever with macros, your disposable testing code should probably just get to proving your concept as quickly as possible. You can worry about code maintainability later. So, if there are any sins by Andrea, it's the sin of overengineering a disposable test program.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianRuss Allbery: Review: The City in the Middle of the Night

Review: The City in the Middle of the Night, by Charlie Jane Anders

Publisher: Tor
Copyright: February 2019
Printing: February 2020
ISBN: 1-4668-7113-X
Format: Kindle
Pages: 366

January is a tidally-locked planet divided between permanent night and permanent day, an unfortunate destination for a colony starship. Now, humans cling to a precarious existence along the terminator, huddling in two wildly different cities and a handful of smaller settlements, connected by a road through the treacherous cold.

The novel opens with Sophie, a shy university student from the dark side of the city of Xiosphant. She has an overwhelming crush on Bianca, her high-class, self-confident roommate and one of the few people in her life to have ever treated her with compassion and attention. That crush, and her almost non-existent self-esteem, lead her to take the blame for Bianca's petty theft, resulting in what should have been a death sentence. Sophie survives only because she makes first contact with a native intelligent species of January, one that the humans have been hunting for food and sport.

Sadly, I think this is enough Anders for me. I've now bounced off two of her novels, both for structural reasons that I think go deeper than execution and indicate a fundamental mismatch between what Anders wants to do as an author and what I'm looking for as a reader.

I'll talk more about what this book is doing in a moment, but I have to start with Bianca and Sophie. It's difficult for me to express how much I loathed this relationship and how little I wanted to read about it. It took me about five pages to peg Bianca as a malignant narcissist and Sophie's all-consuming crush as dangerous codependency. It took the entire book for Sophie to figure out how awful Bianca is to her, during which Bianca goes through the entire abusive partner playbook of gaslighting, trivializing, contingent affection, jealous rage, and controlling behavior. And meanwhile Sophie goes back to her again, and again, and again, and again. If I hadn't been reading this book on a Kindle, I think it would have physically hit a wall after their conversation in the junkyard.

This is truly a matter of personal taste and preference. This is not an unrealistic relationship; this dynamic happens in life all too often. I'm sure there is someone for whom reading about Sophie's spectacularly poor choices is affirming or cathartic. I've not personally experienced this sort of relationship, which doubtless matters.

But having empathy for someone who is making awful and self-destructive life decisions and trusting someone they should not be trusting and who is awful to them in every way is difficult work. Sophie is the victim of Bianca's abuse, but she does so many stupid and ill-conceived things in support of this twisted relationship that I found it very difficult to not get angry at her. Meanwhile, Anders writes Sophie as so clearly fragile and uncertain and devoid of a support network that getting angry at her is like kicking a puppy. The result for me was spending nearly an entire book in a deeply unpleasant state of emotional dissonance. I may be willing to go through that for a close friend, but in a work of fiction it's draining and awful and entirely not fun.

The other viewpoint character had the opposite problem for me. Mouth starts the book as a traveling smuggler, the sole survivor of a group of religious travelers called the Citizens. She's practical, tough, and guarded. Beneath that, I think the intent was to show her as struggling to come to terms with the loss of her family and faith community. Her first goal in the book is to recover a recording of Citizen sacred scripture to preserve it and to reconnect with her past.

This sounds interesting on the surface, but none of it gelled. Mouth never felt to me like someone from a faith community. She doesn't act on Citizen beliefs to any meaningful extent, she rarely talks about them, and when she does, her attitude is nostalgia without spirituality. When Mouth isn't pursuing goals that turn out to be meaningless, she aimlessly meanders through the story. Sophie at least has agency and makes some important and meaningful decisions. Mouth is just there, even when Anders does shattering things to her understanding of her past.

Between Sophie and Bianca putting my shoulders up around my ears within the first few pages of the first chapter and my failure to muster any enthusiasm for Mouth, I said the eight deadly words ("I don't care what happens to these people") about a hundred pages in and the book never recovered.

There are parts of the world-building I did enjoy. The alien species that Sophie bonds with is not stunningly original, but it's a good (and detailed) take on one of the alternate cognitive and social models that science fiction has dreamed up. I was comparing the strangeness and dislocation unfavorably to China Miéville's Embassytown while I was reading it, but in retrospect Anders's treatment is more decolonialized. Xiosphant's turn to Circadianism as their manifestation of order is a nicely understated touch, a believable political overreaction to the lack of a day/night cycle. That touch is significantly enhanced by Sophie's time working in a salon whose business model is to help Xiosphant residents temporarily forget about time. And what glimmers we got of politics on the colony ship and their echoing influence on social and political structures were intriguing.

Even with the world-building, though, I want the author to be interested in and willing to expand the same bits of world-building that I'm engaged with. Anders didn't seem to be. The reader gets two contrasting cities along a road, one authoritarian and one libertine, which makes concrete a metaphor for single-axis political classification. But then Anders does almost nothing with that setup; it's just the backdrop of petty warlord politics, and none of the political activism of Bianca's student group seems to have relevance or theoretical depth. The same shallowness afflicts the religion of Mouth's Citizens: We get a few fragments of culture and religion, but without narrative exploration and without engagement from any of the characters. The way the crew of the Mothership was assembled seems to have led to a factional and racial caste system based on city of origin and technical expertise, but I couldn't tell you more than that because few of the characters seem to care. And so on.

In short, the world-building that I wanted to add up to a coherent universe that was meaningful to the characters and to the plot seemed to be little more than window-dressing. Anders tosses in neat ideas, but they don't add up to anything. They're just background scenery for Bianca and Sophie's drama.

The one thing that The City in the Middle of the Night does well is Sophie's nervous but excited embrace of the unknown. It was delightful to see the places where a typical protagonist would have to overcome a horror reaction or talk themselves through tradeoffs and where Sophie's reaction was instead "yes, of course, let's try." It provided an emotional strength to an extended first-contact exploration scene that made it liberating and heart-warming without losing the alienness. During that part of the book (in which, not coincidentally, Bianca does not appear), I was able to let my guard down and like Sophie for the first time, and I suspect that was intentional on Anders's part.

But, overall, I think the conflict between Anders's story-telling approach and my preferences as a reader is mostly irreconcilable. She likes to write about people who make bad decisions and compound their own problems. In one of the chapters of her non-fiction book about writing that's being serialized on Tor.com she says "when we watch someone do something unforgivable, we're primed to root for them as they search desperately for an impossible forgiveness." This is absolutely not true for me; when I watch a character do something unforgivable, I want to see repudiation from the protagonists and ideally some clear consequences. When that doesn't happen, I want to stop reading about them and find something more enjoyable to do with my time. I certainly don't want to watch a viewpoint character insist that the person who is doing unforgivable things is the center of her life.

If your preferences on character and story arc are closer to Anders's than mine, you may like this book. Certainly lots of people did; it was nominated for multiple awards and won the Locus Award for Best Science Fiction Novel. But despite the things it did well, I had a truly miserable time reading it and am not anxious to repeat the experience.

Rating: 4 out of 10


Krebs on Security: Business ID Theft Soars Amid COVID Closures

Identity thieves who specialize in running up unauthorized lines of credit in the names of small businesses are having a field day with all of the closures and economic uncertainty wrought by the COVID-19 pandemic, KrebsOnSecurity has learned. This story is about the victims of a particularly aggressive business ID theft ring that’s spent years targeting small businesses across the country and is now pivoting toward using that access for pandemic assistance loans and unemployment benefits.

Most consumers are likely aware of the threat from identity theft, which occurs when crooks apply for new lines of credit in your name. But the same crime can be far more costly and damaging when thieves target small businesses. Unfortunately, far too many entrepreneurs are simply unaware of the threat or don’t know how to be watchful for it.

What’s more, with so many small enterprises going out of business or sitting dormant during the COVID-19 pandemic, organized fraud rings have an unusually rich pool of targets to choose from.

Short Hills, N.J.-based Dun & Bradstreet [NYSE:DNB] is a data analytics company that acts as a kind of de facto credit bureau for companies: When a business owner wants to open a new line of credit, creditors typically check with Dun & Bradstreet to gauge the business’s history and trustworthiness.

In 2019, Dun & Bradstreet saw more than a 100 percent increase in business identity theft. For 2020, the company estimates an overall 258 percent spike in the crime. Dun & Bradstreet said that so far this year it has received over 4,700 tips and leads where business identity theft or malfeasance are suspected.

“The ferocity of cyber criminals to take advantage of COVID-19 uncertainties by preying on small businesses is disturbing,” said Andrew LaMarca, who leads the global high-risk and fraud team at Dun & Bradstreet.

For the past several months, Milwaukee, Wisc.-based cyber intelligence firm Hold Security has been monitoring communications among members of a business ID theft gang apparently operating in Georgia and Florida but targeting businesses throughout the United States. That surveillance has helped to paint a detailed picture of how business ID thieves operate, as well as the tricks they use to gain credit in a company’s name.

Hold Security founder Alex Holden said the group appears to target both active and dormant or inactive small businesses. The gang typically will start by looking up the business ownership records at the Secretary of State website that corresponds to the company’s state of incorporation. From there, they identify the officers and owners of the company and acquire their Social Security and Tax ID numbers from the dark web and other sources online.

To prove ownership over the hijacked firms, they hire low-wage image editors online to help fabricate and/or modify a number of official documents tied to the business — including tax records and utility bills.

The scammers frequently then file phony documents with the Secretary of State’s office in the name(s) of the business owners, but include a mailing address that they control. They also create email addresses and domain names that mimic the names of the owners and the company to make future credit applications appear more legitimate, and submit the listings to business search websites, such as yellowpages.com.

For both dormant and existing businesses, the fraudsters attempt to create or modify the target company’s accounts at Dun & Bradstreet. In some cases, the scammers create dashboard accounts in the business’s names at Dun & Bradstreet’s credit builder portal; in others, the bad guys have actually hacked existing business accounts at DNB, requesting a new DUNS number for the business (a DUNS number is a unique, nine-digit identifier for businesses).

Finally, after the bogus profiles are approved by Dun & Bradstreet, the gang waits a few weeks or months and then starts applying for new lines of credit in the target business’s name at stores like Home Depot, Office Depot and Staples. Then they go on a buying spree with the cards issued by those stores.

Usually, the first indication a victim has that they’ve been targeted is when the debt collection companies start calling.

“They are using mostly small companies that are still active businesses but currently not operating because of COVID-19,” Holden said. “With this gang, we see four or five people working together. The team leader manages the work between people. One person seems to be in charge of getting stolen cards from the dark web to pay for the reactivation of businesses through the secretary of state sites. Another team member works on revising the business documents and registering them on various sites. The others are busy looking for specific businesses they want to revive.”

Holden said the gang appears to find success in getting new lines of credit with about 20 percent of the businesses they target.

“One’s personal credit is nothing compared to the ability of corporations to borrow money,” he said. “That’s bad because while the credit system may be flawed for individuals, it’s an even worse situation on average when we’re talking about businesses.”

Holden said over the past few months his firm has seen communications between the gang’s members indicating they have temporarily shifted more of their energy and resources to defrauding states and the federal government by filing unemployment insurance claims and applying for pandemic assistance loans with the Small Business Administration.

“It makes sense, because they’ve already got control over all these dormant businesses,” he said. “So they’re now busy trying to get unemployment payments and SBA loans in the names of these companies and their employees.”

PHANTOM OFFICES

Hold Security shared data intercepted from the gang that listed the personal and financial details of dozens of companies targeted for ID theft, including Dun & Bradstreet logins the crooks had created for the hijacked businesses. Dun & Bradstreet declined to comment on the matter, other than to say it was working with federal and state authorities to alert affected businesses and state regulators.

Among those targeted was Environmental Safety Consultants Inc. (ESC), a 37-year-old environmental engineering firm based in Bradenton, Fla. ESC owner Scott Russell estimates his company was initially targeted nearly two years ago, and that he first became aware something wasn’t right when he recently began getting calls from Home Depot’s corporate offices inquiring about the company’s delinquent account.

But Russell said he didn’t quite grasp the enormity of the situation until last year, when he was contacted by the manager of a virtual office space across town who told him about a suspiciously large number of deliveries at an office space that was rented out in his name.

Russell had never rented that particular office. Rather, the thieves had done it for him, using his name and the name of his business. The office manager said the deliveries came virtually non-stop, even though there was apparently no business operating within the rented premises. And in each case, shortly after the shipments arrived someone would show up and cart them away.

“She said we don’t think it’s you,” he recalled. “Turns out, they had paid for a lease in my name with someone else’s credit card. She shared with me a copy of the lease, which included a fraudulent ID and even a vehicle insurance card for a Land Cruiser we got rid of like 15 years ago. The application listed our home address with me and some woman who was not my wife’s name.”

The crates and boxes being delivered to his erstwhile office space were mostly computers and other high-priced items ordered from 10 different Office Depot credit cards that also were not in his name.

“The total value of the electronic equipment that was bought and delivered there was something like $75,000,” Russell said, noting that it took countless hours and phone calls with Office Depot to make it clear they would no longer accept shipments addressed to him or his company. “It was quite spine-tingling to see someone penned a lease in the name of my business and personal identity.”

Even though the virtual office manager had the presence of mind to take photocopies of the driver’s licenses presented by the people arriving to pick up the fraudulent shipments, the local police seemed largely uninterested in pursuing the case, Russell said.

“I went to the local county sheriff’s office and showed them all the documentation I had and the guy just yawned and said he’d get right on it,” he recalled. “The place where the office space was rented was in another county, and the detective I spoke to there about it was interested, but he could never get anyone from my county to follow up.”

RECYCLING VICTIMS

Russell said he believes the fraudsters initially took out new lines of credit in his company’s name and then used those to defraud others in a similar way. One of those other victims appears on the gang’s target list obtained by Hold Security — Mary McMahan, owner of Fan Experiences, an event management company in Winter Park, Fla.

McMahan also had goods from Office Depot and other stores fraudulently purchased in her company’s name and delivered to the same office space rented in Russell’s name. McMahan said she and her businesses have suffered hundreds of thousands of dollars in fraud, and spent nearly as much in legal fees fending off collections firms and restoring her company’s credit.

McMahan said she first began noticing trouble almost four years ago, when someone started taking out new credit cards in her company’s name. At the same time, her business was used to open a new lease on a virtual office space in Florida that also began receiving packages tied to other companies victimized by business ID theft.

“About four years back, they hit my credit hard for a year, getting all these new lines of credit at Home Depot, Office Depot, Office Max, you name it,” she said. “Then they came back again two years ago and hit it hard for another year. They even went to the [Florida Department of Motor Vehicles] to get a driver’s license in my name.”

McMahan said the thieves somehow hacked her DNB account, and then began adding new officers and locations for her business listing.

“They changed the email and mailing address, and even went on Yelp and Google and did the same,” she said.

McMahan said she’s since locked down her personal and business credit to the point where even she would have a tough time getting a new line of credit or mortgage if she tried.

“There’s no way they can even utilize me anymore because there’s so many marks on my credit stating that it’s been stolen,” she said. “These guys are relentless, and they recycle victims to defraud others until they figure out they can’t recycle them anymore.”

SAY…THAT’S A NICE CREDIT PROFILE YOU GOT THERE…

McMahan says she, too, has filed multiple reports about the crimes with local police, but has so far seen little evidence that anyone is interested in following up on the matter. For now, she is paying Dun & Bradstreet more than $100 a month to monitor her business credit profile.

Dun & Bradstreet does offer a free version of credit monitoring called Credit Signal that lets business owners check their business credit scores and any inquiries made in the previous 14 days up to four times a year. However, those looking for more frequent checks or additional information about specific credit inquiries beyond 14 days are steered toward DNB’s subscription-based services.

Eva Velasquez, president of the Identity Theft Resource Center, a California-based nonprofit that assists ID theft victims, said she finds that troubling.

“When we look at these institutions that are necessary for us to operate and function in society and they start to charge us a fee for a service to fix a problem they helped create through their infrastructure, that’s just unconscionable,” Velasquez said. “We need to take a hard look at the infrastructures that businesses are beholden to and make sure the risk minimization protections they’re entitled to are not fee-based — particularly if it’s a problem created by the very infrastructure of the system.”

Velasquez said it’s unfortunate that small business owners don’t have the same protections afforded to consumers. For example, only recently did the three major consumer reporting bureaus allow all U.S. residents to place a freeze on their credit files for free.

“We’ve done a good job in educating the public that anyone can be victim of identity theft, and in compelling our infrastructure to provide robust consumer protection and risk minimization processes that are more uniform,” she said. “It’s still not good by any means, but it’s definitely better for consumers than it is for businesses. We currently put all the responsibility on the small business owner, and very little on the infrastructure and processes that should be designed to protect them but aren’t doing a great job, frankly.”

Rather, the onus continues to be on the business owner to periodically check with DNB and state agencies to monitor for any signs of unauthorized changes. Worse still, too many private and public organizations still don’t do a good enough job protecting employee identification and tax ID numbers that are so often abused in business identity theft, Velasquez said.

“You can put alerts and other protections in place but the problem is you have to go on a department by department and case by case basis,” she said. “The place to begin is your secretary of state’s office or wherever you file your documents to operate your business.”

For its part, Dun & Bradstreet recently published a blog post outlining recommendations for businesses to ward off identity thieves. DNB says anyone who suspects fraudulent activity on their account should contact its support team.

Planet Debian: Matthew Garrett: Filesystem deduplication is a sidechannel

First off - nothing I'm going to talk about in this post is novel or overly surprising, I just haven't found a clear writeup of it before. I'm not criticising any design decisions or claiming this is an important issue, just raising something that people might otherwise be unaware of.

With that out of the way: Automatic deduplication of data is a feature of modern filesystems like zfs and btrfs. It takes two forms - inline, where the filesystem detects that data being written to disk is identical to data that already exists on disk and simply references the existing copy rather than writing the data again, and offline, where tooling retroactively identifies duplicated data and removes the duplicate copies (zfs supports inline deduplication, btrfs only currently supports offline). In a world where disks end up with multiple copies of cloud or container images, deduplication can free up significant amounts of disk space.

What's the security implication? The problem is that deduplication doesn't recognise ownership - if two users have copies of the same file, only one copy of the file will be stored[1]. So, if user a stores a file, the amount of free space will decrease. If user b stores another copy of the same file, the amount of free space will remain the same. If user b is able to check how much free space is available, user b can determine whether the file already exists.
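
To make the free-space channel concrete, here's a minimal sketch in Python. The helper names (`free_bytes`, `dedup_probe`) are my own, not from the post; the inference only works on a filesystem doing inline deduplication, such as zfs with `dedup=on`. On an ordinary filesystem the free-space drop simply matches the write size and the probe learns nothing.

```python
import os
import tempfile

def free_bytes(path):
    """Free space (in bytes) on the filesystem containing path."""
    st = os.statvfs(path)
    return st.f_bavail * st.f_frsize

def dedup_probe(directory, data):
    """Write data to a fresh file, fsync it, and report the drop in free space.

    On a filesystem with inline deduplication enabled, a drop much smaller
    than len(data) suggests an identical copy already exists on the pool,
    possibly belonging to another user. On ordinary filesystems the drop is
    roughly len(data), so nothing is learned.
    """
    before = free_bytes(directory)
    probe_path = os.path.join(directory, "probe.bin")
    with open(probe_path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the blocks actually reach the filesystem
    after = free_bytes(directory)
    os.unlink(probe_path)     # clean up the probe file
    return before - after

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        payload = os.urandom(1 << 20)  # 1 MiB of random data
        drop = dedup_probe(d, payload)
        print(f"writing {len(payload)} bytes reduced free space by {drop} bytes")
```

Free space figures can fluctuate on a busy system, so a real attack would average several probes; this is only meant to show the shape of the channel.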

This doesn't seem like a huge deal in most cases, but it is a violation of expected behaviour (if user b doesn't have permission to read user a's files, user b shouldn't be able to determine whether user a has a specific file). But we can come up with some convoluted cases where it becomes more relevant, such as law enforcement gaining unprivileged access to a system and then being able to demonstrate that a specific file already exists on that system. Perhaps more interestingly, it's been demonstrated that free space isn't the only sidechannel exposed by deduplication - deduplication has an impact on access timing, and can be used to infer the existence of data across virtual machine boundaries.

As I said, this is almost certainly not something that matters in most real world scenarios. But with so much discussion of CPU sidechannels over the past couple of years, it's interesting to think about what other features also end up leaking information in ways that may not be obvious.

(Edit to add: deduplication isn't enabled on zfs by default and is explicitly triggered on btrfs, so unless it's something you've enabled then this isn't something that affects you)

[1] Deduplication is usually done at the block level rather than the file level, but given zfs's support for variable sized blocks, identical files should be deduplicated even if they're smaller than the maximum record size


Planet Debian: Wouter Verhelst: giphy.gif

Planet Debian: Wouter Verhelst: On Statements, Facts, Hypotheses, Science, Religion, and Opinions

The other day, we went to a designer's fashion shop whose owner was rather adamant that he was never ever going to wear a face mask, and that he didn't believe the COVID-19 thing was real. When I argued for the opposing position, he pretty much dismissed what I said out of hand, claiming that "the hospitals are empty dude" and "it's all a lie". When I told him that this really isn't true, he went like "well, that's just your opinion". Well, no -- certain things are facts, not opinions. Even if you don't believe that this disease kills people, the idea that this is a matter of opinion is missing the ball by so much that I was pretty much stunned by the level of ignorance.

His whole demeanor pissed me off rather quickly. While I disagree with the position that it should be your decision whether or not to wear a mask, it's certainly possible to have that opinion. However, whether or not people need to go to hospitals is not an opinion -- it's something else entirely.

After calming down, the encounter got me thinking, and made me focus on something I'd been thinking about before but hadn't fully formulated: the fact that some people in this world seem to misunderstand the nature of what it is to do science, and end up, under the claim of being "sceptical", with various nonsense things -- see scientology, flat earth societies, conspiracy theories, and whathaveyou.

So, here's something that might (but probably won't) help some people figuring out stuff. Even if it doesn't, it's been bothering me and I want to write it down so it won't bother me again. If you know all this stuff, it might be boring and you might want to skip this post. Otherwise, take a deep breath and read on...

Statements are things people say. They can be true or false; "the sun is blue" is an example of a statement that is trivially false. "The sun produces light" is another one that is trivially true. "The sun produces light through a process that includes hydrogen fusion" is another statement, one that is a bit more difficult to prove true or false. Another example is "Wouter Verhelst does not have a favourite color". That happens to be a true statement, but it's fairly difficult for anyone that isn't me (or any one of the other Wouters Verhelst out there) to validate as true.

While statements can be true or false, combining statements without more context is not always possible. As an example, the statement "Wouter Verhelst is a Debian Developer" is a true statement, as is the statement "Wouter Verhelst is a professional volleyball player"; but the statement "Wouter Verhelst is a professional volleyball player and a Debian Developer" is not, because while I am a Debian Developer, I am not a professional volleyball player -- I just happen to share a name with someone who is.

A statement is never a fact, but it can describe a fact. When a statement is a true statement, either because we trivially know what it states to be true or because we have performed an experiment that proved beyond any possible doubt that the statement is true, then what the statement describes is a fact. For example, "Red is a color" is a statement that describes a fact (because, yes, red is definitely a color, that is a fact). Such statements are called statements of fact. There are other possible statements. "Grass is purple" is a statement, but it is not a statement of fact; because as everyone knows, grass is (usually) green.

A statement can also describe an opinion. "The Porsche 911 is a nice car" is a statement of opinion. It is one I happen to agree with, but it is certainly valid for someone else to make a statement that conflicts with this position, and there is nothing wrong with that. As the saying goes, "opinions are like assholes: everyone has one". Statements describing opinions are known as statements of opinion.

The differentiating factor between facts and opinions is that facts are universally true, whereas opinions only hold for the people who state the opinion and anyone who agrees with them. Sometimes it's difficult or even impossible to determine whether a statement is true or not. The statement "The numbers that win the South African Powerball lottery on the 31st of July 2020 are 2, 3, 5, 19, 35, and powerball 14" is not a statement of fact, because at the time of writing, the 31st of July 2020 is in the future, which at this point gives it a 1 in 24,435,180 chance to be true). However, that does not make it a statement of opinion; it is not my opinion that the above numbers will win the South African powerball; instead, it is my guess that those numbers will be correct. Another word for "guess" is hypothesis: a hypothesis is a statement that may be universally true or universally false, but for which the truth -- or its lack thereof -- cannot currently be proven beyond doubt. On Saturday, August 1st, 2020 the above statement about the South African Powerball may become a statement of fact; most likely however, it will instead become a false statement.

An unproven hypothesis may be expressed as a matter of belief. The statement "There is a God who rules the heavens and the Earth" cannot currently (or ever) be proven beyond doubt to be either true or false, which by definition makes it a hypothesis; however, for matters of religion this is entirely unimportant, as for believers the belief that the statement is correct is all that matters, whereas for nonbelievers the truth of that statement is not at all relevant. A belief is not an opinion; an opinion is not a belief.

Scientists do not deal with unproven hypotheses, except insofar that they attempt to prove, through direct observation of nature (either out in the field or in a controlled laboratory setting) that the hypothesis is, in fact, a statement of fact. This makes unprovable hypotheses unscientific -- but that does not mean that they are false, or even that they are uninteresting statements. Unscientific statements are merely statements that science cannot either prove or disprove, and that therefore lie outside of the realm of what science deals with.

Given that background, I have always found the so-called "conflict" between science and religion to be a non-sequitur. Religion deals in one type of statements; science deals in another. They do not overlap, since a statement can either be proven or it cannot, and religious statements by their very nature focus on unprovable belief rather than universal truth. Sure, the range of things that science has figured out the facts about has grown over time, which implies that religious statements have sometimes been proven false; but is it heresy to say that "animals exist that can run 120 kph" if that is the truth, even if such animals don't exist in, say, Rome?

Something very similar can be said about conspiracy theories. Yes, it is possible to hypothesize that NASA did not send men to the moon, and that all the proof contrary to that statement was somehow fabricated. However, by its very nature such a hypothesis cannot be proven or disproven (because the statement states that all proof was fabricated), which therefore implies that it is an unscientific statement.

It is good to be sceptical about what is being said to you. People can have various ideas about how the world works, but only one of those ideas -- one of the possible hypotheses -- can be true. As long as a hypothesis remains unproven, scientists love to be sceptical themselves. In fact, if you can somehow prove beyond doubt that a scientific hypothesis is false, scientists will love you -- it means they now know something more about the world and that they'll have to come up with something else, which is a lot of fun.

When a scientific experiment or observation proves that a certain hypothesis is true, then this probably turns the hypothesis into a statement of fact. That is, it is of course possible that there's a flaw in the proof, or that the experiment failed (but that the failure was somehow missed), or that no observance of a particular event happened when a scientist tried to observe something, but that this was only because the scientist missed it. If you can show that any of those possibilities hold for a scientific proof, then you'll have turned a statement of fact back into a hypothesis, or even (depending on the exact nature of the flaw) into a false statement.

There's more. It's human nature to want to be rich and famous, sometimes no matter what the cost. As such, there have been scientists who have falsified experimental results, or who have claimed to have observed something when this was not the case. For that reason, a scientific paper that gets written after an experiment turned a hypothesis into fact describes not only the results of the experiment and the observed behavior, but also the methodology: the way in which the experiment was run, with enough details so that anyone can retry the experiment.

Sometimes that may mean spending a large amount of money just to be able to run the experiment (most people don't have an LHC in their backyard, say), and in some cases some of the required materials won't be available (the latter is especially true for, e.g., certain chemical experiments that involve highly explosive things); but the information is always there, and if you spend enough time and money reading through the available papers, you will be able to independently prove the hypothesis yourself. Scientists tend to do just that; when the results of a new experiment are published, they will try to rerun the experiment, partially because they want to see things with their own eyes; but partially also because if they can find fault in the experiment or the observed behavior, they'll have reason to write a paper of their own, which will make them a bit more rich and famous.

I guess you could say that there are three types of people who deal with statements: scientists, who deal with provable hypotheses and statements of fact (but who have no use for unprovable hypotheses and statements of opinion); religious people and conspiracy theorists, who deal with unprovable hypotheses (where the religious people deal with these to serve a larger cause, while conspiracy theorists only care about the unprovable hypotheses); and politicians, who should care about proven statements of fact and produce statements of opinion, but who usually attempt the reverse of those two these days :-/

Anyway...

mic drop

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 11)

Here’s part eleven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

LongNowDiscovery in Mexican Cave May Drastically Change the Known Timeline of Humans’ Arrival to the Americas

Human history in the Americas may be twice as long as previously believed — at least 26,500 years — according to authors of a new study at Mexico’s Chiquihuite cave and other sites throughout Central Mexico.

According to the study’s lead author Ciprian Ardelean:

“This site alone can’t be considered a definitive conclusion. But with other sites in North America like Gault (Texas), Bluefish Caves (Yukon), maybe Cactus Hill (Virginia)—it’s strong enough to favor a valid hypothesis that there were humans here probably before and almost surely during the Last Glacial Maximum.”

Planet DebianSteve Kemp: Growing food is fun.

"I grew up on a farm" is something I sometimes tell people. It isn't true, but it is a useful shorthand. What is true is that my parents both come from a farming background, my father's family up in Scotland, my mother's down in Yorkshire.

Every summer my sisters and myself would have a traditional holiday at the seaside, which is what people do in the UK (Blackpool, Scarborough, Great Yarmouth, etc). Before, or after, that we'd spend the rest of the summer living on my grandmother's farm.

I loved spending time on the farm when I was a kid, and some of my earliest memories date from that time. For example I remember hand-feeding carrots to working dogs (alsatians) that were taller than I was. I remember trying to ride on the backs of those dogs, and how that didn't end well. In fact the one and only time I can recall my grandmother shouting at me, or raising her voice at all, was when my sisters and I spent an afternoon playing in the coal-shed. We were filthy and covered in coal-dust from head to toe. Awesome!

Anyway the only reason I bring this up is because I have a little bit of a farming background, largely irrelevant in my daily life, but also a source of pleasant memories. Despite it being an animal farm (pigs, sheep, cows) there was also a lot of home-grown food, which my uncle Albert would deliver/sell to people nearby out of the back of a van. That same van that would be used to ferry us to see the fireworks every November. Those evenings were very memorable too - they would almost always involve flasks of home-made vegetable soup.

Nowadays I live in Finland, and earlier in the year we received access to an allotment - a small piece of land (10m x 10m) for €50/year - upon which we can grow our own plants, etc.

My wife decided to plant flowers and make it look pretty. She did good.

I decided to plant "food". I might not have done this stuff from scratch before, but I was pretty familiar with the process from my youth, and I also had the internet to hand for the obvious searches such as "How do you know when you can harvest your garlic?"

Before I started I figured it couldn't be too hard, after all if you leave onions/potatoes in the refrigerator for long enough they start to grow! It isn't like you have to do too much to help them. In short it has been pretty easy and I'm definitely going to be doing more of it next year.

I've surprised myself by enjoying the process as much as I have. Every few days I go and rip up the weeds, and water the things we've planted. So far I've planted, and harvested, Radish, Garlic, Onions, and in a few more weeks I'll be digging up potatoes.

I have no particular point to this post, except to say that if you have a few hours spare a week, and a slab of land to hand upon which you can dig and plant I'd recommend it. Sure there were annoyances, and not a single one of the carrot-seeds I planted showed any sign of life, but the other stuff? The stuff that grew? Very tasty, om nom nom ..

(It has to be said that when we received the plot there was a jungle growing upon it. Once we tidied it all up we found raspberries, roses, and other things. The garlic I reaped was already growing so I felt like a cheat to harvest it. That said I did plant a couple of bulbs on my balcony so I could say "I grew this from scratch". Took a while, but I did indeed harvest my own garlic.)

Planet DebianMartin Michlmayr: ledger2beancount 2.4 released

I released version 2.4 of ledger2beancount, a ledger to beancount converter.

There are two notable changes in this release:

  1. I fixed two regressions introduced in the last release. Sorry about the breakage!
  2. I improved support for hledger. I believe all syntax differences in hledger are supported now.

Here are the changes in 2.4:

  • Fix regressions introduced in version 2.3
    • Handle price directives with comments
    • Don't assume implicit conversion when price is on second posting
  • Improve support for hledger
    • Fix parsing of hledger tags
    • Support commas as decimal markers
    • Support digit group marks through commodity and D directives
    • Support end aliases directive
    • Support regex aliases
    • Recognise total balance assertions
    • Recognise sub-account balance assertions
  • Add support for define directive
  • Convert all uppercase metadata tags to all lowercase
  • Improve handling of ledger lots without cost
  • Allow transactions without postings
  • Fix parsing issue in commodity declarations
  • Support commodities that contain quotation marks
  • Add --version option to show version
  • Document problem of mixing apply and include

Thanks to Kirill Goncharov for pointing out one regression, to Taylor R Campbell for a patch, to Stefano Zacchiroli for some input, and finally to Simon Michael for input on hledger!

You can get ledger2beancount from GitHub.
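To give a feel for what the converter does (a hand-written illustration, not captured tool output — the transaction and exact spacing are mine), a ledger entry like:

```
2020/07/25 * Grocery store
    Expenses:Food             10.00 EUR
    Assets:Checking
```

would come out in beancount syntax roughly as:

```
2020-07-25 * "Grocery store"
  Expenses:Food             10.00 EUR
  Assets:Checking          -10.00 EUR
```

Invocation is a straightforward `ledger2beancount yourfile.ledger`; see the manual in the repository for the available configuration options.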

Worse Than FailureUltrabase

After a few transfers across departments at IniTech, Lydia found herself as a senior developer on an internal web team. They built intranet applications which covered everything from home-grown HR tools to home-grown supply chain tools, to home-grown CMSes, to home-grown "we really should have purchased something but the approval process is so onerous and the budgeting is so constrained that it looks cheaper to carry an IT team despite actually being much more expensive".

A new feature request came in, and it seemed extremely easy. There was a stored procedure that was normally invoked by a scheduled job. The admin users in one of the applications wanted to be able to invoke it on demand. Now, Lydia might be "senior", but she was new to the team, so she popped over to Desmond's cube to see what he thought.

"Oh, sure, we can do that, but it'll take about a week."

"A week?" Lydia asked. "A week? To add a button that invokes a stored procedure. It doesn't even take any parameters or return any results you'd need to display."

"Well, roughly 40 hours of effort, yeah. I can't promise it'd be a calendar week."

"I guess, with testing, and approvals, I could see it taking that long," Lydia said.

"Oh, no, that's just development time," Desmond said. "You're new to the team, so it's time you learned about Ultrabase."

Wyatt was the team lead. Lydia had met him briefly during her onboarding with the team, but had mostly been interacting with the other developers on the team. Wyatt, as it turned out, was a Certified Super Genius™, and was so smart that he recognized that most of their applications were, functionally, quite the same. CRUD apps, mostly. So Wyatt had "automated" the process, with his Ultrabase solution.

First, there was a configuration database. Every table, every stored procedure, every view or query, needed to be entered into the configuration database. Now, Wyatt, Certified Super Genius™, knew that he couldn't define a simple schema which would cover all the possible cases, so he didn't. He defined a fiendishly complicated schema with opaque and inconsistent validity rules. Once you had entered the data for all of your database objects, hopefully correctly, you could then execute the Data program.

The Data program would read through the configuration database, and through the glories of string concatenation generate a C# solution containing the definitions of your data model objects. The Data program itself was very fault tolerant, so fault tolerant that if anything went wrong, it still just output C# code, just not syntactically correct C# code. If the C# code couldn't compile, you needed to go back to the configuration database and figure out what was wrong.

Eventually, once you had a theoretically working data model library, you pushed the solution to the build server. That would build and sign the library with a corporate key, and publish it to their official internal software repository. This could take days or weeks to snake its way through all the various approval steps.

Once you had the official release of the datamodel, you could fire up the Data Access Layer tool, which would then pull down the signed version in the repository, and using reflection and the config database, the Data Access Layer program would generate a DAL. Assuming everything worked, you would push that to the build server, and then wait for that to wind its way through the plumbing of approvals.

Then the Business Logic Layer. Then the "Core" layer. The "UI Adapter Layer". The "Front End" layer.

Each layer required the previous layer to be in the corporate repository before you could generate it. Each layer also needed to check the config database. It was trivial to make an error that wouldn't be discovered until you tried to generate the front end layer, and if that happened, you needed to go all the way back to the beginning.

"Wyatt is working on a 'config validation tool' which he says will avoid some of these errors," Desmond said. "So we've got that to look forward to. Anyway, that's our process. Glad to have you on the team!"

Lydia was significantly less glad to be on the team, now that Desmond had given her a clearer picture of how it actually worked.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianRuss Allbery: Summer haul

I'm buying rather too many books at the moment and not reading enough of them (in part because I got back into Minecraft and in part because I got a bit stuck on a few difficult books). I think I've managed to get myself unstuck again, though, and have started catching up on reviews.

2020. It's kind of a lot. And I'm not even that heavily affected.

Katherine Addison — The Angel of the Crows (sff)
Marie Brennan — A Natural History of Dragons (sff)
Kacen Callender — Queen of the Conquered (sff)
Jo Clayton — Diadem from the Stars (sff)
Jo Clayton — Lamarchos (sff)
Jo Clayton — Irsud (sff)
Clifford D. Conner — The Tragedy of American Science (nonfiction)
Kate Elliott — Unconquerable Sun (sff)
Rory Fanning & Craig Hodges — Long Shot (nonfiction)
Michael Harrington — Socialism: Past & Future (nonfiction)
Nalo Hopkinson — Brown Girl in the Ring (sff)
Kameron Hurley — The Stars Are Legion (sff)
N.K. Jemisin — Emergency Skin (sff)
T. Kingfisher — A Wizard's Guide to Defensive Baking (sff)
T. Kingfisher — Nine Goblins (sff)
Michael Lewis — The Fifth Risk (nonfiction)
Paul McAuley — War of the Maps (sff)
Gretchen McCulloch — Because Internet (nonfiction)
Hayao Miyazaki — Nausicaä of the Valley of the Wind (graphic novel)
Annalee Newitz — The Future of Another Timeline (sff)
Nick Pettigrew — Anti-Social (nonfiction)
Rivers Solomon, et al. — The Deep (sff)
Jo Walton — Or What You Will (sff)
Erik Olin Wright — Stardust to Stardust (nonfiction)

Of these, I've already read and reviewed The Fifth Risk (an excellent book).

Planet DebianRuss Allbery: Review: Rise of the Warrior Cop

Review: Rise of the Warrior Cop, by Radley Balko

Publisher: PublicAffairs
Copyright: 2013
ISBN: 1-61039-212-4
Format: Kindle
Pages: 336

As the United States tries, in fits and starts, to have a meaningful discussion about long-standing police racism, brutality, overreach, corruption, and murder, I've realized that my theoretical understanding of the history of and alternative frameworks for law enforcement is woefully lacking. Starting with a book by a conservative white guy is not the most ideal of approaches, but it's what I already had on hand, and it won't be the last book I read and review on this topic. (Most of my research so far has been in podcast form. I don't review those here, but I can recommend Ezra Klein's interviews with Ta-Nehisi Coates, Paul Butler, and, most strongly, sujatha baliga.)

Rise of the Warrior Cop is from 2013 and has had several moments of fame, no doubt helped by Balko's connections to the conservative and libertarian right. One of the frustrating facts of US politics is that critiques of the justice system from the right (and from white men) get more media attention than critiques from the left. That said, it's a generally well-respected book on the factual history of the topic, and police brutality and civil rights are among the points on which I have stopped-clock agreements with US libertarians.

This book is very, very libertarian.

In my callow youth, I was an ardent libertarian, so I've read a lot of US libertarian literature. It's a genre with its own conventions that become obvious when you read enough of it, and Rise of the Warrior Cop goes through them like a checklist. Use the Roman Republic (never the Roman Empire) as the starting point for any political discussion, check. Analyze the topic in the context of pre-revolutionary America, check. Spend considerable effort on discerning the opinions of the US founders on the topic since their opinions are always relevant to the modern world, check. Locate some point in the past (preferably before 1960) where the political issue was as good as it has ever been, check. Frame all changes since then as an erosion of rights through government overreach, check. Present your solution as a return to a previous era of respect for civil rights, check. Once you start recognizing the genre conventions, their prevalence in libertarian writing is almost comical.

The framing chapters therefore leave a bit to be desired, but the meat of the book is a useful resource. Starting with the 1970s and its use as a campaigning tool by Nixon, Balko traces a useful history of the war on drugs. And starting with the 1980s, the number of cites to primary sources and the evidence of Balko's own research increases considerably. If you want to know how US police turned into military cosplayers with body armor, heavy weapons, and armored vehicles, this book provides a lot of context and history.

One of the reasons why I view libertarians as allies of convenience on this specific issue is that drug legalization and disgust with the war on drugs have been libertarian issues for decades. Ideologically honest libertarians (and Balko appears to be one) are inherently skeptical of the police, so when the police overreach in an area of libertarian interest, they notice. Balko makes a solid argument, backed up with statistics, specific programs, legislation, and court cases, that the drug war and its accompanying lies about heavily-armed drug dealers and their supposed threat to police officers was the fuel for the growth of SWAT teams, no-knock search warrants, erosion of legal protections for criminal defendants, and de facto license for the police to ignore the scope and sometimes even the existence of warrants.

This book is useful support for the argument that fears for the safety of officers underlying the militarization of police forces are imaginary. One telling point that Balko makes repeatedly and backs with statistical and anecdotal evidence is that the police generally do not use raid tactics on dangerous criminals. On the contrary, aggressive raids are more likely to be used on the least dangerous criminals because they're faster, they're fun for the police (they provide an adrenaline high and let them play with toys), and they're essentially risk-free. If the police believe someone is truly dangerous, they're more likely to use careful surveillance and to conduct a quiet arrest at an unexpected moment. The middle-of-the-night armed break-ins with battering rams, tear gas, and flash-bangs are, tellingly, used against the less dangerous suspects.

This is part of Balko's overall argument that police equipment and tactics have become untethered from any realistic threat and have become cultural. He traces an acceleration of that trend to 9/11 and the resulting obsession with terrorism, which further opened the spigot of military hardware and "special forces" training. This became a point of competition between police departments, with small town forces that had never seen a terrorist and had almost no chance of a terrorist incident demanding their own armored vehicles. I've encountered this bizarre terrorism justification personally; one of the reasons my local police department gave in a public hearing for not having a policy against shooting at moving vehicles was "but what if terrorism?" I don't believe there has ever been a local terrorist attack.

SWAT in such places didn't involve the special training or dedicated personnel of large city forces; instead, it was a part-time duty for normal police officers, and frequently they were encouraged to practice SWAT tactics by using them at random for some otherwise normal arrest or search. Balko argues that those raids were more exciting than normal police work, leading to a flood of volunteers for that duty and a tendency to use them as much as possible. That in turn normalizes disconnecting police tactics from the underlying crime or situational risk.

So far, so good. But despite the information I was able to extract from it, I have mixed feelings about Rise of the Warrior Cop as a whole. At the least, it has substantial limitations.

First, I don't trust the historical survey of policing in this book. Libertarian writing makes for bad history. The constraints of the genre require overusing only a few points of reference, treating every opinion of the US founders as holy writ, and tying forward progress to a return to a previous era, all of which interfere with good analysis. Balko also didn't do the research for the historical survey, as is clear from the footnotes. The citations are all to other people's histories, not to primary sources. He's summarizing other people's histories, and you'll almost certainly get better history by finding well-respected historians who cover the same ground. (That said, if you're not familiar with Peel's policing principles, this is a good introduction.)

Second, and this too is unfortunately predictable in a libertarian treatment, race rarely appears in this book. If Balko published the same book today, I'm sure he would say more about race, but even in 2013 its absence is strange. I was struck while reading by how many examples of excessive police force were raids on west coast pot farms; yes, I'm sure that was traumatic, but it's not the demographic I would name as the most vulnerable to or affected by police brutality. West coast pot growers are, however, mostly white.

I have no idea why Balko made that choice. Perhaps he thought his target audience would be more persuaded by his argument if he focused on white victims. Perhaps he thought it was an easier and less complicated story to tell. Perhaps, like a lot of libertarians, he doesn't believe racism has a significant impact on society because it would be a market failure. Perhaps those were the people who more readily came to mind. But to talk about police militarization, denial of civil rights, and police brutality in the United States without putting race at the center of both the history and the societal effects leaves a gaping hole in the analysis.

Given that lack of engagement, I also am dubious of Balko's policy prescriptions. His reform suggestions aren't unreasonable, but they stay firmly in the centrist and incrementalist camp and would benefit white people more than black people. Transparency, accountability, and cultural changes are all fine and good, but the cultural change Balko is focused on is less aggressive arrest tactics, more use of mediation, and better physical fitness. I would not object to those things (well, maybe the last, which seemed odd), but we need to have a discussion about police white supremacist organizations, the prevalence of spousal abuse, and the police tendency to see themselves not as public servants but as embattled warriors who are misunderstood by the naive sheep they are defending.

And, of course, you won't find in Rise of the Warrior Cop any thoughtful wrestling with whether there are alternative approaches to community safety, whether punitive rather than restorative justice is effective, or whether crime is a symptom of deeper societal problems we could address but refuse to. The most radical suggestion Balko has is to legalize drugs, which is both the predictable libertarian position and, as we have seen from recent events in the United States, far from the only problem of overcriminalization.

I understand why this book is so frequently mentioned on-line, and its author's political views may make it more palatable to some people than a more race-centered or radical perspective. But I don't think this is the best or most useful book on police violence that one could read today. I hope to find a better one in upcoming reviews.

Rating: 6 out of 10

,

Planet DebianEnrico Zini: Consent links

Teaching consent is ongoing, but it starts when children are very young. It involves both teaching children to pay attention to and respect others' consent (or lack thereof) and teaching children that they should expect their own bodies and their own space to be respected---even by their parents and other relatives. And if children of two or four can be expected to read the nonverbal cues and expressions of children not yet old enough to talk in order to assess whether there is consent, what excuse do full grown adults have?
Small children have no sense of shame or disgust or fear of their bodies. A body is what it is. It does what it does.
About commonly accepted violations of children's boundaries
Personal boundaries are guidelines, rules or limits that a person creates to identify reasonable, safe and permissible ways for other people to behave towards them and how they will respond when someone passes those limits.[1] They are built out of a mix of conclusions, beliefs, opinions, attitudes, past experiences and social learning.[2][3] This concept or life skill has been widely referenced in self-help books and used in the counseling profession since the mid-1980s.[4]

Rondam RamblingsAbortion restrictions result in more abortions

Not that this was ever in any serious doubt, but now there is actual data published in The Lancet showing that abortion restrictions increase the number of abortions: In 2015–19, there were 121.0 million unintended pregnancies annually (80% uncertainty interval [UI] 112.8–131.5), corresponding to a global rate of 64 unintended pregnancies (UI 60–70) per 1000 women aged 15–49 years. 61% (58–63)

Rondam RamblingsMark your calendars: I am debating Kent Hovind on July 9

I've recently taken up a new hobby of debating young-earth creationists on YouTube.  (It's a dirty job, but somebody's gotta do it.)  I've done two of them so far [1][2], both on a creationist channel called Standing For Truth.  My third debate will be against Kent Hovind, one of the more prominent and, uh, outspoken members of the YEC community.  In case you haven't heard of him, here's a sample

Planet DebianHolger Levsen: 20200726-lts-survey

Final call to participate in the LTS survey

After 6 years of existence, we, the Debian LTS contributors, wanted to run a survey to learn more about how Debian LTS is used and perceived. Please take a few minutes to participate in the survey at https://surveys.debian.net/...!

Also, please hurry up: the survey will close at the end of July 27th, Samoa time, which is roughly 48 hours from now.

Planet DebianRuss Allbery: Review: Paladin's Grace

Review: Paladin's Grace, by T. Kingfisher

Publisher: Red Wombat Studio
Copyright: 2020
ASIN: B0848Q8JVW
Format: Kindle
Pages: 399

Stephen was a paladin. Then his god died.

He was a berserker, an unstoppable warrior in the service of his god. Now, well, he's still a berserker, but going berserk when you don't have a god to control the results is not a good idea. He and his brothers were taken in by the Temple of the Rat, where they serve as guards, watch out for each other, and try to get through each day with an emptiness in their souls where a god should be.

Stephen had just finished escorting a healer through some of the poorer parts of town when a woman runs up to him and asks him to hide her. Their awkward simulated tryst is sufficient to fool the two Motherhood priests who were after her for picking flowers from the graveyard. Stephen then walks her home and that would have been the end of it, except that neither could get the other out of their mind.

Despite first appearances, and despite being set in the same world and sharing a supporting character, this is not the promised sequel to Swordheart (which is apparently still coming). It's an entirely different paladin story. T. Kingfisher (Ursula Vernon's nom de plume when writing for adults) has a lot of things to say about paladins! And, apparently, paladin-involved romances.

On the romance front, Kingfisher clearly has a type. The general shape of the story will be familiar from Swordheart and The Wonder Engine: An independent and occasionally self-confident woman with various quirks, a hunky paladin who is often maddeningly dense, and a lot of worrying on both sides about whether the other person is truly interested in them and if their personal liabilities make a relationship a horrible idea. This is not my preferred romance formula (it provokes the occasional muttered "for the love of god just talk to each other"), but I liked this iteration of it better than the previous two, mostly because of Grace.

Grace is a perfumer, a trade she went into by being picked out of a lineup of orphans by a master perfumer for her sense of smell. One of Kingfisher's strengths as a writer is showing someone get lost in their routine day-to-day competence. When mixed with an inherently fascinating profession, this creates a great reading experience. Grace is also an abuse survivor, which made the communication difficulties with Stephen more interesting and subtle. Grace has created space and a life for herself, and her unwillingness to take risks on changes is a deep part of her sense of self and personal safety. As her past is slowly revealed, Kingfisher puts the reader in a position to share Stephen's anger and protectiveness, but then consistently puts Grace's own choices, coping mechanisms, and irritated refusal to be protected back into the center of the story. She has to accept some help as she gets entangled in the investigation of a highly political staged assassination attempt, but both that help and the relationship come on her own terms. It's very well-done.

The plot was enjoyable enough, although it involved a bit too much of constantly rising stakes and turns for the worst for my taste, and the ending had a touch of deus ex machina. Like Kingfisher's other books, though, the delight is in the unexpected details. Stephen knitting socks. Grace's frustrated obsession with why he smells like gingerbread. The beautifully practical and respectful relationship between the Temple of the Rat and Stephen's band of former paladins. (After only two books in which they play a major role, the Temple of the Rat is already one of my favorite fantasy religions.) Everything about Bishop Beartongue. Grace's friend Marguerite. And a truly satisfying ending.

The best part of this book, though, is the way Grace is shown as a complete character in a way that even most books with well-rounded characterization don't manage. Some things she does make the reader's heart ache because of the hints they provide about her past, but they're also wise and effective safety mechanisms for her. Kingfisher gives her space to be competent and prickly and absent-minded. She has a complete life: friends, work, goals, habits, and little rituals. Grace meets someone and falls in love, but one can readily imagine her not falling in love and going on with her life and that result wouldn't be tragic. In short, she feels like a grown adult who has made her own peace with where she came from and what she is doing. The book provides her an opportunity for more happiness and more closure without undermining her independence. I rarely see this in a novel, and even more rarely done this well.

If you haven't read any of Kingfisher's books and are in the mood for faux-medieval city romance involving a perfumer and a bit of political skulduggery, this is a great place to start. If you liked Swordheart, you'll probably like Paladin's Grace; like me, you may even like it a bit more. Recommended, particularly if you want something light and heart-warming.

Rating: 8 out of 10

,

Planet DebianNiels Thykier: Support for Debian packaging files in IDEA (IntelliJ/PyCharm)

I have been using the community editions of IntelliJ and PyCharm for a while now for Python or Perl projects. But it started to annoy me that for Debian packaging bits they would "revert" into a fancy version of notepad. Being fed up with it, I sat down and spent the last week studying how to write a plugin to "fix" this.

After a few prototypes, I have now released IDEA-debpkg v0.0.3 (link to JetBrains' official plugin listing with screenshots). It provides a set of basic features for debian/control such as syntax highlighting, varying degrees of content validation, folding of long fields, code completion and "CTRL + hover" documentation. For debian/changelog, it is mostly just syntax highlighting with a bit of fancy linking for now. I have not done anything for debian/rules as I noted there is a Makefile plugin, which will have to do for now.
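For the curious, the file the plugin targets is the deb822-style debian/control. A minimal, entirely hypothetical example (package and maintainer names are made up) of the sort it highlights and validates:

```
Source: hello-demo
Section: utils
Priority: optional
Maintainer: Jane Doe <jane@example.org>
Build-Depends: debhelper-compat (= 12)
Standards-Version: 4.5.0

Package: hello-demo
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: demonstration package (not real)
 Continuation lines of the long description are indented by one
 space; long fields like this are what the plugin can fold.
```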

The code is available from github and licensed under Apache-2.0. Contributors, issues/feature requests and pull requests are very welcome. Among things I could help with are:

  • Icons – both for the plugin and for the file types. Currently it is just colored text, which is as far as my artistic skills got with the space provided.
  • Color and text formatting for syntax highlighting.
  • Reports of papercut or features that would be very useful to prioritize.
  • Review of the “CTRL + hover” documentation. I am hoping for something that is helpful for new contributors, but I am very unlikely to have gotten it right (among other reasons, because I wrote most of it to “get it done” rather than to “get it right”).

I hope you will take it for a spin if you have been looking for a bit of Debian packaging support in your PyCharm or other IDEA IDE. 🙂 Please do file bugs/issues if you run into problems, rough edges, unhelpful documentation, etc.

Planet DebianAndrew Cater: How to use the signed checksum files to verify Debian media images

Following on from the blog post the other day in some sense: someone has asked on the debian-user list: "I do not understand from the given page (https://www.debian.org/CD/verify)  how to use .sign files and gpg in order to check verify the authenticity of debian cds. I understand the part with using sha256sum or sha512sum or md5sum to check whether the files were downloaded correctly."

Distributed with the CD and other media images on Debian CD mirrors, there are files like MD5SUMS, MD5SUMS.sign, SHA256SUMS, SHA256SUMS.sign and so on.

SHA512SUMS is a plain text list of the SHA512 checksums for each of the files in the directory. SHA512SUMS.sign is the GPG-signed version of that file. This allows for non-repudiation: if the signature is valid, then the plain text file has been signed by the owner of that key, and nothing has tampered with the checksums file since it was signed.

After downloading the SHA1SUMS, SHA256SUMS and SHA512SUMS files and the corresponding .sign files from your chosen Debian CD mirror, you can verify everything locally.

Assuming that you already have GPG installed: sha256sum and sha512sum are installed by the coreutils package, which Debian installs by default.

gpg --verify SHA512SUMS.sign SHA512SUMS will verify the .sign signature file against the signed file.

gpg --verify SHA512SUMS.sign SHA512SUMS
gpg: Signature made Sun 10 May 2020 00:16:52 UTC
gpg:                using RSA key DF9B9C49EAA9298432589D76DA87E80D6294BE9B


The signing key is the one given on the Debian CD verification page linked above.

You can import that key from the Debian key servers if you wish.

gpg --keyserver keyring.debian.org --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B

You can import the key from the SKS keyservers, which are often more available:

gpg --keyserver pool.sks-keyservers.net --recv-keys DF9B9C49EAA9298432589D76DA87E80D6294BE9B 

and you then get:

gpg --verify SHA512SUMS.sign SHA512SUMS
gpg: Signature made Sun 10 May 2020 00:16:52 UTC
gpg:                using RSA key DF9B9C49EAA9298432589D76DA87E80D6294BE9B
gpg: Good signature from "Debian CD signing key " [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B


The Debian CD signing key isn't certified by any key in my own keyring (hence the warning), but this does now show me that this is a good signature from the primary key fingerprint as given.

Repeating the exercise from the other day and producing a Debian amd64 netinst file using jigdo, I can now check the checksum of the local .iso file against the checksum file distributed by Debian. If they match, it's a good sign that the image I've generated is bit-for-bit identical to the official one. For my locally generated file:

sha512sum debian-10.4.0-amd64-netinst.iso
ec69e4bfceca56222e6e81766bf235596171afe19d47c20120783c1644f72dc605d341714751341051518b0b322d6c84e9de997815e0c74f525c66f9d9eb4295  debian-10.4.0-amd64-netinst.iso


and for the file checksum as distributed by Debian:

grep debian-10.4.0-amd64-netinst.iso SHA512SUMS
ec69e4bfceca56222e6e81766bf235596171afe19d47c20120783c1644f72dc605d341714751341051518b0b322d6c84e9de997815e0c74f525c66f9d9eb4295  debian-10.4.0-amd64-netinst.iso


and they match! 

As ever, I hope this blog post will help somebody.
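For anyone who wants to try the checksum half of the process end to end without downloading a full image, here is a minimal sketch using a throwaway file in place of the .iso. The filenames are invented for the example, and the gpg --verify step stays exactly as shown above:

```shell
# Stand-in for a downloaded image: any file will do for the demonstration.
printf 'pretend ISO contents\n' > sample.iso

# Record its SHA512 checksum, the same way Debian does in the SHA512SUMS file.
sha512sum sample.iso > SHA512SUMS.local

# Verification: pull out the matching line and feed it to sha512sum -c,
# which recomputes the checksum and compares it against the recorded one.
grep sample.iso SHA512SUMS.local | sha512sum -c -
```

A matching file prints "sample.iso: OK"; any corruption makes sha512sum -c report a failure and exit non-zero, which also makes this easy to use in scripts.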

[Edit: Someone has kindly pointed out that grep *iso SHA512SUMS | sha512sum -c will check this more efficiently.]

Planet DebianCraig Small: 25 Years of Free Software

When did I start writing Free Software, now called Open Source? That’s a tricky question. Does the clock start with the first file edited, the first time it compiles, or perhaps even with some proto-program you used to work out a concept for the real program formed later on?

So using the date you started writing, especially in an era before decent version control systems, is problematic. That is why I use the date of the first release of the first package as the start date. For me, that was Monday 24th July 1995.

axdigi and before

My first released Free Software program was axdigi which was a layer-2 packet repeater for hamradio. This was uploaded to some FTP server, probably UCSD in late July 1995. The README is dated 24th July 1995.

There were programs before this. I had written a closed-source (probably undistributable) driver for the Gracilis PackeTwin serial card and also some sort of primitive wireshark/tcpdump thing for capturing packet radio. Funny thing is that the capture program is the predecessor of both axdigi and a system that was used by a major Australian ISP for their internet billing system.

Choosing Free Software

So you have written something you think others might like, what software license will you use to distribute it? In 1995 it wasn’t that clear. This was the era of strange boutique licenses including ones where it was ok to run the program as a hamradio operator but not a CB radio operator (or at least they tried to work it that way).

A friend of mine and the author of the Linux HAM HOWTO amongst other documents, Terry Dawson, suggested I use GPL or another Free Software license. He explained what this Free Software thing was and said that if you want your program to be the most useful then something like GPL will do it. So I released axdigi under the GPL license and most of my programs since then have used the same license. Something like MIT or BSD licenses would have been fine too, I was just not going to use something closed or hand-crafted.

That was a while ago, I’ve written or maintained many programs since then. I also became a Debian maintainer (23 years so far) and adopted both procps and psmisc which I still maintain as both the Debian developer and upstream to this day.

What Next?

So it has been 25 years or a quarter of a century, what will happen next? Probably more of the same, though I’m not sure I will be maintaining Free Software by the end of the next 25 years (I’ll be over 70 then). Perhaps the singularity will arrive and writing software will be something people only do at Rennie Festivals.

Come to the Festival! There is someone making horseshoes! Over there is a steam engine. See this other guy writing computer programs on a thing called a keyboard!

,

Krebs on SecurityThinking of a Cybersecurity Career? Read This

Thousands of people graduate from colleges and universities each year with cybersecurity or computer science degrees only to find employers are less than thrilled about their hands-on, foundational skills. Here’s a look at a recent survey that identified some of the bigger skills gaps, and some thoughts about how those seeking a career in these fields can better stand out from the crowd.

Virtually every week KrebsOnSecurity receives at least one email from someone seeking advice on how to break into cybersecurity as a career. In most cases, the aspirants ask which certifications they should seek, or what specialization in computer security might hold the brightest future.

Rarely am I asked which practical skills they should seek to make themselves more appealing candidates for a future job. And while I always preface any response with the caveat that I don’t hold any computer-related certifications or degrees myself, I do speak with C-level executives in cybersecurity and recruiters on a regular basis and frequently ask them for their impressions of today’s cybersecurity job candidates.

A common theme in these C-level executive responses is that a great many candidates simply lack hands-on experience with the more practical concerns of operating, maintaining and defending the information systems which drive their businesses.

Granted, most people who have just graduated with a degree lack practical experience. But happily, a somewhat unique aspect of cybersecurity is that one can gain a fair degree of mastery of hands-on skills and foundational knowledge through self-directed study and old fashioned trial-and-error.

One key piece of advice I nearly always include in my response to readers involves learning the core components of how computers and other devices communicate with one another. I say this because a mastery of networking is a fundamental skill that so many other areas of learning build upon. Trying to get a job in security without a deep understanding of how data packets work is a bit like trying to become a chemical engineer without first mastering the periodic table of elements.

But please don’t take my word for it. The SANS Institute, a Bethesda, Md. based security research and training firm, recently conducted a survey of more than 500 cybersecurity practitioners at 284 different companies in an effort to suss out which skills they find most useful in job candidates, and which are most frequently lacking.

The survey asked respondents to rank various skills from “critical” to “not needed.” Fully 85 percent ranked networking as a critical or “very important” skill, followed by a mastery of the Linux operating system (77 percent), Windows (73 percent), common exploitation techniques (73 percent), computer architectures and virtualization (67 percent) and data and cryptography (58 percent). Perhaps surprisingly, only 39 percent ranked programming as a critical or very important skill (I’ll come back to this in a moment).

How did the cybersecurity practitioners surveyed grade their pool of potential job candidates on these critical and very important skills? The results may be eye-opening:

“Employers report that student cybersecurity preparation is largely inadequate and are frustrated that they have to spend months searching before they find qualified entry-level employees if any can be found,” said Alan Paller, director of research at the SANS Institute. “We hypothesized that the beginning of a pathway toward resolving those challenges and helping close the cybersecurity skills gap would be to isolate the capabilities that employers expected but did not find in cybersecurity graduates.”

The truth is, some of the smartest, most insightful and talented computer security professionals I know today don’t have any computer-related certifications under their belts. In fact, many of them never even went to college or completed a university-level degree program.

Rather, they got into security because they were passionately and intensely curious about the subject, and that curiosity led them to learn as much as they could — mainly by reading, doing, and making mistakes (lots of them).

I mention this not to dissuade readers from pursuing degrees or certifications in the field (which may be a basic requirement for many corporate HR departments) but to emphasize that these should not be viewed as some kind of golden ticket to a rewarding, stable and relatively high-paying career.

More to the point, without a mastery of one or more of the above-mentioned skills, you simply will not be a terribly appealing or outstanding job candidate when the time comes.

BUT..HOW?

So what should you focus on, and what’s the best way to get started? First, understand that while there are a near infinite number of ways to acquire knowledge and virtually no limit to the depths you can explore, getting your hands dirty is the fastest way to learning.

No, I’m not talking about breaking into someone’s network, or hacking some poor website. Please don’t do that without permission. If you must target third-party services and sites, stick to those that offer recognition and/or incentives for doing so through bug bounty programs, and then make sure you respect the boundaries of those programs.

Besides, almost anything you want to learn by doing can be replicated locally. Hoping to master common vulnerability and exploitation techniques? There are innumerable free resources available; purpose-built exploitation toolkits like Metasploit, WebGoat, and custom Linux distributions like Kali Linux that are well supported by tutorials and videos online. Then there are a number of free reconnaissance and vulnerability discovery tools like Nmap, Nessus, OpenVAS and Nikto. This is by no means a complete list.

Set up your own hacking labs. You can do this with a spare computer or server, or with older hardware that is plentiful and cheap on places like eBay or Craigslist. Free virtualization tools like VirtualBox can make it simple to get friendly with different operating systems without the need of additional hardware.

Or look into paying someone else to set up a virtual server that you can poke at. Amazon’s EC2 services are a good low-cost option here. If it’s web application testing you wish to learn, you can install any number of web services on computers within your own local network, such as older versions of WordPress, Joomla or shopping cart systems like Magento.

Want to learn networking? Start by getting a decent book on TCP/IP and really learning the network stack and how each layer interacts with the other.

And while you’re absorbing this information, learn to use some tools that can help put your newfound knowledge into practical application. For example, familiarize yourself with Wireshark and Tcpdump, handy tools relied upon by network administrators to troubleshoot network and security problems and to understand how network applications work (or don’t). Begin by inspecting your own network traffic, web browsing and everyday computer usage. Try to understand what applications on your computer are doing by looking at what data they are sending and receiving, how, and where.

ON PROGRAMMING

While being able to program in languages like Go, Java, Perl, Python, C or Ruby may or may not be at the top of the list of skills demanded by employers, having one or more languages in your skillset is not only going to make you a more attractive hire, it will also make it easier to grow your knowledge and venture into deeper levels of mastery.

It is also likely that depending on which specialization of security you end up pursuing, at some point you will find your ability to expand that knowledge is somewhat limited without understanding how to code.

For those intimidated by the idea of learning a programming language, start by getting familiar with basic command line tools on Linux. Just learning to write basic scripts that automate specific manual tasks can be a wonderful stepping stone. What’s more, a mastery of creating shell scripts will pay handsome dividends for the duration of your career in almost any technical role involving computers (regardless of whether you learn a specific coding language).
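As a small illustration of the kind of task-automation script described above, the following sketch counts failed SSH password attempts per source address in an OpenSSH auth log. The log path is an assumption (it varies by distribution), and the script is a learning example, not a monitoring tool:

```shell
#!/bin/sh
# Count failed SSH password attempts per source IP, most frequent first.
# The log path is an assumption: on Debian it is /var/log/auth.log;
# pass a different file as the first argument if yours lives elsewhere.
LOG="${1:-/var/log/auth.log}"

# In OpenSSH log lines ("... Failed password for USER from IP port N ssh2")
# the source IP is always the fourth field from the end, i.e. $(NF-3).
grep 'Failed password' "$LOG" \
  | awk '{print $(NF-3)}' \
  | sort | uniq -c | sort -rn | head
```

Ten lines of shell like this already exercise pipes, field extraction and sorting, which is exactly the sort of hands-on fluency the survey respondents say they look for.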

GET HELP

Make no mistake: Much like learning a musical instrument or a new language, gaining cybersecurity skills takes most people a good deal of time and effort. But don’t get discouraged if a given topic of study seems overwhelming at first; just take your time and keep going.

That’s why it helps to have support groups. Seriously. In the cybersecurity industry, the human side of networking takes the form of conferences and local meetups. I cannot stress enough how important it is for both your sanity and career to get involved with like-minded people on a semi-regular basis.

Many of these gatherings are free, including Security BSides events, DEFCON groups, and OWASP chapters. And because the tech industry continues to be disproportionately populated by men, there are also a number of cybersecurity meetups and membership groups geared toward women, such as the Women’s Society of Cyberjutsu and others listed here.

Unless you live in the middle of nowhere, chances are there’s a number of security conferences and security meetups in your general area. But even if you do reside in the boonies, the good news is many of these meetups are going virtual to avoid the ongoing pestilence that is the COVID-19 epidemic.

In summary, don’t count on a degree or certification to prepare you for the kinds of skills employers are going to understandably expect you to possess. That may not be fair or as it should be, but it’s likely on you to develop and nurture the skills that will serve your future employer(s) and employability in this field.

I’m certain that readers here have their own ideas about how newbies, students and those contemplating a career shift into cybersecurity can best focus their time and efforts. Please feel free to sound off in the comments. I may even update this post to include some of the better recommendations.

CryptogramFriday Squid Blogging: Introducing the Seattle Kraken

The Kraken is the name of Seattle's new NHL franchise.

I have always really liked collective nouns as sports team names (like the Utah Jazz or the Minnesota Wild), mostly because it's hard to describe individual players.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianDirk Eddelbuettel: anytime 0.3.8: Minor Maintenance

A new minor release of the anytime package arrived on CRAN overnight. This is the nineteenth release, and it comes just over six months after the previous release, further indicating that we appear to have reached a nice level of stability.

anytime is a very focused package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, … format to either POSIXct or Date objects – and to do so without requiring a format string. See the anytime page, or the GitHub README.md for a few examples.

This release mostly plays games with CRAN. Given the lack of specification for setups on their end, reproducing test failures remains, to put it mildly, “somewhat challenging”. So we eventually gave up—and weaponed up once more and now explicitly test for the one distribution where tests failed (when they clearly passed everywhere else). With that we now have three new logical predicates for various Linux distribution flavours, and if that dreaded one is seen in one test file the test is skipped. And with that we now score twelve out of twelve OKs. This being a game of cat and mouse, I am sure someone somewhere will soon invent a new test…

The full list of changes follows.

Changes in anytime version 0.3.8 (2020-07-23)

  • A small utility function was added to detect the Linux distribution used in order to fine-tune tests once more.

  • Travis now uses Ubuntu 'bionic' and R 4.0.*.

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page. The issue tracker of the GitHub repo can be used for questions and comments.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

LongNowThe Comet Neowise as seen from the ISS

For everyone who cannot see the Comet Neowise with their own eyes this week — or just wants to see it from a higher perch — this video by artist Seán Doran combines 550 NASA images from the International Space Station into a real-time view of the comet from 250 miles above Earth’s surface, travelling at 17,500 mph.

Planet DebianMike Gabriel: Ayatana Indicators / IDO - Menu Rendering Fixed with vanilla GTK-3+

At DebConf 17 in Montreal, I gave a talk about Ayatana Indicators [1] and the project's goal to continue the — by then already dropped out of maintenance — Ubuntu Indicators in a separate upstream project, detached from Ubuntu and its Ubuntu'isms.

Stalling

The whole Ayatana Indicators project hit a bit of a show stopper: the IDO (Indicator Display Object) rendering did not work in vanilla GTK-3 without a certain patch [2] that only Ubuntu has in their GTK-3 package. Addressing GTK developers upstream some years back (after GTK 3.22 had already gone into long-term maintenance mode) and asking for a late patch acceptance did not work out (as already assumed). Ayatana Indicators stalled at a level of 90% actually working fine, but those nice and shiny special widgets, like the calendar widget, the audio volume slider widgets, switch widgets, etc., could not be rendered appropriately in GTK-based desktop environments (e.g. via the MATE Indicator Applet) on distros other than Ubuntu.

I never really had the guts to sit down without a defined ending and find a patch / solution to this nasty problem. Ayatana Indicators stalled as a whole. I kept it alive and defended its code base against various GLib and what-not deprecations and kept it in Debian, but the software was actually partially broken / dysfunctional.

Taking the Dog for a Walk and then It Became all Light+Love

Several days back, I received a mail from Robert Tari [3]. I was outside on a hike with our dog and thought, ah well, let's check emails... I couldn't believe what I read then, 15 seconds later. I could in fact, hardly breathe...

I have known Robert from earlier email exchanges. Robert maintains various "little" upstream projects, like e.g. Caja Rename, Odio, Unity Mail, etc. that I have looked into earlier regarding Debian packaging. Robert is also a Manjaro contributor and he has been working on bringing Ayatana Indicators to Manjaro MATE. In the early days, without knowing Robert, I even forked one of his projects (indicator-notification) and turned it into an Ayatana Indicator.

Robert and I also exchanged some emails about Ayatana Indicators already a couple of weeks ago. I got the sense of him maybe being up to something already then. Oh, yeah!!!

It turned out that Robert and I share the same "love" for the Ubuntu Indicators concept [4]. From his email, it became clear that Robert had spent the last 1-2 weeks drowned in the Ayatana IDO and libayatana-indicator code and worked himself through the bowels of it in order to understand the code concept of Indicators to its very depth.

When emerging back from his journey, he presented me (or rather: the world) a patch [5] against libayatana-indicator that makes it possible to render IDO objects even if a vanilla GTK-3 is installed on the system. This patch is a game changer for Indicator lovers.

When Robert sent me his mail pointing me to this patch, I think, over the past five years, I have never felt more excited (except from the exact moment of getting married to my wife two-to-three years ago) than during that moment when my brain tried to process his email. "Like a kid on Christmas Eve...", Robert wrote in one of his later mails to me. Indeed, like a "kid on Christmas Eve", Robert.

Try It Out

As a proof of all this to the Debian people, I have just done the first release of ayatana-indicator-datetime and uploaded it to Debian's NEW queue. Robert is doing the same for Manjaro. The Ayatana Indicator Sound will follow after my vacation.

For fancy widget rendering in Ayatana Indicator's system indicators, make sure you have libayatana-indicator 0.7.0 or newer installed on your system.

Credits

One of the biggest thanks ever I send herewith to Robert Tari! Robert is now co-maintainer of Ayatana Indicators. Welcome! Now, there is finally a team of active contributors. This is so delightful!!!

References

P.S.

Expect more Ayatana Indicators to appear in your favourite distro soon...

CryptogramUpdate on NIST's Post-Quantum Cryptography Program

NIST has posted an update on their post-quantum cryptography program:

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. NIST has now begun the third round of public review. This "selection round" will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.

[...]

For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.

"We're calling these seven the finalists," Moody said. "For the most part, they're general-purpose algorithms that we think could find wide application and be ready to go after the third round."

The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard. Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.

"The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures," he said. "But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we'll find a way to look at newer approaches too."

Details are here. This is all excellent work, and exemplifies NIST at its best. The quantum-resistant algorithms will be standardized far in advance of any practical quantum computer, which is how we all want this sort of thing to go.

Planet DebianRaphaël Hertzog: The Debian Handbook has been updated for Debian 10

Better late than never as we say… thanks to the work of Daniel Leidert and Jorge Maldonado Ventura, we managed to complete the update of my book for Debian 10 Buster.

You can get the electronic version on debian-handbook.info or the paperback version on lulu.com. Or you can just read it online.

Translators are busy updating their translations, with German and Norwegian Bokmål leading the way…


Kevin RuddCNN: Cold War 1.5

INTERVIEW VIDEO
TV INTERVIEW
CONNECT THE WORLD, CNN
24 JULY 2020

Topics: US-China relations, Australia’s coronavirus second wave

BECKY ANDERSON: Kevin Rudd is the president of the Asia Society Policy Institute and he’s joining us now from the Sunshine Coast in Australia. It’s great to have you. This type of rhetoric you say is not new. But it does feel like we are approaching a precipitous point.

KEVIN RUDD: Well Becky, I think there’s been a lot of debate in recent months as to whether we’re on the edge of a new Cold War between China and the United States. Rather than being Cold War 2.0, I basically see it as Cold War 1.5. That is, it’s sliding in that direction, and sliding rapidly in that direction. But we’re by no means there yet. And one of the reasons we’re not there yet is because of the continued depth and breadth of the economic relationship between China and the United States, which was never the case, in terms of the historical relationship, between the United States and the Soviet Union during the first Cold War. That may change, but that I think is where we are right now.

ANDERSON: We haven’t seen an awful lot of retaliation nor very much of a narrative really from Beijing in response to some of this US anti-China narrative. What do you expect next from Beijing?

RUDD: Well, in terms of the consulate general, I think as night follows day, you’re likely to see either a radical reduction in overall American diplomatic staff numbers in China and consular staff numbers, or the direct reciprocal action, which would close for example, the US Consulate General in perhaps Chengdu or Wuhan or in Shenyang, somewhere like that. But this as you said before in your introduction, Becky, forms just one part of a much broader deterioration relationship. I’ve been observing the US-China relationship for the better part of 35 years. Really, since Nixon and Kissinger first went to Beijing in 1971/1972. This is the low point, the lowest point of the US-China relationship in now half a century. And it’s only heading in one direction. Is there an exit ramp? Open question. But the dynamics both in Beijing and in Washington are pulling this relationship right apart, and that leaves third countries in an increasingly difficult position.

ANDERSON: Yes, and I wanted to talk to you about that because Australia is continually torn between the sort of economic relationship with China that it has, and its strategic partnership with the US. We have seen the US to all intents and purposes, leaning on the UK over Huawei. How should other countries engage with China going forward?

RUDD: Well, one thing I think is to understand that Xi Jinping’s China is quite different from the China of Hu Jintao, Jiang Zemin or even Deng Xiaoping. And since Xi Jinping took over in 2012/2013, it’s a much more assertive China, right across the board. And even in this COVID reality of 2020, we see not just the Hong Kong national security legislation, we see new actions by China in the South China Sea, against Taiwan, against Japan, in the East China Sea, on the Sino-Indian border, and the frictions with Canada, Australia, the United Kingdom – you’ve just mentioned – and elsewhere as well. So, this is a new, assertive China – quite different from the one we’ve seen in the past. So, your question is entirely valid – how do, as it were, the democracies of Asia and the democracies of Europe and elsewhere respond to this new phenomenon on the global stage? I think it’s along these lines. Number one, be confident in the position which democracies have, that we believe in universal values, and human rights and democracy. And we’re not about to change. Number two, many of us, whether we’re in Asia or Europe, or longstanding allies, the United States, that’s not about change. But number three, to make it plain to our Chinese friends that on a reciprocal basis, we wish to have a mutually productive trade, investment, and capital markets relationship. And four, the big challenges of global governance – whether it’s pandemics, or climate change, or stability of global financial markets, and the current crisis we have around the world – where it is incumbent on all of us to work together. I think those four principles form a basis for us dealing with Xi Jinping’s China.

ANDERSON: Kevin, do you see this as a Cold War?

RUDD: As I said before, we’re trending that way. As I said, the big difference between the Soviet Union and the United States is that China and the United States are deeply economically enmeshed and have become that way over the last 20 years or so. And that never was the case in the old Cold War. Secondly, in the old Cold War, we basically had a strategic relationship of mutually assured destruction, which came to the flashpoint of the Cuban Missile Crisis in the early 1960s. That’s not the case either. But I’ve got to say in all honesty, it’s trending in a fundamentally negative direction, and when we start to see actions like shutting down each other’s consulate generals, that does remind me of where we got to in the last Cold War as well. There should be an exit ramp, but it’s going to require a new strategic framework for the US-China relationship, based on what I describe as managed strategic competition between these two powers, where each side’s red lines are well recognized, understood and observed – and competition occurs, as it were, in all other domains. At present, we don’t seem to have parameters or red lines at all.

ANDERSON: And we might have had this discussion four or five months ago. The new layer of course, is the coronavirus pandemic and the way that the US has responded which you say has provided an opportunity for the Chinese to steal a march on the US with regard to its position and its power around the world. Is Beijing, do you think – if you believe that there is a power vacuum at present after this coronavirus response – is Beijing taking advantage of that vacuum?

RUDD: Well, when the coronavirus broke out, China was, by definition, in a defensive position, because the virus came from Wuhan, and therefore, as the virus then spread across the world, China found itself in a deeply problematic position – not just the damage to its economy at home – but frankly its reputation abroad as well. However, President Trump’s America has demonstrated to the world that a) his administration can’t handle the virus within the United States itself, and b) there has been a phenomenal lack of American global leadership in dealing with the public health and global economic dimensions of – let’s call it the COVID-19 crisis – across the world. So, the argument that I’m attracted to is that both these great powers have been fundamentally damaged by the coronavirus crisis that has afflicted the world. So the challenge for the future is whether in fact we a) see a change in administration in Washington with Biden, and secondly, whether a Democratic administration will choose to reassert American global leadership through the institutions of global governance, where frankly, the current administration has left so many vacuums across the UN system and beyond it. And that remains the open question – which I think the international community is focusing on – as we move towards that event in November, when the good people of the United States cast their ballot.

ANDERSON: Yeah, no, fascinating. I’ll just stick to the coronavirus for a final question for you, and thank you for this sort of wide-ranging discussion. Australia, of course, applauded for its ability to act fast and flatten its coronavirus curve back in April. That has all been derailed. We’ve seen a second wave. It’s worse than the first. Earlier this week, the country reported its worst day since the pandemic began despite new tough restrictions. What do you believe it will take to flatten the curve again? And are you concerned that the situation in Australia is slipping out of control?

RUDD: What the situation in the state of Victoria and the city of Melbourne in particular demonstrates is what we see in so many countries around the world, which is the ease with which a second wave effect can be made manifest. It’s not just of course in Australia. We see evidence of this in Hong Kong. We see it in other countries, where in fact, the initial management of the crisis was pretty effective. What the lesson of Melbourne, and the lesson of Victoria is for all of us, is that when it comes to maintaining the disciplines of social distancing, of proper quarantine arrangements, as well as contact tracing and the rest, that there is no, as it were, release of our discipline applied to these challenges. And in the case of Victoria, it was in Melbourne – it was simply a poor application of quarantine arrangements in a single hotel, for Australians returning from elsewhere in the world, that led to this community-level transmission. And that can happen in the northern part of the United Kingdom. It can happen in regional France; it can happen anywhere in Germany. What’s the message? Vigilance across the board, until we can eliminate this thing. We’ve still got a lot to learn from Jacinda Ardern’s success in New Zealand in virtually eliminating this virus altogether.

ANDERSON: With that, we’re going to leave it there. Kevin Rudd, former Prime Minister of Australia, it’s always a pleasure. Thank you very much indeed for joining us.

RUDD: Good to be with you.

ANDERSON: Extremely important subject, US-China relations at present.

The post CNN: Cold War 1.5 appeared first on Kevin Rudd.

Planet DebianEvgeni Golov: Building documentation for Ansible Collections using antsibull

In my recent post about building and publishing documentation for Ansible Collections, I've mentioned that the Ansible Community is currently in the process of making their build tools available as a separate project called antsibull instead of keeping them in the hacking directory of ansible.git.

I've also said that I couldn't get the documentation to build with antsibull-docs as it wouldn't support collections yet. Thankfully, Felix Fontein, one of the maintainers of antsibull, pointed out that I was wrong and later versions of antsibull actually have partial collections support. So I went ahead and tried it again.

And what should I say? Two bug reports by me and four patches by Felix Fontein later, I can use antsibull-docs to generate the Foreman Ansible Modules documentation!

Let's look in detail at what's needed instead of the ugly hack.

We obviously don't need to clone ansible.git anymore and install its requirements manually. Instead we can just install antsibull (0.17.0 contains all the above patches). We also need Ansible (or ansible-base) 2.10 or newer, which currently only exists as a pre-release. 2.10 is the first version that has an ansible-doc that can list contents of a collection, which antsibull-docs requires to work properly.

The current implementation of collections documentation in antsibull-docs requires the collection to be installed, as in "Ansible can find it". We had the same requirement before, to find the documentation fragments, and can just re-use the installation we do for various other build tasks in build/collection, pointing at it using the ANSIBLE_COLLECTIONS_PATHS environment variable or the collections_paths setting in ansible.cfg1. After that, it's only a matter of passing --use-current to make it pick up installed collections instead of trying to fetch and parse them itself.
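Concretely, that can look something like this (a sketch; build/collection is the path used by the other build tasks mentioned above):

```shell
# Point Ansible at the collection installed into build/collection
# so that antsibull-docs --use-current can find it.
export ANSIBLE_COLLECTIONS_PATHS="$PWD/build/collection"

# Equivalent ansible.cfg setting:
#   [defaults]
#   collections_paths = ./build/collection
```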

Given that the main goal of antsibull-docs collection is to build documentation for multiple collections at once, it defaults to placing the generated files into <dest-dir>/collections/<namespace>/<collection>. However, we only build documentation for one collection and thus pass --squash-hierarchy to avoid this longish path and make it generate documentation directly in <dest-dir>. Thanks to Felix for implementing this feature for us!

And that's it! We can generate our documentation with a single line now!

antsibull-docs collection --use-current --squash-hierarchy --dest-dir ./build/plugin_docs theforeman.foreman

The PR to switch to antsibull is open for review and I hope to get it merged soon!

Oh and you know what's cool? The documentation is now also available as a preview on ansible.com!


  1. Yes, the paths version of that setting is deprecated in 2.10, but as we support older Ansible versions, we still use it. 

Planet DebianMartin Michlmayr: beancount2ledger 1.1 released

Martin Blais recently announced that he'd like to re-organize the beancount code and split out some functionality into separate projects, including the beancount to ledger/hledger conversion code previously provided by bean-report.

I agreed to take on the maintenance of this code and I've now released beancount2ledger, a beancount to ledger/hledger converter.

You can install beancount2ledger with pip:

pip3 install beancount2ledger

Please report issues to the GitHub tracker.

There are a number of outstanding issues I'll fix soon, but please report any other issues you encounter.

Note that I'm not very familiar with hledger. I intend to sync up with hledger author Simon Michael soon, but please file an issue if you notice any problems with the hledger conversion.

Version 1.1 contains a number of fixes compared to the latest code in bean-report:

1.1 (2020-07-24)

  • Preserve metadata information (issue #3)
  • Preserve cost information (lot dates and lot labels/notes) (issue #5)
  • Avoid adding two prices in hledger (issue #2)
  • Avoid trailing whitespace in account open declarations (issue #6)
  • Fix indentation issue in postings (issue #8)
  • Fix indentation issue in price entries
  • Drop time information from price (P) entries
  • Add documentation
  • Relicense under GPL-2.0-or-later (issue #1)

1.0 (2020-07-22)

  • Split ledger and hledger conversion from bean-report into a standalone tool
  • Add man page for beancount2ledger(1)

Worse Than FailureError'd: Free Coff...Wait!

"Hey! I like free coffee! Let me just go ahead and...um...hold on a second..." writes Adam R.

 

"I know I have a lot of online meetings these days but I don't remember signing up for this one," Ged M. wrote.

 

Peter G. writes, "The $60 off this $1M nylon bag?! What a deal! I should buy three of them!"

 

"So, because it's free, it's null, so I guess that's how Starbucks' app logic works?" James wrote.

 

Graham K. wrote, "How very 'zen' of National Savings to give me this particular error when I went to change my address."

 

"I'm not sure I trust "scenem3.com" with their marketing services, if they send out unsolicited template messages. (Muster is German for template, Max Muster is our equivalent of John Doe.)" Lukas G. wrote.

 


Planet DebianReproducible Builds (diffoscope): diffoscope 153 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 153. This version includes the following changes:

[ Chris Lamb ]

* Drop some legacy argument styles; --exclude-directory-metadata and
  --no-exclude-directory-metadata have been replaced with
  --exclude-directory-metadata={yes,no}.

* Code improvements:

  - Make it easier to navigate the main.py entry point.
  - Use a relative import for get_temporary_directory in diffoscope.diff.
  - Rename bail_if_non_existing to exit_if_paths_do_not_exist.
  - Rewrite exit_if_paths_do_not_exist to not check files multiple times.

* Documentation improvements:

  - CONTRIBUTING.md:

    - Add a quick note about adding/suggesting new options.
    - Update and expand the release process documentation.
    - Add a reminder to regenerate debian/tests/control.

  - README.rst:

    - Correct URL to build job on Jenkins.
    - Clarify and correct contributing info to point to salsa.debian.org.

You can find out more by visiting the project homepage.

,

Planet DebianSean Whitton: keyboardingupdates

Marks and mark rings in GNU Emacs

I recently attempted to answer the question of whether experienced Emacs users should consider partially or fully disabling Transient Mark mode, which is (and should be) the default in modern GNU Emacs.

That blog post was meant to be as information-dense as I could make it, but now I’d like to describe the experience I have been having after switching to my custom pseudo-Transient Mark mode, which is labelled “mitigation #2” in my older post.

In summary: I feel like I’ve uncovered a whole editing paradigm lying just beneath the surface of the editor I’ve already been using for years. That is cool and enjoyable in itself, but I think it’s also helped me understand other design decisions about the basics of the Emacs UI better than before – in particular, the ideas behind how Emacs chooses where to display buffers, which were very frustrating to me in the past. I am now regularly using relatively obscure commands like C-x 4 C-o. I see it! It all makes sense now!

I would encourage everyone who has never used Emacs without Transient Mark mode to try turning it off for a while, either fully or partially, just to see what you can learn. It’s fascinating how it can come to seem more convenient and natural to pop the mark just to go back to the end of the current line after fixing up something earlier in the line, even though doing so requires pressing two modified keys instead of just C-e.

Eshell

I was amused to learn some years ago that someone was trying to make Emacs work as an X11 window manager. I was amazed and impressed to learn, more recently, that the project is still going and a fair number of people are using it. Kudos! I suspect that the basic motivation for such projects is that Emacs is a virtual Lisp machine, and it has a certain way of managing visible windows, and people would like to be able to bring both of those to their X11 window management.

However, I am beginning to suspect that the intrinsic properties of Emacs buffers are tightly connected to the ways in which Emacs manages visible windows, and the intrinsic properties of Emacs buffers are at least as fundamental as its status as a virtual Lisp machine. Thus I am not convinced by the idea of trying to use Emacs’ ways of handling visible windows to handle windows which do not contain Emacs buffers. (but it’s certainly nice to learn it’s working out for others)

The more general point is this. Emacs buffers are as fundamental to Emacs as anything else is, so it seems unlikely to be particularly fruitful to move something typically done outside of Emacs into Emacs, unless that activity fits naturally into an Emacs buffer or buffers. Being suited to run on a virtual Lisp machine is not enough.

What could be more suited to an Emacs buffer, however, than a typical Unix command shell session? By this I mean things like running commands which produce text output, and piping this output between commands and into and out of files. Typically the commands one enters are sort of like tiny programs in themselves, even if there are no pipes involved, because you have to spend time determining just what options to pass to achieve what you want. It is great to have all your input and output available as ordinary buffer text, navigable just like all your other Emacs buffers.

Full screen text user interfaces, like top(1), are not the sort of thing I have in mind here. These are suited to terminal emulators, and an Emacs buffer makes a poor terminal emulator – what you end up with is a sort of terminal emulator emulator. Emacs buffers and terminal emulators are just different things.

These sorts of thoughts lead one to Eshell, the Emacs Shell. Quoting from its documentation:

The shell’s role is to make [system] functionality accessible to the user in an unformed state. Very roughly, it associates kernel functionality with textual commands, allowing the user to interact with the operating system via linguistic constructs. Process invocation is perhaps the most significant form this takes, using the kernel’s ‘fork’ and ‘exec’ functions.

Emacs is … a user application, but it does make the functionality of the kernel accessible through an interpreted language – namely, Lisp. For that reason, there is little preventing Emacs from serving the same role as a modern shell. It too can manipulate the kernel in an unpredetermined way to cause system changes. All it’s missing is the shell-ish linguistic model.

Eshell has been working very well for me for the past month or so, for, at least, Debian packaging work, which is very command shell-oriented (think tools like dch(1)).

The other respects in which Eshell is tightly integrated with the rest of Emacs are icing on the cake. In particular, Eshell can transparently operate on remote hosts, using TRAMP. So when I need to execute commands on Debian’s ftp-master server to process package removal requests, I just cd /ssh:fasolo: in Eshell. Emacs takes care of disconnecting and connecting to the server when needed – there is no need to maintain a fragile SSH connection and a shell process (or anything else) running on the remote end.

Or I can cd /ssh:athena\|sudo:root@athena: to run commands as root on the webserver hosting this blog, and, again, the text of the session survives on my laptop, and may be continued at my leisure, no matter whether athena reboots, or I shut my laptop and open it up again the next morning. And of course you can easily edit files on the remote host.

Planet DebianSean Whitton: Kinesis Advantage 2 for heavy Emacs users

A little under two months ago I invested in an expensive ergonomic keyboard, a Kinesis Advantage 2, and set about figuring out how to use it most effectively with Emacs. The default layout for the keyboard is great for strong typists who control their computer mostly with their mouse, but less good for Emacs users, who are strong typists that control their computer mostly with their keyboard.

It took me several tries to figure out where to put the ctrl, alt, backspace, delete, return and spacebar keys, and aside from one forum post I ran into, I haven’t found anyone online who came up with anything much like what I’ve come up with, so I thought I should probably write up a blog post.

The mappings

  • The pairs of arrow keys under the first two fingers of each hand become ctrl and alt/meta keys. This way there is a ctrl and alt/meta key for each hand, to reduce the need for one-handed chording.

    I bought the keyboard expecting to have all modifier keys on my thumbs. However, (i) only the two large thumb keys can be pressed without lifting your hand away from the home row, or stretching in a way that’s not healthy; and (ii) only the outermost large thumb key can be comfortably held down as a modifier.

    It takes a little work to get used to using the third and fifth fingers of one hand to hold down both alt/meta and shift, for typing core Emacs commands like M-^ and M-@, but it does become natural to do so.

  • The arrow keys are moved to the four ctrl/alt/super keys which run along the top of the thumb key areas.

  • The outermost large thumb key of each hand becomes a spacebar. This means it is easy to type C-u C-SPC with the right hand while the left hand holds down control, and sequences like C-x C-SPC and C-a C-SPC C-e with the left hand with the right hand holding down control.

    It took me a while to realise that it is not wasteful to have two spacebars.

  • The inner large thumb keys become backspace and return.

  • The international key becomes delete.

    Rarely needed for Emacs users, as we have C-d, so initially I just had no delete key, but soon came to regret this when trying to edit text in web forms.

  • Caps Lock becomes Super, but remains caps lock on the keypad layer.

    See my rebindings for ordinary keyboards for some discussion of having just a single Super key.

Sequences of two modified keys on different halves of the keyboard

It is desirable to input sequences like C-x C-o without switching which hand is holding the control key. This requires one-handed chording, but this is treacherous when the modifier keys are not under the thumbs, because you might need to press the modified key with the same finger that’s holding the modifier!

Fortunately, most or all sequences of two keys modified by ctrl or alt/meta, where each of the two modifier keys is typed by a different hand, begin with C-c, C-x or M-g, and the left hand can handle each of these on its own. This leaves the right hand completely free to hit the second modified key while the left hand continues to hold down the modifier.

My rebindings for ordinary keyboards

I have some rebindings to make Emacs usage more ergonomic on an ordinary keyboard. So far, my Kinesis Advantage setup is close enough to that setup that I’m not having difficulty switching back and forth from my laptop keyboard.

The main difference is for sequences of two modified keys on different halves of the keyboard – which of the two modified keys is easiest to type as a one-handed chord is different on the Kinesis Advantage than on my laptop keyboard. At this point, I’m executing these sequences without any special thought, and they’re rare enough that I don’t think I need to try to determine what would be the most ergonomic way to handle them.

Krebs on SecurityNY Charges First American Financial for Massive Data Leak

In May 2019, KrebsOnSecurity broke the news that the website of mortgage title insurance giant First American Financial Corp. had exposed approximately 885 million records related to mortgage deals going back to 2003. On Wednesday, regulators in New York announced that First American was the target of their first ever cybersecurity enforcement action in connection with the incident, charges that could bring steep financial penalties.

First American Financial Corp.

Santa Ana, Calif.-based First American [NYSE:FAF] is a leading provider of title insurance and settlement services to the real estate and mortgage industries. It employs some 18,000 people and brought in $6.2 billion in 2019.

As first reported here last year, First American’s website exposed 16 years’ worth of digitized mortgage title insurance records — including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts, and driver’s license images.

The documents were available without authentication to anyone with a Web browser.

According to a filing (PDF) by the New York State Department of Financial Services (DFS), the weakness that exposed the documents was first introduced during an application software update in May 2014 and went undetected for years.

Worse still, the DFS found, the vulnerability was discovered in a penetration test First American conducted on its own in December 2018.

“Remarkably, Respondent instead allowed unfettered access to the personal and financial data of millions of its customers for six more months until the breach and its serious ramifications were widely publicized by a nationally recognized cybersecurity industry journalist,” the DFS explained in a statement on the charges.

A redacted screenshot of one of many millions of sensitive records exposed by First American’s Web site.

Reuters reports that the penalties could be significant for First American: The DFS considers each instance of exposed personal information a separate violation, and the company faces penalties of up to $1,000 per violation.

In a written statement, First American said it strongly disagrees with the DFS’s findings, and that its own investigation determined only a “very limited number” of consumers — and none from New York — had personal data accessed without permission.

In August 2019, the company said a third-party investigation into the exposure identified just 32 consumers whose non-public personal information likely was accessed without authorization.

When KrebsOnSecurity asked last year how long it maintained access logs or how far back in time that review went, First American declined to be more specific, saying only that its logs covered a period that was typical for a company of its size and nature.

But in Wednesday’s filing, the DFS said First American was unable to determine whether records were accessed prior to June 2018.

“Respondent’s forensic investigation relied on a review of web logs retained from June 2018 onward,” the DFS found. “Respondent’s own analysis demonstrated that during this 11-month period, more than 350,000 documents were accessed without authorization by automated ‘bots’ or ‘scraper’ programs designed to collect information on the Internet.”

The records exposed by First American would have been a virtual gold mine for phishers and scammers involved in so-called Business Email Compromise (BEC) scams, which often impersonate real estate agents, closing agencies, title and escrow firms in a bid to trick property buyers into wiring funds to fraudsters. According to the FBI, BEC scams are the most costly form of cybercrime today.

First American’s stock price fell more than 6 percent the day after news of their data leak was published here. In the days that followed, the DFS and U.S. Securities and Exchange Commission each announced they were investigating the company.

First American released its first quarter 2020 earnings today. A hearing on the charges alleged by the DFS is slated for Oct. 26.

Kevin RuddBloomberg: US-China Relations Worsen

E&OE TRANSCRIPT
BLOOMBERG
23 JULY 2020

TOM MACKENZIE: Let’s start with your reaction to this latest sequence of events.

KEVIN RUDD: Well, structurally, the US-China relationship is in the worst state it’s been in about 50 years. It’s 50 years next year since Henry Kissinger undertook his secret diplomacy in Beijing. So, this relationship is in trouble strategically, militarily, diplomatically, politically, economically – trade, investment, technology – and of course, in the wonderful world of espionage as well. And so, whereas this is a surprising move by the United States against a Chinese consulate general, it certainly fits within the fabric of a structural deterioration in the relationship, underway now for quite a number of years.

MACKENZIE: So far, China, Beijing has taken what many would argue would be a proportionate response to actions by the US, at least in the last few months. Is there an argument now that this kind of action, calling for the closure of this consulate in Houston, will strengthen the hand of the hardliners here in Beijing, and will force them to take a stronger response? What do you think ultimately will be the material reaction then from Beijing?

RUDD: Well, on this particular consulate general closure, I think, as night follows day, you’ll see a Chinese decision to close an American consulate general in China. There are a number already within China. I think you would look to see what would happen with the future of the US Consulate General in say Shenyang up in the northeast, or in Chengdu in the west, because this tit-for-tat is alive very much in the way in which China views the necessity politically, to respond in like form to what the Americans have done. But overall, the Chinese leadership are a very hard-bitten, deeply experienced Marxist-Leninist leadership, who look at the broad view of the US-China relationship. They see it as structurally deteriorating. They see it in part as an inevitable reaction to China’s rise. And if you look carefully at some of the internal statements by Xi Jinping in recent months, the Chinese system is gearing up for what it describes internally as 20 to 30 years of growing friction in the US-China relationship, and that will make life difficult for all countries who have deep relationships with both countries.

MACKENZIE: Mike Pompeo, the US Secretary of State was in London talking to his counterparts there, and he called for a coalition with allies. Presumably, that will include at some point Australia, though we have yet to hear from their leaders about the sense of a coalition against China. Do you think this is significant? Do you think this is a shift in US policy? How much traction do you think Mike Pompeo and the Trump administration will get in forming a coalition to push back against China?

RUDD: Well, the truth is, most friends and allies of the United States are waiting to see what happens in the US presidential election. There is a general expectation that President Trump will not be re-elected. Therefore, the attitude of friends and allies of the United States is: well, what will be the policy cost here of an incoming Biden administration, in relation to China, and in critical areas like the economy, trade, investment, technology and the rest? Bear in mind, however, that what has happened under Xi Jinping’s leadership, since he became leader of the Chinese Communist Party at the end of 2012, is that China has progressively become more assertive in the promotion of its international interests, whether it’s in the South China Sea, the East China Sea, whether it’s in Europe, whether it’s the United States, whether it’s countries like Australia. And therefore, what is happening is that countries who are now experiencing this for the first time – the real impact of an assertive Chinese foreign policy – are themselves beginning to push back. And so whether it’s with American leadership or not, the bottom line is that what I now observe is that countries in Europe, democracies in Europe, democracies in Asia, are increasingly in discussion with one another about how do you deal with the emerging China challenge to the international rules-based system. That I think is happening as a matter of course, whether or not Mike Pompeo seeks to lead it or not.

DAVID INGLES: Mr Rudd I’d like to pick it up there. David here, by the way, in Hong Kong. In terms of what do you think is the proper way to engage an emerging China? You’ve dealt with them at many levels. You understand how sensitive their past is to their leadership, and how that shapes where they think their country should be, their ambitions. How should the world – let alone the US, let’s set that aside – how should the rest of the world engage an emerging China?

RUDD: Well you’re right. In one capacity or another, I’ve been dealing with China for the last 35 years, since I first went to work there as an Australian embassy official way back in the 1980s. It’s almost the Mesolithic period now. And I’ve seen the evolution of China’s international posture over that period of time. And certainly, there is a clear dividing line with the emergence of Xi Jinping’s leadership, where China has ceased to hide its strength, bide its time, never to take the lead – that was Deng Xiaoping’s axiom for the past. And instead, we see a China under this new leadership, which is infinitely more assertive. And so my advice to governments when they ask me about this, is that governments need to have a coordinated China strategy themselves – just as China has a strategy for dealing with the rest of the world including the major countries and economies within it. But the principles of those strategies should be pretty basic. Number one, those of us who are democracies, we simply make it plain to the Chinese leadership that that’s our nature, our identity, and we’re not about to change as far as our beliefs in universal human rights and values are concerned. Number two, most of us are allies with the United States for historical reasons, and current reasons as well. And that’s not going to change either. Number three, we would like to however, prosecute a mutually beneficial trade and investment and capital markets relationship with you in China, that works for both of us on the basis of reciprocity in each other’s markets. And four, there are so many global challenges out there at the moment – from the pandemic, through to global climate change action, and onto financial markets stability – which require us and China to work together in the major forums of the world like the G20. I think those principles should govern everyone’s approach to how you deal with this emerging and different China.

The post Bloomberg: US-China Relations Worsen appeared first on Kevin Rudd.

Planet DebianDima Kogan: Finding long runs of "notable" data in a log

Here's yet another instance where the data processing I needed done could be accomplished entirely in the shell, with vnlog tools.

I have some time-series data in a text table. Via some join and filter operations, I have boiled down this table to a sequence of time indices where something interesting happened. For instance let's say it looks like this:

t.vnl

# time
1976
1977
1978
1979
1980
1986
1987
1988
1989
2011
2012
2013
2014
2015
4679
4680
4681
4682
4683
4684
4685
4686
4687
7281
7282
7283
7291
7292
7293

I'd like to find the longest contiguous chunk of time where the interesting thing kept happening. How? Like this!

$ < t.vnl vnl-filter -p 'time,d=diff(time)' |
          vnl-uniq -c -f -1 |
          vnl-filter 'd==1' -p 'count=count+1,time=time-1' |
          vnl-sort -nrk count |
          vnl-align
# count time
9       4679
5       2011
5       1976
4       1986
3       7291
3       7281

Bam! So the longest run was 9-frames-long, starting at time = 4679.
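For comparison, the same longest-run computation can be sketched in plain awk (a sketch only: it regenerates the sample data with seq and skips vnlog's '#' header by hand, losing the labelled-column semantics the vnlog pipeline keeps):

```shell
# Recreate the sample t.vnl from above, then scan for the longest run
# of consecutive time values.
{ echo '# time'
  seq 1976 1980; seq 1986 1989; seq 2011 2015
  seq 4679 4687; seq 7281 7283; seq 7291 7293; } > t.vnl

awk '!/^#/ { if ($1 == prev + 1) len++; else { start = $1; len = 1 }
             if (len > best) { best = len; beststart = start }
             prev = $1 }
     END   { print best, beststart }' t.vnl
# → 9 4679
```

The underlying idea is the same as the diff/uniq pipeline: a run is a maximal stretch where each value is the previous one plus one.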

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 202.00 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

June was the last month of Jessie LTS which ended on 2020-06-20. If you still need to run Jessie somewhere, please read the post about keeping Debian 8 Jessie alive for longer than 5 years.
So, as (Jessie) LTS is dead, long live the new LTS, Stretch LTS! Stretch has received its last point release, so regular LTS operations can now continue.
Accompanying this, for the first time, we have prepared a small survey about our users and contributors: who they are and why they are using LTS. Filling out the survey should take less than 10 minutes. We would really appreciate it if you could participate in the survey online! We will close the survey on July 27th 2020, so please don’t hesitate and participate now! After that, there will be a followup with the results.

The security tracker for Stretch LTS currently lists 29 packages with a known CVE and the dla-needed.txt file has 44 packages needing an update in Stretch LTS.

Thanks to our sponsors

New sponsors are in bold.

We welcome CoreFiling this month!


Planet DebianEnrico Zini: Build Qt5 cross-builder with raspbian sysroot: compiling with the sysroot (continued)

Lite extra ball, from https://www.flickr.com/photos/st3f4n/143623902

This is part of a series of posts on compiling a custom version of Qt5 in order to develop for both amd64 and a Raspberry Pi.

The previous rounds of attempts ended in one issue too many to investigate in the allocated hourly budget.

Andreas Gruber wrote:

Long story short, a fast solution for the issue with EGLSetBlobFuncANDROID is to remove libraspberrypi-dev from your sysroot and do a full rebuild. There will be some changes to the configure results, so please review them - if they are relevant for you - before proceeding with your work.

That got me unstuck! dpkg --purge libraspberrypi-dev in the sysroot, and we're back in the game.

While Qt5's build has proven extremely fragile, I was surprised that some customization from Raspberry Pi hadn't yet broken something. In the end, they didn't disappoint.

More i386 issues

The run now stops with a new 32bit issue related to v8 snapshots:

qt-everywhere-src-5.15.0/qtwebengine/src/core/release$ /usr/bin/g++ -pie -Wl,--fatal-warnings -Wl,--build-id=sha1 -fPIC -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -Wl,-z,defs -Wl,--as-needed -m32 -pie -Wl,--disable-new-dtags -Wl,-O2 -Wl,--gc-sections -o "v8_snapshot/mksnapshot" -Wl,--start-group @"v8_snapshot/mksnapshot.rsp"  -Wl,--end-group  -ldl -lpthread -lrt -lz
/usr/bin/ld: skipping incompatible //usr/lib/x86_64-linux-gnu/libz.so when searching for -lz
/usr/bin/ld: skipping incompatible //usr/lib/x86_64-linux-gnu/libz.a when searching for -lz
/usr/bin/ld: cannot find -lz
collect2: error: ld returned 1 exit status

Attempted solution: apt install zlib1g-dev:i386.

Alternative solution (untried): configure Qt5 with -no-webengine-v8-snapshot.

It builds!

Installation paths

Now it tries to install files into debian/tmp/home/build/sysroot/opt/qt5custom-armhf/.

I realise that I now need to package the sysroot itself, both as a build-dependency of the Qt5 cross-compiler, and as a runtime dependency of the built cross-builder.

Conclusion

The current work in progress, patches, and all, is at https://github.com/Truelite/qt5custom/tree/master/debian-cross-qtwebengine

It blows my mind how ridiculously broken the Qt5 cross-compiler build is, for a use case that, judging by how many people are trying it, seems to be one of the main ones for the cross-builder.

CryptogramAdversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming with ML researchers increasingly targeting commercial ML systems such as those used in Facebook, Tesla, Microsoft, IBM, Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that Adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

Medium post on the paper. News article, which uses our graphic without attribution.

Kevin RuddCNN: America, Britain and China

E&OE TRANSCRIPT
TELEVISION INTERVIEW
QUEST MEANS BUSINESS, CNN
22 JULY 2020

Richard Quest
Kevin Rudd, very good to see you. The strategy that China is now employing to put pressure — you’ve already seen the result of what the US sanctions on China has done — so now what happens with Australia?

Kevin Rudd
Well, it’s important, Richard, to understand what’s driving I think Chinese government decision making, not just on the Hong Kong question, but more broadly what’s happening with a number of other significant relationships which China has in the world. Since COVID-19 has hit, what many of us have observed is China actually doubling down hard in a nationalist direction in terms of a whole range of its external relationships, whether it’s with Canada, Australia, United Kingdom, but also over questions like Hong Kong, the South China Sea, Taiwan, and look at what most recently has happened on the China-India border. And so, therefore, we see a much more hardening Chinese response across the board. And it’s inevitable in my judgment, this is going to generate reactions of the type that we’ve seen in governments from Canberra to London to other points in between.

Richard Quest
But is China in danger of fighting on too many fronts? It’s got its enormous trade war with the United States. It’s now, of course, got the problems over Hong Kong, which will add more potential sanctions and tariffs to China. Now it’s got its row with the UK and of course now its recent row with Australia. So at what point in your view does China have to start rowing back?

Kevin Rudd
Well, it’s an important question for the Chinese leadership now in August, Richard, because in August they retreat east of Beijing for a month of high-level meetings amongst the extended central leadership. And a central question on the agenda for this upcoming set of meetings will be a) the state of the US-China relationship, which for them is central to everything, and b) the relationship with other principal countries like the UK and c) the unstated topic will be: has China gone too far? In Chinese strategic literature, there’s an expression just like you mentioned before, Richard, that is, it’s not sensible to fight on multiple fronts simultaneously. So there’s an internal debate in China at the moment about whether, in fact, the current strategy is the right one. And therefore the impact of this decision including the British decision most recently both the impending decision on Huawei and on Hong Kong will feed into that.

Richard Quest
But Kevin, whether it’s wise or not, and bearing in mind that China has enormous problems at home, it’s not as if President Xi has, by any means, an electorate, or populace I should say, that’s entirely behind him. But he seems determined to prosecute these disagreements with other nations, whatever the cost, and I suggest to you that’s because he doesn’t have to face an electorate, like all the rest of them have to.

Kevin Rudd
But the bottom line, however, Richard, is that you then see the economic impact of China being progressively, as it were, imperilled in its principal economic relationships abroad. The big debate in Beijing, for example, with the US-China trade war in the last two years has been: has China pushed too far in order to generate the magnitude of this American reaction? Parallel logic on Huawei, parallel logic in terms of the Hong Kong national security law. So your point goes to whether Xi Jinping is domestically immune from pressure? Well, yes, China is not a liberal democracy. We all know that. It never has been, at least since 1949 and for a long time before that, as well. But there are pressures within the Communist Party at a level of sheer pragmatism, which is: is this sustainable in terms of China’s economic interests? Remember, 38% of the Chinese gross domestic product is generated through the traded sector of its economy. It has an unfolding balance of payments challenge, and is therefore exposed to any potential financial sanctions coming out of the Hong Kong national security law from Washington in particular. China, therefore, experiences the economic impact, which then feeds into its domestic political debate within the Communist Party.

Journalist
Kevin Rudd joining us.

The post CNN: America, Britain and China appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Step too Var

Astor works for a company that provides software for research surveys. Someone needed a method to return a single control object from a list of control objects, so they wrote this C# code:

 
private ResearchControl GetResearchControlFromListOfResearchControls(int theIndex, 
    List<ResearchControl> researchControls)
{
    var result = new ResearchControl();
    result = researchControls[theIndex];
    return result;
}

Astor has a theory: “I can only guess the author was planning to return some default value in some case…”

I’m sorry, Astor, but you are mistaken. Honestly, if that were the case, I wouldn’t consider this much of a WTF at all, but here we have a subtle hint about deeper levels of ignorance, and it’s all encoded in that little var.

C# is strongly typed, but declaring the type for every variable is a pain, and in many cases it’s redundant information. So C# lets you declare a variable with var, which performs type inference. A var variable still has a type; instead of writing it out, we ask the compiler to figure it out from context.

But you have to give it that context, which means you have to declare and assign to the variable in a single step.

So, imagine you’re a developer who doesn’t know C# very well. Maybe you know some JavaScript, and you’re just trying to muddle through.

“Okay, I need a variable to hold the result. I’ll type var result. Hmm. Syntax error. Why?”

The developer skims through the code, looking for similar statements, and sees a var / new construct, and thinks, “Ah, that must be what I need to do!” So var result = new ResearchControl() appears, and the syntax error goes away.

Now, that doesn’t explain all of this code. There are still more questions, like: why not just return researchControls[theIndex] directly? Or realize that, since you’re just indexing a list, why write a function for this at all? Maybe someone had some thoughts about adding exception handling, or returning a default value in cases where there wasn’t a valid entry in the list, but none of that ever happened. Instead, we just get this little artifact of someone who didn’t know better, and who wasn’t given any direction on how to do better.


,

Planet DebianJunichi Uekawa: Joys of sshfs slave mode.

Joys of sshfs slave mode. When I want to have parts of my source tree on a remote host, I use sshfs slave mode; combined with emacs tramp, things look very much integrated. The sshfs interface only has the obnoxious -o slave option, which makes it talk over stdin/stdout; these need to be connected to an sftp-server running on the local host. Using dpipe from vde2 seems to be a popular way to wire the two together. Something like:

dpipe /usr/lib/openssh/sftp-server = ssh hostname sshfs :/directory/to/be/shared ~/mnt/src -o slave

I wish I could limit the visibility from sftp-server, but maybe that's okay.
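What dpipe does is cross-connect the two commands: each one's stdout feeds the other's stdin. As a rough illustration of that idea (a toy sketch only, not the real sshfs setup: awk stands in for the remote service, and a one-line echo client stands in for the local side), the same cross-connection can be emulated with a single named pipe:

```shell
# dpipe A = B is roughly: B < fifo | A > fifo
fifo=$(mktemp -u)
mkfifo "$fifo"

# "remote service": uppercases whatever it receives and sends it back;
# "local client": sends one request, reads one reply into reply.txt
awk '{ print toupper($0); exit }' < "$fifo" \
    | { echo hello; head -n1 > reply.txt; } > "$fifo"

echo "server replied: $(cat reply.txt)"
rm -f "$fifo"
```

The client's "hello" travels through the fifo to the stand-in server, and the uppercased reply comes back through the pipe, so each process reads what the other writes.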

Krebs on SecurityTwitter Hacking for Profit and the LoLs

The New York Times last week ran an interview with several young men who claimed to have had direct contact with those involved in last week’s epic hack against Twitter. These individuals said they were only customers of the person who had access to Twitter’s internal employee tools, and were not responsible for the actual intrusion or bitcoin scams that took place that day. But new information suggests that at least two of them operated a service that resold access to Twitter employees for the purposes of modifying or seizing control of prized Twitter profiles.

As first reported here on July 16, prior to bitcoin scam messages being blasted out from such high-profile Twitter accounts as @barackobama, @joebiden, @elonmusk and @billgates, several highly desirable short-character Twitter account names changed hands, including @L, @6 and @W.

A screenshot of a Discord discussion between the key Twitter hacker “Kirk” and several people seeking to hijack high-value Twitter accounts.

Known as “original gangster” or “OG” accounts, short-character profile names confer a measure of status and wealth in certain online communities, and such accounts can often fetch thousands of dollars when resold in the underground.

The people involved in obtaining those OG accounts on July 15 said they got them from a person identified only as “Kirk,” who claimed to be a Twitter employee. According to The Times, Kirk first reached out to the group through a hacker who used the screen name “lol” on OGusers, a forum dedicated to helping users hijack and resell OG accounts from Twitter and other social media platforms. From The Times’s story:

“The hacker ‘lol’ and another one he worked with, who went by the screen name ‘ever so anxious,’ told The Times that they wanted to talk about their work with Kirk in order to prove that they had only facilitated the purchases and takeovers of lesser-known Twitter addresses early in the day. They said they had not continued to work with Kirk once he began more high-profile attacks around 3:30 p.m. Eastern time on Wednesday.

‘lol’ did not confirm his real-world identity, but said he lived on the West Coast and was in his 20s. “ever so anxious” said he was 19 and lived in the south of England with his mother.

Kirk connected with “lol” late Tuesday and then “ever so anxious” on Discord early on Wednesday, and asked if they wanted to be his middlemen, selling Twitter accounts to the online underworld where they were known. They would take a cut from each transaction.”

Twice in the past year, the OGUsers forum was hacked, and both times its database of usernames, email addresses and private messages was leaked online. A review of the private messages for “lol” on OGUsers provides a glimpse into the vibrant market for the resale of prized OG accounts.

On OGUsers, lol was known to other members as someone who had a direct connection to one or more people working at Twitter who could be used to help fellow members gain access to Twitter profiles, including those that had been suspended for one reason or another. In fact, this was how lol introduced himself to the OGUsers community when he first joined.

“I have a twitter contact who I can get users from (to an extent) and I believe I can get verification from,” lol explained.

In a direct message exchange on OGUsers from November 2019, lol is asked for help from another OGUser member whose Twitter account had been suspended for abuse.

“hello saw u talking about a twitter rep could you please ask if she would be able to help unsus [unsuspend] my main and my friends business account will pay 800-1k for each,” the OGUsers profile inquires of lol.

Lol says he can’t promise anything but will look into it. “I sent her that, not sure if I will get a reply today bc its the weekend but ill let u know,” Lol says.

In another exchange, an OGUser denizen quizzes lol about his Twitter hookup.

“Does she charge for escalations? And how do you know her/what is her department/job. How do you connect with them if I may ask?”

“They are in the Client success team,” lol replies. “No they don’t charge, and I know them through a connection.”

As for how he got access to the Twitter employee, lol declines to elaborate, saying it’s a private method. “It’s a lil method, sorry I cant say.”

In another direct message, lol asks a fellow OGUser member to edit a comment in a forum discussion which included the Twitter account “@tankska,” saying it was his IRL (in real life) Twitter account and that he didn’t want to risk it getting found out or suspended (Twitter says this account doesn’t exist, but a simple text search on Twitter shows the profile was active until late 2019).

“can u edit that comment out, @tankska is a gaming twitter of mine and i dont want it to be on ogu :D’,” lol wrote. “just dont want my irl getting sus[pended].”

Still another OGUser member would post lol’s identifying information into a forum thread, calling lol by his first name — “Josh” — in a post asking lol what he might offer in an auction for a specific OG name.

“Put me down for 100, but don’t note my name in the thread please,” lol wrote.

WHO IS LOL?

The information in lol’s OGUsers registration profile indicates he was probably being truthful with The Times about his location. The hacked forum database shows a user “tankska” registered on OGUsers back in July 2018, but only made one post asking about the price of an older Twitter account for sale.

The person who registered the tankska account on OGUsers did so with the email address jperry94526@gmail.com, and from an Internet address tied to the San Ramon Unified School District in Danville, Calif.

According to 4iq.com, a service that indexes account details like usernames and passwords exposed in Web site data breaches, the jperry94526 email address was used to register accounts at several other sites over the years, including one at the apparel store Stockx.com under the profile name Josh Perry.

Tankska was active only briefly on OGUsers, but the hacked OGUsers database shows that “lol” changed his username three times over the years. Initially, it was “freej0sh,” followed by just “j0sh.”

lol did not respond to requests for comment sent to email addresses tied to his various OGU profiles and Instagram accounts.

ALWAYS IN DISCORD

Last week’s story on the Twitter compromise noted that just before the bitcoin scam tweets went out, several OG usernames changed hands. The story traced screenshots of Twitter tools posted online back to a moniker that is well-known in the OGUsers circle: PlugWalkJoe, a 21-year-old from the United Kingdom.

Speaking with The Times, PlugWalkJoe — whose real name is Joseph O’Connor — said while he acquired a single OG Twitter account (@6) through one of the hackers in direct communication with Kirk, he was otherwise not involved in the conversation.

“I don’t care,” O’Connor told The Times. “They can come arrest me. I would laugh at them. I haven’t done anything.”

In an interview with KrebsOnSecurity, O’Connor likewise asserted his innocence, suggesting at least a half dozen other hacker handles that may have been Kirk or someone who worked with Kirk on July 15, including “Voku,” “Crim/Criminal,” “Promo,” and “Aqua.”

“That twit screenshot was the first time in a while I joke[d], and evidently I shouldn’t have,” he said. “Joking is what got me into this mess.”

O’Connor shared a number of screenshots from a Discord chat conversation on the day of the Twitter hack between Kirk and two others: “Alive,” which is another handle used by lol, and “Ever So Anxious.” Both were described by The Times as middlemen who sought to resell OG Twitter names obtained from Kirk. O’Connor is referenced in these screenshots as both “PWJ” and by his Discord handle, “Beyond Insane.”

The negotiations over highly-prized OG Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams.

Ever So Anxious told Kirk his OGU nickname was “Chaewon,” which corresponds to a user in the United Kingdom. Just prior to the Twitter compromise, Chaewon advertised a service on the forum that could change the email address tied to any Twitter account for around $250 worth of bitcoin. O’Connor said Chaewon also operates under the hacker alias “Mason.”

“Ever So Anxious” tells Kirk his OGUsers handle is “Chaewon,” and asks Kirk to modify the display names of different OG Twitter handles to read “lol” and “PWJ”.

At one point in the conversation, Kirk tells Alive and Ever So Anxious to send funds for any OG usernames they want to this bitcoin address. The payment history of that address shows that it indeed also received approximately $180,000 worth of bitcoin from the wallet address tied to the scam messages tweeted out on July 15 by the compromised celebrity accounts.

The Twitter hacker “Kirk” telling lol/Alive and Chaewon/Mason/Ever So Anxious where to send the funds for the OG Twitter accounts they wanted.

SWIMPING

My July 15 story observed there were strong indications that the people involved in the Twitter hack have connections to SIM swapping, an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

The account “@shinji,” a.k.a. “PlugWalkJoe,” tweeting a screenshot of Twitter’s internal tools interface.

SIM swapping was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As recounted by Wired.com, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

Immediately after Jack Dorsey’s Twitter handle was hijacked, the hackers tweeted out several shout-outs, including one to @PlugWalkJoe. O’Connor told KrebsOnSecurity he has never been involved in SIM swapping, although that statement was contradicted by two law enforcement sources who closely track such crimes.

However, Chaewon’s private messages on OGusers indicate that he very much was involved in SIM swapping. Use of the term “SIM swapping” was not allowed on OGusers, and the forum administrators created an automated script that would watch for anyone trying to post the term into a private message or discussion thread.

The script would replace the term with “I do not condone illegal activities.” Hence, a portmanteau was sometimes used: “Swimping.”

“Are you still swimping?” one OGUser member asks of Chaewon on Mar. 24, 2020. “If so and got targs lmk your discord.” Chaewon responds in the affirmative, and asks the other user to share his account name on Wickr, an encrypted online messaging app that automatically deletes messages after a few days.

Chaewon/Ever So Anxious/Mason did not respond to requests for comment.

O’Connor told KrebsOnSecurity that one of the individuals thought to be associated with the July 15 Twitter hack — a young man who goes by the nickname “Voku” — is still actively involved in SIM-swapping, particularly against customers of AT&T and Verizon.

Voku is one of several hacker handles used by a Canton, Mich. youth whose mom turned him in to the local police in February 2018 when she overheard him talking on the phone and pretending to be an AT&T employee. Officers responding to the report searched the residence and found multiple cell phones and SIM cards, as well as files on the kid’s computer that included “an extensive list of names and phone numbers of people from around the world.”

The following month, Michigan authorities found the same individual accessing personal consumer data via public Wi-Fi at a local library, and seized 45 SIM cards, a laptop and a Trezor wallet — a hardware device designed to store cryptocurrency account data. In April 2018, Voku’s mom again called the cops on her son — identified only as confidential source #1 (“CS1”) in the criminal complaint against him — saying he’d obtained yet another mobile phone.

Voku’s cooperation with authorities led them to bust up a conspiracy involving at least nine individuals who stole millions of dollars worth of cryptocurrency and other items of value from their targets.

CONSPIRACY

Samy Tarazi, an investigator with the Santa Clara County District Attorney’s Office, has spent hundreds of hours tracking young hackers during his tenure with REACT, a task force set up to combat SIM swapping and bring SIM swappers to justice.

According to Tarazi, multiple actors in the cybercrime underground are constantly targeting people who work in key roles at major social media and online gaming platforms, from Twitter and Instagram to Sony, Playstation and Xbox.

Tarazi said some people engaged in this activity seek to woo their targets, sometimes offering them bribes in exchange for the occasional request to unban or change the ownership of specific accounts.

All too often, however, employees at these social media and gaming platforms find themselves the object of extremely hostile and persistent personal attacks that threaten them and their families unless and until they give in to demands.

“In some cases, they’re just hitting up employees saying, ‘Hey, I’ve got a business opportunity for you, do you want to make some money?'” Tarazi explained. “In other cases, they’ve done everything from SIM swapping and swatting the victim many times to posting their personal details online or extorting the victims to give up access.”

Allison Nixon is chief research officer at Unit 221B, a cyber investigations company based in New York. Nixon says she doesn’t buy the idea that PlugWalkJoe, lol, and Ever So Anxious are somehow less culpable in the Twitter compromise, even if their claims of not being involved in the July 15 Twitter bitcoin scam are accurate.

“You have the hackers like Kirk who can get the goods, and the money people who can help them profit — the buyers and the resellers,” Nixon said. “Without the buyers and the resellers, there is no incentive to hack into all these social media and gaming companies.”

Mark Rasch, Unit 221B’s general counsel and a former U.S. federal prosecutor, said all of the players involved in the Twitter compromise of July 15 can be charged with conspiracy, a legal concept in the criminal statute which holds that any co-conspirators are liable for the acts of any other co-conspirator in furtherance of the crime, even if they don’t know who those other people are in real life or what else they may have been doing at the time.

“Conspiracy has been called the prosecutor’s friend because it makes the agreement the crime,” Rasch said. “It’s a separate crime in addition to the underlying crime, whether it be breaking in to a network, data theft or account takeover. The ‘I just bought some usernames and gave or sold them to someone else’ excuse is wrong because it’s a conspiracy and these people obviously don’t realize that.”

In a statement on its ongoing investigation into the July 15 incident, Twitter said it resulted from a small number of employees being manipulated through a social engineering scheme. Twitter said at least 130 accounts were targeted by the attackers, who succeeded in sending out unauthorized tweets from 45 of them and may have been able to view additional information about those accounts, such as direct messages.

On eight of the compromised accounts, Twitter said, the attackers managed to download the account history using the Your Twitter Data tool. Twitter added that it is working with law enforcement and is rolling out additional company-wide training to guard against social engineering tactics.

CryptogramFawkes: Digital Image Cloaking

Fawkes is a system for manipulating digital images so that they aren't recognized by facial recognition systems.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.

Research paper.

Planet DebianBits from Debian: Let's celebrate DebianDay 2020 around the world

We encourage our community to celebrate the 27th Debian anniversary around the world with organized DebianDay events. This year due to the COVID-19 pandemic we cannot organize in-person events, so we ask instead that contributors, developers, teams, groups, maintainers, and users promote The Debian Project and Debian activities online on August 16th (and/or 15th).

Communities can organize a full schedule of online activities throughout the day. These activities can include talks, workshops, active participation with contributions such as translations assistance or editing, debates, BoFs, and all of this in your local language using tools such as Jitsi for capturing audio and video from presenters for later streaming to YouTube.

If you are not aware of any local community organizing a full event or you don't want to join one, you can solo design your own activity using OBS and stream it to YouTube. You can watch an OBS tutorial here.

Don't forget to record your activity, as it would be a nice idea to upload it to Peertube later.

Please add your event/activity on the DebianDay wiki page and let us know about and advertise it on Debian micronews. To share it, you have several options:

  • Follow the steps listed here for Debian Developers.
  • Contact us using IRC in channel #debian-publicity on the OFTC network, and ask us there.
  • Send a mail to debian-publicity@lists.debian.org and ask for your item to be included in micronews. This is a publicly archived list.

PS: DebConf20 online is coming! It will be held from August 23rd to 29th, 2020. Registration is already open.

Planet DebianEnrico Zini: nc | sudo

Question: what does this command do?

# Don't do this
nc localhost 12345 | sudo tar xf -

Answer: it sends the password typed into sudo to the other endpoint of netcat.

I can reproduce this with both nc.traditional and nc.openbsd.

One might be tempted to just put sudo in front of everything, but it'll mean that only nc will run as root:

# This is probably not what you want
sudo nc localhost 12345 | tar xf -

The fix that I will never remember, thanks to twb on IRC, is to close nc's stdin:

<&- nc localhost 12345 | sudo tar xf -

Or flip the table and just use sudo -s:

$ sudo -s
# nc localhost 12345 | tar xf -

Updates

Harald Koenig suggested two alternative spellings that might be easier to remember:

nc localhost 12345 < /dev/null | sudo tar xf -
< /dev/null nc localhost 12345 | sudo tar xf -

And thinking along those lines, there could also be the disappointed face variant:

:| nc localhost 12345 | sudo tar xf -
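The stdin-stealing behaviour is easy to reproduce without nc or sudo at all. A toy sketch (my own, with cat standing in for nc and a printf standing in for the typed password):

```shell
#!/bin/sh
# Toy reproduction: the first command in a pipeline reads the same
# stdin that a later interactive prompt would (simulated here by the
# printf), so it can steal the "password". cat plays the role of nc.

leaked=$(printf 'hunter2\n' | cat | head -n 1)
echo "first stage forwarded: $leaked"    # the "password" leaks downstream

# The fix from above: point the first stage's stdin somewhere harmless.
safe=$(printf 'hunter2\n' | { </dev/null cat; } | head -n 1)
echo "with stdin redirected, forwarded: '$safe'"    # nothing leaks
```

In the real pipeline, the same redirection gives nc an empty stdin while sudo can still prompt on the terminal.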

Matthias Urlichs suggested the approach of precaching sudo's credentials, making the rest of the command lines more straightforward (and TIL: sudo id):

sudo id
nc localhost 12345 | sudo tar xf -

Or even better:

sudo id && nc localhost 12345 | sudo tar xf -

Shortcomings of nc | tar

Tomas Janousek commented:

There's one more problem with a plain tar | nc | tar without redirection or extra parameters: it doesn't know when to stop. So the best way to use it, I believe, is:

tar c | nc -N

nc -d | tar x

The -N option terminates the sending end of the connection, and the -d option tells the receiving netcat to never read any input. These two parameters, I hope, should also fix your sudo/password problem.

Hope it helps!

Worse Than FailureScience Is Science

Oil well

Bruce worked for a small engineering consulting firm providing custom software solutions for companies in the industrial sector. His project for CompanyX involved data consolidation for a new oil well monitoring system. It was a two-phased approach: Phase 1 was to get the raw instrument data into the cloud, and Phase 2 was to aggregate that data into a useful format.

Phase 1 was completed successfully. When it came time to write the business logic for aggregating the data, CompanyX politely informed Bruce's team that their new in-house software team would take over from here.

Bruce and his team smelled trouble. They did everything they could think of to persuade CompanyX not to go it alone when all the expertise rested on their side. However, CompanyX was confident they could handle the job, parting ways with handshakes and smiles.

Although Phase 2 was officially no longer on his plate, Bruce had a suspicion borne from experience that this wasn't the last he'd hear from CompanyX. Sure enough, a month later he received an urgent support request via email from Rick, an electrical engineer.

We're having issues with our aggregated data not making it into the database. Please help!!

Rick Smith
LEAD SOFTWARE ENGINEER

"Lead Software Engineer!" Bruce couldn't help repeating out loud. Sadly, he'd seen this scenario before with other clients. In a bid to save money, their management would find the most sciency people on their payroll and would put them in charge of IT or, worse, programming.

Stifling a cringe, Bruce dug deeper into the email. Rick had written a Python script to read the raw instrument data, aggregate it in memory, and re-insert it into a table he'd added to the database. Said script was loaded with un-parameterized queries, filters on non-indexed fields, and SELECT * FROM queries. The aggregation logic was nothing to write home about, either. It was messy, slow, and a slight breeze could take it out. Bruce fired up the SQL profiler and found a bigger issue: a certain query was failing every time, throwing the error Cannot insert the value NULL into column 'requests', table 'hEvents'; column does not allow nulls. INSERT fails.

Well, that seemed straightforward enough. Bruce replied to Rick's email, asking if he knew about the error.

Rick's reply came quickly, and included someone new on the email chain. Yes, but we couldn't figure it out, so we were hoping you could help us. Aaron is our SQL expert and even he's stumped.

Product support was part of Bruce's job responsibilities. He helpfully pointed out the specific query that was failing and described how to use the SQL profiler to pinpoint future issues.

Unfortunately, CompanyX's crack new in-house software team took this opportunity to unload every single problem they were having on Bruce, most of them just as basic as the first, or even more so. The back-and-forth email chain grew to epic proportions, and had less to do with product support than with programming education. When Bruce's patience finally gave out, he sent Rick and Aaron a link to the W3Schools SQL tutorial page. Then he talked to his manager. Agreeing that things had gotten out of hand, Bruce's manager arranged for a BA to contact CompanyX to offer more formal assistance. A teleconference was scheduled for the next week, which Bruce and his manager would also be attending.

When the day of the meeting came, Bruce and his associates dialed in—but no one from CompanyX did. After some digging, they learned that the majority of CompanyX's software team had been fired or reassigned. Apparently, the CompanyX project manager had been BCC'd on Bruce's entire email chain with Rick and Aaron. Said PM had decided a new new software team was in order. The last Bruce heard, the team was still "getting organized." The fate of Phase 2 remains unknown.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianAndrew Cater: How to use jigdo to download media images

I worked with the CD release team over the weekend for the final release of Debian Stretch. One problem: we have media images which we cannot test because the team does not have the hardware. I asked questions on the debian-cd mailing list about the future of these and various other .iso images.

Could we replace some DVDs and larger files with smaller jigdo files so that people can download files to build the DVD locally?

People asked me:
  • How do you actually use jigdo to produce a usable media image? 
  • What advantages does jigdo bring over just downloading a large .iso image?
Why jigdo?
  • Downloading large files on a slow or lossy link is difficult.
  • Downloading large (several GB) files via http can be unreliable.
  • Jigdo can be quicker than trying to download one large file and failing.
  • There are few CD mirrors worldwide: jigdo can use a local Debian mirror.
  • The transport mechanism is http - no need for a particular port to be opened.
Using jigdo

Jigdo uses information from a template file to reconstruct an .iso file by downloading Debian packages from a mirror. The image is checksummed and verified at the end of the complete download. If the download is interrupted, you can import the previously downloaded part of the file.

It's a command line application - the GUI never really happened - but it is fairly easy to use. Run apt install jigdo-file, then find the .jigdo and .template files that you need for the image from a CD mirror: https://cdimage.debian.org/debian-cd/current/amd64/jigdo-cd/

To build the netinst CD for AMD64, for example: you need the .jigdo file as a minimum: debian-10.4.0-amd64-netinst.jigdo

If you only have this file, jigdo-lite will download the template later but you can save the template in the same directory and save time. The jigdo file is only 25k or so and the template is 4.6M rather than 336M. I copied them into my home directory to build there. The process does not need root permissions.

Run the command jigdo-lite. This prompts you for a .jigdo file to use. By default, this uses http to fetch the file from a distant webserver.
(If the files are local, you can use the file:/// syntax. For example: file:///home/amacater/debian-10.4.0-amd64-netinst.jigdo)

jigdo-lite then reads the .jigdo file and outputs some information about the .iso. It offers the chance to reload any failed download, then prompts for a mirror name. The download pulls in small numbers of files at a time, saves them to a temporary directory and checksums the eventual .iso file.

This will work for any larger file, including the 16GB .iso distributed only as a .jigdo.

For i386 and amd64, the images are bootable when copied to a USB stick. Use dd to write them and verify the copy.
  • Plug in a USB that can be overwritten.
  • Use dmesg as root to work out which device this is.
  • Change to the directory in which you have your .iso image.
  • Write the image to the stick in 4M blocks and display progress with the syntax of the command below (all one line if wrapped).

dd if=debian-10.4.0-amd64-netinst.iso of=/dev/sdX obs=4M oflag=sync status=progress
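The dd step can be followed by a byte-for-byte verification. A sketch, assuming GNU coreutils; IMG and TARGET are my own placeholder names, and TARGET is deliberately a plain file rather than /dev/sdX so the commands are safe to run as-is:

```shell
#!/bin/sh
# Write an image to the target, then verify the copy.
IMG=debian.iso      # your downloaded image
TARGET=stick.img    # substitute /dev/sdX (as root) for a real USB stick

# Demonstration only: fabricate a small stand-in "image".
dd if=/dev/urandom of="$IMG" bs=1M count=4 2>/dev/null

dd if="$IMG" of="$TARGET" obs=4M oflag=sync status=progress

# Compare only the image's length: a real stick is larger than the .iso.
cmp -n "$(stat -c %s "$IMG")" "$IMG" "$TARGET" && echo "copy verified"
```

For a real stick, drop the fabrication step; cmp exits non-zero and reports the first differing byte if the write went wrong.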




Planet DebianJonathan Dowland: FlashFloppy OLED display

This is the tenth part in a series of blog posts. The previous post was Amiga floppy recovery project: what next?. The whole series is available here: Amiga.

Rotary encoder, OLED display and mount


I haven't made any substantive progress on my Amiga floppy recovery project for a while, but I felt like some retail therapy a few days ago so I bought a rotary encoder and OLED display for the Gotek floppy disk emulator along with a 3D-printed mount for them. I'm pleased with the results! The rather undescriptive "DSKA0001" in the picture is a result of my floppy image naming scheme: the display is capable of much more useful labels such as "Lemmings", "Deluxe Paint IV", etc.

The Gotek and all the new bits can now be moved inside the Amiga A500's chassis.

Planet DebianBits from Debian: New Debian Developers and Maintainers (May and June 2020)

The following contributors got their Debian Developer accounts in the last two months:

  • Richard Laager (rlaager)
  • Thiago Andrade Marques (andrade)
  • Vincent Prat (vivi)
  • Michael Robin Crusoe (crusoe)
  • Jordan Justen (jljusten)
  • Anuradha Weeraman (anuradha)
  • Bernelle Verster (indiebio)
  • Gabriel F. T. Gomes (gabriel)
  • Kurt Kremitzki (kkremitzki)
  • Nicolas Mora (babelouest)
  • Birger Schacht (birger)
  • Sudip Mukherjee (sudip)

The following contributors were added as Debian Maintainers in the last two months:

  • Marco Trevisan
  • Dennis Braun
  • Stephane Neveu
  • Seunghun Han
  • Alexander Johan Georg Kjäll
  • Friedrich Beckmann
  • Diego M. Rodriguez
  • Nilesh Patra
  • Hiroshi Yokota

Congratulations!

CryptogramHacking a Power Supply

This hack targets the firmware on modern power supplies. (Yes, power supplies are also computers.)

Normally, when a phone is connected to a power brick with support for fast charging, the phone and the power adapter communicate with each other to determine the proper amount of electricity that can be sent to the phone without damaging the device -- the more juice the power adapter can send, the faster it can charge the phone.

However, by hacking the fast charging firmware built into a power adapter, Xuanwu Labs demonstrated that bad actors could potentially manipulate the power brick into sending more electricity than a phone can handle, thereby overheating the phone, melting internal components, or as Xuanwu Labs discovered, setting the device on fire.

Research paper, in Chinese.

Kevin RuddBrookings: Prioritising Education During Covid-19

SPEAKING REMARKS
BROOKINGS INSTITUTION
LEADERSHIP DIALOGUE SERIES
21 JULY 2020

The post Brookings: Prioritising Education During Covid-19 appeared first on Kevin Rudd.

Kevin RuddAFR: Seven Questions Morrison Must Answer

On the eve of the government's much-delayed financial statement, it's time for some basic questions about Australia's response.

The uncomfortable truth is that we are still in the economic equivalent of "the phoney war" between September 1939 and May 1940. Our real problem is not now but the fourth quarter of this year, and next year, by when temporary measures will have washed through, while globally the real economy will still be wrecked. But there's no sign yet of a long-term economic strategy, centred on infrastructure, to rebuild business confidence to start investing and re-employing people.

So while Scott Morrison may look pleased with himself (after months of largely uncritical media, a Parliament that barely meets and a delayed budget) it’s time for some intellectual honesty in what passes for our public policy debate. So here are seven questions for Scotty to answer.

First, the big one. It’s well past time to come fully clean on the two dreaded words of Australian politics: debt and deficit. How on earth can Morrison’s Liberal Party and its coalition partner, the Murdoch party, justify their decade-long assault on public expenditure and investment in response to an existential financial and economic crisis?

Within nine months of taking office, we had to deal with a global financial crisis that threatened our banks, while avoiding mass unemployment. We avoided economic and social disaster by … borrowing. In total, we expended $88 billion, taking our federal net debt to 13 per cent of GDP by 2014 – while still sustaining our AAA credit rating.

Four months into the current crisis, Morrison has so far allocated $259 billion, resulting in a debt-to-GDP ratio of about 50 per cent and rising. We haven't avoided recession – partly because Morrison had to be dragged kicking and screaming late to the stimulus table. He ignored Reserve Bank and Treasury advice to act earlier because it contradicted the Liberals' political mantra of getting "back in black".

On debt and deficit, this emperor has no clothes. Indeed, the gargantuan nature of this stimulus strategy has destroyed the entire edifice of Liberal ideology and politics. No wonder Scotty from Marketing now talks of us being "beyond ideology": he no longer has one. He's adopted social democracy instead, including the belated rediscovery that the agency of the state is essential in the economy, public health and broadband. Where would we be on online delivery of health, education and business Zoom services in the absence of our NBN, despite the Liberals botching its final form?

So Morrison and the Murdoch party should just admit their dishonest debt-and-deficit campaign was bullshit all along – a political myth manufactured to advance the proposition that Labor governments couldn't manage the economy.

Then there’s Morrison’s claim that his mother-of-all-stimulus-strategies, unlike ours, is purring like a well-oiled machine without a wasted dollar. What about the monumental waste of paying $19,500 to young people who were previously working only part-time for less than half that amount? All part of a $130 billion program that suddenly became $70 billion after a little accounting error (imagine the howls of ‘‘incompetence’’ had we done that).

And let’s not forget the eerie silence surrounding the $40 billion ‘‘loans program’’ to businesses. If that’s administered with anything like the finesse we’ve seen with the $100 million sports rorts affair, heaven help the Auditor-General. Then there’s Stuart ‘‘Robodebt’’ Robert and the rolling administrative debacle that is Centrelink. Public administration across the board is just rolling along tickety-boo.

Third, the $30 billion snatch-and-grab raid (so far) on superannuation balances is the most financially irresponsible assault on savings since Federation. Paul Keating built a $3.1 trillion national treasure. I added to it by lifting the super guarantee from 9 per cent to 12 per cent, which the Liberals are seeking to wreck. The long-term damage this will do to the fiscal balance (age pensions), the balance of payments and our credit rating is sheer economic vandalism.

Fourth, industry policy. Yes to bailouts for regional media, despite Murdoch using COVID-19 to kill more than 100 local and regional papers nationwide. But no JobKeeper for the universities, one of our biggest export industries. Why? Ideology! The Liberals hate universities because they worry educated people become lefties. It’s like the Liberals killing off Australian car manufacturing because they hated unions, despite the fact our industry was among the least subsidised in the world.

Fifth, Morrison proclaimed an automatic "snapback" of his capital-S stimulus strategy after six months to avoid the "mistakes" of my government in allowing ours to taper out over two years. Looks like Scotty has been mugged by reality again. Global recessions have a habit of ignoring domestic political fiction.

Sixth, infrastructure. For God’s sake, we should be using near-zero interest rates to deploy infrastructure bonds and invest in our economic future. Extend the national transmission grid to accommodate industrial-scale solar. Admit the fundamental error of abandoning fibre for copper broadband and complete the NBN as planned. The future global economy will become more digital, not less. Use Infrastructure Australia (not the Nationals) to advise on the cost benefit of each.

Finally, there is trade – usually 43 per cent of our GDP. Global trade is collapsing because of the pandemic and Trumpian protectionism. Yet nothing from the government on forging a global free-trade coalition. Yes, the China relationship is hard. But the government’s failure to prosecute an effective China strategy is now compounding our economic crisis. And, outrageously, the US is moving in on our barley and beef markets. Trade policy is a rolled-gold mess.

So far Morrison’s government, unlike mine, has had unprecedented bipartisan support from the opposition. But public trust is hanging by a thread. It’s time for Morrison to get real with these challenges, not just spin us a line. Ultimately, the economy does not lie.

Kevin Rudd was the 26th prime minister of Australia.

First published in the Australian Financial Review on 21 July 2020.

The post AFR: Seven Questions Morrison Must Answer appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Dropped Pass

A charitable description of Java is that it’s a strict language, at least in terms of how it expects you to interact with types and definitions. That strictness can create conflict when you’re interacting with less strict systems, like JSON data.

Tessie produces data as a JSON API that wraps around sensing devices which report a numerical value. These sensors, as far as we care for this example, come in two flavors: ones that report a maximum recorded value, and ones which don’t. Something like:

  {
    "dataNoMax": [
      {"name": "sensor1", "value": 20, "max": 0}
    ],
    "dataWithMax": [
      {"name": "sensor2", "value": 25, "max": 50}
    ]
  }

By convention, the API would report max: 0 for all the devices which didn’t have a max.

With that in mind, they designed their POJOs like this:

  class Data {
    String name;
    int value;
    int max;
  }

  class Readings {
    List<Data> dataNoMax;
    List<Data> dataWithMax;
  }

These POJOs would be used both on the side producing the data, and in the client libraries for consuming the data.

Of course, by JSON convention, including a field that doesn’t actually hold a meaningful value is a bad idea- max: 0 should either be max: null, or better yet, just excluded from the output entirely.

So one of Tessie’s co-workers hacked some code into the JSON serializer to conditionally include the max field in the output.

QA needed to validate that this change was correct, so they needed to implement some automated tests. And this is where the problems started to crop up. The developer hadn’t changed the implementation of the POJOs, and they were using int.

For all that Java has a reputation as “everything’s an object”, a few things explicitly aren’t: primitive types. int is a primitive integer, while Integer is an object integer. Integers are references. ints are not. An Integer could be null, but an int cannot ever be null.

This meant if QA tried to write a test assertion that looked like this:

assertThat(readings.dataNoMax[0].getMax()).isNull()

it wouldn’t work. max could never be null.

There are a few different ways to solve this. One could make the POJO support nullable types, which is probably a better way to represent an object which may not have a value for certain fields. An int in Java that isn’t initialized to a value will default to zero, so they probably could have left their last unit test unchanged and it still would have passed. But this was a code change, and a code change needs to have a test change to prove the code change was correct.

Let’s compare versions. Here was their original test:

/** Should display max */
assertEquals("sensor2", readings.dataWithMax[0].getName())
assertEquals(50, readings.dataWithMax[0].getMax());
assertEquals(25, readings.dataWithMax[0].getValue());

/** Should not display max */
assertEquals("sensor1", readings.dataNoMax[0].getName())
assertEquals(0, readings.dataNoMax[0].getMax());
assertEquals(20, readings.dataNoMax[0].getValue());

And, since the code changed, and they needed to verify that change, this is their new test:

/** Should display max */
assertEquals("sensor2", readings.dataWithMax[0].getName())
assertThat(readings.dataWithMax[0].getMax()).isNotNull()
assertEquals(25, readings.dataWithMax[0].getValue());

/** Should not display max */
assertEquals("sensor1", readings.dataNoMax[0].getName())
//assertThat(readings.dataNoMax[0].getMax()).isNull();
assertEquals(20, readings.dataNoMax[0].getValue());

So, their original test compared strictly against values. When they needed to test if values were present, they switched to using an isNotNull comparison. On the side with a max, this test will always pass- it can’t possibly fail, because an int can’t possibly be null. When they tried to do an isNull check, on the other value, that always failed, because again- it can’t possibly be null.

So they commented it out.

Test is green. Clearly, this code is ready to ship.

Tessie adds:

[This] is starting to explain why our git history is filled with commits that “fix failing test” by removing all the asserts.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 10)

Here’s part ten of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

CryptogramOn the Twitter Hack

Twitter was hacked this week. Not a few people's Twitter accounts, but all of Twitter. Someone compromised the entire Twitter network, probably by stealing the log-in credentials of one of Twitter's system administrators. Those are the people trusted to ensure that Twitter functions smoothly.

The hacker used that access to send tweets from a variety of popular and trusted accounts, including those of Joe Biden, Bill Gates, and Elon Musk, as part of a mundane scam -- stealing bitcoin -- but it's easy to envision more nefarious scenarios. Imagine a government using this sort of attack against another government, coordinating a series of fake tweets from hundreds of politicians and other public figures the day before a major election, to affect the outcome. Or to escalate an international dispute. Done well, it would be devastating.

Whether the hackers had access to Twitter direct messages is not known. These DMs are not end-to-end encrypted, meaning that they are unencrypted inside Twitter's network and could have been available to the hackers. Those messages -- between world leaders, industry CEOs, reporters and their sources, health organizations -- are much more valuable than bitcoin. (If I were a national-intelligence agency, I might even use a bitcoin scam to mask my real intelligence-gathering purpose.) Back in 2018, Twitter said it was exploring encrypting those messages, but it hasn't yet.

Internet communications platforms -- such as Facebook, Twitter, and YouTube -- are crucial in today's society. They're how we communicate with one another. They're how our elected leaders communicate with us. They are essential infrastructure. Yet they are run by for-profit companies with little government oversight. This is simply no longer sustainable. Twitter and companies like it are essential to our national dialogue, to our economy, and to our democracy. We need to start treating them that way, and that means both requiring them to do a better job on security and breaking them up.

In the Twitter case this week, the hacker's tactics weren't particularly sophisticated. We will almost certainly learn about security lapses at Twitter that enabled the hack, possibly including a SIM-swapping attack that targeted an employee's cellular service provider, or maybe even a bribed insider. The FBI is investigating.

This kind of attack is known as a "class break." Class breaks are endemic to computerized systems, and they're not something that we as users can defend against with better personal security. It didn't matter whether individual accounts had a complicated and hard-to-remember password, or two-factor authentication. It didn't matter whether the accounts were normally accessed via a Mac or a PC. There was literally nothing any user could do to protect against it.

Class breaks are security vulnerabilities that break not just one system, but an entire class of systems. They might exploit a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs on that system's software. Or a vulnerability in internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet. Or a single vulnerability in the Twitter network that allows an attacker to take over every account.

For Twitter users, this attack was a double whammy. Many people rely on Twitter's authentication systems to know that someone who purports to be a certain celebrity, politician, or journalist is really that person. When those accounts were hijacked, trust in that system took a beating. And then, after the attack was discovered and Twitter temporarily shut down all verified accounts, the public lost a vital source of information.

There are many security technologies companies like Twitter can implement to better protect themselves and their users; that's not the issue. The problem is economic, and fixing it requires doing two things. One is regulating these companies, and requiring them to spend more money on security. The second is reducing their monopoly power.

The security regulations for banks are complex and detailed. If a low-level banking employee were caught messing around with people's accounts, or if she mistakenly gave her log-in credentials to someone else, the bank would be severely fined. Depending on the details of the incident, senior banking executives could be held personally liable. The threat of these actions helps keep our money safe. Yes, it costs banks money; sometimes it severely cuts into their profits. But the banks have no choice.

The opposite is true for these tech giants. They get to decide what level of security you have on your accounts, and you have no say in the matter. If you are offered security and privacy options, it's because they decided you can have them. There is no regulation. There is no accountability. There isn't even any transparency. Do you know how secure your data is on Facebook, or in Apple's iCloud, or anywhere? You don't. No one except those companies do. Yet they're crucial to the country's national security. And they're the rare consumer product or service allowed to operate without significant government oversight.

For example, President Donald Trump's Twitter account wasn't hacked as Joe Biden's was, because that account has "special protections," the details of which we don't know. We also don't know what other world leaders have those protections, or the decision process surrounding who gets them. Are they manual? Can they scale? Can all verified accounts have them? Your guess is as good as mine.

In addition to security measures, the other solution is to break up the tech monopolies. Companies like Facebook and Twitter have so much power because they are so large, and they face no real competition. This is a national-security risk as well as a personal-security risk. Were there 100 different Twitter-like companies, and enough compatibility so that all their feeds could merge into one interface, this attack wouldn't have been such a big deal. More important, the risk of a similar but more politically targeted attack wouldn't be so great. If there were competition, different platforms would offer different security options, as well as different posting rules, different authentication guidelines -- different everything. Competition is how our economy works; it's how we spur innovation. Monopolies have more power to do what they want in the quest for profits, even if it harms people along the way.

This wasn't Twitter's first security problem involving trusted insiders. In 2017, on his last day of work, an employee shut down President Donald Trump's account. In 2019, two people were charged with spying for the Saudi government while they were Twitter employees.

Maybe this hack will serve as a wake-up call. But if past incidents involving Twitter and other companies are any indication, it won't. Underspending on security, and letting society pay the eventual price, is far more profitable. I don't blame the tech companies. Their corporate mandate is to make as much money as is legally possible. Fixing this requires changes in the law, not changes in the hearts of the company's leaders.

This essay previously appeared on TheAtlantic.com.

LongNowSix Ways to Think Long-term: A Cognitive Toolkit for Good Ancestors

Illustration: Tom Lee at Rocket Visual

Human beings have an astonishing evolutionary gift: agile imaginations that can shift in an instant from thinking on a scale of seconds to a scale of years or even centuries. Our minds constantly dance across multiple time horizons. One moment we can be making a quickfire response to a text and the next thinking about saving for our pensions or planting an acorn in the ground for posterity. We are experts at the temporal pirouette. Whether we are fully making use of this gift is, however, another matter.

The need to draw on our capacity to think long-term has never been more urgent, whether in areas such as public health care (like planning for the next pandemic on the horizon), to deal with technological risks (such as from AI-controlled lethal autonomous weapons), or to confront the threats of an ecological crisis where nations sit around international conference tables, bickering about their near-term interests, while the planet burns and species disappear. At the same time, businesses can barely see past the next quarterly report, we are addicted to 24/7 instant news, and find it hard to resist the Buy Now button.

What can we do to overcome the tyranny of the now? The easy answer is to say we need more long-term thinking. But here’s the problem: almost nobody really knows what it is.

In researching my latest book, The Good Ancestor: How to Think Long Term in a Short-Term World, I spoke to dozens of experts — psychologists, futurists, economists, public officials, investors — who were all convinced of the need for more long-term thinking to overcome the pathological short-termism of the modern world, but few of them could give me a clear sense of what it means, how it works, what time horizons are involved and what steps we must take to make it the norm. This intellectual vacuum amounts to nothing less than a conceptual emergency.

Let’s start with the question, ‘how long is long-term?’ Forget the corporate vision of ‘long-term’, which rarely extends beyond a decade. Instead, consider a hundred years as a minimum threshold for long-term thinking. This is the current length of a long human lifespan, taking us beyond the ego boundary of our own mortality, so we begin to imagine futures that we can influence but not participate in ourselves. Where possible we should attempt to think longer, for instance taking inspiration from cultural endeavours like the 10,000 Year Clock (the Long Now Foundation’s flagship project), which is being designed to stay accurate for ten millennia. At the very least, when you aim to think ‘long-term’, take a deep breath and think ‘a hundred years and more’.

The Tug of War for Time

It is just as crucial to equip ourselves with a mental framework that identifies different forms of long-term thinking. My own approach is represented in a graphic I call ‘The Tug of War for Time’ (see below). On one side, six drivers of short-termism threaten to drag us over the edge of civilizational breakdown. On the other, six ways to think long-term are drawing us towards a culture of longer time horizons and responsibility for the future of humankind.

Graphic: The Tug of War for Time

These six ways to think long are not a simplistic blueprint for a new economic or political system, but rather comprise a cognitive toolkit for challenging our obsession with the here and now. They offer conceptual scaffolding for answering what I consider to be the most important question of our time: How can we be good ancestors?

The tug of war for time is the defining struggle of our generation. It is going on both inside our own minds and in our societies. Its outcome will affect the fate of the billions upon billions of people who will inhabit the future. In other words, it matters. So let’s unpack it a little.

Drivers of Short-termism

Amongst the six drivers of short-termism, we all know about the power of digital distraction to immerse us in a here-and-now addiction of clicks, swipes and scrolls. A deeper driver has been the growing tyranny of the clock since the Middle Ages. The mechanical clock was the key machine of the Industrial Revolution, regimenting and speeding up time itself, bringing the future ever-nearer: by 01700 most clocks had minute hands and by 01800 second hands were standard. And it still dominates our daily lives, strapped to our wrists and etched onto our screens.

Speculative capitalism has been a source of boom-bust turbulence at least since the Dutch Tulip Bubble of 01637, through to the 02008 financial crash and the next one waiting around the corner. Electoral cycles also play their part, generating a myopic political presentism where politicians can barely see beyond the next poll or the latest tweet. Such short-termism is amplified by a world of networked uncertainty, where events and risks are increasingly interdependent and globalised, raising the prospect of rapid contagion effects and rendering even the near-term future almost unreadable.

Looming behind it all is our obsession with perpetual progress, especially the pursuit of endless GDP growth, which pushes the Earth system over critical thresholds of carbon emissions, biodiversity loss and other planetary boundaries. We are like a kid who believes they can keep blowing up the balloon, bigger and bigger, without any prospect that it could ever burst.

Put these six drivers together and you get a toxic cocktail of short-termism that could send us into a blind-drunk civilizational freefall. As Jared Diamond argues, ‘short-term decision making’ coupled with an absence of ‘courageous long-term thinking’ has been at the root of civilizational collapse for centuries. A stark warning, and one that prompts us to unpack the six ways to think long.

Six Ways to Think Long-term

1. Deep-Time Humility: grasp we are an eyeblink in cosmic time

Deep-time humility is about recognising that the two hundred thousand years that humankind has graced the earth is a mere eyeblink in the cosmic story. As John McPhee (who coined the concept of deep time in 01980) put it: ‘Consider the earth’s history as the old measure of the English yard, the distance from the king’s nose to the tip of his outstretched hand. One stroke of a nail file on his middle finger erases human history.’

But just as there is deep time behind us, there is also deep time ahead. In six billion years, any creatures that are around to see our sun die will be as different from us as we are from the first single-celled bacteria.

Yet why exactly do long-term thinkers need this sense of temporal humility? Deep time prompts us to consider the consequences of our actions far beyond our own lifetimes, and puts us back in touch with the long-term cycles of the living world like the carbon cycle. But it also helps us grasp our destructive potential: in an incredibly short period of time — only a couple of centuries — we have endangered a world that took billions of years to evolve. We are just a tiny link in the great chain of living organisms, so who are we to put it all in jeopardy with our ecological blindness and deadly technologies? Don’t we have an obligation to our planetary future and the generations of humans and other species to come?

2. Legacy Mindset: be remembered well by posterity

We are the inheritors of extraordinary legacies from the past — from those who planted the first seeds, built the cities where we now live, and made the medical discoveries we benefit from. But alongside the good ancestors are the ‘bad ancestors’, such as those who bequeathed us colonial and slavery-era racism and prejudice that deeply permeate today’s criminal justice systems. This raises the question of what legacies we will leave to future generations: how do we want to be remembered by posterity?

The challenge is to go beyond egoistic legacy (like a Russian oligarch who wants a wing of an art gallery named after them) and even familial legacy (like wishing to pass on property or cultural traditions to our children). If we hope to be good ancestors, we need to develop a transcendent ‘legacy mindset’, where we aim to be remembered well by the generations we will never know, by the universal strangers of the future.

We might look for inspiration in many places. The Māori concept of whakapapa (‘genealogy’) describes a continuous lifeline that connects an individual to the past, present and future, and generates a sense of respect for the traditions of previous generations while being mindful of those yet to come. In Katie Paterson’s art project Future Library, every year for a hundred years a famous writer (the first was Margaret Atwood) is depositing a new work, which will remain unread until 02114, when they will all be printed on paper made from a thousand trees that have been planted in a forest outside Oslo. Then there are activists like Wangari Maathai, the first African woman to win the Nobel Peace Prize. In 01977 she founded the Green Belt Movement in Kenya, which by the time of her death in 02011 had trained more than 25,000 women in forestry skills and planted 40 million trees. That’s how to pass on a legacy gift to the future.

3. Intergenerational Justice: consider the seventh generation ahead

‘Why should I care about future generations? What have they ever done for me?’ This clever quip attributed to Groucho Marx highlights the issue of intergenerational justice. This is not the legacy question of how we will be remembered, but the moral question of what responsibilities we have to the ‘futureholders’ — the generations who will succeed us.

One approach, rooted in utilitarian philosophy, is to recognise that at least in terms of sheer numbers, the current population is easily outweighed by all those who will come after us. In a calculation made by writer Richard Fisher, around 100 billion people have lived and died in the past 50,000 years. But they, together with the 7.7 billion people currently alive, are far outweighed by the estimated 6.75 trillion people who will be born over the next 50,000 years, if this century’s birth rate is maintained (see graphic below). Even in just the next millennium, more than 135 billion people are likely to be born. How could we possibly ignore their wellbeing, and think that our own is of such greater value?

Graphic: past, present and estimated future human populations

Such thinking is embodied in the idea of ‘seventh-generation decision making’, an ethic of ecological stewardship practised amongst some Native American peoples such as the Oglala Lakota Nation in South Dakota: community decisions take into account the impacts seven generations from the present. This ideal is fast becoming a cornerstone of the growing global intergenerational justice movement, inspiring groups such as Our Children’s Trust (fighting for the legal rights of future generations in the US) and Future Design in Japan (which promotes citizens’ assemblies for city planning, where residents imagine themselves as members of future generations).

4. Cathedral thinking: plan projects beyond a human lifetime

Cathedral thinking is the practice of envisaging and embarking on projects with time horizons stretching decades and even centuries into the future, just like medieval cathedral builders who began despite knowing they were unlikely to see construction finished within their own lifetimes. Greta Thunberg has said that it will take ‘cathedral thinking’ to tackle the climate crisis.

Historically, cathedral thinking has taken different forms. Apart from religious buildings, there are public works projects such as the sewers built in Victorian London after the ‘Great Stink’ of 01858, which are still in use today (we might call this ‘sewer thinking’ rather than ‘cathedral thinking’). Scientific endeavours include the Svalbard Global Seed Vault in the remote Arctic, which contains over one million seeds from more than 6,000 species and is intended to keep them safe in an indestructible rock bunker for at least a thousand years. We should also include social and political movements with long time horizons, such as the Suffragettes, who formed their first organisation in Manchester in 01867 and didn’t achieve their aim of votes for women for over half a century.

Inspiring stuff. But remember that cathedral thinking can be directed towards narrow and self-serving ends. Hitler hoped to create a Thousand Year Reich. Dictators have sought to preserve their power and privilege for their progeny through the generations: just look at North Korea. In the corporate world, Gus Levy, former head of investment bank Goldman Sachs, once proudly declared, ‘We’re greedy, but long-term greedy, not short-term greedy’.

That’s why cathedral thinking alone is not enough to create a long-term civilization that respects the interests of future generations. It needs to be guided by other approaches, such as intergenerational justice and a transcendent goal (see below).

5. Holistic Forecasting: envision multiple pathways for civilization

Numerous studies demonstrate that most forecasting professionals tend to have a poor record at predicting future events. Yet we must still try to map out the possible long-term trajectories of human civilization itself — what I call holistic forecasting — otherwise we will end up only dealing with crises as they hit us in the present. Experts in the fields of global risk studies and scenario planning have identified three broad pathways, which I call Breakdown, Reform and Transformation (see graphic below).

Graphic: the three civilizational trajectories of Breakdown, Reform and Transformation

Breakdown is the path of business-as-usual. We continue striving for the old twentieth-century goal of material economic progress but soon reach a point of societal and institutional collapse in the near term as we fail to respond to rampant ecological and technological crises, and cross dangerous civilizational tipping points (think Cormac McCarthy’s The Road).

A more likely trajectory is Reform, where we respond to global crises such as climate change but in an inadequate and piecemeal way that merely extends the Breakdown curve outwards, to a greater or lesser extent. Here governments put their faith in reformist ideals such as ‘green growth’, ‘reinventing capitalism’, or a belief that technological solutions are just around the corner.

A third trajectory is Transformation, where we see a radical shift in the values and institutions of society towards a more long-term sustainable civilization. For instance, we jump off the Breakdown curve onto a new pathway dominated by post-growth economic models such as Doughnut Economics or a Green New Deal.

Note the crucial line of Disruptions. These are disruptive innovations or events that offer an opportunity to switch from one curve onto another. It could be a new technology like blockchain, the rise of a political movement like Black Lives Matter, or a global pandemic like COVID-19. Successful long-term thinking requires turning these disruptions towards Transformative change and ensuring they are not captured by the old system.

6. Transcendent Goal: strive for one-planet thriving

Every society, wrote astronomer Carl Sagan, needs a ‘telos’ to guide it — ‘a long-term goal and a sacred project’. What are the options? While the goal of material progress served us well in the past, we now know too much about its collateral damage: fossil fuels and material waste have pushed us into the Anthropocene, the perilous new era characterised by a steep upward trend in damaging planetary indicators called the Great Acceleration (see graphic).

Graphic: the Great Acceleration

An alternative transcendent goal is to see our destiny in the stars: the only way to guarantee the survival of our species is to escape the confines of Earth and colonise other worlds. Yet terraforming somewhere like Mars to make it habitable could take centuries — if it could be done at all. Additionally, the more we set our sights on escaping to other worlds, the less likely we are to look after our existing one. As cosmologist Martin Rees points out, ‘It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve these problems here.’

That’s why our primary goal should be to learn to live within the biocapacity of the only planet we know that sustains life. This is the fundamental principle of the field of ecological economics developed by visionary thinkers such as Herman Daly: don’t use more resources than the earth can naturally regenerate (for instance, only harvest timber as fast as it can grow back), and don’t create more wastes than it can naturally absorb (so avoid burning fossil fuels that can’t be absorbed by the oceans and other carbon sinks).

Once we’ve learned to do this, we can do as much terraforming of Mars as we like: as any mountaineer knows, make sure your basecamp is in order with ample supplies before you tackle a risky summit. But according to the Global Footprint Network, we are not even close and currently use up the equivalent of around 1.6 planet Earths each year. That’s short-termism of the most deadly kind. A transcendent goal of one-planet thriving is our best guarantee of a long-term future. And we do it by caring about place as much as rethinking time.

Bring on the Time Rebellion

That, then, is a brief overview of a cognitive toolkit we could draw on to survive and thrive into the centuries and millennia to come. None of these six ways is enough on its own to create a long-term revolution of the human mind — a fundamental shift in our perception of time. But together — and when practised by a critical mass of people and organisations — a new age of long-term thinking could emerge from their synergy.

Is this a likely prospect? Can we win the tug of war against short-termism?

‘Only a crisis — actual or perceived — produces real change,’ wrote economist Milton Friedman. Out of the ashes of World War Two came pioneering long-term institutions such as the World Health Organisation, the European Union and welfare states. So too out of the global crisis of COVID-19 could emerge the long-term institutions we need to tackle the challenges of our own time: climate change, technology threats, the racism and inequality structured into our political and economic systems. Now is the moment for expanding our time horizons into a longer now. Now is the moment to become a time rebel.


Roman Krznaric is a public philosopher, research fellow of the Long Now Foundation, and founder of the world’s first Empathy Museum. His latest book is The Good Ancestor: How to Think Long Term in a Short-Term World. He lives in Oxford, UK. @romankrznaric

Note: All graphics from The Good Ancestor: How to Think Long Term in a Short-Term World by Roman Krznaric. Graphic design by Nigel Hawtin. Licensed under CC BY-NC-ND.

Worse Than FailureMega-Agile

A long time ago, way back in 2009, Bruce W worked for the Mega-Bureaucracy. It was a slog of endless forms, endless meetings, endless projects that just never hit a final ship date. The Mega-Bureaucracy felt that the organization which manages best manages the most, and ensured that there were six tons of management overhead attached to the smallest project.

After eight years in that position, Bruce finally left for another division in the same company.

But during those eight years, Bruce learned a few things about dealing with the Mega-Bureaucracy. His division was a small division, and while Bruce needed to interface with the Mega-Bureaucracy, he could shield the other developers on his team from it, as much as possible. This let them get embedded into the business unit, working closely with the end users, revising requirements on the fly based on rapid feedback and a quick release cycle. It was, in a word, "Agile", in the most realistic version of the term: focus on delivering value to your users, and build processes which support that. They were a small team, and there were many layers of management above them, which served to blunt and filter some of the mandates of the Mega-Bureaucracy, and that let them stay Agile.

Nothing, however, protects against management excess like a track record of success. They had a reputation for being dangerous heretics: they released to test continuously and to production once a month; they changed requirements as needs changed, meaning what they delivered was almost never what they specced, but it was what their users needed; and, worst of all, their software defeated all the key Mega-Bureaucracy metrics. It performed better, it had fewer reported defects, and its return-on-investment metrics showed their software had saved the division millions of dollars in operating costs.

The Mega-Bureaucracy seethed at these heretics, but the C-level of the company just saw a high functioning team. There was nothing that the Bureaucracy could do to bring them in line-

-at least until someone opened up a trade magazine, skimmed the buzzwords, and said, "Maybe our processes are too cumbersome. We should do Agile. Company wide, let's lay out an Agile Process."

There's a huge difference between the "agile" created by a self-organizing team, that grows based on learning what works best for the team and their users, and the kind of "agile" that's imposed from the corporate overlords.

First, you couldn't do Agile without adopting the Agile Process, which in Mega-Bureaucracy-speak meant "we're doing a very specific flavor of scrum". This meant morning standups were mandated. You needed a scrum-master on the team, which would be a resource drawn from the project management office, and well, they'd also pull double duty as the project manager. The word "requirements" was forbidden, you had to write User Stories, and then estimate those User Stories as taking a certain number of hours. Then you could hold your Sprint Planning meeting, where you gathered a bucket of stories that would fit within your next sprint, which would be a 4-week cadence, but that was just the sprint planning cadence. Releases to production would happen only quarterly. Once user stories were written, they were never to be changed, just potentially replaced with a new story, but once a story was added to a sprint, you were expected to implement it, as written. No changes based on user feedback. At the end of the sprint, you'd have a whopping big sprint retrospective, and since this was a new process, instead of letting the team self-evaluate in private and make adjustments, management from all levels of the company would sit in on the retrospectives to be "informed" about the "challenges" in adopting the new process.

The resulting changes pleased nearly no one. The developers hated it, the users, especially in Bruce's division, hated it, management hated it. But the Mega-Bureaucracy had won; the dangerous heretics who didn't follow the process now were following the process. They were Agile.

That is what motivated Bruce to transfer to a new position.

Two years later, he attended an all-IT webcast. The CIO announced that they'd spun up a new pilot development team. This new team would get embedded into the business unit, work closely with the end user, revise requirements on the fly based on rapid feedback and a continuous release cycle. "This is something brand new for our company, and we're excited to see where it goes!"

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Sam VargheseThe Indian Government cheated my late father of Rs 332,775

Back in 1976, the Indian Government, for whom my father, Ipe Samuel Varghese, worked in Colombo, cheated him of Rs 13,500 – the gratuity that he was supposed to be paid when he was dismissed from the Indian High Commission (the equivalent of the embassy) in Colombo.

That sum, adjusted for inflation, works out to Rs 332,775 in today’s rupees.

He was not paid this amount because the embassy said he had contravened rules by working at a second job, something everyone at the embassy was doing because what people were paid was basically a starvation wage. In truth, my father had rubbed up against powerful interests in the embassy who were making money by taking bribes from poor Sri Lankan Tamils who were applying for Indian passports to return to India.

But let me start at the beginning. My father went to Sri Lanka (then known as Ceylon) in 1947, looking for employment after the war. He took up a job as a teacher, something which was his first love. But in 1956, when the Sri Lankan Government nationalised the teaching profession, he was left without a job.

It was then that he began working for the Indian High Commission which was located in Colpetty, and later in Fort. As he was a local recruit, he was not given diplomatic status. The one benefit was that our family did not need visas to stay in Sri Lanka – we were all Indian citizens – but only needed to obtain passports once we reached the age of 14.

As my father had six children, the pay from the High Commission was not enough to provide for the household. He would tutor some students, either at our house, or else at their houses. He was very strict about his work, and was unwilling to compromise on any rules.

There were numerous people who worked alongside him and they would occasionally take a bribe from here and there and push the case of some person or the other for a passport. The Tamils, who had gone to Sri Lanka to work on the tea plantations, were being repatriated under a pact negotiated by Sirima Bandaranaike, the Sri Lankan prime minister, and Lal Bahadur Shastri, her Indian counterpart. It was thus known as the Sirima-Shastri pact.

There was a lot of anti-Tamil sentiment brewing in Sri Lanka at the time, feelings that blew up into the civil war from 1983 onwards, a conflict that only ended in May 2009. Thus, many Tamils were anxious and wanted to do whatever it took to get an Indian passport.

And in this, they found many High Commission employees more than willing to accept bribes in order to push their cases. But they came up against a brick wall in my father. There was another gentleman who was an impediment too, a man named Navamoni. The others used to call him Koranga Moonji Dorai – meaning monkey face man – as he was a wizened old man. He would lose his temper and shout at people when they tried to mollify him with this or that to push their cases.

Thus, it was only a matter of time before some of my father’s colleagues went to the higher-ups and complained that he was earning money outside his High Commission job. They all were as well, but nobody had rubbed up against the powers-that-be. By then, due to his competence, my father had been put in charge of the passport section, a very powerful post, because he could approve or turn down any application.

The men who wanted to make money through bribes found him a terrible obstacle. One day in May, when my mother called up the High Commission, she was told that my father no longer worked there. Shocked, she waited until he came home to find out the truth. We had no telephone at home.

The family was not in the best financial position at the time. We had a few weeks to return to India as we had been staying in Sri Lanka on the strength of my father’s employment. And then came the biggest shock: the money my father had worked for all those 20 years was denied to him.

We came back to India by train and ferry; we could not afford to fly back. It was a miserable journey and for many years after that we suffered financial hardship because we had no money to tide us over that period.

Many years later, after I migrated to Australia, I went to the Indian Consulate in Coburg, a suburb of Melbourne, to get a new passport. There, I happened to speak to the consul and asked him what I should do with my old passport. He made my blood boil by telling me that it was my patriotic duty to send it by post to the Indian embassy in Canberra. I told him that I owed India nothing considering the manner in which it had treated my father. And I added that if the Indian authorities wanted my old passport, then they could damn well pay for the postage. He was not happy with my reply.

India is the only country in the world which will not restore a person’s citizenship if he asks for it in his later years for sentimental reasons, just so that he can die in the land of his birth. India is also the only country that insists its own former citizens obtain a visa to enter what is their own homeland. Money is not the only thing for the Indian Government; it is everything.

Every other country will restore a person’s citizenship in their later years if they ask for it for sentimental reasons. Not India.

,

Kevin RuddInterview: UN Youth Australia

INTERVIEW VIDEO
UN YOUTH AUSTRALIA
PUBLISHED 18 JULY 2020

The post Interview: UN Youth Australia appeared first on Kevin Rudd.

Kevin RuddSMH: Stimulus Opportunity Knocks for Climate Action

By Kevin Rudd and Patrick Suckling

As the International Monetary Fund recently underlined in sharply revising down global growth prospects, recovering from the biggest peacetime shock to the global economy since the Great Depression will be a long haul.

There is a global imperative to put in place the strongest, most durable economic recovery. This is not a time for governments to retreat. Recovery will require massive and sustained support.

At the same time, spending decisions by governments now will shape our economic future for decades to come. In other words, we have a once-in-a-generation opportunity and can’t blow it.

But it’s looking like we might. This is because too few stimulus packages globally are reaping the double-dividend of both investing in growth and jobs, and in the transition to low emissions, more climate-resilient economies. And in Australia, this means we risk lagging even further behind the rest of the world as a result.

As Australia’s summer of hell demonstrated, climate change is only getting worse. It remains the greatest threat to our future welfare and economic prosperity. And while the world has legitimately been preoccupied with COVID-19, few have noticed that this year is on track to be the warmest in recorded history. Perhaps even fewer still have also made the connection between climate and biodiversity habitat loss and the outbreak of infectious diseases.

Stimulus decisions that do not address this climate threat therefore don’t just sell us short; they sell us out. And they cut against the grain of the global economy. This is the irreducible logic that flows from the 2015 Paris Agreement to which the world – including Australia – signed up to.

Unfortunately, as things stand today, many of the biggest stimulus efforts around the world are in danger of failing this logic.

For example, the US economic recovery is heavily focused on high emitting industries. The same is true in China, India, Japan and the large South-East Asian economies.

In fact, Beijing is approving plans for new coal-fired power plants at the fastest rate since 2015. And whether these plants are now actually built by China’s regional and provincial governments is increasingly becoming the global bellwether for whether we will emerge from this crisis better or worse off in the global fight against climate change.

And for our own part, Australia’s COVID Recovery Commission has placed limited emphasis on renewables despite advances in energy storage technologies and plummeting costs.

But as we know from our experience in the Global Financial Crisis a decade ago, it is entirely possible to design an economic recovery that is also good for the planet. This means investing in clean energy, energy efficiency systems, new transport systems, more sustainable homes and buildings and improved agricultural production, water and waste management. In fact, as McKinsey recently found, government spending on renewable energy technologies creates five more jobs per million dollars than spending on fossil fuels.

Despite these cautionary tales, there are thankfully also bright spots.

Take the European Union and its massive 750 billion Euro stimulus package. It will invest heavily in areas like energy efficiency, turbocharging renewable energy, accelerating hydrogen technologies, rolling out clean transport and promoting the circular economy.

To be fair, China is also emphasising new infrastructure like electric transport in its US$500 billion stimulus package. India is doubling down on its world-leading renewable energy investments. Indonesia has announced a major investment in solar energy. And Japan and South Korea are now announcing climate transition spending. But whether these are just bright spots amongst a dark haze of pollution, or genuinely light the way is the key question that confronts these economies.

In Australia, the government has confirmed significant investment in pumped hydro-power for “Snowy 2.0.” The government has also indicated acceleration of important projects such as the Marinus Link to ensure more renewable energy from Tasmania for the mainland. But much more is now needed.

An obvious starting point could be a nation-building stimulus investment around our decrepit energy system. By now the federal and state governments have a much stronger grasp of what we need for success, as is encouragingly evident in the recent $2 billion Commonwealth-NSW government package for better access, security and affordability.

Turbocharging this with a stimulus package for more renewable energy and storage of all sorts (including hydrogen), accompanying extension and stabilisation technologies for our electricity grid, and investment in dramatically improving energy efficiency would – literally and figuratively – power our economy forward.

In the aftermath of our drought and bushfires, another obvious area for nation-building investment is our land sector. Farm productivity can be dramatically improved by precision agriculture and regenerative farming technologies while building resilience to drought. New sources of revenue for farmers can be created through soil carbon and forest carbon farming – with carbon trading from these activities internationally set to be worth hundreds of billions of dollars over the coming decade.

Importantly, the Australian business community is not just calling for policy certainty, but actively ushering in change itself. The Australian Industry Group, for instance, has called for a stronger climate-focused recovery. And in recent days, our largest private energy generator, AGL, announced a significant strengthening of its commitment to climate transition, linking performance pay to progress towards the company’s goal of achieving net-zero emissions by 2050 – a goal that BHP, Qantas and every Australian state and territory have signed up to. HESTA has also announced it will be the first major Australian superannuation fund to align its investment portfolio to this end.

These sorts of decisions are being replicated at a growing rate by companies around the world. They show that business is leading, and that it is increasingly time for governments – including our own – to do the same. Whether we can use this crisis as an opportunity to emerge in a better place to tackle other global challenges remains to be seen, but rests on many of the decisions that will continue to be taken in the months to come.

Kevin Rudd is a former Prime Minister of Australia and now President of the Asia Society Policy Institute in New York. 

Patrick Suckling is a Senior Fellow at the Asia Society Policy Institute; Senior Partner at Pollination (pollinationgroup.com), a specialist climate investment and advisory firm; and was Australia’s Ambassador for the Environment.

 

First published in the Sydney Morning Herald and The Age on 18 February 2020.

The post SMH: Stimulus Opportunity Knocks for Climate Action appeared first on Kevin Rudd.

,

CryptogramFriday Squid Blogging: Squid Found on Provincetown Sandbar

Headline: "Dozens of squid found on Provincetown sandbar." Slow news day.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramTwitter Hackers May Have Bribed an Insider

Motherboard is reporting that this week's Twitter hack involved a bribed insider. Twitter has denied it.

I have been taking press calls all day about this. And while I know everyone wants to speculate about the details of the hack, we just don't know -- and probably won't for a couple of weeks.

Dave HallIf You’re not Using YAML for CloudFormation Templates, You’re Doing it Wrong

In my last blog post, I promised a rant about using YAML for CloudFormation templates. Here it is. If you persevere to the end I’ll also show you how to convert your existing JSON based templates to YAML.

Many of the points I raise below don’t just apply to CloudFormation. They are general comments about why you should use YAML over JSON for configuration when you have a choice.

One criticism of YAML is its reliance on indentation. A lot of the code I write these days is Python, so indentation being significant is normal. Use a decent editor or IDE and this isn’t a problem. It doesn’t matter if you’re using JSON or YAML, you will want to validate and lint your files anyway. How else will you find that trailing comma in your JSON object?
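As a minimal illustration of that last point (a sketch of mine using Python's standard library; the post doesn't prescribe any particular linter), a strict JSON parser flags the trailing comma the moment you load the file:

```python
import json

# A template fragment with the classic trailing-comma mistake.
broken = '{"Description": "My stack",}'

try:
    json.loads(broken)
except json.JSONDecodeError as err:
    # The parser reports the exact position of the problem.
    print(f"JSON lint error at column {err.colno}: {err.msg}")
```

For YAML, a standalone linter such as yamllint fills the same role.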

Now we’ve got that out of the way, let me try to convince you to use YAML.

As developers we are regularly told that we need to document our code. CloudFormation is Infrastructure as Code. If it is code, then we need to document it. That starts with the Description property at the top of the file. If you use JSON for your templates, that’s it, you have no other opportunity to document your templates. On the other hand, if you use YAML you can add inline comments. Anywhere you need a comment, drop in a hash # and your comment. Your teammates will thank you.
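For example (a hypothetical template fragment of mine, not from the original post):

```yaml
Description: Build pipeline storage
Resources:
  # Holds build artifacts; deliberately not publicly accessible.
  ArtifactBucket:
    Type: AWS::S3::Bucket  # inline comments work too
```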

JSON templates don’t support multiline strings. These days many developers have 4K or ultra wide monitors, but we don’t want a string that spans the full width of our 34” screen. Text becomes harder to read once you exceed that “90ish” character limit. With JSON your multiline string becomes "[90ish-characters]\n[another-90ish-characters]\n[and-so-on]". If you opt for YAML, you can use the greater than symbol (>) and then start your multiline string like so:

Description: >
  This is the first line of my Description
  and it continues on my second line
  and I'll finish it on my third line.

As you can see, it is much easier to work with multiline strings in YAML than in JSON.

“Folded blocks” like the one above are created using the greater than symbol (>), which replaces new lines with spaces. This allows you to format your text in a more readable way, while still allowing a machine to use it as intended. If you want to preserve the new lines, use the pipe (|) to create a “literal block”. This is great for inline Lambda functions, where the code remains readable and maintainable.

  APIFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          import json
          import random


          def lambda_handler(event, context):
              return {"statusCode": 200, "body": json.dumps({"value": random.random()})}
      FunctionName: "GetRandom"
      Handler: "index.lambda_handler"
      MemorySize: 128
      Role: !GetAtt LambdaServiceRole.Arn
      Runtime: "python3.7"
      Timeout: 5

Both JSON and YAML require you to escape multibyte characters. That’s less of an issue with CloudFormation templates as generally you’re only using the ASCII character set.

In a YAML file you generally don’t need to quote your strings, but in JSON double quotes are used everywhere: keys, string values and so on. If your string contains a quote you need to escape it. The same goes for tabs, new lines, backslashes and so on. JSON based CloudFormation templates can be hard to read because of all the escaping. It also makes it harder to handcraft your JSON when your code is a long escaped string on a single line.
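To see the difference side by side (a contrived example of mine), YAML happily takes plain scalars that JSON would force you to quote and escape:

```yaml
# The equivalent JSON would be:
#   {"Path": "C:\\Users\\build", "Message": "She said \"done\" and left"}
Path: C:\Users\build
Message: She said "done" and left
```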

Some configuration in CloudFormation can only be expressed as JSON. Step Functions and some of the AppSync objects in CloudFormation only allow inline JSON configuration. You can still use a YAML template, and working with these objects is easier if you do.

The JSON only configuration needs to be inlined in your template. If you’re using JSON you have to supply this as an escaped string, rather than nested objects. If you’re using YAML you can inline it as a literal block. Both YAML and JSON templates support functions such as Sub being applied to these strings, but it is so much more readable in YAML. See this Step Function example lifted from the AWS documentation:

MyStateMachine:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    DefinitionString:
      !Sub |
        {
          "Comment": "A simple AWS Step Functions state machine that automates a call center support session.",
          "StartAt": "Open Case",
          "States": {
            "Open Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:open_case",
              "Next": "Assign Case"
            }, 
            "Assign Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:assign_case",
              "Next": "Work on Case"
            },
            "Work on Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:work_on_case",
              "Next": "Is Case Resolved"
            },
            "Is Case Resolved": {
                "Type" : "Choice",
                "Choices": [ 
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 1,
                    "Next": "Close Case"
                  },
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 0,
                    "Next": "Escalate Case"
                  }
              ]
            },
             "Close Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:close_case",
              "End": true
            },
            "Escalate Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:escalate_case",
              "Next": "Fail"
            },
            "Fail": {
              "Type": "Fail",
              "Cause": "Engage Tier 2 Support."    }   
          }
        }

If you’re feeling lazy you can use inline JSON for IAM policies that you’ve copied from elsewhere. It’s quicker than converting them to YAML.

YAML templates are smaller and more compact than the same configuration stored in a JSON based template. Smaller yet more readable is winning all round in my book.

If you’re still not convinced that you should use YAML for your CloudFormation templates, go read Amazon’s blog post from 2017 advocating the use of YAML based templates.

Amazon makes it easy to convert your existing templates from JSON to YAML. cfn-flip is a Python based AWS Labs tool for converting CloudFormation templates between JSON and YAML. I will assume you’ve already installed cfn-flip. Once you’ve done that, converting your templates with some automated cleanups is just a command away:

cfn-flip --clean template.json template.yaml

git rm the old JSON file, git add the new one, then git commit and git push your changes. Now you’re all set for your new life using YAML based CloudFormation templates.

If you want to learn more about YAML files in general, I recommend you check out Learn X in Y Minutes’ Guide to YAML. If you want to learn more about YAML based CloudFormation templates, check Amazon’s Guide to CloudFormation Templates.

LongNowLong Now partners with Avenues: The World School for year-long, online program on the future of invention

“The best way to predict the future is to invent it.” – Alan Kay

The Long Now Foundation has partnered with Avenues: The World School to offer a program on the past, present, and future of innovation. A fully online program for ages 17 and above, the Avenues Mastery Year is designed to equip aspiring inventors with the ability to: 

  • Conceive original ideas and translate those ideas into inventions through design and prototyping, 
  • Communicate the impact of the invention with an effective pitch deck and business plan, 
  • Ultimately file for and receive patent pending status with the United States Patent and Trademark Office. 

Applicants select a concentration in either Making and Design or Future Sustainability.

Participants will hack, reverse engineer, and re-invent a series of world-changing technologies such as the smartphone, bioplastics, and the photovoltaic cell, all while immersing themselves in curated readings about the origins and possible trajectories of great inventions. 

The Long Now Foundation will host monthly fireside chats for participants where special guests offer feedback, spark new ideas and insights, and share advice and wisdom. Confirmed guests include Kim Polese (Long Now Board Member), Alexander Rose (Long Now Executive Director and Board Member), Samo Burja (Long Now Research Fellow), Jason Crawford (Roots of Progress), and Nick Pinkston (Volition). Additional guests from the Long Now Board and community are being finalized over the coming weeks.

The goal of Avenues Mastery Year is to equip aspiring inventors with the technical skills and long-term perspective needed to envision and invent the future. Visit Avenues Mastery Year to learn more, or get in touch directly by writing to ama@avenues.org.

Worse Than FailureError'd: Not Applicable

"Why yes, I have always pictured myself as not applicable," Olivia T. wrote.

 

"Hey Amazon, now I'm no doctor, but you may need to reconsider your 'Choice' of Acetaminophen as a 'stool softener'," writes Peter.

 

Ivan K. wrote, "Initially, I balked at the price of my new broadband plan, but the speed is just so good that sometimes it's so fast that the reply packets arrive before I even send a request!"

 

"I wanted to check if a site was being slow and, well, I figured it was good time to go read a book," Tero P. writes.

 

Robin L. writes, "I just can't wait to try Edge!"

 

"Yeah, one car stays in the garage, the other is out there tailgating Starman," Keith wrote.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Krebs on SecurityWho’s Behind Wednesday’s Epic Twitter Hack?

Twitter was thrown into chaos on Wednesday after accounts for some of the world’s most recognizable public figures, executives and celebrities started tweeting out links to bitcoin scams. Twitter says the attack happened because someone tricked or coerced an employee into providing access to internal Twitter administrative tools. This post is an attempt to lay out some of the timeline of the attack, and point to clues about who may have been behind it.

The first public signs of the intrusion came around 3 PM EDT, when the Twitter account for the cryptocurrency exchange Binance tweeted a message saying it had partnered with “CryptoForHealth” to give back 5000 bitcoin to the community, with a link where people could donate or send money.

Minutes after that, similar tweets went out from the accounts of other cryptocurrency exchanges, and from the Twitter accounts for democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

While it may sound ridiculous that anyone would be fooled into sending bitcoin in response to these tweets, an analysis of the BTC wallet promoted by many of the hacked Twitter profiles shows that over the past 24 hours the account has processed 383 transactions and received almost 13 bitcoin — or approximately USD $117,000.

Twitter issued a statement saying it detected “a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools. We know they used this access to take control of many highly-visible (including verified) accounts and Tweet on their behalf. We’re looking into what other malicious activity they may have conducted or information they may have accessed and will share more here as we have it.”

There are strong indications that this attack was perpetrated by individuals who’ve traditionally specialized in hijacking social media accounts via “SIM swapping,” an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

People within the SIM swapping community are obsessed with hijacking so-called “OG” social media accounts. Short for “original gangster,” OG accounts typically are those with short profile names (such as @B or @joe). Possession of these OG accounts confers a measure of status and perceived influence and wealth in SIM swapping circles, as such accounts can often fetch thousands of dollars when resold in the underground.

In the days leading up to Wednesday’s attack on Twitter, there were signs that some actors in the SIM swapping community were selling the ability to change an email address tied to any Twitter account. In a post on OGusers — a forum dedicated to account hijacking — a user named “Chaewon” advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any twitter account.

“This is NOT a method, you will be given a full refund if for any reason you aren’t given the email/@, however if it is revered/suspended I will not be held accountable,” Chaewon wrote in their sales thread, which was titled “Pulling email for any Twitter/Taking Requests.”

Hours before any of the Twitter accounts for cryptocurrency platforms or public figures began blasting out bitcoin scams on Wednesday, the attackers appear to have focused their attention on hijacking a handful of OG accounts, including “@6.”

That Twitter account was formerly owned by Adrian Lamo — the now-deceased “homeless hacker” perhaps best known for breaking into the New York Times’s network and for reporting Chelsea Manning‘s theft of classified documents. @6 is now controlled by Lamo’s longtime friend, a security researcher and phone phreaker who asked to be identified in this story only by his Twitter nickname, “Lucky225.”

Lucky225 said that just before 2 p.m. EDT on Wednesday, he received a password reset confirmation code via Google Voice for the @6 Twitter account. Lucky said he’d previously disabled SMS notifications as a means of receiving multi-factor codes from Twitter, opting instead to have one-time codes generated by a mobile authentication app.

But because the attackers were able to change the email address tied to the @6 account and disable multi-factor authentication, the one-time authentication code was sent to both his Google Voice account and to the new email address added by the attackers.

“The way the attack worked was that within Twitter’s admin tools, apparently you can update the email address of any Twitter user, and it does this without sending any kind of notification to the user,” Lucky told KrebsOnSecurity. “So [the attackers] could avoid detection by updating the email address on the account first, and then turning off 2FA.”

Lucky said he hasn’t been able to review whether any tweets were sent from his account during the time it was hijacked because he still doesn’t have access to it (he has put together a breakdown of the entire episode at this Medium post).

But around the same time @6 was hijacked, another OG account – @B — was swiped. Someone then began tweeting out pictures of Twitter’s internal tools panel showing the @B account.

A screenshot of the hijacked OG Twitter account “@B,” shows the hijackers logged in to Twitter’s internal account tools interface.

Twitter responded by removing any tweets across its platform that included screenshots of its internal tools, and in some cases temporarily suspended the ability of those accounts to tweet further.

Another Twitter account — @shinji — also was tweeting out screenshots of Twitter’s internal tools. Minutes before Twitter terminated the @shinji account, it was seen publishing a tweet saying “follow @6,” referring to the account hijacked from Lucky225.

The account “@shinji” tweeting a screenshot of Twitter’s internal tools interface.

Cached copies of @Shinji’s tweets prior to Wednesday’s attack on Twitter are available here and here from the Internet Archive. Those caches show Shinji claims ownership of two OG accounts on Instagram — “j0e” and “dead.”

KrebsOnSecurity heard from a source who works in security at one of the largest U.S.-based mobile carriers, who said the “j0e” and “dead” Instagram accounts are tied to a notorious SIM swapper who goes by the nickname “PlugWalkJoe.” Investigators have been tracking PlugWalkJoe because he is thought to have been involved in multiple SIM swapping attacks over the years that preceded high-dollar bitcoin heists.

Archived copies of the @Shinji account on twitter shows one of Joe’s OG Instagram accounts, “Dead.”

Now look at the profile image in the other Archive.org index of the @shinji Twitter account (pictured below). It is the same image as the one included in the @Shinji screenshot above from Wednesday in which Joseph/@Shinji was tweeting out pictures of Twitter’s internal tools.

Image: Archive.org

This individual, the source said, was a key participant in a group of SIM swappers that adopted the nickname “ChucklingSquad,” and was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As Wired.com recounted, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

A tweet sent out from Twitter CEO Jack Dorsey’s account while it was hijacked shouted out to PlugWalkJoe and other Chuckling Squad members.

The mobile industry security source told KrebsOnSecurity that PlugWalkJoe in real life is a 21-year-old from Liverpool, U.K. named Joseph James O’Connor. The source said PlugWalkJoe is in Spain where he was attending a university until earlier this year. He added that PlugWalkJoe has been unable to return home on account of travel restrictions due to the COVID-19 pandemic.

The mobile industry source said PlugWalkJoe was the subject of an investigation in which a female investigator was hired to strike up a conversation with PlugWalkJoe and convince him to agree to a video chat. The source further explained that a video which they recorded of that chat showed a distinctive swimming pool in the background.

According to that same source, the pool pictured on PlugWalkJoe’s Instagram account (instagram.com/j0e) is the same one they saw in their video chat with him.

If PlugWalkJoe was in fact pivotal to this Twitter compromise, it’s perhaps fitting that he was identified in part via social engineering. Maybe we should all be grateful the perpetrators of this attack on Twitter did not set their sights on more ambitious aims, such as disrupting an election or the stock market, or attempting to start a war by issuing false, inflammatory tweets from world leaders.

Also, it seems clear that this Twitter hack could have let the attackers view the direct messages of anyone on Twitter, information that is difficult to put a price on but which nevertheless would be of great interest to a variety of parties, from nation states to corporate spies and blackmailers.

This is a fast-moving story. There were multiple people involved in the Twitter heist. Please stay tuned for further updates. KrebsOnSecurity would like to thank Unit 221B for their assistance in connecting some of the dots in this story.

Worse Than FailureCodeSOD: Because of the Implication

Even when you’re using TypeScript, you’re still bound by JavaScript’s type system. You’re also stuck with its object system, which means that each object is really just a dict, and there’s no guarantee that any object has any given key at runtime.

Madison sends us some TypeScript code that is, perhaps not strictly bad, in and of itself, though it certainly contains some badness. It is more of a symptom. It implies a WTF.

    private _filterEmptyValues(value: any): any {
        const filteredValue = {};
        Object.keys(value)
            .filter(key => {
                const v = value[key];

                if (v === null) {
                    return false;
                }
                if (v.von !== undefined || v.bis !== undefined) {
                    return (v.von !== null && v.von !== 'undefined' && v.von !== '') ||
                        (v.bis !== null && v.bis !== 'undefined' && v.bis !== '');
                }
                return (v !== 'undefined' && v !== '');

            }).forEach(key => {
            filteredValue[key] = value[key];
        });
        return filteredValue;
    }

At a guess, this code is meant to be used as part of prepping objects for being part of a request: clean out unused keys before sending or storing them. And as a core methodology, it’s not wrong, and it’s pretty similar to your standard StackOverflow solution to the problem. It’s just… forcing me to ask some questions.

Let’s trace through it. We start by doing an Object.keys to get all the fields on the object. We then filter to remove the “empty” ones.

First, if the value is null, that’s empty. That makes sense.

Then, if the value is an object which contains a von or bis property, we’ll do some more checks. This is a weird definition of “empty”, but fine. We’ll check that they’re both non-null, not an empty string, and not… 'undefined'.

Uh oh.

We then do a similar check on the value itself, to ensure it’s not an empty string, and not 'undefined'.

What this is telling me is that somewhere in processing, sometimes, the actual string “undefined” can be stored, and it’s meant to be treated as JavaScript’s type undefined. That probably shouldn’t be happening, and implies a WTF somewhere else.

Similarly, the von and bis check has to raise a few eyebrows. If an object contains these fields, these fields must contain a value to pass this check. Why? I have no idea.

In the end, this code isn’t the WTF itself, it’s all the questions that it raises that tell me the shape of the WTF. It’s like looking at a black hole: I can’t see the object itself, I can only see the effect it has on the space around it.
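To make the rules this method encodes easier to see, here's a rough translation into Python (my own sketch; the names mirror the original, and I've treated a missing von/bis bound as empty, which is arguably what the TypeScript should have done):

```python
def filter_empty_values(value: dict) -> dict:
    """Drop entries the original _filterEmptyValues would treat as 'empty'."""

    def is_set(v) -> bool:
        # Mirrors the repeated (x !== null && x !== 'undefined' && x !== '')
        return v not in (None, 'undefined', '')

    def keep(v) -> bool:
        if v is None:
            return False
        # Range-like objects with 'von'/'bis' bounds: keep if either is set.
        if isinstance(v, dict) and ('von' in v or 'bis' in v):
            return is_set(v.get('von')) or is_set(v.get('bis'))
        # Plain values: the literal string 'undefined' counts as empty too.
        return is_set(v)

    return {k: v for k, v in value.items() if keep(v)}
```

Seeing the string 'undefined' sit beside None in is_set makes the implied WTF jump out immediately.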

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

MEWindows 10 on Debian under KVM

Here are some things that you need to do to get Windows 10 running on a Debian host under KVM.

UEFI Booting

UEFI is big and complex, but most of what it does isn’t needed at all. If all you want to do is boot from an image of a disk with a GPT partition table then you just install the package ovmf and add something like the following to your KVM start script:

UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"

Note that some of the documentation on this doesn’t have the OVMF_VARS.fd file set to readonly. Allowing writes to that file means that the VM boot process (and maybe later) can change EFI variables that affect later boots and other VMs if they all share the same file. For a basic boot you don’t need to change variables so you want it read-only. Also having it read-only is necessary if you want to run KVM as non-root.

As an experiment I tried booting without the OVMF_VARS.fd file, it didn’t boot and then even after configuring it to use the OVMF_VARS.fd file again Windows gave a boot error about the “boot configuration data file” that required booting from recovery media. Apparently configuration mistakes with EFI can mess up the Windows installation, so be careful and backup the Windows installation regularly!

Linux can boot from EFI but you generally don’t want to unless the boot device is larger than 2TB. It’s relatively easy to convert a Linux installation on a GPT disk to a virtual image on a DOS partition table disk or on block devices without partition tables and that gives a faster boot. If the same person runs the host hardware and the VMs then the best choice for Linux is to have no partition tables just one filesystem per block device (which makes resizing much easier) and have the kernel passed as a parameter to kvm. So booting a VM from EFI is probably only useful for booting Windows VMs and for Linux boot loader development and testing.

As an aside, the Debian Wiki page about Secure Boot on a VM [4] was useful for this. It’s unfortunate that it and so much of the documentation about UEFI is about secure boot which isn’t so useful if you just want to boot a system without regard to the secure boot features.

Emulated IDE Disks

Debian kernels (and probably kernels from many other distributions) are compiled with the paravirtualised storage device drivers. Windows by default doesn’t support such devices so you need to emulate an IDE/SATA disk so you can boot Windows and install the paravirtualised storage driver. The following configuration snippet has a commented line for paravirtualised IO (which is fast) and an uncommented line for a virtual IDE/SATA disk that will allow an unmodified Windows 10 installation to boot.

#DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"
DRIVE="-drive id=disk,format=raw,file=/home/kvm/windows10,if=none -device ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0"

Spice Video

Spice is an alternative to VNC; here is the main web site for Spice [1]. Spice has many features that could be really useful for some people, like audio, sharing USB devices from the client, and streaming video support. I don’t have a need for those features right now but it’s handy to have options. My main reason for choosing Spice over VNC is that the mouse cursor in ssvnc doesn’t follow the actual mouse and can be difficult or impossible to click on items near the edges of the screen.

The following configuration will make the QEMU code listen with SSL on port 1234 on all IPv4 addresses. Note that this exposes the Spice password to anyone who can run ps on the KVM server, I’ve filed Debian bug #965061 requesting the option of a password file to address this. Also note that the “qxl” virtual video hardware is VGA compatible and can be expected to work with OS images that haven’t been modified for virtualisation, but that they work better with special video drivers.

KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl

To connect to the Spice server I installed the spice-client-gtk package in Debian and ran the following command:

spicy -h kvm.example.com -s 1234 -w xxxxxxxx

Note that this exposes the Spice password to anyone who can run ps on the system used as a client for Spice, I’ve filed Debian bug #965060 requesting the option of a password file to address this.

This configuration with an unmodified Windows 10 image only supported an 800*600 VGA display.

Networking

To set up bridged networking as non-root you need to do something like the following as root:

chgrp kvm /usr/lib/qemu/qemu-bridge-helper
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
mkdir -p /etc/qemu
echo "allow all" > /etc/qemu/bridge.conf
chgrp kvm /etc/qemu/bridge.conf
chmod 640 /etc/qemu/bridge.conf

Windows 10 supports the emulated Intel E1000 network card. Configuration like the following configures networking on a bridge named br0 with an emulated E1000 card. MAC addresses that have a 1 in the second least significant bit of the first octet are “locally administered” (like IPv4 addresses starting with “10.”), see the Wikipedia page about MAC Address for details.
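As a quick sanity check of that bit rule (a small Python helper of my own, not part of the original scripts):

```python
def is_locally_administered(mac: str) -> bool:
    """True if the second least significant bit of the first octet is set."""
    first_octet = int(mac.split(':')[0], 16)
    return bool(first_octet & 0b10)

print(is_locally_administered('02:00:00:00:01:2a'))  # True: the generated VM addresses
print(is_locally_administered('00:1a:2b:3c:4d:5e'))  # False: a globally unique (vendor) address
```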

The following is an example of network configuration where $ID is an ID number for the virtual machine. So far I haven’t come close to 256 VMs on one network so I’ve only needed one octet.

NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"

Final KVM Settings

KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
SPICE="-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl"

UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"

DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"

NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"

kvm -m 4000 -smp 2 $SPICE $UEFI $DRIVE $NET

Windows Settings

The Spice Download page has a link for “spice-guest-tools” that has the QXL video driver among other things [2]. This seems to be needed for resolutions greater than 800*600.

The Virt-Manager Download page has a link for “virt-viewer” which is the Spice client for Windows systems [3], they have MSI files for both i386 and AMD64 Windows.

It’s probably a good idea to set display and system to sleep after never (I haven’t tested what happens if you don’t do that, but there’s no benefit in sleeping). Before uploading an image I disabled the pagefile and set the partition to the minimum size so I had less data to upload.

Problems

Here are some things I haven’t solved yet.

The aSpice Android client for the Spice protocol fails to connect with the QEMU code at the server giving the following message on stderr: “error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:../ssl/record/rec_layer_s3.c:1544:SSL alert number 48“.

Spice is supposed to support dynamic changes to screen resolution on the VM to match the window size at the client, but this doesn’t work for me, not even with the Red Hat QXL drivers installed.

The Windows Spice client doesn’t seem to support TLS, I guess running some sort of proxy for TLS would work but I haven’t tried that yet.

,

CryptogramNSA on Securing VPNs

The NSA's Central Security Service -- that's the part that's supposed to work on defense -- has released two documents (a full and an abridged version) on securing virtual private networks. Some of it is basic, but it contains good information.

Maintaining a secure VPN tunnel can be complex and requires regular maintenance. To maintain a secure VPN, network administrators should perform the following tasks on a regular basis:

  • Reduce the VPN gateway attack surface
  • Verify that cryptographic algorithms are Committee on National Security Systems Policy (CNSSP) 15-compliant
  • Avoid using default VPN settings
  • Remove unused or non-compliant cryptography suites
  • Apply vendor-provided updates (i.e. patches) for VPN gateways and clients

Worse Than FailureCodeSOD: Dates by the Dozen

Before our regularly scheduled programming: Code & Supply, a developer community group we've collaborated with in the past, is running a salary survey to gauge the state of the industry. More responses are always helpful, so I encourage you to take a few minutes and pitch in.

Cid was recently combing through an inherited Java codebase, and it predates Java 8. That’s a fancy way of saying “there were no good date builtins, just a mess of cruddy APIs”. That’s not to say that there weren’t date builtins prior to Java 8; they were just bad.

Bad, but better than this. Cid sent along a lot of code, and instead of going through it all, let’s get to some of the “highlights”. Much of this is stuff we’ve seen variations on before, but it’s been combined in ways that really elevate the badness. There are dozens of these methods, of which we’re only going to look at a sample.

Let’s start with the String getLocalDate() method, which attempts to construct a timestamp in the form yyyyMMdd. As you can already predict, it does a bunch of string munging to get there, with blocks like:

switch (calendar.get(Calendar.MONTH)){
      case Calendar.JANUARY:
        sb.append("01");
        break;
      case Calendar.FEBRUARY:
        sb.append("02");
        break;
      …
}

Plus, we get the added bonus of one of those delightful “how do I pad an integer out to two digits?” blocks:

if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
  sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
}
else {
  sb.append(calendar.get(Calendar.DAY_OF_MONTH));
}
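Just for contrast: on any Java version that zero-padding is a one-line format call, and on Java 8+ the whole yyyyMMdd method collapses to a single formatter. A sketch (not from this codebase):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class LocalDateSketch {
    public static void main(String[] args) {
        // The entire switch-and-pad dance, as one formatter call.
        // BASIC_ISO_DATE produces exactly the yyyyMMdd form the method builds by hand.
        String stamp = LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
        System.out.println(stamp);

        // Even pre-Java-8, the two-digit padding alone is just a format string.
        String day = String.format("%02d", 7); // "07"
        System.out.println(day);
    }
}
```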

Elsewhere, they expect a timestamp to be in the form yyyyMMddHHmmssZ, so they wrote a handy void checkTimestamp method. Wait, void you say? Shouldn’t it be boolean?

Well here’s the full signature:

public static void checkTimestamp(String timestamp, String name)
  throws IOException

Why return a boolean when you can throw an exception on bad input? Unless the bad input is a null, in which case:

if (timestamp == null) {
  return;
}

Nulls are valid timestamps, which is useful to know. We next get a lovely block of checking each character to ensure that they’re digits, and a second check to ensure that the last is the letter Z, which turns out to be double work, since the very next step is:

int year = Integer.parseInt(timestamp.substring(0,4));
int month = Integer.parseInt(timestamp.substring(4,6));
int day = Integer.parseInt(timestamp.substring(6,8));
int hour = Integer.parseInt(timestamp.substring(8,10));
int minute = Integer.parseInt(timestamp.substring(10,12));
int second = Integer.parseInt(timestamp.substring(12,14));

Followed by a validation check for day and month:

if (day < 1) {
  throw new IOException(msg);
}
if ((month < 1) || (month > 12)) {
  throw new IOException(msg);
}
if (month == 2) {
  if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
    if (day > 29) {
      throw new IOException(msg);
    }
  }
  else {
    if (day > 28) {
      throw new IOException(msg);
  }
  }
}
if (month == 1 || month == 3 || month == 5 || month == 7
|| month == 8 || month == 10 || month == 12) {
  if (day > 31) {
    throw new IOException(msg);
  }
}
if (month == 4 || month == 6 || month == 9 || month == 11) {
  if (day > 30) {
    throw new IOException(msg);
  }
}

The upshot is they at least got the logic right.

What’s fun about this is that the original developer never once considered “maybe I need an intermediate data structure besides a string to manipulate dates”. Nope, we’re just gonna munge that string all day. And that is our entire plan for all date operations, which brings us to the real exciting part, where this transcends from “just regular old bad date code” into full on WTF territory.

Would you like to see how they handle adding units of time? Like days?

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

The only thing I can say is that here they realized that “hey, wait, maybe I can modularize” and figured out how to stuff their “how many days are in a month” logic into getDaysOfMonth, which you can see invoked above.

Beyond that, they manually handle carrying, and never once pause to think, “hey, maybe there’s a better way”.

And speaking of repeating code, guess what: there’s also a public static String additionOfSeconds(String timestamp, int intervall) method.

There are dozens of similar methods; Cid has only provided us a sample. Cid adds:

This particular developer didn’t trust in too fine modularization and code reusing (DRY!). So for every of this dozen of methods, he has implemented these date parsing/formatting algorithms again and again! And no, not just copy/paste; every time it is a real wheel-reinvention. The code blocks and the position of single code lines look different for every method.

Once Cid got too frustrated by this code, they went and reimplemented it in modern Java date APIs, shrinking the codebase by hundreds of lines.
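A modern replacement along the lines Cid describes would look roughly like this, a sketch using java.time and assuming the yyyyMMddHHmmss'Z' format above (this is not Cid's actual code):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class Timestamps {
    private static final DateTimeFormatter FMT =
        DateTimeFormatter.ofPattern("yyyyMMddHHmmss'Z'");

    // Replaces checkTimestamp: parsing validates the fields for us,
    // throwing DateTimeParseException on bad input.
    static LocalDateTime parse(String timestamp) {
        return LocalDateTime.parse(timestamp, FMT);
    }

    // Replaces additionOfDays: no manual carrying, no leap-year tables.
    static String addDays(String timestamp, int days) {
        return parse(timestamp).plusDays(days).format(FMT);
    }

    // Replaces additionOfSeconds, including the rollover into days.
    static String addSeconds(String timestamp, int seconds) {
        return parse(timestamp).plusSeconds(seconds).format(FMT);
    }

    public static void main(String[] args) {
        System.out.println(addDays("20200229120000Z", 1));     // leap day handled for free
        System.out.println(addSeconds("20200131235959Z", 1));  // rolls into February
    }
}
```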

The full blob of code Cid sent in follows, for your “enjoyment”:

public static String getLocalDate() {
  TimeZone tz = TimeZone.getDefault();
  GregorianCalendar calendar = new GregorianCalendar(tz);
  calendar.setTime(new Date());
  StringBuffer sb = new StringBuffer();
  sb.append(calendar.get(Calendar.YEAR));
  switch (calendar.get(Calendar.MONTH)){
    case Calendar.JANUARY:
      sb.append("01");
      break;
    case Calendar.FEBRUARY:
      sb.append("02");
      break;
    case Calendar.MARCH:
      sb.append("03");
      break;
    case Calendar.APRIL:
      sb.append("04");
      break;
    case Calendar.MAY:
      sb.append("05");
      break;
    case Calendar.JUNE:
      sb.append("06");
      break;
    case Calendar.JULY:
      sb.append("07");
      break;
    case Calendar.AUGUST:
      sb.append("08");
      break;
    case Calendar.SEPTEMBER:
      sb.append("09");
      break;
    case Calendar.OCTOBER:
      sb.append("10");
      break;
    case Calendar.NOVEMBER:
      sb.append("11");
      break;
    case Calendar.DECEMBER:
      sb.append("12");
      break;
  }
  if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
    sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
  }
  else {
    sb.append(calendar.get(Calendar.DAY_OF_MONTH));
  }
  return sb.toString();
}

public static void checkTimestamp(String timestamp, String name)
throws IOException {
  if (timestamp == null) {
    return;
  }
  String msg = new String(
      "Wrong date or time. (" + name + "=\"" + timestamp + "\")");
  int len = timestamp.length();
  if (len != 15) {
    throw new IOException(msg);
  }
  for (int i = 0; i < (len - 1); i++) {
    if (! Character.isDigit(timestamp.charAt(i))) {
      throw new IOException(msg);
    }
  }
  if (timestamp.charAt(len - 1) != 'Z') {
    throw new IOException(msg);
  }
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  if (day < 1) {
    throw new IOException(msg);
  }
  if ((month < 1) || (month > 12)) {
    throw new IOException(msg);
  }
  if (month == 2) {
    if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
      if (day > 29) {
        throw new IOException(msg);
      }
    }
    else {
      if (day > 28) {
        throw new IOException(msg);
    }
    }
  }
  if (month == 1 || month == 3 || month == 5 || month == 7
  || month == 8 || month == 10 || month == 12) {
    if (day > 31) {
      throw new IOException(msg);
    }
  }
  if (month == 4 || month == 6 || month == 9 || month == 11) {
    if (day > 30) {
      throw new IOException(msg);
    }
  }
  if ((hour < 0) || (hour > 24)) {
    throw new IOException(msg);
  }
  if ((minute < 0) || (minute > 59)) {
    throw new IOException(msg);
  }
  if ((second < 0) || (second > 59)) {
    throw new IOException(msg);
  }
}

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

public static String additionOfSeconds(String timestamp, int intervall) {
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  int new_second = (second + intervall) % 60;
  int minute_intervall = (second + intervall) / 60;
  int new_minute = (minute + minute_intervall) % 60;
  int hour_intervall = (minute + minute_intervall) / 60;
  int new_hour = (hour + hour_intervall) % 24;
  int day_intervall = (hour + hour_intervall) / 24;
  StringBuffer new_time = new StringBuffer();
  if (new_hour < 10) {
    new_time.append("0" + new_hour + "");
  }
  else {
    new_time.append("" + new_hour + "");
  }
  if (new_minute < 10) {
    new_time.append("0" + new_minute + "");
  }
  else {
    new_time.append("" + new_minute + "");
  }
  if (new_second < 10) {
    new_time.append("0" + new_second + "");
  }
  else {
    new_time.append("" + new_second + "");
  }
  if (day_intervall > 0) {
    return additionOfDays(timestamp.substring(0,8) + new_time.toString() + "Z", day_intervall);
  }
  else {
    return (timestamp.substring(0,8) + new_time.toString() + "Z");
  }
}

public static int getDaysOfMonth(int year, int month) {
  int lastDayOfMonth = 31;
  switch (month) {
    case 1: case 3: case 5: case 7: case 8: case 10: case 12:
      lastDayOfMonth = 31;
      break;
    case 2:
      if ((year % 4 == 0 && year % 100 != 0) || year %400 == 0) {
        lastDayOfMonth = 29;
      }
      else {
        lastDayOfMonth = 28;
      }
      break;
    case 4: case 6: case 9: case 11:
      lastDayOfMonth = 30;
      break;
  }
  return lastDayOfMonth;
}

CryptogramEFF's 30th Anniversary Livestream

It's the EFF's 30th birthday, and the organization is having a celebratory livestream today from 3:00 to 10:00 pm PDT.

There are a lot of interesting discussions and things. I am having a fireside chat at 4:10 pm PDT to talk about the Crypto Wars and more.

Stop by. And thank you for supporting EFF.

EDITED TO ADD: This event is over, but you can watch a recorded version on YouTube.

,

Krebs on Security‘Wormable’ Flaw Leads July Microsoft Patches

Microsoft today released updates to plug a whopping 123 security holes in Windows and related software, including fixes for a critical, “wormable” flaw in Windows Server versions that Microsoft says is likely to be exploited soon. While this particular weakness mainly affects enterprises, July’s care package from Redmond has a little something for everyone. So if you’re a Windows (ab)user, it’s time once again to back up and patch up (preferably in that order).

Top of the heap this month in terms of outright scariness is CVE-2020-1350, which concerns a remotely exploitable bug in more or less all versions of Windows Server that attackers could use to install malicious software simply by sending a specially crafted DNS request.

Microsoft said it is not aware of reports that anyone is exploiting the weakness (yet), but the flaw has been assigned a CVSS score of 10, which translates to “easy to attack” and “likely to be exploited.”

“We consider this to be a wormable vulnerability, meaning that it has the potential to spread via malware between vulnerable computers without user interaction,” Microsoft wrote in its documentation of CVE-2020-1350. “DNS is a foundational networking component and commonly installed on Domain Controllers, so a compromise could lead to significant service interruptions and the compromise of high level domain accounts.”

CVE-2020-1350 is just the latest worry for enterprise system administrators in charge of patching dangerous bugs in widely-used software. Over the past couple of weeks, fixes for flaws with high severity ratings have been released for a broad array of software products typically used by businesses, including Citrix, F5, Juniper, Oracle and SAP. This at a time when many organizations are already short-staffed and dealing with employees working remotely thanks to the COVID-19 pandemic.

The Windows Server vulnerability isn’t the only nasty one addressed this month that malware or malcontents can use to break into systems without any help from users. A full 17 other critical flaws fixed in this release tackle security weaknesses that Microsoft assigned its most dire “critical” rating, such as in Office, Internet Exploder, SharePoint, Visual Studio, and Microsoft’s .NET Framework.

Some of the more eyebrow-raising critical bugs addressed this month include CVE-2020-1410, which according to Recorded Future concerns the Windows Address Book and could be exploited via a malicious vcard file. Then there’s CVE-2020-1421, which protects against potentially malicious .LNK files (think Stuxnet) that could be exploited via an infected removable drive or remote share. And we have the dynamic duo of CVE-2020-1435 and CVE-2020-1436, which involve problems with the way Windows handles images and fonts that could both be exploited to install malware just by getting a user to click a booby-trapped link or document.

Not to say flaws rated “important” as opposed to critical aren’t also a concern. Chief among those is CVE-2020-1463, a problem within Windows 10 and Server 2016 or later that was detailed publicly prior to this month’s Patch Tuesday.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for a particular Windows update to hose one’s system or prevent it from booting properly, and some updates even have been known to erase or corrupt files. Last month’s bundle of joy from Microsoft sent my Windows 10 system into a perpetual crash state. Thankfully, I was able to restore from a recent backup.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

Also, keep in mind that Windows 10 is set to apply patches on its own schedule, which means if you delay backing up you could be in for a wild ride. If you wish to ensure the operating system has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches whenever it sees fit, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips. Also, keep an eye on the AskWoody blog from Woody Leonhard, who keeps a reliable lookout for buggy Microsoft updates each month.

CryptogramEnigma Machine for Sale

A four-rotor Enigma machine -- with rotors -- is up for auction.

CryptogramHalf a Million IoT Passwords Leaked

It is amazing that this sort of thing can still happen:

...the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.

Telnet? Default passwords? In 2020?

We have a long way to go to secure the IoT.

EDITED TO ADD (7/14): Apologies, but I previously blogged this story in January.

Worse Than FailureRepresentative Line: An Exceptional Leader

IniTech’s IniTest division makes a number of hardware products, like a protocol analyzer which you can plug into a network and use to monitor data in transport. As you can imagine, it involves a fair bit of software, and it involves a fair bit of hardware. Since it’s a testing and debugging tool, reliability, accuracy, and stability are the watchwords of the day.

Which is why the software development process was overseen by Russel. Russel was the “Alpha Geek”, blessed by the C-level to make sure that the software was up to snuff. This led to some conflict- Russel had a bad habit of shoulder-surfing his fellow developers and telling them what to type- but otherwise worked very well. Foibles aside, Russel was technically competent, knew the problem domain well, and had a clean, precise, and readable coding style which all the other developers tried to imitate.

It was that last bit which got Ashleigh’s attention. Because, scattered throughout the entire C# codebase, there are exception handlers which look like this:

try
{
	// some code, doesn't matter what
	// ...
}
catch (Exception ex)
{
   ex = ex;
}

This isn’t the sort of thing which one developer did. Nearly everyone on the team had a commit like that, and when Ashleigh asked about it, she was told “It’s just a best practice. We’re following Russel’s lead. It’s for debugging.”

Ashleigh asked Russel about it, but he just grumbled and had no interest in talking about it beyond, “Just… do it if it makes sense to you, or ignore it. It’s not necessary.”

If it wasn’t necessary, why was it so common in the codebase? Why was everyone “following Russel’s lead”?

Ashleigh tracked down the original commit which started this pattern. It was made by Russel, but the exception handler had one tiny, important difference:

catch (Exception ex)
{
   ex = ex; //putting this here to set a breakpoint
}

Yes, this was just a bit of debugging code. It was never meant to be committed. Russel pushed it into the main history by accident, and the other developers saw it, and thought to themselves, “If Russel does it, it must be the right thing to do,” and started copying him.

By the time Russel noticed what was going on, it was too late. The standard had been set while he wasn’t looking, and whether it was ego or cowardice, Russel just could never get the team to follow his lead away from the pointless pattern.


MEDebian PPC64EL Emulation

In my post on Debian S390X Emulation [1] I mentioned having problems booting a Debian PPC64EL kernel under QEMU. Giovanni commented that they had PPC64EL working and gave a link to their site with Debian QEMU images for various architectures [2]. I tried their image which worked then tried mine again which also worked – it seemed that a recent update in Debian/Unstable fixed the bug that made QEMU not work with the PPC64EL kernel.

Here are the instructions on how to do it.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/ppc
mkfs.ext4 /vmstore/ppc
mount -o loop /vmstore/ppc /mnt/tmp

Then visit the Debian Netinst page [3] to download the PPC64EL net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The package qemu-system-ppc has the program for emulating a PPC64LE system, and the qemu-user-static package has the program for emulating PPC64LE for a single program (i.e. a statically linked program or a chroot environment); you need the latter to run debootstrap. The following commands should be most of what you need.

apt install qemu-system-ppc qemu-user-static

update-binfmts --display

# qemu ppc64 needs exec stack to solve "Could not allocate dynamic translator buffer"
# so enable that on SE Linux systems
setsebool -P allow_execstack 1

debootstrap --foreign --arch=ppc64el --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

cat << END > /mnt/tmp/etc/apt/sources.list
deb http://mirror.internode.on.net/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

echo ppc64 > /mnt/tmp/etc/hostname

# /usr/bin/awk: error while loading shared libraries: cannot restore segment prot after reloc: Permission denied
# only needed for chroot
setsebool allow_execmod 1

chroot /mnt/tmp apt update
# why aren't they in the default install?
chroot /mnt/tmp apt install perl dialog
chroot /mnt/tmp apt dist-upgrade
chroot /mnt/tmp apt install bash-completion locales man-db openssh-server build-essential systemd-sysv ifupdown vim ca-certificates gnupg
# install kernel last because systemd install rebuilds initrd
chroot /mnt/tmp apt install linux-image-ppc64el
chroot /mnt/tmp dpkg-reconfigure locales
chroot /mnt/tmp passwd

cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

mkdir /mnt/tmp/root/.ssh
chmod 700 /mnt/tmp/root/.ssh
cp ~/.ssh/id_rsa.pub /mnt/tmp/root/.ssh/authorized_keys
chmod 600 /mnt/tmp/root/.ssh/authorized_keys

rm /mnt/tmp/vmlinux* /mnt/tmp/initrd*
mkdir /boot/ppc64
cp /mnt/tmp/boot/[vi]* /boot/ppc64

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Here is an example script for starting kvm. It can be run by any user that can read /etc/qemu/bridge.conf.

#!/bin/bash
set -e

KERN="-kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le"

# single network device, can have multiple
NET="-device e1000,netdev=net0,mac=02:02:00:00:01:04 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"

# random number generator for fast start of sshd etc
RNG="-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"

# I have lockdown because it does no harm now and is good for future kernels
# I enable SE Linux everywhere
KERNCMD="net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality"

kvm -drive format=raw,file=/vmstore/ppc,if=virtio $RNG -nographic -m 1024 -smp 2 $KERN -curses -append "$KERNCMD" $NET

,

Krebs on SecurityBreached Data Indexer ‘Data Viper’ Hacked

Data Viper, a security startup that provides access to some 15 billion usernames, passwords and other information exposed in more than 8,000 website breaches, has itself been hacked and its user database posted online. The hackers also claim they are selling on the dark web roughly 2 billion records Data Viper collated from numerous breaches and data leaks, including data from several companies that likely either do not know they have been hacked or have not yet publicly disclosed an intrusion.

The apparent breach at St. Louis, Mo. based Data Viper offers a cautionary and twisted tale of what can happen when security researchers seeking to gather intelligence about illegal activity online get too close to their prey or lose sight of their purported mission. The incident also highlights the often murky area between what’s legal and ethical in combating cybercrime.

Data Viper is the brainchild of Vinny Troia, a security researcher who runs a cyber threat intelligence company called Night Lion Security. Since its inception in 2018, Data Viper has billed itself as a “threat intelligence platform designed to provide organizations, investigators and law enforcement with access to the largest collection of private hacker channels, pastes, forums and breached databases on the market.”

Many private companies sell access to such information to vetted clients — mainly law enforcement officials and anti-fraud experts working in security roles at major companies that can foot the bill for these often pricey services.

Data Viper has sought to differentiate itself by advertising “access to private and undisclosed breach data.” As KrebsOnSecurity noted in a 2018 story, Troia has acknowledged posing as a buyer or seller on various dark web forums as a way to acquire old and newly-hacked databases from other forum members.

But this approach may have backfired over the weekend, when someone posted to the deep web a link to an “e-zine” (electronic magazine) describing the Data Viper hack and linking to the Data Viper user base. The anonymous poster alleged he’d been inside Data Viper for months and had exfiltrated hundreds of gigabytes of breached data from the service without notice.

The intruder also linked to several dozen new sales threads on the dark web site Empire Market, where they advertise the sale of hundreds of millions of account details from dozens of leaked or hacked website databases that Data Viper allegedly acquired via trading with others on cybercrime forums.

An online post by the attackers who broke into Data Viper.

Some of the databases for sale tie back to known, publicly reported breaches. But others correspond to companies that do not appear to have disclosed a security incident. As such, KrebsOnSecurity is not naming most of those companies and is currently attempting to ascertain the validity of the claims.

KrebsOnSecurity did speak with Victor Ho, the CEO of Fivestars.com, a company that helps smaller firms run customer loyalty programs. The hackers claimed they are selling 44 million records taken from Fivestars last year. Ho said he was unaware of any data security incident and that no such event had been reported to his company, but that Fivestars is now investigating the claims. Ho allowed that the number of records mentioned in the dark web sales thread roughly matches the number of users his company had last year.

But on Aug. 3, 2019, Data Viper’s Twitter account casually noted, “FiveStars — 44m breached records added – incl Name, Email, DOB.” The post, buried among a flurry of similar statements about huge caches of breached personal information added to Data Viper, received hardly any attention and garnered just one retweet.

GNOSTIC PLAYERS, SHINY HUNTERS

Reached via Twitter, Troia acknowledged that his site had been hacked, but said the attackers only got access to the development server for Data Viper, and not the more critical production systems that power the service and which house his index of compromised credentials.

Troia said the people responsible for compromising his site are the same people who hacked the databases they are now selling on the dark web and claiming to have obtained exclusively from his service.

What’s more, Troia believes the attack was a preemptive strike in response to a keynote he’s giving in Boston this week: On June 29, Troia tweeted that he plans to use the speech to publicly expose the identities of the hackers, who he suspects are behind a large number of website break-ins over the years.

Hacked or leaked credentials are prized by cybercriminals engaged in “credential stuffing,” a rampant form of cybercrime that succeeds when people use the same passwords across multiple websites. Armed with a list of email addresses and passwords from a breached site, attackers will then automate login attempts using those same credentials at hundreds of other sites.

Password re-use becomes orders of magnitude more dangerous when website developers engage in this unsafe practice. Indeed, a January 2020 post on the Data Viper blog suggests credential stuffing is exactly how the group he plans to discuss in his upcoming talk perpetrated their website compromises.

In that post, Troia wrote that the hacker group, known variously as “Gnostic Players” and “Shiny Hunters,” plundered countless website databases using roughly the same method: Targeting developers using credential stuffing attacks to log into their GitHub accounts.

“While there, they would pillage the code repositories, looking for AWS keys and similar credentials that were checked into code repositories,” Troia wrote.

Troia said the intrusion into his service wasn’t the result of the credential re-use, but instead because his developer accidentally left his credentials exposed in documents explaining how customers can use Data Viper’s application programming interface.

“I will say the irony of how they got in is absolutely amazing,” Troia said. “But all of this stuff they claim to be selling is [databases] they were already selling. All of this is from Gnostic players. None of it came from me. It’s all for show to try and discredit my report and my talk.”

Troia said he didn’t know how many of the databases Gnostic Players claimed to have obtained from his site were legitimate hacks or even public yet.

“As for public reporting on the databases, a lot of that will be in my report Wednesday,” he said. “All of my ‘reporting’ goes to the FBI.”

SMOKE AND MIRRORS

The e-zine produced by the Data Viper hackers claimed that Troia used many nicknames on various cybercrime forums, including the moniker “Exabyte” on OGUsers, a forum that’s been closely associated with account takeovers.

In a conversation with KrebsOnSecurity, Troia acknowledged that this Exabyte attribution was correct, noting that he was happy about the exposure because it further solidified his suspicions about who was responsible for hacking his site.

This is interesting because some of the hacked databases the intruders claimed to have acquired after compromising Data Viper correspond to discoveries credited to Troia in which companies inadvertently exposed tens of millions of user details by leaving them publicly accessible online at cloud services like Amazon’s EC2.

For example, in March 2019, Troia said he’d co-discovered a publicly accessible database containing 150 gigabytes of plaintext marketing data — including 763 million unique email addresses. The data had been exposed online by Verifications.io, an email validation firm.

On Oct 12, 2019, a new user named Exabyte registered on RaidForums — a site dedicated to sharing hacked databases and tools to perpetrate credential stuffing attacks. That Exabyte account was registered less than two weeks after Troia created his Exabyte identity on OGUsers. The Exabyte on RaidForums posted on Dec. 26, 2019 that he was providing the community with something of a belated Christmas present: 200 million accounts leaked from Verifications.io.

“Verifications.io is finally here!” Exabyte enthused. “This release contains 69 of 70 of the original verifications.io databases, totaling 200+ million accounts.”

Exabyte’s offer of the Verifications.io database on RaidForums.

In May 2018, Troia was featured in Wired.com and many other publications after discovering that sales intelligence firm Apollo left 125 million email addresses and nine billion data points publicly exposed in a cloud service. As I reported in 2018, prior to that disclosure Troia had sought my help in identifying the source of the exposed data, which he’d initially and incorrectly concluded was exposed by LinkedIn.com. Rather, Apollo had scraped and collated the data from many different sites, including LinkedIn.

Then in August 2018, someone using the nickname “Soundcard” posted a sales thread to the now-defunct Kickass dark web forum offering the personal information of 212 million LinkedIn users in exchange for two bitcoin (then the equivalent of ~$12,000 USD). Incredibly, Troia had previously told me that he was the person behind that Soundcard identity on the Kickass forum.

Soundcard, a.k.a. Troia, offering to sell what he claimed was all of LinkedIn’s user data, on the Dark Web forum Kickass.

Asked about the Exabyte posts on RaidForums, Troia said he wasn’t the only one who had access to the Verifications.io data, and that the full scope of what’s been going on would become clearer soon.

“More than one person can have the same name ‘Exabyte’,” Troia said. “So much from both sides you are seeing is smoke and mirrors.”

Smoke and mirrors, indeed. It’s entirely possible this incident is an elaborate and cynical PR stunt by Troia to somehow spring a trap on the bad guys. Troia recently published a book on threat hunting, and on page 360 (PDF) he describes how he previously staged a hack against his own site and then bragged about the fake intrusion on cybercrime forums in a bid to gather information about specific cybercriminals who took the bait — the same people, by the way, he claims are behind the attack on his site.

MURKY WATERS

While the trading of hacked databases may not technically be illegal in the United States, it’s fair to say the U.S. Department of Justice (DOJ) takes a dim view of those who operate services marketed to cybercriminals.

In January 2020, U.S. authorities seized the domain of WeLeakInfo.com, an online service that for three years sold access to data hacked from other websites. Two men were arrested in connection with that seizure. In February 2017, the Justice Department took down LeakedSource, a service that operated similarly to WeLeakInfo.

The DOJ recently released guidance (PDF) to help threat intelligence companies avoid the risk of prosecution when gathering and purchasing data from illicit sources online. The guidelines suggest that some types of intelligence gathering — particularly exchanging ill-gotten information with others on crime forums as a way to gain access to other data or to increase one’s status on the forum — could be especially problematic.

“If a practitioner becomes an active member of a forum and exchanges information and communicates directly with other forum members, the practitioner can quickly become enmeshed in illegal conduct, if not careful,” reads the Feb. 2020 DOJ document.

The document continues:

“It may be easier for an undercover practitioner to extract information from sources on the forum who have learned to trust the practitioner’s persona, but developing trust and establishing bona fides as a fellow criminal may involve offering useful information, services, or tools that can be used to commit crimes.”

“Engaging in such activities may well result in violating federal criminal law. Whether a crime has occurred usually hinges on an individual’s actions and intent. A practitioner must avoid doing anything that furthers the criminal objectives of others on the forums. Even though the practitioner has no intention of committing a crime, assisting others engaged in criminal conduct can constitute the federal offense of aiding and abetting.”

“An individual may be found liable for aiding and abetting a federal offense if he or she takes an affirmative act — even an act that is lawful on its own — that is in furtherance of the crime and conducted with the intent of facilitating the crime’s commission.”

Cory DoctorowFull Employment

This week’s podcast is a reading of Full Employment, my latest Locus column. It’s a counter to the argument about automation-driven unemployment – namely, that we will have hundreds of years of full employment facing the climate emergency and remediating the damage it wreaks. From relocating all our coastal cities to replacing aviation routes with high-speed rail to the caring and public health work for hundreds of millions of survivors of plagues, floods and fires, we are in no danger of running out of work. The real question is: how will we mobilize people to do the work needed to save our species and the only known planet in the entire universe that can sustain it?

MP3

CryptogramA Peek into the Fake Review Marketplace

A personal account of someone who was paid to buy products on Amazon and leave fake reviews.

Fake reviews are one of the problems that everyone knows about, and no one knows what to do about -- so we all try to pretend it doesn't exist.

Kevin RuddGlobal TV: Sino-Canadian Relations

INTERVIEW VIDEO
GLOBAL TV CANADA
‘WEST BLOCK’
RECORDED 10 JULY 2020
BROADCAST 12 JULY 2020

The post Global TV: Sino-Canadian Relations appeared first on Kevin Rudd.

Worse Than FailureA Revolutionary Vocabulary

Changing the course of a large company is much like steering the Titanic: it's probably too late, it's going to end in tears, and for some reason there's going to be a spirited debate about the buoyancy and stability of the doors.

Shena works at Initech, which is already a gigantic, creaking organization on the verge of toppling over. Management recognizes the problems, and knows something must be done. They are not, however, particularly clear about what that something should actually be, so they handed the Project Management Office a budget, told them to bring in some consultants, and do something.

The PMO dutifully reviewed the list of trendy buzzwords in management magazines, evaluated their budget, and brought in a team of consultants to "Establish a culture of continuous process improvement" that would "implement Agile processes" and "break down silos" to ensure "high functioning teams that can successfully self-organize to meet institutional objectives on time and on budget" using "the best-in-class tools" to support the transition.

Any sort of organizational change is potentially scary, to at least some of the staff. No matter how toxic or dysfunctional an organization is, there's always someone who likes the status quo. There was a fair bit of resistance, but the consultants and PMO were empowered to deal with them, laying off the fortunate, or securing promotions to vaguely-defined make-work jobs for the deeply unlucky.

There were a handful of true believers, the sort of people who had landed in their boring corporate gig years before, and had spent their time gently suggesting that things could possibly be better, slightly. They saw the changes as an opportunity, at least until they met the reality of trying to actually commit to changes in an organization the size of Initech.

The real hazard, however, were the members of the Project Management Office who didn't actually care about Initech, their peers, or process change: they cared about securing their own little fiefdom of power. People like Debbie, who before the consultants came, had created a series of "Project Checkpoint Documents". Each project was required to fill out the 8 core documents, before any other work began, and Debbie was the one who reviewed them- which meant projects didn't progress without her say-so. Or Larry, who was a developer before moving into project management, and thus was in charge of the code review processes for the entire company, despite not having written anything in a language newer than COBOL85.

Seeing that the organizational changes would threaten their power, people like Debbie or Larry did the only thing they could do: they enthusiastically embraced the changes and labeled themselves the guardians of the revolution. They didn't need to actually do anything good, they didn't need to actually facilitate the changes, they just needed to show enthusiasm and look busy, and generate the appearance that they were absolutely critical to the success of the transition.

Debbie, specifically, got herself very involved in driving the adoption of Jira as their ticket tracking tool, instead of the hodge-podge of Microsoft Project, spreadsheets, emails, and home-grown ticketing systems. Since this involved changing the vocabulary they used to talk about projects, it meant Debbie could spend much of her time policing the language used to describe projects. She ran trainings to explain what an "Epic" or a "Story" were, about how to "rightsize stories so you can decompose them into actionable tasks". But everything was in flux, which meant the exact way Initech developers were meant to use Jira kept changing, almost on a daily basis.

Which is why Shena eventually received this email from the Project Management Office.

Teams,

As part of our process improvement efforts, we'll be making some changes to how we track work in JIRA. Epics are now to only be created by leadership. They will represent mission-level initiatives that we should all strive for. For all development work tracking, the following shall be the process going forward to account for the new organizational communication directive:

  • Treat Features as Epics
  • Treat Stories as Features
  • Treat Tasks as Stories
  • Treat Sub-tasks as Tasks
  • If you need Sub-tasks, create a spreadsheet to track them within your team.

Additionally, the following is now reflected in the status workflows and should be adhered to:

  • Features may not be deleted once created. Instead, use the Cancel functionality.
  • Cancelled tasks will be marked as Done
  • Done tasks should now be marked as Complete

As she read this glorious and transcendent piece of Newspeak, Shena couldn't help but wonder about her laid-off co-workers, and wonder if perhaps she shouldn't join them.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

CryptogramFriday Squid Blogging: China Closing Its Squid Spawning Grounds

China is prohibiting squid fishing in two areas -- both in international waters -- for two seasons, to give squid time to recover and reproduce.

This is the first time China has voluntarily imposed a closed season on the high seas. Some experts regard it as an important step forward in China's management of distant-water fishing (DWF), and crucial for protecting the squid fishing industry. But others say the impact will be limited and that stronger oversight of fishing vessels is needed, or even a new fisheries management body specifically for squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

,

CryptogramBusiness Email Compromise (BEC) Criminal Ring

A criminal group called Cosmic Lynx seems to be based in Russia:

Dubbed Cosmic Lynx, the group has carried out more than 200 BEC campaigns since July 2019, according to researchers from the email security firm Agari, particularly targeting senior executives at large organizations and corporations in 46 countries. Cosmic Lynx specializes in topical, tailored scams related to mergers and acquisitions; the group typically requests hundreds of thousands or even millions of dollars as part of its hustles.

[...]

For example, rather than use free accounts, Cosmic Lynx will register strategic domain names for each BEC campaign to create more convincing email accounts. And the group knows how to shield these domains so they're harder to trace to the true owner. Cosmic Lynx also has a strong understanding of the email authentication protocol DMARC and does reconnaissance to assess its targets' specific system DMARC policies to most effectively circumvent them.

Cosmic Lynx also drafts unusually clean and credible-looking messages to deceive targets. The group will find a company that is about to complete an acquisition and contact one of its top executives posing as the CEO of the organization being bought. This phony CEO will then involve "external legal counsel" to facilitate the necessary payments. This is where Cosmic Lynx adds a second persona to give the process an air of legitimacy, typically impersonating a real lawyer from a well-regarded law firm in the United Kingdom. The fake lawyer will email the same executive that the "CEO" wrote to, often in a new email thread, and share logistics about completing the transaction. Unlike most BEC campaigns, in which the messages often have grammatical mistakes or awkward wording, Cosmic Lynx messages are almost always clean.

Sam VargheseRacism: Holding and Rainford-Brent do some plain speaking

Michael Anthony Holding, one of the feared West Indies pace bowlers from the 1970s and 1980s, bowled his best spell on 10 July, in front of the TV cameras.

Holding, in England to commentate on the Test series between England and the West Indies, took part in a roundtable on the Black Lives Matter protests which have been sweeping the world recently after an African-American man, George Floyd, was killed by a police officer in Minneapolis on May 25.

Holding speaks frankly. Very frankly. Along with former England cricketer Ebony Rainford-Brent, he spoke about the issues he had faced as a black man, the problems in cricket and how they could be resolved.

There was no bitterness in his voice, just audible pain and sadness. At one point, he came close to breaking down and later told one of the hosts that the memory of his mother being ostracised by her own family because she had married a very dark man had led to this.

Holding spoke of the need for education, to wipe out the centuries of conditioning that have resulted in black people knowing that white lives matter, while white people do not really care about black lives. He cited studies from American universities like Yale to make his points.

And much as white people will dismiss whatever he says, one could not escape the fact that here was a 66-year-old who had seen it all and some calling for a sane solution to the ills of racism.

He provided examples of racism from England, South Africa and Australia. In England, he cited the time he was trying to flag down a cab while going home with his wife-to-be – a woman of Portuguese ancestry who is white. The driver had his meter light on to indicate his cab was unoccupied, but on seeing Holding quickly switched it off and drove on. An Englishman of West Indian descent who recognised Holding called out to him, “Hey Mikey, you have to put her in front.” To which Holding, characteristically, replied, “I would rather walk.”

In Australia, he cited a case from a tour; the West Indies team was always put on a single floor of whatever hotel they stayed in. Holding said he and three of his fast-bowling colleagues were coming down in a lift when it stopped at a floor on the way down. “There was a man waiting there,” Holding said. “He looked at us and did not get into the lift. That’s fine, maybe he was intimidated by the presence of four big black men.

“But then, just before the lift doors closed, he shouted a racial epithet at us.”

And in South Africa, Holding cited a case when he and his Portuguese friend had gone to a hotel to stay. One staff member came to him and was taking his details to book him in; meanwhile, another hotel staffer went to his companion and tried to book her in separately. “To their way of thinking, she could not possibly be with me, because she was white,” was Holding’s comment. “After all, I am black, am I not?”

Rainford-Brent, who took part in a formal video with Holding, also aired the problems that black women cricketers face in England and spoke with tremendous feeling about the lack of people of colour at any level of the sport.

She was in tears occasionally as she spoke, as frankly as Holding, but again with no bitterness about the travails black people face when they join up to play cricket.

One only hopes that the talk does not end there and that something is done about equality. Sky Sports, the broadcaster which ran this remarkable and unusual discussion, has pledged to put 30 million pounds into efforts to narrow the gap. Holding’s view was that if enough big companies got involved, the gap would close that much faster.

If he has hope after what he has endured, then there is no reason why the rest of us should not.

Worse Than FailureError'd: They Said the Math Checks Out!

"So...I guess...they want me to spend more?" Angela A. writes.

 

"The '[object Object]' feature must be extremely rare and expensive considering that none of the phones in the list have it!" Jonathan writes.

 

Joel T. wrote, "I was checking this Covid-19 dashboard to see if it was safe to visit my family and well, I find it really thoughtful of them to cover the Null states, where I grew up."

 

"Thankfully after my appointment, I discovered I am healthier than my doctor's survey system," writes Paul T.

 

Peter C. wrote, "I am so glad that I went to college in {Other_Region}."

 

"I tried out this Excel currency converter template and it scrapes MSN.com/Money for up to date exchange rates," Kevin J. writes, "but, I think someone updated the website without thinking about this template."

 

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Kevin RuddThe US-China Relationship Needs a New Organising Principle

The US-China Relationship Needs a New Organising Principle
The Hon Kevin Rudd AC
President of the Asia Society Policy Institute, New York.
China-US Think Tanks Online Forum
Peking University
9 July, 2020

It’s good to be in this gathering and to follow Foreign Minister Wang Yi and Dr Kissinger.

Foreign Minister Wang Yi spoke just before about the great ship of US-China relations. The great ship of US-China relations currently has a number of holes in its side. It’s not quite time to man the lifeboats. But I do see people preparing the lifeboats at present. So this conference comes at a critical time.

I’m conscious of the fact that we have many distinguished Americans joining us today. Ambassador Stapleton Roy (former US Ambassador to China in the Bush Administration); Kurt Campbell (former Assistant Secretary of State for East Asia under the Obama Administration) who have occupied distinguished offices on behalf the United States government in the past. Steve Orlins, President of the US National Committee on US-China Relations, together with other distinguished Americans, are with us as well. These individuals will have valuable perspectives from an American point of view. And as you know, I’m not an American. Nor am I Chinese. So what I will try to provide here are some separate reflections on the way forward.

We’re asked in this conference to address the correct way forward for US-China relations. I always love the Chinese use of the word zhengque, translated as “correct”. I think the Chinese in the title of today’s conference is Zhong-Mei Guanxi Weilaide Zhengque Fangxiang. Well, it depends on your definition of Zhengque or “correct”. Within a Chinese Marxist-Leninist system, zhengque has a particular meaning. Whereas for those of us from the reactionary West, we have a different idea of what zhengque or “correct” happens to be.

So given I’m an Australian and our traditional culture is to defy all rules, let me suggest that we informally amend our title: not finding a “correct” future for the US-China relationship, but finding a sustainable future for it. A relationship which is kechixu. And by sustainable, what do I mean? I mean four things. “Sustainable” within Chinese domestic politics. China is about to have the Beidaihe meetings in August where the US-China relationship will once again be the central topic. “Sustainable” within US domestic politics, both Republican and Democrat. A third criterion, dare I add it, is “sustainable” for those of us who are third countries trying to deal with both of you. And fourthly, to do all the above without the relationship spiralling out of control into crisis, escalation, conflict or even war.

Now, as think-tankers, finding this intersecting circle between these four considerations is very difficult and perhaps impossible. We’re in search of the new “golden mean”, the new zhongyong in US-China relations. But I believe it’s worth the effort. In my allocated ten minutes, I’d therefore like to make just three points. And I base these three points on two critical political assumptions. One is that Xi Jinping is likely to remain in office after the 20th Party Congress in 2022. And my second critical assumption is that Biden is likely to be the President of the United States after January 2021.

My first point is that we need to understand clearly why the US-China relationship is now in its most volatile condition in 30 years – and perhaps 50 years. I think we need to agree on three mega-changes that have been at work.

Number one: the changes in the underlying power structure of this relationship, that is the balance of power between the US and China, because of China’s rise. Those of us on the American side here who read the Chinese strategic and political literature know that this is openly discussed in China. That the shili pingheng has changed. And therefore this, in China’s mind, provides it with greater opportunity for policy leverage.

The second reason for the change is that those of us who observe China closely, as I’ve done for my entire professional life, have seen significant changes since the Central Party Work Conference on Foreign Affairs held in December of 2013. And since then China’s international policy has become more activist and more assertive across strategic policy, economic policy and human rights policy in this new age of fenfayouwei, and no longer an age of taoguangyanghui or “hide your strength, bide your time.” We understand that change and we watch it carefully.

And the third factor that’s structurally at work is the Trump phenomenon and what “America First” has meant in practice. We’ve seen a trade war. We’ve seen the National Security Strategy. We’ve seen the National Defense Strategy. We’ve seen technology decoupling, in part. And we’ve seen it on human rights.

But this third structural factor associated with Trump will not change 180 degrees with Biden. It will change in tone. But my observation is that under Biden the substance is likely to be systematic, strategic competition rather than episodic strategic competition (as we have had with Trump). But with the United States under Biden still willing to work with China on certain defined global challenges like pandemics, like climate change, and possibly global financial management.

My second point is there can be no return, therefore, to previous strategic frameworks for managing a “sustainable” or kechixu US-China relationship in the future. Therefore we need to develop a new framework for doing that. I’m not talking about throwing away the three communiques (from the 1970’s, outlining the foundations of the US-China relationship). I’m talking about building something different based on the three communiques. I often read in the Chinese literature that Beijing hopes that the United States will recognise its errors and return to a “correct” understanding of the US-China relationship and resume the past forms of strategic engagement. For the reasons I’ve outlined, already, that will not happen.

Take one most recent example. China’s decision to enact the national security law on Hong Kong is seen in Beijing as a matter of national sovereignty – in 1997, sovereignty transferred from Britain back to China and that’s the end of the argument. But the essential nature of the American democracy does not permit Washington to see it that way. And hence the reaction from other Asian and Western democracies, which is likely to intensify over the coming months. This is just one example of the much broader point of the changing deep structure of the US-China relationship.

As I said before, the US-China relationship must remain anchored in the three communiques, including the most fundamental issue within them, which is Taiwan. But a new organising principle, and a new strategic architecture for a sustainable relationship, is now needed.

This brings me to my final point: some thoughts on what that alternative organising principle might be for the future.

Of course, the first option is that because we are now in an age of strategic competition, by definition there should be no framework – not even any rules of the road. I disagree with that. Because without any rules of the road, without any guide rails, without any guardrails, this would be highly destabilising. And therefore not sustainable.

The second option is to accept the reality of strategic competition but to mutually agree that strategic competition should be managed within defined parameters – and through a defined mechanism of the highest-level continuing strategic dialogue, communication and contact.

We might not yet be in Cold War 2.0 but, as I’ve written recently in Foreign Affairs magazine, I think we’re probably in Cold War 1.5. And if we’re not careful, we could end up in 2.0. But there are still lessons that we can learn from the US-Soviet relationship and the period of detente.

One of these lessons is in the importance of internal, high-level communication between the two sides that there should be an absolutely clear understanding of the red lines which exist, in particular on Taiwan. By red lines, I do not mean public statements from each other’s foreign ministries as an exercise in public diplomacy. I mean a core understanding about absolutely core interests, both in the military sphere but also in terms of future large-scale financial market actions as well. Within this framework, we also need a bilateral mechanism in place to ensure that these red lines are managed. And that’s quite different to what we have at present where we seem to be engaged in a free voyage of discovery, trying to work out where these red lines might lie. That’s quite dangerous. So to conclude: mutually agreed red lines, both in the military and the economic sphere.

But of course, that’s not the totality of the US-China relationship. But they do represent a foundation for the future strategic framework for the relationship. As for the rest of the architecture of the “managed strategic competition” that I speak of, my thoughts have not changed a lot since I wrote a paper on this at Harvard University, at the Kennedy School, about five years ago. I called it ‘constructive realism’ or jianshexing de xianshizhuyi. There I spoke about: number one, red lines where no common interests exist, but where core interests need to be understood; two, identifying difficult areas where cooperation is possible, for example, like you (both the Chinese and American sides) have recently done on the trade war; and three, those areas where bilateral and multilateral cooperation should be normal, even under current circumstances, like on pandemics, like on climate change, and like making the institutions of global governance operate in an efficient and effective way through the multilateral system, including the WHO.

And, of course, if this framework was to be mutually accepted, it would require changes on both sides – in both Washington and Beijing. But it will also require a new level of intellectual honesty with each other at the highest level of the relationship of the type we’ve seen in the records of Mao’s conversations with Zhou Enlai and with Kissinger and Nixon and those who were party to those critical conversations half a century ago. The framework I’ve just outlined is not dissimilar in some respects to what Foreign Minister Wang Yi spoke about just before.

My final thought is this: I’m very conscious it’s easy for an outsider to say these things because I’m an Australian. But I care passionately about both your countries and I really don’t want you to have a fight which ends up in a war. It’s not good for you. It’s not good for us. It’s not good for anybody.

I’m also conscious that in the United States, there are huge domestic challenges leading up to the US presidential elections – Black Lives Matter, COVID-19 – and I’m deeply mindful of Lincoln’s injunction that a house divided against itself cannot stand.

But I’m also mindful of China’s domestic challenges. I’m mindful of what I read in the People’s Daily today about a new zhengdang, a new party rectification campaign. I’m also mindful of traditional Chinese strategic wisdom – for example, from the days of Liu Bang and Xiang Yu, shimian maifu, or the dangers of “having challenges on ten different fronts at the same time”.

So much wisdom is required of both parties. But I do agree with Jiang Zemin and his continued reminder to us all that the US-China relationship remains zhongzhong zhi zhong or “the most important of the most important”. And if we don’t get that right, then we won’t be able to get anything else right.

This is an edited transcript of Mr Rudd’s remarks to the Peking University China-US Think Tanks Online Forum.

The post The US-China Relationship Needs a New Organising Principle appeared first on Kevin Rudd.

Dave HallLogging Step Functions to CloudWatch

Many AWS services log to CloudWatch. Some do it out of the box; others need to be configured to log properly. When Amazon released Step Functions, it didn’t include support for logging to CloudWatch. In February 2020, Amazon announced that Step Functions could now log to CloudWatch. Step Functions still supports CloudTrail logs, but CloudWatch logging is more useful for many teams.

Users need to configure Step Functions to log to CloudWatch. This is done on a per-state-machine basis. Of course you could click around the console to enable it, but that doesn’t scale. If you use CloudFormation to manage your Step Functions, it is only a few extra lines of configuration to add the logging support.
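For state machines you manage outside CloudFormation, the same logging settings can be applied through the Step Functions UpdateStateMachine API. Here's a minimal sketch in Python; the helper function name and the ARNs are illustrative placeholders, not something from the original post (note the API requires the log group ARN to end with ":*"):

```python
def make_logging_configuration(log_group_arn,
                               level="ALL",
                               include_execution_data=True):
    """Build the loggingConfiguration structure expected by the
    Step Functions UpdateStateMachine API."""
    return {
        "level": level,  # ALL, ERROR, FATAL or OFF
        "includeExecutionData": include_execution_data,
        "destinations": [
            {"cloudWatchLogsLogGroup": {"logGroupArn": log_group_arn}}
        ],
    }

config = make_logging_configuration(
    # Placeholder ARN; the API expects it to end with ":*"
    "arn:aws:logs:us-east-1:123456789012:log-group:/aws/stepfunction/my-step-function:*"
)

# With boto3 installed and AWS credentials configured, applying it would
# look something like this:
# import boto3
# boto3.client("stepfunctions").update_state_machine(
#     stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:my-step-function",
#     loggingConfiguration=config,
# )
```

This is handy for retrofitting logging onto existing state machines, but for anything you deploy repeatedly the CloudFormation approach below keeps the configuration in version control.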

In my example I will assume you are using YAML for your CloudFormation templates. I’ll save my “if you’re using JSON for CloudFormation you’re doing it wrong” rant for another day. This is a cut down example from one of my services:

---
AWSTemplateFormatVersion: '2010-09-09'
Description: StepFunction with Logging Example.
Resources:
  StepFunctionExecRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: !Sub "states.${AWS::Region}.amazonaws.com"
          Action:
          - sts:AssumeRole
      Path: "/"
      Policies:
      - PolicyName: StepFunctionExecRole
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - lambda:InvokeFunction
            - lambda:ListFunctions
            Resource: !Sub "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:my-lambdas-namespace-*"
          - Effect: Allow
            Action:
            - logs:CreateLogDelivery
            - logs:GetLogDelivery
            - logs:UpdateLogDelivery
            - logs:DeleteLogDelivery
            - logs:ListLogDeliveries
            - logs:PutResourcePolicy
            - logs:DescribeResourcePolicies
            - logs:DescribeLogGroups
            Resource: "*"
  MyStateMachineLogGroup:
    Type: AWS::Logs::LogGroup
    Properties:
      LogGroupName: /aws/stepfunction/my-step-function
      RetentionInDays: 14
  DashboardImportStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      StateMachineName: my-step-function
      StateMachineType: STANDARD
      LoggingConfiguration:
        Destinations:
          - CloudWatchLogsLogGroup:
             LogGroupArn: !GetAtt MyStateMachineLogGroup.Arn
        IncludeExecutionData: True
        Level: ALL
      DefinitionString:
        !Sub |
        {
          ... JSON Step Function definition goes here
        }
      RoleArn: !GetAtt StepFunctionExecRole.Arn

The key pieces in this example are the second statement in the IAM Role with all the logging permissions, the LogGroup defined by MyStateMachineLogGroup and the LoggingConfiguration section of the Step Function definition.

The IAM role permissions are copied from the example policy in the AWS documentation for using CloudWatch Logging with Step Functions. The CloudWatch IAM permissions model is pretty weak, so we need to grant these broad permissions.

The LogGroup definition creates the log group in CloudWatch. You can use whatever value you want for the LogGroupName. I followed the Amazon convention of prefixing everything with /aws/[service-name]/ and then appended the Step Function name. I recommend using the RetentionInDays configuration. It stops old logs sticking around forever. In my case I send all my logs to ELK, so I don’t need to retain them in CloudWatch long term.

Finally we use the LoggingConfiguration to tell AWS where we want to send our logs. Despite the plural name, Destinations accepts only a single destination. IncludeExecutionData determines whether the inputs and outputs of each function call are logged. You should not enable this if you are passing sensitive information between your steps. The verbosity of logging is controlled by Level. Amazon has a page on Step Function log levels. For dev you probably want to use ALL to help with debugging, but in production you probably only need ERROR level logging.
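If you manage State Machines outside CloudFormation, the same settings can be applied through the API. A minimal sketch of the loggingConfiguration structure that boto3’s update_state_machine and create_state_machine calls accept (the ARNs in the comment are placeholders, not real resources):

```python
def logging_configuration(log_group_arn, level="ALL", include_execution_data=True):
    """Build the loggingConfiguration structure accepted by the Step
    Functions API (boto3 update_state_machine / create_state_machine)."""
    return {
        "level": level,                          # OFF, ALL, ERROR, or FATAL
        "includeExecutionData": include_execution_data,
        "destinations": [
            {"cloudWatchLogsLogGroup": {"logGroupArn": log_group_arn}}
        ],
    }

# Usage with boto3 (not executed here; ARNs are placeholders):
#   import boto3
#   sfn = boto3.client("stepfunctions")
#   sfn.update_state_machine(
#       stateMachineArn="arn:aws:states:REGION:ACCOUNT:stateMachine:my-step-function",
#       loggingConfiguration=logging_configuration(
#           "arn:aws:logs:REGION:ACCOUNT:log-group:/aws/stepfunction/my-step-function:*"),
#   )
```

The helper just builds the dict; wiring it to a real account needs the IAM permissions shown above.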

I removed the Parameters and Outputs sections from the template. Use them as you need to.

CryptogramTraffic Analysis of Home Security Cameras

Interesting research on home security cameras with cloud storage. Basically, attackers can learn very basic information about what's going on in front of the camera, and infer when there is someone home.

News article.

Slashdot thread.

Worse Than FailureCodeSOD: Is It the Same?

A common source of bad code is when you have a developer who understands one thing very well, but is forced- either through organizational changes or the tides of history- to adapt to a new tool which they don’t understand. But a possibly more severe problem is modern developers not fully understanding why certain choices may have been made. Today’s code isn’t a WTF, it’s actually very smart.

Eric P was digging through some antique Fortran code, just exploring some retrocomputing history, and found a block which needed to check if two values were the same.

The normal way to do that in Fortran would be to use the .EQ. operator, e.g.:

LSAME = ( (LOUTP(IOUTP)).EQ.(LPHAS1(IOUTP)) )

Now, in this specific case, I happen to know that LOUTP(IOUTP) and LPHAS1(IOUTP) happen to be boolean expressions. I know this, in part, because of how the original developer actually wrote an equality comparison:

      LSAME = ((     LOUTP(IOUTP)).AND.(     LPHAS1(IOUTP)).OR.
               (.NOT.LOUTP(IOUTP)).AND.(.NOT.LPHAS1(IOUTP)) )

Now, Eric sent us two messages. In their first message:

This type of comparison appears in at least 5 different places and the result is then used in other unnecessarily complicated comparisons and assignments.

But that doesn’t tell the whole story. We need to understand the actual underlying purpose of this code. And the purpose of this block of code is to translate symbolic formula expressions to execute on Programmable Array Logic (PAL) devices.

PALs were an early form of programmable logic device, and to describe the logic you wanted them to perform, you had to give them instructions in terms of gates. Essentially, you’d throw a binary representation of the gate arrangements at the chip, and it would then perform computations for you.

So Eric, upon further review, followed up with a fresh message:

The program it is from was co-written by the manager of the project to create the PAL (Programmable Array Logic) device. So, of course, this is exactly, down to the hardware logic gate, how you would implement an equality comparison in a hardware PAL!
It’s all NOTs, ANDs, and ORs!

Programming is about building a model. Most of the time, we want our model to be clear to humans, and we focus on finding ways to describe that model in clear, unsurprising ways. But what’s “clear” and “unsurprising” can vary depending on what specifically we’re trying to model. Here, we’re modeling low-level hardware, really low-level, and what looks weird at first is actually pretty darn smart.
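The equivalence is easy to confirm by exhaustive check: (a AND b) OR (NOT a AND NOT b) agrees with a plain equality comparison on all four boolean input pairs. A quick sketch, in Python for brevity:

```python
from itertools import product

def eq_via_gates(a: bool, b: bool) -> bool:
    """Equality built only from NOT, AND, OR -- the PAL-friendly form:
    (a AND b) OR (NOT a AND NOT b)."""
    return (a and b) or ((not a) and (not b))

# Exhaustively verify it matches a plain equality comparison.
for a, b in product([False, True], repeat=2):
    assert eq_via_gates(a, b) == (a == b)
```

Four inputs, four checks; the truth tables are identical, which is exactly why the Fortran author could hand the expression straight to the hardware.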

Eric also included a link to the code he was reading through, for the PAL24 Assembler.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Rondam RamblingsGame over for Hong Kong

The Washington Post reports: Early Wednesday, under a heavy police presence and before any public announcement about the matter, officials inaugurated the Office for Safeguarding National Security of the Central People’s Government in the Hong Kong Special Administrative Region at a ceremony that took place behind water-filled barricades. They played the Chinese national anthem and raised the

Worse Than FailureCodeSOD: A Private Matter

Tim Cooper was digging through the code for a trip-planning application. This particular application can plan a trip across multiple modes of transportation, from public transit to private modes, like rentable scooters or bike-shares.

This need to discuss private modes of transportation can lead to some… interesting code.

// for private: better = same
TIntSet myPrivates = getPrivateTransportSignatures(true);
TIntSet othersPrivates = other.getPrivateTransportSignatures(true);
if (myPrivates.size() != othersPrivates.size()
        || ! myPrivates.containsAll(othersPrivates)
        || ! othersPrivates.containsAll(myPrivates)) {
    return false;
}

This block of code seems to worry a lot about the details of othersPrivates, which frankly is a bad look. Mind your own business, code. Mind your own business.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

CryptogramIoT Security Principles

The BSA -- also known as the Software Alliance, formerly the Business Software Alliance (which explains the acronym) -- is an industry lobbying group. They just published "Policy Principles for Building a Secure and Trustworthy Internet of Things."

They call for:

  • Distinguishing between consumer and industrial IoT.
  • Offering incentives for integrating security.
  • Harmonizing national and international policies.
  • Establishing regularly updated baseline security requirements.

As with pretty much everything else, you can assume that if an industry lobbying group is in favor of it, then it doesn't go far enough.

And if you need more security and privacy principles for the IoT, here's a list of over twenty.

Worse Than FailureCodeSOD: Your Personal Truth

There are still some environments where C may not have easy access to a stdbool header file. That's easy to fix, of course. The basic pattern is to typedef an integer type as a boolean type, and then define some symbols for true and false. It's a pretty standard pattern, three lines of code, and unless you insist that FILE_NOT_FOUND is a boolean value, it's pretty hard to mess up.

Julien H was compiling some third-party C code, specifically in Visual Studio 2010, and as it turns out, VS2010 doesn't support C99, and thus doesn't have a stdbool. But, as stated, it's an easy pattern to implement, so the third party library went and implemented it:

#ifndef _STDBOOL_H_VS2010
#define _STDBOOL_H_VS2010
typedef int bool;
static bool true = 1;
static bool false = 0;
#endif

We've asked many times, what is truth? In this case, we admit a very post-modern reality: what is "true" is not constant and unchanging, it cannot merely be enumerated, it must be variable. Truth can change, because here we've defined true and false as variables. And more than that, each person must identify their own truth, and by making these variables static, what we guarantee is that every .c file in our application can have its own value for truth. The static keyword, applied to a global variable, gives it internal linkage, so each .c file that includes the header gets its own copy.

I can only assume this header was developed by Jacques Derrida.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Cory DoctorowFull Employment

My latest Locus column is “Full Employment,” in which I forswear “Fully Automated Luxury Communism” as totally incompatible with the climate emergency, which will consume 100%+ of all human labor for centuries to come.

https://locusmag.com/2020/07/cory-doctorow-full-employment/

This fact is true irrespective of any breakthroughs in AI OR geoengineering. Technological unemployment is vastly oversold and overstated (for example, that whole thing about truck drivers is bullshit).

https://journals.sagepub.com/doi/10.1177/0019793919858079

But even if we do manage to automate away all of the jobs, the climate emergency demands unimaginably labor-intensive tasks for hundreds of years – jobs like relocating every coastal city inland, or caring for hundreds of millions of refugees.

Add to those: averting the extinctions of thousands of species, managing wave upon wave of zoonotic and insect-borne plagues, dealing with wildfires and tornados, etc.

And geoengineering won’t solve this: we’ve sunk a lot of heat into the oceans. It’s gonna warm them up. That’s gonna change the climate. It’s not gonna be good. Heading this off doesn’t just involve repealing thermodynamics – it also requires a time machine.

But none of this stuff is insurmountable – it’s just hard. We CAN do this stuff. If you were wringing your hands about unemployed truckers, good news! They’ve all got jobs moving thousands of cities inland!

It’s just (just!) a matter of reorienting our economy around preserving our planet and our species.

And yeah, that’s hard, too – but if “the economy” can’t be oriented to preserving our species, we need a different economy.

Period.

CryptogramThiefQuest Ransomware for the Mac

There's a new ransomware for the Mac called ThiefQuest or EvilQuest. It's hard to get infected:

For your Mac to become infected, you would need to torrent a compromised installer and then dismiss a series of warnings from Apple in order to run it. It's a good reminder to get your software from trustworthy sources, like developers whose code is "signed" by Apple to prove its legitimacy, or from Apple's App Store itself. But if you're someone who already torrents programs and is used to ignoring Apple's flags, ThiefQuest illustrates the risks of that approach.

But it's nasty:

In addition to ransomware, ThiefQuest has a whole other set of spyware capabilities that allow it to exfiltrate files from an infected computer, search the system for passwords and cryptocurrency wallet data, and run a robust keylogger to grab passwords, credit card numbers, or other financial information as a user types it in. The spyware component also lurks persistently as a backdoor on infected devices, meaning it sticks around even after a computer reboots, and could be used as a launchpad for additional, or "second stage," attacks. Given that ransomware is so rare on Macs to begin with, this one-two punch is especially noteworthy.

CryptogramiPhone Apps Stealing Clipboard Data

iOS apps are repeatedly reading clipboard data, which can include all sorts of sensitive information.

While Haj Bakry and Mysk published their research in March, the invasive apps made headlines again this week with the developer beta release of iOS 14. A novel feature Apple added provides a banner warning every time an app reads clipboard contents. As large numbers of people began testing the beta release, they quickly came to appreciate just how many apps engage in the practice and just how often they do it.

This YouTube video, which has racked up more than 87,000 views since it was posted on Tuesday, shows a small sample of the apps triggering the new warning.

EDITED TO ADD (7/6): LinkedIn and Reddit are doing this.

Worse Than FailureCodeSOD: Classic WTF: Dimensioning the Dimension

It was a holiday weekend in the US, so we're taking a little break. Yes, I know that most people took Friday off, but as this article demonstrates, dates remain hard. Original -- Remy

It's not too uncommon to see a Java programmer write a method to get the name of a month based on the month number. Sure, month name formatting is built in via SimpleDateFormat, but the documentation can often be hard to read. And since there's really no other place to find the answer, it's excusable that a programmer will just write a quick method to do this.

I have to say though, Robert Cooper's colleague came up with a very interesting way of doing this: adding an[other] index to an array ...

public class DateHelper
{
  private static final String[][] months = 
    { 
      { "0", "January" }, 
      { "1", "February" }, 
      { "2", "March" }, 
      { "3", "April" }, 
      { "4", "May" }, 
      { "5", "June" }, 
      { "6", "July" }, 
      { "7", "August" }, 
      { "8", "September" }, 
      { "9", "October" }, 
      { "10", "November" }, 
      { "11", "December" }
    };

  public static String getMonthDescription(int month)
  {
    for (int i = 0; i < months.length; i++)
    {
      if (Integer.parseInt(months[i][0]) == month)
      {
          return months[i][1];
      }
    }
    return null;
  }
}
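For contrast: because the "index" column is just the array position, the whole lookup collapses to a direct index with no search and no string parsing. A sketch in Python, keeping the original's zero-based month numbering:

```python
MONTHS = ("January", "February", "March", "April", "May", "June",
          "July", "August", "September", "October", "November", "December")

def get_month_description(month):
    """Direct index lookup (0 = January, 11 = December).
    Returns None for out-of-range values, matching the original's behaviour."""
    return MONTHS[month] if 0 <= month < len(MONTHS) else None
```

The linear search and the Integer.parseInt on every iteration in the original buy nothing; the "index" strings only ever restate the position they already occupy.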

If you enjoyed Friday's post (A Pop-up Potpourri), make sure to check out the replies. There were some great error messages posted.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 09)

Here’s part nine of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

MEDebian S390X Emulation

I decided to set up some virtual machines for different architectures. One that I decided to try was S390X – the latest 64bit version of the IBM mainframe. Here’s how to do it; I tested on a host running Debian/Unstable, but Buster should work in the same way.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/s390x
mkfs.ext4 /vmstore/s390x
mount -o loop /vmstore/s390x /mnt/tmp

Then visit the Debian Netinst page [1] to download the S390X net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The package qemu-system-misc has the program for emulating an S390X system (among many others); the qemu-user-static package has the program for emulating S390X for a single program (i.e. a statically linked program or a chroot environment), which you need to run debootstrap. The following commands should be most of what you need.

# Install the basic packages you need
apt install qemu-system-misc qemu-user-static debootstrap

# List the support for different binary formats
update-binfmts --display

# qemu s390x needs exec stack to solve "Could not allocate dynamic translator buffer"
# so you probably need this on SE Linux systems
setsebool allow_execstack 1

# commands to do the main install
debootstrap --foreign --arch=s390x --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

# set the apt sources
cat << END > /mnt/tmp/etc/apt/sources.list
deb http://YOURLOCALMIRROR/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
# for minimal install do not want recommended packages
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

# update to latest packages
chroot /mnt/tmp apt update
chroot /mnt/tmp apt dist-upgrade

# install kernel, ssh, and build-essential
chroot /mnt/tmp apt install bash-completion locales linux-image-s390x man-db openssh-server build-essential
chroot /mnt/tmp dpkg-reconfigure locales
echo s390x > /mnt/tmp/etc/hostname
chroot /mnt/tmp passwd

# copy kernel and initrd
mkdir -p /boot/s390x
cp /mnt/tmp/boot/vmlinuz* /mnt/tmp/boot/initrd* /boot/s390x

# setup /etc/fstab
cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Some of the above can be considered more as pseudo-code in shell script rather than an exact way of doing things. While you can copy and paste all the above into a command line and have a reasonable chance of having it work, I think it would be better to look at each command and decide whether it’s right for you and whether you need to alter it slightly for your system.

To run qemu as non-root you need to have a helper program with extra capabilities to setup bridged networking. I’ve included that in the explanation because I think it’s important to have all security options enabled.

The “-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0” part is to give entropy to the VM from the host, otherwise it will take ages to start sshd. Note that this is slightly but significantly different from the command used for other architectures (the “ccw” is the difference).

I’m not sure if “noresume” on the kernel command line is required, but it doesn’t do any harm. The “net.ifnames=0” stops systemd from renaming Ethernet devices. For the virtual networking the “ccw” again is a difference from other architectures.

Here is a basic command to run a QEMU virtual S390X system. If all goes well it should give you a login: prompt on a curses based text display, you can then login as root and should be able to run “dhclient eth0” and other similar commands to setup networking and allow ssh logins.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume root=/dev/vda ro" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper

Here is a slightly more complete QEMU command. It has 2 block devices, for root and swap. It has SE Linux enabled for the VM (SE Linux works nicely on S390X). I added the “lockdown=confidentiality” kernel security option even though it’s not supported in 4.19 kernels, it doesn’t do any harm and when I upgrade systems to newer kernels I won’t have to remember to add it.

qemu-system-s390x -drive format=raw,file=/vmstore/s390x,if=virtio -drive format=raw,file=/vmswap/s390x,if=virtio -object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-ccw,rng=rng0 -nographic -m 1500 -smp 2 -kernel /boot/s390x/vmlinuz-4.19.0-9-s390x -initrd /boot/s390x/initrd.img-4.19.0-9-s390x -curses -append "net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality" -device virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper
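Invocations this long are easier to keep consistent when generated by a small script. A sketch in Python (the helper name is mine; the paths, MAC address, and options are the ones used in this post):

```python
def s390x_qemu_cmd(root_img, kernel, initrd, swap_img=None,
                   mem=1500, smp=2, selinux=False):
    """Assemble the qemu-system-s390x argument list used above.
    Returns a list suitable for subprocess.run()."""
    append = "net.ifnames=0 noresume "
    if selinux:
        # lockdown=confidentiality is harmless on 4.19 and useful later
        append += "security=selinux lockdown=confidentiality "
    append += "root=/dev/vda ro"
    cmd = ["qemu-system-s390x",
           "-drive", f"format=raw,file={root_img},if=virtio"]
    if swap_img:
        cmd += ["-drive", f"format=raw,file={swap_img},if=virtio"]
    cmd += ["-object", "rng-random,filename=/dev/urandom,id=rng0",
            "-device", "virtio-rng-ccw,rng=rng0",
            "-nographic", "-m", str(mem), "-smp", str(smp),
            "-kernel", kernel, "-initrd", initrd,
            "-curses", "-append", append,
            "-device", "virtio-net-ccw,netdev=net0,mac=02:02:00:00:01:02",
            "-netdev", "tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"]
    return cmd

# Example: subprocess.run(s390x_qemu_cmd("/vmstore/s390x",
#     "/boot/s390x/vmlinuz-4.19.0-9-s390x",
#     "/boot/s390x/initrd.img-4.19.0-9-s390x",
#     swap_img="/vmswap/s390x", selinux=True))
```

Generating the argument list also makes the ccw-specific differences from other architectures live in exactly one place.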

Try It Out

I’ve got a S390X system online for a while, “ssh root@s390x.coker.com.au” with password “SELINUX” to try it out.

PPC64

I’ve tried running a PPC64 virtual machine, I did the same things to set it up and then tried launching it with the following result:

qemu-system-ppc64 -drive format=raw,file=/vmstore/ppc64,if=virtio -nographic -m 1024 -kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le -curses -append "root=/dev/vda ro"

Above is the minimal qemu command that I’m using. Below is the result, it stops after the “4.” from “4.19.0-9”. Note that I had originally tried with a more complete and usable set of options, but I trimmed it to the minimal needed to demonstrate the problem.

  Copyright (c) 2004, 2017 IBM Corporation All rights reserved.
  This program and the accompanying materials are made available
  under the terms of the BSD License available at
  http://www.opensource.org/licenses/bsd-license.php

Booting from memory...
Linux ppc64le
#1 SMP Debian 4.

The kernel is from the package linux-image-4.19.0-9-powerpc64le which is a dependency of the package linux-image-ppc64el in Debian/Buster. The program qemu-system-ppc64 is from version 5.0-5 of the qemu-system-ppc package.

Any suggestions on what I should try next would be appreciated.

,

Krebs on SecurityE-Verify’s “SSN Lock” is Nothing of the Sort

One of the most-read advice columns on this site is a 2018 piece called “Plant Your Flag, Mark Your Territory,” which tried to impress upon readers the importance of creating accounts at websites like those at the Social Security Administration, the IRS and others before crooks do it for you. A key concept here is that these services only allow one account per Social Security number — which for better or worse is the de facto national identifier in the United States. But KrebsOnSecurity recently discovered that this is not the case with all federal government sites built to help you manage your identity online.

A reader who was recently the victim of unemployment insurance fraud said he was told he should create an account at the Department of Homeland Security‘s myE-Verify website, and place a lock on his Social Security number (SSN) to minimize the chances that ID thieves might abuse his identity for employment fraud in the future.

DHS’s myE-Verify homepage.

According to the website, roughly 600,000 employers at over 1.9 million hiring sites use E-Verify to confirm the employment eligibility of new employees. E-Verify’s consumer-facing portal myE-Verify lets users track and manage employment inquiries made through the E-Verify system. It also features a “Self Lock” designed to prevent the misuse of one’s SSN in E-Verify.

Enabling this lock is supposed to mean that for the next year thereafter, if an unauthorized individual attempts to fraudulently use a SSN for employment authorization, he or she cannot use the SSN in E-Verify, even if the SSN is that of an employment authorized individual. But in practice, this service may actually do little to deter ID thieves from impersonating you to a potential employer.

At the request of the reader who reached out (and in the interest of following my own advice to plant one’s flag), KrebsOnSecurity decided to sign up for a myE-Verify account. After verifying my email address, I was asked to pick a strong password and select a form of multi-factor authentication (MFA). The most secure MFA option offered (a one-time code generated by an app like Google Authenticator or Authy) was already pre-selected, so I chose that.

The site requested my name, address, SSN, date of birth and phone number. I was then asked to select five questions and answers that might be asked if I were to try to reset my password, such as “In what city/town did you meet your spouse,” and “What is the name of the company of your first paid job.” I chose long, gibberish answers that had nothing to do with the questions (yes, these password questions are next to useless for security and frequently are the cause of account takeovers, but we’ll get to that in a minute).
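If you adopt the gibberish-answer approach, it is better to generate the answers randomly and keep them in a password manager than to invent them by hand. A small sketch using Python's secrets module:

```python
import secrets

def throwaway_answer(nbytes=24):
    """Generate a long, random, URL-safe string to use as the 'answer'
    to a password-reset question; record it in a password manager."""
    return secrets.token_urlsafe(nbytes)

# Each call yields an unguessable answer unrelated to the question asked.
```

The point is that nothing about the answer should be derivable from public records or social media, which is exactly the weakness of truthful answers.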

Password reset questions selected, the site proceeded to ask four, multiple-guess “knowledge-based authentication” questions to verify my identity. The U.S. Federal Trade Commission‘s primer page on preventing job-related ID theft says people who have placed a security freeze on their credit files with the major credit bureaus will need to lift or thaw the freeze before being able to answer these questions successfully at myE-Verify. However, I did not find that to be the case, even though my credit file has been frozen with the major bureaus for years.

After successfully answering the KBA questions (the answer to each was “none of the above,” by the way), the site declared I’d successfully created my account! I could then see that I had the option to place a “Self Lock” on my SSN within the E-Verify system.

Doing so required me to pick three more challenge questions and answers. The site didn’t explain why it was asking me to do this, but I assumed it would prompt me for the answers in the event that I later chose to unlock my SSN within E-Verify.

After selecting and answering those questions and clicking the “Lock my SSN” button, the site generated an error message saying something went wrong and it couldn’t proceed.

Alas, logging out and logging back in again showed that the site did in fact proceed and that my SSN was locked. Joy.

But I still had to know one thing: Could someone else come along pretending to be me and create another account using my SSN, date of birth and address but under a different email address? Using a different browser and Internet address, I proceeded to find out.

Imagine my surprise when I was able to create a separate account as me with just a different email address (once again, the correct answers to all of the KBA questions was “none of the above”). Upon logging in, I noticed my SSN was indeed locked within E-Verify. So I chose to unlock it.

Did the system ask any of the challenge questions it had me create previously? Nope. It just reported that my SSN was now unlocked. Logging out and logging back in to the original account I created (again under a different IP and browser) confirmed that my SSN was unlocked.

ANALYSIS

Obviously, if the E-Verify system allows multiple accounts to be created using the same name, address, phone number, SSN and date of birth, this is less than ideal and somewhat defeats the purpose of creating an account to protect one’s identity from misuse.

Lest you think your SSN and DOB are somehow private information, you should know this static data about U.S. residents has been exposed many times over in countless data breaches, and in any case these digits are available for sale on most Americans via Dark Web sites for roughly the bitcoin equivalent of a fancy caffeinated drink at Starbucks.

Being unable to proceed through knowledge-based authentication questions without first unfreezing one’s credit file with one or all of the big three credit bureaus (Equifax, Experian and TransUnion) can actually be a plus for those of us who are paranoid about identity theft. I couldn’t find any mention on the E-Verify site of which company or service it uses to ask these questions, but the fact that the site doesn’t seem to care whether one has a freeze in place is troubling.

And when the correct answer to all of the KBA questions that do get asked is invariably “none of the above,” that somewhat lessens the value of asking them in the first place. Maybe that was just the luck of the draw in my case, but also troubling nonetheless. Either way, these KBA questions are notoriously weak security because the answers to them often are pulled from records that are public anyway, and can sometimes be deduced by studying the information available on a target’s social media profiles.

Speaking of silly questions, relying on “secret questions” or “challenge questions” as an alternative method of resetting one’s password is severely outdated and insecure. A 2015 study by Google titled “Secrets, Lies and Account Recovery” (PDF) found that secret questions generally offer a security level that is far lower than just user-chosen passwords. Also, the idea that an account protected by multi-factor authentication could be undermined by successfully guessing the answer(s) to one or more secret questions (answered truthfully and perhaps located by thieves through mining one’s social media accounts) is bothersome.

Finally, the advice given to the reader whose inquiry originally prompted me to sign up at myE-Verify doesn’t seem to have anything to do with preventing ID thieves from fraudulently claiming unemployment insurance benefits in one’s name at the state level. KrebsOnSecurity followed up with four different readers who left comments on this site about being victims of unemployment fraud recently, and none of them saw any inquiries about this in their myE-Verify accounts after creating them. Not that they should have seen signs of this activity in the E-Verify system; I just wanted to emphasize that one seems to have little to do with the other.

CryptogramEncroChat Hacked by Police

French police hacked EncroChat secure phones, which are widely used by criminals:

Encrochat's phones are essentially modified Android devices, with some models using the "BQ Aquaris X2," an Android handset released in 2018 by a Spanish electronics company, according to the leaked documents. Encrochat took the base unit, installed its own encrypted messaging programs which route messages through the firm's own servers, and even physically removed the GPS, camera, and microphone functionality from the phone. Encrochat's phones also had a feature that would quickly wipe the device if the user entered a PIN, and ran two operating systems side-by-side. If a user wanted the device to appear innocuous, they booted into normal Android. If they wanted to return to their sensitive chats, they switched over to the Encrochat system. The company sold the phones on a subscription based model, costing thousands of dollars a year per device.

This allowed them and others to investigate and arrest many:

Unbeknownst to Mark, or the tens of thousands of other alleged Encrochat users, their messages weren't really secure. French authorities had penetrated the Encrochat network, leveraged that access to install a technical tool in what appears to be a mass hacking operation, and had been quietly reading the users' communications for months. Investigators then shared those messages with agencies around Europe.

Only now is the astonishing scale of the operation coming into focus: It represents one of the largest law enforcement infiltrations of a communications network predominantly used by criminals ever, with Encrochat users spreading beyond Europe to the Middle East and elsewhere. French, Dutch, and other European agencies monitored and investigated "more than a hundred million encrypted messages" sent between Encrochat users in real time, leading to arrests in the UK, Norway, Sweden, France, and the Netherlands, a team of international law enforcement agencies announced Thursday.

EncroChat learned about the hack, but didn't know who was behind it.

Going into full-on emergency mode, Encrochat sent a message to its users informing them of the ongoing attack. The company also informed its SIM provider, Dutch telecommunications firm KPN, which then blocked connections to the malicious servers, the associate claimed. Encrochat cut its own SIM service; it had an update scheduled to push to the phones, but it couldn't guarantee whether that update itself wouldn't be carrying malware too. That, and maybe KPN was working with the authorities, Encrochat's statement suggested (KPN declined to comment). Shortly after Encrochat restored SIM service, KPN removed the firewall, allowing the hackers' servers to communicate with the phones once again. Encrochat was trapped.

Encrochat decided to shut itself down entirely.

Lots of details about the hack in the article. Well worth reading in full.

The UK National Crime Agency called it Operation Venetic: "46 arrests, and £54m criminal cash, 77 firearms and over two tonnes of drugs seized so far."

Many more news articles. EncroChat website. Slashdot thread. Hacker News threads.

,

CryptogramFriday Squid Blogging: Strawberry Squid

Pretty.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Kevin RuddDefence Strategy Is A National Scandal

The following statement was issued to the Sydney Morning Herald on 2 June 2020:

There are major gaps with the Morrison Government’s 2020 Strategic Policy Update and associated Force Structure Review. It is long on rhetoric and short on delivery.

My government’s 2009 Defence White Paper was the first to enunciate major changes in Australia’s strategic circumstances because of China’s rise. Malcolm Turnbull dismissed the white paper at the time as a relic of the Cold War. That white paper prescribed the biggest single expansion of the Royal Australian Navy since the war.

It also adjusted Australian strategic focus away from the Middle East where the previous conservative government had become hopelessly bogged down – both in its military deployments and in various irrational force structure decisions. The 2009 White Paper announced a doubling of the Australian submarine fleet. Eleven years later, that project has been comprehensively botched. The building of the first vessel has not even begun. In our current strategic circumstances, this is a national security scandal.

Second, Morrison’s 2020 Update talks about the so-called “Pacific Step-Up.” This is the second scandal. Only now are they attempting to restore Australia’s real aid level to the Pacific to where it was in 2013. This has opened the door far and wide to China’s aid presence because Australia was seen as an unreliable aid partner, which did not care about our friends in the region and which was dismissive of the island states’ existential concern about climate change.

Third, Morrison pretends that his government somehow invented Australia’s whole of government cyber capabilities. That is just wrong. Our white paper established the Cyber Security Operations Centre which was purpose-built for national responses to cyber incidents across government and critical private sector systems and infrastructure. The 2009 Cyber Security Strategy also established CERT Australia to provide the Australian government with all-source cyber situational awareness and an enhanced ability to facilitate operational responses to cybersecurity events of national importance. The current government has been slow in expanding these capabilities over recent years to keep pace with this rapidly expanding threat.

The post Defence Strategy Is A National Scandal appeared first on Kevin Rudd.

Sam VargheseDavid Warner must pay for his sins. As everyone else does

What does one make of the argument that David Warner, the man behind the ball-tampering scandal in South Africa in 2018, made a lesser mistake than Ben Stokes, who indulged in public fights? And of the argument that since Stokes has been made England captain for the series against the West Indies, Warner, having committed what is called a lesser sin, should also be in line for the role of Australian skipper?

The suggestion has been made by Peter Lalor, a senior cricket writer at The Australian, that Warner has paid a bigger price for past mistakes than Stokes. Does that argument really hold water?

Stokes was involved in a fracas outside a nightclub in Bristol a few years back and escaped both tragedy and legal consequences: he got into a brawl and was lucky to get off without a prison term.

But that had no connection to the game of cricket. And when we talk of someone bringing the game into disrepute, such incidents are not in the frame.

Had Stokes indulged in such immature behaviour on the field of play or insulted spectators who were at a game, then we would have to criticise the England board for handing him the mantle of leadership.

Warner brought the game into disrepute. He hatched a plot to use sandpaper to get the ball to swing, shamefully recruited the youngest player in the squad, rookie Cameron Bancroft, to carry out his plan, and now expects to be forgiven and given a chance to lead the national team.

Really? Lalor argues that the ball tampering did not hurt anyone and the umpires did not even have to change the ball. Such is the level of morality we have come to, where arguments that have little ballast are advanced because nationalistic sentiments come into the picture.

It is troubling that as senior a writer as Lalor would seek to advance such an argument, when someone has clearly violated the spirit of the game. Doubtless there will be cynics who poke fun at any suggestion that cricket is still a gentleman’s game, but without those myths that surround this pursuit, would it still have its appeal?

The short answer to that is a resounding no.

Lalor argues that Stokes’ fate would have been different had he been an Australian. I doubt that very much, because given the licence extended to Australian sports stars to behave badly, his indulgences would have been overlooked. The word used to excuse him would have been “larrikinism”.

But Warner cheated. And the Australian public, no matter what their shortcomings, do not like cheats.

Unfortunately, at a pivotal moment during the cricket team’s South African tour, this senior member could only think of cheating to win. That is sad, unfortunate, and even tragic. It speaks of a big moral chasm somewhere.

But once one has done the crime, one must do the time. Arguing as Lalor does, that both Steve Smith, the captain at the time, and Bancroft got away with no leadership bans, does not carry any weight.

The man who planned the crime was nailed with the heaviest punishment. And it is doubtful whether anyone who has a sense of justice would argue against that.

Worse Than FailureError'd: Take a Risk on NaN

"Sure, I know how long the free Standard Shipping will take, but maybe, just maybe, if I choose Economy, my package will have already arrived! Or never," Philip G. writes.

 

"To be honest, I would love to hear how a course on guitar will help me become certified on AWS!" Kevin wrote.

 

Gergő writes, "Hooray! I'm going to be so productive for the next 0 days!"

 

"I guess that inbox count is what I get for using Yahoo mail?" writes Becky R.

 

Marc W. wrote, "Try all you want, PDF Creator, but you'll never sweet talk me with your 'great' offer!"

 

Mark W. wrote, "My neighborhood has a personality split, but at least they're both Pleasant."

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

CryptogramThe Security Value of Inefficiency

For decades, we have prized efficiency in our economy. We strive for it. We reward it. In normal times, that's a good thing. Running just at the margins is efficient. A single just-in-time global supply chain is efficient. Consolidation is efficient. And that's all profitable. Inefficiency, on the other hand, is waste. Extra inventory is inefficient. Overcapacity is inefficient. Using many small suppliers is inefficient. Inefficiency is unprofitable.

But inefficiency is essential security, as the COVID-19 pandemic is teaching us. All of the overcapacity that has been squeezed out of our healthcare system; we now wish we had it. All of the redundancy in our food production that has been consolidated away; we want that, too. We need our old, local supply chains -- not the single global ones that are so fragile in this crisis. And we want our local restaurants and businesses to survive, not just the national chains.

We have lost much inefficiency to the market in the past few decades. Investors have become very good at noticing any fat in every system and swooping down to monetize those redundant assets. The winner-take-all mentality that has permeated so many industries squeezes any inefficiencies out of the system.

This drive for efficiency leads to brittle systems that function properly when everything is normal but break under stress. And when they break, everyone suffers. The less fortunate suffer and die. The more fortunate are merely hurt, and perhaps lose their freedoms or their future. But even the extremely fortunate suffer -- maybe not in the short term, but in the long term from the constriction of the rest of society.

Efficient systems have limited ability to deal with system-wide economic shocks. Those shocks are coming with increased frequency. They're caused by global pandemics, yes, but also by climate change, by financial crises, by political crises. If we want to be secure against these crises and more, we need to add inefficiency back into our systems.

I don't simply mean that we need to make our food production, or healthcare system, or supply chains sloppy and wasteful. We need a certain kind of inefficiency, and it depends on the system in question. Sometimes we need redundancy. Sometimes we need diversity. Sometimes we need overcapacity.

The market isn't going to supply any of these things, least of all in a strategic capacity that will result in resilience. What's necessary to make any of this work is regulation.

First, we need to enforce antitrust laws. Our meat supply chain is brittle because there are limited numbers of massive meatpacking plants -- now disease factories -- rather than lots of smaller slaughterhouses. Our retail supply chain is brittle because a few national companies and websites dominate. We need multiple companies offering alternatives to a single product or service. We need more competition, more niche players. We need more local companies, more domestic corporate players, and diversity in our international suppliers. Competition provides all of that, while monopolies suck that out of the system.

The second thing we need is specific regulations that require certain inefficiencies. This isn't anything new. Every safety system we have is, to some extent, an inefficiency. This is true for fire escapes on buildings, lifeboats on cruise ships, and multiple ways to deploy the landing gear on aircraft. Not having any of those things would make the underlying systems more efficient, but also less safe. It's also true for the internet itself, originally designed with extensive redundancy as a Cold War security measure.

With those two things in place, the market can work its magic to provide for these strategic inefficiencies as cheaply and as effectively as possible. As long as there are competitors who are vying with each other, and there aren't competitors who can reduce the inefficiencies and undercut the competition, these inefficiencies just become part of the price of whatever we're buying.

The government is the entity that steps in and enforces a level playing field instead of a race to the bottom. Smart regulation addresses the long-term need for security, and ensures it's not continuously sacrificed to short-term considerations.

We have largely been content to ignore the long term and let Wall Street run our economy as efficiently as it can. That's no longer sustainable. We need inefficiency -- the right kind in the right way -- to ensure our security. No, it's not free. But it's worth the cost.

This essay previously appeared in Quartz.

MEDesklab Portable USB-C Monitor

I just got a 15.6″ 4K resolution Desklab portable touchscreen monitor [1]. It takes power via USB-C and video input via USB-C or mini HDMI, has touch screen input, and has speakers built in for USB or HDMI sound.

PC Use

I bought a mini-DisplayPort to HDMI adapter and for my first test ran it from my laptop; it was seen as a 1920*1080 DisplayPort monitor. The adapter is specified as supporting 4K, so I don’t know why I didn’t get 4K to work; my laptop has done 4K with other monitors.

The next thing I plan to get is a VGA to HDMI converter so I can use this on servers. It can be a real pain getting a monitor and power cable to a rack-mounted server, and this portable monitor can be powered by one of the USB ports in the server. A quick search indicates that such devices start at about $12US.

The Desklab monitor has no markings to indicate what resolution it supports, no part number, and no serial number. The only documentation I could find about how to recognise the difference between the FullHD and 4K versions is that the FullHD version supposedly draws 2A and the 4K version draws 4A. I connected my USB Ammeter and it reported that between 0.6 and 1.0A were drawn. If they meant to say 2W and 4W instead of 2A and 4A (I’ve seen worse errors in manuals) then the current drawn would indicate the 4K version. Otherwise the stated current requirements don’t come close to matching what I’ve measured.
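As a rough sanity check (assuming the manual really meant watts, and assuming the 5V that a basic USB ammeter measures at; USB-C power delivery can negotiate higher voltages), converting those figures to current puts the 4K version inside the measured range:

```python
def current_at(watts, volts=5.0):
    """Current drawn for a given power draw, assuming a 5V USB bus."""
    return watts / volts

# If the manual's "2A"/"4A" figures were really 2W/4W:
print(current_at(2.0))  # FullHD: 0.4 A
print(current_at(4.0))  # 4K: 0.8 A -- inside the measured 0.6-1.0 A
```

So the measured 0.6 to 1.0A is at least consistent with a 4W (4K) unit on a 5V bus, though this is only a guess at what the manual intended.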

Power

The promise of USB-C was power from anywhere to anywhere. I think that such power can theoretically be done with USB 3 and maybe USB 2, but asymmetric cables make it more challenging.

I can power my Desklab monitor from a USB battery, from my Thinkpad’s USB port (even when the Thinkpad isn’t on mains power), and from my phone (although the phone battery runs down fast, as expected). When I have a mains-powered USB charger (for a laptop, rated at 60W) connected to one USB-C port and my phone on the other, the phone can be charged while giving a video signal to the display. This is how it’s supposed to work, but in my experience it’s rare to have new technology live up to its potential at the start!

One thing to note is that it doesn’t have a battery. I had imagined that it would have a battery (in spite of there being nothing on their web site to imply this) because I just couldn’t think of a touch screen device not having a battery. It would be nice if there was a version of this device with a big battery built in that could avoid needing separate cables for power and signal.

Phone Use

The first thing to note is that the Desklab monitor won’t work with all phones; whether a phone will take the option of an external display depends on its configuration, and some phones may support an external display but not the touchscreen. The Huawei Mate devices are specifically listed in the printed documentation as being supported for touchscreen as well as display. Surprisingly the Desklab web site has no mention of this unless you download the PDF of the manual; they really should have a list of confirmed supported devices and a forum for users to report on how it works.

My phone is a Huawei Mate 10 Pro so I guess I got lucky here. My phone has a “desktop mode” that can be enabled when I connect it to a USB-C device (not sure what criteria it uses to determine if the device is suitable). The desktop mode has something like a regular desktop layout and you can move windows around etc. There is also the option of having a copy of the phone’s screen, but it displays the image of the phone screen vertically in the middle of the landscape layout monitor which is ridiculous.

When desktop mode is enabled it’s independent of the phone interface so I had to find the icons for the programs I wanted to run in an unsorted list with no search usable (the search interface of the app list brings up the keyboard which obscures the list of matching apps). The keyboard takes up more than half the screen and there doesn’t seem to be a way to make it smaller. I’d like to try a portrait layout which would make the keyboard take something like 25% of the screen but that’s not supported.

It’s quite easy to type on a keyboard that’s slightly larger than a regular PC keyboard (a 15″ display with no numeric keypad or cursor control keys). The Hacker’s Keyboard app might work well with this as it has cursor control keys. The GUI has an option for full-screen mode for an app which is really annoying to get out of (you have to use a drop-down from the top of the screen); full-screen mode doesn’t make sense for a display this large. Overall the GUI is a bit clunky; imagine Windows 3.1 with a start button and task bar. One interesting thing to note is that the desktop and phone GUIs can be run separately, so you can type on the Desklab (or any similar device) and look things up on the phone. Multiple monitors never really interested me for desktop PCs because switching between windows is fast and easy, and it’s easy to resize windows to fit several on the desktop. Resizing windows in the Huawei GUI doesn’t seem easy (although I might be missing some things), and the keyboard takes up enough of the screen that having multiple windows open while typing isn’t viable.

I wrote the first draft of this post on my phone using the Desklab display. It’s not nearly as easy as writing on a laptop but much easier than writing on the phone screen.

Currently Desklab is offering 2 models for sale, 4K resolution for $399US and FullHD for $299US. I got the 4K version which is very expensive at the moment when converted to Australian dollars. There are significantly cheaper USB-C monitors available (such as this ASUS one from Kogan for $369AU), but I don’t think they have touch screens and therefore can’t be used with a phone unless you enable the phone screen as touch pad mode and have a mouse cursor on screen. I don’t know if all Android devices support that, it could be that a large part of the desktop experience I get is specific to Huawei devices.

One annoying feature is that if I use the phone power button to turn the screen off, it shuts down the connection to the Desklab display, but the phone screen will turn off if I leave it alone for the screen timeout (which I have set to 10 minutes).

Caveats

When I ordered this I wanted the biggest screen possible. But now that I have it, the fact that it doesn’t fit in the pocket of my Scott e Vest jacket [2] will limit what I can do with it. Maybe I’ll be buying a 13″ monitor in the near future; I expect that Desklab will do well and start selling them in a wide range of sizes. A 15.6″ portable device is inconvenient even in the laptop format, and a thin portable screen is inconvenient in many ways.

Netflix doesn’t display video on the Desklab screen; I suspect that Netflix is doing this deliberately as some misguided attempt at stopping piracy. The monitor is really good for watching video as it has the speakers in good locations for stereo sound, so it’s a pity that Netflix is difficult.

The functionality on phones from companies other than Huawei is unknown. It is likely to work on most Android phones, but if a particular phone is important to you then you want to Google for how it worked for others.

Worse Than FailureABCD

As is fairly typical in our industry, Sebastian found himself working as a sub-contractor to a sub-contractor to a contractor to a big company. In this case, it was IniDrug, a pharmaceutical company.

Sebastian was building software that would be used at various steps in the process of manufacturing, which meant he needed to spend a fair bit of time in clean rooms, and on air-gapped networks, to prevent trade secrets from leaking out.

Like a lot of large companies, they had very formal document standards. Every document going out needed to have the company logo on it, somewhere. This meant all of the regular employees had the IniDrug logo in their email signatures, e.g.:

Bill Lumbergh
Senior Project Lead
  _____       _ _____                   
 |_   _|     (_|  __ \                  
   | |  _ __  _| |  | |_ __ _   _  __ _ 
   | | | '_ \| | |  | | '__| | | |/ _` |
  _| |_| | | | | |__| | |  | |_| | (_| |
 |_____|_| |_|_|_____/|_|   \__,_|\__, |
                                   __/ |
                                  |___/ 

At least, they did until Sebastian got an out of hours, emergency call. While they absolutely were not set up for remote work, Sebastian could get webmail access. And in the webmail client, he saw:

Bill Lumbergh
Senior Project Lead
ABCD

At first, Sebastian assumed Bill had screwed up his sigline. Or maybe the attachment broke? But as Sebastian hopped on an email chain, he noticed a lot of ABCDs. Then someone sent out a Word doc (because why wouldn’t you catalog your emergency response in a Word document?), and in the space where it usually had the IniDrug logo, it instead had “ABCD”.

The crisis resolved itself without any actual effort from Sebastian or his fellow contractors, but they had to reply to a few emails just to show that they were “pigs and not chickens”: committed to quality software. The next day, Sebastian mentioned the ABCD weirdness.

“I saw that too. I wonder what the deal was?” his co-worker Joanna said.

They pulled up the same document on Sebastian’s work computer, and the logo displayed correctly. He clicked on it and saw the insertion point blinking back at him. Then he glanced at the formatting toolbar and saw “IniDrug Logo” as the active font.

Puzzled, he selected the logo and changed the font. “ABCD” appeared.

IniDrug had a custom font made, hacked so that if you typed ABCD the resulting output would look like the IniDrug logo. That was great, if you were using a computer with the font installed, or if you remembered to make sure your word processor was embedding all your weird custom fonts.
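One plausible way to build such a font (a sketch, not necessarily how IniDrug's font actually worked) is an OpenType ligature substitution: the font contains a single glyph drawn as the logo, plus a rule that replaces the character sequence A B C D with that glyph. In OpenType feature-file (.fea) syntax, with a hypothetical glyph named `logo`, the rule would look like:

```
# Hypothetical sketch in OpenType feature-file (.fea) syntax;
# "logo" is a glyph drawn to look like the company logo.
feature liga {
    sub A B C D by logo;
} liga;
```

Because the substitution lives entirely in the font, any machine without the font installed, or any document without embedded fonts, falls back to the literal text “ABCD”, which is exactly what Sebastian saw in webmail.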

Which also meant a bunch of outside folks were interacting with IniDrug employees, wondering why on Earth they all had “ABCD” in their siglines. Sebastian and Joanna got a big laugh about it, and shared the joke with their fellow contractors. Helping the new contractors discover this became a rite of passage. When contractors left for other contracts, they’d tell their peers, “It was great working at ABCD, but it’s time that I moved on.”

There were a lot of contractors, all chuckling about this, and one day in a shared break room, a bunch of T-Shirts appeared: plain white shirts with “ABCD” written on them in Arial.

That, as it turned out, was the bridge too far, and it got the attention of someone who was a regular IniDrug employee.

To the Contracting Team:
In the interests of maintaining a professional environment, we will be updating the company dress code. Shirts decorated with the text “ABCD” are prohibited, and should not be worn to work. If you do so, you will be asked to change or conceal the offending content.

Bill Lumbergh
Senior Project Lead
ABCD

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.