Planet Russell


Planet Debian: Norbert Preining: Switching from KDE/Plasma to Gnome3 for one week

This guy is doing great work, providing patches to improve kwin, and then he tried Gnome3 for a week, a full 7 days. His verdict:

overall, after one week of using this, I can say…

it’s f**king HORRIBLE!!!

…well, it’s not as bad as I thought… but yeah, it was overall a pretty unpleasant experience…
I definitely have seen improvements, but the desktop is still not in shape.

I mean, it’s still not as usable as KDE is….

Honestly, I can’t agree more. I have tried Gnome3 for over a year, again and again, and it feels like a block of concrete put onto the feet of dissidents by Italian mafia bosses. It drowns and kills you.

Here are the links: Start, Day 1, Day 2, Day 3, Day 4, Day 5, Day 6, Day 7.

Thanks a lot for these blog posts, incredibly informative and convincing!

Kevin Rudd: BBC World: US-China Tensions

E&OE TRANSCRIPT
TV INTERVIEW
BBC WORLD
13 AUGUST 2020

Topics: Foreign Affairs article ‘Beware the Guns of August – in Asia’

Mike Embley
Beijing’s crackdown on Hong Kong’s democracy movement has attracted strong criticism, with both Washington and Beijing hitting key figures with sanctions and closing consulates in recent weeks. And that is not the only issue where the two countries don’t see eye to eye: tensions have been escalating on a range of fronts, including the Chinese handling of the pandemic, the American decision to ban Huawei and Washington’s allegations of human rights abuses against Uighur Muslims in Xinjiang. So where is all this heading? Let’s try and find out as we speak to Kevin Rudd, former Australian Prime Minister, of course, now the president of the Asia Society Policy Institute. Welcome, very good to talk to you. You’ve been very vocal about China’s attitudes to democracy in Hong Kong, and also about the tit-for-tat sanctions between the US and China. Where do you think all this is heading?

Kevin Rudd
Well, if our prism for analysis is where the US-China relationship goes, the bottom line is we haven’t seen this relationship in such fundamental disrepair in about half a century. And as a result, whether it’s Hong Kong, or whether it’s Taiwan, or events unfolding in the South China Sea, this is pushing the relationship into greater and greater levels of crisis. What concerns those of us who study this professionally, and who know both systems of government reasonably well, both in Beijing and Washington, is that the probability of a crisis unfolding either in the Taiwan Straits or in the South China Sea is now growing. And the probability of escalation into a serious shooting match is now real. And the lesson of history is that it’s very difficult to de-escalate under those circumstances.

Mike Embley
Yes, I think you’ve spoken in terms of the risk of a hot war, actual war between the US and China. Are you serious?

Kevin Rudd
I am serious, and I’ve not said this before. I’ve been a student of US-China relations for the last 35 years, and I take a genuinely sceptical approach to people who have sounded the alarms in previous periods of the relationship. But those of us who have observed this through the prism of history, I think, have got a responsibility to say to decision makers both in Washington and in Beijing right now: be careful what you wish for, because this is catapulting in a particular direction. When you look at the South China Sea in particular, there you have a huge amount of metal on metal, that is, a large number of American ships and a large number of People’s Liberation Army Navy ships, and a similar number of aircraft. The rules of engagement and the standard operating procedures of these vessels are unbeknownst to the rest of us, and we’ve had near misses before. What I’m pointing to is that if we actually have a collision, or a sinking, or a crash, what then ensues in terms of crisis management on both sides? When we last had this, in 2001-2002 in the Bush administration, the state of the US-China relationship was pretty good. Right now, 20 years later, it is fundamentally appalling. That’s why many of us are deeply concerned, and are sounding this concern both to Beijing and Washington.

Mike Embley
And yet, you know, of course, China is such a power economically and is making its presence felt in so many places in the world. There is a sense that really China can pretty much do what it wants. How do you avoid the kind of situation you’re describing?

Kevin Rudd
Well, the government in Beijing needs to understand the importance of restraint as well, in terms of its own calculus of its own long-term national interests. And that is, China’s current course of action across a range of fronts is in fact causing a massive international reaction against China, unprecedented against, again, the measures of the last 40 or 50 years. You now have fundamental dislocations in the relationship not just with Washington, but with Canada, with Australia, with the United Kingdom, with Japan, with the Republic of Korea, and a whole bunch of others as well, including those in various parts of continental Europe. And so, therefore, looking at this from the prism of Beijing’s own interests, there are those in Beijing who will be raising the argument: are we pushing too far, too hard, too fast? And the responsibility of the rest of us is to say to that cautionary advice within Beijing: all power to your arm in restraining China from this course of action. But also, in equal measure, to say to our friends in Washington, particularly in a presidential election season, where Republicans and Democrats are seeking to outflank each other to the right on China strategy, that this is no time to engage in, shall we say, symbolic acts for a domestic political purpose in the United States presidential election context, which can have real national security consequences in Southeast Asia and then globally.

Mike Embley
Mr. Rudd, you say very clearly what you hope will happen and what you hope China will realize. What do you think actually will happen? Are you optimistic, in a nutshell, or pessimistic?

Kevin Rudd
The reason for me writing the piece I’ve just done in Foreign Affairs magazine, which is entitled “Beware The Guns of August”, for those of us obviously familiar with what happened in August of 1914, is that on balance I am pessimistic that the political cultures in both capitals right now are fully seized of the risks that they are playing with on the high seas and over Taiwan as well. Hong Kong, the matters you were referring to before, frankly, add further to the deterioration of the surrounding political relationship between the two countries. But in terms of incendiary actions of a national security nature, it’s events in the Taiwan Straits and it’s events on the high seas in the South China Sea which are most likely to trigger this. And to answer your question directly: right now, until we see the other side of the US presidential election, I remain on balance concerned and pessimistic.

Mike Embley
Right. Kevin Rudd, thank you very much for talking to us.

Kevin Rudd
Good to be with you.


Kevin Rudd: Australian Jewish News: Michael Gawenda and ‘The Powerbroker’

With the late Shimon Peres in 2012.

This article was first published by The Australian Jewish News on 13 August 2020.

The factional manoeuvrings of Labor’s faceless men a decade ago are convoluted enough without demonstrable misrepresentations by authors like Michael Gawenda in his biography of Mark Leibler, The Powerbroker.

Gawenda claims my memoir, The PM Years, blames the leadership coup on Leibler’s hardline faction of Australia’s Israel lobby, “plotting” in secret with Julia Gillard – a vision of “extreme, verging on conspiratorial darkness”. This is utter fabrication on his part. My simple challenge to Gawenda is to specify where I make such claims. He can’t. If he’d bothered to call me before publishing, I would have told him so.

Let me be clear: I have never claimed, nor do I believe, that Leibler or AIJAC were involved in the coup. It was conceived and executed almost entirely by factional warlords who blamed me for stymieing their individual ambitions.

It’s true my relationship with Leibler was strained in 2010 after Mossad agents stole the identities of four Australians living in Israel. Using false passports, they slipped into Dubai to assassinate a Hamas operative. They broke our laws and breached our trust.

The Mossad also jeopardised the safety of every Australian who travels on our passports in the Middle East. Unless this stopped, any Australian would be under suspicion, exposing them to arbitrary detention or worse.

More shocking, this wasn’t their first offence. The Mossad explicitly promised to stop abusing Australian passports after an incident in 2003, in a memorandum kept secret to spare Israel embarrassment. It didn’t work. They reoffended because they thought Australia was weak and wouldn’t complain.

We needed a proportional response to jolt Israeli politicians to act, without fundamentally damaging our valued relationship. Australia’s diplomatic, national security and intelligence establishments were unanimous: we should expel the Mossad’s representative in Canberra. This would achieve our goal but make little practical difference to Australia-Israel cooperation. Every minister in the national security committee agreed, including Gillard.

But obdurate elements of Australia’s Israel lobby accused us of overreacting. How could we treat our friend Israel like this? How did we know it was them? Wasn’t this just the usual murky business of espionage? According to Leibler, Diaspora leaders should “not criticise any Israeli government when it comes to questions of Israeli security”. Any violation of law, domestic or international, is acceptable. Never mind every citizen’s duty to uphold our laws and protect Australian lives.

I invited Leibler and others to dinner at the Lodge to reassure them the affair, although significant, wouldn’t derail the relationship. I sat politely as Leibler berated me. Boasting of his connections, he wanted to personally arrange meetings with the Mossad to smooth things over. We had, of course, already done this.

Apropos of nothing, Leibler then leaned over and, in what seemed to me a slightly menacing manner, suggested Julia was “looking very good in the polls” and “a great friend of Israel”. This surprised me, not least because I believed, however foolishly, that my deputy was loyal.

Leibler’s denials are absorbed wholly by Gawenda, solely on the basis of his notes. Give us a break, Michael – why would Leibler record such behaviour? It’s also meaningless that others didn’t hear him since, as often happens at dinners, multiple conversations occur around the table. The truth is it did happen, hence why I recorded it in my book. I have no reason to invent such an anecdote.

In fairness to Gillard, her eagerness to befriend Leibler reflected the steepness of her climb on Israel. She emerged from organisations that historically antagonised Israel – the Socialist Left and Australian Union of Students – and often overcompensated by swinging further towards AIJAC than longstanding Labor policy allowed.

By contrast, my reputation was well established, untainted by the anti-Israel sentiment sometimes found on the political left. A lifelong supporter of Israel and security for its people, I defied Labor critics by proudly leading Parliament in praise of the Jewish State’s achievements. I have consistently denounced the BDS campaign targeting Israeli businesses, both in office and since. My government blocked numerous shipments of potential nuclear components to Iran, and commissioned legal advice on charging president Mahmoud Ahmadinejad with incitement to genocide against the Jewish people. I’m as proud of this record as I am of my longstanding support for a two-state solution.

I have never considered that unequivocal support for Israel means unequivocal support for the policies of the Netanyahu government. For example, the annexation plan in the West Bank would be disastrous for Israel’s future security and fundamentally breach international law – a view shared by UK Conservative PM Boris Johnson. Israel, like the Australian Jewish community, is not monolithic; my concerns are shared by ordinary Israelis as well as many members of the Knesset.

Michael Gawenda is free to criticise me for things I’ve said and done (ironically, as editor of The Age, he didn’t consider me left-wing enough!), but his assertions in this account are flatly untrue.


Planet Debian: Erich Schubert: Publisher MDPI lies to prospective authors

The publisher MDPI is a spammer and lies.

If you upload a paper draft to arXiv, MDPI will send spam to the authors to solicit submission. Within minutes of an upload I received the following email (sent by MDPI staff, not some overly eager new editor):

We read your recent manuscript "[...]" on
arXiv, and sincerely invite you to submit it to our journal Future
Internet, if it has not been published or submitted elsewhere.

Future Internet (ISSN 1999-5903, indexed by Scopus, Ei compendex,
*ESCI*-Web of Science) is a journal on Internet technologies and the
information society. It maintains a rigorous and fast peer review system
with a median publication time of 35 days from submission to online
publication, and 3 days from acceptance to publication. The journal
scope is shown here:
https://www.mdpi.com/journal/futureinternet/about.
Editorial Board: https://www.mdpi.com/journal/futureinternet/editors.

Since Future Internet is an open access journal there is a publication
fee. Your paper will be published, with a 20% discount (amounting to 200
CHF), and provided that it is accepted after our standard peer-review
procedure. 

First of all, the email begins with a lie, because this paper clearly states that it is submitted elsewhere. Also, it fits other journals much better, and if they had read even just the abstract, they would have known that.

This is predatory behavior by MDPI. Clearly, it is just about getting as many submissions as possible. The journal charges 1000 CHF (next year, 1400 CHF) to publish the papers. It’s about the money.

Also, there have been reports that MDPI ignores the reviews, and always publishes even when reviewers recommended rejection…

The reviewer requests I have received from MDPI came with unreasonable deadlines, which do not allow for a thorough peer review. Hence I asked to not ever be emailed by them again, and I must assume that many other qualified reviewers do the same. MDPI boasts in their 2019 annual report a median time to first decision of 19 days – in my discipline, the typical time window to ask for reviews is at least a month (for shorter conference papers, not full journal articles), because professors tend to have lots of other duties and hence need more flexibility. The above paper was submitted in March and has now been under review for 4 months already. This is an annoyingly long time window, and I would appreciate it if it were shorter, but it shows how extremely short the MDPI time frame is. They also claim 269.1k submissions and 106.2k published papers, so the acceptance rate is around 40% on average; assuming that there are some journals with higher standards in the mix, some others must have acceptance rates much higher than this. I’d assume that many reputable journals have a 40% desk-rejection rate just for papers that are not even on-topic…

The average cost to authors is given as 1144 CHF (after discounts, 25% waived fees, etc.), so we are talking about 120 million CHF of revenue from authors. Is that what you want academic publishing to be?

I am not happy with some of the established publishers such as Elsevier that also overcharge universities heavily. I do think we need to change academic publishing, and arXiv is a big improvement here. But I do not respect publishers such as MDPI that lie and send spam.

Worse Than Failure: CodeSOD: Don't Stop the Magic

Don’t you agree that magic strings and numbers are bad? From the perspective of readability and future maintenance, constants are better. We all know this is true, and we all know that it can sometimes go too far.

Douwe Kasemier has a co-worker that has taken that a little too far.

For example, they have a Java method with a signature like this:

Document addDocument(Action act, boolean createNotification);

The Action type contains information about what action to actually perform, but it will result in a Document. Sometimes this creates a notification, and sometimes it doesn’t.

Douwe’s co-worker was worried about the readability of addDocument(myAct, true) and addDocument(myAct, false), so they went ahead and added some constants:

    private static final boolean NO_NOTIFICATION = false;
    private static final boolean CREATE_NOTIFICATION = true;

Okay, now, I don’t love this, but it’s not the worst thing…

public Document doActionWithNotification(Action act) {
  return addDocument(act, CREATE_NOTIFICATION);
}

public Document doActionWithoutNotification(Action act) {
  return addDocument(act, NO_NOTIFICATION);
}

Okay, now we’re just getting silly. This is at least diminishing returns of readability, if not actively harmful to making the code clear.

    private static final int SIX = 6;
    private static final int FIVE = 5;
    public String findId(String path) {
      String[] folders = path.split("/");
      if (folders.length >= SIX && (folders[FIVE].startsWith(PREFIX_SR) || folders[FIVE].startsWith(PREFIX_BR))) {
          return folders[FIVE].substring(PREFIX_SR.length());
      }
      return null;
    }

Ah, there we go. The logical conclusion: constants for 5 and 6. And yet they didn’t feel the need to make a constant for "/"?

At least this is maintainable, so that when the value of FIVE changes, the method doesn’t need to change.



Planet Debian: Norbert Preining: KDE/Plasma Status Update 2020-08-13

Short status update on my KDE/Plasma packages for Debian sid and testing:

  • Frameworks update to 5.73
  • Apps are now available from the main archive in the same upstream version, superseding my packages
  • New architecture: aarch64

Hope that helps a few people. See this post for how to setup archives.

Enjoy.

Planet Debian: Jonathan Dowland: Generic Haskell

When I did the work described earlier in template haskell, I also explored generic programming in Haskell to solve a particular problem. StrIoT is a program generator: it outputs source code, which may depend upon other modules, which need to be imported via declarations at the top of the source code files.

The data structure that StrIoT manipulates contains information about what modules are loaded to resolve the names that have been used in the input code, so we can walk that structure to automatically derive an import list. The generic programming tools I used for this are from Scrap Your Boilerplate (SYB), a module written to complement a paper of the same name. In this code snippet, everything and mkQ are from SYB:

extractNames :: Data a => a -> [Name]
extractNames = everything (++) (\a -> mkQ [] f a)
     where f = (:[])

The input must be any type which implements typeclass Data, as must all its members (and their members etc.): This holds for the Template Haskell Exp types. The output is a normal list of Names. The utility function has a more specific type Name -> [Name]. This is all that's needed to walk over the heterogeneous data structures and do something specific (f) when we encounter a Name.

Post-processing the Names to get a list of modules is simple

 nub . catMaybes . map nameModule . concatMap extractNames

Unfortunately, there's a weird GHC behaviour relating to the module names for some Prelude functions that makes the above less useful in practice. For example, the Prelude function words :: String -> [String] can normally be used without an explicit import (since it's a Prelude function). However, once round-tripped through a Name, it becomes GHC.OldList.words. Attempting to import GHC.OldList fails in some contexts, because it's a hidden module or package. I've been meaning to investigate further and, if necessary, file a GHC bug about this.

For this reason I've scrapped all the above and gone with a different plan. We go back to requiring the user to specify their required import list explicitly. We then walk over the Exp data type prior to code generation and decanonicalize all the Names. I also use generic programming/SYB to do this:

unQualifyNames :: Exp -> Exp
unQualifyNames = everywhere (\a -> mkT f a)
     where f :: Name -> Name
           f n = if n == '(.)
            then mkName "."
            else (mkName . last . splitOn "." . pprint) n

I've had to special-case composition (.) since that code-point is also used as the delimiter between package, module and function. Otherwise this looks very similar to the earlier function, except using everywhere and mkT (make transformation) instead of everything and mkQ (make query).

Cryptogram: Smart Lock Vulnerability

Yet another Internet-connected door lock is insecure:

Sold by retailers including Amazon, Walmart, and Home Depot, U-Tec's $139.99 UltraLoq is marketed as a "secure and versatile smart deadbolt that offers keyless entry via your Bluetooth-enabled smartphone and code."

Users can share temporary codes and 'Ekeys' to friends and guests for scheduled access, but according to Tripwire researcher Craig Young, a hacker able to sniff out the device's MAC address can help themselves to an access key, too.

UltraLoq eventually fixed the vulnerabilities, but not in a way that should give you any confidence that they know what they're doing.

EDITED TO ADD (8/12): More.

Cryptogram: Cybercrime in the Age of COVID-19

The Cambridge Cybercrime Centre has a series of papers on cybercrime during the coronavirus pandemic.

EDITED TO ADD (8/12): Interpol report.

Cryptogram: Twitter Hacker Arrested

A 17-year-old Florida boy was arrested and charged with last week's Twitter hack.

News articles. Boing Boing post. Florida state attorney press release.

This is a developing story. Post any additional news in the comments.

EDITED TO ADD (8/1): Two others have been charged as well.

EDITED TO ADD (8/11): The online bail hearing was hacked.

Krebs on Security: Why & Where You Should Plant Your Flag

Several stories here have highlighted the importance of creating accounts online tied to your various identity, financial and communications services before identity thieves do it for you. This post examines some of the key places where everyone should plant their virtual flags.

As KrebsOnSecurity observed back in 2018, many people — particularly older folks — proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. From that story:

“The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.”

“The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.”

In short, although you may not be required to create online accounts to manage your affairs at your ISP, the U.S. Postal Service, the credit bureaus or the Social Security Administration, it’s a good idea to do so for several reasons.

Most importantly, the majority of the entities I’ll discuss here allow just one registrant per person/customer. Thus, even if you have no intention of using that account, establishing one will be far easier than trying to dislodge an impostor who gets there first using your identity data and an email address they control.

Also, the cost of planting your flag is virtually nil apart from your investment of time. In contrast, failing to plant one’s flag can allow ne’er-do-wells to create a great deal of mischief for you, whether it be misdirecting your service or benefits elsewhere, or canceling them altogether.

Before we dive into the list, a couple of important caveats. Adding multi-factor authentication (MFA) at these various providers (where available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

Perhaps the most important place to enable MFA is with your email accounts. Armed with access to your inbox, thieves can then reset the password for any other service or account that is tied to that email address.

People who don’t take advantage of these added safeguards may find it far more difficult to regain access when their account gets hacked, because increasingly thieves will enable multi-factor options and tie the account to a device they control.

Secondly, guard the security of your mobile phone account as best you can (doing so might just save your life). The passwords for countless online services can be reset merely by entering a one-time code sent via text message to the phone number on file for the customer’s account.

And thanks to the increasing prevalence of a crime known as SIM swapping, thieves may be able to upend your personal and financial life simply by tricking someone at your mobile service provider into diverting your calls and texts to a device they control.

Most mobile providers offer customers the option of placing a PIN or secret passphrase on their accounts to lessen the likelihood of such attacks succeeding, but these protections also usually fail when the attackers are social engineering some $12-an-hour employee at a mobile phone store.

Your best option is to reduce your overall reliance on your phone number for added authentication at any online service. Many sites now offer MFA options that are app-based and not tied to your mobile service, and this is your best option for MFA wherever possible.

YOUR CREDIT FILES

First and foremost, all U.S. residents should ensure they have accounts set up online at the three major credit bureaus — Equifax, Experian and Trans Union.

It’s important to remember that the questions these bureaus will ask to verify your identity are not terribly difficult for thieves to answer or guess just by referencing public records and/or perhaps your postings on social media.

You will need accounts at these bureaus if you wish to freeze your credit file. KrebsOnSecurity has for many years urged all readers to do just that, because freezing your file is the best way to prevent identity thieves from opening new lines of credit in your name. Parents and guardians also can now freeze the files of their dependents for free.

For more on what a freeze entails and how to place or thaw one, please see this post. Beyond the big three bureaus, Innovis is a distant fourth bureau that some entities use to check consumer creditworthiness. Fortunately, filing a freeze with Innovis likewise is free and relatively painless.

It’s also a good idea to notify a company called ChexSystems to keep an eye out for fraud committed in your name. Thousands of banks rely on ChexSystems to verify customers who are requesting new checking and savings accounts, and ChexSystems lets consumers place a security alert on their credit data to make it more difficult for ID thieves to fraudulently obtain checking and savings accounts. For more information on doing that with ChexSystems, see this link.

If you placed a freeze on your file at the major bureaus more than a few years ago but haven’t revisited the bureaus’ sites lately, it might be wise to do that soon. Following its epic 2017 data breach, Equifax reconfigured its systems to invalidate the freeze PINs it previously relied upon to unfreeze a file, effectively allowing anyone to bypass that PIN if they can glean a few personal details about you. Experian’s site also has undermined the security of the freeze PIN.

I mentioned planting your flag at the credit bureaus first because if you plan to freeze your credit files, it may be wise to do so after you have planted your flag at all the other places listed in this story. That’s because these other places may try to check your identity records at one or more of the bureaus, and having a freeze in place may interfere with that account creation.

YOUR FINANCIAL INSTITUTIONS

I can’t tell you how many times people have proudly told me they don’t bank online, and prefer to manage all of their accounts the old fashioned way. I always respond that while this is totally okay, you still need to establish an online account for your financial providers because if you don’t someone may do it for you.

This goes doubly for any retirement and pension plans you may have. It’s a good idea for people with older relatives to help those individuals set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online.

This process is doubly important for parents and relatives who have just lost a spouse. When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members, and identity thieves love to mine this information.

YOUR GOVERNMENT

Whether you’re approaching retirement, middle-aged or just starting out in your career, you should establish an account online at the U.S. Social Security Administration. Maybe you don’t believe Social Security money will actually still be there when you retire, but chances are you’re nevertheless paying into the system now. Either way, the plant-your-flag rules still apply.

Ditto for the Internal Revenue Service. A few years back, ID thieves who specialize in perpetrating tax refund fraud were massively registering people at the IRS’s website to download key data from their prior years’ tax transcripts. While the IRS has improved its taxpayer validation and security measures since then, it’s a good idea to mark your territory here as well.

The same goes for your state’s Department of Motor Vehicles (DMV), which maintains an alarming amount of information about you whether you have an online account there or not. Because the DMV also is the place that typically issues state drivers licenses, you really don’t want to mess around with the possibility that someone could register as you, change your physical address on file, and obtain a new license in your name.

Last but certainly not least, you should create an account for your household at the U.S. Postal Service’s Web site. Having someone divert your mail or delay delivery of it for however long they like is not a fun experience.

Also, the USPS has this nifty service called Informed Delivery, which lets residents view scanned images of all incoming mail prior to delivery. In 2018, the U.S. Secret Service warned that identity thieves have been abusing Informed Delivery to let them know when residents are about to receive credit cards or notices of new lines of credit opened in their names. Do yourself a favor and create an Informed Delivery account as well. Note that multiple occupants of the same street address can each have their own accounts.

YOUR HOME

Online accounts coupled with the strongest multi-factor authentication available also are important for any services that provide you with telephone, television and Internet access.

Strange as it may sound, plenty of people who receive all of these services in a bundle from one ISP do not have accounts online to manage their service. This is dangerous because if thieves can establish an account on your behalf, they can then divert calls intended for you to their own phones.

My original Plant Your Flag piece in 2018 told the story of an older Florida man who had pricey jewelry bought in his name after fraudsters created an online account at his ISP and diverted calls to his home phone number so they could intercept calls from his bank seeking to verify the transactions.

If you own a home, chances are you also have an account at one or more local utility providers, such as power and water companies. If you don’t already have an account at these places, create one and secure access to it with a strong password and any other access controls available.

These frequently monopolistic companies traditionally have poor to non-existent fraud controls, even though they effectively operate as mini credit bureaus. Bear in mind that possession of one or more of your utility bills is often sufficient documentation to establish proof of identity. As a result, such records are highly sought-after by identity thieves.

Another common way that ID thieves establish new lines of credit is by opening a mobile phone account in a target’s name. A little-known entity that many mobile providers turn to for validating new mobile accounts is the National Consumer Telecommunications and Utilities Exchange, or nctue.com. Happily, the NCTUE allows consumers to place a freeze on their file by calling their 800-number, 1-866-349-5355. For more information on the NCTUE, see this page.

Have I missed any important items? Please sound off in the comments below.

Cryptogram: Cryptanalysis of an Old Zip Encryption Algorithm

Mike Stay broke an old zipfile encryption algorithm to recover $300,000 in bitcoin.

DefCon talk here.

Planet Debian: Michael Stapelberg: Hermetic packages (in distri)

In distri, packages (e.g. emacs) are hermetic. By hermetic, I mean that the dependencies a package uses (e.g. libusb) don’t change, even when newer versions are installed.

For example, if package libusb-amd64-1.0.22-7 is available at build time, the package will always use that same version, even after the newer libusb-amd64-1.0.23-8 has been installed into the package store.

Another way of saying the same thing is: packages in distri are always co-installable.

This makes the package store more robust: additions to it will not break the system. On a technical level, the package store is implemented as a directory containing distri SquashFS images and metadata files, into which packages are installed in an atomic way.

Out of scope: plugins are not hermetic by design

One exception where hermeticity is not desired are plugin mechanisms: optionally loading out-of-tree code at runtime obviously is not hermetic.

As an example, consider glibc’s Name Service Switch (NSS) mechanism. Page 29.4.1 Adding another Service to NSS describes how glibc searches $prefix/lib for shared libraries at runtime.

Debian ships about a dozen NSS libraries for a variety of purposes, and enterprise setups might add their own into the mix.

systemd (as of v245) accounts for 4 NSS libraries, e.g. nss-systemd for user/group name resolution for users allocated through systemd’s DynamicUser= option.

Having packages be as hermetic as possible remains a worthwhile goal despite any exceptions: I will gladly use a 99% hermetic system over a 0% hermetic system any day.

Side note: Xorg’s driver model (which can be characterized as a plugin mechanism) does not fall under this category because of its tight API/ABI coupling! For this case, where drivers are only guaranteed to work with precisely the Xorg version for which they were compiled, distri uses per-package exchange directories.

Implementation of hermetic packages in distri

On a technical level, the requirement is: all paths used by the program must always result in the same contents. This is implemented in distri via the read-only package store mounted at /ro, e.g. files underneath /ro/emacs-amd64-26.3-15 never change.

To change all paths used by a program, in practice, three strategies cover most paths:

ELF interpreter and dynamic libraries

Programs on Linux use the ELF file format, which contains two kinds of references:

First, the ELF interpreter (PT_INTERP segment), which is used to start the program. For dynamically linked programs on 64-bit systems, this is typically ld.so(8).

Many distributions use system-global paths such as /lib64/ld-linux-x86-64.so.2, but distri compiles programs with -Wl,--dynamic-linker=/ro/glibc-amd64-2.31-4/out/lib/ld-linux-x86-64.so.2 so that the full path ends up in the binary.

The ELF interpreter is shown by file(1), but you can also use readelf -a $BINARY | grep 'program interpreter' to display it.

And secondly, the rpath, a run-time search path for dynamic libraries. Instead of storing full references to all dynamic libraries, we set the rpath so that ld.so(8) will find the correct dynamic libraries.

Originally, we used to just set a long rpath, containing one entry for each dynamic library dependency. However, we have since switched to using a single lib subdirectory per package as its rpath, and placing symlinks with full path references into that lib directory, e.g. using -Wl,-rpath=/ro/grep-amd64-3.4-4/lib. This is better for performance, as ld.so uses a per-directory cache.

Note that program load times are significantly influenced by how quickly you can locate the dynamic libraries. distri uses a FUSE file system to load programs from, so getting proper -ENOENT caching into place drastically sped up program load times.

Instead of compiling software with the -Wl,--dynamic-linker and -Wl,-rpath flags, one can also modify these fields after the fact using patchelf(1). For closed-source programs, this is the only possibility.

The rpath can be inspected by using e.g. readelf -a $BINARY | grep RPATH.

Environment variable setup wrapper programs

Many programs are influenced by environment variables: to start another program, said program is often found by checking each directory in the PATH environment variable.

Such search paths are prevalent in scripting languages, too, to find modules. Python has PYTHONPATH, Perl has PERL5LIB, and so on.

To set up these search path environment variables at run time, distri employs an indirection. Instead of e.g. teensy-loader-cli, you run a small wrapper program that calls precisely one execve system call with the desired environment variables.

Initially, I used shell scripts as wrapper programs because they are easily inspectable. This turned out to be too slow, so I switched to compiled programs. I’m linking them statically for fast startup, and I’m linking them against musl libc for significantly smaller file sizes than glibc (per-executable overhead adds up quickly in a distribution!).

Note that the wrapper programs prepend to the PATH environment variable; they don’t replace it in its entirety. This is important so that users have a way to extend the PATH (and other variables) if they so choose. This doesn’t hurt hermeticity because it is only relevant for programs that were not present at build time, i.e. plugin mechanisms which, by design, cannot be hermetic.
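
For illustration, a minimal wrapper along these lines might look as follows in Go. This is only a sketch of the mechanism, not distri’s generated code (the real wrappers are statically linked against musl as described above, and the exact variables and paths are package-specific placeholders here):

package main

import (
	"os"
	"strings"
	"syscall"
)

// prepend adds dir in front of the existing value of the environment
// variable key (creating the variable if it is absent).
func prepend(env []string, key, dir string) []string {
	prefix := key + "="
	for i, kv := range env {
		if strings.HasPrefix(kv, prefix) {
			env[i] = prefix + dir + ":" + strings.TrimPrefix(kv, prefix)
			return env
		}
	}
	return append(env, prefix+dir)
}

func main() {
	// Prepend (not replace) the package directories, so that users can
	// still extend PATH and friends themselves.
	env := os.Environ()
	env = prepend(env, "PATH", "/ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin")
	env = prepend(env, "LD_LIBRARY_PATH", "/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib")

	// Precisely one execve(2): replace this process with the real binary.
	err := syscall.Exec("/ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli", os.Args, env)
	// Exec only returns on error.
	os.Stderr.WriteString("exec failed: " + err.Error() + "\n")
	os.Exit(1)
}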

Shebang interpreter patching

The Shebang of scripts contains a path, too, and hence needs to be changed.

We don’t do this in distri yet (the number of packaged scripts is small), but we should.
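
To sketch what such a rewrite could look like (hypothetical, since distri does not do this yet; the interpreter path and script path in the example are made up):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// patchShebang rewrites the interpreter path on a script's shebang line,
// e.g. "#!/usr/bin/python3" becomes "#!/ro/python3-amd64-3.8.2-5/out/bin/python3".
// Note: this simplified version drops any interpreter arguments on the line.
func patchShebang(path, interpreter string) error {
	b, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	if !bytes.HasPrefix(b, []byte("#!")) {
		return nil // not a script, nothing to do
	}
	nl := bytes.IndexByte(b, '\n')
	if nl == -1 {
		nl = len(b)
	}
	patched := append([]byte("#!"+interpreter), b[nl:]...)
	return os.WriteFile(path, patched, 0755)
}

func main() {
	// Made-up paths, for illustration only:
	if err := patchShebang("out/bin/some-script", "/ro/python3-amd64-3.8.2-5/out/bin/python3"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}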

Performance requirements

The performance improvements in the previous sections are not just good to have, but practically required when many processes are involved: without them, you’ll encounter second-long delays in magit which spawns many git processes under the covers, or in dracut, which spawns one cp(1) process per file.

Downside: rebuild of packages required to pick up changes

Linux distributions such as Debian consider it an advantage to roll out security fixes to the entire system by updating a single shared library package (e.g. openssl).

The flip side of that coin is that changes to a single critical package can break the entire system.

With hermetic packages, all reverse dependencies must be rebuilt when a library’s changes should be picked up by the whole system. E.g., when openssl changes, curl must be rebuilt to pick up the new version of openssl.

This approach trades off using more bandwidth and more disk space (temporarily) against reducing the blast radius of any individual package update.

Downside: long env variables are cumbersome to deal with

This can be partially mitigated by removing empty directories at build time, which will result in shorter variables.

In general, there is no getting around this. One little trick is to use tr : '\n', e.g.:

distri0# echo $PATH
/usr/bin:/bin:/usr/sbin:/sbin:/ro/openssh-amd64-8.2p1-11/out/bin

distri0# echo $PATH | tr : '\n'
/usr/bin
/bin
/usr/sbin
/sbin
/ro/openssh-amd64-8.2p1-11/out/bin

Edge cases

The implementation outlined above works well in hundreds of packages, and only a small handful exhibited problems of any kind. Here are some issues I encountered:

Issue: accidental ABI breakage in plugin mechanisms

NSS libraries built against glibc 2.28 and newer cannot be loaded by glibc 2.27. In all likelihood, such changes do not happen too often, but it does illustrate that glibc’s published interface spec is not sufficient for forwards and backwards compatibility.

In distri, we could likely use a per-package exchange directory for glibc’s NSS mechanism to prevent the above problem from happening in the future.

Issue: wrapper bypass when a program re-executes itself

Some programs try to arrange for themselves to be re-executed outside of their current process tree. For example, consider building a program with the meson build system:

  1. When meson first configures the build, it generates ninja files (think Makefiles) which contain command lines that run the meson --internal helper.

  2. Once meson returns, ninja is called as a separate process, so it will not have the environment which the meson wrapper sets up. ninja then runs the previously persisted meson command line. Since the command line uses the full path to meson (not to its wrapper), it bypasses the wrapper.

Luckily, not many programs try to arrange for other process trees to run them. Here is a table summarizing how affected programs might try to arrange for re-execution, whether the technique results in a wrapper bypass, and what we do about it in distri:

technique to execute itself         | uses wrapper | mitigation
run-time: find own basename in PATH | yes          | wrapper program
compile-time: embed expected path   | no; bypass!  | configure or patch
run-time: argv[0] or /proc/self/exe | no; bypass!  | patch

One might think that setting argv[0] to the wrapper location seems like a way to side-step this problem. We tried doing this in distri, but had to revert and go the other way.

Misc smaller issues

Appendix: Could other distributions adopt hermetic packages?

At a very high level, adopting hermetic packages will require two steps:

  1. Using fully qualified paths whose contents don’t change (e.g. /ro/emacs-amd64-26.3-15) generally requires rebuilding programs, e.g. with --prefix set.

  2. Once you use fully qualified paths you need to make the packages able to exchange data. distri solves this with exchange directories, implemented in the /ro file system which is backed by a FUSE daemon.

The first step is pretty simple, whereas the second step is where I expect controversy around any suggested mechanism.

Appendix: demo (in distri)

This appendix contains commands and their outputs, run on upcoming distri version supersilverhaze, but verified to work on older versions, too.


The /bin directory contains symlinks for the union of all packages’ bin subdirectories:

distri0# readlink -f /bin/teensy_loader_cli
/ro/teensy-loader-cli-amd64-2.1+g20180927-7/bin/teensy_loader_cli

The wrapper program in the bin subdirectory is small:

distri0# ls -lh $(readlink -f /bin/teensy_loader_cli)
-rwxr-xr-x 1 root root 46K Apr 21 21:56 /ro/teensy-loader-cli-amd64-2.1+g20180927-7/bin/teensy_loader_cli

Wrapper programs execute quickly:

distri0# strace -fvy /bin/teensy_loader_cli |& head | cat -n
     1  execve("/bin/teensy_loader_cli", ["/bin/teensy_loader_cli"], ["USER=root", "LOGNAME=root", "HOME=/root", "PATH=/ro/bash-amd64-5.0-4/bin:/r"..., "SHELL=/bin/zsh", "TERM=screen.xterm-256color", "XDG_SESSION_ID=c1", "XDG_RUNTIME_DIR=/run/user/0", "DBUS_SESSION_BUS_ADDRESS=unix:pa"..., "XDG_SESSION_TYPE=tty", "XDG_SESSION_CLASS=user", "SSH_CLIENT=10.0.2.2 42556 22", "SSH_CONNECTION=10.0.2.2 42556 10"..., "SSHTTY=/dev/pts/0", "SHLVL=1", "PWD=/root", "OLDPWD=/root", "=/usr/bin/strace", "LD_LIBRARY_PATH=/ro/bash-amd64-5"..., "PERL5LIB=/ro/bash-amd64-5.0-4/ou"..., "PYTHONPATH=/ro/bash-amd64-5.b0-4/"...]) = 0
     2  arch_prctl(ARCH_SET_FS, 0x40c878)       = 0
     3  set_tid_address(0x40ca9c)               = 715
     4  brk(NULL)                               = 0x15b9000
     5  brk(0x15ba000)                          = 0x15ba000
     6  brk(0x15bb000)                          = 0x15bb000
     7  brk(0x15bd000)                          = 0x15bd000
     8  brk(0x15bf000)                          = 0x15bf000
     9  brk(0x15c1000)                          = 0x15c1000
    10  execve("/ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli", ["/ro/teensy-loader-cli-amd64-2.1+"...], ["USER=root", "LOGNAME=root", "HOME=/root", "PATH=/ro/bash-amd64-5.0-4/bin:/r"..., "SHELL=/bin/zsh", "TERM=screen.xterm-256color", "XDG_SESSION_ID=c1", "XDG_RUNTIME_DIR=/run/user/0", "DBUS_SESSION_BUS_ADDRESS=unix:pa"..., "XDG_SESSION_TYPE=tty", "XDG_SESSION_CLASS=user", "SSH_CLIENT=10.0.2.2 42556 22", "SSH_CONNECTION=10.0.2.2 42556 10"..., "SSHTTY=/dev/pts/0", "SHLVL=1", "PWD=/root", "OLDPWD=/root", "=/usr/bin/strace", "LD_LIBRARY_PATH=/ro/bash-amd64-5"..., "PERL5LIB=/ro/bash-amd64-5.0-4/ou"..., "PYTHONPATH=/ro/bash-amd64-5.0-4/"...]) = 0

Confirm which ELF interpreter is set for a binary using readelf(1):

distri0# readelf -a /ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli | grep 'program interpreter'
[Requesting program interpreter: /ro/glibc-amd64-2.31-4/out/lib/ld-linux-x86-64.so.2]

Confirm the rpath is set to the package’s lib subdirectory using readelf(1):

distri0# readelf -a /ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli | grep RPATH
 0x000000000000000f (RPATH)              Library rpath: [/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib]

…and verify the lib subdirectory has the expected symlinks and target versions:

distri0# find /ro/teensy-loader-cli-amd64-*/lib -type f -printf '%P -> %l\n'
libc.so.6 -> /ro/glibc-amd64-2.31-4/out/lib/libc-2.31.so
libpthread.so.0 -> /ro/glibc-amd64-2.31-4/out/lib/libpthread-2.31.so
librt.so.1 -> /ro/glibc-amd64-2.31-4/out/lib/librt-2.31.so
libudev.so.1 -> /ro/libudev-amd64-245-11/out/lib/libudev.so.1.6.17
libusb-0.1.so.4 -> /ro/libusb-compat-amd64-0.1.5-7/out/lib/libusb-0.1.so.4.4.4
libusb-1.0.so.0 -> /ro/libusb-amd64-1.0.23-8/out/lib/libusb-1.0.so.0.2.0

To verify the correct libraries are actually loaded, you can set the LD_DEBUG environment variable for ld.so(8):

distri0# LD_DEBUG=libs teensy_loader_cli
[…]
       678:     find library=libc.so.6 [0]; searching
       678:      search path=/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib            (RPATH from file /ro/teensy-loader-cli-amd64-2.1+g20180927-7/out/bin/teensy_loader_cli)
       678:       trying file=/ro/teensy-loader-cli-amd64-2.1+g20180927-7/lib/libc.so.6
       678:
[…]

NSS libraries that distri ships:

find /lib/ -name "libnss_*.so.2" -type f -printf '%P -> %l\n'
libnss_myhostname.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_myhostname.so.2
libnss_mymachines.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_mymachines.so.2
libnss_resolve.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_resolve.so.2
libnss_systemd.so.2 -> ../systemd-amd64-245-11/out/lib/libnss_systemd.so.2
libnss_compat.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_compat.so.2
libnss_db.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_db.so.2
libnss_dns.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_dns.so.2
libnss_files.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_files.so.2
libnss_hesiod.so.2 -> ../glibc-amd64-2.31-4/out/lib/libnss_hesiod.so.2

Planet Debian: Michael Stapelberg: distri: 20x faster initramfs (initrd) from scratch

In case you are not yet familiar with why an initramfs (or initrd, or initial ramdisk) is typically used when starting Linux, let me quote the wikipedia definition:

“[…] initrd is a scheme for loading a temporary root file system into memory, which may be used as part of the Linux startup process […] to make preparations before the real root file system can be mounted.”

Many Linux distributions do not compile all file system drivers into the kernel, but instead load them on-demand from an initramfs, which saves memory.

Another common scenario, in which an initramfs is required, is full-disk encryption: the disk must be unlocked from userspace, but since userspace is encrypted, an initramfs is used.

Motivation

Thus far, building a distri disk image was quite slow:

This is on an AMD Ryzen 3900X 12-core processor (2019):

distri % time make cryptimage serial=1
80.29s user 13.56s system 186% cpu 50.419 total # 19s image, 31s initrd

Of these 50 seconds, dracut’s initramfs generation accounts for 31 seconds (62%)!

Initramfs generation time drops to 8.7 seconds once dracut no longer needs to use the single-threaded gzip(1), but can use the multi-threaded replacement pigz(1):

This brings the total time to build a distri disk image down to:

distri % time make cryptimage serial=1
76.85s user 13.23s system 327% cpu 27.509 total # 19s image, 8.7s initrd

Clearly, when you use dracut on any modern computer, you should make pigz available. dracut should fail to compile unless one explicitly opts into the known-slower gzip. For more thoughts on optional dependencies, see “Optional dependencies don’t work”.

But why does it take 8.7 seconds still? Can we go faster?

The answer is Yes! I recently built a distri-specific initramfs I’m calling minitrd. I wrote both big parts from scratch:

  1. the initramfs generator program (distri initrd)
  2. a custom Go userland (cmd/minitrd), running as /init in the initramfs.

minitrd generates the initramfs image in ≈400ms, bringing the total time down to:

distri % time make cryptimage serial=1
50.09s user 8.80s system 314% cpu 18.739 total # 18s image, 400ms initrd

(The remaining time is spent in preparing the file system, then installing and configuring the distri system, i.e. preparing a disk image you can run on real hardware.)

How can minitrd be 20 times faster than dracut?

dracut is mainly written in shell, with a C helper program. It drives the generation process by spawning lots of external dependencies (e.g. ldd or the dracut-install helper program). I assume that the combination of using an interpreted language (shell) that spawns lots of processes and precludes a concurrent architecture is to blame for the poor performance.

minitrd is written in Go, with speed as a goal. It leverages concurrency and uses no external dependencies; everything happens within a single process (but with enough threads to saturate modern hardware).

Measuring early boot time using qemu, I measured the dracut-generated initramfs taking 588ms to display the full disk encryption passphrase prompt, whereas minitrd took only 195ms.

The rest of this article dives deeper into how minitrd works.

What does an initramfs do?

Ultimately, the job of an initramfs is to make the root file system available and continue booting the system from there. Depending on the system setup, this involves the following 5 steps:

1. Load kernel modules to access the block devices with the root file system

Depending on the system, the block devices with the root file system might already be present when the initramfs runs, or some kernel modules might need to be loaded first. On my Dell XPS 9360 laptop, the NVMe system disk is already present when the initramfs starts, whereas in qemu, we need to load the virtio_pci module, followed by the virtio_scsi module.

How will our userland program know which kernel modules to load? Linux kernel modules declare patterns for their supported hardware as an alias, e.g.:

initrd# grep virtio_pci lib/modules/5.4.6/modules.alias
alias pci:v00001AF4d*sv*sd*bc*sc*i* virtio_pci

Devices in sysfs have a modalias file whose content can be matched against these declarations to identify the module to load:

initrd# cat /sys/devices/pci0000:00/*/modalias
pci:v00001AF4d00001005sv00001AF4sd00000004bc00scFFi00
pci:v00001AF4d00001004sv00001AF4sd00000008bc01sc00i00
[…]

Hence, for the initial round of module loading, it is sufficient to locate all modalias files within sysfs and load the responsible modules.
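
To illustrate the matching step, here is a simplified Go sketch that scans the textual modules.alias file for a given device modalias; the helper name is mine, and it assumes path.Match’s globbing is close enough to the fnmatch-style patterns used in modules.alias:

package main

import (
	"bufio"
	"fmt"
	"os"
	"path"
	"strings"
)

// modulesForAlias returns the module names from modules.alias whose glob
// pattern matches the given device modalias (e.g. the contents of a
// /sys/devices/.../modalias file).
func modulesForAlias(modulesAlias, modalias string) ([]string, error) {
	f, err := os.Open(modulesAlias)
	if err != nil {
		return nil, err
	}
	defer f.Close()

	var modules []string
	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		// Each line looks like: alias <pattern> <module>
		fields := strings.Fields(scanner.Text())
		if len(fields) != 3 || fields[0] != "alias" {
			continue
		}
		if ok, _ := path.Match(fields[1], modalias); ok {
			modules = append(modules, fields[2])
		}
	}
	return modules, scanner.Err()
}

func main() {
	// Example values taken from the shell output above.
	mods, err := modulesForAlias("/lib/modules/5.4.6/modules.alias",
		"pci:v00001AF4d00001005sv00001AF4sd00000004bc00scFFi00")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(mods)
}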

Loading a kernel module can result in new devices appearing. When that happens, the kernel sends a uevent, which the uevent consumer in userspace receives via a netlink socket. Typically, this consumer is udev(7) , but in our case, it’s minitrd.

For each uevent message that comes with a MODALIAS variable, minitrd will load the relevant kernel module(s).
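
A rough sketch of such a uevent consumer in Go, using the golang.org/x/sys/unix package (simplified; a real implementation would presumably also deduplicate aliases and handle oversized messages):

package main

import (
	"fmt"
	"strings"

	"golang.org/x/sys/unix"
)

// listenUevents receives kernel uevents from a netlink socket and calls
// loadByModalias for every MODALIAS= variable it encounters.
func listenUevents(loadByModalias func(alias string) error) error {
	fd, err := unix.Socket(unix.AF_NETLINK, unix.SOCK_DGRAM, unix.NETLINK_KOBJECT_UEVENT)
	if err != nil {
		return err
	}
	defer unix.Close(fd)
	// Group 1 is the kernel's uevent multicast group.
	if err := unix.Bind(fd, &unix.SockaddrNetlink{Family: unix.AF_NETLINK, Groups: 1}); err != nil {
		return err
	}
	buf := make([]byte, 16*1024)
	for {
		n, _, err := unix.Recvfrom(fd, buf, 0)
		if err != nil {
			return err
		}
		// A uevent message is a NUL-separated list of KEY=VALUE strings.
		for _, kv := range strings.Split(string(buf[:n]), "\x00") {
			if strings.HasPrefix(kv, "MODALIAS=") {
				alias := strings.TrimPrefix(kv, "MODALIAS=")
				if err := loadByModalias(alias); err != nil {
					fmt.Println("loading module for", alias, "failed:", err)
				}
			}
		}
	}
}

func main() {
	// For demonstration, just print the aliases instead of loading modules.
	if err := listenUevents(func(alias string) error {
		fmt.Println("would load module for", alias)
		return nil
	}); err != nil {
		fmt.Println(err)
	}
}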

When loading a kernel module, its dependencies need to be loaded first. Dependency information is stored in the modules.dep file in a Makefile-like syntax:

initrd# grep virtio_pci lib/modules/5.4.6/modules.dep
kernel/drivers/virtio/virtio_pci.ko: kernel/drivers/virtio/virtio_ring.ko kernel/drivers/virtio/virtio.ko

To load a module, we can open its file and then call the Linux-specific finit_module(2) system call. Some modules are expected to return an error code, e.g. ENODEV or ENOENT when some hardware device is not actually present.
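
With the golang.org/x/sys/unix package, loading a single module file might look roughly like this sketch; which error codes to tolerate is my assumption, based on the ENODEV/ENOENT behaviour mentioned above:

package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// loadModule loads one .ko file via the finit_module(2) system call.
// Any modules listed as dependencies in modules.dep must be loaded first.
func loadModule(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close()
	switch err := unix.FinitModule(int(f.Fd()), "", 0); err {
	case nil:
		return nil
	case unix.EEXIST:
		return nil // module already loaded
	case unix.ENODEV, unix.ENOENT:
		return nil // some modules report absent hardware this way
	default:
		return fmt.Errorf("finit_module(%s): %v", path, err)
	}
}

func main() {
	if err := loadModule("/lib/modules/5.4.6/kernel/drivers/virtio/virtio_pci.ko"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}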

Side note: next to the textual versions, there are also binary versions of the modules.alias and modules.dep files. Presumably, those can be queried more quickly, but for simplicity, I have not (yet?) implemented support in minitrd.

2. Console settings: font, keyboard layout

Setting a legible font is necessary for hi-dpi displays. On my Dell XPS 9360 (3200 x 1800 QHD+ display), the following works well:

initrd# setfont latarcyrheb-sun32

Setting the user’s keyboard layout is necessary for entering the LUKS full-disk encryption passphrase in their preferred keyboard layout. I use the NEO layout:

initrd# loadkeys neo

3. Block device identification

In the Linux kernel, block device enumeration order is not necessarily the same on each boot. Even if it was deterministic, device order could still be changed when users modify their computer’s device topology (e.g. connect a new disk to a formerly unused port).

Hence, it is good style to refer to disks and their partitions with stable identifiers. This also applies to boot loader configuration, and so most distributions will set a kernel parameter such as root=UUID=1fa04de7-30a9-4183-93e9-1b0061567121.

Identifying the block device or partition with the specified UUID is the initramfs’s job.

Depending on what the device contains, the UUID comes from a different place. For example, ext4 file systems have a UUID field in their file system superblock, whereas LUKS volumes have a UUID in their LUKS header.

Canonically, probing a device to extract the UUID is done by libblkid from the util-linux package, but the logic can easily be re-implemented in other languages and changes rarely. minitrd comes with its own implementation to avoid cgo or running the blkid(8) program.
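
As an illustration of how little is needed for a single file system type, here is a sketch that reads the volume UUID of an ext2/3/4 file system straight from its superblock (offsets per the ext4 on-disk format; this is not minitrd’s actual prober, and it handles only ext*):

package main

import (
	"encoding/binary"
	"fmt"
	"os"
)

// ext4UUID reads the 16-byte volume UUID from an ext2/3/4 superblock.
// The superblock starts at byte offset 1024; the magic number 0xEF53
// sits at offset 56 within it, the UUID at offset 104.
func ext4UUID(device string) (string, error) {
	f, err := os.Open(device)
	if err != nil {
		return "", err
	}
	defer f.Close()

	sb := make([]byte, 1024+128)
	if _, err := f.ReadAt(sb, 0); err != nil {
		return "", err
	}
	if binary.LittleEndian.Uint16(sb[1024+56:]) != 0xEF53 {
		return "", fmt.Errorf("%s: not an ext2/3/4 file system", device)
	}
	u := sb[1024+104 : 1024+104+16]
	return fmt.Sprintf("%x-%x-%x-%x-%x", u[0:4], u[4:6], u[6:8], u[8:10], u[10:16]), nil
}

func main() {
	uuid, err := ext4UUID("/dev/sda2") // device path is just an example
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(uuid)
}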

4. LUKS full-disk encryption unlocking (only on encrypted systems)

Unlocking a LUKS-encrypted volume is done in userspace. The kernel handles the crypto, but reading the metadata, obtaining the passphrase (or e.g. key material from a file) and setting up the device mapper table entries are done in user space.

initrd# modprobe algif_skcipher
initrd# cryptsetup luksOpen /dev/sda4 cryptroot1

After the user entered their passphrase, the root file system can be mounted:

initrd# mount /dev/dm-0 /mnt

5. Continuing the boot process (switch_root)

Now that everything is set up, we need to pass execution to the init program on the root file system with a careful sequence of chdir(2) , mount(2) , chroot(2) , chdir(2) and execve(2) system calls that is explained in this busybox switch_root comment.

initrd# mount -t devtmpfs dev /mnt/dev
initrd# exec switch_root -c /dev/console /mnt /init
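For reference, the switch_root sequence itself boils down to a handful of system calls. Here is a hedged Go sketch of that sequence (it leaves out deleting the old initramfs files):

package main

import (
    "log"
    "os"

    "golang.org/x/sys/unix"
)

// switchRoot makes newRoot the new / and hands control to init, following the
// chdir/mount/chroot/chdir/execve sequence described above.
func switchRoot(newRoot, init string) error {
    if err := unix.Chdir(newRoot); err != nil {
        return err
    }
    // Move the mount at newRoot on top of the current root.
    if err := unix.Mount(".", "/", "", unix.MS_MOVE, ""); err != nil {
        return err
    }
    if err := unix.Chroot("."); err != nil {
        return err
    }
    if err := unix.Chdir("/"); err != nil {
        return err
    }
    // execve(2) replaces the current process, so this never returns on success.
    return unix.Exec(init, []string{init}, os.Environ())
}

func main() {
    if err := switchRoot("/mnt", "/init"); err != nil {
        log.Fatal(err)
    }
}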

To conserve RAM, the files in the temporary file system to which the initramfs archive is extracted are typically deleted.

How is an initramfs generated?

An initramfs “image” (more accurately: archive) is a compressed cpio archive. Typically, gzip compression is used, but the kernel supports a bunch of different algorithms, and distributions such as Ubuntu are switching to lz4.

Generators typically prepare a temporary directory and feed it to the cpio(1) program. In minitrd, we read the files into memory and generate the cpio archive using the go-cpio package. We use the pgzip package for parallel gzip compression.
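A hedged sketch of that generation step, assuming the commonly used import paths github.com/cavaliercoder/go-cpio and github.com/klauspost/pgzip (minitrd’s actual generator of course adds many more files and proper metadata):

package main

import (
    "log"
    "os"

    cpio "github.com/cavaliercoder/go-cpio"
    pgzip "github.com/klauspost/pgzip"
)

func main() {
    out, err := os.Create("/tmp/initrd")
    if err != nil {
        log.Fatal(err)
    }
    defer out.Close()

    gz := pgzip.NewWriter(out) // parallel gzip compression
    w := cpio.NewWriter(gz)    // cpio archive, written into the gzip stream

    // In reality, the contents come from the kernel module tree, the
    // minitrd binary, cryptsetup and friends; this is a placeholder.
    files := map[string][]byte{
        "init": []byte("#!/bin/sh\necho hello from the initramfs\n"),
    }
    for name, body := range files {
        hdr := &cpio.Header{Name: name, Mode: 0755, Size: int64(len(body))}
        if err := w.WriteHeader(hdr); err != nil {
            log.Fatal(err)
        }
        if _, err := w.Write(body); err != nil {
            log.Fatal(err)
        }
    }
    if err := w.Close(); err != nil {
        log.Fatal(err)
    }
    if err := gz.Close(); err != nil {
        log.Fatal(err)
    }
}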

The following files need to go into the cpio archive:

minitrd Go userland

The minitrd binary is copied into the cpio archive as /init and will be run by the kernel after extracting the archive.

Like the rest of distri, minitrd is built statically without cgo, which means it can be copied as-is into the cpio archive.

Linux kernel modules

Aside from the modules.alias and modules.dep metadata files, the kernel modules themselves reside in e.g. /lib/modules/5.4.6/kernel and need to be copied into the cpio archive.

Copying all modules results in a ≈80 MiB archive, so it is common to only copy modules that are relevant to the initramfs’s features. This reduces archive size to ≈24 MiB.

The filtering relies on hard-coded patterns and module names. For example, disk encryption related modules are all kernel modules underneath kernel/crypto, plus kernel/drivers/md/dm-crypt.ko.
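A sketch of such filtering follows; the patterns are illustrative rather than minitrd’s exact list:

package main

import (
    "fmt"
    "strings"
)

// wantModule reports whether a module (path relative to /lib/modules/<version>)
// should be included in the initramfs. The patterns below are examples only.
func wantModule(path string) bool {
    prefixes := []string{
        "kernel/crypto/",
        "kernel/fs/ext4/",
        "kernel/drivers/virtio/",
    }
    extras := map[string]bool{
        "kernel/drivers/md/dm-crypt.ko": true,
    }
    for _, p := range prefixes {
        if strings.HasPrefix(path, p) {
            return true
        }
    }
    return extras[path]
}

func main() {
    fmt.Println(wantModule("kernel/drivers/md/dm-crypt.ko")) // true
    fmt.Println(wantModule("kernel/sound/ac97_bus.ko"))      // false
}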

When generating a host-only initramfs (works on precisely the computer that generated it), some initramfs generators look at the currently loaded modules and just copy those.

Console Fonts and Keymaps

The kbd package’s setfont(8) and loadkeys(1) programs load console fonts and keymaps from /usr/share/consolefonts and /usr/share/keymaps, respectively.

Hence, these directories need to be copied into the cpio archive. Depending on whether the initramfs should be generic (work on many computers) or host-only (works on precisely the computer/settings that generated it), the entire directories are copied, or only the required font/keymap.

cryptsetup, setfont, loadkeys

These programs are (currently) required because minitrd does not implement their functionality.

As they are dynamically linked, not only the programs themselves need to be copied, but also the ELF dynamic linking loader (path stored in the .interp ELF section) and any ELF library dependencies.

For example, cryptsetup in distri declares the ELF interpreter /ro/glibc-amd64-2.27-3/out/lib/ld-linux-x86-64.so.2 and declares dependencies on shared libraries libcryptsetup.so.12, libblkid.so.1 and others. Luckily, in distri, packages contain a lib subdirectory containing symbolic links to the resolved shared library paths (hermetic packaging), so it is sufficient to mirror the lib directory into the cpio archive, recursing into shared library dependencies of shared libraries.
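The standard library’s debug/elf package is enough to discover both pieces of information. A minimal sketch (the cryptsetup path in main is an assumption, not taken from distri):

package main

import (
    "debug/elf"
    "fmt"
    "log"
    "strings"
)

// elfDeps returns the ELF interpreter (.interp section) and the declared
// shared library dependencies (DT_NEEDED entries) of a dynamically linked binary.
func elfDeps(path string) (interp string, libs []string, err error) {
    f, err := elf.Open(path)
    if err != nil {
        return "", nil, err
    }
    defer f.Close()
    if s := f.Section(".interp"); s != nil {
        b, err := s.Data()
        if err != nil {
            return "", nil, err
        }
        interp = strings.TrimRight(string(b), "\x00")
    }
    libs, err = f.ImportedLibraries()
    return interp, libs, err
}

func main() {
    interp, libs, err := elfDeps("/ro/cryptsetup-amd64-2.0.4-6/out/sbin/cryptsetup")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("interpreter:", interp)
    fmt.Println("needs:", libs)
}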

cryptsetup also requires the GCC runtime library libgcc_s.so.1 to be present at runtime, and will abort with an error message about not being able to call pthread_cancel(3) if it is unavailable.

time zone data

To print log messages in the correct time zone, we copy /etc/localtime from the host into the cpio archive.

minitrd outside of distri?

I currently have no desire to make minitrd available outside of distri. While the technical challenges (such as extending the generator to not rely on distri’s hermetic packages) are surmountable, I don’t want to support people’s initramfs remotely.

Also, I think that people’s efforts should in general be spent on rallying behind dracut and making it work faster, thereby benefiting all Linux distributions that use dracut (an increasing number of them). With minitrd, I have demonstrated that significant speed-ups are achievable.

Conclusion

It was interesting to dive into how an initramfs really works. I had been working with the concept for many years, from small tasks such as “debug why the encrypted root file system is not unlocked” to more complicated tasks such as “set up a root file system on DRBD for a high-availability setup”. But even with that sort of experience, I didn’t know all the details, until I was forced to implement every little thing.

As I suspected going into this exercise, dracut is much slower than it needs to be. Re-implementing its generation stage in a modern language instead of shell helps a lot.

Of course, my minitrd does a bit less than dracut, but not drastically so. The overall architecture is the same.

I hope my effort helps with two things:

  1. As a teaching implementation: instead of wading through the various components that make up a modern initramfs (udev, systemd, various shell scripts, …), people can learn about how an initramfs works in a single place.

  2. I hope the significant time difference motivates people to improve dracut.

Appendix: qemu development environment

Before writing any Go code, I did some manual prototyping. Learning how other people prototype is often immensely useful to me, so I’m sharing my notes here.

First, I copied all kernel modules and a statically built busybox binary:

% mkdir -p lib/modules/5.4.6
% cp -Lr /ro/lib/modules/5.4.6/* lib/modules/5.4.6/
% cp ~/busybox-1.22.0-amd64/busybox sh

To generate an initramfs from the current directory, I used:

% find . | cpio -o -H newc | pigz > /tmp/initrd

In distri’s Makefile, I append these flags to the QEMU invocation:

-kernel /tmp/kernel \
-initrd /tmp/initrd \
-append "root=/dev/mapper/cryptroot1 rdinit=/sh ro console=ttyS0,115200 rd.luks=1 rd.luks.uuid=63051f8a-54b9-4996-b94f-3cf105af2900 rd.luks.name=63051f8a-54b9-4996-b94f-3cf105af2900=cryptroot1 rd.vconsole.keymap=neo rd.vconsole.font=latarcyrheb-sun32 init=/init systemd.setenv=PATH=/bin rw vga=836"

The vga= mode parameter is required for loading font latarcyrheb-sun32.

Once in the busybox shell, I manually prepared the required mount points and kernel modules:

ln -s sh mount
ln -s sh lsmod
mkdir /proc /sys /run /mnt
mount -t proc proc /proc
mount -t sysfs sys /sys
mount -t devtmpfs dev /dev
modprobe virtio_pci
modprobe virtio_scsi

As a next step, I copied cryptsetup and dependencies into the initramfs directory:

% for f in /ro/cryptsetup-amd64-2.0.4-6/lib/*; do full=$(readlink -f $f); rel=$(echo $full | sed 's,^/,,g'); mkdir -p $(dirname $rel); install $full $rel; done
% ln -s ld-2.27.so ro/glibc-amd64-2.27-3/out/lib/ld-linux-x86-64.so.2
% cp /ro/glibc-amd64-2.27-3/out/lib/ld-2.27.so ro/glibc-amd64-2.27-3/out/lib/ld-2.27.so
% cp -r /ro/cryptsetup-amd64-2.0.4-6/lib ro/cryptsetup-amd64-2.0.4-6/
% mkdir -p ro/gcc-libs-amd64-8.2.0-3/out/lib64/
% cp /ro/gcc-libs-amd64-8.2.0-3/out/lib64/libgcc_s.so.1 ro/gcc-libs-amd64-8.2.0-3/out/lib64/libgcc_s.so.1
% ln -s /ro/gcc-libs-amd64-8.2.0-3/out/lib64/libgcc_s.so.1 ro/cryptsetup-amd64-2.0.4-6/lib
% cp -r /ro/lvm2-amd64-2.03.00-6/lib ro/lvm2-amd64-2.03.00-6/

In busybox, I used the following commands to unlock the root file system:

modprobe algif_skcipher
./cryptsetup luksOpen /dev/sda4 cryptroot1
mount /dev/dm-0 /mnt

Planet Debian: Michael Stapelberg: distri: a Linux distribution to research fast package management

Over the last year or so I have worked on a research Linux distribution in my spare time. It’s not a distribution for researchers (like Scientific Linux), but my personal playground project to research Linux distribution development, i.e. to try out fresh ideas.

This article focuses on the package format and its advantages, but there is more to distri, which I will cover in upcoming blog posts.

Motivation

I was a Debian Developer for the 7 years from 2012 to 2019, but using the distribution often left me frustrated, ultimately resulting in me winding down my Debian work.

I frequently noticed a large gap between the actual speed of an operation (e.g. doing an update) and the speed that back-of-the-envelope calculations suggested was possible. I wrote more about this in my blog post “Package managers are slow”.

To me, this observation means that either there is potential to optimize the package manager itself (e.g. apt), or what the system does is just too complex. While I remember seeing some low-hanging fruit¹, with my work on distri I wanted to explore whether all the complexity we currently have in Linux distributions such as Debian or Fedora is inherent to the problem space.

I have completed enough of the experiment to conclude that the complexity is not inherent: I can build a Linux distribution for general-enough purposes which is much less complex than existing ones.

① Those were low-hanging fruit from a user perspective. I’m not saying that fixing them is easy in the technical sense; I know too little about apt’s code base to make such a statement.

Key idea: packages are images, not archives

One key idea is to switch from using archives to using images for package contents. Common package managers such as dpkg(1) use tar(1) archives with various compression algorithms.

distri uses SquashFS images, a comparatively simple file system image format that I happen to be familiar with from my work on the gokrazy Raspberry Pi 3 Go platform.

This idea is not novel: AppImage and snappy also use images, but only for individual, self-contained applications. distri, however, uses images for distribution packages with dependencies. In particular, there is no duplication of shared libraries in distri.

A nice side effect of using read-only image files is that applications are immutable and can hence not be broken by accidental (or malicious!) modification.

Key idea: separate hierarchies

Package contents are made available under a fully-qualified path. E.g., all files provided by package zsh-amd64-5.6.2-3 are available under /ro/zsh-amd64-5.6.2-3. The mountpoint /ro stands for read-only, which is short yet descriptive.

Perhaps surprisingly, building software with custom prefix values of e.g. /ro/zsh-amd64-5.6.2-3 is widely supported, thanks to:

  1. Linux distributions, which build software with prefix set to /usr, whereas FreeBSD (and the autotools default) builds with prefix set to /usr/local.

  2. Enthusiast users in corporate or research environments, who install software into their home directories.

Because using a custom prefix is a common scenario, upstream awareness for prefix-correctness is generally high, and the rarely required patch will be quickly accepted.

Key idea: exchange directories

Software packages often exchange data by placing or locating files in well-known directories. Here are just a few examples:

  • gcc(1) locates the libusb(3) headers via /usr/include
  • man(1) locates the nginx(1) manpage via /usr/share/man
  • zsh(1) locates executable programs via PATH components such as /bin

In distri, these locations are called exchange directories and are provided via FUSE in /ro.

Exchange directories come in two different flavors:

  1. global. The exchange directory, e.g. /ro/share, provides the union of the share subdirectories of all packages in the package store.
    Global exchange directories are largely used for compatibility, see below.

  2. per-package. Useful for tight coupling: e.g. irssi(1) does not provide any ABI guarantees, so plugins such as irssi-robustirc can declare that they want e.g. /ro/irssi-amd64-1.1.1-1/out/lib/irssi/modules to be a per-package exchange directory and contain files from their lib/irssi/modules.

Search paths sometimes need to be fixed

Programs which use exchange directories sometimes use search paths to access multiple exchange directories. In fact, the examples above were taken from gcc(1)’s INCLUDEPATH, man(1)’s MANPATH and zsh(1)’s PATH. These are prominent ones, but more examples are easy to find: zsh(1) loads completion functions from its FPATH.

Some search path values are derived from --datadir=/ro/share and require no further attention, but others might derive from e.g. --prefix=/ro/zsh-amd64-5.6.2-3/out and need to be pointed to an exchange directory via a specific command line flag.

FHS compatibility

Global exchange directories are used to make distri provide enough of the Filesystem Hierarchy Standard (FHS) that third-party software largely just works. This includes a C development environment.

I successfully ran a few programs, such as Google Chrome, Spotify and Microsoft’s Visual Studio Code, from their binary packages.

Fast package manager

I previously wrote about how Linux distribution package managers are too slow.

distri’s package manager is extremely fast. Its main bottleneck is typically the network link, even on high-speed links (I tested with a 100 Gbps link).

Its speed comes largely from an architecture which allows the package manager to do less work. Specifically:

  1. Package images can be added atomically to the package store, so we can safely skip fsync(2). Corruption will be cleaned up automatically, and durability is not important: if an interactive installation is interrupted, the user can just repeat it, as it will be fresh in their mind (see the sketch after this list).

  2. Because all packages are co-installable thanks to separate hierarchies, there are no conflicts at the package store level, and no dependency resolution (an optimization problem requiring SAT solving) is required at all.
    In exchange directories, we resolve conflicts by selecting the package with the highest monotonically increasing distri revision number.

  3. distri proves that we can build a useful Linux distribution entirely without hooks and triggers. Not having to serialize hook execution allows us to download packages into the package store with maximum concurrency.

  4. Because we are using images instead of archives, we do not need to unpack anything. This means installing a package is really just writing its package image and metadata to the package store. Sequential writes are typically the fastest kind of storage usage pattern.
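To make point 1 concrete: atomic addition to the package store can be as simple as downloading to a temporary name in the same directory and then renaming it into place, since rename(2) within one file system is atomic. The following Go sketch is illustrative only (names and URL are made up), not distri’s actual code:

package main

import (
    "io"
    "log"
    "net/http"
    "os"
    "path/filepath"
)

// install downloads a package image into the package store atomically:
// readers either see the complete image under its final name, or nothing.
func install(storeDir, name, url string) error {
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    tmp, err := os.CreateTemp(storeDir, "tmp-"+name)
    if err != nil {
        return err
    }
    defer os.Remove(tmp.Name()) // best-effort cleanup; harmless once the rename has succeeded
    if _, err := io.Copy(tmp, resp.Body); err != nil {
        tmp.Close()
        return err
    }
    if err := tmp.Close(); err != nil {
        return err
    }
    // No fsync(2): as point 1 above explains, durability is not needed here.
    return os.Rename(tmp.Name(), filepath.Join(storeDir, name))
}

func main() {
    err := install("/roimg", "zsh-amd64-5.6.2-3.squashfs", "https://example.org/zsh-amd64-5.6.2-3.squashfs")
    if err != nil {
        log.Fatal(err)
    }
}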

Fast installation also makes other use cases more bearable, such as creating disk images, be it for testing them in qemu(1), booting them on real hardware from a USB drive, or for cloud providers such as Google Cloud.

Fast package builder

Contrary to how distribution package builders are usually implemented, the distri package builder does not actually install any packages into the build environment.

Instead, distri makes available a filtered view of the package store (only declared dependencies are available) at /ro in the build environment.

This means that even for large dependency trees, setting up a build environment happens in a fraction of a second! Such a low latency really makes a difference in how comfortable it is to iterate on distribution packages.

Package stores

In distri, package images are installed from a remote package store into the local system package store /roimg, which backs the /ro mount.

A package store is implemented as a directory of package images and their associated metadata files.

You can easily make available a package store by using distri export.

To provide a mirror for your local network, you can periodically distri update from the package store you want to mirror, and then distri export your local copy. Special tooling (e.g. debmirror in Debian) is not required because distri install is atomic (and update uses install).

Producing derivatives is easy: just add your own packages to a copy of the package store.

The package store is intentionally kept simple to manage and distribute. Its files could be exchanged via peer-to-peer file systems, or synchronized from an offline medium.

distri’s first release

distri works well enough to demonstrate the ideas explained above. I have branched this state into branch jackherer, distri’s first release code name. This way, I can keep experimenting in the distri repository without breaking your installation.

From the branch contents, our autobuilder creates:

  1. disk images, which…

  2. a package repository. Installations can pick up new packages with distri update.

  3. documentation for the release.

The project website can be found at https://distr1.org. The website is just the README for now, but we can improve that later.

The repository can be found at https://github.com/distr1/distri

Project outlook

Right now, distri is mainly a vehicle for my spare-time Linux distribution research. I don’t recommend anyone use distri for anything but research, and there are no medium-term plans of that changing. At the very least, please contact me before basing anything serious on distri so that we can talk about limitations and expectations.

I expect the distri project to live for as long as I have blog posts to publish, and we’ll see what happens afterwards. Note that this is a hobby for me: I will continue to explore, at my own pace, parts that I find interesting.

My hope is that established distributions might get a useful idea or two from distri.

There’s more to come: subscribe to the distri feed

I don’t want to make this post too long, but there is much more!

Please subscribe to the following URL in your feed reader to get all posts about distri:

https://michael.stapelberg.ch/posts/tags/distri/feed.xml

Next in my queue are articles about hermetic packages and good package maintainer experience (including declarative packaging).

Feedback or questions?

I’d love to discuss these ideas in case you’re interested!

Please send feedback to the distri mailing list so that everyone can participate!

Planet Debian: Michael Stapelberg: Linux distributions: Can we do without hooks and triggers?

Hooks are an extension feature provided by all package managers that are used in larger Linux distributions. For example, Debian uses apt, which has various maintainer scripts. Fedora uses rpm, which has scriptlets. Different package managers use different names for the concept, but all of them offer package maintainers the ability to run arbitrary code during package installation and upgrades. Example hook use cases include adding daemon user accounts to your system (e.g. postgres), or generating/updating cache files.

Triggers are a kind of hook which run when other packages are installed. For example, on Debian, the man(1) package comes with a trigger which regenerates the search database index whenever any package installs a manpage. When, for example, the nginx(8) package is installed, a trigger provided by the man(1) package runs.

Over the past few decades, Open Source software has become more and more uniform: instead of each piece of software defining its own rules, a small number of build systems are now widely adopted.

Hence, I think it makes sense to revisit whether offering extension via hooks and triggers is a net win or net loss.

Hooks preclude concurrent package installation

Package managers can commonly make very few assumptions about what hooks do, what preconditions they require, and which conflicts might be caused by running multiple packages’ hooks concurrently.

Hence, package managers cannot concurrently install packages. At least the hook/trigger part of the installation needs to happen in sequence.

While it seems technically feasible to retrofit package manager hooks with concurrency primitives such as locks for mutual exclusion between different hook processes, the required overhaul of all hooks¹ seems like such a daunting task that it might be better to just get rid of the hooks instead. Only deleting code frees you from the burden of maintenance, automated testing and debugging.

① In Debian, there are 8620 non-generated maintainer scripts, as reported by find shard*/src/*/debian -regex ".*\(pre\|post\)\(inst\|rm\)$" on a Debian Code Search instance.

Triggers slow down installing/updating other packages

Personally, I never use the apropos(1) command, so I don’t appreciate the man(1) package’s trigger which updates the database used by apropos(1). The process takes a long time and, because hooks and triggers must be executed serially (see previous section), blocks my installation or update.

When I tell people this, they are often surprised to learn about the existence of the apropos(1) command. I suggest adopting an opt-in model.

Unnecessary work if programs are not used between updates

Hooks run when packages are installed. If a package’s contents are not used between two updates, running the hook in the first update could have been skipped. Running the hook lazily when the package contents are used reduces unnecessary work.

As a welcome side-effect, lazy hook evaluation automatically makes the hook work in operating system images, such as live USB thumb drives or SD card images for the Raspberry Pi. Such images must not ship the same crypto keys (e.g. OpenSSH host keys) to all machines, but instead generate a different key on each machine.

Why do users keep packages installed that they don’t use? It’s extra work to remember and clean up those packages after use. Plus, users might not realize or value that having fewer packages installed has benefits such as faster updates.

I can also imagine that there are people for whom the cost of re-installing packages incentivizes them to just keep packages installed—you never know when you might need the program again…

Implemented in an interpreted language

While working on hermetic packages (more on that in another blog post), where the contained programs are started with modified environment variables (e.g. PATH) via a wrapper bash script, I noticed that the overhead of those wrapper bash scripts quickly becomes significant. For example, when using the excellent magit interface for Git in Emacs, I encountered second-long delays² when using hermetic packages compared to standard packages. Re-implementing wrappers in a compiled language provided a significant speed-up.

Similarly, getting rid of an extension point which mandates using shell scripts allows us to build an efficient and fast implementation of a predefined set of primitives, where you can reason about their effects and interactions.

② magit needs to run git a few times for displaying the full status, so small overhead quickly adds up.

Incentivizing more upstream standardization

Hooks are an escape hatch for distribution maintainers to express anything which their packaging system cannot express.

Distributions should only rely on well-established interfaces such as autoconf’s classic ./configure && make && make install (including commonly used flags) to build a distribution package. Integrating upstream software into a distribution should not require custom hooks. For example, instead of requiring a hook which updates a cache of schema files, the library used to interact with those files should transparently (re-)generate the cache or fall back to a slower code path.

Distribution maintainers are hard to come by, so we should value their time. In particular, there is a 1:n relationship of packages to distribution package maintainers (software is typically available in multiple Linux distributions), so it makes sense to spend the work in the 1 and have the n benefit.

Can we do without them?

If we want to get rid of hooks, we need another mechanism to achieve what we currently achieve with hooks.

If the hook is not specific to the package, it can be moved to the package manager. The desired system state should either be derived from the package contents (e.g. required system users can be discovered from systemd service files) or declaratively specified in the package build instructions—more on that in another blog post. This turns hooks (arbitrary code) into configuration, which allows the package manager to collapse and sequence the required state changes. E.g., when 5 packages are installed which each need a new system user, the package manager could update /etc/passwd just once.
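As a hypothetical illustration of this idea (not the code of any existing package manager), packages could declare the system users they need, and the package manager could then collapse all declarations into a single update:

package main

import (
    "fmt"
    "sort"
)

// Package is a stand-in for parsed package metadata.
type Package struct {
    Name        string
    SystemUsers []string // declaratively specified in the package build instructions
}

// requiredUsers collects the union of all declared system users.
func requiredUsers(pkgs []Package) []string {
    seen := map[string]bool{}
    for _, p := range pkgs {
        for _, u := range p.SystemUsers {
            seen[u] = true
        }
    }
    users := make([]string, 0, len(seen))
    for u := range seen {
        users = append(users, u)
    }
    sort.Strings(users)
    return users
}

func main() {
    pkgs := []Package{
        {Name: "postgresql", SystemUsers: []string{"postgres"}},
        {Name: "openssh", SystemUsers: []string{"sshd"}},
    }
    // A real package manager would now update /etc/passwd once for all of
    // these, instead of running one hook per package.
    fmt.Println("users to create:", requiredUsers(pkgs))
}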

If the hook is specific to the package, it should be moved into the package contents. This typically means moving the functionality into the program start (or the systemd service file if we are talking about a daemon). If (while?) upstream is not convinced, you can either wrap the program or patch it. Note that this case is relatively rare: I have worked with hundreds of packages and the only package-specific functionality I came across was automatically generating host keys before starting OpenSSH’s sshd(8)³.

There is one exception where moving the hook doesn’t work: packages which modify state outside of the system, such as bootloaders or kernel images.

③ Even that can be moved out of a package-specific hook, as Fedora demonstrates.

Conclusion

Global state modifications performed as part of package installation today use hooks, an overly expressive extension mechanism.

Instead, all modifications should be driven by configuration. This is feasible because there are only a few different kinds of desired state modifications. This makes it possible for package managers to optimize package installation.

Planet Debian: Michael Stapelberg: Optional dependencies don’t work

In the i3 projects, we have always tried hard to avoid optional dependencies. There are a number of reasons behind it, and as I have recently encountered some of the downsides of optional dependencies firsthand, I summarized my thoughts in this article.

What is a (compile-time) optional dependency?

When building software from source, most programming languages and build systems support conditional compilation: different parts of the source code are compiled based on certain conditions.

An optional dependency is conditional compilation hooked up directly to a knob (e.g. command line flag, configuration file, …), with the effect that the software can now be built without an otherwise required dependency.
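As an analogy only (the projects discussed below, i3, cairo and binutils, are C projects), the same kind of knob can be expressed in Go with build tags: two files provide the two code paths, and the -tags flag is the knob.

// file: print_pdf.go, compiled only with `go build -tags pdfexport`

//go:build pdfexport

package app

// PrintPDF is available because the optional dependency was enabled.
func PrintPDF(path string) error {
    // ... the call into the PDF library would go here ...
    return nil
}

// file: print_nopdf.go, the fallback when the knob is off

//go:build !pdfexport

package app

import "errors"

// PrintPDF tells the user at runtime that this build lacks the feature,
// as the best-practices list at the end of this post recommends.
func PrintPDF(path string) error {
    return errors.New("this build was compiled without PDF support")
}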

Let’s walk through a few issues with optional dependencies.

Inconsistent experience in different environments

Software is usually not built by end users, but by packagers, at least when we are talking about Open Source.

Hence, end users don’t see the knob for the optional dependency, they are just presented with the fait accompli: their version of the software behaves differently than other versions of the same software.

Depending on the kind of software, this situation can be made obvious to the user: for example, if the optional dependency is needed to print documents, the program can produce an appropriate error message when the user tries to print a document.

Sometimes, this isn’t possible: when i3 introduced an optional dependency on cairo and pangocairo, the behavior itself (rendering window titles) worked in all configurations, but non-ASCII characters might break depending on whether i3 was compiled with cairo.

For users, it is frustrating to only discover in conversation that a program has a feature that the user is interested in, but it’s not available on their computer. For support, this situation can be hard to detect, and even harder to resolve to the user’s satisfaction.

Packaging is more complicated

Unfortunately, many build systems don’t stop the build when optional dependencies are not present. Instead, you sometimes end up with a broken build, or, even worse: with a successful build that does not work correctly at runtime.

This means that packagers need to closely examine the build output to know which dependencies to make available. In the best case, there is a summary of available and enabled options, clearly outlining what this build will contain. In the worst case, you need to infer the features from the checks that are done, or work your way through the --help output.

The better alternative is to configure your build system such that it stops when any dependency was not found, and thereby have packagers acknowledge each optional dependency by explicitly disabling the option.

Untested code paths bit rot

Code paths which are not used will inevitably bit rot. If you have optional dependencies, you need to test both the code path without the dependency and the code path with the dependency. It doesn’t matter whether the tests are automated or manual, the test matrix must cover both paths.

Interestingly enough, this principle seems to apply to all kinds of software projects (but it slows down as change slows down): one might think that important Open Source building blocks should have enough users to cover all sorts of configurations.

However, consider this example: building cairo without libxrender results in all GTK application windows, menus, etc. being displayed as empty grey surfaces. Cairo does not fail to build without libxrender, but the code path clearly is broken without libxrender.

Can we do without them?

I’m not saying optional dependencies should never be used. In fact, for bootstrapping, disabling dependencies can save a lot of work and can sometimes allow breaking circular dependencies. For example, in an early bootstrapping stage, binutils can be compiled with --disable-nls to disable internationalization.

However, optional dependencies are broken so often that I conclude they are overused. Read on and see for yourself whether you would rather commit to best practices or not introduce an optional dependency.

Best practices

If you do decide to make dependencies optional, please:

  1. Set up automated testing for all code path combinations.
  2. Fail the build until packagers explicitly pass a --disable flag.
  3. Tell users their version is missing a dependency at runtime, e.g. in --version.

Worse Than Failure: Teleconference Horror

In the spring of 2020, with very little warning, every school in the United States shut down due to the ongoing global pandemic. Classrooms had to move to virtual meeting software like Zoom, which was never intended to be used as the primary means of educating grade schoolers. The teachers did wonderfully with such little notice, and most kids finished out the year with at least a little more knowledge than they started. This story takes place years before then, when online schooling was seen as an optional add-on and not a necessary backup plan in case of plague.

TelEdu provided their take on such a thing in the form of a free third-party add-on for Moodle, a popular e-learning platform. Moodle provides space for teachers to upload recordings and handouts; TelEdu takes it one step further by adding a "virtual classroom" complete with a virtual whiteboard. The catch? You have to pay a subscription fee to use the free module, otherwise it's nonfunctional.

Initech decided they were on a tight schedule to implement a virtual classroom feature for their corporate training, so they went ahead and bought the service without testing it. They then scheduled a demonstration to the client, still without testing it. The client's 10-man team all joined to test out the functionality, and it wasn't long before the phone started ringing off the hook with complaints: slowness, 504 errors, blank pages, the whole nine yards.

That's where Paul comes in to our story. Paul was tasked with finding what had gone wrong and completing the integration. The most common complaint was that Moodle was being slow, but upon testing it himself, Paul found that only the TelEdu module pages were slow, not the rest of the install. So far so good. The code was open-source, so he went digging through to find out what in view.php was taking so long:

$getplan = telEdu_get_plan();
$paymentinfo = telEdu_get_payment_info();
$getclassdetail = telEdu_get_class($telEduclass->class_id);
$pricelist = telEdu_get_price_list($telEduclass->class_id);

Four calls to get info about the class, three of them to do with payment. Not a great start, but not necessarily terrible, either. So, how was the info fetched?

function telEdu_get_plan() {
    $data['task'] = TELEDU_TASK_GET_PLAN;
    $result = telEdu_get_curl_info($data);
    return $result;
}

"They couldn't possibly ... could they?" Paul wondered aloud.

function telEdu_get_payment_info() {
    $data['task'] = TELEDU_TASK_GET_PAYMENT_INFO;
    $result = telEdu_get_curl_info($data);
    return $result;
}

Just to make sure, Paul next checked what telEdu_get_curl_info actually did:


function telEdu_get_curl_info($data) {
    global $CFG;
    require_once($CFG->libdir . '/filelib.php');

    $key = $CFG->mod_telEdu_apikey;
    $baseurl = $CFG->mod_telEdu_baseurl;

    $urlfirstpart = $baseurl . "/" . $data['task'] . "?apikey=" . $key;

    if (($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) || ($data['task'] == TELEDU_TASK_GET_PLAN)) {
        $location = $baseurl;
    } else {
        $location = telEdu_post_url($urlfirstpart, $data);
    }

    $postdata = '';
    if ($data['task'] == TELEDU_TASK_GET_PAYMENT_INFO) {
        $postdata = 'task=getPaymentInfo&apikey=' . $key;
    } else if ($data['task'] == TELEDU_TASK_GET_PLAN) {
        $postdata = 'task=getplan&apikey=' . $key;
    }

    $options = array(
        'CURLOPT_RETURNTRANSFER' => true, 'CURLOPT_SSL_VERIFYHOST' => false, 'CURLOPT_SSL_VERIFYPEER' => false,
    );

    $curl = new curl();
    $result = $curl->post($location, $postdata, $options);

    $finalresult = json_decode($result, true);
    return $finalresult;
}

A remote call to another API using, of all things, a shell call out to cURL, which queried URLs from the command line. Then it waited for the result, which was clocking in at anywhere between 1 and 30 seconds ... each call. The result wasn't used anywhere, either. It seemed to be just a precaution in case somewhere down the line they wanted these things.

After another half a day of digging through the rest of the codebase, Paul gave up. Sales told the client that "Due to the high number of users, we need more time to make a small server calibration."

The calibration? Replacing TelEdu with BigBlueButton. Problem solved.


Planet Debian: Dirk Eddelbuettel: RcppSimdJson 0.1.1: More Features

A first update following the exciting RcppSimdJson 0.1.0 release last month is now on CRAN. Version 0.1.1 brings further enhancements such as direct parsing of raw chars, working with compressed files as well as much expanded querying ability, all thanks to Brendan, some improvements to our demos thanks to Daniel, as well as a small fix via a one-liner borrowed from upstream for a reported UBSAN issue.

RcppSimdJson wraps the fantastic and genuinely impressive simdjson library by Daniel Lemire and collaborators. Via very clever algorithmic engineering to obtain largely branch-free code, coupled with modern C++ and newer compiler instructions, it parses gigabytes of JSON per second, which is quite mind-boggling. The best-case performance is ‘faster than CPU speed’ as use of parallel SIMD instructions and careful branch avoidance can lead to less than one cpu cycle use per byte parsed; see the video of the talk by Daniel Lemire at QCon (also voted best talk).

The detailed list of changes follows.

Changes in version 0.1.1 (2020-08-10)

  • Corrected incorrect file deletion when mixing local and remote files (Brendan in #34) closing #33.

  • Added support for raw vectors, compressed files, and compressed downloads (Dirk and Brendan in #36, #39, and #45 closing #35 and addressing issues raised in #40 and #44).

  • Examples in two demos are now more self-sufficient (Daniel Lemire and Dirk in #42).

  • Expanded query functionality to include single, flat, and nested queries (Brendan in #45 closing #43).

  • Split error handling parameters from error_ok/on_error into parse_error_ok/on_parse_error and query_error_ok/on_query_error (Brendan in #45).

  • One-line upstream change to address sanitizer error on cast.

Courtesy of CRANberries, there is also a diffstat report for this release.

For questions, suggestions, or issues please use the issue tracker at the GitHub repo.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on Security: Microsoft Patch Tuesday, August 2020 Edition

Microsoft today released updates to plug at least 120 security holes in its Windows operating systems and supported software, including two newly discovered vulnerabilities that are actively being exploited. Yes, good people of the Windows world, it’s time once again to backup and patch up!

At least 17 of the bugs squashed in August’s patch batch address vulnerabilities Microsoft rates as “critical,” meaning they can be exploited by miscreants or malware to gain complete, remote control over an affected system with little or no help from users. This is the sixth month in a row Microsoft has shipped fixes for more than 100 flaws in its products.

The most concerning of these appears to be CVE-2020-1380, which is a weakness in Internet Explorer that could result in system compromise just by browsing with IE to a hacked or malicious website. Microsoft’s advisory says this flaw is currently being exploited in active attacks.

The other flaw enjoying active exploitation is CVE-2020-1464, which is a “spoofing” bug in virtually all supported versions of Windows that allows an attacker to bypass Windows security features and load improperly signed files.

Trend Micro’s Zero Day Initiative points to another fix — CVE-2020-1472 — which involves a critical issue in Windows Server versions that could let an unauthenticated attacker gain administrative access to a Windows domain controller and run an application of their choosing. A domain controller is a server that responds to security authentication requests in a Windows environment, and a compromised domain controller can give attackers the keys to the kingdom inside a corporate network.

“It’s rare to see a Critical-rated elevation of privilege bug, but this one deserves it,” said ZDI’s Dustin Childs. “What’s worse is that there is not a full fix available.”

Perhaps the most “elite” vulnerability addressed this month earned the distinction of being named CVE-2020-1337, and refers to a security hole in the Windows Print Spooler service that could allow an attacker or malware to escalate their privileges on a system if they were already logged on as a regular (non-administrator) user.

Satnam Narang at Tenable notes that CVE-2020-1337 is a patch bypass for CVE-2020-1048, another Windows Print Spooler vulnerability that was patched in May 2020. Narang said researchers found that the patch for CVE-2020-1048 was incomplete and presented their findings for CVE-2020-1337 at the Black Hat security conference earlier this month. More information on CVE-2020-1337, including a video demonstration of a proof-of-concept exploit, is available here.

Adobe has graciously given us another month’s respite from patching Flash Player flaws, but it did release critical security updates for its Acrobat and PDF Reader products. More information on those updates is available here.

Keep in mind that while staying up-to-date on Windows patches is a must, it’s important to make sure you’re updating only after you’ve backed up your important data and files. A reliable backup means you’re less likely to pull your hair out when the odd buggy patch causes problems booting the system.

So do yourself a favor and backup your files before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

And as ever, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips.

Planet Debian: Jonathan Carter: GameMode in Debian

What is GameMode, what does it do?

About two years ago, I ran into some bugs running a game on Debian, so installed Windows 10 on a spare computer and ran it on there. I learned that when you launch a game in Windows 10, it automatically disables notifications, screensaver, reduces power saving measures and gives the game maximum priority. I thought “Oh, that’s actually quite nice, but we probably won’t see that kind of integration on Linux any time soon”. The very next week, I read the initial announcement of GameMode, a tool from Feral Interactive that does a bunch of tricks to maximise performance for games running on Linux.

When GameMode is invoked it:

  • Sets the kernel performance governor from ‘powersave’ to ‘performance’
  • Provides I/O priority to the game process
  • Optionally sets nice value to the game process
  • Inhibits the screensaver
  • Tweaks the kernel scheduler to enable soft real-time capabilities (handled by the MuQSS kernel scheduler, if available in your kernel)
  • Sets GPU performance mode (NVIDIA and AMD)
  • Attempts GPU overclocking (on supported NVIDIA cards)
  • Runs custom pre/post run scripts. You might want to run a script to disable your ethereum mining or suspend VMs when you start a game and resume it all once you quit.

How GameMode is invoked

Some newer games (proprietary games like “Rise of the Tomb Raider”, “Total War Saga: Thrones of Britannia”, “Total War: WARHAMMER II”, “DiRT 4” and “Total War: Three Kingdoms”) will automatically invoke GameMode if it’s installed. For games that don’t, you can manually invoke it using the gamemoderun command.

Lutris is a tool that makes it easy to install and run games on Linux, and it also integrates with GameMode. (Lutris is currently being packaged for Debian; hopefully it will land in time for Bullseye.)

Screenshot of Lutris, a tool that makes it easy to install your non-Linux games, which also integrates with GameMode.

GameMode in Debian

The latest GameMode is packaged in Debian (Stephan Lachnit and I maintain it in the Debian Games Team) and it’s also available for Debian 10 (Buster) via buster-backports. All you need to do to get up and running with GameMode is to install the ‘gamemode’ package.

GameMode in Debian supports 64 bit and 32 bit mode, so running it with older games (and many proprietary games) still works. Some distributions (like Arch Linux) have dropped 32 bit support, so 32 bit games on such systems lose any kind of integration with GameMode even if you can get those games running via other wrappers on such systems.

We also include a binary called ‘gamemode-simulate-game’ (installed under /usr/games/). This is a minimalistic program that will invoke gamemode automatically for 10 seconds and then exit without an error if it was successful. Its source code might be useful if you’d like to add GameMode support to your game, or patch a game in Debian to automatically invoke it.

In Debian we install Gamemode’s example config file to /etc/gamemode.ini where a user can customise their system-wide preferences, or alternatively they can place a copy of that in ~/.gamemode.ini with their personal preferences. In this config file, you can also choose to explicitly allow or deny games.

GameMode might also be useful for many pieces of software that aren’t games. I haven’t done any benchmarks on such software yet, but it might be great for users who use CAD programs or use a combination of their CPU/GPU to crunch a large amount of data.

I’ve also packaged an extension for GNOME called gamemode-extension. The Debian package is called ‘gnome-shell-extension-gamemode’. You’ll need to enable it using gnome-tweaks after installation, it will then display a green controller in your notification area whenever GameMode is active. It’s only in testing/bullseye since it relies on a newer gnome-shell than what’s available in buster.

Running gamemode-simulate-game, with the shell extension showing that it’s activated in the top left corner.

Planet Debian: Mike Gabriel: No Debian LTS Work in July 2020

In July 2020, I was originally assigned 8h of work on Debian LTS as a paid contributor, but holiday season overwhelmed me and I did not do any LTS work, at all.

The assigned hours from July I have taken with me into August 2020.

light+love,
Mike

Cory Doctorow: Terra Nullius

Terra Nullius is my March 2019 column in Locus magazine; it explores the commonalities between the people who claim ownership over the things they use to make new creative works and the settler colonialists who arrived in various “new worlds” and declared them to be empty, erasing the people who were already there as a prelude to genocide.

I was inspired by the story of Aloha Poke, in which a white dude from Chicago secured a trademark for his “Aloha Poke” midwestern restaurants, then threatened Hawai’ians who used “aloha” in the names of their restaurants (and later, by the Dutch grifter who claimed a patent on the preparation of teff, an Ethiopian staple grain that has been cultivated and refined for about 7,000 years).

MP3 Link

Cryptogram: Collecting and Selling Mobile Phone Location Data

The Wall Street Journal has an article about a company called Anomaly Six LLC that has an SDK that's used by "more than 500 mobile applications." Through that SDK, the company collects location data from users, which it then sells.

Anomaly Six is a federal contractor that provides global-location-data products to branches of the U.S. government and private-sector clients. The company told The Wall Street Journal it restricts the sale of U.S. mobile phone movement data only to nongovernmental, private-sector clients.

[...]

Anomaly Six was founded by defense-contracting veterans who worked closely with government agencies for most of their careers and built a company to cater in part to national-security agencies, according to court records and interviews.

Just one of the many Internet companies spying on our every move for profit. And I'm sure they sell to the US government; it's legal and why would they forgo those sales?

Kevin Rudd: CNN: South China Sea and the US-China Tech War

E&OE TRANSCRIPT
TELEVISION INTERVIEW
CNN, FIRST MOVE
11 AUGUST 2020

Topics: Foreign Affairs article; US-China tech war

Zain Asher
In a sobering assessment in Foreign Affairs magazine, the former Australian Prime Minister Kevin Rudd warns that diplomatic relations are crumbling and raise the possibility of armed conflict. Mr Rudd, who is president of the Asia Society Policy Institute, joins us live now. So Mr Rudd, just walk us through this. You believe that armed conflict is possible and, is this relationship at this point, in your opinion, quite frankly, beyond repair?

Kevin Rudd
It’s not beyond repair, but we’ve got to be blunt about the fact that the level of deterioration has been virtually unprecedented at least in the last half-century. And things are moving at a great pace in terms of the scenarios, the two scenarios which trouble us most are the Taiwan straits and the South China Sea. In the Taiwan straits, we see consistent escalation of tensions between Washington and Beijing. And certainly, in the South China Sea, the pace and intensity of naval and air activity in and around that region increases the possibility, the real possibility, of collisions at sea and collisions in the air. And the question then becomes: do Beijing and Washington really have an intention to de-escalate or then to escalate, if such a crisis was to unfold?

Zain Asher
How do they de-escalate? Is the only way at this point, or how do they reverse the sort of tensions between them? Is the main way at this point that, you know, a new administration comes in in November and it can be reset? If Trump gets re-elected, can there be de-escalation? If so, how?

Kevin Rudd
Well the purpose of my writing the article in Foreign Affairs, which you referred to before, was to, in fact, talk about the real dangers we face in the next three months. That is, before the US presidential election. We all know that in the US right now, that tensions or, shall I say, political pressure on President Trump are acute. But what people are less familiar of within the West is the fact that in Chinese politics there is also pressure on Xi Jinping for a range of domestic and external reasons as well. So what I have simply said is: in this next three months, where we face genuine political pressure operating on both political leaders, if we do have an incident, that is an unplanned incident or collision in the air or at sea, we now have a tinderbox environment. Therefore, the plans which need to be put in place between the grown-ups in the US and Chinese militaries is to have a mechanism to rapidly de-escalate should a collision occur. I’m not sure that those plans currently exist.

Zain Asher
Let’s talk about tech because President Donald Trump, as you know, is forcing ByteDance, the company that owns TikTok, to sell its assets and no longer operate in the US. The premise is that there are national security fears and also this idea that TikTok is handing over user data from American citizens to the Chinese government. How real and concrete are those fears, or is this purely politically motivated? Are the fears justified, in other words?

Kevin Rudd
As far as TikTok is concerned, this is way beyond my paygrade in terms of analysing the technological capacities of a) the company and b) the ability of the Chinese security authorities to backdoor them. What I can say is this a deliberate decision on the part of the US administration to radically escalate the technology war. In the past, it was a war about Huawei and 5G. It then became an unfolding conflict over the question of the future access to semiconductors, computer chips. And now we have, as it were, the unfolding ban imposed by the administration on Chinese-sourced computer apps, including this one, for TikTok. So this is a throwing-down of the gauntlet by the US administration. What I believe we will see, however, is Chinese retaliation. I think they will find a corporate mechanism to retaliate, given the actions taken not just against ByteDance and TikTok, but of course against WeChat. And so the pattern of escalation that we were talking about earlier in technology, the economy, trade, investment, finance, and the hard stuff in national security continues to unfold, which is why we need sober heads to prevail in the months ahead.

The post CNN: South China Sea and the US-China Tech War appeared first on Kevin Rudd.

Worse Than Failure: CodeSOD: The Concatenator

In English, there's much debate over the "Oxford Comma": in a list of items, do you put a comma between the penultimate item and the "and" before the final one? For example: "The conference featured bad programmers, Remy and TheDailyWTF readers" versus "The conference featured bad programmers, Remy, and the TheDailyWTF readers."

I'd like to introduce a subtly different one: "the concatenator's comma", or if we want to be generic "the concatenator's separator character", but that doesn't have the same ring to it. If you're planning to list items as a string, you might do something like this pseudocode:

for each item in items
    result.append(item + ", ")

This naive approach does pose a problem: we'll have an extra comma. So maybe you have to add logic to decide if you're on the first or last item, and insert (or fail to insert) commas as appropriate. Or maybe it isn't a problem: if we're generating JSON, for example, we can just leave the trailing commas. This isn't universally true, of course, but many formats will ignore extra separators. Edit: I was apparently hallucinating when I wrote this; one of the most annoying things about JSON is that you can't do this.

Like, for example, URL query strings, which don't require a "sub-delim" like "&" to have anything following it.

But fortunately for us, no matter what language we're using, there's almost certainly an API that makes it so that we don't have to do string concatenation anyway, so why even bring it up?

Well, because Mike has a co-worker that has read the docs well enough to know that PHP has a substr method, but not well enough to know it has an http_build_query method. Or even an implode method, which handles string concats for you. Instead, they wrote this:

$query = '';
foreach ($postdata as $var => $val) {
    $query .= $var .'='. $val .'&';
}
$query = substr($query, 0, -1);

This code exploits a little-observed feature of substr: a negative length reads back from the end. So this lops off that trailing "&", which is both unnecessary and one of the most annoying ways to do this.

Maybe it's not enough to RTFM, as Mike puts it, maybe you need to "RTEFM": read the entire manual.


Planet Debian: Dirk Eddelbuettel: nanotime 0.3.1: Misc Build Fixes for Yuge New Features!

The nanotime 0.3.0 release four days ago was so exciting that we decided to do it again! Kidding aside, and fairly extensive tests notwithstanding, we were bitten by a few build errors: who knew clang on macOS needed extra curlies to be happy, another manifestation of Solaris having no idea what a timezone setting “America/New_York” is, plus some extra pickiness from the SAN tests and whatnot. So Leonardo and I gave it some extra care over the weekend, uploaded it late yesterday and here we are with 0.3.1. Thanks again to CRAN for prompt processing even though they are clearly deluged shortly before their (brief) summer break.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by Leonardo Silvestri who rejigged internals in S4—and now added new types for periods, intervals and durations.

The NEWS snippet adds full details.

Changes in version 0.3.1 (2020-08-09)

  • Several small cleanups to ensure a more robust compilation (Leonardo and Dirk in #75 fixing #74).

  • Show Solaris some extra love by skipping tests and examples with a timezone (Dirk in #76).

Thanks to CRANberries there is also a diff to the previous version. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than Failure: I'm Blue

Designers are used to getting vague direction from clients. "It should have more pop!" or "Can you make the blue more blue?" But Kevin was a contractor who worked on embedded software, so he didn't really expect to have to deal with that, even if he did have to deal with colors a fair bit.

Kevin was taking over a contract from another developer to build software for a colorimeter, a device to measure color. When companies, like paint companies, care about color, they tend to really care about color, and need to be able to accurately observe a real-world color. Once you start diving deep into color theory, you start having to think about things like observers, and illuminants and tristimulus models and "perceptual color spaces".

The operating principle of the device was fairly simple. It had a bright light, of a well known color temperature. It had a brightness sensor. It had a set of colored filter gels that would pass in front of the sensor. Place the colorimeter against an object, and the bright light would reflect off the surface, through each of the filters in turn and record the brightness. With a little computation, you can determine, with a high degree of precision, what color something is.

Now, this is a scientific instrument, and that means that the code which runs it, even though it's proprietary, needs to be vetted by scientists. The device needs to be tested against known samples. Deviations need to be corrected for, and then carefully justified. There should be no "magic numbers" in the code that aren't well documented and explained. If, for example, the company gets its filter gels from a new vendor and they filter slightly different frequencies, the commit needs to link to the datasheets for those gels to explain the change. Similarly, if a sensor has a frequency response that means that the samples may be biased, you commit that with a link to the datasheet showing that to be the case.

Which is why Kevin was a little surprised by the commit by his predecessor. The message read: "Nathan wants the blue 'more blue'? Fine. the blue is more blue." Nathan was the product owner.

The corresponding change was a line which read:

blue += 20;

Well, Nathan got what he wanted. It's a good thing he didn't ask for it to "pop" though.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet DebianRuss Allbery: rra-c-util 8.3

In this release of my utility library for my other packages, I finally decided to drop support for platforms without a working snprintf.

This dates back to the early 2000s and a very early iteration of this package. At the time, there were still some older versions of UNIX without snprintf at all. More commonly, it was buggy. The most common problem was that it would return -1 if the buffer wasn't large enough rather than returning the necessary size of the buffer. Or, in some cases, it wouldn't support a buffer size of 0 and a NULL buffer to get the necessary size.

At the time I added this support for INN and some other packages, Solaris had several of these issues. But C99 standardized the correct snprintf behavior, and slowly every maintained operating system was fixed. (I forget whether it was fixed in Solaris 8 or Solaris 9, but regardless, Solaris has had a working snprintf for many years.) Meanwhile, the replacement function (Patrick Powell's version, also used by mutt and other packages) was a huge wad of code and a corresponding test suite. Over time, I've increased the aggressiveness of linters to try to catch more dangerous C pitfalls, and that's required carrying more and more small modifications plus a preamble to disable various warnings that I didn't want to try to fix.

The straw that broke the camel's back was Clang's new case fallthrough warning. Clang stopped supporting the traditional /* fallthrough */ comment. It now prefers the [[clang::fallthrough]] syntax, but of course older compilers choke on that. It does support the GCC __attribute__((__fallthrough__)) syntax, but older compilers don't like that construction because they think it's an empty statement. It was a mess, and I decided the time had come to drop this support effort.

At this point, if you're still running an operating system without C99 snprintf, I think it's essentially a retrocomputing or at least extremely stable legacy production situation, and you're unlikely to want the latest and greatest releases of new software. Hopefully that assumption is correct, or at least correct enough.

(I realize the right solution to this problem is probably for me to use Gnulib for portability. But converting to it is a whole other project with a lot of other implications and machinery, and I'm not sure that's what I want to spend time on.)

Also in this release is a fix for network tests on hosts with no IPv4 addresses (more on this when I release the next version of remctl), fixes for style issues found by Perl::Critic::Freenode, and some other test suite improvements.

You can get the latest version from the rra-c-util distribution page.

Planet DebianArnaud Rebillout: GoAccess 1.4, a detailed tutorial

GoAccess v1.4 was just released a few weeks ago! Let's take this chance to write a loooong tutorial. We'll go over every step to install and operate GoAccess. This is a tutorial aimed at those who don't play sysadmin every day, and that's why it's so long: I did my best to provide thorough explanations all along, so that it's more than just a "copy-and-paste" kind of tutorial. And for those who do play sysadmin every day: please try not to fall asleep while reading, and don't hesitate to drop me an e-mail if you spot anything inaccurate in here. Thanks!

Introduction

So what's GoAccess already? GoAccess is a web log analyzer, and it allows you to visualize the traffic for your website, and get to know a bit more about your visitors: how many visitors and hits, for which pages, coming from where (geolocation, operating system, web browser...), etc... It does so by parsing the access logs from your web server, be it Apache, NGINX or whatever.

GoAccess gives you different options to display the statistics, and in this tutorial we'll focus on producing a HTML report, meaning that you can see the statistics for your website straight in your web browser, in the form of a single HTML page.

For an example, you can have a look at the stats of my blog here: https://goaccess.arnaudr.io.

GoAccess is written in C, has very few dependencies, has been around for about 10 years, and is distributed under the MIT license.

Assumptions

This tutorial is about installing and configuring, so I'll assume that all the commands are run as root. I won't prefix each of them with sudo.

I use the Apache web server, running on a Debian system. I don't think it matters so much for this tutorial though. If you're using NGINX it's fine, you can keep reading.

Also, I will just use the name SITE for the name of the website that we want to analyze with GoAccess. Just replace that with the real name of your site.

I also assume the following locations for your stuff:

  • the website is at /var/www/SITE
  • the logs are at /var/log/apache2/SITE (yes, there is a sub-directory)
  • we're going to save the GoAccess database in /var/lib/goaccess-db/SITE.

If you have your stuff in /srv/SITE/{log,www} instead, no worries, just adjust the paths accordingly, I bet you can do it.

Installation

The latest version of GoAccess is v1.4, and it's not yet available in the Debian repositories. So for this part, you can follow the instructions from the official GoAccess download page. Install steps are explained in details, so there's nothing left for me to say :)

When this is done, let's get started with the basics.

We're talking about the latest version v1.4 here, let's make sure:

$ goaccess --version
GoAccess - 1.4.
...

Now let's try to create a HTML report. I assume that you already have a website up and running.

GoAccess needs to parse the access logs. These logs are optional: they might or might not be created by your web server, depending on how it's configured. Usually, these log files are named access.log, unsurprisingly.

You can check if those logs exist on your system by running this command:

find /var/log -name access.log

Another important thing to know is that these logs can be in different formats. In this tutorial we'll assume that we work with the combined log format, because it seems to be the most common default.
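For reference, a single line in the combined log format looks something like this (the values here are made up):

127.0.0.1 - - [10/Aug/2020:12:34:56 +0200] "GET /index.html HTTP/1.1" 200 1234 "https://example.com/" "Mozilla/5.0 (X11; Linux x86_64)"

That is: the client IP, two mostly-unused identity fields, the timestamp, the request line, the HTTP status code, the response size, the referrer and the user-agent.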

To check what kind of access logs your web server produces, you must look at the configuration for your site.

For an Apache web server, you should have such a line in the file /etc/apache2/sites-enabled/SITE.conf:

CustomLog ${APACHE_LOG_DIR}/SITE/access.log combined

For NGINX, it's quite similar. The configuration file would be something like /etc/nginx/sites-enabled/SITE, and the line to enable access logs would be something like:

access_log /var/log/nginx/SITE/access.log;

Note that NGINX writes the access logs in the combined format by default, that's why you don't see the word combined anywhere in the line above: it's implicit.
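If you prefer to be explicit about it, spelling the format out should presumably give the same result, still assuming the same path as above:

access_log /var/log/nginx/SITE/access.log combined;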

Alright, so from now on we assume that yes, you have access log files available, and yes, they are in the combined log format. If that's the case, then you can already run GoAccess and generate a report, for example for the log file /var/log/apache2/access.log:

goaccess \
    --log-format COMBINED \
    --output /tmp/report.html \
    /var/log/apache2/access.log

It's possible to give GoAccess more than one log file to process, so if you have for example the file access.log.1 around, you can use it as well:

goaccess \
    --log-format COMBINED \
    --output /tmp/report.html \
    /var/log/apache2/access.log \
    /var/log/apache2/access.log.1

If GoAccess succeeds (and it should), you're on the right track!

All that is left to do to complete this test is to have a look at the HTML report that was created. It's a single HTML page, so you can easily scp it to your machine, or just move it to the document root of your site, and then open it in your web browser.
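For example, from your own machine, something along these lines would do the trick ('myserver' is just a placeholder for the hostname of your server):

scp myserver:/tmp/report.html .
xdg-open report.html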

Looks good? So let's move on to more interesting things.

Web server configuration

This part is very short, because in terms of configuration of the web server, there's very little to do. As I said above, the only thing you want from the web server is to create access log files. Then you want to be sure that GoAccess and your web server agree on the format for these files.

In the part above we used the combined log format, but GoAccess supports many other common log formats out of the box, and even allows you to parse custom log formats. For more details, refer to the option --log-format in the GoAccess manual page.

Another common log format is named, well, common. It even has its own Wikipedia page. But compared to combined, the common log format contains less information: it doesn't include the referrer and user-agent values, meaning that you won't have them in the GoAccess report.

So at this point you should understand that, unsurprisingly, GoAccess can only tell you about what's in the access logs, no more no less.

And that's all in terms of web server configuration.

Configuration to run GoAccess unprivileged

Now we're going to create a user and group for GoAccess, so that we don't have to run it as root. The reason is that, well, for everything running unattended on your server, the less code runs as root, the better. It's good practice and common sense.

In this case, GoAccess is simply a log analyzer. So it just needs to read the log files from your web server, and there is no need to be root for that; an unprivileged user can do the job just as well, assuming it has read permissions on /var/log/apache2 or /var/log/nginx.

The log files of the web server are usually part of the adm group (though it might depend on your distro, I'm not sure). This is something you can check easily with the following command:

ls -l /var/log | grep -e apache2 -e nginx

As a result you should get something like that:

drwxr-x--- 2 root adm 20480 Jul 22 00:00 /var/log/apache2/

And as you can see, the directory apache2 belongs to the group adm. It means that you don't need to be root to read the logs, instead any unprivileged user that belongs to the group adm can do it.

So, let's create the goaccess user, and add it to the adm group:

adduser --system --group --no-create-home goaccess
addgroup goaccess adm

And now, let's run GoAccess unprivileged, and verify that it can still read the log files:

setpriv \
    --reuid=goaccess --regid=goaccess \
    --init-groups --inh-caps=-all \
    -- \
    goaccess \
    --log-format COMBINED \
    --output /tmp/report2.html \
    /var/log/apache2/access.log

setpriv is the command used to drop privileges. The syntax is quite verbose, it's not super friendly for tutorials, but don't be scared and read the manual page to learn what it does.

In any case, this command should work, and at this point, it means that you have a goaccess user ready, and we'll use it to run GoAccess unprivileged.

Integration, option A - Run GoAccess once a day, from a logrotate hook

In this part we wire things together, so that GoAccess processes the log files once a day, adds the new logs to its internal database, and generates a report from all that aggregated data. The result will be a single HTML page.

Introducing logrotate

In order to do that, we'll use a logrotate hook. logrotate is a little tool that should already be installed on your server; it runs once a day and is in charge of rotating the log files. "Rotating the logs" means moving access.log to access.log.1 and so on. With logrotate, a new log file is created every day, and log files that are too old are deleted. That's what prevents your logs from filling up your disk, basically :)
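To make this a bit more concrete, after a few days of rotations a log directory typically contains something like this (the exact names and the compression suffix depend on your logrotate configuration):

access.log  access.log.1  access.log.2.gz  access.log.3.gz  error.log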

You can check that logrotate is indeed installed and enabled with this command (assuming that your init system is systemd):

systemctl status logrotate.timer

What's interesting for us is that logrotate allows you to run scripts before and after the rotation is performed, so it's an ideal place from where to run GoAccess. In short, we want to run GoAccess just before the logs are rotated away, in the prerotate hook.

But let's do things in order. At first, we need to write a little wrapper script that will be in charge of running GoAccess with the right arguments, and that will process all of your sites.

The wrapper script

This wrapper is made to process more than one site, but if you have only one site it works just as well, of course.

So let me just drop it on you like that, and I'll explain afterward. Here's my wrapper script:

#!/bin/bash

# Process log files /var/log/apache2/SITE/access.log,
# only if /var/lib/goaccess-db/SITE exists.
# Create HTML reports in $1, a directory that must exist.

set -eu

OUTDIR=
LOGDIR=/var/log/apache2
DBDIR=/var/lib/goaccess-db

fail() { echo >&2 "$@"; exit 1; }

[ $# -eq 1 ] || fail "Usage: $(basename $0) OUTPUT_DIRECTORY"

OUTDIR=$1

[ -d "$OUTDIR" ] || fail "'$OUTDIR' is not a directory"
[ -d "$LOGDIR" ] || fail "'$LOGDIR' is not a directory"
[ -d "$DBDIR"  ] || fail "'$DBDIR' is not a directory"

for d in $(find "$LOGDIR" -mindepth 1 -maxdepth 1 -type d); do
    site=$(basename "$d")
    dbdir=$DBDIR/$site
    logfile=$d/access.log
    outfile=$OUTDIR/$site.html

    if [ ! -d "$dbdir" ] || [ ! -e "$logfile" ]; then
        echo "‣ Skipping site '$site'"
        continue
    else
        echo "‣ Processing site '$site'"
    fi

    setpriv \
        --reuid=goaccess --regid=goaccess \
        --init-groups --inh-caps=-all \
        -- \
    goaccess \
        --agent-list \
        --anonymize-ip \
        --persist \
        --restore \
        --config-file /etc/goaccess/goaccess.conf \
        --db-path "$dbdir" \
        --log-format "COMBINED" \
        --output "$outfile" \
        "$logfile"
done

So you'd install this script at /usr/local/bin/goaccess-wrapper for example, and make it executable:

chmod +x /usr/local/bin/goaccess-wrapper

A few things to note:

  • We run GoAccess with --persist, meaning that we save the parsed logs in the internal database, and --restore, meaning that we include everything from the database in the report. In other words, we aggregate the data at every run, and the report grows bigger every time.
  • The parameter --config-file /etc/goaccess/goaccess.conf is a workaround for #1849. It should not be needed for future versions of GoAccess > 1.4.

As is, the script makes the assumption that the logs for your site are logged in a sub-directory /var/log/apache2/SITE/. If it's not the case, adjust that in the wrapper accordingly.

The name of this sub-directory is then used to find the GoAccess database directory /var/lib/goaccess-db/SITE/. This directory is expected to exist, meaning that if you don't create it yourself, the wrapper won't process this particular site. It's a simple way to control which sites are processed by this GoAccess wrapper, and which sites are not.

So if you want goaccess-wrapper to process the site SITE, just create a directory with the name of this site under /var/lib/goaccess-db:

mkdir -p /var/lib/goaccess-db/SITE
chown goaccess:goaccess /var/lib/goaccess-db/SITE

Now let's create an output directory:

mkdir /tmp/goaccess-reports
chown goaccess:goaccess /tmp/goaccess-reports

And let's give a try to the wrapper script:

goaccess-wrapper /tmp/goaccess-reports
ls /tmp/goaccess-reports

Which should give you:

SITE.html

At the same time, you can check that GoAccess populated the database with a bunch of files:

ls /var/lib/goaccess-db/SITE

Setting up the logrotate prerotate hook

At this point, we have the wrapper in place. Let's now add a pre-rotate hook so that goaccess-wrapper runs once a day, just before the logs are rotated away.

The logrotate config file for Apache2 is located at /etc/logrotate.d/apache2, and for NGINX it's at /etc/logrotate.d/nginx. Among the many things you'll see in this file, here's what is of interest for us:

  • daily means that your logs are rotated every day
  • sharedscripts means that the pre-rotate and post-rotate scripts are executed once total per rotation, and not once per log file.

In the config file, there is also this snippet:

prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
        run-parts /etc/logrotate.d/httpd-prerotate; \
    fi; \
endscript

It indicates that scripts in the directory /etc/logrotate.d/httpd-prerotate/ will be executed before the rotation takes place. Refer to the man page run-parts(8) for more details...

Putting all of that together, it means that logs from the web server are rotated once a day, and if we want to run scripts just before the rotation, we can just drop them in the httpd-prerotate directory. Simple, right?

Let's first create this directory if it doesn't exist:

mkdir -p /etc/logrotate.d/httpd-prerotate/

And let's create a tiny script at /etc/logrotate.d/httpd-prerotate/goaccess:

#!/bin/sh
exec goaccess-wrapper /tmp/goaccess-reports

Don't forget to make it executable:

chmod +x /etc/logrotate.d/httpd-prerotate/goaccess

As you can see, the only thing that this script does is to invoke the wrapper with the right argument, ie. the output directory for the HTML reports that are generated.

And that's all. Now you can just come back tomorrow, check the logs, and make sure that the hook was executed and succeeded. For example, this kind of command will tell you quickly if it worked:

journalctl | grep logrotate

Integration, option B - Run GoAccess once a day, from a systemd service

OK so we've just seen how to use a logrotate hook. One downside with that is that we have to drop privileges in the wrapper script, because logrotate runs as root, and we don't want to run GoAccess as root. Hence the rather convoluted syntax with setpriv.

Rather than embedding this kind of thing in a wrapper script, we can instead run the wrapper script from a systemd service, and define which user runs the wrapper straight in the systemd service file.

Introducing systemd niceties

So we can create a systemd service, along with a systemd timer that fires daily. We can then set the user and group that execute the script straight in the systemd service, and there's no need for setpriv anymore. It's a bit more streamlined.

We can even go a bit further, and use systemd parameterized units (also called templates), so that we have one service per site (instead of one service that processes all of our sites). That will simplify the wrapper script a lot, and it also looks nicer in the logs.

With this approach however, it seems that we can't really run exactly before the logs are rotated away, like we did in the section above. But that's OK. What we'll do is that we'll run once a day, no matter the time, and we'll just make sure to process both log files access.log and access.log.1 (ie. the current logs and the logs from yesterday). This way, we're sure not to miss any line from the logs.

Note that GoAccess is smart enough to only consider newer entries from the log files, and discard entries that are already in the database. In other words, it's safe to parse the same log file more than once, GoAccess will do the right thing. For more details see "INCREMENTAL LOG PROCESSING" from man goaccess.

Implementation

And here's what it all looks like.

First, a little wrapper script for GoAccess:

#!/bin/bash

# Usage: $0 SITE DBDIR LOGDIR OUTDIR

set -eu

SITE=$1
DBDIR=$2
LOGDIR=$3
OUTDIR=$4

LOGFILES=()
for ext in log log.1; do
    logfile="$LOGDIR/access.$ext"
    [ -e "$logfile" ] && LOGFILES+=("$logfile")
done

if [ ${#LOGFILES[@]} -eq 0 ]; then
    echo "No log files in '$LOGDIR'"
    exit 0
fi

goaccess \
    --agent-list \
    --anonymize-ip \
    --persist \
    --restore \
    --config-file /etc/goaccess/goaccess.conf \
    --db-path "$DBDIR" \
    --log-format "COMBINED" \
    --output "$OUTDIR/$SITE.html" \
    "${LOGFILES[@]}"

This wrapper does very little. Actually, the only thing it does is to check for the existence of the two log files access.log and access.log.1, to be sure that we don't ask GoAccess to process a file that does not exist (GoAccess would not be happy about that).

Save this file under /usr/local/bin/goaccess-wrapper, don't forget to make it executable:

chmod +x /usr/local/bin/goaccess-wrapper

Then, create a systemd parameterized unit file, so that we can run this wrapper as a systemd service. Save it under /etc/systemd/system/goaccess@.service:

[Unit]
Description=Update GoAccess report - %i
ConditionPathIsDirectory=/var/lib/goaccess-db/%i
ConditionPathIsDirectory=/var/log/apache2/%i
ConditionPathIsDirectory=/tmp/goaccess-reports
PartOf=goaccess.service

[Service]
Type=oneshot
User=goaccess
Group=goaccess
Nice=19
ExecStart=/usr/local/bin/goaccess-wrapper \
 %i \
 /var/lib/goaccess-db/%i \
 /var/log/apache2/%i \
 /tmp/goaccess-reports

So, what is a systemd parameterized unit? It's a service to which you can pass an argument when you enable it. The %i in the unit definition will be replaced by this argument. In our case, the argument will be the name of the site that we want to process.

As you can see, we use the directive ConditionPathIsDirectory= extensively, so that if ever one of the required directories does not exist, the unit will just be skipped (and marked as such in the logs). It's a graceful way to fail.

We run the wrapper as the user and group goaccess, thanks to User= and Group=. We also use Nice= to give a low priority to the process.

At this point, it's already possible to test. Just make sure that you created a directory for the GoAccess database:

mkdir -p /var/lib/goaccess-db/SITE
chown goaccess:goaccess /var/lib/goaccess-db/SITE

Also make sure that the output directory exists:

mkdir /tmp/goaccess-reports
chown goaccess:goaccess /tmp/goaccess-reports

Then reload systemd and fire the unit to see if it works:

systemctl daemon-reload
systemctl start goaccess@SITE.service
journalctl | tail

And that should work already.

As you can see, the argument, SITE, is passed in the systemctl start command. We just append it after the @, in the name of the unit.

Now, let's create another GoAccess service file, whose sole purpose is to group all the parameterized units together, so that we can start them all in one go. Note that we don't use a systemd target for that, because ultimately we want to run it once a day, and that would not be possible with a target. So instead we use a dummy oneshot service.

So here it is, saved under /etc/systemd/system/goaccess.service:

[Unit]
Description=Update GoAccess reports
Requires= \
 goaccess@SITE1.service \
 goaccess@SITE2.service

[Service]
Type=oneshot
ExecStart=true

As you can see, we simply list the sites that we want to process in the Requires= directive. In this example we have two sites named SITE1 and SITE2.

Let's ensure that everything is still good:

systemctl daemon-reload
systemctl start goaccess.service
journalctl | tail

Check the logs, both sites SITE1 and SITE2 should have been processed.

And finally, let's create a timer, so that systemd runs goaccess.service once a day. Save it under /etc/systemd/system/goaccess.timer.

[Unit]
Description=Daily update of GoAccess reports

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Finally, enable the timer:

systemctl daemon-reload
systemctl enable --now goaccess.timer

At this point, everything should be OK. Just come back tomorrow and check the logs with something like:

journalctl | grep goaccess

Last word: if you have only one site to process, you can of course simplify; for example, you can hardcode all the paths in the file goaccess.service instead of using a parameterized unit. Up to you.
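For the record, here is a minimal sketch of what such a hardcoded goaccess.service could look like, reusing the paths and the SITE placeholder from the rest of this tutorial (adjust to your setup); the timer from above keeps working unchanged, since it still fires goaccess.service:

[Unit]
Description=Update GoAccess report
ConditionPathIsDirectory=/var/lib/goaccess-db/SITE
ConditionPathIsDirectory=/var/log/apache2/SITE
ConditionPathIsDirectory=/tmp/goaccess-reports

[Service]
Type=oneshot
User=goaccess
Group=goaccess
Nice=19
ExecStart=/usr/local/bin/goaccess-wrapper \
 SITE \
 /var/lib/goaccess-db/SITE \
 /var/log/apache2/SITE \
 /tmp/goaccess-reports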

Daily operations

So in this part, we assume that you have GoAccess all setup and running, once a day or so. Let's just go over a few things worth noting.

Serve your report

Up to now in this tutorial, we created the reports in /tmp/goaccess-reports, but that was just for the sake of the example. You will probably want to save your reports in a directory that is served by your web server, so that, well, you can actually look at it in your web browser, that was the point, right?

So how to do that is a bit out of scope here, and I guess that if you want to monitor your website, you already have a website, so you will have no trouble serving the GoAccess HTML report.

However there's an important detail to be aware of: GoAccess shows all the IP addresses of your visitors in the report. As long as the report is private it's OK, but if ever you make your GoAccess report public, then you should definitely invoke GoAccess with the option --anonymize-ip.

Keep an eye on the logs

In this tutorial, the reports we create, along with the GoAccess databases, will grow bigger every day, forever. It also means that the GoAccess processing time will grow a bit each day.

So maybe the first thing to do is to keep an eye on the logs, to see how long it takes GoAccess to do its job every day. Also, maybe you'd like to keep an eye on the size of the GoAccess database with:

du -sh /var/lib/goaccess-db/SITE

If your site has few visitors, I suspect it won't be a problem though.

You could also be a bit pro-active in preventing this problem in the future, and for example you could break the reports into, say, monthly reports. Meaning that every month, you would create a new database in a new directory, and also start a new HTML report. This way you'd have monthly reports, and you make sure to limit the GoAccess processing time, by limiting the database size to a month.

This can be achieved very easily, by including something like YEAR-MONTH in the database directory, and in the HTML report. You can handle that automatically in the wrapper script, for example:

sfx=$(date +'%Y-%m')

mkdir -p $DBDIR/$sfx

goaccess \
    --db-path $DBDIR/$sfx \
    --output "$OUTDIR/$SITE-$sfx.html" \
    ...

You get the idea.

Further notes

Migration from older versions

With the --persist option, GoAccess keeps all the information from the logs in a database, so that it can re-use it later. In prior versions, GoAccess used the Tokyo Cabinet key-value store for that. However starting from v1.4, GoAccess dropped this dependency and now uses its own database format.

As a result, the previous database can't be used anymore; you will have to remove it and restart from zero. At the moment there is no way to convert the data from the old database to the new one. If you're interested, this is discussed upstream at #1783.

Another thing that changed with this new version is the name for some of the command-line options. For example, --load-from-disk was dropped in favor of --restore, and --keep-db-files became --persist. So you'll have to look at the documentation a bit, and update your script(s) accordingly.

Other ways to use GoAccess

It's also possible to do it completely differently. You could keep GoAccess running, pretty much like a daemon, with the --real-time-html option, and have it process the logs continuously, rather than calling it on a regular basis.
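As a rough sketch, and assuming the same paths as in this tutorial plus a document root at /var/www/SITE, that could look something like this (check the GoAccess manual for the details, in particular the WebSocket connection it uses to push live updates to the page):

goaccess \
    --log-format COMBINED \
    --real-time-html \
    --output /var/www/SITE/report.html \
    /var/log/apache2/SITE/access.log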

It's also possible to see the GoAccess report straight in the terminal, thanks to libncurses, rather than creating a HTML report.

And much more, GoAccess is packed with features.

Conclusion

I hope that this tutorial helped some of you folks. Feel free to drop an e-mail for comments.

,

Planet DebianRuss Allbery: DocKnot 3.05

I keep telling myself that the next release of DocKnot will be the one where I convert everything to YAML and then feel confident about uploading it to Debian, and then I keep finding one more thing to fix to release another package I'm working on.

Anyway, this is the package that I use to generate software documentation and that, in the long run, will subsume my static web site generator and software release workflow. This release tweaks a heuristic for wrapping paragraphs in text documents, fixes the status badge for software with Debian packages to do what I had intended, and updates dependencies based on the advice of Perl::Critic::Freenode.

You can get the latest version from CPAN or from the DocKnot distribution page.

Planet DebianJunichi Uekawa: Started writing some golang code.

Started writing some golang code. Trying to rewrite some of the tools as a daily driver for machine management. It's easier than Rust in that having a good Rust compiler available can be a hassle, whereas the Go toolchain preinstalled on systems can build and run code. go run is simple enough to invoke on most Debian systems.

Planet DebianCharles Plessy: Thank you, VAIO

I use a VAIO Pro mk2 every day; I bought it 5 years ago with 3 years of warranty. For a few months I had been noticing that something was slowly inflating inside. In July, things accelerated to the point that its thickness had doubled. After we called the customer service of VAIO, somebody came to pick up the laptop in order to make a cost estimate. Then we learned on the phone that it would be free. It was back in my hands in less than two weeks. Bravo VAIO !

,

Planet DebianHolger Levsen: 20200808-debconf8

DebConf8

This tshirt is 12 years old and from DebConf8.

DebConf8 was my 6th DebConf and took place in Mar del Plata, Argentina.

Also, this is my 6th post in this series of posts about DebConfs, and for the last two days, for the first time, I failed my plan to do one post per day. And while two days ago I still planned to catch up on this by doing more than one post in a day, I have now decided to give in to realities, which mostly translates to sudden fantastic weather in Hamburg and other summer-related changes in life. So yeah, I still plan to do short posts about all the DebConfs I was lucky to attend, but there might be days without a blog post. Anyhow, Mar del Plata.

When we held DebConf in Argentina it was winter there, meaning locals and other folks would wear jackets, scarves, probably gloves, while many Debian folks not so much. Andreas Tille freaked out and/or amazed local people by going swimming in the sea every morning. And when I told Stephen Gran that even I would find it a bit cold with just a tshirt, he replied "na, the weather is fine, just like british summer", while it was 14 degrees Celsius and mildly raining.

DebConf8 was the first time I met Valessio Brito, with whom I had worked since at least DebConf6. That meeting was really super nice, Valessio is such a lovely person. Back in 2008 however, there was just one problem: his spoken English was worse than his written one, and that was already hard to parse sometimes. Fast forward eleven years to Curitiba last year and boom, Valessio speaks really nice English now.

And, you might wonder why I'm telling this, especially if you were exposed to my Spanish back then and also now. So my point in telling this story about Valessio is to illustrate two things: a.) one can contribute to Debian without speaking/writing much English, Valessio did lots of great artwork since DebConf6 and b.) one can learn English by doing Debian stuff. It worked for me too!

During set up of the conference there was one very memorable moment, some time after the openssl maintainer, Kurt Roeckx arrived at the venue: Shortly before DebConf8 Luciano Bello, from Argentina no less, had found CVE-2008-0166 which basically compromised the security of sshd of all Debian and Ubuntu installations done in the last 4 years (IIRC two Debian releases were affected) and which was commented heavily and noticed everywhere. So poor Kurt arrived and wondered whether we would all hate him, how many toilets he would have to clean and what not... And then, someone rather quickly noticed this, approached some people and suddenly a bunch of people at DebConf were group-hugging Kurt and then we were all smiling and continuing doing set up of the conference.

That moment is one of my most joyful memories of all DebConfs and partly explains why I remember little about the conference itself: everything else pales in comparison, and most things pale over the years anyway. As I remember it, the conference ran very smoothly in the end, despite quite some organisational problems right before the start. But as usual, once the geeks arrive and are happily geeking, things start to run smoothly, also because Debian people are kind and smart and lend hands and brains where needed.

And like other DebConfs, Mar del Plata also had moments which I want to share but will only hint at, so it's up to you to imagine the special leaves which were brought to that cheese and wine party! ;-)

Update: added another xkcd link, spelled out Kurt's name after talking to him and added a link to a video of the group hug.

Planet DebianReproducible Builds: Reproducible Builds in July 2020

Welcome to the July 2020 report from the Reproducible Builds project.

In these monthly reports, we round-up the things that we have been up to over the past month. As a brief refresher, the motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced from the original free software source code to the pre-compiled binaries we install on our systems. (If you’re interested in contributing to the project, please visit our main website.)

General news

At the upcoming DebConf20 conference (now being held online), Holger Levsen will present a talk on Thursday 27th August about “Reproducing Bullseye in practice”, focusing on independently verifying that the binaries distributed from ftp.debian.org were made from their claimed sources.

Tavis Ormandy published a blog post making the provocative claim that “You don’t need reproducible builds”, asserting elsewhere that the many attacks that have been extensively reported in our previous reports are “fantasy threat models”. A number of rebuttals have been made, including one from long-time Reproducible Builds contributor Bernhard Wiedemann.

On our mailing list this month, Debian Developer Graham Inggs posted to our list asking for ideas why the openorienteering-mapper Debian package was failing to build on the Reproducible Builds testing framework. Chris Lamb remarked from the build logs that the package may be missing a build dependency, although Graham then used our own diffoscope tool to show that the resulting package remains unchanged with or without it. Later, Nico Tyni noticed that the build failure may be due to the relationship between the __FILE__ C preprocessor macro and the -ffile-prefix-map GCC flag.

An issue in Zephyr, a small-footprint kernel designed for use on resource-constrained systems, around .a library files not being reproducible was closed after it was noticed that a key part of their toolchain was updated that now calls --enable-deterministic-archives by default.

Reproducible Builds developer kpcyrd commented on a pull request against the libsodium cryptographic library wrapper for Rust, arguing against the testing of CPU features at compile-time. He noted that:

I’ve accidentally shipped broken updates to users in the past because the build system was feature-tested and the final binary assumed the instructions would be present without further runtime checks

David Kleuker also asked a question on our mailing list about using SOURCE_DATE_EPOCH with the install(1) tool from GNU coreutils. When comparing two installed packages he noticed that the filesystem ‘birth times’ differed between them. Chris Lamb replied, realising that this was actually a consequence of using an outdated version of diffoscope and that a fix was in diffoscope version 146 released in May 2020.

Later in July, John Scott posted asking for clarification regarding on the Javascript files on our website to add metadata for LibreJS, the browser extension that blocks non-free Javascript scripts from executing. Chris Lamb investigated the issue and realised that we could drop a number of unused Javascript files [][][] and added unminified versions of Bootstrap and jQuery [].


Development work

Website

On our website this month, Chris Lamb updated the main Reproducible Builds website and documentation to drop a number of unused Javascript files [][][] and added unminified versions of Bootstrap and jQuery []. He also fixed a number of broken URLs [][].

Gonzalo Bulnes Guilpain made a large number of grammatical improvements [][][][][] as well as some misspellings, case and whitespace changes too [][][].

Lastly, Holger Levsen updated the README file [], marked the Alpine Linux continuous integration tests as currently disabled [] and linked the Arch Linux Reproducible Status page from our projects page [].

diffoscope

diffoscope is our in-depth and content-aware diff utility that can not only locate and diagnose reproducibility issues, it provides human-readable diffs of all kinds. In July, Chris Lamb made the following changes to diffoscope, including releasing versions 150, 151, 152, 153 & 154:

  • New features:

    • Add support for flash-optimised F2FS filesystems. (#207)
    • Don’t require zipnote(1) to determine differences in a .zip file as we can use libarchive. []
    • Allow --profile as a synonym for --profile=-, ie. write profiling data to standard output. []
    • Increase the minimum length of the output of strings(1) to eight characters to avoid unnecessary diff noise. []
    • Drop some legacy argument styles: --exclude-directory-metadata and --no-exclude-directory-metadata have been replaced with --exclude-directory-metadata={yes,no}. []
  • Bug fixes:

    • Pass the absolute path when extracting members from SquashFS images as we run the command with working directory in a temporary directory. (#189)
    • Correct adding a comment when we cannot extract a filesystem due to missing libguestfs module. []
    • Don’t crash when listing entries in archives if they don’t have a listed size such as hardlinks in ISO images. (#188)
  • Output improvements:

    • Strip off the file offset prefix from xxd(1) and show bytes in groups of 4. []
    • Don’t emit javap not found in path if it is available in the path but it did not result in an actual difference. []
    • Fix ... not available in path messages when looking for Java decompilers that used the Python class name instead of the command. []
  • Logging improvements:

    • Add a bit more debugging info when launching libguestfs. []
    • Reduce the --debug log noise by truncating the has_some_content messages. []
    • Fix the compare_files log message when the file does not have a literal name. []
  • Codebase improvements:

    • Rewrite and rename exit_if_paths_do_not_exist to not check files multiple times. [][]
    • Add an add_comment helper method; don’t mess with our internal list directly. []
    • Replace some simple usages of str.format with Python ‘f-strings’ [] and make it easier to navigate to the main.py entry point [].
    • In the RData comparator, always explicitly return None in the failure case as we return a non-None value in the success one. []
    • Tidy some imports [][][] and don’t alias a variable when we do not use it. []
    • Clarify the use of a separate NullChanges quasi-file to represent missing data in the Debian package comparator [] and clarify use of a ‘null’ diff in order to remember an exit code. []
  • Other changes:

Jean-Romain Garnier also made the following changes:

  • Allow passing a file with a list of arguments via diffoscope @args.txt. (!62)
  • Improve the output of side-by-side diffs by detecting added lines better. (!64)
  • Remove offsets before instructions in objdump [][] and remove raw instructions from ELF tests [].

Other tools

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. It is used automatically in most Debian package builds. In July, Chris Lamb ensured that we did not install the internal handler documentation generated from Perl POD documents [] and fixed a trivial typo []. Marc Herbert added a --verbose-level warning when the Archive::Cpio Perl module is missing. (!6)

reprotest is our end-user tool to build the same source code twice in widely differing environments and then check the binaries produced by each build for any differences. This month, Vagrant Cascadian made a number of changes to support diffoscope version 153 which had removed the (deprecated) --exclude-directory-metadata and --no-exclude-directory-metadata command-line arguments, and updated the testing configuration to also test under Python version 3.8 [].


Distributions

Debian

In June 2020, Timo Röhling filed a wishlist bug against the debhelper build tool impacting the reproducibility status of hundreds of packages that use the CMake build system. This month however, Niels Thykier uploaded debhelper version 13.2 that passes the -DCMAKE_SKIP_RPATH=ON and -DBUILD_RPATH_USE_ORIGIN=ON arguments to CMake when using the (currently-experimental) Debhelper compatibility level 14.

According to Niels, this change:

… should fix some reproducibility issues, but may cause breakage if packages run binaries directly from the build directory.

34 reviews of Debian packages were added, 14 were updated and 20 were removed this month adding to our knowledge about identified issues. Chris Lamb added and categorised the nondeterministic_order_of_debhelper_snippets_added_by_dh_fortran_mod [] and gem2deb_install_mkmf_log [] toolchain issues.

Lastly, Holger Levsen filed two more wishlist bugs against the debrebuild Debian package rebuilder tool [][].

openSUSE

In openSUSE, Bernhard M. Wiedemann published his monthly Reproducible Builds status update.

Bernhard also published the results of performing 12,235 verification builds of packages from openSUSE Leap version 15.2 and, as a result, created three pull requests against the openSUSE Build Result Compare Script [][][].

Other distributions

In Arch Linux, there was a mass rebuild of old packages in an attempt to make them reproducible. This was performed because building with a previous release of the pacman package manager caused file ordering and size calculation issues when using the btrfs filesystem.

A system was also implemented for Arch Linux packagers to receive notifications if/when their package becomes unreproducible, and packagers now have access to a dashboard where they can see all of their unreproducible packages (more info).

Paul Spooren sent two versions of a patch for the OpenWrt embedded distribution for adding a ‘build system’ revision to the ‘packages’ manifest so that all external feeds can be rebuilt and verified. [][]

Upstream patches

The Reproducible Builds project detects, dissects and attempts to fix as many currently-unreproducible packages as possible. We endeavour to send all of our patches upstream where appropriate. This month, we wrote a large number of these patches, including:

Vagrant Cascadian also reported two issues, the first regarding a regression in u-boot boot loader reproducibility for a particular target [] and a non-deterministic segmentation fault in the guile-ssh test suite []. Lastly, Jelle van der Waa filed a bug against the MeiliSearch search API to report that it embeds the current build date.

Testing framework

We operate a large and many-featured Jenkins-based testing framework that powers tests.reproducible-builds.org.

This month, Holger Levsen made the following changes:

  • Debian-related changes:

    • Tweak the rescheduling of various architecture and suite combinations. [][]
    • Fix links for ‘404’ and ‘not for us’ icons. (#959363)
    • Further work on a rebuilder prototype, for example correctly processing the sbuild exit code. [][]
    • Update the sudo configuration file to allow the node health job to work correctly. []
    • Add php-horde packages back to the pkg-php-pear package set for the bullseye distribution. []
    • Update the version of debrebuild. []
  • System health check development:

    • Add checks for broken SSH [], logrotate [], pbuilder [], NetBSD [], ‘unkillable’ processes [], unresponsive nodes [][][][], proxy connection failures [], too many installed kernels [], etc.
    • Automatically fix some failed systemd units. []
    • Add notes explaining all the issues that hosts are experiencing [] and handle zipped job log files correctly [].
    • Separate nodes which have been automatically marked as down [] and show status icons for jobs with issues [].
  • Misc:

In addition, Mattia Rizzolo updated the init_node script to suggest using sudo instead of explicit logout and logins [][] and the usual build node maintenance was performed by Holger Levsen [][][][][][], Mattia Rizzolo [][] and Vagrant Cascadian [][][][].



If you are interested in contributing to the Reproducible Builds project, please visit our Contribute page on our website. However, you can get in touch with us via:

LongNowThe Deep Sea

As detailed in the exquisite documentary Proteus, the ocean floor was until very recently a repository for the dreams of humankind — the receptacle for our imagination. But when the H.M.S. Challenger expedition surveyed the world’s deep-sea life and brought it back for cataloging by now-legendary illustrator Ernst Haeckel (who coined the term “ecology”), the hidden benthic universe started coming into view. What we found, and what we continue to discover on the ocean floor, is far stranger than the monsters we’d projected.

This spectacular site by Neal Agarwal brings depth into focus. You’ve surfed the Web; now take a few and dive all the way down to Challenger Deep, scrolling past the animals that live at every depth.

Just as The Long Now situates us in a humbling, Copernican experience of temporality, Deep Sea reminds us of just how thin of a layer surface life exists in. Just as with Stewart Brand’s pace layers, the further down you go, the slower everything unfolds: the cold and dark and pressure slow the evolutionary process, dampening the frequency of interactions between creatures, bestowing space and time for truly weird and wondrous and as-yet-uncategorized life.

Dig in the ground and you might pull up the fossils of some strange long-gone organisms. Dive to the bottom of the ocean and you might find them still alive down there, the unmolested records of an ancient world still drifting in slow motion, going about their days-without-days…

For evidence of time-space commutability, settle in for a sublime experience that (like benthic life itself) makes much of very little: just one page, one scroll bar, and one journey to a world beyond.

(Mobile device suggested: this scroll goes in, not just across…)

Learn More:

  • The “Big Here” doesn’t get much bigger than Neal Agarwal‘s The Size of Space, a new interactive visualization that provides a dose of perspective on our place in the universe.

Planet DebianThorsten Alteholz: My Debian Activities in July 2020

FTP master

This month I accepted 434 packages and rejected 54. The overall number of packages that got accepted was 475.

Debian LTS

This was my seventy-third month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 25.25h. During that time I did LTS uploads of:

  • [DLA 2289-1] mupdf security update for five CVEs
  • [DLA 2290-1] e2fsprogs security update for one CVE
  • [DLA 2294-1] salt security update for two CVEs
  • [DLA 2295-1] curl security update for one CVE
  • [DLA 2296-1] luajit security update for one CVE
  • [DLA 2298-1] libapache2-mod-auth-openidc security update for three CVEs

I started to work on python2.7 as well but stumbled over some hurdles in the testsuite, so I did not upload a fixed version yet.

This month was much influenced by the transition from Jessie LTS to Stretch LTS and one or another workflow/script needed some adjustments.

Last but not least I did some days of frontdesk duties.

Debian ELTS

This month was the twenty-fifth ELTS month.

During my allocated time I uploaded:

  • ELA-230-1 for luajit
  • ELA-231-1 for curl

Like in LTS, I also started to work on python2.7 and encountered the same hurdles in the testsuite. So I did not upload a fixed version for ELTS either.

Last but not least I did some days of frontdesk duties.

Other stuff

In this section nothing much happened this month. Thanks a lot to everybody who NMUed a package to fix a bug.

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.15: Some More CRAN Build Issues

Another maintenance RVowpalWabbit package update brought us to version 0.0.15 earlier today. We attempted to fix one compilation error on Solaris, and addressed a few SAN/UBSAN issues with the gcc build.

As noted before, there is a newer package rvw based on the excellent GSoC 2018 and beyond work by Ivan Pavlov (mentored by James and myself) so if you are into Vowpal Wabbit from R go check it out.

CRANberries provides a summary of changes to the previous version. More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianFrançois Marier: Setting the default web browser on Debian and Ubuntu

If you are wondering what your default web browser is set to on a Debian-based system, there are several things to look at:

$ xdg-settings get default-web-browser
brave-browser.desktop

$ xdg-mime query default x-scheme-handler/http
brave-browser.desktop

$ xdg-mime query default x-scheme-handler/https
brave-browser.desktop

$ ls -l /etc/alternatives/x-www-browser
lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/x-www-browser -> /usr/bin/brave-browser-stable*

$ ls -l /etc/alternatives/gnome-www-browser
lrwxrwxrwx 1 root root 29 Jul  5  2019 /etc/alternatives/gnome-www-browser -> /usr/bin/brave-browser-stable*

Debian-specific tools

The contents of /etc/alternatives/ are system-wide defaults and must therefore be set as root:

sudo update-alternatives --config x-www-browser
sudo update-alternatives --config gnome-www-browser

The sensible-browser tool (from the sensible-utils package) will use these to automatically launch the most appropriate web browser depending on the desktop environment.
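For instance, the following should open the given URL in whatever browser the alternatives system currently points to:

sensible-browser https://www.debian.org/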

Standard MIME tools

The others can be changed as a normal user. Using xdg-settings:

xdg-settings set default-web-browser brave-browser-beta.desktop

will also change what the two xdg-mime commands return:

$ xdg-mime query default x-scheme-handler/http
brave-browser-beta.desktop

$ xdg-mime query default x-scheme-handler/https
brave-browser-beta.desktop

since it puts the following in ~/.config/mimeapps.list:

[Default Applications]
text/html=brave-browser-beta.desktop
x-scheme-handler/http=brave-browser-beta.desktop
x-scheme-handler/https=brave-browser-beta.desktop
x-scheme-handler/about=brave-browser-beta.desktop
x-scheme-handler/unknown=brave-browser-beta.desktop

Note that if you delete these entries, then the system-wide defaults, defined in /etc/mailcap, will be used, as provided by the mime-support package.
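If you are curious about what those system-wide defaults are, you can peek at them with something like:

grep -i 'text/html' /etc/mailcap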

Changing the x-scheme-handler/http (or x-scheme-handler/https) association directly using:

xdg-mime default brave-browser-nightly.desktop x-scheme-handler/http

will only change that particular one. I suppose this means you could have one browser for insecure HTTP sites (hopefully with HTTPS Everywhere installed) and one for HTTPS sites though I'm not sure why anybody would want that.
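Purely as an illustration, and assuming both browsers are installed, splitting the two schemes would look like this:

xdg-mime default firefox-esr.desktop x-scheme-handler/http
xdg-mime default brave-browser.desktop x-scheme-handler/https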

Summary

In short, if you want to set your default browser everywhere (using Brave in this example), do the following:

sudo update-alternatives --config x-www-browser
sudo update-alternatives --config gnome-www-browser
xdg-settings set default-web-browser brave-browser.desktop

Planet DebianJelmer Vernooij: Improvements to Merge Proposals by the Janitor

The Debian Janitor is an automated system that commits fixes for (minor) issues in Debian packages that can be fixed by software. It gradually started proposing merges in early December. The first set of changes sent out ran lintian-brush on sid packages maintained in Git. This post is part of a series about the progress of the Janitor.

Since the original post, merge proposals created by the janitor now include the debdiff between a build with and without the changes (showing the impact to the binary packages), in addition to the merge proposal diff (which shows the impact to the source package).

New merge proposals also include a link to the diffoscope diff between a vanilla build and the build with changes. Unfortunately these can be a bit noisy for packages that are not reproducible yet, due to the difference in build environment between the two builds.

This is part of the effort to keep the changes from the janitor high-quality.

The rollout surfaced some bugs in lintian-brush; these have been either fixed or mitigated (e.g. by disabling specified fixers).

For more information about the Janitor's lintian-fixes efforts, see the landing page.

,

Planet DebianAntonio Terceiro: When community service providers fail

I'm starting a new blog, and instead of going into the technical details on how it's made, or on how I migrated my content from the previous ones, I want to focus on why I did it.

It's been a while since I have written a blog post. I wanted to get back into it, but also wanted to finally self-host my blog/homepage because I have been let down before. And sadly, I was not let down by a for-profit, privacy invading corporation, but by a free software organization.

The sad story of wiki.softwarelivre.org

My first blog was hosted in a blog engine written by me, which ran on a TWiki, and later Foswiki, instance previously available at wiki.softwarelivre.org, hosted by ASL.org.

I was the one who introduced the tool to the organization in the first place. I had come from a previous, very fruitful experience on the use of wikis for creation of educational material while in university, which ultimately led me to become a core TWiki, and then Foswiki developer.

In 2004, I had just moved to Porto Alegre, got involved in ASL.org, and there was a demand for a tool like that. Two years later, I left Porto Alegre, and some time after that I left the daily operations of ASL.org, when it became clear that it was not really prepared for remote participation. I was still maintaining the wiki software and the OS for quite some years after, until I wasn't anymore.

In 2016, the server that hosted it went haywire, and there were no backups. A lot of people and free software groups lost their content forever. My blog was the least important content in there. To mention just a few examples, here are some groups that lost their content in there:

  • The Brazilian Foswiki user group hosted a bunch of getting started documentation in Portuguese, organized meetings, and coordinated through there.
  • GNOME Brazil hosted its homepage there until the moment the server broke.
  • The Inkscape Brazil user group had an amazing space there where they shared tutorials, a gallery of user-contributed drawings, and a lot more.

Some of this can still be reached via the Internet Archive Wayback Machine, but that is only useful for recovering content, not for it to be used by the public.

The announced tragedy of softwarelivre.org

My next blog after that was hosted at softwarelivre.org, a Noosfero instance also hosted by ASL.org. When it was introduced in 2010, this Noosfero instance became responsible for the main domain softwarelivre.org name. This was a bold move by ASL.org, and was a demonstration of trust in a local free software project, led by a local free software cooperative (Colivre).

I was a lead developer in the Noosfero project for a long time, and I was also involved in maintaining that server as well.

However, for several years there has been little to no investment in maintaining that service. I already expect that it will probably blow up at some point as the wiki did, or that it may just be shut down on purpose.

On the responsibility of organizations

Today, a large part of what most people consider "the internet" is controlled by a handful of corporations. Most popular services on the internet might look like they are gratis (free as in beer), but running those services is definitely not without costs. So when you use services provided by for-profit companies and are not paying for them with money, you are paying with your privacy and attention.

Society needs independent organizations to provide alternatives.

The market can solve a part of the problem by providing ethical services and charging for them. This is legitimate, and as long as there is transparency about how people's data and communications are handled, there is nothing wrong with it.

But that only solves part of the problem, as there will always be people who can't afford to pay, and people and communities who can afford to pay, but would rather rely on a nonprofit. That's where community-based services, provided by nonprofits, are also important. We should have more of them, not less.

So it worries me to realize that ASL.org left the community in the dark. Losing the wiki wasn't even the first event of its kind, as the listas.softwarelivre.org mailing list server, with years and years of community communications archived in it, broke with no backups in 2012.

I do not intend to blame the ASL.org leadership personally; they are all well-meaning, good people. But as an organization, it failed to recognize the importance of this role of service provider. I can even include myself in it: I was a member of the ASL.org board some 15 years ago; I was involved in the deployment of both the wiki and Noosfero, the former as a volunteer and the latter professionally. Yet I did nothing to plan the maintenance of the infrastructure going forward.

When well meaning organizations fail, people who are not willing to have their data and communications be exploited for profit are left to their own devices. I can afford a virtual private server, and have the technical knowledge to import my old content into a static website generator, so I did it. But what about all the people who can't, or don't?

Of course, these organizations have to solve the challenge of being sustainable, and being able to pay professionals to maintain the services that the community relies on. We should be thankful to these organizations, and their leadership needs to recognize the importance of those services, and actively plan for them to be kept alive.

CryptogramFriday Squid Blogging: New SQUID

There's a new SQUID:

A new device that relies on flowing clouds of ultracold atoms promises potential tests of the intersection between the weirdness of the quantum world and the familiarity of the macroscopic world we experience every day. The atomtronic Superconducting QUantum Interference Device (SQUID) is also potentially useful for ultrasensitive rotation measurements and as a component in quantum computers.

"In a conventional SQUID, the quantum interference in electron currents can be used to make one of the most sensitive magnetic field detectors," said Changhyun Ryu, a physicist with the Material Physics and Applications Quantum group at Los Alamos National Laboratory. "We use neutral atoms rather than charged electrons. Instead of responding to magnetic fields, the atomtronic version of a SQUID is sensitive to mechanical rotation."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Kevin RuddThe Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap

Published in the Guardian on 7 August 2020

Throughout our country’s modern history, the treatment of our Aboriginal and Torres Strait Islander brothers and sisters has been appalling. It has also been inconsistent with the original instructions from the British Admiralty to treat the Indigenous peoples of this land with proper care and respect. From first encounter to the frontier wars, the stolen generations and ongoing institutionalised racism, First Nations people have been handed a raw deal. The gaps between Indigenous and non-Indigenous Australians’ outcomes in areas of education, employment, health, housing and justice are a product of historical, intergenerational maltreatment.

In 2008, I apologised to the stolen generations and Indigenous Australians for the racist laws and policies of successive Australian governments. The apology may have been 200 years late, but it was an important part of the reconciliation process.

But the apology meant nothing if it wasn’t backed by action. For this reason, my government acted on Aboriginal and Torres Strait Islander social justice commissioner Tom Calma’s call to Close the Gap. We worked hard to push this framework through the Council of Australian governments so that all states and territories were on board with the strategy. We also funded it, with $4.6bn committed to achieve each of the six targets we set. While the targets and funding were critical to any improvements in the lives of Indigenous Australians, we suspected the Coalition would scrap our programs once they returned to government. After all, only a few years earlier, John Howard’s Indigenous affairs minister was denying the very existence of the stolen generations. Howard himself had refused to deliver an apology for a decade. And then both he and Peter Dutton decided to boycott the official apology in 2008.

To ensure that the Closing the Gap strategy would not be abandoned, we made it mandatory for the prime minister to stand before the House of Representatives each year and account for the success and failures in reaching the targets that were set.

Had we not adopted the Closing the Gap framework, would we now be on target to have 95% of Indigenous four year-olds enrolled in early childhood education? I think not. Would we have halved the gap for young Indigenous adults to have completed year 12 by 2020? I think not. And would we see progress on closing the gap in child mortality, and literacy and numeracy skills? No, I think not.

Despite these achievements, the most recent Closing the Gap report nonetheless showed Australia was not on track to meet four of the deadlines we’d originally set. A major reason for this is that federal funding for the closing the gap strategy collapsed under Tony Abbott, the great wrecking-ball of Australian politics, whose government cut $534.4m from programs dedicated to improving the lives of Indigenous Australians. And it’s never been restored by Abbott’s successors. It’s all there in the budget papers.

Whatever targets are put in place, governments must commit to physical resourcing of Closing the Gap. They are not going to be delivered by magic.

On Thursday last week, the new national agreement on Closing the Gap was announced. I applaud Pat Turner and other Indigenous leaders who will now sit with the leaders of the commonwealth, states, territories and local government to devise plans to achieve the new targets they have negotiated.

Scott Morrison, however, sought to discredit our government’s targets, rather than coming clean about the half-billion-dollar funding cuts that had made it impossible to achieve these targets under any circumstances. His argument that the original targets were conjured out of thin air by my government is demonstrably untrue. The truth is, Jenny Macklin, the responsible minister, spoke widely with Indigenous leaders to prioritise the areas that needed to be urgently addressed in the original Closing the Gap targets. Furthermore, if Morrison is now truly awakened to the intrinsic value of listening to Indigenous Australians, I look forward to him enshrining an Indigenous voice to parliament in the Constitution, given this is the universal position of all Indigenous groups.

Yet amid the welter of news coverage of the new closing the gap agreement, the central question remains: who will be paying the bill? While shared responsibility to close the gap between all levels of government and Indigenous organisations might sound like good news, this will quickly unravel into a political blame game if the commonwealth continues to shirk its financial duty.

The announcement this week that the commonwealth would allocate $45m over four years is just a very bad joke. This is barely 10% of what the Liberals cut from our national Closing the Gap strategy. And barely 1% of our total $4.5bn national program to meet our targets agreed to with the states and territories in 2009.

The Liberals want you to believe they care about racial injustice. But they don’t believe there are any votes in it. This is well understood by Scotty From Marketing, a former state director of the Liberal party, who lives and breathes polling and focus groups. That’s why they are not even pretending to fund the realisation of the new more “realistic” targets they have so loudly proclaimed.

The post The Guardian: If the Liberal Party truly cared about racial injustice, they would pay their fair share to Close the Gap appeared first on Kevin Rudd.

Planet DebianJonathan Dowland: Vimwiki

At the start of the year I began keeping a daily diary for work as a simple text file. I've used various other approaches for this over the years, including many paper diaries and more complex digital systems. One great advantage of the one-page text file was that it made assembling my weekly status report email very quick, nearly just a series of copies and pastes. But of course there are drawbacks and room for improvement.

vimwiki is a personal wiki plugin for the vim and neovim editors. I've tried to look at it before, years ago, but I found it too invasive, changing key bindings and display settings for any use of vim, and I use vim a lot.

I decided to give it another look. The trigger was actually something completely unrelated: Steve Losh's blog post "Coming Home to vim". I've been using vim for around 17 years but I still learned some new things from that blog post. In particular, I've never bothered to Use The Leader for user-specific shortcuts.

The Leader, to me, feels like a namespace that plugins should not touch: it's like the /usr/local of shortcut keys, a space for the local user only. Vimwiki's default bindings include several incorporating the Leader. Of course since I didn't use the leader, those weren't the ones that bothered me: It turns out I regularly use carriage return and backspace for moving the cursor around in normal mode, and Vimwiki steals both of those. It also truncates the display of (what it thinks are) URIs. It turns out I really prefer to see exactly what's in the file I'm editing. I haven't used vim folds since I first switched to it, despite them being why I switched.

With all the default bindings and the URI-concealing stuff disabled, Vimwiki is now much less invasive and I can explore its features at my own pace:

let g:vimwiki_key_mappings = { 'all_maps': 0, }  " disable all of Vimwiki's default key bindings
let g:vimwiki_conceallevel = 0                   " don't conceal link/markup syntax
let g:vimwiki_url_maxsave = 0                    " show URIs in full instead of truncating them

Followed by explicitly configuring the bindings I want. I'm letting it steal carriage return. And yes, I've used some Leader bindings after all.

" entry points: wiki index, diary index, today's diary note
nnoremap <leader>ww :VimwikiIndex<cr>
nnoremap <leader>wi :VimwikiDiaryIndex<cr>
nnoremap <leader>wd :VimwikiMakeDiaryNote<cr>

" link navigation within a page, plus paging between diary days
nnoremap <CR> :VimwikiFollowLink<cr>
nnoremap <Tab> :VimwikiNextLink<cr>
nnoremap <S-Tab> :VimwikiPrevLink<cr>
nnoremap <C-Down> :VimwikiDiaryNextDay<cr>
nnoremap <C-Up> :VimwikiDiaryPrevDay<cr>

,wd (my leader) now brings me straight to today's diary page, and I can create separate, non-diary pages for particular work items (e.g. a Ticket reference) that will span more than one day, and keep all the relevant stuff in one place.
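
(For completeness: the comma works as a leader here because mapleader is set to "," elsewhere in my vimrc. The exact line isn't part of this post, but it amounts to something like this sketch:)

" must run before any <leader> mappings are defined
let mapleader = ","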

Worse Than FailureError'd: All Natural Errors

"I'm glad the asdf is vegan. I'm really thinking of going for the asasdfsadf, though. With a name like that, you know it's got to be 2 1/2 times as good for you," writes VJ.

 

Phil G. wrote, "Get games twice as fast with Epic's new multidimensional downloads!"

 

"But...it DOES!" Zed writes.

 

John M. wrote, "I appreciate the helpful suggestion, but I think I'll take a pass."

 

“java.lang.IllegalStateException...must be one of those edgy indie games! I just hope it's not actually illegal,” writes Matthijs.

 

"For added flavor, I received this reminder two hours after I'd completed my checkout and purchased that very same item_name," Aaron K. writes.

 

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

Planet DebianReproducible Builds (diffoscope): diffoscope 155 released

The diffoscope maintainers are pleased to announce the release of diffoscope version 155. This version includes the following changes:

[ Chris Lamb ]
* Bump Python requirement from 3.6 to 3.7 - most distributions are either
  shipping 3.5 or 3.7, so supporting 3.6 is not only somewhat unnecessary but
  also more difficult to test locally.
* Improvements to setup.py:
  - Apply the Black source code reformatter.
  - Add some project URLs for PyPI.org.
  - Update "author" and author email.
* Explicitly support Python 3.8.

[ Frazer Clews ]
* Move away from the deprecated logger.warn method to logger.warning.

[ Mattia Rizzolo ]
* Document ("classify") on PyPI that this project works with Python 3.8.

You can find out more by visiting the project homepage.

,

Planet DebianDirk Eddelbuettel: nanotime 0.3.0: Yuge New Features!

A fresh major release of the nanotime package for working with nanosecond timestamps is hitting CRAN mirrors right now.

nanotime relies on the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it has benefitted greatly from work by Leonardo Silvestri who rejigged internals in S4—and now added new types for periods, intervals and durations. This is what is commonly called a big fucking deal!! So a really REALLY big thank you to my coauthor Leonardo for all these contributions.

With all these Yuge changes patiently chiselled in by Leonardo, it took some time since the last release and a few more things piled up. Matt Dowle corrected something we borked for integration with the lovely and irreplaceable data.table. We also switched to the awesome yet minimal tinytest package by Mark van der Loo, and last but not least we added the beginnings of a proper vignette—currently at nine pages but far from complete.

The NEWS snippet adds full details.

Changes in version 0.3.0 (2020-08-06)

  • Use tzstr= instead of tz= in call to RcppCCTZ::parseDouble() (Matt Dowle in #49).

  • Add new comparison operators for nanotime and characters (Dirk in #54 fixing #52).

  • Switch from RUnit to tinytest (Dirk in #55)

  • Substantial functionality extension with new types nanoduration, nanoival and nanoperiod (Leonardo in #58, #60, #62, #63, #65, #67, #70 fixing #47, #51, #57, #61, #64 with assistance from Dirk).

  • A new (yet still draft-ish) vignette was added describing the four core types (Leonardo and Dirk in #71).

  • A required compilation flag for Windows was added (Leonardo in #72).

  • RcppCCTZ functions are called in new 'non-throwing' variants so as not to trigger exceptions (Leonardo in #73).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianChris Lamb: The Bringers of Beethoven

This is a curiously poignant work to me that I doubt I would ever be able to communicate. I found it about fifteen years ago, along with a friend who I am quite regrettably no longer in regular contact with, so there was some complicated nostalgia entangled with rediscovering it today.

What might I say about it instead? One tell-tale sign of 'good' art is that you can find something new in it, or yourself, each time. In this sense, despite The Bringers of Beethoven being more than a little ridiculous, it is somehow 'good' music to me. For example, it only really dawned on me now that the whole poem is an allegory for a GDR-like totalitarianism.

But I also realised that it is not an accident that it is Beethoven himself (quite literally the soundtrack for Enlightenment humanism) that is being weaponised here, rather than some fourth-rate composer of military marches or one with a problematic past. That is to say, not only is the poem arguing that something universally recognised as an unalloyed good can be subverted for propagandistic ends, but that is precisely the point being made by the regime. An inverted Clockwork Orange, if you like.

Yet when I listen to it again I can't help but laugh. I think of the 18th-century poet Alexander Pope, who first used the word bathos to refer to those abrupt and often absurd transitions from the elevated to the ordinary, contrasting it with the concept of pathos, the sincere feeling of sadness and tragedy. I can't think of two better words.

Planet DebianJoey Hess: Mr Process's wild ride

When a unix process is running in a directory, and that directory gets renamed, the process is taken on a ride to a new location in the filesystem. Suddenly, any "../" paths it might be using point to new, and unexpected locations.

This can be a source of interesting behavior, and also of security holes.

Suppose root is poking around in ~user/foo/bar/ and decides to vim ../../etc/conffile

If the user notices this process is running, they can mv ~/foo/bar /tmp and when vim saves the file, it will write to /tmp/bar/../../etc/conffile AKA /etc/conffile.

(Vim does warn that the file has changed while it was being edited. Other editors may not. Or root may be feeling especially BoFH and decide to overwrite the user's changes to their file. Or the rename could perhaps be carefully timed to avoid vim's overwrite protection.)
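
The ride is easy to watch from a pair of shells. A minimal sketch, with throwaway paths rather than anything from a real incident:

# shell A: the process that will be taken for a ride
mkdir -p /tmp/demo/foo/bar
cd /tmp/demo/foo/bar

# shell B: rename a parent directory out from under shell A
mv /tmp/demo/foo/bar /tmp/elsewhere

# back in shell A, without any further cd:
pwd -P         # prints /tmp/elsewhere; the kernel tracks the directory itself, not its old name
ls ../../etc   # ../.. now resolves to /, so this lists the real /etc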

Or, suppose root, in the same place, decides to archive ../../etc with tar, and then delete it:

tar cf etc.tar ../../etc; rm -rf ../../etc

Now the user has some time to take root's shell on a ride, before the rm starts ... and make it delete all of /etc!

Anyone know if this class of security hole has a name?

Krebs on SecurityHacked Data Broker Accounts Fueled Phony COVID Loans, Unemployment Claims

A group of thieves thought to be responsible for collecting millions in fraudulent small business loans and unemployment insurance benefits from COVID-19 economic relief efforts gathered personal data on people and businesses they were impersonating by leveraging several compromised accounts at a little-known U.S. consumer data broker, KrebsOnSecurity has learned.

In June, KrebsOnSecurity was contacted by a cybersecurity researcher who discovered that a group of scammers was sharing highly detailed personal and financial records on Americans via a free web-based email service that allows anyone who knows an account’s username to view all email sent to that account — without the need of a password.

The source, who asked not to be identified in this story, said he’s been monitoring the group’s communications for several weeks and sharing the information with state and federal authorities in a bid to disrupt their fraudulent activity.

The source said the group appears to consist of several hundred individuals who collectively have stolen tens of millions of dollars from U.S. state and federal treasuries via phony loan applications with the U.S. Small Business Administration (SBA) and through fraudulent unemployment insurance claims made against several states.

KrebsOnSecurity reviewed dozens of emails the fraud group exchanged, and noticed that a great many consumer records they shared carried a notation indicating they were cut and pasted from the output of queries made at Interactive Data LLC, a Florida-based data analytics company.

Interactive Data, also known as IDIdata.com, markets access to a “massive data repository” on U.S. consumers to a range of clients, including law enforcement officials, debt recovery professionals, and anti-fraud and compliance personnel at a variety of organizations.

The consumer dossiers obtained from IDI and shared by the fraudsters include a staggering amount of sensitive data, including:

-full Social Security number and date of birth;
-current and all known previous physical addresses;
-all known current and past mobile and home phone numbers;
-the names of any relatives and known associates;
-all known associated email addresses;
-IP addresses and dates tied to the consumer’s online activities;
-vehicle registration, and property ownership information;
-available lines of credit and amounts, and dates they were opened;
-bankruptcies, liens, judgments, foreclosures and business affiliations.

Reached via phone, IDI Holdings CEO Derek Dubner acknowledged that a review of the consumer records sampled from the fraud group’s shared communications indicates “a handful” of authorized IDI customer accounts had been compromised.

“We identified a handful of legitimate businesses who are customers that may have experienced a breach,” Dubner said.

Dubner said all customers are required to use multi-factor authentication, and that everyone applying for access to its services undergoes a rigorous vetting process.

“We absolutely credential businesses and have several ways do that and exceed the gold standard, which is following some of the credit bureau guidelines,” he said. “We validate the identity of those applying [for access], check with the applicant’s state licensor and individual licenses.”

Citing an ongoing law enforcement investigation into the matter, Dubner declined to say if the company knew for how long the handful of customer accounts were compromised, or how many consumer records were looked up via those stolen accounts.

“We are communicating with law enforcement about it,” he said. “There isn’t much more I can share because we don’t want to impede the investigation.”

The source told KrebsOnSecurity he’s identified more than 2,000 people whose SSNs, DoBs and other data were used by the fraud gang to file for unemployment insurance benefits and SBA loans, and that a single payday can land the thieves $20,000 or more. In addition, he said, it seems clear that the fraudsters are recycling stolen identities to file phony unemployment insurance claims in multiple states.

ANALYSIS

Hacked or ill-gotten accounts at consumer data brokers have fueled ID theft and identity theft services of various sorts for years. In 2013, KrebsOnSecurity broke the news that the U.S. Secret Service had arrested a 24-year-old man named Hieu Minh Ngo for running an identity theft service out of his home in Vietnam.

Ngo’s service, variously named superget[.]info and findget[.]me, gave customers access to personal and financial data on more than 200 million Americans. He gained that access by posing as a private investigator to a data broker subsidiary acquired by Experian, one of the three major credit bureaus in the United States.

Ngo’s ID theft service superget.info

Experian was hauled before Congress to account for the lapse, and assured lawmakers there was no evidence that consumers had been harmed by Ngo’s access. But as follow-up reporting showed, Ngo’s service was frequented by ID thieves who specialized in filing fraudulent tax refund requests with the Internal Revenue Service, and was relied upon heavily by an identity theft ring operating in the New York-New Jersey region.

Also in 2013, KrebsOnSecurity broke the news that ssndob[.]ms, then a major identity theft service in the cybercrime underground, had infiltrated computers at some of America’s large consumer and business data aggregators, including LexisNexis Inc., Dun & Bradstreet, and Kroll Background America Inc.

The now defunct SSNDOB identity theft service.

In 2006, The Washington Post reported that a group of five men used stolen or illegally created accounts at LexisNexis subsidiaries to look up SSNs and other personal information on more than 310,000 individuals. And in 2004, it emerged that identity thieves masquerading as customers of data broker Choicepoint had stolen the personal and financial records of more than 145,000 Americans.

Those compromises were noteworthy because the consumer information warehoused by these data brokers can be used to find the answers to so-called knowledge-based authentication (KBA) questions used by companies seeking to validate the financial history of people applying for new lines of credit.

In that sense, thieves involved in ID theft may be better off targeting data brokers like IDI and their customers than the major credit bureaus, said Nicholas Weaver, a researcher at the International Computer Science Institute and lecturer at UC Berkeley.

“This means you have access not only to the consumer’s SSN and other static information, but everything you need for knowledge-based authentication because these are the types of companies that are providing KBA data.”

The fraud group communications reviewed by this author suggest they are cashing out primarily through financial instruments like prepaid cards and a small number of online-only banks that allow consumers to establish accounts and move money just by providing a name and associated date of birth and SSN.

While most of these instruments place daily or monthly limits on the amount of money users can deposit into and withdraw from the accounts, some of the more popular instruments for ID thieves appear to be those that allow spending, sending or withdrawal of between $5,000 to $7,000 per transaction, with high limits on the overall number or dollar value of transactions allowed in a given time period.

KrebsOnSecurity is investigating the extent to which a small number of these financial instruments may be massively over-represented in the incidence of unemployment insurance benefit fraud at the state level, and in SBA loan fraud at the federal level. Anyone in the financial sector or state agencies with information about these apparent trends may confidentially contact this author at krebsonsecurity @ gmail dot com, or via the encrypted message service Wickr at “krebswickr“.

The looting of state unemployment insurance programs by identity thieves has been well documented of late, but far less public attention has centered on fraud targeting Economic Injury Disaster Loan (EIDL) and advance grant programs run by the U.S. Small Business Administration in response to the COVID-19 crisis.

Late last month, the SBA Office of Inspector General (OIG) released a scathing report (PDF) saying it has been inundated with complaints from financial institutions reporting suspected fraudulent EIDL transactions, and that it has so far identified $250 million in loans given to “potentially ineligible recipients.” The OIG said many of the complaints were about credit inquiries for individuals who had never applied for an economic injury loan or grant.

The figures released by the SBA OIG suggest the financial impact of the fraud may be severely under-reported at the moment. For example, the OIG said nearly 3,800 of the 5,000 complaints it received came from just six financial institutions (out of several thousand across the United States). One credit union reportedly told the U.S. Justice Department that 59 out of 60 SBA deposits it received appeared to be fraudulent.

CryptogramThe NSA on the Risks of Exposing Location Data

The NSA has issued an advisory on the risks of location data.

Mitigations reduce, but do not eliminate, location tracking risks in mobile devices. Most users rely on features disabled by such mitigations, making such safeguards impractical. Users should be aware of these risks and take action based on their specific situation and risk tolerance. When location exposure could be detrimental to a mission, users should prioritize mission risk and apply location tracking mitigations to the greatest extent possible. While the guidance in this document may be useful to a wide range of users, it is intended primarily for NSS/DoD system users.

The document provides a list of mitigation strategies, including turning things off:

If it is critical that location is not revealed for a particular mission, consider the following recommendations:

  • Determine a non-sensitive location where devices with wireless capabilities can be secured prior to the start of any activities. Ensure that the mission site cannot be predicted from this location.
  • Leave all devices with any wireless capabilities (including personal devices) at this non-sensitive location. Turning off the device may not be sufficient if a device has been compromised.
  • For mission transportation, use vehicles without built-in wireless communication capabilities, or turn off the capabilities, if possible.

Of course, turning off your wireless devices is itself a signal that something is going on. It's hard to be clandestine in our always connected world.

News articles.

Planet DebianChristian Kastner: My new favorite utility: autojump

Like any developer, I have amassed an impressive collection of directory trees both broad and deep. Navigating these trees became increasingly cumbersome, and setting CDPATH, using auto-completion, and working with the readline history search alleviated this only somewhat.

Enter autojump, from the package of the same name.
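
Getting it going is just a matter of installing the package and sourcing its shell hook. On Debian that is roughly the following (a sketch; check the package's README if the path differs on your system):

sudo apt install autojump
echo '. /usr/share/autojump/autojump.sh' >> ~/.bashrc
# takes effect in newly started shells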

Whatever magic it uses is unbelievably effective. I estimate that in at least 95% of my cases, typing j <name-fragment> changes to the directory I was actually thinking of.

Say I'm working on package scikit-learn. My clone of the Salsa repo is in ~/code/pkg-scikit-learn/scikit-learn. Changing to that directory is trivial, I only need to specify a name fragment:

$ j sci
/home/christian/code/pkg-scikit-learn/scikit-learn
christian@workstation:~/code/pkg-scikit-learn/scikit-learn

But what if I want to work on scikit-learn upstream, to prepare a patch, for example? That repo has been cloned to ~/code/github/scikit-learn. No problem at all, just add another name fragment:

$ j gi sci
/home/christian/code/github/scikit-learn
christian@workstation:~/code/github/scikit-learn

The magic, however, is most evident with directory trees I rarely enter. As in: I have a good idea of the directory name I wish to change to, but I don't really recall its exact name, nor where (in the tree) it is located. I used to rely on autocompletion to somehow get there, which can involve hitting the [TAB] key far too many times, and falling back to find in the worst case, but now autojump always seems to get me there on the first try.

I can't believe that this has been available in Debian for 10 years and I only discovered it now.

LongNowChildhood as a solution to explore–exploit tensions

Big questions abound regarding the protracted childhood of Homo sapiens, but there’s a growing argument that it’s an adaptation to the increased complexity of our social environment and the need to learn longer and harder in order to handle the ever-raising bar of adulthood. (Just look to the explosion of requisite schooling over the last century for a concrete example of how childhood grows along with social complexity.)

It’s a tradeoff between genetic inheritance and enculturation — see also Kevin Kelly’s remarks in The Inevitable that we have entered an age of lifelong learning and the 21st Century requires all of us to be permanent “n00bs”, due to the pace of change and the scale at which we have to grapple with evolutionarily relevant sociocultural information.

New research from Past Long Now Seminar Speaker Alison Gopnik:

“I argue that the evolution of our life history, with its distinctively long, protected human childhood, allows an early period of broad hypothesis search and exploration, before the demands of goal-directed exploitation set in. This cognitive profile is also found in other animals and is associated with early behaviours such as neophilia and play. I relate this developmental pattern to computational ideas about explore–exploit trade-offs, search and sampling, and to neuroscience findings. I also present several lines of empirical evidence suggesting that young human learners are highly exploratory, both in terms of their search for external information and their search through hypothesis spaces. In fact, they are sometimes more exploratory than older learners and adults.”

Alison Gopnik, “Childhood as a solution to explore-exploit tensions” in Philosophical Transactions of the Royal Society B.

Planet DebianSam Hartman: Good Job Debian: Compatibility back to 1999

So, I needed a container of Debian Slink (2.1), released back in 1999. I expected this was going to be a long and involved process. Things didn't look good from the start:
root@mount-peerless:/usr/lib/python3/dist-packages/sqlalchemy# debootstrap  slink /build/slink2 
http://archive.debian.org/debian                                                               
E: No such script: /usr/share/debootstrap/scripts/slink

Hmm, I thought I remembered slink support for debootstrap--not that slink used debootstrap by default--back when I was looking through the debootstrap sources years ago. Sure enough looking through the changelogs, slink support was dropped back in 2005.
Okay, well, this isn't going to work either, but I guess I could try debootstrapping sarge and from there go back to slink.
Except it worked fine.
Go us!
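
(For the record, the step that unexpectedly just worked was presumably something along these lines; the target directory here is hypothetical, mirroring the failed slink attempt above.)

debootstrap sarge /build/sarge http://archive.debian.org/debian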

Worse Than FailureCodeSOD: A Slow Moving Stream

We’ve talked about Java’s streams in the past. It’s hardly a “new” feature at this point, but its blend of “being really useful” and “based on functional programming techniques” and “different than other APIs” means that we still have developers struggling to figure out how to use it.

Jeff H has a co-worker, Clarence, who is very “anti-stream”. “It creates too many copies of our objects, so it’s terrible for memory, and it’s so much slower. Don’t use streams unless you absolutely have to!” So in many a code review, Jeff submits some very simple, easy to read, and fast-performing bit of stream code, and Clarence objects. “It’s slow. It wastes memory.”

Sometimes, another team member goes to bat for Jeff’s code. Sometimes they don’t. But then, in a recent review, Clarence submitted his own bit of stream code.

schedules.stream().forEach(schedule -> visitors.stream().forEach(scheduleVisitor -> {
    scheduleVisitor.visitSchedule(schedule);

    if (schedule.getDays() != null && !schedule.getDays().isEmpty()) {
        schedule.getDays().stream().forEach(day -> visitors.stream().forEach(dayVisitor -> {
            dayVisitor.visitDay(schedule, day);

            if (day.getSlots() != null && !day.getSlots().isEmpty()) {
                day.getSlots().stream().forEach(slot -> visitors.stream().forEach(slotVisitor -> {
                    slotVisitor.visitSlot(schedule, day, slot);
                }));
            }
        }));
    }
}));

That is six nested “for each” operations, and they’re structured so that we iterate across the same list multiple times. For each schedule, we look at each visitor on that schedule, then we look at each day for that schedule, and then we look at every visitor again, then we look at each day’s slots, and then we look at each visitor again.
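
For contrast, here is a minimal sketch of the same traversal with each visit method invoked once per visitor per element, which is presumably what was intended, written with plain loops. The Schedule/Day/Slot and visitor types are assumptions, since they aren't shown in the submission:

for (var schedule : schedules) {
    for (var visitor : visitors) {
        visitor.visitSchedule(schedule);
    }
    // a null list is skipped; an empty one simply doesn't iterate
    if (schedule.getDays() == null) {
        continue;
    }
    for (var day : schedule.getDays()) {
        for (var visitor : visitors) {
            visitor.visitDay(schedule, day);
        }
        if (day.getSlots() == null) {
            continue;
        }
        for (var slot : day.getSlots()) {
            for (var visitor : visitors) {
                visitor.visitSlot(schedule, day, slot);
            }
        }
    }
}

Whether that reads better as loops or as one flat stream pipeline is a matter of taste; the point is that the repeated visitors.stream() passes, and the extra visits they trigger, disappear either way.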

Well, if nothing else, we understand why Clarence thinks the Java Streams API is slow. This code did not pass code review.

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Planet DebianHolger Levsen: 20200805-debconf7

DebConf7

This tshirt is 13 years old and from DebConf7.

DebConf7 was my 5th DebConf and took place in Edinburgh, Scotland.

And finally I could tell people I was a DD :-D Though as you can guess, that's yet another story to be told. So anyway, Edinburgh.

I don't recall exactly whether the video team had to record 6 or 7 talk rooms on 4 floors, but this was probably the most intense setup we ran. And we ran a lot, from floor to floor, and room to room.

DebConf7 was also special because it had a very special night venue, which was in an ex-church in a rather normal building, operated as a sort of community center or some such, while the old church interior was still very much visible, as in everything new was built around the old stuff.

And while the night venue was cool, it also meant we (video team) had no access to our machines overnight (or for much of the evening), because we had to leave the university overnight and the networking situation didn't allow remote access with the bandwidth needed to do anything video.

The night venue had some very simple house rules, like don't rearrange stuff, don't break stuff, don't fix stuff, and just a few more, and of course we broke them in the best possible way: Toresbe, with the help of people I don't remember, fixed the organ, which had been broken for decades. And so the house sounded in some very nice new old tune and I think everybody was happy we broke that rule.

I believe the city is really nice from the little I've seen of it. A very nice old town, a big castle on the hill :) I'm not sure whether I missed the day trip to Glasgow to fix video things or to rest or both...

Another thing I missed was getting a kilt, for which Phil Hands made a terrific design (update: the design is called a tartan and was made by Phil indeed!), which spelled Debian in morse code. That was pretty cool and the kilts are really nice on DebConf group pictures since then. And if you've been wearing this kilt regularly for the last 13 years it was probably also a sensible investment. ;)

It seems I don't have that many more memories of this DebConf; British power plugs and how to hack them come to my mind, and some other stuff here and there, but I remember less than from previous years. I'm blaming this on the intense video setup and also on the sheer number of people, which was the highest until then and for some years, I believe maybe even until Heidelberg 8 years later. IIRC there were around 470 people there, and over my first five years of DebConf I was incredibly lucky to make many friends in Debian, so I probably just hung out and had good times.

Planet DebianHolger Levsen: 20200805-debconf7.jpg

Chaotic IdealismTo a Newly Diagnosed Autistic Teenager

I was recently asked by a 14-year-old who had just been diagnosed autistic what advice I had to give. This is what I said.

The thing that helped me most was understanding myself and talking to other autistic people, so you’re already well on that road.

The more you learn about yourself, the more you learn about how you *learn*… meaning that you can become better at teaching yourself to communicate with neurotypicals.

Remember though: The goal is to communicate. Blending in is secondary, or even irrelevant, depending on your priorities. If you can get your ideas from your brain to theirs, and understand what they’re saying, and live in the world peacefully without hurting anyone and without putting yourself in danger, then it does not matter how different you are or how differently you do things.

Autistic is not better and not worse than neurotypical; it’s simply different. Having a disability is a normal part of human life; it’s nothing to be proud of and nothing to be ashamed of. Disability doesn’t stop you from being talented or from becoming unusually skilled, especially with practice. Being different means that you see things from a different perspective, which means that as you grow and gain experience you will be able to provide solutions to problems that other people simply don’t see, to contribute skills that most people don’t have.

Learn to advocate for yourself. If you have an IEP, go to the meetings and ask questions about what help is available and what problems you have. When you are mistreated, go to someone you trust and ask for help; and if you can’t get help, protect yourself as best you can. Learn to stand up for yourself, to keep other people from taking advantage of you. Also learn to help other people stay safe.

Your best social connections now will be anyone who treats you with kindness. You can tell whether someone is kind by observing how they treat those they have power over when nobody, or nobody with much influence, is watching. You want people who are honest, or who only lie when they are trying to protect others’ feelings. Talk to these people; explain that you are not very good with social things and that you sometimes embarrass yourself or accidentally insult people, and that you would like them to tell you when you are doing something clumsy, offensive, confusing, or cringeworthy. Explain to these people that you would prefer to know about mistakes you are making, because if you are not told you will never be able to correct those mistakes.

Learn to apologize, and learn that an apology simply means, “I recognize I have made a mistake and shall work to correct it in the future.” An apology is not a sign of failure or an admission of inferiority. Sometimes an apology can even mean, “I have made a mistake that I could not control; if I had been able to control it, I would not have made the mistake.” Therefore, it is okay to apologize if you have simply made an honest mistake. The best apology includes an explanation of how you will fix your mistake or what you will change to keep it from happening in the future.

Learn not to apologize when you have done nothing wrong. Do not apologize for being different, for standing up for yourself or for other people, or for having an opinion others disagree with. You do not need to justify your existence. You should never give in to the pressure to say, “I am autistic, but that’s okay because I have this skill and that talent.” The correct statement is, “I am autistic, and that is okay.” You don’t need to do anything to be valuable. You just need to be human.

If someone uses you to fulfill their own desires but doesn’t give things back in return; if someone doesn’t care about your needs when you tell them; if someone can tell you are hurt and doesn’t care; then that is a person you cannot trust.

In general, you can expect your teen years to be harder than your young-adult years. As you grow and gain experience, you’ll gain skills and you’ll gather a library of techniques to help you navigate the social and sensory world, to help you deal with your emotions and with your relationships. You will never be perfect–but then, nobody is. What you’re aiming for is useful, functional skills, in whatever form they take, whether they are the typical way of doing things or not. As the saying goes: If it looks stupid but it works, it isn’t stupid.

Keep trying. Take good care of yourself. When you are tired, rest. Learn to push yourself to your limits, but not beyond; and learn where those limits are. When you are tired from something that would not tire a neurotypical, be unashamed about your need for down time. Learn to say “no” when you don’t want something, and learn to say “yes” when you want something but you are a little bit intimidated by it because it is new or complicated or unpredictable. Learn to accept failure and learn from it. Help others. Make your world better. Make your own way. Grow. Live.

You’ll be okay.

Krebs on SecurityPorn Clip Disrupts Virtual Court Hearing for Alleged Twitter Hacker

Perhaps fittingly, a Web-streamed court hearing for the 17-year-old alleged mastermind of the July 15 mass hack against Twitter was cut short this morning after mischief makers injected a pornographic video clip into the proceeding.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

The incident occurred at a bond hearing held via the videoconferencing service Zoom by the Hillsborough County, Fla. criminal court in the case of Graham Clark. The 17-year-old from Tampa was arrested earlier this month on suspicion of social engineering his way into Twitter’s internal computer systems and tweeting out a bitcoin scam through the accounts of high-profile Twitter users.

Notice of the hearing was available via public records filed with the Florida state attorney’s office. The notice specified the Zoom meeting time and ID number, essentially allowing anyone to participate in the proceeding.

Even before the hearing officially began it was clear that the event would likely be “zoom bombed.” That’s because while participants were muted by default, they were free to unmute their microphones and transmit their own video streams to the channel.

Sure enough, less than a minute had passed before one attendee not party to the case interrupted a discussion between Clark’s attorney and the judge by streaming a live video of himself adjusting his face mask. Just a few minutes later, someone began interjecting loud music.

It became clear that presiding Judge Christopher C. Nash was personally in charge of administering the video hearing when, after roughly 15 seconds worth of random chatter interrupted the prosecution’s response, Nash told participants he was removing the troublemakers as quickly as he could.

Judge Nash, visibly annoyed immediately after one of the many disruptions to today’s hearing.

What transpired a minute later was almost inevitable given the permissive settings of this particular Zoom conference call: Someone streamed a graphic video clip from Pornhub for approximately 15 seconds before Judge Nash abruptly terminated the broadcast.

With the ongoing pestilence that is the COVID-19 pandemic, the nation’s state and federal courts have largely been forced to conduct proceedings remotely via videoconferencing services. While Zoom and others do offer settings that can prevent participants from injecting their own audio and video into the stream unless invited to do so, those settings evidently were not enabled in today’s meeting.

At issue before the court today was a defense motion to modify the amount of the defendant’s bond, which has been set at $750,000. The prosecution had argued that Clark should be required to show that any funds used toward securing that bond were gained lawfully, and were not merely the proceeds from his alleged participation in the Twitter bitcoin scam or some other form of cybercrime.

Florida State Attorney Andrew Warren’s reaction as a Pornhub clip began streaming to everyone in today’s Zoom proceeding.

Mr. Clark’s attorneys disagreed, and spent most of the uninterrupted time in today’s hearing explaining why their client could safely be released under a much smaller bond and close supervision restrictions.

On Sunday, The New York Times published an in-depth look into Clark’s wayward path from a small-time cheater and hustler in online games like Minecraft to big-boy schemes involving SIM swapping, a form of fraud that involves social engineering employees at mobile phone companies to gain control over a target’s phone number and any financial, email and social media accounts associated with that number.

According to The Times, Clark was suspected of being involved in a 2019 SIM swapping incident which led to the theft of 164 bitcoins from Gregg Bennett, a tech investor in the Seattle area. That theft would have been worth around $856,000 at the time; these days 164 bitcoins is worth approximately $1.8 million.

The Times said that soon after the theft, Bennett received an extortion note signed by Scrim, one of the hacker handles alleged to have been used by Clark. From that story:

“We just want the remainder of the funds in the Bittrex,” Scrim wrote, referring to the Bitcoin exchange from which the coins had been taken. “We are always one step ahead and this is your easiest option.”

In April, the Secret Service seized 100 Bitcoins from Mr. Clark, according to government forfeiture documents. A few weeks later, Mr. Bennett received a letter from the Secret Service saying they had recovered 100 of his Bitcoins, citing the same code that was assigned to the coins seized from Mr. Clark.

Florida prosecutor Darrell Dirks was in the middle of explaining to the judge that investigators are still in the process of discovering the extent of Clark’s alleged illegal hacking activities since the Secret Service returned the 100 bitcoin when the porn clip was injected into the Zoom conference.

Ultimately, Judge Nash decided to keep the bond amount as is, but to remove the condition that Clark prove the source of the funds.

Clark has been charged with 30 felony counts and is being tried as an adult. Federal prosecutors also have charged two other young men suspected of playing roles in the Twitter hack, including a 22-year-old from Orlando, Fla. and a 19-year-old from the United Kingdom.

Kevin RuddABC RN: South China Sea

E&OE TRANSCRIPT
RADIO INTERVIEW
RN BREAKFAST
ABC RADIO NATIONAL
5 AUGUST 2020

Fran Kelly
Prime Minister Scott Morrison today will warn of the unprecedented militarization of the Indo-Pacific which he says has become the epicentre of strategic competition between the US and China. In his virtual address to the Aspen Security Forum in the United States, Scott Morrison will also condemn the rising frequency of cyber attacks and the new threats democratic nations are facing from foreign interference. This speech coincides with a grim warning from former prime minister Kevin Rudd that the threat of armed conflict in the region is especially high in the run-up to the US presidential election in November. Kevin Rudd, welcome back to breakfast.

Kevin Rudd
Thanks for having me on the program, Fran.

Fran Kelly
Kevin Rudd, you’ve written in the Foreign Affairs journal that the US-China tensions could lead to, quote, a hot war not just a cold one. That conflict, you say, is no longer unthinkable. It’s a fairly alarming assessment. Just how likely do you rate the confrontation in the Indo-Pacific other coming three or four months?

Kevin Rudd
Well, Fran, I think it’s important to narrow our geographical scope here. Prime Minister Morrison is talking about a much wider theatre. My comments in Foreign Affairs are about crisis scenarios emerging over what will happen or could happen in Hong Kong over the next three months leading up to the presidential election. And I think things in Hong Kong are more likely to get worse than better. What’s happening in relation to the Taiwan Straits where things have become much sharper than before in terms of actions on both sides, that’s the Chinese and the United States. But the thrust of my article is that the real problem area in terms of crisis management, crisis escalation, etc, lies in the South China Sea. And what I simply try to pull together is the fact that we now have a much greater concentration of military hardware, ships at sea, aircraft flying reconnaissance missions, together with changes in deployments by the Chinese fighters and bombers now into the actual Paracel Islands themselves in the north part of the South China Sea. Together with the changes in the declaratory postures of both sides. So what I do in this article this pull these threads together and say to both sides: be careful what you wish for; you’re playing with fire.

Fran Kelly
And when you talk about a heightened risk of armed conflict, or you’re talking about a being confined to a flare-up in one very specific location like the South China Sea?

Kevin Rudd
What I try to do is to go to where could a crisis actually emerge?

Fran Kelly
Yeah.

Kevin Rudd
If you go across the whole spectrum of conflicts, at the moment between China and the United States on a whole range of policies, all roads tend to lead back to the South China Sea because it’s effectively a ruleless environment at the moment. We have contested views of both territorial and maritime sovereignty. And that’s where my concern, Fran, is that we have a crisis, which comes about through a collision at sea, a collision in the air, and given the nationalist politics now in Washington because of the presidential election, but also the nationalist politics in China, as its own leadership go to their annual August retreat, Beidaihe, that it’s a very bad admixture which could result in a crisis for allies like Australia, which have treaty obligations with the United States through the ANZUS treaty. This is a deeply concerning set of developments because if the crisis erupts, what then does the Australian government do?

Fran Kelly
Well, what does it do in your view from your viewpoint as a former Prime Minister. You know Australia tries to walk a very fine line by Washington and Beijing. That’s proved very difficult lately, but we are in the ANZUS alliance. Would we need to get involved militarily?

Kevin Rudd
Let me put it in these terms: Australia, like other countries dealing with China’s greater regional and international assertiveness, has had to adjust its strategy. We can have a separate debate, Fran, about what that strategy should be across the board in terms of the economy, technology, Huawei in the rest. But what I’ve sought to do in this article is go specifically to the possibility of a national security crisis. Now, if I was Prime Minister Morrison, what I’d be doing in the current circumstances is taking out the fire hose to those in Washington and to the extent that you can to those in Beijing, and simply make it as plain as possible through private diplomacy and public statements, the time has come for de-escalation because the obligations under the treaty, Fran, to go to your direct question, are relatively clear. What it says in one of the operational clauses of the ANZUS treaty of 1951 is that if the armed forces of either of the contracting parties, namely Australia or the United States, come under attack in the Pacific area, then the allies shall meet and consult to meet the common danger. That, therefore, puts us as an Australian ally directly into this frame. Hence my call for people to be very careful about the months which lie ahead.

Fran Kelly
In terms of ‘the time has come for de-escalation’, that message, do we see signs that that was the message very clearly being given by the Foreign Minister and the Defense Minister when they’re in Washington last week? Marise Payne didn’t buy into Secretary of State Mike Pompeo’s very elevated rhetoric aimed at China, kept a distance there. And is it your view that this danger period will be over come the first Tuesday in November, the presidential election?

Kevin Rudd
I think when we looking at the danger of genuine armed conflict between China and United States, that is now with us for a long period of time, whoever wins in November, including under the Democrats. What I’m more concerned about, however, is given that President Trump is in desperate domestic political circumstances at present in Washington, and that there will be a temptation to continue to elevate. And also domestic politics are playing their role in China itself where Xi Jinping is under real pressure because of the state of the Chinese economy because of COVID and a range of other factors. On Australia, you asked directly about what Marise Payne was doing in Washington. I think finally the penny dropped with Prime Minister Morrison and Foreign Minister Payne that the US presidential election campaign strategy was beginning to directly influence the content of rational national security policy. I think wisely they decided to step back slightly from that.

Fran Kelly
Former Prime Minister Kevin Rudd is our guest. Kevin Rudd, this morning Scott Morrison, the Prime Minister, is addressing the US Aspen Security Forum. He’s also talking about rising tensions in the Indo-Pacific. He’s pledged that Australia won’t be a bystander, quote, who will leave it to others in the region. He wants other like-minded democracies of the region to step up and act in some kind of alliance. Is that the best way to counter Beijing’s rising aggression and assertiveness?

Kevin Rudd
Well, Prime Minister Morrison seems to like making speeches but I’ve yet to see evidence of a coherent Australian national China strategy in terms of what the government is operationally doing as opposed to what it continues to talk about. So my concern on his specific proposal is: what are you doing, Mr Morrison? The talk of an alliance I think is misplaced. The talk of, shall we say, a common policy approach to the challenges which China now represents, that is an entirely appropriate course of action and something which we sought to do during our own period in government, but it’s a piece of advice which Morrison didn’t bother listening to himself when he unilaterally went out and called for an independent global investigation into the origins of COVID-19. Far wiser, if Morrison had taken his own counsel and brought together a coalition of the policy willing first and said: do we have a group of 10 robust states standing behind this proposal? And the reason for that, Fran, is that makes it much harder then for Beijing to unilaterally pick off individual countries.

Fran Kelly
Kevin Rudd, thank you very much for joining us on Breakfast.

Kevin Rudd
Good to be with you, Fran.

Fran Kelly
Former Prime Minister Kevin Rudd. He’s president of the Asia Society Policy Institute in New York, and the article that he’s just penned is in the Foreign Affairs journal.


Worse Than FailureCodeSOD: A Private Code Review

Jessica has worked with some cunning developers in the past. To help cope with some of that “cunning”, they’ve recently gone out searching for new developers.

Now, there were some problems with their job description and salary offer; specifically, they were asking for developers who do too much and get paid too little. Which is how Jessica started working with Blair. Jessica was hoping to staff up her team with some mid-level or junior developers with a background in web development. Instead, she got Blair, a 13+ year veteran who had just started doing web development in the past six months.

Now, veteran or not, there is a code review process, so everything Blair does goes through code review. And that catches some… annoying habits, but every once in a while, something might sneak through. For example, he thinks static is a code smell, and thus removes the keyword any time he sees it. He'll rewrite most of the code to work around it, except once the method was called from a cshtml template file, so no one discovered that it didn't work until someone reported the error.

Blair also laments that with all the JavaScript and loosely typed languages, kids these days don’t understand the importance of separation of concerns and putting a barrier between interface and implementation. To prove his point, he submitted his MessageBL class. BL, of course, is to remind you that this class is “business logic”, which is easy to forget because it’s in an assembly called theappname.businesslogic.

Within that class, he implemented a bunch of data access methods, and this pair of methods lays out the pattern he followed.

public async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingModelAndUpdate(int id, Msg msg)
{
    return await GetLinkAndContentTrackingAndUpdate(id, msg);
}

/// <summary>
/// LinkTrackingUpdateLinks
/// returns: HasAnalyticsConfig, LinkTracks, ContentTracks
/// </summary>
/// <param name="id"></param>
/// <param name="msg"></param>
private async Task<LinkContentUpdateTrackingModel> GetLinkAndContentTrackingAndUpdate(int id, Msg msg)
{
  //snip
}

Here, we have one public method, and one private method. Their names, as you can see, are very similar. The public method does nothing but invoke the private method. This public method is, in fact, the only place the private method is invoked. The public method, in turn, is called only twice, from one controller.

This method also doesn’t ever need to be called, because the same block of code which constructs this object also fetches the relevant model objects. So instead of going back to the database with this thing, we could just use the already fetched objects.

But the real magic here is that Blair was veteran enough to know that he should put some “thorough” documentation using Visual Studio’s XML comment features. But he put the comments on the private method.

Jessica was not the one who reviewed this code, but adds:

I won’t blame the code reviewer for letting this through. There’s only so many times you can reject a peer review before you start questioning yourself. And sometimes, because Blair has been here so long, he checks code in without peer review as it’s a purely manual process.


Planet DebianDirk Eddelbuettel: RcppCCTZ 0.2.8: Minor API Extension

A new minor release 0.2.8 of RcppCCTZ is now on CRAN.

RcppCCTZ uses Rcpp to bring CCTZ to R. CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries. One for dealing with civil time: human-readable dates and times, and one for converting between absolute and civil times via time zones. And while CCTZ is made by Google(rs), it is not an official Google product. The RcppCCTZ page has a few usage examples and details. This package was the first CRAN package to use CCTZ; by now at least three others do—using copies in their packages which remains less than ideal.

This version adds three nothrow variants of three existing functions, contributed again by Leonardo. This will be used in an upcoming nanotime release which we are finalising now.

Changes in version 0.2.8 (2020-08-04)

  • Added three new nothrow variants (for win32) needed by the expanded nanotime package (Leonardo in #37)

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

If you like this or other open-source work I do, you can now sponsor me at GitHub. For the first year, GitHub will match your contributions.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianHolger Levsen: 20200804-debconf6

DebConf6

This tshirt is 14 years old and from DebConf6.

DebConf6 was my 4th DebConf and took place in Oaxtepec, Mexico.

I'm a bit exhausted right now, which is probably quite fitting for writing something about DebConf6... many things in life are a question of perception, so I will mention the waterfall and the big swirl, and the band playing with the fireworks during the conference dinner, and the joy that we could finally use the local fiber network (after asking for months) just after discovering that the 6h shopping tour forgot to bring the essential pig tail connectors to connect the wireless antennas to the cards, which we needed to provide network to the rooms where the talks would take place.

DebConf6 was the first DebConf with live streaming using dvswitch (written by Ben Hutchings and removed from unstable in 2015 as the world had moved to voctomix, which is yet another story to be told eventually). In the first years (so DebConf6 and a few after) the videoteam focussed on getting the post-processing done and the videos released, and streaming was optional, even though it was an exciting new feature, and we still managed to stream mostly all we recorded and sometimes more... ;)

Setting up the network uplink also was very challenging and took, I don't remember exactly, until day 4 or 5 of DebCamp (which lasted 7 days), so there were groups of geeks in need of network, and mostly unable to fix it, because to fix it we needed to communicate, and IRC was down. (There was no mobile phone data at that time, the first iPhone wasn't sold yet; those were the dark ages.)

I remember literally standing on a roof to catch the wifi signal and excitedly shouting "I got one ping back!" and then, less excitedly, "... one ping back ...". I'll spare you the details now (and me writing them down) but I'll say that the solution involved Neil McGovern climbing an antenna and attaching a wifi antenna up high, probably 15m or 20m or some such. Finally we had uplink. I don't recall if that pig tail connector incident happened before or after, but in the end the network setup worked nicely on the wide area we occupied. Even though in some dorms the cleaning people daily removed one of our APs to be able to watch TV while cleaning ;) (Which kind of was ok, but still... they could have plugged it back in.)

I also joyfully remember a certain vegetarian table, a most memorable bus ride (I'll just say 5 or rather cinco, and, unrelated except on the same bus ride, "Jesus" (and "Maria" for sure..)!) and talking with Jim Gettys and thus learning about the One Laptop per Child (OLPC) project.

As for any DebConf, there's sooo much more to be told, but I'll end here and just thank Gunnar Wolf (as he masterminded much of this DebConf) and go to bed now :-)

,

Planet DebianOsamu Aoki: exim4 configuration for Desktop (better gmail support)

Since gmail rewrites the "From:" address now (2020) and keeps changing its access limitations, it is wise not to use it as a smarthost any more. (If you need to access multiple gmail addresses from mutt etc., use esmtp etc.)

---
For most of our desktop PCs running with stock exim4 and mutt, I think sending out mail is becoming a bit rough, since using a random smarthost causes lots of trouble due to the measures taken to prevent spam.

As mentioned in the Exim4 user FAQ, /etc/hosts should list an FQDN with an externally DNS-resolvable domain name instead of localdomain, to get the correct EHLO/HELO line. That's the first step.
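
A quick sanity check looks something like this (just a sketch; the host name and address below are made-up examples, so use your real public name and IP):

# /etc/hosts -- the FQDN should come first on the host's line
127.0.0.1      localhost
203.0.113.10   mybox.example.com mybox

# this should then print the FQDN, which exim uses for EHLO/HELO
hostname --fqdn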

The stock configuration of exim4 only allows you to use a single smarthost for all your mails. I use one address for my personal use, which is checked by my smartphone too. The other account is for subscribing to mailing lists. So I needed to tweak ...

Usually, mutt is smart enough to set the From address since my .muttrc has

# Set default for From: for replies for alternates.
set reverse_name

So how can I teach exim4 to send mails differently depending on the mail account listed in the From header?

For my gmail accounts, each mail should be sent over the account-specific SMTP connection matching your From header, to get all the modern spam-protection data (DKIM, SPF, DMARC...) in the right state. (Besides, they overwrite the From: header anyway if you use the wrong connection.)

For my debian.org mails, mails should be sent from my shell account on people.debian.org, so they are very unlikely to be blocked. Sometimes, I wasn't sure whether some of these debian.org mails sent through my ISP's smarthost were really getting to the intended person.

To these ends, I have created small patches to the /etc/exim4/conf.d files and reported them to the Debian BTS: #869480 Support multiple smarthosts (gmail support). These patches are for the source package.

To use my configuration tweak idea, there is an easier route no matter which exim version you are using. Please copy the pertinent edited files from my github site into your installed /etc/exim4/conf.d files, read them, and get the benefits.
If you really wish to keep the envelope address etc. matching the From: header, please rewrite aggressively using the From: header with the edited rewrite/31_exim4-config_rewriting as follows:

.ifndef NO_EAA_REWRITE_REWRITE
*@+local_domains "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
# identical rewriting rule for /etc/mailname
*@ETC_MAILNAME "${lookup{${local_part}}lsearch{/etc/email-addresses}\
                   {$value}fail}" f
.endif
* "$h_from:" Frs

So far it's working fine for me, but if you find a bug, let me know.

Osamu

LongNowTraditional Ecological Knowledge

Archaeologist Stefani Crabtree writes about her work to reconstruct Indigenous food and use networks for the National Park Service:

Traditional Ecological Knowledge gets embedded in the choices that people make when they consume, and how TEK can provide stability of an ecosystem. Among Martu, the use of fire for hunting and the knowledge of the habits of animals are enshrined in the Dreamtime stories passed inter-generationally; these Dreamtime stories have material effects on the food web, which were detected in our simulations. The ecosystem thrived with Martu; it was only through their removal that extinctions began to cascade through the system.

Kevin RuddForeign Affairs: Beware the Guns of August — in Asia

U.S. Navy photo by Mass Communication Specialist 2nd Class Taylor DiMartino

Published in Foreign Affairs on August 3, 2020.

In just a few short months, the U.S.-Chinese relationship seems to have returned to an earlier, more primal age. In China, Mao Zedong is once again celebrated for having boldly gone to war against the Americans in Korea, fighting them to a truce. In the United States, Richard Nixon is denounced for creating a global Frankenstein by introducing Communist China to the wider world. It is as if the previous half century of U.S.-Chinese relations never happened.

The saber rattling from both Beijing and Washington has become strident, uncompromising, and seemingly unending. The relationship lurches from crisis to crisis—from the closures of consulates to the most recent feats of Chinese “wolf warrior” diplomacy to calls by U.S. officials for the overthrow of the Chinese Communist Party (CCP). The speed and intensity of it all has desensitized even seasoned observers to the scale and significance of change in the high politics of the U.S.-Chinese relationship. Unmoored from the strategic assumptions of the previous 50 years but without the anchor of any mutually agreed framework to replace them, the world now finds itself at the most dangerous moment in the relationship since the Taiwan Strait crises of the 1950s.

The question now being asked, quietly but nervously, in capitals around the world is, where will this end? The once unthinkable outcome—actual armed conflict between the United States and China—now appears possible for the first time since the end of the Korean War. In other words, we are confronting the prospect of not just a new Cold War, but a hot one as well.

Click here to read the rest of the article at Foreign Affairs.


Worse Than FailureA Massive Leak

"Memory leaks are impossible in a garbage collected language!" is one of my favorite lies. It feels true, but it isn't. Sure, it's much harder to make them, and they're usually much easier to track down, but you can still create a memory leak. Most times, it's when you create objects, dump them into a data structure, and never empty that data structure. Usually, it's just a matter of finding out what object references are still being held. Usually.

A few months ago, I discovered a new variation on that theme. I was working on a C# application that was leaking memory faster than bad waterway engineering in the Imperial Valley.

A large, glowing, computer-controlled chandelier

I don't exactly work in the "enterprise" space anymore, though I still interact with corporate IT departments and get to see some serious internal WTFs. This is a chandelier we built for the Allegheny Health Network's Cancer Institute which recently opened in Pittsburgh. It's 15 meters tall, weighs about 450kg, and is broken up into 30 segments, each with hundreds of addressable LEDs in a grid. The software we were writing was built to make them blink pretty.

Each of those 30 segments is home to a single-board computer with their GPIO pins wired up to addressable LEDs. Each computer runs a UDP listener, and we blast them with packets containing RGB data, which they dump to the LEDs using a heavily tweaked version of LEDScape.

This is our standard approach to most of our lighting installations. We drop a Beaglebone onto a custom circuit board and let it drive the LEDs, then we have a render-box someplace which generates frame data and chops it up into UDP packets. Depending on the environment, we can drive anything from 30-120 frames per second this way (and probably faster, but that's rarely useful).

Apologies to the networking folks, but this works very well. Yes, we're blasting many megabytes of raw bitmap data across the network, but we're usually on our own dedicated network segment. We use UDP because, well, we don't care about the data that much. A dropped packet or an out of order packet isn't going to make too large a difference in most cases. We don't care if our destination Beaglebone is up or down, we just blast the packets out onto the network, and they get there reliably enough that the system works.

Now, normally, we do this from Python programs on Linux. For this particular installation, though, we have an interactive kiosk which provides details about cancer treatments and patient success stories, and lets the users interact with the chandelier in real time. We wanted to show them a 3D model of the chandelier on the screen, and show them an animation on the UI that was mirrored in the physical object. After considering our options, we decided this was a good case for Unity and C#. After a quick test of doing multitouch interactions, we also decided that we shouldn't deploy to Linux (Unity didn't really have good Linux multitouch support), so we would deploy a Windows kiosk. This meant we were doing most of our development on MacOS, but our final build would be for Windows.

Months go by. We worked on the software while building the physical pieces, which meant the actual testbed hardware wasn't available for most of the development cycle. Custom electronics were being refined and physical designs were changing as we iterated to the best possible outcome. This is normal for us, but it meant that we didn't start getting real end-to-end testing until very late in the process.

Once we started test-hanging chandelier pieces, we started basic developer testing. You know how it is: you push the run button, you test a feature, you push the stop button. Tweak the code, rinse, repeat. Eventually, though, we had about 2/3rds of the chandelier pieces plugged in, and started deploying to the kiosk computer, running Windows.

We left it running, and the next time someone walked by and decided to give the screen a tap… nothing happened. It was hung. Well, that could be anything. We rebooted and checked again, and everything seems fine, until a few minutes later, when it's hung… again. We checked the task manager- which hey, everything is really slow, and sure enough, RAM is full and the computer is so slow because it's constantly thrashing to disk.

We're only a few weeks before we actually have to ship this thing, and we've discovered a massive memory leak, and it's such a sudden discovery that it feels like the draining of Lake Agassiz. No problem, though, we go back to our dev machines, fire it up in the profiler, and start looking for the memory leak.

Which wasn't there. The memory leak only appeared in the Windows build, and never happened in the Mac or Linux builds. Clearly, there must be some different behavior, and it must be around object lifecycles. When you see a memory leak in a GCed language, you assume you're creating objects that the GC ends up thinking are in use. In the case of Unity, your assumption is that you're handing objects off to the game engine, and not telling it you're done with them. So that's what we checked, but we just couldn't find anything that fit the bill.

Well, we needed to create some relatively large arrays to use as framebuffers. Maybe that's where the problem lay? We keep digging through the traces, we added a bunch of profiling code, we spent days trying to dig into this memory leak…

… and then it just went away. Our memory leak just became a Heisenbug, our shipping deadline was even closer, and we officially knew less about what was going wrong than when we started. For bonus points, once this kiosk ships, it's not going to be connected to the Internet, so if we need to patch the software, someone is going to have to go onsite. And we aren't going to have a suitable test environment, because we're not exactly going to build two gigantic chandeliers.

The folks doing assembly had the whole chandelier built up, hanging in three sections (we don't have any 14m tall ceiling spaces), and all connected to the network for a smoke test. There wasn't any smoke, but they needed to do more work. Someone unplugged a third of the chandelier pieces from the network.

And the memory leak came back.

We use UDP because we don't care if our packet sends succeed or not. Frame-by-frame, we just want to dump the data on the network and hope for the best. On MacOS and Linux, our software usually uses a sender thread that just, at the end of the day, wraps around calls to the send system call. It's simple, it's dumb, and it works. We ignore errors.

In C#, though, we didn't do things exactly the same way. Instead, we used the .NET UdpClient object and its SendAsync method. We assumed that it would do roughly the same thing.

We were wrong.

await client.SendAsync(packet, packet.Length, hostip, port);

Async operations in C# use Tasks, which are like promises or futures in other environments. It lets .NET manage background threads without the developer worrying about the details. The await keyword is syntactic sugar which lets .NET know that it can hand off control to another thread while we wait. While we await here, we don't actually await the results of the await, because again: we don't care about the results of the operation. Just send the packet, hope for the best.

We don't care- but Windows does. After a load of investigation, what we discovered is that Windows would first try and resolve the IP address. Which, if a host was down, obviously it couldn't. But Windows was friendly, Windows was smart, and Windows wasn't going to let us down: it kept the Task open and kept trying to resolve the address. It held the task open for 3 seconds before finally deciding that it couldn't reach the host and errored out.

An error which, as I stated before, we were ignoring, because we didn't care.

Still, if you can count and have a vague sense of the linear passage of time, you can see where this is going. We had 30 hosts. We sent each of them 30 packets every second. When one or more of those hosts were down, Windows would keep each of those packets "alive" for 3 seconds. By the time that one expired, 90 more had queued up behind it.

That was the source of our memory leak, and our Heisenbug. If every Beaglebone was up, we didn't have a memory leak. If only one of them was down, the leak was pretty slow. If ten or twenty were out, the leak was a waterfall.

I spent a lot of time reading up on Windows networking after this. Despite digging through the socket APIs, I honestly couldn't figure out how to defeat this behavior. I tried various timeout settings. I tried tracking each task myself and explicitly timing them out if they took longer than a few frames to send. I was never able to tell Windows, "just toss the packet and hope for the best".

Well, my co-worker was building health monitoring on the Beaglebones anyway. While the kiosk wasn't going to be on the Internet via a "real" Internet connection, we did have a cellular modem attached, which we could use to send health info, so getting pings that say "hey, one of the Beaglebones failed" is useful. So my co-worker hooked that into our network sending layer: don't send frames to Beaglebones which are down. Recheck the down Beaglebones every five minutes or so. Continue to hope for the best.

This solution worked. We shipped. The device looks stunning, and as patients and guests come to use it, I hope they find some useful information, a little joy, and maybe some hope while playing with it. And while there may or may not be some ugly little hacks still lurking in that code, this was the one thing which made me say: WTF.


,

Krebs on SecurityRobocall Legal Advocate Leaks Customer Data

A California company that helps telemarketing firms avoid getting sued for violating a federal law that seeks to curb robocalls has leaked the phone numbers, email addresses and passwords of all its customers, as well as the mobile phone numbers and other data on people who have hired lawyers to go after telemarketers.

The Blacklist Alliance provides technologies and services to marketing firms concerned about lawsuits under the Telephone Consumer Protection Act (TCPA), a 1991 law that restricts the making of telemarketing calls through the use of automatic telephone dialing systems and artificial or prerecorded voice messages. The TCPA prohibits contact with consumers — even via text messages — unless the company has “prior express consent” to contact the consumer.

With statutory damages of $500 to $1,500 per call, the TCPA has prompted a flood of lawsuits over the years. From the telemarketer’s perspective, the TCPA can present something of a legal minefield in certain situations, such as when a phone number belonging to someone who’d previously given consent gets reassigned to another subscriber.

Enter The Blacklist Alliance, which promises to help marketers avoid TCPA legal snares set by “professional plaintiffs and class action attorneys seeking to cash in on the TCPA.” According to the Blacklist, one of the “dirty tricks” used by TCPA “frequent filers” includes “phone flipping,” or registering multiple prepaid cell phone numbers to receive calls intended for the person to whom a number was previously registered.

Lawyers representing TCPA claimants typically redact their clients’ personal information from legal filings to protect them from retaliation and to keep their contact information private. The Blacklist Alliance researches TCPA cases to uncover the phone numbers of plaintiffs and sells this data in the form of list-scrubbing services to telemarketers.

“TCPA predators operate like malware,” The Blacklist explains on its website. “Our Litigation Firewall isolates the infection and protects you from harm. Scrub against active plaintiffs, pre litigation complainers, active attorneys, attorney associates, and more. Use our robust API to seamlessly scrub these high-risk numbers from your outbound campaigns and inbound calls, or adjust your suppression settings to fit your individual requirements and appetite for risk.”

Unfortunately for The Blacklist's paying customers and for people represented by attorneys filing TCPA lawsuits, the Blacklist's own Web site until late last week leaked reams of data to anyone with a Web browser. Thousands of documents, emails, spreadsheets, images and the names tied to countless mobile phone numbers all could be viewed or downloaded without authentication from the domain theblacklist.click.

The directory also included all 388 Blacklist customer API keys, as well as each customer’s phone number, employer, username and password (scrambled with the relatively weak MD5 password hashing algorithm).

The leaked Blacklist customer database points to various companies you might expect to see using automated calling systems to generate business, including real estate and life insurance providers, credit repair companies and a long list of online advertising firms and individual digital marketing specialists.

The very first account in the leaked Blacklist user database corresponds to its CEO Seth Heyman, an attorney in southern California. Mr. Heyman did not respond to multiple requests for comment, although The Blacklist stopped leaking its database not long after that contact request.

Two other accounts marked as administrators were among the third and sixth registered users in the database; those correspond to two individuals at Riip Digital, a California-based email marketing concern that serves a diverse range of clients in the lead generation business, from debt relief and timeshare companies, to real estate firms and CBD vendors.

Riip Digital did not respond to requests for comment. But according to Spamhaus, an anti-spam group relied upon by many Internet service providers (ISPs) to block unsolicited junk email, the company has a storied history of so-called "snowshoe spamming," which involves junk email purveyors who try to avoid spam filters and blacklists by spreading their spam-sending systems across a broad swath of domains and Internet addresses.

The irony of this data leak is that marketers who constantly scrape the Web for consumer contact data may not realize the source of the information, and end up feeding it into automated systems that peddle dubious wares and services via automated phone calls and text messages. To the extent this data is used to generate sales leads that are then sold to others, such a leak could end up causing more legal problems for The Blacklist’s customers.

The Blacklist and their clients talk a lot about technologies that they say separate automated telephonic communications from dime-a-dozen robocalls, such as software that delivers recorded statements that are manually selected by a live agent. But for your average person, this is likely a distinction without a difference.

Robocalls are permitted for political candidates, but beyond that if the recording is a sales message and you haven’t given your written permission to get calls from the company on the other end, the call is illegal. According to the Federal Trade Commission (FTC), companies are using auto-dialers to send out thousands of phone calls every minute for an incredibly low cost.

In fiscal year 2019, the FTC received 3.78 million complaints about robocalls. Readers may be able to avoid some marketing calls by registering their mobile number with the Do Not Call registry, but the list appears to do little to deter all automated calls — particularly scam calls that spoof their real number. If and when you do receive robocalls, consider reporting them to the FTC.

Some wireless providers now offer additional services and features to help block automated calls. For example, AT&T offers wireless customers its free Call Protect app, which screens incoming calls and flags those that are likely spam calls. See the FCC’s robocall resource page for links to resources at your mobile provider. In addition, there are a number of third-party mobile apps designed to block spammy calls, such as Nomorobo and TrueCaller.

Obviously, not all telemarketing is spammy or scammy. I have friends and relatives who’ve worked at non-profits that rely a great deal on fundraising over the phone. Nevertheless, readers who are fed up with telemarketing calls may find some catharsis in the Jolly Roger Telephone Company, which offers subscribers a choice of automated bots that keep telemarketers engaged for several minutes. The service lets subscribers choose which callers should get the bot treatment, and then records the result.

For my part, the volume of automated calls hitting my mobile number got so bad that I recently enabled a setting on my smart phone to simply send to voicemail all calls from numbers that aren’t already in my contacts list. This may not be a solution for everyone, but since then I haven’t received a single spammy jingle.

Planet DebianHolger Levsen: 20200803-debconf5

DebConf5

This tshirt is 15 years old and from DebConf5. It still looks quite nice! :)

DebConf5 was my 3rd DebConf and took place in Helsinki, or rather Espoo, in Finland.

This was one of my most favorite DebConfs (though I basically loved them all) and I'm not really sure why, I guess it's because of the kind of community at the event. We stayed in some future dorms of the university, which were to be first used for some European athletics championship and which we could use even before that, as guests number zero. Being in Finland there were of course saunas in the dorms, which we frequently used and greatly enjoyed. Still, one day we had to go on a trip to another sauna in the forest, because of course you cannot visit Finland and only see one sauna. Or at least, you should not.

Another aspect which increased community bonding was that we had to authenticate using 802.1X (IIRC, please correct me), which was an authentication standard mostly used for wireless but which also works for wired ethernet, except that not many had used it on Linux before. Thus quite some related bugs were fixed in the first days of DebCamp...

Then my powerpc ibook also decided to go bad, so I had to remove 30 screws to get the harddrive out and put 30 screws back in, to not have 30 screws lying around for a week. Then I put the harddrive into a spare (x86) laptop and only used my /home partition and was very happy this worked nicely. And then, for travelling back, I had to unscrew and screw 30 times again. (I think my first attempt took 1.5h and the fourth only 45min or so ;) Back home I then bought a laptop where one could remove the harddrive using one screw.

Oh, and then I was foolish during the DebConf5 preparations and said that I could imagine setting up a team and doing video recordings, as previous DebConfs mostly didn't have recordings, and the one that did didn't have releases of them...

And so we did videos. And as we were mostly inexperienced we did them the hard way: during the day we recorded on tape and then when the talks were done, we used a postprocessing tool called 'cinelerra' and edited them. And because Eric Evans was on the team and because Eric worked every night almost all night, all nights, we managed to actually release them all when DebConf5 was over. I very well remember many many (23 or 42) Debian people cleaning the dorms thoroughly (as they were brand new..) and Eric just sitting somewhere, exhausted and watching the cleaners. And everybody was happy Eric was idling there, cause we knew why. In the aftermath of DebConf5 Ben Hutchings then wrote videolink (removed from sid in 2013) which we used to create video DVDs of our recordings based on a simple html file with links to the actual videos.

There were many more memorable events. The boat ride was great. A pirate flag appeared. One night people played guitar until very late (or rather early) close to the dorms, so at about 3 AM someone complained about it, not in person, but on the debian-devel mailinglist. And those drunk people playing guitar replied immediately on the mailinglist. And then someone from the guitar group gave a talk, at 9 AM, and the video is online... ;) (It's a very slowwwwwww talk.)

If you haven't been to or close to the polar circles it's almost impossible to anticipate how life is in summer there. It gets a bit darker after midnight or rather after 1 AM and then at 3 AM it gets light again, so it's reaaaaaaally easy to miss the night once and it's absolutely not hard to miss the night for several nights in a row. And then I shared a room with 3 people who all snore quite loud...

There was more. I was lucky to witness the first (or second?) cheese and wine party, which at that time took place in a dorm room with, dunno, 10 people and maybe 15 kinds of cheese. And, of course, I met many wonderful people there, to mention a few I'll say Jesus, I mean mooch or data, Amaya and p2. And thanks to some bad luck which turned out well, I also had my first-ever sushi in Helsinki.

And and and. DebConfs are soooooooo good! :-) I'll stop here as I originally planned to only write a paragraph or two about each and there are quite some to be written!

Oh, and as we all learned, there are probably no mosquitos in Helsinki, just in Espoo. And you can swim naked through a lake and catch a taxi on the other side, with no clothes and no money, no big deal. (And you might not believe it, but that wasn't me. I cannot swim that well.)

Planet DebianGiovanni Mascellani: Bye bye Python 2!

And so, today, while I was browsing updates for my Debian unstable laptop, I noticed that aptitude wouldn't automatically upgrade python2 and related packages (I don't know why, and at this point I don't care). So I decided to dare: I removed the python2 package to see what the dependency solver would propose to me. It turned out that there was basically nothing I couldn't live without.
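
If you are curious what still pulls in python2 on your own system before trying the same, a quick check (either command works; the output will obviously differ from machine to machine) is:

# ask aptitude why python2 is installed at all
aptitude why python2

# or list the installed packages that still depend on it
apt-cache rdepends --installed python2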

So, bye bye Python 2. It was a long ride and I loved programming with you. But now it's the turn of your younger brother.

$ python
bash: python: comando non trovato

(guess what "comando non trovato" means?)

And thanks to all those who made this possible!

CryptogramBlackBerry Phone Cracked

Australia is reporting that a BlackBerry device has been cracked after five years:

An encrypted BlackBerry device that was cracked five years after it was first seized by police is poised to be the key piece of evidence in one of the state's longest-running drug importation investigations.

In April, new technology "capabilities" allowed authorities to probe the encrypted device....

No details about those capabilities.

Planet DebianSylvain Beucler: Debian LTS and ELTS - July 2020

Debian LTS Logo

Here is my transparent report for my work on the Debian Long Term Support (LTS) and Debian Extended Long Term Support (ELTS), which extend the security support for past Debian releases, as a paid contributor.

In July, the monthly sponsored hours were split evenly among contributors depending on their max availability - I was assigned 25.25h for LTS (out of 30 max; all done) and 13.25h for ELTS (out of 20 max; all done).

We shifted suites: welcome Stretch LTS and Jessie ELTS. The LTS->ELTS switch happened at the start of the month, but the oldstable->LTS switch happened later (after finalizing and flushing proposed-updates to a last point release), causing some confusion but nothing major.

ELTS - Jessie

  • New local build setup
  • ELTS buildds: request timezone harmonization
  • Reclassify in-progress updates from jessie-LTS to jessie-ELTS
  • python3.4: finish preparing update, security upload ELA 239-1
  • net-snmp: global triage: bisect CVE-2019-20892 to identify affected version, jessie/stretch not-affected
  • nginx: global triage: clarify CVE-2013-0337 status; locate CVE-2020-11724 original patch and regression tests, update MITRE
  • nginx: security upload ELA-247-1 with 2 CVEs

LTS - Stretch

  • Reclassify in-progress/needed updates from stretch/oldstable to stretch-LTS
  • rails: upstream security: follow-up on CVE-2020-8163 (RCE) on upstream bug tracker and create pull request for 4.x (merged), hence getting some upstream review
  • rails: global security: continue coordinating upload in multiple Debian versions, prepare fixes for common stretch/buster vulnerabilities in buster
  • rails: security upload DLA-2282 fixing 3 CVEs
  • python3.5: security upload DLA-2280-1 fixing 13 pending non-critical vulnerabilities, and its test suite
  • nginx: security upload DLA-2283 (cf. common ELTS work)
  • net-snmp: global triage (cf. common ELTS work)
  • public IRC monthly team meeting
  • reach out to clarify the intro from last month's report, following unsettled feedback during meeting

Documentation/Scripts

  • ELTS/README.how-to-release-an-update: fix typo
  • ELTS buildd: attempt to diagnose slow perfs, provide comparison with Debian and local builds
  • LTS/Meetings: improve presentation
  • SourceOnlyUpload: clarify/de-dup pbuilder doc
  • LTS/Development: reference build logs URL, reference proposed-updates issue during dists switch, reference new-upstream-versioning discussion, multiple jessie->stretch fixes and clean-ups
  • LTS/Development/Asan: drop wheezy documentation
  • Warn about jruby mis-triage
  • Provide feedback for ksh/CVE-2019-14868
  • Provide feedback for condor update
  • LTS/TestsSuites/nginx: test with new request smuggling test cases

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 12)

Here’s part twelve of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

Worse Than FailureCodeSOD: A Unique Choice

There are many ways to mess up doing unique identifiers. It's a hard problem, and that's why we've sorta agreed on a few distinct ways to do it. First, we can just autonumber. Easy, but it doesn't always scale that well, especially in distributed systems. Second, we can use something like UUIDs: mix a few bits of real data in with a big pile of random data, and you can create a unique ID. Finally, there are some hashing-related options, where the data itself generates its ID.

Tiffanie was digging into some weird crashes in a database application, and discovered that their MODULES table couldn't decide which was correct, and opted for two: MODULE_ID, an autonumbered field, and MODULE_UUID, which one would assume held a UUID. There were also the requisite MODULE_NAME and similar fields. A quick scan of the table looked like:

MODULE_ID MODULE_NAME MODULE_UUID MODULE_DESC
0 Defects 8461aa9b-ba38-4201-a717-cee257b73af0 Defects
1 Test Plan 06fd18eb-8214-4431-aa66-e11ae2a6c9b3 Test Plan

Now, using both UUIDs and autonumbers is a bit suspicious, but there might be a good reason for that (the UUIDs might be used for tracking versions of installed modules, while the ID is the local database-reference for that, so the ID shouldn't change ever, but the UUID might). Still, given that MODULE_NAME and MODULE_DESC both contain exactly the same information in every case, I suspect that this table was designed by the Department of Redundancy Department.

Still, that's hardly the worst sin you could commit. What would be really bad would be using the wrong datatype for a column. This is a SQL Server database, and so we can safely expect that the MODULE_ID is numeric, the MODULE_NAME and MODULE_DESC must be text, and clearly the MODULE_UUID field should be the UNIQUEIDENTIFIER type, right?

Well, let's look at one more row from this table:

MODULE_ID MODULE_NAME MODULE_UUID MODULE_DESC
11 Releases Releases does not have a UUID Releases

Oh, well. I think I have a hunch what was causing the problems. Sure enough, the program was expecting the UUID field to contain UUIDs, and was failing when a field contained something that couldn't be converted into a UUID.


Planet DebianArnaud Rebillout: GoAccess 1.4, a detailed tutorial

GoAccess v1.4 was just released a few weeks ago! Let's take this chance to write a loooong tutorial. We'll go over every step to install and operate GoAccess. This is a tutorial aimed at those who don't play sysadmin every day, and that's why it's so long: I did my best to provide thorough explanations all along, so that it's more than just a "copy-and-paste" kind of tutorial. And for those who do play sysadmin every day: please try not to fall asleep while reading, and don't hesitate to drop me an e-mail if you spot anything inaccurate in here. Thanks!

Introduction

So what's GoAccess already? GoAccess is a web log analyzer, and it allows you to visualize the traffic for your website, and get to know a bit more about your visitors: how many visitors and hits, for which pages, coming from where (geolocation, operating system, web browser...), etc... It does so by parsing the access logs from your web server, be it Apache, NGINX or whatever.

GoAccess gives you different options to display the statistics, and in this tutorial we'll focus on producing a HTML report. Meaning that you can see the statistics for your website straight in your web browser, under the form of a single HTML page.

For an example, you can have a look at the stats of my blog here: http://goaccess.arnaudr.io.

GoAccess is written in C, it has very few dependencies, it has been around for about 10 years, and it's distributed under the MIT license.

Assumptions

This tutorial is about installing and configuring, so I'll assume that all the commands are run as root. I won't prefix each of them with sudo.

I use the Apache web server, running on a Debian system. I don't think it matters so much for this tutorial though. If you're using NGINX it's fine, you can keep reading.

Also, I will just use the name SITE for the name of the website that we want to analyze with GoAccess. Just replace that with the real name of your site.

I also assume the following locations for your stuff:

  • the website is at /var/www/SITE
  • the logs are at /var/log/apache2/SITE (yes, there is a sub-directory)
  • we're going to save the GoAccess database in /var/lib/goaccess-db/SITE.

If you have your stuff in /srv/SITE/{log,www} instead, no worries, just adjust the paths accordingly, I bet you can do it.

Installation

The latest version of GoAccess is v1.4, and it's not yet available in the Debian repositories. So for this part, you can follow the instructions from the official GoAccess download page. Install steps are explained in details, so there's nothing left for me to say :)

When this is done, let's get started with the basics.

We're talking about the latest version v1.4 here, let's make sure:

$ goaccess --version
GoAccess - 1.4.
...

Now let's try to create a HTML report. I assume that you already have a website up and running.

GoAccess needs to parse the access logs. These logs are optional, they might or might not be created by your web server, depending on how it's configured. Usually, these log files are named access.log, unsurprisingly.

You can check if those logs exist on your system by running this command:

find /var/log -name access.log

Another important thing to know is that these logs can be in different formats. In this tutorial we'll assume that we work with the combined log format, because it seems to be the most common default.

To check what kind of access logs your web server produces, you must look at the configuration for your site.

For an Apache web server, you should have such a line in the file /etc/apache2/sites-enabled/SITE.conf:

CustomLog ${APACHE_LOG_DIR}/SITE/access.log combined

For NGINX, it's quite similar. The configuration file would be something like /etc/nginx/sites-enabled/SITE, and the line to enable access logs would be something like:

access_log /var/log/nginx/SITE/access.log

Note that NGINX writes the access logs in the combined format by default, that's why you don't see the word combined anywhere in the line above: it's implicit.
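
For reference, a line in the combined log format looks roughly like this (the visitor is made up, of course):

203.0.113.42 - - [22/Jul/2020:06:25:14 +0000] "GET /index.html HTTP/1.1" 200 5123 "https://example.org/" "Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101 Firefox/78.0"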

Alright, so from now on we assume that yes, you have access log files available, and yes, they are in the combined log format. If that's the case, then you can already run GoAccess and generate a report, for example for the log file /var/log/apache2/access.log

goaccess \
    --log-format COMBINED \
    --output /tmp/report.html \
    /var/log/apache2/access.log

It's possible to give GoAccess more than one log files to process, so if you have for example the file access.log.1 around, you can use it as well:

goaccess \
    --log-format COMBINED \
    --output /tmp/report.html \
    /var/log/apache2/access.log \
    /var/log/apache2/access.log.1

If GoAccess succeeds (and it should), you're on the right track!

All is left to do to complete this test is to have a look at the HTML report created. It's a single HTML page, so you can easily scp it to your machine, or just move it to the document root of your site, and then open it in your web browser.

Looks good? So let's move on to more interesting things.

Web server configuration

This part is very short, because in terms of configuration of the web server, there's very little to do. As I said above, the only thing you want from the web server is to create access log files. Then you want to be sure that GoAccess and your web server agree on the format for these files.

In the part above we used the combined log format, but GoAccess supports many other common log formats out of the box, and even allows you to parse custom log formats. For more details, refer to the option --log-format in the GoAccess manual page.

Another common log format is named, well, common. It even has its own Wikipedia page. But compared to combined, the common log format contains less information, it doesn't include the referrer and user-agent values, meaning that you won't have it in the GoAccess report.

So at this point you should understand that, unsurprisingly, GoAccess can only tell you about what's in the access logs, no more no less.

And that's all in term of web server configuration.

Configuration to run GoAccess unprivileged

Now we're going to create a user and group for GoAccess, so that we don't have to run it as root. The reason is that, well, for everything running unattended on your server, the less code runs as root, the better. It's good practice and common sense.

In this case, GoAccess is simply a log analyzer. So it just needs to read the logs files from your web server, and there is no need to be root for that, an unprivileged user can do the job just as well, assuming it has read permissions on /var/log/apache2 or /var/log/nginx.

The log files of the web server are usually part of the adm group (though it might depend on your distro, I'm not sure). This is something you can check easily with the following command:

ls -l /var/log | grep -e apache2 -e nginx

As a result you should get something like that:

drwxr-x--- 2 root adm 20480 Jul 22 00:00 /var/log/apache2/

And as you can see, the directory apache2 belongs to the group adm. It means that you don't need to be root to read the logs, instead any unprivileged user that belongs to the group adm can do it.

So, let's create the goaccess user, and add it to the adm group:

adduser --system --group --no-create-home goaccess
addgroup goaccess adm

And now, let's run GoAccess unprivileged, and verify that it can still read the log files:

setpriv \
    --reuid=goaccess --regid=goaccess \
    --init-groups --inh-caps=-all \
    -- \
    goaccess \
    --log-format COMBINED \
    --output /tmp/report2.html \
    /var/log/apache2/access.log

setpriv is the command used to drop privileges. The syntax is quite verbose, it's not super friendly for tutorials, but don't be scared and read the manual page to learn what it does.

In any case, this command should work, and at this point, it means that you have a goaccess user ready, and we'll use it to run GoAccess unprivileged.
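
(Side note, and an assumption on my part rather than part of the setup proper: for a quick one-off interactive test, sudo can do a rougher version of the same privilege drop. It doesn't give you setpriv's extra hardening, so keep setpriv for anything that runs unattended.)

# quick-and-dirty alternative for interactive testing only
sudo -u goaccess \
    goaccess \
    --log-format COMBINED \
    --output /tmp/report2.html \
    /var/log/apache2/access.log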

Integration, option A - Run GoAccess once a day, from a logrotate hook

In this part we wire things together, so that GoAccess processes the log files once a day, adds the new logs to its internal database, and generates a report from all that aggregated data. The result will be a single HTML page.

Introducing logrotate

In order to do that, we'll use a logrotate hook. logrotate is a little tool that should already be installed on your server, and that runs once a day, and that is in charge of rotating the log files. "Rotating the logs" means moving access.log to access.log.1 and so on. With logrotate, a new log file is created every day, and log files that are too old are deleted. That's what prevents your logs from filling up your disk basically :)

You can check that logrotate is indeed installed and enabled with this command (assuming that your init system is systemd):

systemctl status logrotate.timer

What's interesting for us is that logrotate allows you to run scripts before and after the rotation is performed, so it's an ideal place from where to run GoAccess. In short, we want to run GoAccess just before the logs are rotated away, in the prerotate hook.

But let's do things in order. First, we need to write a little wrapper script that will be in charge of running GoAccess with the right arguments, and that will process all of your sites.

The wrapper script

This wrapper is made to process more than one site, but if you have only one site it works just as well, of course.

So let me just drop it on you like that, and I'll explain afterward. Here's my wrapper script:

#!/bin/bash

# Process log files /var/log/apache2/SITE/access.log,
# only if /var/lib/goaccess-db/SITE exists.
# Create HTML reports in $1, a directory that must exist.

set -eu

OUTDIR=
LOGDIR=/var/log/apache2
DBDIR=/var/lib/goaccess-db

fail() { echo >&2 "$@"; exit 1; }

[ $# -eq 1 ] || fail "Usage: $(basename $0) OUTPUT_DIRECTORY"

OUTDIR=$1

[ -d "$OUTDIR" ] || fail "'$OUTDIR' is not a directory"
[ -d "$LOGDIR" ] || fail "'$LOGDIR' is not a directory"
[ -d "$DBDIR"  ] || fail "'$DBDIR' is not a directory"

for d in $(find "$LOGDIR" -mindepth 1 -maxdepth 1 -type d); do
    site=$(basename "$d")
    dbdir=$DBDIR/$site
    logfile=$d/access.log
    outfile=$OUTDIR/$site.html

    if [ ! -d "$dbdir" ] || [ ! -e "$logfile" ]; then
        echo "‣ Skipping site '$site'"
        continue
    else
        echo "‣ Processing site '$site'"
    fi

    setpriv \
        --reuid=goaccess --regid=goaccess \
        --init-groups --inh-caps=-all \
        -- \
    goaccess \
        --agent-list \
        --anonymize-ip \
        --persist \
        --restore \
        --config-file /etc/goaccess/goaccess.conf \
        --db-path "$dbdir" \
        --log-format "COMBINED" \
        --output "$outfile" \
        "$logfile"
done

So you'd install this script at /usr/local/bin/goaccess-wrapper for example, and make it executable:

chmod +x /usr/local/bin/goaccess-wrapper

A few things to note:

  • We run GoAccess with --persist, meaning that we save the parsed logs in the internal database, and --restore, meaning that we include everything from the database in the report. In other words, we aggregate the data at every run, and the report grows bigger every time.
  • The parameter --config-file /etc/goaccess/goaccess.conf is a workaround for #1849. It should not be needed for future versions of GoAccess > 1.4.

As is, the script makes the assumption that the logs for your site are logged in a sub-directory /var/log/apache2/SITE/. If it's not the case, adjust that in the wrapper accordingly.

The name of this sub-directory is then used to find the GoAccess database directory /var/lib/goaccess-db/SITE/. This directory is expected to exist, meaning that if you don't create it yourself, the wrapper won't process this particular site. It's a simple way to control which sites are processed by this GoAccess wrapper, and which sites are not.

So if you want goaccess-wrapper to process the site SITE, just create a directory with the name of this site under /var/lib/goaccess-db:

mkdir -p /var/lib/goaccess-db/SITE
chown goaccess:goaccess /var/lib/goaccess-db/SITE

Now let's create an output directory:

mkdir /tmp/goaccess-reports
chown goaccess:goaccess /tmp/goaccess-reports

And let's give a try to the wrapper script:

goaccess-wrapper /tmp/goaccess-reports
ls /tmp/goaccess-reports

Which should give you:

SITE.html

At the same time, you can check that GoAccess populated the database with a bunch of files:

ls /var/lib/goaccess-db/SITE

Setting up the logrotate prerotate hook

At this point, we have the wrapper in place. Let's now add a pre-rotate hook so that goaccess-wrapper runs once a day, just before the logs are rotated away.

The logrotate config file for Apache2 is located at /etc/logrotate.d/apache2, and for NGINX it's at /etc/logrotate.d/nginx. Among the many things you'll see in this file, here's what is of interest for us:

  • daily means that your logs are rotated every day
  • sharedscripts means that the pre-rotate and post-rotate scripts are executed once total per rotation, and not once per log file.

In the config file, there is also this snippet:

prerotate
    if [ -d /etc/logrotate.d/httpd-prerotate ]; then \
        run-parts /etc/logrotate.d/httpd-prerotate; \
    fi; \
endscript

It indicates that scripts in the directory /etc/logrotate.d/httpd-prerotate/ will be executed before the rotation takes place. Refer to the man page run-parts(8) for more details...

Putting all of that together, it means that logs from the web server are rotated once a day, and if we want to run scripts just before the rotation, we can just drop them in the httpd-prerotate directory. Simple, right?

Let's first create this directory if it doesn't exist:

mkdir -p /etc/logrotate.d/httpd-prerotate/

And let's create a tiny script at /etc/logrotate.d/httpd-prerotate/goaccess:

#!/bin/sh
exec goaccess-wrapper /tmp/goaccess-reports

Don't forget to make it executable:

chmod +x /etc/logrotate.d/httpd-prerotate/goaccess

As you can see, the only thing this script does is invoke the wrapper with the right argument, i.e. the output directory for the HTML reports that are generated.

And that's all. Now you can just come back tomorrow, check the logs, and make sure that the hook was executed and succeeded. For example, this kind of command will tell you quickly if it worked:

journalctl | grep logrotate
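
If you prefer, you can also check things more directly: the modification time of the report files tells you when they were last refreshed, and on systems where logrotate itself is triggered by a systemd timer, its service log is another place to look. For example:

# When were the reports last regenerated?
ls -l /tmp/goaccess-reports

# On systems where logrotate runs via systemd, check its last run:
systemctl status logrotate.service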

Integration, option B - Run GoAccess once a day, from a systemd service

OK so we've just seen how to use a logrotate hook. One downside with that is that we have to drop privileges in the wrapper script, because logrotate runs as root, and we don't want to run GoAccess as root. Hence the rather convoluted syntax with setpriv.

Rather than embedding this kind of thing in a wrapper script, we can instead run the wrapper script from a systemd (https://freedesktop.org/wiki/Software/systemd/) service, and define which user runs the wrapper straight in the systemd service file.

Introducing systemd niceties

So we can create a systemd service, along with a systemd timer that fires daily. We can then set the user and group that execute the script straight in the systemd service, and there's no need for setpriv anymore. It's a bit more streamlined.

We can even go a bit further, and use systemd parameterized units (also called templates), so that we have one service per site (instead of one service that processes all of our sites). That will simplify the wrapper script a lot, and it also looks nicer in the logs.

With this approach however, it seems that we can't really run exactly before the logs are rotated away, like we did in the section above. But that's OK. What we'll do instead is run once a day, no matter the time, and make sure to process both log files access.log and access.log.1 (i.e. the current logs and the logs from yesterday). This way, we're sure not to miss any lines from the logs.

Note that GoAccess is smart enough to only consider newer entries from the log files, and discard entries that are already in the database. In other words, it's safe to parse the same log file more than once; GoAccess will do the right thing. For more details see "INCREMENTAL LOG PROCESSING" in man goaccess.


Implementation

And here's what it all looks like.

First, a little wrapper script for GoAccess:

#!/bin/bash

# Usage: $0 SITE DBDIR LOGDIR OUTDIR

set -eu

SITE=$1
DBDIR=$2
LOGDIR=$3
OUTDIR=$4

LOGFILES=()
for ext in log log.1; do
    logfile="$LOGDIR/access.$ext"
    [ -e "$logfile" ] && LOGFILES+=("$logfile")
done

if [ ${#LOGFILES[@]} -eq 0 ]; then
    echo "No log files in '$LOGDIR'"
    exit 0
fi

goaccess \
    --agent-list \
    --anonymize-ip \
    --persist \
    --restore \
    --config-file /etc/goaccess/goaccess.conf \
    --db-path "$DBDIR" \
    --log-format "COMBINED" \
    --output "$OUTDIR/$SITE.html" \
    "${LOGFILES[@]}"

This wrapper does very little. Actually, the only thing it does is check for the existence of the two log files access.log and access.log.1, to be sure that we don't ask GoAccess to process a file that does not exist (GoAccess would not be happy about that).

Save this file under /usr/local/bin/goaccess-wrapper, and don't forget to make it executable:

chmod +x /usr/local/bin/goaccess-wrapper
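
Before wiring it into systemd, you can give the wrapper a quick manual run as the goaccess user. This is just a sanity check: the site name SITE is the placeholder used throughout this tutorial, sudo is only one way to switch user, and the goaccess user needs read access to the log files for this to work:

# Run the wrapper once by hand, as the goaccess user, with explicit arguments:
sudo -u goaccess /usr/local/bin/goaccess-wrapper \
    SITE \
    /var/lib/goaccess-db/SITE \
    /var/log/apache2/SITE \
    /tmp/goaccess-reports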

Then, create a systemd parameterized unit file, so that we can run this wrapper as a systemd service. Save it under /etc/systemd/system/goaccess@.service:

[Unit]
Description=Update GoAccess report - %i
ConditionPathIsDirectory=/var/lib/goaccess-db/%i
ConditionPathIsDirectory=/var/log/apache2/%i
ConditionPathIsDirectory=/tmp/goaccess-reports
PartOf=goaccess.service

[Service]
Type=oneshot
User=goaccess
Group=goaccess
Nice=19
ExecStart=/usr/local/bin/goaccess-wrapper \
 %i \
 /var/lib/goaccess-db/%i \
 /var/log/apache2/%i \
 /tmp/goaccess-reports

So, what is a systemd parameterized unit? It's a service to which you can pass an argument when you enable it. The %i in the unit definition will be replaced by this argument. In our case, the argument will be the name of the site that we want to process.

As you can see, we use the directive ConditionPathIsDirectory= extensively, so that if ever one of the required directories does not exist, the unit will just be skipped (and marked as such in the logs). It's a graceful way to fail.

We run the wrapper as the user and group goaccess, thanks to User= and Group=. We also use Nice= to give a low priority to the process.

At this point, it's already possible to test. Just make sure that you created a directory for the GoAccess database:

mkdir -p /var/lib/goaccess-db/SITE
chown goaccess:goaccess /var/lib/goaccess-db/SITE

Also make sure that the output directory exists:

mkdir /tmp/goaccess-reports
chown goaccess:goaccess /tmp/goaccess-reports

Then reload systemd and fire the unit to see if it works:

systemctl daemon-reload
systemctl start goaccess@SITE.service
journalctl | tail

And that should work already.

As you can see, the argument, SITE, is passed in the systemctl start command. We just append it after the @, in the name of the unit.
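
For instance, with a hypothetical site named example.org, the instance name carries over to every %i in the unit:

# %i becomes "example.org" everywhere it appears in goaccess@.service:
systemctl start goaccess@example.org.service

# which effectively runs:
#   goaccess-wrapper example.org /var/lib/goaccess-db/example.org \
#       /var/log/apache2/example.org /tmp/goaccess-reports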

Now, let's create another GoAccess service file, whose sole purpose is to group all the parameterized units together, so that we can start them all in one go. Note that we don't use a systemd target for that, because ultimately we want to run it once a day, and that would not be possible with a target. So instead we use a dummy oneshot service.

So here it is, saved under /etc/systemd/system/goaccess.service:

[Unit]
Description=Update GoAccess reports
Requires= \
 goaccess@SITE1.service \
 goaccess@SITE2.service

[Service]
Type=oneshot
ExecStart=true

As you can see, we simply list the sites that we want to process in the Requires= directive. In this example we have two sites named SITE1 and SITE2.

Let's ensure that everything is still good:

systemctl daemon-reload
systemctl start goaccess.service
journalctl | tail

Check the logs; both sites SITE1 and SITE2 should have been processed.

And finally, let's create a timer, so that systemd runs goaccess.service once a day. Save it under /etc/systemd/system/goaccess.timer:

[Unit]
Description=Daily update of GoAccess reports

[Timer]
OnCalendar=daily
RandomizedDelaySec=1h
Persistent=true

[Install]
WantedBy=timers.target

Finally, enable the timer:

systemctl daemon-reload
systemctl enable --now goaccess.timer
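
To double-check that the timer is actually scheduled, you can also list it and look at the next activation time:

systemctl list-timers goaccess.timer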

At this point, everything should be OK. Just come back tomorrow and check the logs with something like:

journalctl | grep goaccess

Last word: if you have only one site to process, you can of course simplify; for example, you can hardcode all the paths in goaccess.service instead of using a parameterized unit. Up to you.
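
For the record, here's a minimal sketch of what such a simplified, non-parameterized unit could look like, assuming a single site named SITE and the same paths as above. The timer keeps working unchanged, since it still points at goaccess.service:

# /etc/systemd/system/goaccess.service - single-site variant (sketch)
[Unit]
Description=Update GoAccess report
ConditionPathIsDirectory=/var/lib/goaccess-db/SITE
ConditionPathIsDirectory=/var/log/apache2/SITE
ConditionPathIsDirectory=/tmp/goaccess-reports

[Service]
Type=oneshot
User=goaccess
Group=goaccess
Nice=19
ExecStart=/usr/local/bin/goaccess-wrapper \
 SITE \
 /var/lib/goaccess-db/SITE \
 /var/log/apache2/SITE \
 /tmp/goaccess-reports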

Daily operations

So in this part, we assume that you have GoAccess all set up and running, once a day or so. Let's just go over a few things worth noting.

Serve your report

Up to now in this tutorial, we created the reports in /tmp/goaccess-reports, but that was just for the sake of the example. You will probably want to save your reports in a directory that is served by your web server, so that, well, you can actually look at them in your web browser; that was the point, right?

So how to do that is a bit out of scope here, and I guess that if you want to monitor your website, you already have a website, so you will have no trouble serving the GoAccess HTML report.

However there's an important detail to be aware of: GoAccess shows all the IP addresses of your visitors in the report. As long as the report is private it's OK, but if ever you make your GoAccess report public, then you should definitely invoke GoAccess with the option --anonymize-ip.

Keep an eye on the logs

In this tutorial, the reports we create, along with the GoAccess databases, will grow bigger every day, forever. It also means that the GoAccess processing time will grow a bit each day.

So maybe the first thing to do is to keep an eye on the logs, to see how long it takes GoAccess to do its job every day. Also, maybe you'd like to keep an eye on the size of the GoAccess database with:

du -sh /var/lib/goaccess-db/SITE

If your site has few visitors, I suspect it won't be a problem though.

You could also be a bit pro-active and prevent this problem ahead of time, for example by breaking the reports into, say, monthly reports. Meaning that every month, you would create a new database in a new directory, and also start a new HTML report. This way you'd have monthly reports, and you'd keep the GoAccess processing time under control by limiting the database size to one month of data.

This can be achieved very easily, by including something like YEAR-MONTH in the database directory, and in the HTML report. You can handle that automatically in the wrapper script, for example:

sfx=$(date +'%Y-%m')

mkdir -p "$DBDIR/$sfx"

goaccess \
    --db-path "$DBDIR/$sfx" \
    --output "$OUTDIR/$SITE-$sfx.html" \
    ...

You get the idea.
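
To make that concrete, here's a minimal sketch of the option B wrapper adapted for monthly databases and reports. It only differs from the wrapper above in the suffix handling, and it assumes the same directory layout and placeholder names:

#!/bin/bash

# Usage: $0 SITE DBDIR LOGDIR OUTDIR
# Variant of the wrapper above: one database and one report per month.

set -eu

SITE=$1
DBDIR=$2
LOGDIR=$3
OUTDIR=$4

# Monthly suffix, e.g. "2020-08"
sfx=$(date +'%Y-%m')
dbdir=$DBDIR/$sfx
mkdir -p "$dbdir"

LOGFILES=()
for ext in log log.1; do
    logfile="$LOGDIR/access.$ext"
    [ -e "$logfile" ] && LOGFILES+=("$logfile")
done

if [ ${#LOGFILES[@]} -eq 0 ]; then
    echo "No log files in '$LOGDIR'"
    exit 0
fi

goaccess \
    --agent-list \
    --anonymize-ip \
    --persist \
    --restore \
    --config-file /etc/goaccess/goaccess.conf \
    --db-path "$dbdir" \
    --log-format "COMBINED" \
    --output "$OUTDIR/$SITE-$sfx.html" \
    "${LOGFILES[@]}"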

Further notes

Migration from older versions

With the --persist option, GoAccess keeps all the information from the logs in a database, so that it can re-use it later. In prior versions, GoAccess used the Tokyo Cabinet key-value store for that. However starting from v1.4, GoAccess dropped this dependency and now uses its own database format.

As a result, the previous database can't be used anymore; you will have to remove it and restart from zero. At the moment there is no way to convert the data from the old database to the new one. If you're interested, this is discussed upstream at #1783.

Another thing that changed with this new version is the name for some of the command-line options. For example, --load-from-disk was dropped in favor of --restore, and --keep-db-files became --persist. So you'll have to look at the documentation a bit, and update your script(s) accordingly.

Other ways to use GoAccess

It's also possible to do it completely differently. You could keep GoAccess running, pretty much like a daemon, with the --real-time-html option, and have it process the logs continuously, rather than calling it on a regular basis.

It's also possible to see the GoAccess report straight in the terminal, thanks to libncurses, rather than creating an HTML report.

And much more, GoAccess is packed with features.

Conclusion

I hope that this tutorial helped some of you folks. Feel free to drop an e-mail for comments.

,

Planet DebianEnrico Zini: Toxic positivity links

“That which we do not bring to consciousness appears in our lives as fate.” — Carl Jung
Emotional support of others can take the form of surface-level consolation. But compassion means being willing to listen and feel, even when it's uncomfortable.
Ultimately, the driving force behind the “power of positive thinking” meme is the word “power.” But what about those whose bodies are not powerful? What about those who are vulnerable? What about those who are tired, isolated, and struggling? What about those who are ill? What about those who lack…
I have often been dismissive or unhelpful when someone close to me was dealing with painful circumstances, having learned to “accentuate the positive.” In the more recent past, I have recognized these behavioral patterns as part of what some mental health professionals term, “toxic positivity.”
Toxic positivity is the overgeneralization of a happy, optimistic state resulting in the denial & invalidation of the authentic human emotional experience.

Planet DebianHolger Levsen: 20200802-debconf4

DebConf4

This t-shirt is 16 years old and from DebConf4. Again, I should probably wash it at 60 degrees Celsius for once...

DebConf4 was my 2nd DebConf and took place in Porto Alegre, Brasil.

Like many DebConfs, it was a great opportunity to meet people: I remember sitting in the lobby of the venue and some guy asked me what I did in Debian and I told him about my little involvements and then asked him what he was doing, and he told me he wanted to become involved in Debian again, after getting distracted away. His name was Ian Murdock...

DebConf4 also had a very cool history session in the hallway track (IIRC, but see below) with Bdale Garbee, Ian Jackson and Ian Murdock and with a young student named Biella Coleman busy writing notes.

That same hallway also saw the kickoff meeting of the Debian Women project, though sadly today http://tinc.debian.net ("there's no cabal") only shows an apache placeholder page and not a picture of that meeting.

DebConf4 was also the first time I got a bit involved in preparing DebConf: together with Jonas Smedegaard I set up some computers there, using FAI. I had no idea that this was the start of me contributing to DebConfs for the next ten years.

And of course I also saw some talks, including one which I really liked, which then in turn made me notice there were no people doing video recordings, which then led to something...

I missed the group picture of this one. I guess it's important to me to mention it because I've met very wonderful people at this DebConf... (some mentioned in this post, some not. You know who you are!)

Afterwards some people stayed in Porto Alegre for FISL, where we saw Lawrence Lessig present Creative Commons to the world for the first time. On the flight back I sat next to a very friendly guy from Poland and we talked almost the whole flight and then we never saw each other again, until 15 years later in Asia...

Oh, and then, after DebConf4, I used IRC for the first time. And stayed in the #debconf4 IRC channel for quite some years :)

Finally, DebConf4 and more importantly FISL, which was really big (5000 people?) and after that, the Wizards of OS conference in Berlin (which had a very nice talk about Linux in different places in the world, illustrating the different states of 'first they ignore you, then they laugh at you, then they fight you, then you win'), made me quit my job at a company supporting Windows- and Linux-setups as I realized I'd better start freelancing with Linux-only jobs. So, once again, my life would have been different if I had not attended these events!

Note: yesterday's post about DebConf3 was thankfully corrected twice. This might well happen to this post too! :)

Planet DebianEnrico Zini: Libreoffice presentation tips

Snap guides

Dragging from the rulers does not always create snap guides. If it doesn't, click on the slide background, "Snap guides", "Insert snap guide". In my case, after the first snap guide was manually inserted, it was possible to drag new one from the rulers.

Master slides

How to edit a master slide

  • Show master slides side pane
  • Right click on master slide
  • Edit Master...
  • An icon appears in the toolbar: "Close Master View"
  • Apply to all slides might not apply to the first slide created as the document was opened

Change styles in master slide

Do not change properties of text by selecting placeholder text in the Master View. Instead, open the Styles and formatting sidebar, and edit the styles in there.

This means the style changes are applied to pages in all layouts, not just the "Title, Content" layout that is the only one editable in the "Master View".

How to duplicate a master slide

There seems to be no feature implemented for this, but you can do it, if you insist:

  • Save a copy of the document
  • Rename the master slide
  • Drag a slide, that uses the renamed master slide, from the copy of the document to the original one

It's needed enough that someone made a wikihow: https://www.wikihow.com/Copy-a-LibreOffice-Impress-Master-Slide archive.org

How to change the master slide for a layout that is not "Title, Content"

I could not find a way to do it, but read on for a workaround.

I found an ask.libreoffice.org question that went unanswered.

I asked on #libreoffice on IRC and got no answer:

Hello. I'm doing the layout for a presentation in impress, and I can edit all sorts of aspects of the master slide. It seems that I can only edit the "Title, Content" layout of the master slide, though. I'd like to edit, for example, the "Title only" layout so that the title appears in a different place than the top of the page. Is it possible to edit specific layouts in a master page?

In the master slide editor it seems impossible to select a layout, for example.

Alternatively I tried creating multiple master slides, but then if I want to create a master slide for a title page, there's no way to remove the outline box, or the title box.

My work around has been to create multiple master slides, one for each layout. For a title layout, I moved the outline box into a corner, and one has to remove it manually after create a new slide.

There seems to be no way of changing the position of elements not found in the "Title, Content" layout, like "Subtitle". On the other hand, given that one's working with an entirely different master slide, one can abuse the outline box as a subtitle.

Note that if you later decide to change a style element for all the slides, you'll need to go propagate the change to the "Styles and Formatting" menu of all master slides you're using.

Planet DebianAndrew Cater: Debian 10.5 media testing - 202001082250 - last few debian-live images being tested for amd64 - Calamares issue - Post 5 of several.

Last few debian-live images being tested for amd64. We have found a bug with the debian-live Gnome flavour. This specifically affects installs after booting from the live media and then installing to the machine using the Calamares installer found on the desktop. The bug was introduced as a fix for one issue that has produced further buggy behaviour as a result.

Fixes are known - we've had highvoltage come and debug them with us - but they will not be put out with this release; they will wait for the 10.6 release, which will allow a longer time for debugging overall.

You can still run from the live-media, you can still install with the standard Debian installers found in the menu of the live-media disk - this is _only_ a limited time issue with the Calamares installer. At this point in the release cycle, it's been judged better to release the images as they are - with known and documented issues - than to try and debug them in a hurry and risk damaging or delaying a stable point release.

Planet DebianEnrico Zini: Gender, inclusive communities, and dragonflies

From https://en.wikipedia.org/wiki/Dragonfly#Sex_ratios:

Sex ratios

The sex ratio of male to female dragonflies varies both temporally and spatially. Adult dragonflies have a high male-biased ratio at breeding habitats. The male-bias ratio has contributed partially to the females using different habitats to avoid male harassment.

As seen in Hine's emerald dragonfly (Somatochlora hineana), male populations use wetland habitats, while females use dry meadows and marginal breeding habitats, only migrating to the wetlands to lay their eggs or to find mating partners.

Unwanted mating is energetically costly for females because it affects the amount of time that they are able to spend foraging.

,

Planet DebianMolly de Blanc: busy busy

I’ve been working with Karen Sandler over the past few months on the first draft of the Declaration of Digital Autonomy. Feedback welcome, please be constructive. It’s a pretty big deal for me, and feels like the culmination of a lifetime of experiences and the start of something new.

We talked about it at GUADEC and HOPE. We don’t have any other talks scheduled yet, but are available for events, meetups, dinner parties, and b’nai mitzvahs.

Planet DebianAndrew Cater: Debian 10.5 media testing - 202008012055 - post 4 of several

We've more or less finished testing on the Debian install images. Now moving on to the debian-live images. Bugs found and being triaged live as I type. Lots of typing and noises in the background of the video conference. Now at about 12-14 hours in on this for some of the participants. Lots of good work still going on, as ever.

Planet DebianAndrew Cater: Debian 10.5 media testing - pause for supper - 202001081715 - post 3 of several

Various of the folk doing this have taken a food break until 1900 local. A few glitches, a few that needed to be tried over again - but it's all going fairly well.

It is likely that at least one of the CD images will be dropped. The XFCE desktop install CD for i386 is now too large to fit on CD media. The netinst .iso files / the DVD 1 file / any of the larger files available via Jigdo will all help you achieve the same result.

There are relatively few machines that are i386 architecture only - it might be appropriate for people to use 64 bit amd64 from this point onwards as pure i386 machines are now approaching ten years old as a minimum. If you do need a graphical user environment for a pure i386 machine, it can be installed by using an expert install or using tasksel in the installation process.

Planet DebianHolger Levsen: 20200801-debconf3

DebConf3

This t-shirt is 17 years old and from DebConf3. I should probably wash it at 60 degrees Celsius for once...

DebConf3 was my first DebConf and took place in Oslo, Norway, in 2003. I was very happy to be invited, like any Debian contributor at that time, and that Debian would provide food and accommodation for everyone. Accommodation was sleeping on the floor in some classrooms of an empty school and I remember having tasted grasshoppers provided by a friendly Gunnar Wolf there, standing in line on the first day with the SSH maintainer (OMG!1 (update: I originally wrote here that it wasn't Colin back then, but Colin mailed me to say that he was indeed maintaining SSH even back then, so I've met a previous maintainer there)) and meeting the one Debian person I had actually worked with before: Thomas Lange or MrFAI (update: Thomas also mailed me and said this was at DebConf5). In Oslo I also was exposed to Skolelinux / Debian Edu for the first time, saw a certain presentation from the FTP masters and also noticed some people recording the talks, though as I learned later these videos were never released to the public. And there was this fifteen-year-old called Toresbe, who powered on the PDPs, which were double his age. And then actually made use of them. And and and.

I'm very happy I went to this DebConf. Without going my Debian journey would have been very different today. Thanks to everyone who made this such a welcoming event. Thanks to anyone who makes any event welcoming! :)

,

Krebs on SecurityThree Charged in July 15 Twitter Compromise

Three individuals have been charged for their alleged roles in the July 15 hack on Twitter, an incident that resulted in Twitter profiles for some of the world’s most recognizable celebrities, executives and public figures sending out tweets advertising a bitcoin scam.

Amazon CEO Jeff Bezos’s Twitter account on the afternoon of July 15.

Nima “Rolex” Fazeli, a 22-year-old from Orlando, Fla., was charged in a criminal complaint in Northern California with aiding and abetting intentional access to a protected computer.

Mason “Chaewon” Sheppard, a 19-year-old from Bognor Regis, U.K., also was charged in California with conspiracy to commit wire fraud, money laundering and unauthorized access to a computer.

A U.S. Justice Department statement on the matter does not name the third defendant charged in the case, saying juvenile proceedings in federal court are sealed to protect the identity of the youth. But an NBC News affiliate in Tampa reported today that authorities had arrested 17-year-old Graham Clark as the alleged mastermind of the hack.

17-year-old Graham Clark of Tampa, Fla. was among those charged in the July 15 Twitter hack. Image: Hillsborough County Sheriff’s Office.

Wfla.com said Clark was hit with 30 felony charges, including organized fraud, communications fraud, one count of fraudulent use of personal information with over $100,000 or 30 or more victims, 10 counts of fraudulent use of personal information and one count of access to a computer or electronic device without authority. Clark’s arrest report is available here (PDF). A statement from prosecutors in Florida says Clark will be charged as an adult.

On Thursday, Twitter released more details about how the hack went down, saying the intruders “targeted a small number of employees through a phone spear phishing attack,” that “relies on a significant and concerted attempt to mislead certain employees and exploit human vulnerabilities to gain access to our internal systems.”

By targeting specific Twitter employees, the perpetrators were able to gain access to internal Twitter tools. From there, Twitter said, the attackers targeted 130 Twitter accounts, tweeting from 45 of them, accessing the direct messages of 36 accounts, and downloading the Twitter data of seven.

Among the accounts compromised were Democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

The hacked Twitter accounts were made to send tweets suggesting they were giving away bitcoin, and that anyone who sent bitcoin to a specified account would be sent back double the amount they gave. All told, the bitcoin accounts associated with the scam received more than 400 transfers totaling more than $100,000.

Sheppard’s alleged alias Chaewon has been mentioned twice in stories here since the July 15 incident. On July 16, KrebsOnSecurity wrote that just before the Twitter hack took place, a member of the social media account hacking forum OGUsers named Chaewon advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any twitter account.

On July 17, The New York Times ran a story that featured interviews with several people involved in the attack. The young men told The Times they weren’t responsible for the Twitter bitcoin scam and had only brokered the purchase of accounts from the Twitter hacker — who they referred to only as “Kirk.”

One of those interviewed by The Times used the alias “Ever So Anxious,” and said he was a 19-year-old from the U.K. In my follow-up story on July 22, it emerged that Ever So Anxious was in fact Chaewon.

The person who shared that information was the principal subject of my July 16 post, which followed clues from tweets sent by one of the accounts claimed during the Twitter compromise back to a 21-year-old from the U.K. who uses the nickname PlugWalkJoe.

That individual shared a series of screenshots showing he had been in communications with Chaewon/Ever So Anxious just prior to the Twitter hack, and had asked him to secure several desirable Twitter usernames from the Twitter hacker. He added that Chaewon/Ever So Anxious also was known as “Mason.”

The negotiations over highly-prized Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams. PlugWalkJoe is pictured here chatting with Ever So Anxious/Chaewon/Mason using his Discord username “Beyond Insane.”

On July 22, KrebsOnSecurity interviewed Mason/Chaewon/Ever So Anxious, who confirmed that PlugWalkJoe had indeed asked him to ask Kirk to change the profile picture and display name for a specific Twitter account on July 15. Mason/Chaewon/Ever So Anxious acknowledged that while he did act as a “middleman” between Kirk and others seeking to claim desirable Twitter usernames, he had nothing to do with the hijacking of the VIP Twitter accounts for the bitcoin scam that same day.

“Encountering Kirk was the worst mistake I’ve ever made due to the fact it has put me in issues I had nothing to do with,” he said. “If I knew Kirk was going to do what he did, or if even from the start if I knew he was a hacker posing as a rep I would not have wanted to be a middleman.”

Another individual who told The Times he worked with Ever So Anxious/Chaewon/Mason in communicating with Kirk said he went by the nickname “lol.” On July 22, KrebsOnSecurity identified lol as a young man who went to high school in Danville, Calif.

Federal investigators did not mention lol by his nickname or his real name, but the charging document against Sheppard says that on July 21 federal agents executed a search warrant at a residence in Northern California to question a juvenile who assisted Kirk and Chaewon in selling access to Twitter accounts. According to that document, the juvenile and Chaewon had discussed turning themselves in to authorities after the Twitter hack became publicly known.

CryptogramFriday Squid Blogging: Squid Proteins for a Better Face Mask

Researchers are synthesizing squid proteins to create a face mask that better survives cleaning. (And you thought there was no connection between squid and COVID-19.) The military thinks this might have applications for self-healing robots.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramData and Goliath Book Placement

Notice the copy of Data and Goliath just behind the head of Maine Senator Angus King.

Screenshot of MSNBC interview with Angus King

This demonstrates the importance of a vibrant color and a large font.

Kevin RuddABC: Closing The Gap, AUSMIN & Public Health

E&OE TRANSCRIPT
TELEVISION INTERVIEW

ABC NEWS CHANNEL
AFTERNOON BRIEFING
31 JULY 2020

Patricia Karvelas
My next guest this afternoon is the former prime minister Kevin Rudd. He’s the man that delivered the historic Apology to the stolen generations and launched the original Close the Gap targets. Of course, yesterday, there was a big revamp of the Close the Gap so we thought it was a good idea to talk to the man originally responsible. Kevin Rudd, welcome.

Kevin Rudd
Good to be with you. Patricia.

Patricia Karvelas
Prime Minister Scott Morrison said there had been a failure to partner with Indigenous people to develop and deliver the 2008 targets. Is that something you regret?

Kevin Rudd
Oh, Prime Minister Morrison is always out to differentiate himself from what previous Labor governments have done. We worked closely with Indigenous leaders at the time through minister Jenny Macklin in framing those Closing the Gap targets. The bottom line is: we deliver the National Apology; we established a Closing the Gap framework, which we thought should be measurable; and on top of that, Patricia, what we also did was, we negotiated the first-ever commonwealth-state agreement in 2008-9 over the following 10-year period, which had Closing the Gap targets as the basis for the funding commitments by the commonwealth and the states. Those things have been sustained into the future. If the Indigenous leadership of Australia have decided that it’s time to refresh the targets then I support Pat Turner’s leadership and I support what Indigenous leaders have done.

Patricia Karvelas
She’s got a seat at the table though. I remember, you know, I covered it extensively at the time. But she has got a point and they have a point that they now have a seat at the table in a different partnership model than was delivered originally.

Kevin Rudd
Well, as you know, the realities back in 2007 were radically different. Back then there was a huge partisan fight over whether we should have a National Apology. We had people like Peter Dutton and Tony Abbott threatening not to participate in the Apology. So it was a highly partisan environment back then. So these things evolve over time. The Apology remains in place. The national statement each year on the anniversary of the Apology remains in place on progress in achieving Closing the Gap, our successes and our failures. But yes, I welcome any advance that’s been made. But here’s the rub, Patricia: why have there been challenges in delivering on previous Closing the Gap targets? In large part it’s because in the 2014 budget, the first year after the current Coalition government took office, as you know, someone who’s covered the area extensively, they pulled out half a billion dollars worth of funding. Now you’re not going to achieve targets, if simultaneously you gut the funding capacity to act in these areas. That’s what essentially happened over the last five-to-six years.

Patricia Karvelas
That’s absolutely part of the story. But is it all of the story? I mean, if you look at failure to deliver on these targets, it’s been very disappointing for Aboriginal Australians. But I think for Australians who wanted to see the gap closed because it’s the right thing to do; it’s the kind of country they want to live in. There are other reasons aren’t there, that the gap hasn’t been closed? Isn’t one of the reasons that it’s lacked Aboriginal authority and ownership, that it’s been a top-down approach?

Kevin Rudd
Well, I welcome the statement by Pat Turner in bringing Indigenous leadership to the table with these new targets for the future. I’m fully supportive of that. You’re looking at someone who has stood for a lifetime in empowerment of Indigenous organisations. As I said, realities change over time, and I welcome what will happen in the future. But the bottom line is, Patricia, with or without Indigenous leadership from the ground up, nothing will happen in the absence of physical resources as well. And that is a critical part of the equation as I think you’ve just agreed with me. And we can have as many notional targets as we like, but if on day two you, as it were, disembowel the funding arrangements, which is what happened under the current government, guess what: nothing happens. And I note that when these new targets were announced yesterday that Ken Wyatt and the Prime Minister were silent on the question of future funding commitments by the commonwealth. So our Closing the Gap targets, yes, they weren’t all realised. We were on track to achieve two of the six targets that we set. We made some progress on another two. And we were kind of flatlining when it came to the remaining two. But I make no apology for measurement, Patricia, because unless you measure things, guess what? They never happen. And so I’m all for actually an annual report card on success and failure. That’s why I did it in the first place, and without apology.

Patricia Karvelas
I want to move on just to another story that was big this week. What did you make of this week’s AUSMIN talks and the Foreign Minister’s emphasis on Australia taking what is an independent position here, particularly with our relationship with China, was that significant?

Kevin Rudd
Well, whacko! The Australian Foreign Minister says we should have an independent foreign policy! Hold the front page! I mean, for God’s sake.

Patricia Karvelas
Well, it was in the AUSMIN framework. I mean, it wasn’t just a statement to the media, do you think?

Kevin Rudd
Yeah, yeah, but you know, the function of the national government of Australia is to run the foreign policy of Australia, an independent foreign policy. And if the conservatives have recently discovered this principle is a good one, well, I welcome them to the table. That’s been our view for about the last hundred years that the Australian Labor Party has been engaged in the foreign policy debates of this country. But why did she say that? That’s the more important question, I think, Patricia. I think the Australian Government, both Morrison and the Foreign Minister looked at Secretary of State Pompeo’s speech at the Nixon Library a week or so ago when effectively he called for a Second Cold War against China and, within that, called for the overthrow of the Chinese Communist Party. Even for the current Australian conservative government, that looked like a bridge too far, and I think they basically took fright at what they were walking into. And my judgment is: it’s very important to separate out our national interests from those the United States; secondly, understand what a combined allied strategy could and should be on China, as opposed to finding yourself wrapped up either in President Trump’s re-election strategy or Secretary of State Pompeo’s interest in securing the Republican nomination in 2024. These are quite separate political matters as opposed to national strategy.

Patricia Karvelas
Just on COVID, before I let you go, the Queensland Government has declared all of Greater Sydney as a COVID-19 hotspot and the state’s border will be closed to people travelling from that region from 1am on Saturday. Is that the right decision?

Kevin Rudd
Well, absolutely right. I mean, Premier Palaszczuk has faced like every premier, Daniel Andrews and Gladys Berejiklian, very hard public policy decisions. But what Premier Palaszczuk has done — and I’ve been here in Queensland for the last three and a half months now, observing this on a daily basis — is that she has taken the Chief Medical Officer’s advice day-in, day-out and acted accordingly. She’s come under enormous attack within Queensland, led initially by the Murdoch media, followed up by Frecklington, the leader of the LNP, saying ‘open the borders’. In fact, I think Frecklington called for the borders to be opened some 60 or 70 separate times, but to give Palaszczuk her due, she’s just stood her ground and said ‘my job is to give effect to the Chief Medical Officer’s advice, despite all the political clamour to the contrary’. So as she did then and as she does now, I think that’s right in terms of the public health and wellbeing of your average Queenslanders, including me.

Patricia Karvelas
Including you. And now you are very much a long-standing Queenslander being there for that long. Kevin Rudd, thank you so much for joining us this afternoon.

Kevin Rudd
Still from Queensland. Here to help. Bye.

Patricia Karvelas
Always. That’s the former prime minister Kevin Rudd, Joining me to talk about yesterday’s Closing the Gap announcement, defending his government’s legacy there but also, of course, talking about the failure as well to deliver on those targets. But particularly pointed comments around the withdrawal of funding in relation to Indigenous affairs which happened under the Abbott Government and he says was responsible for the failure to deliver at the rate that was expected, and it’s been obviously a disappointing journey not quite as planned. Now, a whole bunch of new targets.

The post ABC: Closing The Gap, AUSMIN & Public Health appeared first on Kevin Rudd.

LongNowPredicting the Animals of the Future

Jim Cooke / Gizmodo

Gizmodo asks half a dozen natural historians to speculate on who is going to be doing what jobs on Earth after the people disappear. One of the streams that runs wide and deep through this series of fun thought experiments is how so many niches stay the same through catastrophic changes in the roster of Earth’s animals. Dinosaurs die out but giant predatory birds evolve to take their place; butterflies took over from (unrelated) dot-winged, nectar-sipping giant lacewing pollinator forebears; before orcas there were flippered ocean-going crocodiles, and there will probably be more one day.

In Annie Dillard’s Pulitzer Prize-winning Pilgrim at Tinker Creek, she writes about a vision in which she witnesses glaciers rolling back and forth “like blinds” over the Appalachian Mountains. In this Gizmodo piece, Alexis Mychajliw of the La Brea Tar Pits & Museum talks about how fluctuating sea levels connected island chains or made them, fusing and splitting populations in great oscillating cycles, shrinking some creatures and giantizing others. There’s something soothing in the view from orbit that paleontologists, like other deep-time mystics, possess, embody, and transmit: a sense for the clockwork of the cosmos and its orderliness, an appreciation for the powerful resilience of life even in the face of the ephemerality of life-forms.

While everybody interviewed here has obvious pet futures owing to their areas of interest, hold all of them superimposed together and you’ll get a clearer image of the secret teachings of biology…

(This article must have been inspired deeply by Dougal Dixon’s book After Man, but doesn’t mention him – perhaps a fair turn, given Dixon was accused of plagiarizing Wayne Barlowe for his follow-up, Man After Man.)

Worse Than FailureError'd: Please Reboot Faster, I Can't Wait Any Longer

"Saw this at a German gas station along the highway. The reboot screen at the pedestal just kept animating the hourglass," writes Robin G.

 

"Somewhere, I imagine there's a large number of children asking why their new bean bag is making them feel hot and numb," Will N. wrote.

 

Joel B. writes, "I came across these 'deals' on the Microsoft Canada store. Normally I'd question it, but based on my experiences with Windows, I bet, to them, the math checks out."

 

Kyle H. wrote, "Truly, nothing but the best quality strip_zeroes will be accepted."

 

"My Nan is going to be thrilled at the special discount on these masks!" Paul R. wrote.

 

Paul G. writes, "I know it seemed like the hours were passing more slowly, and thanks to Apple, I now know why."

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

CryptogramFake Stories in Real News Sites

Fireeye is reporting that a hacking group called Ghostwriter broke into the content management systems of Eastern European news sites to plant fake stories.

From a Wired story:

The propagandists have created and disseminated disinformation since at least March 2017, with a focus on undermining NATO and the US troops in Poland and the Baltics; they've posted fake content on everything from social media to pro-Russian news websites. In some cases, FireEye says, Ghostwriter has deployed a bolder tactic: hacking the content management systems of news websites to post their own stories. They then disseminate their literal fake news with spoofed emails, social media, and even op-eds the propagandists write on other sites that accept user-generated content.

That hacking campaign, targeting media sites from Poland to Lithuania, has spread false stories about US military aggression, NATO soldiers spreading coronavirus, NATO planning a full-on invasion of Belarus, and more.


Kevin RuddAIIA: The NT’s Global Opportunities and Challenges

REMARKS AT THE LAUNCH OF THE
NORTHERN TERRITORY BRANCH OF THE
AUSTRALIAN INSTITUTE OF INTERNATIONAL AFFAIRS

 

 

Image: POIS Tom Gibson/ADF

 

The post AIIA: The NT’s Global Opportunities and Challenges appeared first on Kevin Rudd.

Krebs on SecurityIs Your Chip Card Secure? Much Depends on Where You Bank

Chip-based credit and debit cards are designed to make it infeasible for skimming devices or malware to clone your card when you pay for something by dipping the chip instead of swiping the stripe. But a recent series of malware attacks on U.S.-based merchants suggest thieves are exploiting weaknesses in how certain financial institutions have implemented the technology to sidestep key chip card security features and effectively create usable, counterfeit cards.

A chip-based credit card. Image: Wikipedia.

Traditional payment cards encode cardholder account data in plain text on a magnetic stripe, which can be read and recorded by skimming devices or malicious software surreptitiously installed in payment terminals. That data can then be encoded onto anything else with a magnetic stripe and used to place fraudulent transactions.

Newer, chip-based cards employ a technology known as EMV that encrypts the account data stored in the chip. The technology causes a unique encryption key — referred to as a token or “cryptogram” — to be generated each time the chip card interacts with a chip-capable payment terminal.

Virtually all chip-based cards still have much of the same data that’s stored in the chip encoded on a magnetic stripe on the back of the card. This is largely for reasons of backward compatibility since many merchants — particularly those in the United States — still have not fully implemented chip card readers. This dual functionality also allows cardholders to swipe the stripe if for some reason the card’s chip or a merchant’s EMV-enabled terminal has malfunctioned.

But there are important differences between the cardholder data stored on EMV chips versus magnetic stripes. One of those is a component in the chip known as an integrated circuit card verification value or “iCVV” for short — also known as a “dynamic CVV.”

The iCVV differs from the card verification value (CVV) stored on the physical magnetic stripe, and protects against the copying of magnetic-stripe data from the chip and the use of that data to create counterfeit magnetic stripe cards. Both the iCVV and CVV values are unrelated to the three-digit security code that is visibly printed on the back of a card, which is used mainly for e-commerce transactions or for card verification over the phone.

The appeal of the EMV approach is that even if a skimmer or malware manages to intercept the transaction information when a chip card is dipped, the data is only valid for that one transaction and should not allow thieves to conduct fraudulent payments with it going forward.

However, for EMV’s security protections to work, the back-end systems deployed by card-issuing financial institutions are supposed to check that when a chip card is dipped into a chip reader, only the iCVV is presented; and conversely, that only the CVV is presented when the card is swiped. If somehow these do not align for a given transaction type, the financial institution is supposed to decline the transaction.

The trouble is that not all financial institutions have properly set up their systems this way. Unsurprisingly, thieves have known about this weakness for years. In 2017, I wrote about the increasing prevalence of “shimmers,” high-tech card skimming devices made to intercept data from chip card transactions.

A close-up of a shimmer found on a Canadian ATM. Source: RCMP.

More recently, researchers at Cyber R&D Labs published a paper detailing how they tested 11 chip card implementations from 10 different banks in Europe and the U.S. The researchers found they could harvest data from four of them and create cloned magnetic stripe cards that were successfully used to place transactions.

There are now strong indications the same method detailed by Cyber R&D Labs is being used by point-of-sale (POS) malware to capture EMV transaction data that can then be resold and used to fabricate magnetic stripe copies of chip-based cards.

Earlier this month, the world’s largest payment card network Visa released a security alert regarding a recent merchant compromise in which known POS malware families were apparently modified to target EMV chip-enabled POS terminals.

“The implementation of secure acceptance technology, such as EMV® Chip, significantly reduced the usability of the payment account data by threat actors as the available data only included personal account number (PAN), integrated circuit card verification value (iCVV) and expiration date,” Visa wrote. “Thus, provided iCVV is validated properly, the risk of counterfeit fraud was minimal. Additionally, many of the merchant locations employed point-to-point encryption (P2PE) which encrypted the PAN data and further reduced the risk to the payment accounts processed as EMV® Chip.”

Visa did not name the merchant in question, but something similar seems to have happened at Key Food Stores Co-Operative Inc., a supermarket chain in the northeastern United States. Key Food initially disclosed a card breach in March 2020, but two weeks ago updated its advisory to clarify that EMV transaction data also was intercepted.

“The POS devices at the store locations involved were EMV enabled,” Key Food explained. “For EMV transactions at these locations, we believe only the card number and expiration date would have been found by the malware (but not the cardholder name or internal verification code).”

While Key Food’s statement may be technically accurate, it glosses over the reality that the stolen EMV data could still be used by fraudsters to create magnetic stripe versions of EMV cards presented at the compromised store registers in cases where the card-issuing bank hadn’t implemented EMV correctly.

Earlier today, fraud intelligence firm Gemini Advisory released a blog post with more information on recent merchant compromises — including Key Food — in which EMV transaction data was stolen and ended up for sale in underground shops that cater to card thieves.

“The payment cards stolen during this breach were offered for sale in the dark web,” Gemini explained. “Shortly after discovering this breach, several financial institutions confirmed that the cards compromised in this breach were all processed as EMV and did not rely on the magstripe as a fallback.”

Gemini says it has verified that another recent breach — at a liquor store in Georgia — also resulted in compromised EMV transaction data showing up for sale at dark web stores that sell stolen card data. As both Gemini and Visa have noted, in both cases proper iCVV verification from banks should render this intercepted EMV data useless to crooks.

Gemini determined that due to the sheer number of stores affected, it’s extremely unlikely the thieves involved in these breaches intercepted the EMV data using physically installed EMV card shimmers.

“Given the extreme impracticality of this tactic, they likely used a different technique to remotely breach POS systems to collect enough EMV data to perform EMV-Bypass Cloning,” the company wrote.

Stas Alforov, Gemini’s director of research and development, said financial institutions that aren’t performing these checks risk losing the ability to notice when those cards are used for fraud.

That’s because many banks that have issued chip-based cards may assume that as long as those cards are used for chip transactions, there is virtually no risk that the cards will be cloned and sold in the underground. Hence, when these institutions are looking for patterns in fraudulent transactions to determine which merchants might be compromised by POS malware, they may completely discount any chip-based payments and focus only on those merchants at which a customer has swiped their card.

“The card networks are catching on to the fact that there’s a lot more EMV-based breaches happening right now,” Alforov said. “The larger card issuers like Chase or Bank of America are indeed checking [for a mismatch between the iCVV and CVV], and will kick back transactions that don’t match. But that is clearly not the case with some smaller institutions.”

For better or worse, we don’t know which financial institutions have failed to properly implement the EMV standard. That’s why it always pays to keep a close eye on your monthly statements, and report any unauthorized transactions immediately. If your institution lets you receive transaction alerts via text message, this can be a near real-time way to keep an eye out for such activity.

CryptogramImages in Eye Reflections

In Japan, a cyberstalker located his victim by enhancing the reflections in her eye, and using that information to establish a location.

Reminds me of the image enhancement scene in Blade Runner. That was science fiction, but now image resolution is so good that we have to worry about it.

LongNowThe Digital Librarian as Essential Worker

Michelle Swanson, an Oregon-based educator and educational consultant, has written a blog post on the Internet Archive on the increased importance of digital librarians during the pandemic:

With public library buildings closed due to the global pandemic, teachers, students, and lovers of books everywhere have increasingly turned to online resources for access to information. But as anyone who has ever turned up 2.3 million (mostly unrelated) results from a Google search knows, skillfully navigating the Internet is not as easy as it seems. This is especially true when conducting serious research that requires finding and reviewing older books, journals and other sources that may be out of print or otherwise inaccessible.

Enter the digital librarian.

Michelle Swanson, “Digital Librarians – Now More Essential Than Ever” from the Internet Archive.

Kevin Kelly writes (in New Rules for the New Economy and in The Inevitable) about how an information economy flips the relative valuation of questions and answers — how search makes useless answers nearly free and useful questions even more precious than before, and knowing how to reliably produce useful questions even more precious still.

But much of our knowledge and outboard memory is still resistant to or incompatible with web search algorithms — databases spread across both analog and digital, with unindexed objects or idiosyncratic cataloging systems. Just as having map directions on your phone does not outdo a local guide, it helps to have people intimate with a library who can navigate the weird specifics. And just as scientific illustrators still exist to mostly leave out the irrelevant and make a paper clear as day (which cameras cannot do, as of 02020), a librarian is a sharp instrument that cuts straight through the extraneous info to what’s important.

Knowing what to enter in a search is one thing; knowing when it won’t come up in search and where to look amidst an analog collection is another skill entirely. Both are necessary at a time when libraries cannot receive (as many) scholars in the flesh, and what Penn State Prof Rich Doyle calls the “infoquake” online — the too-much-all-at-once-ness of it all — demands an ever-sharper reason just to stay afloat.

Learn More

  • Watch Internet Archive founder Brewster Kahle’s 02011 Long Now talk, “Universal Access to All Knowledge.”

Worse Than FailureCodeSOD: A Variation on Nulls

Submitter “NotAThingThatHappens” stumbled across a “unique” way to check for nulls in C#.

Now, there are already a few perfectly good ways to check for nulls. variable is null, for example, or use nullable types specifically. But “NotAThingThatHappens” found this approach:

if(((object)someObjThatMayBeNull) is var _)
{
    //object is null, react somehow
} 
else
{ 
  UseTheObjectInAMethod(someObjThatMayBeNull);
}

What I hate most about this is how cleverly it exploits the C# syntax to work.

Normally, the _ is a discard. It’s meant to be used for things like tuple unpacking, or in cases where you have an out parameter but don’t actually care about the output- foo(out _) just discards the output data.

But _ is also a perfectly valid identifier. So var _ creates a variable _, and the type of that variable is inferred from context- in this case, whatever type it’s being compared against in someObjThatMayBeNull. This variable is scoped to the if block, so we don’t have to worry about it leaking into our namespace, but since it’s never initialized, it’s going to choose the appropriate default value for its type- and for reference types, that default is null. By casting explicitly to object, we guarantee that our type is a reference type, so this makes sure that we don’t get weird behavior on value types, like integers.

So really, this is just an awkward way of saying someObjThatMayBeNull is null.

NotAThingThatHappens adds:

The code never made it to production… but I was surprised that the compiler allowed this.
It’s stupid, but it WORKS!

It’s definitely stupid, it definitely works, I’m definitely glad it’s not in your codebase.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Sam VargheseHistory lessons at a late stage of life

In 1987, I got a job in Dubai, to work for a newspaper named Khaleej (Gulf) Times. I was chosen because the interviewer was a jolly Briton who came down to Bombay to do the interview on 12 June.

Malcolm Payne, the first editor of the newspaper that had been started in 1978 by Iranian brothers named Galadari, told me that he had always wanted to come and pick some people to work at the paper. By then he had been pushed out of the editorship by the politics of both Pakistani and Indian journalists who worked there.

For some strange reason, he took a liking to me. At the end of about 45 minutes of what was a much more robust conversation than I had ever experienced in earlier job interviews, which were normally tense affairs, Payne told me, “You’re a good bugger, Samuel. I’ll see you in Dubai.”

I took it with a pinch of salt. Anyway, I reckoned that within a few months I would know whether or not he had been pulling my leg. I was more focused on my upcoming wedding, which was due to be held shortly.

But, Payne turned out to be a man of his word. In September, I got a telegram from Dubai asking me to send copies of my passport in order that a visa could be obtained for me to work in Dubai. I had mixed emotions: on the one hand, I was happy that a chance to get out of the grinding poverty I lived in had presented itself. At the same time, I was worried about leaving my sickly mother in India; by then, she had been a widow for a few months and I was her only son.

When my mother-in-law-to-be heard about the job opportunity, she insisted that the wedding be held before I left for Dubai. Probably she thought that once I went to the Persian Gulf, I would begin to look for another woman.

The wedding was duly fixed for 19 October and I was to leave for Dubai on 3 November.

After I landed in Dubai, I learnt about the tension that exists between most Indians and Pakistanis as a result of the partition of the subcontinent in 1947. Pakistanis are bitter because they feel they were forced to leave for a country that turned out to be a basket case, subsisting only because of aid from the US, while Indians feel that the Pakistanis were the ones who forced Britain, then the colonial ruler, to split the country.

Never did this enmity come to the fore more than when India and Pakistan sent their cricket teams to the UAE — Dubai is part of this country — to play in a tournament organised there by some businessman from Sharjah.

Of course, the whole raison d’etre for the tournament was the Indo-Pakistan enmity; pitting teams that had a history of this sort against each other was like staging a proxy war. What’s more, there were both expatriate Indians and Pakistanis in large numbers waiting eagerly to buy tickets and pour into what was, in effect, a coliseum.
The other teams who were invited — sometimes there was a three-way contest, at others a four-way fight — were just there to make up the numbers.

And the organisers always prayed for an India-Pakistan final.

A year before I arrived in Dubai, a Pakistani batsman named Javed Miandad had taken his team to victory by hitting a six off the last ball; the contests were limited to 50 overs a side. He was showered with gifts by rich Pakistanis, and one even gifted him some land. Such was the euphoria that a victory in the onetime desert generated.

Having been born and raised in Sri Lanka, I knew nothing of the history of India. My parents did not clue me in either. I learnt all about the grisly history of the subcontinent after I landed in Dubai.

That enmity resulted in several other incidents worth telling, which I shall relate soon.

,

Krebs on SecurityHere’s Why Credit Card Fraud is Still a Thing

Most of the civilized world years ago shifted to requiring computer chips in payment cards that make it far more expensive and difficult for thieves to clone and use them for fraud. One notable exception is the United States, which is still lurching toward this goal. Here’s a look at the havoc that lag has wrought, as seen through the purchasing patterns at one of the underground’s biggest stolen card shops that was hacked last year.

In October 2019, someone hacked BriansClub, a popular stolen card bazaar that uses this author’s likeness and name in its marketing. Whoever compromised the shop siphoned data on millions of card accounts that were acquired over four years through various illicit means from legitimate, hacked businesses around the globe — but mostly from U.S. merchants. That database was leaked to KrebsOnSecurity, which in turn shared it with multiple sources that help fight payment card fraud.

An ad for BriansClub has been using my name and likeness for years to peddle millions of stolen credit cards.

Among the recipients was Damon McCoy, an associate professor at New York University’s Tandon School of Engineering [full disclosure: NYU has been a longtime advertiser on this blog]. McCoy’s work in probing the credit card systems used by some of the world’s biggest purveyors of junk email greatly enriched the data that informed my 2014 book Spam Nation, and I wanted to make sure he and his colleagues had a crack at the BriansClub data as well.

McCoy and fellow NYU researchers found BriansClub earned close to $104 million in gross revenue from 2015 to early 2019, and listed over 19 million unique card numbers for sale. Around 97% of the inventory was stolen magnetic stripe data, commonly used to produce counterfeit cards for in-person payments.

“What surprised me most was there are still a lot of people swiping their cards for transactions here,” McCoy said.

In 2015, the major credit card associations instituted new rules that made it riskier and potentially more expensive for U.S. merchants to continue allowing customers to swipe the stripe instead of dip the chip. Complicating this transition was the fact that many card-issuing U.S. banks took years to replace their customer card stocks with chip-enabled cards, and countless retailers dragged their feet in updating their payment terminals to accept chip-based cards.

Indeed, three years later the U.S. Federal Reserve estimated (PDF) that 43.3 percent of in-person card payments were still being processed by reading the magnetic stripe instead of the chip. This might not have been such a big deal if payment terminals at many of those merchants weren’t also compromised with malicious software that copied the data when customers swiped their cards.

Following the 2015 liability shift, more than 84 percent of the non-chip cards advertised by BriansClub were sold, versus just 35 percent of chip-based cards during the same time period.

“All cards without a chip were in much higher demand,” McCoy said.

Perhaps surprisingly, McCoy and his fellow NYU researchers found BriansClub customers purchased only 40% of its overall inventory. But what they did buy supports the notion that crooks generally gravitate toward cards issued by financial institutions that are perceived as having fewer or more lax protections against fraud.

Source: NYU.

While the top 10 largest card issuers in the United States accounted for nearly half of the accounts put up for sale at BriansClub, only 32 percent of those accounts were sold — and at roughly half the median price of those issued by small- and medium-sized institutions.

In contrast, more than half of the stolen cards issued by small and medium-sized institutions were purchased from the fraud shop. This was true even though by the end of 2018, 91 percent of cards for sale from medium-sized institutions were chip-based, and 89 percent from smaller banks and credit unions. Nearly all cards issued by the top ten largest U.S. card issuers (98 percent) were chip-enabled by that time.

REGION LOCK

The researchers found BriansClub customers strongly preferred cards issued by financial institutions in specific regions of the United States, specifically Colorado, Nevada, and South Carolina.

“For whatever reason, those regions were perceived as having lower anti-fraud systems or those that were not as effective,” McCoy said.

Cards compromised from merchants in South Carolina were in especially high demand, with fraudsters willing to spend twice as much per capita on those cards as on cards from any other state — roughly $1 per resident.

That sales trend also was reflected in the support tickets filed by BriansClub customers, who frequently were informed that cards tied to the southeastern United States were less likely to be restricted for use outside of the region.

Image: NYU.

McCoy said the lack of region locking also made stolen cards issued by banks in China something of a hot commodity, even though these cards demanded much higher prices (often more than $100 per account): The NYU researchers found virtually all available Chinese cards were sold soon after they were put up for sale. Ditto for the relatively few corporate and business cards for sale.

A lack of region locks may also have caused card thieves to gravitate toward buying up as many cards as they could from USAA, a savings bank that caters to active and former military service members and their immediate families. More than 83 percent of the available USAA cards were sold between 2015 and 2019, the researchers found.

Although Visa cards made up more than half of accounts put up for sale (12.1 million), just 36 percent were sold. MasterCards were the second most-plentiful (3.72 million), and yet more than 54 percent of them sold.

American Express and Discover, which unlike Visa and MasterCard are so-called “closed loop” networks that do not rely on third-party financial institutions to issue cards and manage fraud on them, saw 28.8 percent and 33 percent of their stolen cards purchased, respectively.

PREPAIDS

Some people concerned about the scourge of debit and credit card fraud opt to purchase prepaid cards, which generally enjoy the same cardholder protections against fraudulent transactions. But the NYU team found compromised prepaid accounts were purchased at a far higher rate than regular debit and credit cards.

Several factors may be at play here. For starters, relatively few prepaid cards for sale were chip-based. McCoy said there was some data to suggest many of these prepaids were issued to people collecting government benefits such as unemployment and food assistance. Specifically, the “service code” information associated with these prepaid cards indicated that many were restricted for use at places like liquor stores and casinos.

“This was a pretty sad finding, because if you don’t have a bank this is probably how you get your wages,” McCoy said. “These cards were disproportionately targeted. The unfortunate and striking thing was the sheer demand and lack of [chip] support for prepaid cards. Also, these cards were likely more attractive to fraudsters because [the issuer’s] anti-fraud countermeasures weren’t up to par, possibly because they know less about their customers and their typical purchase history.”

PROFITS

The NYU researchers estimate BriansClub pulled in approximately $24 million in profit over four years. They calculated this number by taking the more than $100 million in total sales and subtracting commissions paid to card thieves who supplied the shop with fresh goods, as well as the price of cards that were refunded to buyers. BriansClub, like many other stolen card shops, offers refunds on certain purchases if the buyer can demonstrate the cards were no longer active at the time of purchase.

On average, BriansClub paid suppliers commissions ranging from 50-60 percent of the total value of the cards sold. Card-not-present (CNP) accounts — or those stolen from online retailers and purchased by fraudsters principally for use in defrauding other online merchants — fetched a much steeper supplier commission of 80 percent, but mainly because these cards were in such high demand and low supply.

The NYU team found card-not-present sales accounted for just 7 percent of all revenue, even though card thieves clearly now have much higher incentives to target online merchants.

A story here last year observed that this exact supply and demand tug-of-war had helped to significantly increase prices for card-not-present accounts across multiple stolen credit card shops in the underground. Not long ago, the price of CNP accounts was less than half that of card-present accounts. These days, those prices are roughly equivalent.

One likely reason for that shift is that the United States is the last of the G20 nations to fully transition to more secure chip-based payment cards. Every other country that long ago made the chip card transition saw the same dynamic: as counterfeiting physical cards became harder, the fraud didn’t go away but instead shifted to online merchants.

The same progression is happening now in the United States, only the demand for stolen CNP data still far outstrips supply. Which might explain why we’ve seen such a huge uptick over the past few years in e-commerce sites getting hacked.

“Everyone points to this displacement effect from card-present to card-not-present fraud,” McCoy said. “But if the supply isn’t there, there’s only so much room for that displacement to occur.”

No doubt the epidemic of card fraud has benefited mightily from hacked retail chains — particularly restaurants — that still allow customers to swipe chip-based cards. But as we’ll see in a post to be published tomorrow, new research suggests thieves are starting to deploy ingenious methods for converting card data from certain compromised chip-based transactions into physical counterfeit cards.

A copy of the NYU research paper is available here (PDF).

LongNowThe Unexpected Influence of Cosmic Rays on DNA

Samuel Velasco/Quanta Magazine

Living in a world with multiple spatiotemporal scales, the very small and fast can often drive the future of the very large and slow: Microscopic genetic mutations change macroscopic anatomy. Undetectably small variations in local climate change global weather patterns (the infamous “butterfly effect”).

And now, one more example comes from a new theory about why DNA on modern Earth only twists in one of two possible directions:

Our spirals might all trace back to an unexpected influence from cosmic rays. Cosmic ray showers, like DNA strands, have handedness. Physical events typically break right as often as they break left, but some of the particles in cosmic ray showers tap into one of nature’s rare exceptions. When the high energy protons in cosmic rays slam into the atmosphere, they produce particles called pions, and the rapid decay of pions is governed by the weak force — the only fundamental force with a known mirror asymmetry.

Millions if not billions of cosmic ray strikes could be required to yield one additional free electron in a [right-handed] strand, depending on the event’s energy. But if those electrons changed letters in the organisms’ genetic codes, those tweaks may have added up. Over perhaps a million years…cosmic rays might have accelerated the evolution of our earliest ancestors, letting them out-compete their [left-handed] rivals.

In other words, properties of the subatomic world seem to have conferred a benefit to the potential for innovation among right-handed nucleic acids, and a “talent” for generating useful copying errors led to the entrenched monopoly we observe today.

But that isn’t the whole story. Read more at Quanta.

Worse Than FailureCodeSOD: True if Documented

“Comments are important,” is one of those good rules that often gets misapplied. No one wants to see a method called addOneToSet and a comment that tells us Adds one item to the set.

Still, a lot of our IDEs and other tooling encourage these kinds of comments. You drop a /// or /* before a method or member, and you get an auto-stubbed comment that gives you a passable, if useless, bit of documentation.
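
For anyone who hasn’t watched the stub get filled in, the result tends to look roughly like this; a sketch only, using the hypothetical addOneToSet example from above (the exact stub varies by IDE):

/// <summary>
/// Adds one item to the set.
/// </summary>
/// <param name="item">The item to add.</param>
/// <returns>True if the item was added; false if it was already present.</returns>
public bool AddOneToSet(string item)
{
    return _items.Add(item);  // _items is assumed to be a HashSet<string> field
}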

Scott Curtis thinks that is where this particular comment originated, but over time it decayed into incoherent nonsense:

///<summary> True to use quote value </summary>
///
///<value> True if false, false if not </value>
private readonly bool _mUseQuoteValue;

True if false, false if not. Or, worded a little differently, documentation makes code less clear, clearer if not.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

,

Rondam RamblingsThe insidious problem of racism

Take a moment to seriously think about what is wrong with racism.  If you're like most people, your answer will probably be that racism is bad because it's a form of prejudice, and prejudice is bad.  This is not wrong, but it misses a much deeper, more insidious issue.  The real problem with racism is that it can be (and usually is) rationalized, and those rationalizations can turn into

CryptogramSurvey of Supply Chain Attacks

The Atlantic Council has released a report that looks at the history of computer supply chain attacks.

Key trends from their summary:

  1. Deep Impact from State Actors: There were at least 27 different state attacks against the software supply chain, including from Russia, China, North Korea, and Iran, as well as India, Egypt, the United States, and Vietnam. States have targeted software supply chains with great effect, as the majority of cases surveyed here did, or could have, resulted in remote code execution. Examples: CCleaner, NotPetya, Kingslayer, SimDisk, and ShadowPad.

  2. Abusing Trust in Code Signing: These attacks undermine public key cryptography and certificates used to ensure the integrity of code. Overcoming these protections is a critical step to enabling everything from simple alterations of open-source code to complex nation-state espionage campaigns. Examples: ShadowHammer, Naid/McRAT, and BlackEnergy 3.

  3. Hijacking Software Updates: 27% of these attacks targeted software updates to insert malicious code against sometimes millions of targets. These attacks are generally carried out by extremely capable actors and poison updates from legitimate vendors. Examples: Flame, CCleaner 1 & 2, NotPetya, and Adobe pwdum7v71.

  4. Poisoning Open-Source Code: These incidents saw attackers either modify open-source code by gaining account access or post their own packages with names similar to common examples. Attacks targeted some of the most widely used open source tools on the internet. Examples: Cdorked/Darkleech, RubyGems Backdoor, Colourama, and JavaScript 2018 Backdoor.

  5. Targeting App Stores: 22% of these attacks targeted app stores like the Google Play Store, Apple's App Store, and other third-party app hubs to spread malware to mobile devices. Some attacks even targeted developer tools, meaning every app later built using that tool was potentially compromised. Examples: ExpensiveWall, BankBot, Gooligan, Sandworm's Android attack, and XcodeGhost.

Recommendations included in the report. The entirely open and freely available dataset is here.

Worse Than FailureCodeSOD: Underscoring the Comma

Andrea writes to confess some sins, though I'm not sure who the real sinner is. To understand the sins, we have to talk a little bit about C/C++ macros.

Andrea was working on some software to control a dot-matrix display from an embedded device. Send an array of bytes to it, and the correct bits on the display light up. Now, if you're building something like this, you want an easy way to "remember" the proper sequences. So you might want to do something like:

uint8_t glyph0[] = {'0', 0x0E, 0x11, 0x0E, 0};
uint8_t glyph1[] = {'1', 0x09, 0x1F, 0x01, 0};

And so on. And heck, you might want to go so far as to have a lookup array, so you might have a const uint8_t *const glyphs[] = {glyph0, glyph1…}. Now, you could just hardcode those definitions, but wouldn't it be cool to use macros to automate that a bit, as your definitions might change?

Andrea went with a style known as X macros, which let you specify one pattern of data which can be re-used by redefining X. So, for example, I could do something like:

#define MY_ITEMS \
  X(a, 5) \
  X(b, 6) \
  X(c, 7)
  
#define X(name, value) int name = value;
MY_ITEMS
#undef X

This would generate:

int a = 5;
int b = 6;
int c = 7;

But I could re-use this, later:

#define X(name, data) name, 
int items[] = { MY_ITEMS 0 };
#undef X

This would generate, in theory, something like: int items[] = {a,b,c,0};

We are recycling the MY_ITEMS macro, and we're changing its behavior by altering the X macro that it invokes. This can, in practice, result in much more readable and maintainable code, especially code where you need to have parallel lists of items. It's also one of those things that the first time you see it, it's… surprising.

Now, this is all great, and it means that Andrea could potentially have a nice little macro system for defining arrays of bytes and a lookup array pointing to those arrays. There's just one problem.

Specifically, if you tried to write a macro like this:

#define GLYPH_DEFS \
  X(glyph0, {'0', 0x0E, 0x11, 0x0E, 0})

It wouldn’t work. It doesn’t matter what you actually define X to do; the preprocessor isn’t aware of the C/C++ syntax. So it doesn’t say “oh, that second comma is inside of an array initializer, I’ll ignore it”, it says, “Oh, they’re trying to pass more than two parameters to the macro X.”

So, you need some way to define an array initializer that doesn't use commas. If macros got you into this situation, macros can get you right back out. Here is Andrea's solution:

#define _ ,  // Sorry.
#define GLYPH_DEFS \
	X(glyph0, { '0' _ 0x0E _ 0x11 _ 0x0E _ 0 } ) \
	X(glyph1, { '1' _ 0x09 _ 0x1F _ 0x01 _ 0 }) \
	X(glyph2, { '2' _ 0x13 _ 0x15 _ 0x09 _ 0 }) \
	X(glyph3, { '3' _ 0x15 _ 0x15 _ 0x0A _ 0 }) \
	X(glyph4, { '4' _ 0x18 _ 0x04 _ 0x1F _ 0 }) \
	X(glyph5, { '5' _ 0x1D _ 0x15 _ 0x12 _ 0 }) \
	X(glyph6, { '6' _ 0x0E _ 0x15 _ 0x03 _ 0 }) \
	X(glyph7, { '7' _ 0x10 _ 0x13 _ 0x0C _ 0 }) \
	X(glyph8, { '8' _ 0x0A _ 0x15 _ 0x0A _ 0 }) \
	X(glyph9, { '9' _ 0x08 _ 0x14 _ 0x0F _ 0 }) \
	X(glyphA, { 'A' _ 0x0F _ 0x14 _ 0x0F _ 0 }) \
	X(glyphB, { 'B' _ 0x1F _ 0x15 _ 0x0A _ 0 }) \
	X(glyphC, { 'C' _ 0x0E _ 0x11 _ 0x11 _ 0 }) \
	X(glyphD, { 'D' _ 0x1F _ 0x11 _ 0x0E _ 0 }) \
	X(glyphE, { 'E' _ 0x1F _ 0x15 _ 0x15 _ 0 }) \
	X(glyphF, { 'F' _ 0x1F _ 0x14 _ 0x14 _ 0 }) \

#define X(name, data) const uint8_t name [] = data ;
GLYPH_DEFS
#undef X

#define X(name, data) name _
const uint8_t *const glyphs[] = { GLYPH_DEFS nullptr };
#undef X
#undef _

So, when processing the X macro, we pass it a pile of _s, which aren’t commas, so it doesn’t complain. Then we expand the _ macro and voila: we have syntactically valid array initializers. If Andrea ever changes the list of glyphs, adding or removing any, the macro will automatically sync the declaration of the individual arrays and their pointers over in the glyphs array.

Andrea adds:

The scope of this definition is limited to this data structure, in which the X macros are used, and it is #undef'd just after that. However, with all the stories of #define abuse on this site, I feel I still need to atone.
The testing sketch works perfectly.

Honestly, all sins are forgiven. There isn't a true WTF here, beyond "the C preprocessor is TRWTF". It's a weird, clever hack, and it's interesting to see this technique in use.

That said, as you might note: this was a testing sketch, just to prove a concept. Instead of getting clever with macros, your disposable testing code should probably just get to proving your concept as quickly as possible. You can worry about code maintainability later. So, if there are any sins by Andrea, it's the sin of overengineering a disposable test program.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Krebs on SecurityBusiness ID Theft Soars Amid COVID Closures

Identity thieves who specialize in running up unauthorized lines of credit in the names of small businesses are having a field day with all of the closures and economic uncertainty wrought by the COVID-19 pandemic, KrebsOnSecurity has learned. This story is about the victims of a particularly aggressive business ID theft ring that’s spent years targeting small businesses across the country and is now pivoting toward using that access for pandemic assistance loans and unemployment benefits.

Most consumers are likely aware of the threat from identity theft, which occurs when crooks apply for new lines of credit in your name. But the same crime can be far more costly and damaging when thieves target small businesses. Unfortunately, far too many entrepreneurs are simply unaware of the threat or don’t know how to be watchful for it.

What’s more, with so many small enterprises going out of business or sitting dormant during the COVID-19 pandemic, organized fraud rings have an unusually rich pool of targets to choose from.

Short Hills, N.J.-based Dun & Bradstreet [NYSE:DNB] is a data analytics company that acts as a kind of de facto credit bureau for companies: When a business owner wants to open a new line of credit, creditors typically check with Dun & Bradstreet to gauge the business’s history and trustworthiness.

In 2019, Dun & Bradstreet saw more than a 100 percent increase in business identity theft. For 2020, the company estimates an overall 258 percent spike in the crime. Dun & Bradstreet said that so far this year it has received over 4,700 tips and leads where business identity theft or malfeasance are suspected.

“The ferocity of cyber criminals to take advantage of COVID-19 uncertainties by preying on small businesses is disturbing,” said Andrew LaMarca, who leads the global high-risk and fraud team at Dun & Bradstreet.

For the past several months, Milwaukee, Wisc.-based cyber intelligence firm Hold Security has been monitoring the communications between and among a business ID theft gang apparently operating in Georgia and Florida but targeting businesses throughout the United States. That surveillance has helped to paint a detailed picture of how business ID thieves operate, as well as the tricks they use to gain credit in a company’s name.

Hold Security founder Alex Holden said the group appears to target both active and dormant or inactive small businesses. The gang typically will start by looking up the business ownership records at the Secretary of State website that corresponds to the company’s state of incorporation. From there, they identify the officers and owners of the company and acquire their Social Security and Tax ID numbers from the dark web and other sources online.

To prove ownership over the hijacked firms, they hire low-wage image editors online to help fabricate and/or modify a number of official documents tied to the business — including tax records and utility bills.

The scammers frequently then file phony documents with the Secretary of State’s office in the name(s) of the business owners, but include a mailing address that they control. They also create email addresses and domain names that mimic the names of the owners and the company to make future credit applications appear more legitimate, and submit the listings to business search websites, such as yellowpages.com.

For both dormant and existing businesses, the fraudsters attempt to create or modify the target company’s accounts at Dun & Bradstreet. In some cases, the scammers create dashboard accounts in the business’s name at Dun & Bradstreet’s credit builder portal; in others, the bad guys have actually hacked existing business accounts at DNB, requesting a new DUNS number for the business (a DUNS number is a unique, nine-digit identifier for businesses).

Finally, after the bogus profiles are approved by Dun & Bradstreet, the gang waits a few weeks or months and then starts applying for new lines of credit in the target business’s name at stores like Home Depot, Office Depot and Staples. Then they go on a buying spree with the cards issued by those stores.

Usually, the first indication a victim has that they’ve been targeted is when the debt collection companies start calling.

“They are using mostly small companies that are still active businesses but currently not operating because of COVID-19,” Holden said. “With this gang, we see four or five people working together. The team leader manages the work between people. One person seems to be in charge of getting stolen cards from the dark web to pay for the reactivation of businesses through the secretary of state sites. Another team member works on revising the business documents and registering them on various sites. The others are busy looking for specific businesses they want to revive.”

Holden said the gang appears to find success in getting new lines of credit with about 20 percent of the businesses they target.

“One’s personal credit is nothing compared to the ability of corporations to borrow money,” he said. “That’s bad because while the credit system may be flawed for individuals, it’s an even worse situation on average when we’re talking about businesses.”

Holden said over the past few months his firm has seen communications between the gang’s members indicating they have temporarily shifted more of their energy and resources to defrauding states and the federal government by filing unemployment insurance claims and applying for pandemic assistance loans with the Small Business Administration.

“It makes sense, because they’ve already got control over all these dormant businesses,” he said. “So they’re now busy trying to get unemployment payments and SBA loans in the names of these companies and their employees.”

PHANTOM OFFICES

Hold Security shared data intercepted from the gang that listed the personal and financial details of dozens of companies targeted for ID theft, including Dun & Bradstreet logins the crooks had created for the hijacked businesses. Dun & Bradstreet declined to comment on the matter, other than to say it was working with federal and state authorities to alert affected businesses and state regulators.

Among those targeted was Environmental Safety Consultants Inc. (ESC), a 37-year-old environmental engineering firm based in Bradenton, Fla. ESC owner Scott Russell estimates his company was initially targeted nearly two years ago, and that he first became aware something wasn’t right when he recently began getting calls from Home Depot’s corporate offices inquiring about the company’s delinquent account.

But Russell said he didn’t quite grasp the enormity of the situation until last year, when he was contacted by the manager of a virtual office space across town who told him about a suspiciously large number of deliveries at an office space that was rented out in his name.

Russell had never rented that particular office. Rather, the thieves had done it for him, using his name and the name of his business. The office manager said the deliveries came virtually non-stop, even though there was apparently no business operating within the rented premises. And in each case, shortly after the shipments arrived someone would show up and cart them away.

“She said we don’t think it’s you,” he recalled. “Turns out, they had paid for a lease in my name with someone else’s credit card. She shared with me a copy of the lease, which included a fraudulent ID and even a vehicle insurance card for a Land Cruiser we got rid of like 15 years ago. The application listed our home address with me and some woman who was not my wife’s name.”

The crates and boxes being delivered to his erstwhile office space were mostly computers and other high-priced items ordered from 10 different Office Depot credit cards that also were not in his name.

“The total value of the electronic equipment that was bought and delivered there was something like $75,000,” Russell said, noting that it took countless hours and phone calls with Office Depot to make it clear they would no longer accept shipments addressed to him or his company. “It was quite spine-tingling to see someone penned a lease in the name of my business and personal identity.”

Even though the virtual office manager had the presence of mind to take photocopies of the driver’s licenses presented by the people arriving to pick up the fraudulent shipments, the local police seemed largely uninterested in pursuing the case, Russell said.

“I went to the local county sheriff’s office and showed them all the documentation I had and the guy just yawned and said he’d get right on it,” he recalled. “The place where the office space was rented was in another county, and the detective I spoke to there about it was interested, but he could never get anyone from my county to follow up.”

RECYCLING VICTIMS

Russell said he believes the fraudsters initially took out new lines of credit in his company’s name and then used those to defraud others in a similar way. One of those victims, also on the gang’s target list obtained by Hold Security, is Mary McMahan, owner of Fan Experiences, an event management company in Winter Park, Fla.

McMahan also had stolen goods from Office Depot and other stores purchased in her company’s name and delivered to the same office space rented in Russell’s name. McMahan said she and her businesses have suffered hundreds of thousands of dollars in fraud, and spent nearly as much in legal fees fending off collections firms and restoring her company’s credit.

McMahan said she first began noticing trouble almost four years ago, when someone started taking out new credit cards in her company’s name. At the same time, her business was used to open a new lease on a virtual office space in Florida that also began receiving packages tied to other companies victimized by business ID theft.

“About four years back, they hit my credit hard for a year, getting all these new lines of credit at Home Depot, Office Depot, Office Max, you name it,” she said. “Then they came back again two years ago and hit it hard for another year. They even went to the [Florida Department of Motor Vehicles] to get a driver’s license in my name.”

McMahan said the thieves somehow hacked her DNB account, and then began adding new officers and locations for her business listing.

“They changed the email and mailing address, and even went on Yelp and Google and did the same,” she said.

McMahan said she’s since locked down her personal and business credit to the point where even she would have a tough time getting a new line of credit or mortgage if she tried.

“There’s no way they can even utilize me anymore because there’s so many marks on my credit stating that it’s been stolen,” she said. “These guys are relentless, and they recycle victims to defraud others until they figure out they can’t recycle them anymore.”

SAY…THAT’S A NICE CREDIT PROFILE YOU GOT THERE…

McMahan says she, too, has filed multiple reports about the crimes with local police, but has so far seen little evidence that anyone is interested in following up on the matter. For now, she is paying Dun & Bradstreet more than $100 a month to monitor her business credit profile.

Dun & Bradstreet does offer a free version of credit monitoring called Credit Signal that lets business owners check their business credit scores and any inquiries made in the previous 14 days up to four times a year. However, those looking for more frequent checks or additional information about specific credit inquiries beyond 14 days are steered toward DNB’s subscription-based services.

Eva Velasquez, president of the Identity Theft Resource Center, a California-based nonprofit that assists ID theft victims, said she finds that troubling.

“When we look at these institutions that are necessary for us to operate and function in society and they start to charge us a fee for a service to fix a problem they helped create through their infrastructure, that’s just unconscionable,” Velasquez said. “We need to take a hard look at the infrastructures that businesses are beholden to and make sure the risk minimization protections they’re entitled to are not fee-based — particularly if it’s a problem created by the very infrastructure of the system.”

Velasquez said it’s unfortunate that small business owners don’t have the same protections afforded to consumers. For example, only recently did the three major consumer reporting bureaus allow all U.S. residents to place a freeze on their credit files for free.

“We’ve done a good job in educating the public that anyone can be victim of identity theft, and in compelling our infrastructure to provide robust consumer protection and risk minimization processes that are more uniform,” she said. “It’s still not good by any means, but it’s definitely better for consumers than it is for businesses. We currently put all the responsibility on the small business owner, and very little on the infrastructure and processes that should be designed to protect them but aren’t doing a great job, frankly.”

Rather, the onus continues to be on the business owner to periodically check with DNB and state agencies to monitor for any signs of unauthorized changes. Worse still, too many private and public organizations still don’t do a good enough job protecting employee identification and tax ID numbers that are so often abused in business identity theft, Velasquez said.

“You can put alerts and other protections in place, but the problem is you have to go on a department-by-department and case-by-case basis,” she said. “The place to begin is your secretary of state’s office or wherever you file your documents to operate your business.”

For its part, Dun & Bradstreet recently published a blog post outlining recommendations for businesses to ward off identity thieves. DNB says anyone who suspects fraudulent activity on their account should contact its support team.

Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 11)

Here’s part eleven of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

LongNowDiscovery in Mexican Cave May Drastically Change the Known Timeline of Humans’ Arrival to the Americas

Human history in the Americas may be twice as long as previously believed — at least 26,500 years — according to the authors of a new study at Mexico’s Chiquihuite cave and other sites throughout Central Mexico.

According to the study’s lead author Ciprian Ardelean:

“This site alone can’t be considered a definitive conclusion. But with other sites in North America like Gault (Texas), Bluefish Caves (Yukon), maybe Cactus Hill (Virginia)—it’s strong enough to favor a valid hypothesis that there were humans here probably before and almost surely during the Last Glacial Maximum.”

Worse Than FailureUltrabase

After a few transfers across departments at IniTech, Lydia found herself as a senior developer on an internal web team. They built intranet applications which covered everything from home-grown HR tools to home-grown supply chain tools, to home-grown CMSes, to home-grown "we really should have purchased something but the approval process is so onerous and the budgeting is so constrained that it looks cheaper to carry an IT team despite actually being much more expensive".

A new feature request came in, and it seemed extremely easy. There was a stored procedure that was normally invoked by a scheduled job. The admin users in one of the applications wanted to be able to invoke it on demand. Now, Lydia might be "senior", but she was new to the team, so she popped over to Desmond's cube to see what he thought.

"Oh, sure, we can do that, but it'll take about a week."

"A week?" Lydia asked. "A week? To add a button that invokes a stored procedure. It doesn't even take any parameters or return any results you'd need to display."

"Well, roughly 40 hours of effort, yeah. I can't promise it'd be a calendar week."

"I guess, with testing, and approvals, I could see it taking that long," Lydia said.

"Oh, no, that's just development time," Desmond said. "You're new to the team, so it's time you learned about Ultrabase."

Wyatt was the team lead. Lydia had met him briefly during her onboarding with the team, but had mostly been interacting with the other developers on the team. Wyatt, as it turned out, was a Certified Super Genius™, and was so smart that he recognized that most of their applications were, functionally, quite the same. CRUD apps, mostly. So Wyatt had "automated" the process, with his Ultrabase solution.

First, there was a configuration database. Every table, every stored procedure, every view or query, needed to be entered into the configuration database. Now, Wyatt, Certified Super Genius™, knew that he couldn't define a simple schema which would cover all the possible cases, so he didn't. He defined a fiendishly complicated schema with opaque and inconsistent validity rules. Once you had entered the data for all of your database objects, hopefully correctly, you could then execute the Data program.

The Data program would read through the configuration database, and through the glories of string concatenation generate a C# solution containing the definitions of your data model objects. The Data program itself was very fault tolerant, so fault tolerant that if anything went wrong, it still just output C# code, just not syntactically correct C# code. If the C# code couldn't compile, you needed to go back to the configuration database and figure out what was wrong.
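
To make the failure mode concrete, here is a hypothetical sketch (not Wyatt’s actual tool) of code generation by string concatenation in this spirit; note that nothing stops it from emitting C# that will not compile when the config data is wrong:

using System.Collections.Generic;
using System.Text;

// Hypothetical sketch of a naive string-concatenation generator.
// A blank or malformed Name/ClrType in the config still "succeeds" here;
// it just produces a class definition that will not compile.
public class ColumnConfig
{
    public string Name;
    public string ClrType;
}

public static class DataModelGenerator
{
    public static string GenerateClass(string className, IEnumerable<ColumnConfig> columns)
    {
        var sb = new StringBuilder();
        sb.AppendLine("public class " + className);
        sb.AppendLine("{");
        foreach (var col in columns)
        {
            sb.AppendLine("    public " + col.ClrType + " " + col.Name + " { get; set; }");
        }
        sb.AppendLine("}");
        return sb.ToString();
    }
}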

Eventually, once you had a theoretically working data model library, you pushed the solution to the build server. That would build and sign the library with a corporate key, and publish it to their official internal software repository. This could take days or weeks to snake its way through all the various approval steps.

Once you had the official release of the datamodel, you could fire up the Data Access Layer tool, which would then pull down the signed version in the repository, and using reflection and the config database, the Data Access Layer program would generate a DAL. Assuming everything worked, you would push that to the build server, and then wait for that to wind its way through the plumbing of approvals.

Then the Business Logic Layer. Then the "Core" layer. The "UI Adapter Layer". The "Front End" layer.

Each layer required the previous layer to be in the corporate repository before you could generate it. Each layer also needed to check the config database. It was trivial to make an error that wouldn't be discovered until you tried to generate the front end layer, and if that happened, you needed to go all the way back to the beginning.

"Wyatt is working on a 'config validation tool' which he says will avoid some of these errors," Desmond said. "So we've got that to look forward to. Anyway, that's our process. Glad to have you on the team!"

Lydia was significantly less glad to be on the team, now that Desmond had given her a clearer picture of how it actually worked.

[Advertisement] Keep the plebs out of prod. Restrict NuGet feed privileges with ProGet. Learn more.

,

Rondam RamblingsAbortion restrictions result in more abortions

Not that this was ever in any serious doubt, but now there is actual data published in The Lancet showing that abortion restrictions increase the number of abortions: In 2015–19, there were 121.0 million unintended pregnancies annually (80% uncertainty interval [UI] 112.8–131.5), corresponding to a global rate of 64 unintended pregnancies (UI 60–70) per 1000 women aged 15–49 years. 61% (58–63)

Rondam RamblingsMark your calendars: I am debating Kent Hovind on July 9

I've recently taken up a new hobby of debating young-earth creationists on YouTube.  (It's a dirty job, but somebody's gotta do it.)  I've done two of them so far [1][2], both on a creationist channel called Standing For Truth.  My third debate will be against Kent Hovind, one of the more prominent and, uh, outspoken members of the YEC community.  In case you haven't heard of him, here's a sample

,

Krebs on SecurityThinking of a Cybersecurity Career? Read This

Thousands of people graduate from colleges and universities each year with cybersecurity or computer science degrees only to find employers are less than thrilled about their hands-on, foundational skills. Here’s a look at a recent survey that identified some of the bigger skills gaps, and some thoughts about how those seeking a career in these fields can better stand out from the crowd.

Virtually every week KrebsOnSecurity receives at least one email from someone seeking advice on how to break into cybersecurity as a career. In most cases, the aspirants ask which certifications they should seek, or what specialization in computer security might hold the brightest future.

Rarely am I asked which practical skills they should seek to make themselves more appealing candidates for a future job. And while I always preface any response with the caveat that I don’t hold any computer-related certifications or degrees myself, I do speak with C-level executives in cybersecurity and recruiters on a regular basis and frequently ask them for their impressions of today’s cybersecurity job candidates.

A common theme in these C-level executive responses is that a great many candidates simply lack hands-on experience with the more practical concerns of operating, maintaining and defending the information systems which drive their businesses.

Granted, most people who have just graduated with a degree lack practical experience. But happily, a somewhat unique aspect of cybersecurity is that one can gain a fair degree of mastery of hands-on skills and foundational knowledge through self-directed study and old fashioned trial-and-error.

One key piece of advice I nearly always include in my response to readers involves learning the core components of how computers and other devices communicate with one another. I say this because a mastery of networking is a fundamental skill that so many other areas of learning build upon. Trying to get a job in security without a deep understanding of how data packets work is a bit like trying to become a chemical engineer without first mastering the periodic table of elements.

But please don’t take my word for it. The SANS Institute, a Bethesda, Md. based security research and training firm, recently conducted a survey of more than 500 cybersecurity practitioners at 284 different companies in an effort to suss out which skills they find most useful in job candidates, and which are most frequently lacking.

The survey asked respondents to rank various skills from “critical” to “not needed.” Fully 85 percent ranked networking as a critical or “very important” skill, followed by a mastery of the Linux operating system (77 percent), Windows (73 percent), common exploitation techniques (73 percent), computer architectures and virtualization (67 percent) and data and cryptography (58 percent). Perhaps surprisingly, only 39 percent ranked programming as a critical or very important skill (I’ll come back to this in a moment).

How did the cybersecurity practitioners surveyed grade their pool of potential job candidates on these critical and very important skills? The results may be eye-opening:

“Employers report that student cybersecurity preparation is largely inadequate and are frustrated that they have to spend months searching before they find qualified entry-level employees if any can be found,” said Alan Paller, director of research at the SANS Institute. “We hypothesized that the beginning of a pathway toward resolving those challenges and helping close the cybersecurity skills gap would be to isolate the capabilities that employers expected but did not find in cybersecurity graduates.”

The truth is, some of the smartest, most insightful and talented computer security professionals I know today don’t have any computer-related certifications under their belts. In fact, many of them never even went to college or completed a university-level degree program.

Rather, they got into security because they were passionately and intensely curious about the subject, and that curiosity led them to learn as much as they could — mainly by reading, doing, and making mistakes (lots of them).

I mention this not to dissuade readers from pursuing degrees or certifications in the field (which may be a basic requirement for many corporate HR departments) but to emphasize that these should not be viewed as some kind of golden ticket to a rewarding, stable and relatively high-paying career.

More to the point, without a mastery of one or more of the above-mentioned skills, you simply will not be a terribly appealing or outstanding job candidate when the time comes.

BUT..HOW?

So what should you focus on, and what’s the best way to get started? First, understand that while there are a near infinite number of ways to acquire knowledge and virtually no limit to the depths you can explore, getting your hands dirty is the fastest way to learning.

No, I’m not talking about breaking into someone’s network, or hacking some poor website. Please don’t do that without permission. If you must target third-party services and sites, stick to those that offer recognition and/or incentives for doing so through bug bounty programs, and then make sure you respect the boundaries of those programs.

Besides, almost anything you want to learn by doing can be replicated locally. Hoping to master common vulnerability and exploitation techniques? There are innumerable free resources available; purpose-built exploitation toolkits like Metasploit, WebGoat, and custom Linux distributions like Kali Linux that are well supported by tutorials and videos online. Then there are a number of free reconnaissance and vulnerability discovery tools like Nmap, Nessus, OpenVAS and Nikto. This is by no means a complete list.

Set up your own hacking labs. You can do this with a spare computer or server, or with older hardware that is plentiful and cheap on places like eBay or Craigslist. Free virtualization tools like VirtualBox can make it simple to get friendly with different operating systems without the need of additional hardware.

Or look into paying someone else to set up a virtual server that you can poke at. Amazon’s EC2 services are a good low-cost option here. If it’s web application testing you wish to learn, you can install any number of web services on computers within your own local network, such as older versions of WordPress, Joomla or shopping cart systems like Magento.

Want to learn networking? Start by getting a decent book on TCP/IP and really learning the network stack and how each layer interacts with the other.

And while you’re absorbing this information, learn to use some tools that can help put your newfound knowledge into practical application. For example, familiarize yourself with Wireshark and Tcpdump, handy tools relied upon by network administrators to troubleshoot network and security problems and to understand how network applications work (or don’t). Begin by inspecting your own network traffic, web browsing and everyday computer usage. Try to understand what applications on your computer are doing by looking at what data they are sending and receiving, how, and where.

ON PROGRAMMING

While being able to program in languages like Go, Java, Perl, Python, C or Ruby may or may not be at the top of the list of skills demanded by employers, having one or more languages in your skillset is not only going to make you a more attractive hire, it will also make it easier to grow your knowledge and venture into deeper levels of mastery.

It is also likely that depending on which specialization of security you end up pursuing, at some point you will find your ability to expand that knowledge is somewhat limited without understanding how to code.

For those intimidated by the idea of learning a programming language, start by getting familiar with basic command line tools on Linux. Just learning to write basic scripts that automate specific manual tasks can be a wonderful stepping stone. What’s more, a mastery of creating shell scripts will pay handsome dividends for the duration of your career in almost any technical role involving computers (regardless of whether you learn a specific coding language).

GET HELP

Make no mistake: Much like learning a musical instrument or a new language, gaining cybersecurity skills takes most people a good deal of time and effort. But don’t get discouraged if a given topic of study seems overwhelming at first; just take your time and keep going.

That’s why it helps to have support groups. Seriously. In the cybersecurity industry, the human side of networking takes the form of conferences and local meetups. I cannot stress enough how important it is for both your sanity and career to get involved with like-minded people on a semi-regular basis.

Many of these gatherings are free, including Security BSides events, DEFCON groups, and OWASP chapters. And because the tech industry continues to be disproportionately populated by men, there are also a number of cybersecurity meetups and membership groups geared toward women, such as the Women’s Society of Cyberjutsu and others listed here.

Unless you live in the middle of nowhere, chances are there’s a number of security conferences and security meetups in your general area. But even if you do reside in the boonies, the good news is many of these meetups are going virtual to avoid the ongoing pestilence that is the COVID-19 epidemic.

In summary, don’t count on a degree or certification to prepare you for the kinds of skills employers are going to understandably expect you to possess. That may not be fair or as it should be, but it’s likely on you to develop and nurture the skills that will serve your future employer(s) and employability in this field.

I’m certain that readers here have their own ideas about how newbies, students and those contemplating a career shift into cybersecurity can best focus their time and efforts. Please feel free to sound off in the comments. I may even update this post to include some of the better recommendations.

CryptogramFriday Squid Blogging: Introducing the Seattle Kraken

The Kraken is the name of Seattle's new NHL franchise.

I have always really liked collective nouns as sports team names (like the Utah Jazz or the Minnesota Wild), mostly because it's hard to describe individual players.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

LongNowThe Comet Neowise as seen from the ISS

For everyone who cannot see the Comet Neowise with their own eyes this week — or just wants to see it from a higher perch — this video by artist Seán Doran combines 550 NASA images from the International Space Station into a real-time view of the comet from 250 miles above Earth’s surface, moving at 17,500 mph.

CryptogramUpdate on NIST's Post-Quantum Cryptography Program

NIST has posted an update on their post-quantum cryptography program:

After spending more than three years examining new approaches to encryption and data protection that could defeat an assault from a quantum computer, the National Institute of Standards and Technology (NIST) has winnowed the 69 submissions it initially received down to a final group of 15. NIST has now begun the third round of public review. This "selection round" will help the agency decide on the small subset of these algorithms that will form the core of the first post-quantum cryptography standard.

[...]

For this third round, the organizers have taken the novel step of dividing the remaining candidate algorithms into two groups they call tracks. The first track contains the seven algorithms that appear to have the most promise.

"We're calling these seven the finalists," Moody said. "For the most part, they're general-purpose algorithms that we think could find wide application and be ready to go after the third round."

The eight alternate algorithms in the second track are those that either might need more time to mature or are tailored to more specific applications. The review process will continue after the third round ends, and eventually some of these second-track candidates could become part of the standard. Because all of the candidates still in play are essentially survivors from the initial group of submissions from 2016, there will also be future consideration of more recently developed ideas, Moody said.

"The likely outcome is that at the end of this third round, we will standardize one or two algorithms for encryption and key establishment, and one or two others for digital signatures," he said. "But by the time we are finished, the review process will have been going on for five or six years, and someone may have had a good idea in the interim. So we'll find a way to look at newer approaches too."

Details are here. This is all excellent work, and exemplifies NIST at its best. The quantum-resistant algorithms will be standardized far in advance of any practical quantum computer, which is how we all want this sort of thing to go.
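
To make the "encryption and key establishment" half of that concrete, here is a minimal sketch of a post-quantum key encapsulation (KEM) round trip. It assumes the liboqs-python bindings (the oqs module) and their KeyEncapsulation interface, with Kyber512 as the algorithm (CRYSTALS-Kyber is one of the round-three finalists). The module, class and method names are assumptions about that library, not anything specified by NIST, so check the liboqs documentation before relying on them.

import oqs  # liboqs-python bindings (assumed name and API)

kem_alg = "Kyber512"  # CRYSTALS-Kyber, one of the round-three finalists

# The receiver generates a key pair and publishes the public key.
receiver = oqs.KeyEncapsulation(kem_alg)
public_key = receiver.generate_keypair()

# The sender encapsulates a fresh shared secret against that public key.
sender = oqs.KeyEncapsulation(kem_alg)
ciphertext, shared_secret_sender = sender.encap_secret(public_key)

# The receiver decapsulates the ciphertext and recovers the same secret.
shared_secret_receiver = receiver.decap_secret(ciphertext)

assert shared_secret_sender == shared_secret_receiver

Whatever NIST ultimately standardizes, this general shape (generate, encapsulate, decapsulate) is roughly what application developers can expect to code against.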

Kevin RuddCNN: Cold War 1.5

INTERVIEW VIDEO
TV INTERVIEW
CONNECT THE WORLD, CNN
24 JULY 2020

Topics: US-China relations, Australia’s coronavirus second wave

BECKY ANDERSON: Kevin Rudd is the president of the Asia Society Policy Institute and he’s joining us now from the Sunshine Coast in Australia. It’s great to have you. This type of rhetoric you say is not new. But it does feel like we are approaching a precipitous point.

KEVIN RUDD: Well Becky, I think there’s been a lot of debate in recent months as to whether we’re on the edge of a new Cold War between China and the United States. Rather than being Cold War 2.0, I basically see it as Cold War 1.5. That is, it’s sliding in that direction, and sliding rapidly in that direction. But we’re by no means there yet. And one of the reasons we’re not there yet is because of the continued depth and breadth of the economic relationship between China and the United States, which was never the case, in terms of the historical relationship, between the United States and the Soviet Union during the first Cold War. That may change, but that I think is where we are right now.

ANDERSON: We haven’t seen an awful lot of retaliation nor very much of a narrative really from Beijing in response to some of this US anti-China narrative. What do you expect next from Beijing?

RUDD: Well, in terms of the consulate general, I think as night follows day, you’re likely to see either a radical reduction in overall American diplomatic staff numbers in China and consular staff numbers, or the direct reciprocal action, which would close, for example, the US Consulate General in perhaps Chengdu or Wuhan or in Shenyang, somewhere like that. But this, as you said before in your introduction, Becky, forms just one part of a much broader deterioration in the relationship. I’ve been observing the US-China relationship for the better part of 35 years. Really, since Nixon and Kissinger first went to Beijing in 1971/1972. This is the low point, the lowest point of the US-China relationship in now half a century. And it’s only heading in one direction. Is there an exit ramp? Open question. But the dynamics both in Beijing and in Washington are pulling this relationship right apart, and that leaves third countries in an increasingly difficult position.

ANDERSON: Yes, and I wanted to talk to you about that because Australia is continually torn between the sort of economic relationship with China that it has, and its strategic partnership with the US. We have seen the US, to all intents and purposes, leaning on the UK over Huawei. How should other countries engage with China going forward?

RUDD: Well, one thing I think is to understand that Xi Jinping’s China is quite different from the China of Hu Jintao, Jiang Zemin or even Deng Xiaoping. And since Xi Jinping took over in 2012/2013, it’s a much more assertive China, right across the board. And even in this COVID reality of 2020, we see not just the Hong Kong national security legislation, we see new actions by China in the South China Sea, against Taiwan, against Japan, in the East China Sea, on the Sino-Indian border, and the frictions with Canada, Australia, the United Kingdom – you’ve just mentioned – and elsewhere as well. So, this is a new, assertive China – quite different from the one we’ve seen in the past. So, your question is entirely valid – how do, as it were, the democracies of Asia and the democracies of Europe and elsewhere respond to this new phenomenon on the global stage? I think it’s along these lines. Number one, be confident in the position which democracies have, that we believe in universal values, and human rights and democracy. And we’re not about to change. Number two, many of us, whether we’re in Asia or Europe, are longstanding allies of the United States, and that’s not about to change either. But number three, make it plain to our Chinese friends that, on a reciprocal basis, we wish to have a mutually productive trade, investment, and capital markets relationship. And four, on the big challenges of global governance – whether it’s pandemics, or climate change, or stability of global financial markets, and the current crisis we have around the world – it is incumbent on all of us to work together. I think those four principles form a basis for us dealing with Xi Jinping’s China.

ANDERSON: Kevin, do you see this as a Cold War?

RUDD: As I said before, we’re trending that way. As I said, the big difference between the Soviet Union and the United States is that China and the United States are deeply economically enmeshed and have become that way over the last 20 years or so. And that never was the case in the old Cold War. Secondly, in the old Cold War, we basically had a strategic relationship of mutually assured destruction, which came to the flashpoint of the Cuban Missile Crisis in the early 1960s. That’s not the case either. But I’ve got to say in all honesty, it’s trending in a fundamentally negative direction, and when we start to see actions like shutting down each other’s consulate generals, that does remind me of where we got to in the last Cold War as well. There should be an exit ramp, but it’s going to require a new strategic framework for the US-China relationship, based on what I describe as: managed strategic competition between these two powers, where each side’s red lines are well recognized, understood and observed – and competition occurs, as it were, in all other domains. At present, we don’t seem to have parameters or red lines at all.

ANDERSON: And we might have had this discussion four or five months ago. The new layer of course, is the coronavirus pandemic and the way that the US has responded which you say has provided an opportunity for the Chinese to steal a march on the US with regard to its position and its power around the world. Is Beijing, do you think – if you believe that there is a power vacuum at present after this coronavirus response – is Beijing taking advantage of that vacuum?

RUDD: Well, when the coronavirus broke out, China was, by definition, in a defensive position, because the virus came from Wuhan, and therefore, as the virus then spread across the world, China found itself in a deeply problematic position – not just the damage to its economy at home – but frankly its reputation abroad as well. However, President Trump’s America has demonstrated to the world that a) his administration can’t handle the virus within the United States itself, and b) there has been a phenomenal lack of American global leadership in dealing with the public health and global economic dimensions of – let’s call it the COVID-19 crisis – across the world. So, the argument that I’m attracted to is that both these great powers have been fundamentally damaged by the coronavirus crisis that has afflicted the world. So the challenge for the future is whether in fact we a) see a change in administration in Washington with Biden, and b) whether a Democratic administration will choose to reassert American global leadership through the institutions of global governance, where frankly, the current administration has left so many vacuums across the UN system and beyond it. And that remains the open question – which I think the international community is focusing on – as we move towards that event in November, when the good people of the United States cast their ballot.

ANDERSON: Yeah, no fascinating. I’ll just stick to the coronavirus for a final question for you and thank you for this sort of wide-ranging discussion. Australia, of course, applauded for its ability to act fast and flatten its coronavirus curve back in April. That has all been derailed. We’ve seen a second wave. It’s worse than the first. Earlier this week, the country reporting its worst day since the pandemic began despite new tough restrictions. What do you believe it will take to flatten the curve again? And are you concerned that the situation in Australia is slipping out of control?

RUDD: What the situation in the state of Victoria and the city of Melbourne in particular demonstrates is what we see in so many countries around the world, which is the ease with which a second wave effect can be made manifest. It’s not just of course in Australia. We see evidence of this in Hong Kong. We see it in other countries, where in fact, the initial management of the crisis was pretty effective. What the lesson of Melbourne, and the lesson of Victoria, is for all of us, is that, when it comes to maintaining the disciplines of social distancing, of proper quarantine arrangements, as well as contact tracing and the rest, there is no, as it were, release of our discipline applied to these challenges. And in the case of Victoria, it was in Melbourne – it was simply a poor application of quarantine arrangements in a single hotel, for Australians returning from elsewhere in the world, that led to this community-level transmission. And that can happen in the northern part of the United Kingdom. It can happen in regional France; it can happen anywhere in Germany. What’s the message? Vigilance across the board, until we can eliminate this thing. We’ve still got a lot to learn from Jacinda Ardern’s success in New Zealand in virtually eliminating this virus altogether.

ANDERSON: With that, we’re going to leave it there. Kevin Rudd, former Prime Minister of Australia, it’s always a pleasure. Thank you very much indeed for joining us.

RUDD: Good to be with you.

ANDERSON: Extremely important subject, US-China relations at present.

The post CNN: Cold War 1.5 appeared first on Kevin Rudd.

Worse Than FailureError'd: Free Coff...Wait!

"Hey! I like free coffee! Let me just go ahead and...um...hold on a second..." writes Adam R.

 

"I know I have a lot of online meetings these days but I don't remember signing up for this one," Ged M. wrote.

 

Peter G. writes, "The $60 off this $1M nylon bag?! What a deal! I should buy three of them!"

 

"So, because it's free, it's null, so I guess that's how Starbucks' app logic works?" James wrote.

 

Graham K. wrote, "How very 'zen' of National Savings to give me this particular error when I went to change my address."

 

"I'm not sure I trust "scenem3.com" with their marketing services, if they send out unsolicited template messages. (Muster is German for template, Max Muster is our equivalent of John Doe.)" Lukas G. wrote.

 

[Advertisement] ProGet’s got you covered with security and access controls on your NuGet feeds. Learn more.

,

Krebs on SecurityNY Charges First American Financial for Massive Data Leak

In May 2019, KrebsOnSecurity broke the news that the website of mortgage title insurance giant First American Financial Corp. had exposed approximately 885 million records related to mortgage deals going back to 2003. On Wednesday, regulators in New York announced that First American was the target of their first ever cybersecurity enforcement action in connection with the incident, charges that could bring steep financial penalties.

First American Financial Corp.

Santa Ana, Calif.-based First American [NYSE:FAF] is a leading provider of title insurance and settlement services to the real estate and mortgage industries. It employs some 18,000 people and brought in $6.2 billion in 2019.

As first reported here last year, First American’s website exposed 16 years worth of digitized mortgage title insurance records — including bank account numbers and statements, mortgage and tax records, Social Security numbers, wire transaction receipts, and drivers license images.

The documents were available without authentication to anyone with a Web browser.

According to a filing (PDF) by the New York State Department of Financial Services (DFS), the weakness that exposed the documents was first introduced during an application software update in May 2014 and went undetected for years.

Worse still, the DFS found, the vulnerability was discovered in a penetration test First American conducted on its own in December 2018.

“Remarkably, Respondent instead allowed unfettered access to the personal and financial data of millions of its customers for six more months until the breach and its serious ramifications were widely publicized by a nationally recognized cybersecurity industry journalist,” the DFS explained in a statement on the charges.

A redacted screenshot of one of many millions of sensitive records exposed by First American’s Web site.

Reuters reports that the penalties could be significant for First American: The DFS considers each instance of exposed personal information a separate violation, and the company faces penalties of up to $1,000 per violation.

In a written statement, First American said it strongly disagrees with the DFS’s findings, and that its own investigation determined only a “very limited number” of consumers — and none from New York — had personal data accessed without permission.

In August 2019, the company said a third-party investigation into the exposure identified just 32 consumers whose non-public personal information likely was accessed without authorization.

When KrebsOnSecurity asked last year how long it maintained access logs or how far back in time that review went, First American declined to be more specific, saying only that its logs covered a period that was typical for a company of its size and nature.

But in Wednesday’s filing, the DFS said First American was unable to determine whether records were accessed prior to June 2018.

“Respondent’s forensic investigation relied on a review of web logs retained from June 2018 onward,” the DFS found. “Respondent’s own analysis demonstrated that during this 11-month period, more than 350,000 documents were accessed without authorization by automated ‘bots’ or ‘scraper’ programs designed to collect information on the Internet.”

The records exposed by First American would have been a virtual gold mine for phishers and scammers involved in so-called Business Email Compromise (BEC) scams, which often impersonate real estate agents, closing agencies, title and escrow firms in a bid to trick property buyers into wiring funds to fraudsters. According to the FBI, BEC scams are the most costly form of cybercrime today.

First American’s stock price fell more than 6 percent the day after news of their data leak was published here. In the days that followed, the DFS and U.S. Securities and Exchange Commission each announced they were investigating the company.

First American released its first quarter 2020 earnings today. A hearing on the charges alleged by the DFS is slated for Oct. 26.

Kevin RuddBloomberg: US-China Relations Worsen

E&OE TRANSCRIPT
BLOOMBERG
23 JULY 2020

TOM MACKENZIE: Let’s start with your reaction to this latest sequence of events.

KEVIN RUDD: Well, structurally, the US-China relationship is in the worst state it’s been in about 50 years. It’s 50 years next year since Henry Kissinger undertook his secret diplomacy in Beijing. So, this relationship is in trouble strategically, militarily, diplomatically, politically, economically – in trade, investment and technology – and of course, in the wonderful world of espionage as well. And so, whereas this is a surprising move against a Chinese consulate general in the United States, it certainly fits within the fabric of a structural deterioration in the relationship, underway now for quite a number of years.

MACKENZIE: So far, China – Beijing – has taken what many would argue would be a proportionate response to actions by the US, at least in the last few months. Is there an argument now that this kind of action, calling for the closure of this consulate in Houston, will strengthen the hands of the hardliners here in Beijing, and force them to take a stronger response? What do you think ultimately will be the material reaction from Beijing?

RUDD: Well, on this particular consulate general closure, I think, as night follows day, you’ll see a Chinese decision to close an American consulate general in China. There are a number already within China. I think you would look to see what would happen with the future of the US Consulate General in say Shenyang up in the northeast, or in Chengdu in the west, because this tit-for-tat is alive very much in the way in which China views the necessity politically, to respond in like form to what the Americans have done. But overall, the Chinese leadership are a very hard-bitten, deeply experienced Marxist-Leninist leadership, who look at the broad view of the US-China relationship. They see it as structurally deteriorating. They see it in part as an inevitable reaction to China’s rise. And if you look carefully at some of the internal statements by Xi Jinping in recent months, the Chinese system is gearing up for what it describes internally as 20 to 30 years of growing friction in the US-China relationship, and that will make life difficult for all countries who have deep relationships with both countries.

MACKENZIE: Mike Pompeo, the US Secretary of State was in London talking to his counterparts there, and he called for a coalition with allies. Presumably, that will include at some point Australia, though we have yet to hear from their leaders about the sense of a coalition against China. Do you think this is significant? Do you think this is a shift in US policy? How much traction do you think Mike Pompeo and the Trump administration will get in forming a coalition to push back against China?

RUDD: Well, the truth is, most friends and allies of the United States are waiting to see what happens in the US presidential election. There is a general expectation that President Trump will not be re-elected. Therefore, the attitude of friends and allies of the United States is: well, what will be the policy cost here of an incoming Biden administration, in relation to China, and in critical areas like the economy, trade, investment, technology and the rest? Bear in mind, however, that what has happened under Xi Jinping’s leadership, since he became leader of the Chinese Communist Party at the end of 2012, is that China has progressively become more assertive in the promotion of its international interests, whether it’s in the South China Sea, the East China Sea, whether it’s in Europe, whether it’s the United States, whether it’s countries like Australia. And therefore, what is happening is that countries who are now experiencing this for the first time – the real impact of an assertive Chinese foreign policy – are themselves beginning to push back. And so whether it’s with American leadership or not, the bottom line is that what I now observe is that countries in Europe, democracies in Europe, democracies in Asia, are increasingly in discussion with one another about how to deal with the emerging China challenge to the international rules-based system. That I think is happening as a matter of course, whether or not Mike Pompeo seeks to lead it.

DAVID INGLES: Mr Rudd I’d like to pick it up there. David here, by the way, in Hong Kong. In terms of what do you think is the proper way to engage an emerging China? You’ve dealt with them at many levels. You understand how sensitive their past is to their leadership, and how that shapes where they think their country should be, their ambitions. How should the world – let alone the US, let’s set that aside – how should the rest of the world engage an emerging China?

RUDD: Well you’re right. In one capacity or another, I’ve been dealing with China for the last 35 years, since I first went to work there as an Australian embassy official way back in the 1980s. It’s almost the Mesolithic period now. And I’ve seen the evolution of China’s international posture over that period of time. And certainly, there is a clear dividing line with the emergence of Xi Jinping’s leadership, where China has ceased to hide its strength, bide its time, never to take the lead – that was Deng Xiaoping’s axiom for the past. And instead, we see a China under this new leadership, which is infinitely more assertive. And so my advice to governments, when they ask me about this, is that governments need to have a coordinated China strategy themselves – just as China has a strategy for dealing with the rest of the world including the major countries and economies within it. But the principles of those strategies should be pretty basic. Number one, those of us who are democracies, we simply make it plain to the Chinese leadership that that’s our nature, our identity, and we’re not about to change as far as our belief in universal human rights and values is concerned. Number two, most of us are allies with the United States for historical reasons, and current reasons as well. And that’s not going to change either. Number three, we would like, however, to prosecute a mutually beneficial trade and investment and capital markets relationship with you in China, that works for both of us on the basis of reciprocity in each other’s markets. And four, there are so many global challenges out there at the moment – from the pandemic, through to global climate change action, and onto financial markets stability – which require us and China to work together in the major forums of the world like the G20. I think those principles should govern everyone’s approach to how you deal with this emerging and different China.

The post Bloomberg: US-China Relations Worsen appeared first on Kevin Rudd.

CryptogramAdversarial Machine Learning and the CFAA

I just co-authored a paper on the legal risks of doing machine learning research, given the current state of the Computer Fraud and Abuse Act:

Abstract: Adversarial Machine Learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used by Facebook, Tesla, Microsoft, IBM and Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks to adversarial ML researchers when they attack ML systems?" Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary United States federal statute that creates liability for hacking. We claim that Adversarial ML research is likely no different. Our analysis shows that because there is a split in how the CFAA is interpreted, aspects of adversarial ML attacks, such as model inversion, membership inference, model stealing, reprogramming the ML system and poisoning attacks, may be sanctioned in some jurisdictions and not penalized in others. We conclude with an analysis predicting how the US Supreme Court may resolve some present inconsistencies in the CFAA's application in Van Buren v. United States, an appeal expected to be decided in 2021. We argue that the court is likely to adopt a narrow construction of the CFAA, and that this will actually lead to better adversarial ML security outcomes in the long term.

Medium post on the paper. News article, which uses our graphic without attribution.
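
For readers wondering what an "adversarial ML attack" looks like mechanically, here is a toy sketch of the classic fast-gradient-sign perturbation against a hand-written logistic regression model. It is purely illustrative: the victim model, weights and epsilon are made up, and it is not one of the specific attack classes the paper analyzes (model inversion, membership inference, model stealing or poisoning).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "victim" model: logistic regression with fixed, made-up weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

x = np.array([0.2, 0.4, -0.1])   # a benign input
y = 1.0                          # its true label

# Gradient of the logistic loss with respect to the input.
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# Fast-gradient-sign perturbation: a small step that increases the loss,
# pushing the model's score away from the true label.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print("clean score:      ", sigmoid(w @ x + b))
print("adversarial score:", sigmoid(w @ x_adv + b))

The legal question the paper raises is whether running this kind of probe against someone else's production model, rather than your own toy one, crosses the CFAA's line.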

Kevin RuddCNN: America, Britain and China

E&OE TRANSCRIPT
TELEVISION INTERVIEW
QUEST MEANS BUSINESS, CNN
22 JULY 2020

Richard Quest
Kevin Rudd, very good to see you. The strategy that China is now employing to put pressure — you’ve already seen the result of what the US sanctions on China has done — so now what happens with Australia?

Kevin Rudd
Well, it’s important, Richard, to understand what’s driving, I think, Chinese government decision making, not just on the Hong Kong question, but more broadly what’s happening with a number of other significant relationships which China has in the world. Since COVID-19 has hit, what many of us have observed is China actually doubling down hard in a nationalist direction in terms of a whole range of its external relationships, whether it’s with Canada, Australia, the United Kingdom, but also over questions like Hong Kong, the South China Sea, Taiwan, and look at what most recently has happened on the China-India border. And so, therefore, we see a much harder Chinese response across the board. And it’s inevitable, in my judgment, that this is going to generate reactions of the type that we’ve seen in governments from Canberra to London to other points in between.

Richard Quest
But is China in danger of fighting on too many fronts? It’s got its enormous trade war with the United States. It’s now, of course, got the problems over Hong Kong, which will add more potential sanctions and tariffs to China. Now it’s got its row with the UK and of course its recent row with Australia. So at what point, in your view, does China have to start rowing back?

Kevin Rudd
Well, it’s an important question for the Chinese leadership now in August, Richard, because in August they retreat east of Beijing for a month of high-level meetings amongst the extended central leadership. And a central question on the agenda for this upcoming set of meetings will be a) the state of the US-China relationship, which for them is central to everything, b) the relationship with other principal countries like the UK, and c) the unstated topic: has China gone too far? In Chinese strategic literature, there’s an expression, just like you mentioned before, Richard: it’s not sensible to fight on multiple fronts simultaneously. So there’s an internal debate in China at the moment about whether, in fact, the current strategy is the right one. And therefore the impact of these decisions, including the British decisions most recently – both the impending decision on Huawei and the one on Hong Kong – will feed into that.

Richard Quest
But Kevin, whether it’s wise or not, and bearing in mind that China has enormous problems at home, it’s not as if President Xi has, by any means, an electorate – or a populace, I should say, more likely – that’s entirely behind him. But he seems determined to prosecute these disagreements with other nations, whatever the cost, and I suggest to you that’s because he doesn’t have to face an electorate, like all the rest of them have to.

Kevin Rudd
But the bottom line, however, Richard, is that you then see the economic impact of China being progressively, as it were, imperilled in its principal economic relationships abroad. The big debate in Beijing, for example, with the US-China trade war in the last two years has been: has China pushed too far in order to generate the magnitude of this American reaction? Parallel logic on Huawei, parallel logic in terms of the Hong Kong national security law. So your point goes to whether Xi Jinping is domestically immune from pressure? Well, yes, China is not a liberal democracy. We all know that. It never has been, at least since 1949 and for a long time before that, as well. But there are pressures within the Communist Party at a level of sheer pragmatism, which is: is this sustainable in terms of China’s economic interests? Remember 38% of the Chinese gross domestic product is generated through the traded sector of its economy. It has an unfolding balance of payments challenge, and is therefore exposed to any potential financial sanctions coming out of the Hong Kong national security law from Washington in particular. China, therefore, experiences the economic impact, which then feeds into its domestic political debate within the Communist Party.

Journalist
Kevin Rudd joining us.

The post CNN: America, Britain and China appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Step too Var

Astor works for a company that provides software for research surveys. Someone needed a method to return a single control object from a list of control objects, so they wrote this C# code:

 
private ResearchControl GetResearchControlFromListOfResearchControls(int theIndex, 
    List<ResearchControl> researchControls)
{
    var result = new ResearchControl();
    result = researchControls[theIndex];
    return result;
}

Astor has a theory: “I can only guess the author was planning to return some default value in some case…”

I’m sorry, Astor, but you are mistaken. Honestly, if that were the case, I wouldn’t consider this much of a WTF at all, but here we have a subtle hint about deeper levels of ignorance, and it’s all encoded in that little var.

C# is strongly typed, but declaring the type for every variable is a pain, and in many cases, it’s redundant information. So C# lets you declare a variable with var, which does type inference. A var variable still has a type; instead of spelling it out, we ask the compiler to figure it out from context.

But you have to give it that context, which means you have to declare and assign to the variable in a single step.

So, imagine you’re a developer who doesn’t know C# very well. Maybe you know some JavaScript, and you’re just trying to muddle through.

“Okay, I need a variable to hold the result. I’ll type var result. Hmm. Syntax error. Why?”

The developer skims through the code, looking for similar statements, and sees a var / new construct, and thinks, “Ah, that must be what I need to do!” So var result = new ResearchControl() appears, and the syntax error goes away.

Now, that doesn’t explain all of this code. There are still more questions, like: why not just return researchControls[theIndex] directly, or realize that, since you’re just indexing a list, why write a function at all? Maybe someone had some thoughts about adding exception handling, or returning a default value in cases where there wasn’t a valid entry in the list, but none of that ever happened. Instead, we just get this little artifact of someone who didn’t know better, and who wasn’t given any direction on how to do better.
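
For completeness, here is a minimal sketch of what the method looks like once var is used the way the language intends, assuming the wrapper has to stay around for interface reasons:

private ResearchControl GetResearchControlFromListOfResearchControls(int theIndex,
    List<ResearchControl> researchControls)
{
    // var infers ResearchControl from the right-hand side; no throwaway object needed.
    var result = researchControls[theIndex];
    return result;
}

Or, of course, just index the list at the call site and skip the method entirely.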

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityTwitter Hacking for Profit and the LoLs

The New York Times last week ran an interview with several young men who claimed to have had direct contact with those involved in last week’s epic hack against Twitter. These individuals said they were only customers of the person who had access to Twitter’s internal employee tools, and were not responsible for the actual intrusion or bitcoin scams that took place that day. But new information suggests that at least two of them operated a service that resold access to Twitter employees for the purposes of modifying or seizing control of prized Twitter profiles.

As first reported here on July 16, prior to bitcoin scam messages being blasted out from such high-profile Twitter accounts as @barackobama, @joebiden, @elonmusk and @billgates, several highly desirable short-character Twitter account names changed hands, including @L, @6 and @W.

A screenshot of a Discord discussion between the key Twitter hacker “Kirk” and several people seeking to hijack high-value Twitter accounts.

Known as “original gangster” or “OG” accounts, short-character profile names confer a measure of status and wealth in certain online communities, and such accounts can often fetch thousands of dollars when resold in the underground.

The people involved in obtaining those OG accounts on July 15 said they got them from a person identified only as “Kirk,” who claimed to be a Twitter employee. According to The Times, Kirk first reached out to the group through a hacker who used the screen name “lol” on OGusers, a forum dedicated to helping users hijack and resell OG accounts from Twitter and other social media platforms. From The Times’s story:

“The hacker ‘lol’ and another one he worked with, who went by the screen name ‘ever so anxious,’ told The Times that they wanted to talk about their work with Kirk in order to prove that they had only facilitated the purchases and takeovers of lesser-known Twitter addresses early in the day. They said they had not continued to work with Kirk once he began more high-profile attacks around 3:30 p.m. Eastern time on Wednesday.

‘lol’ did not confirm his real-world identity, but said he lived on the West Coast and was in his 20s. “ever so anxious” said he was 19 and lived in the south of England with his mother.

Kirk connected with “lol” late Tuesday and then “ever so anxious” on Discord early on Wednesday, and asked if they wanted to be his middlemen, selling Twitter accounts to the online underworld where they were known. They would take a cut from each transaction.”

Twice in the past year, the OGUsers forum was hacked, and both times its database of usernames, email addresses and private messages was leaked online. A review of the private messages for “lol” on OGUsers provides a glimpse into the vibrant market for the resale of prized OG accounts.

On OGUsers, lol was known to other members as someone who had a direct connection to one or more people working at Twitter who could be used to help fellow members gain access to Twitter profiles, including those that had been suspended for one reason or another. In fact, this was how lol introduced himself to the OGUsers community when he first joined.

“I have a twitter contact who I can get users from (to an extent) and I believe I can get verification from,” lol explained.

In a direct message exchange on OGUsers from November 2019, lol is asked for help from another OGUser member whose Twitter account had been suspended for abuse.

“hello saw u talking about a twitter rep could you please ask if she would be able to help unsus [unsuspend] my main and my friends business account will pay 800-1k for each,” the OGUsers profile inquires of lol.

Lol says he can’t promise anything but will look into it. “I sent her that, not sure if I will get a reply today bc its the weekend but ill let u know,” Lol says.

In another exchange, an OGUser denizen quizzes lol about his Twitter hookup.

“Does she charge for escalations? And how do you know her/what is her department/job. How do you connect with them if I may ask?”

“They are in the Client success team,” lol replies. “No they don’t charge, and I know them through a connection.”

As for how he got access to the Twitter employee, lol declines to elaborate, saying it’s a private method. “It’s a lil method, sorry I cant say.”

In another direct message, lol asks a fellow OGUser member to edit a comment in a forum discussion which included the Twitter account “@tankska,” saying it was his IRL (in real life) Twitter account and that he didn’t want to risk it getting found out or suspended (Twitter says this account doesn’t exist, but a simple text search on Twitter shows the profile was active until late 2019).

“can u edit that comment out, @tankska is a gaming twitter of mine and i dont want it to be on ogu :D’,” lol wrote. “just dont want my irl getting sus[pended].”

Still another OGUser member would post lol’s identifying information into a forum thread, calling lol by his first name — “Josh” — in a post asking lol what he might offer in an auction for a specific OG name.

“Put me down for 100, but don’t note my name in the thread please,” lol wrote.

WHO IS LOL?

The information in lol’s OGUsers registration profile indicates he was probably being truthful with The Times about his location. The hacked forum database shows a user “tankska” registered on OGUsers back in July 2018, but only made one post asking about the price of an older Twitter account for sale.

The person who registered the tankska account on OGUsers did so with the email address jperry94526@gmail.com, and from an Internet address tied to the San Ramon Unified School District in Danville, Calif.

According to 4iq.com, a service that indexes account details like usernames and passwords exposed in Web site data breaches, the jperry94526 email address was used to register accounts at several other sites over the years, including one at the apparel store Stockx.com under the profile name Josh Perry.

Tankska was active only briefly on OGUsers, but the hacked OGUsers database shows that “lol” changed his username three times over the years. Initially, it was “freej0sh,” followed by just “j0sh.”

lol did not respond to requests for comment sent to email addresses tied to his various OGU profiles and Instagram accounts.

ALWAYS IN DISCORD

Last week’s story on the Twitter compromise noted that just before the bitcoin scam tweets went out, several OG usernames changed hands. The story traced screenshots of Twitter tools posted online back to a moniker that is well-known in the OGUsers circle: PlugWalkJoe, a 21-year-old from the United Kingdom.

Speaking with The Times, PlugWalkJoe — whose real name is Joseph O’Connor — said while he acquired a single OG Twitter account (@6) through one of the hackers in direct communication with Kirk, he was otherwise not involved in the conversation.

“I don’t care,” O’Connor told The Times. “They can come arrest me. I would laugh at them. I haven’t done anything.”

In an interview with KrebsOnSecurity, O’Connor likewise asserted his innocence, suggesting at least a half dozen other hacker handles that may have been Kirk or someone who worked with Kirk on July 15, including “Voku,” “Crim/Criminal,” “Promo,” and “Aqua.”

“That twit screenshot was the first time in a while I joke[d], and evidently I shouldn’t have,” he said. “Joking is what got me into this mess.”

O’Connor shared a number of screenshots from a Discord chat conversation on the day of the Twitter hack between Kirk and two others: “Alive,” which is another handle used by lol, and “Ever So Anxious.” Both were described by The Times as middlemen who sought to resell OG Twitter names obtained from Kirk. O’Connor is referenced in these screenshots as both “PWJ” and by his Discord handle, “Beyond Insane.”

The negotiations over highly-prized OG Twitter usernames took place just prior to the hijacked celebrity accounts tweeting out bitcoin scams.

Ever So Anxious told Kirk his OGU nickname was “Chaewon,” which corresponds to a user in the United Kingdom. Just prior to the Twitter compromise, Chaewon advertised a service on the forum that could change the email address tied to any Twitter account for around $250 worth of bitcoin. O’Connor said Chaewon also operates under the hacker alias “Mason.”

“Ever So Anxious” tells Kirk his OGUsers handle is “Chaewon,” and asks Kirk to modify the display names of different OG Twitter handles to read “lol” and “PWJ”.

At one point in the conversation, Kirk tells Alive and Ever So Anxious to send funds for any OG usernames they want to this bitcoin address. The payment history of that address shows that it indeed also received approximately $180,000 worth of bitcoin from the wallet address tied to the scam messages tweeted out on July 15 by the compromised celebrity accounts.

The Twitter hacker “Kirk” telling lol/Alive and Chaewon/Mason/Ever So Anxious where to send the funds for the OG Twitter accounts they wanted.

SWIMPING

My July 15 story observed there were strong indications that the people involved in the Twitter hack have connections to SIM swapping, an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

The account “@shinji,” a.k.a. “PlugWalkJoe,” tweeting a screenshot of Twitter’s internal tools interface.

SIM swapping was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As recounted by Wired.com, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

Immediately after Jack Dorsey’s Twitter handle was hijacked, the hackers tweeted out several shout-outs, including one to @PlugWalkJoe. O’Connor told KrebsOnSecurity he has never been involved in SIM swapping, although that statement was contradicted by two law enforcement sources who closely track such crimes.

However, Chaewon’s private messages on OGusers indicate that he very much was involved in SIM swapping. Use of the term “SIM swapping” was not allowed on OGusers, and the forum administrators created an automated script that would watch for anyone trying to post the term into a private message or discussion thread.

The script would replace the term with “I do not condone illegal activities.” Hence, a portmanteau was sometimes used: “Swimping.”

“Are you still swimping?” one OGUser member asks of Chaewon on Mar. 24, 2020. “If so and got targs lmk your discord.” Chaewon responds in the affirmative, and asks the other user to share his account name on Wickr, an encrypted online messaging app that automatically deletes messages after a few days.

Chaewon/Ever So Anxious/Mason did not respond to requests for comment.

O’Connor told KrebsOnSecurity that one of the individuals thought to be associated with the July 15 Twitter hack — a young man who goes by the nickname “Voku” — is still actively involved in SIM-swapping, particularly against customers of AT&T and Verizon.

Voku is one of several hacker handles used by a Canton, Mich. youth whose mom turned him in to the local police in February 2018 when she overheard him talking on the phone and pretending to be an AT&T employee. Officers responding to the report searched the residence and found multiple cell phones and SIM cards, as well as files on the kid’s computer that included “an extensive list of names and phone numbers of people from around the world.”

The following month, Michigan authorities found the same individual accessing personal consumer data via public Wi-Fi at a local library, and seized 45 SIM cards, a laptop and a Trezor wallet — a hardware device designed to store cryptocurrency account data. In April 2018, Voku’s mom again called the cops on her son — identified only as confidential source #1 (“CS1”) in the criminal complaint against him — saying he’d obtained yet another mobile phone.

Voku’s cooperation with authorities led them to bust up a conspiracy involving at least nine individuals who stole millions of dollars worth of cryptocurrency and other items of value from their targets.

CONSPIRACY

Samy Tarazi, an investigator with the Santa Clara County District Attorney’s Office, has spent hundreds of hours tracking young hackers during his tenure with REACT, a task force set up to combat SIM swapping and bring SIM swappers to justice.

According to Tarazi, multiple actors in the cybercrime underground are constantly targeting people who work in key roles at major social media and online gaming platforms, from Twitter and Instagram to Sony, Playstation and Xbox.

Tarazi said some people engaged in this activity seek to woo their targets, sometimes offering them bribes in exchange for the occasional request to unban or change the ownership of specific accounts.

All too often, however, employees at these social media and gaming platforms find themselves the object of extremely hostile and persistent personal attacks that threaten them and their families unless and until they give in to demands.

“In some cases, they’re just hitting up employees saying, ‘Hey, I’ve got a business opportunity for you, do you want to make some money?'” Tarazi explained. “In other cases, they’ve done everything from SIM swapping and swatting the victim many times to posting their personal details online or extorting the victims to give up access.”

Allison Nixon is chief research officer at Unit 221B, a cyber investigations company based in New York. Nixon says she doesn’t buy the idea that PlugWalkJoe, lol, and Ever So Anxious are somehow less culpable in the Twitter compromise, even if their claims of not being involved in the July 15 Twitter bitcoin scam are accurate.

“You have the hackers like Kirk who can get the goods, and the money people who can help them profit — the buyers and the resellers,” Nixon said. “Without the buyers and the resellers, there is no incentive to hack into all these social media and gaming companies.”

Mark Rasch, Unit 221B’s general counsel and a former U.S. federal prosecutor, said all of the players involved in the Twitter compromise of July 15 can be charged with conspiracy, a legal concept in the criminal statute which holds that any co-conspirators are liable for the acts of any other co-conspirator in furtherance of the crime, even if they don’t know who those other people are in real life or what else they may have been doing at the time.

“Conspiracy has been called the prosecutor’s friend because it makes the agreement the crime,” Rasch said. “It’s a separate crime in addition to the underlying crime, whether it be breaking in to a network, data theft or account takeover. The ‘I just bought some usernames and gave or sold them to someone else’ excuse is wrong because it’s a conspiracy and these people obviously don’t realize that.”

In a statement on its ongoing investigation into the July 15 incident, Twitter said it resulted from a small number of employees being manipulated through a social engineering scheme. Twitter said at least 130 accounts were targeted by the attackers, who succeeded in sending out unauthorized tweets from 45 of them and may have been able to view additional information about those accounts, such as direct messages.

On eight of the compromised accounts, Twitter said, the attackers managed to download the account history using the Your Twitter Data tool. Twitter added that it is working with law enforcement and is rolling out additional company-wide training to guard against social engineering tactics.

CryptogramFawkes: Digital Image Cloaking

Fawkes is a system for manipulating digital images so that they aren't recognized by facial recognition systems.

At a high level, Fawkes takes your personal images, and makes tiny, pixel-level changes to them that are invisible to the human eye, in a process we call image cloaking. You can then use these "cloaked" photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, "cloaked" images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable, and will not cause errors in model training. However, when someone tries to identify you using an unaltered image of you (e.g. a photo taken in public), they will fail.

Research paper.

Worse Than FailureScience Is Science

Oil well

Bruce worked for a small engineering consultant firm providing custom software solutions for companies in the industrial sector. His project for CompanyX involved data consolidation for a new oil well monitoring system. It was a two-phased approach: Phase 1 was to get the raw instrument data into the cloud, and Phase 2 was to aggregate that data into a useful format.

Phase 1 was completed successfully. When it came time to write the business logic for aggregating the data, CompanyX politely informed Bruce's team that their new in-house software team would take over from here.

Bruce and his team smelled trouble. They did everything they could think of to persuade CompanyX not to go it alone when all the expertise rested on their side. However, CompanyX was confident they could handle the job, parting ways with handshakes and smiles.

Although Phase 2 was officially no longer on his plate, Bruce had a suspicion borne from experience that this wasn't the last he'd hear from CompanyX. Sure enough, a month later he received an urgent support request via email from Rick, an electrical engineer.

We're having issues with our aggregated data not making it into the database. Please help!!

Rick Smith
LEAD SOFTWARE ENGINEER

"Lead Software Engineer!" Bruce couldn't help repeating out loud. Sadly, he'd seen this scenario before with other clients. In a bid to save money, their management would find the most sciency people on their payroll and would put them in charge of IT or, worse, programming.

Stifling a cringe, Bruce dug deeper into the email. Rick had written a Python script to read the raw instrument data, aggregate it in memory, and re-insert it into a table he'd added to the database. Said script was loaded with un-parameterized queries, filters on non-indexed fields, and SELECT * FROM queries. The aggregation logic was nothing to write home about, either. It was messy, slow, and a slight breeze could take it out. Bruce fired up the SQL profiler and found a bigger issue: a certain query was failing every time, throwing the error Cannot insert the value NULL into column 'requests', table 'hEvents'; column does not allow nulls. INSERT fails.

Well, that seemed straightforward enough. Bruce replied to Rick's email, asking if he knew about the error.

Rick's reply came quickly, and included someone new on the email chain. Yes, but we couldn't figure it out, so we were hoping you could help us. Aaron is our SQL expert and even he's stumped.

Product support was part of Bruce's job responsibilities. He helpfully pointed out the specific query that was failing and described how to use the SQL profiler to pinpoint future issues.

Unfortunately, CompanyX's crack new in-house software team took this opportunity to unload every single problem they were having on Bruce, most of them just as basic or even more basic than the first. The back-and-forth email chain grew to epic proportions, and had less to do with product support than with programming education. When Bruce's patience finally gave out, he sent Rick and Aaron a link to the W3 schools SQL tutorial page. Then he talked to his manager. Agreeing that things had gotten out of hand, Bruce's manager arranged for a BA to contact CompanyX to offer more formal assistance. A teleconference was scheduled for the next week, which Bruce and his manager would also be attending.

When the day of the meeting came, Bruce and his associates dialed in—but no one from CompanyX did. After some digging, they learned that the majority of CompanyX's software team had been fired or reassigned. Apparently, the CompanyX project manager had been BCC'd on Bruce's entire email chain with Rick and Aaron. Said PM had decided a new new software team was in order. The last Bruce heard, the team was still "getting organized." The fate of Phase 2 remains unknown.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

CryptogramHacking a Power Supply

This hack targets the firmware on modern power supplies. (Yes, power supplies are also computers.)

Normally, when a phone is connected to a power brick with support for fast charging, the phone and the power adapter communicate with each other to determine the proper amount of electricity that can be sent to the phone without damaging the device -- the more juice the power adapter can send, the faster it can charge the phone.

However, by hacking the fast charging firmware built into a power adapter, Xuanwu Labs demonstrated that bad actors could potentially manipulate the power brick into sending more electricity than a phone can handle, thereby overheating the phone, melting internal components, or as Xuanwu Labs discovered, setting the device on fire.

Research paper, in Chinese.

Kevin RuddBrookings: Prioritising Education During Covid-19

SPEAKING REMARKS
BROOKINGS INSTITUTION
LEADERSHIP DIALOGUE SERIES
21 JULY 2020

The post Brookings: Prioritising Education During Covid-19 appeared first on Kevin Rudd.

Kevin RuddAFR: Seven Questions Morrison Must Answer

On the eve of the government’s much-delayed financial statement, it’s time for some basic questions about Australia’s response.

The uncomfortable truth is that we are still in the economic equivalent of ‘‘the phoney war’’ between September 1939 and May 1940. Our real problem is not now but the fourth quarter of this year, and next year, by when temporary measures will have washed through, while globally the real economy will still be wrecked. But there’s no sign yet of a long-term economic strategy, centred on infrastructure, to rebuild business confidence to start investing and re-employing people.

So while Scott Morrison may look pleased with himself (after months of largely uncritical media, a Parliament that barely meets and a delayed budget) it’s time for some intellectual honesty in what passes for our public policy debate. So here are seven questions for Scotty to answer.

First, the big one. It’s well past time to come fully clean on the two dreaded words of Australian politics: debt and deficit. How on earth can Morrison’s Liberal Party and its coalition partner, the Murdoch party, justify their decade-long assault on public expenditure and investment in response to an existential financial and economic crisis?

Within nine months of taking office, we had to deal with a global financial crisis that threatened our banks, while avoiding mass unemployment. We avoided economic and social disaster by … borrowing. In total, we expended $88 billion, taking our federal net debt to 13 per cent of GDP by 2014 – while still sustaining our AAA credit rating.

Four months into the current crisis, Morrison has so far allocated $259 billion, resulting in a debt-to-GDP ratio of about 50 per cent and rising. We haven’t avoided recession – partly because Morrison had to be dragged kicking and screaming late to the stimulus table. He ignored Reserve Bank and Treasury advice to act earlier because it contradicted the Liberals’ political mantra of getting ‘‘back in black’’.

On debt and deficit, this emperor has no clothes. Indeed, the gargantuan nature of this stimulus strategy has destroyed the entire edifice of Liberal ideology and politics. No wonder Scotty from Marketing now talks of us being ‘‘beyond ideology’’: he no longer has one. He’s adopted social democracy instead, including the belated rediscovery that the agency of the state is essential in the economy, public health and broadband. Where would we be on online delivery of health, education and business Zoom services in the absence of our NBN, despite the Liberals botching its final form?

So Morrison and the Murdoch party should just admit their dishonest debt-anddeficit campaign was bullshit all along – a political myth manufactured to advance the proposition that Labor governments couldn’t manage the economy.

Then there’s Morrison’s claim that his mother-of-all-stimulus-strategies, unlike ours, is purring like a well-oiled machine without a wasted dollar. What about the monumental waste of paying $19,500 to young people who were previously working only part-time for less than half that amount? All part of a $130 billion program that suddenly became $70 billion after a little accounting error (imagine the howls of ‘‘incompetence’’ had we done that).

And let’s not forget the eerie silence surrounding the $40 billion ‘‘loans program’’ to businesses. If that’s administered with anything like the finesse we’ve seen with the $100 million sports rorts affair, heaven help the Auditor-General. Then there’s Stuart ‘‘Robodebt’’ Robert and the rolling administrative debacle that is Centrelink. Public administration across the board is just rolling along tickety-boo.

Third, the $30 billion snatch-and-grab raid (so far) on superannuation balances is the most financially irresponsible assault on savings since Federation. Paul Keating built a $3.1 trillion national treasure. I added to it by lifting the super guarantee from 9 per cent to 12 per cent, which the Liberals are seeking to wreck. The long-term damage this will do to the fiscal balance (age pensions), the balance of payments and our credit rating is sheer economic vandalism.

Fourth, industry policy. Yes to bailouts for regional media, despite Murdoch using COVID-19 to kill more than 100 local and regional papers nationwide. But no JobKeeper for the universities, one of our biggest export industries. Why? Ideology! The Liberals hate universities because they worry educated people become lefties. It’s like the Liberals killing off Australian car manufacturing because they hated unions, despite the fact our industry was among the least subsidised in the world.

Fifth, Morrison proclaimed an automatic ‘‘snapback’’ of his capital-S stimulus strategy after six months to avoid the ‘‘mistakes’’ of my government in allowing ours to taper out over two years. Looks like Scotty has been mugged by reality again. Global recessions have a habit of ignoring domestic political fiction.

Sixth, infrastructure. For God’s sake, we should be using near-zero interest rates to deploy infrastructure bonds and invest in our economic future. Extend the national transmission grid to accommodate industrial-scale solar. Admit the fundamental error of abandoning fibre for copper broadband and complete the NBN as planned. The future global economy will become more digital, not less. Use Infrastructure Australia (not the Nationals) to advise on the cost benefit of each.

Finally, there is trade – usually 43 per cent of our GDP. Global trade is collapsing because of the pandemic and Trumpian protectionism. Yet nothing from the government on forging a global free-trade coalition. Yes, the China relationship is hard. But the government’s failure to prosecute an effective China strategy is now compounding our economic crisis. And, outrageously, the US is moving in on our barley and beef markets. Trade policy is a rolled-gold mess.

So far Morrison’s government, unlike mine, has had unprecedented bipartisan support from the opposition. But public trust is hanging by a thread. It’s time for Morrison to get real with these challenges, not just spin us a line. Ultimately, the economy does not lie.

Kevin Rudd was the 26th prime minister of Australia.

First published in the Australian Financial Review on 21 July 2020.

The post AFR: Seven Questions Morrison Must Answer appeared first on Kevin Rudd.

Worse Than FailureCodeSOD: A Dropped Pass

A charitable description of Java is that it’s a strict language, at least in terms of how it expects you to interact with types and definitions. That strictness can create conflict when you’re interacting with less strict systems, like JSON data.

Tessie produces data as a JSON API that wraps around sensing devices which report a numerical value. These sensors, as far as we care for this example, come in two flavors: ones that report a maximum recorded value, and ones which don’t. Something like:

  {
    "dataNoMax": [
      {"name": "sensor1", "value": 20, "max": 0}
    ],
    "dataWithMax": [
      {"name": "sensor2", "value": 25, "max": 50}
    ]
  }

By convention, the API would report max: 0 for all the devices which didn’t have a max.

With that in mind, they designed their POJOs like this:

  class Data {
    String name;
    int value;
    int max;
  }

  class Readings {
    List<Data> dataNoMax;
    List<Data> dataWithMax;
  }

These POJOs would be used both on the side producing the data, and in the client libraries for consuming the data.

Of course, by JSON convention, including a field that doesn’t actually hold a meaningful value is a bad idea- max: 0 should either be max: null, or better yet, just excluded from the output entirely.

So one of Tessie’s co-workers hacked some code into the JSON serializer to conditionally include the max field in the output.

QA needed to validate that this change was correct, so they needed to implement some automated tests. And this is where the problems started to crop up. The developer hadn’t changed the implementation of the POJOs, and they were using int.

For all that Java has a reputation as “everything’s an object”, a few things explicitly aren’t: primitive types. int is a primitive integer, while Integer is an object integer. Integers are references. ints are not. An Integer could be null, but an int cannot ever be null.
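
To make the distinction concrete, here is a minimal illustration (not from the original submission):

  Integer boxedMax = null;     // fine: a reference type can point at nothing
  int primitiveMax = 0;        // a primitive always holds a value; uninitialized fields default to 0
  // int broken = null;        // does not compile: incompatible types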

This meant if QA tried to write a test assertion that looked like this:

assertThat(readings.dataNoMax.get(0).getMax()).isNull();

it wouldn’t work. max could never be null.

There are a few different ways to solve this. One could make the POJO support nullable types, which is probably a better way to represent an object which may not have a value for certain fields. An int in Java that isn’t initialized to a value will default to zero, so they probably could have left their last unit test unchanged and it still would have passed. But this was a code change, and a code change needs to have a test change to prove the code change was correct.
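
One way that could look, purely as a sketch and not Tessie’s team’s actual code, assuming Jackson is the serializer in play and that public getters exist as the tests imply:

  import com.fasterxml.jackson.annotation.JsonInclude;

  // Sketch of the nullable-types approach: max becomes an Integer,
  // and (with Jackson) a null max is simply left out of the JSON output.
  @JsonInclude(JsonInclude.Include.NON_NULL)
  class Data {
    String name;
    int value;
    Integer max;   // null means "this sensor reports no max"

    public String getName() { return name; }
    public int getValue() { return value; }
    public Integer getMax() { return max; }
  }

With that change, an assertion like assertThat(readings.dataNoMax.get(0).getMax()).isNull() can actually pass or fail on its merits, instead of being impossible by construction.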

Let’s compare versions. Here was their original test:

/** Should display max */
assertEquals("sensor2", readings.dataWithMax.get(0).getName());
assertEquals(50, readings.dataWithMax.get(0).getMax());
assertEquals(25, readings.dataWithMax.get(0).getValue());

/** Should not display max */
assertEquals("sensor1", readings.dataNoMax.get(0).getName());
assertEquals(0, readings.dataNoMax.get(0).getMax());
assertEquals(20, readings.dataNoMax.get(0).getValue());

And, since the code changed, and they needed to verify that change, this is their new test:

/** Should display max */
assertEquals("sensor2", readings.dataWithMax.get(0).getName());
assertThat(readings.dataWithMax.get(0).getMax()).isNotNull();
assertEquals(25, readings.dataWithMax.get(0).getValue());

/** Should not display max */
assertEquals("sensor1", readings.dataNoMax.get(0).getName());
//assertThat(readings.dataNoMax.get(0).getMax()).isNull();
assertEquals(20, readings.dataNoMax.get(0).getValue());

So, their original test compared strictly against values. When they needed to test if values were present, they switched to using an isNotNull comparison. On the side with a max, this test will always pass- it can’t possibly fail, because an int can’t possibly be null. When they tried to do an isNull check, on the other value, that always failed, because again- it can’t possibly be null.

So they commented it out.

Test is green. Clearly, this code is ready to ship.

Tessie adds:

[This] is starting to explain why our git history is filled with commits that “fix failing test” by removing all the asserts.


Cory DoctorowSomeone Comes to Town, Someone Leaves Town (part 10)

Here’s part ten of my new reading of my novel Someone Comes to Town, Someone Leaves Town (you can follow all the installments, as well as the reading I did in 2008/9, here).

This is easily the weirdest novel I ever wrote. Gene Wolfe (RIP) gave me an amazing quote for it: “Someone Comes to Town, Someone Leaves Town is a glorious book, but there are hundreds of those. It is more. It is a glorious book unlike any book you’ve ever read.”

Here’s how my publisher described it when it came out:

Alan is a middle-aged entrepreneur who moves to a bohemian neighborhood of Toronto. Living next door is a young woman who reveals to him that she has wings—which grow back after each attempt to cut them off.

Alan understands. He himself has a secret or two. His father is a mountain, his mother is a washing machine, and among his brothers are sets of Russian nesting dolls.

Now two of the three dolls are on his doorstep, starving, because their innermost member has vanished. It appears that Davey, another brother who Alan and his siblings killed years ago, may have returned, bent on revenge.

Under the circumstances it seems only reasonable for Alan to join a scheme to blanket Toronto with free wireless Internet, spearheaded by a brilliant technopunk who builds miracles from scavenged parts. But Alan’s past won’t leave him alone—and Davey isn’t the only one gunning for him and his friends.

Whipsawing between the preposterous, the amazing, and the deeply felt, Cory Doctorow’s Someone Comes to Town, Someone Leaves Town is unlike any novel you have ever read.

MP3

,

CryptogramOn the Twitter Hack

Twitter was hacked this week. Not a few people's Twitter accounts, but all of Twitter. Someone compromised the entire Twitter network, probably by stealing the log-in credentials of one of Twitter's system administrators. Those are the people trusted to ensure that Twitter functions smoothly.

The hacker used that access to send tweets from a variety of popular and trusted accounts, including those of Joe Biden, Bill Gates, and Elon Musk, as part of a mundane scam -- stealing bitcoin -- but it's easy to envision more nefarious scenarios. Imagine a government using this sort of attack against another government, coordinating a series of fake tweets from hundreds of politicians and other public figures the day before a major election, to affect the outcome. Or to escalate an international dispute. Done well, it would be devastating.

Whether the hackers had access to Twitter direct messages is not known. These DMs are not end-to-end encrypted, meaning that they are unencrypted inside Twitter's network and could have been available to the hackers. Those messages -- between world leaders, industry CEOs, reporters and their sources, health organizations -- are much more valuable than bitcoin. (If I were a national-intelligence agency, I might even use a bitcoin scam to mask my real intelligence-gathering purpose.) Back in 2018, Twitter said it was exploring encrypting those messages, but it hasn't yet.

Internet communications platforms -- such as Facebook, Twitter, and YouTube -- are crucial in today's society. They're how we communicate with one another. They're how our elected leaders communicate with us. They are essential infrastructure. Yet they are run by for-profit companies with little government oversight. This is simply no longer sustainable. Twitter and companies like it are essential to our national dialogue, to our economy, and to our democracy. We need to start treating them that way, and that means both requiring them to do a better job on security and breaking them up.

In the Twitter case this week, the hacker's tactics weren't particularly sophisticated. We will almost certainly learn about security lapses at Twitter that enabled the hack, possibly including a SIM-swapping attack that targeted an employee's cellular service provider, or maybe even a bribed insider. The FBI is investigating.

This kind of attack is known as a "class break." Class breaks are endemic to computerized systems, and they're not something that we as users can defend against with better personal security. It didn't matter whether individual accounts had a complicated and hard-to-remember password, or two-factor authentication. It didn't matter whether the accounts were normally accessed via a Mac or a PC. There was literally nothing any user could do to protect against it.

Class breaks are security vulnerabilities that break not just one system, but an entire class of systems. They might exploit a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs on that system's software. Or a vulnerability in internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet. Or a single vulnerability in the Twitter network that allows an attacker to take over every account.

For Twitter users, this attack was a double whammy. Many people rely on Twitter's authentication systems to know that someone who purports to be a certain celebrity, politician, or journalist is really that person. When those accounts were hijacked, trust in that system took a beating. And then, after the attack was discovered and Twitter temporarily shut down all verified accounts, the public lost a vital source of information.

There are many security technologies companies like Twitter can implement to better protect themselves and their users; that's not the issue. The problem is economic, and fixing it requires doing two things. One is regulating these companies, and requiring them to spend more money on security. The second is reducing their monopoly power.

The security regulations for banks are complex and detailed. If a low-level banking employee were caught messing around with people's accounts, or if she mistakenly gave her log-in credentials to someone else, the bank would be severely fined. Depending on the details of the incident, senior banking executives could be held personally liable. The threat of these actions helps keep our money safe. Yes, it costs banks money; sometimes it severely cuts into their profits. But the banks have no choice.

The opposite is true for these tech giants. They get to decide what level of security you have on your accounts, and you have no say in the matter. If you are offered security and privacy options, it's because they decided you can have them. There is no regulation. There is no accountability. There isn't even any transparency. Do you know how secure your data is on Facebook, or in Apple's iCloud, or anywhere? You don't. No one except those companies does. Yet they're crucial to the country's national security. And they're the rare consumer product or service allowed to operate without significant government oversight.

For example, President Donald Trump's Twitter account wasn't hacked as Joe Biden's was, because that account has "special protections," the details of which we don't know. We also don't know what other world leaders have those protections, or the decision process surrounding who gets them. Are they manual? Can they scale? Can all verified accounts have them? Your guess is as good as mine.

In addition to security measures, the other solution is to break up the tech monopolies. Companies like Facebook and Twitter have so much power because they are so large, and they face no real competition. This is a national-security risk as well as a personal-security risk. Were there 100 different Twitter-like companies, and enough compatibility so that all their feeds could merge into one interface, this attack wouldn't have been such a big deal. More important, the risk of a similar but more politically targeted attack wouldn't be so great. If there were competition, different platforms would offer different security options, as well as different posting rules, different authentication guidelines -- different everything. Competition is how our economy works; it's how we spur innovation. Monopolies have more power to do what they want in the quest for profits, even if it harms people along the way.

This wasn't Twitter's first security problem involving trusted insiders. In 2017, on his last day of work, an employee shut down President Donald Trump's account. In 2019, two people were charged with spying for the Saudi government while they were Twitter employees.

Maybe this hack will serve as a wake-up call. But if past incidents involving Twitter and other companies are any indication, it won't. Underspending on security, and letting society pay the eventual price, is far more profitable. I don't blame the tech companies. Their corporate mandate is to make as much money as is legally possible. Fixing this requires changes in the law, not changes in the hearts of the company's leaders.

This essay previously appeared on TheAtlantic.com.

LongNowSix Ways to Think Long-term: A Cognitive Toolkit for Good Ancestors

Illustration: Tom Lee at Rocket Visual

Human beings have an astonishing evolutionary gift: agile imaginations that can shift in an instant from thinking on a scale of seconds to a scale of years or even centuries. Our minds constantly dance across multiple time horizons. One moment we can be making a quickfire response to a text and the next thinking about saving for our pensions or planting an acorn in the ground for posterity. We are experts at the temporal pirouette. Whether we are fully making use of this gift is, however, another matter.

The need to draw on our capacity to think long-term has never been more urgent, whether in areas such as public health care (like planning for the next pandemic on the horizon), to deal with technological risks (such as from AI-controlled lethal autonomous weapons), or to confront the threats of an ecological crisis where nations sit around international conference tables, bickering about their near-term interests, while the planet burns and species disappear. At the same time, businesses can barely see past the next quarterly report, we are addicted to 24/7 instant news, and find it hard to resist the Buy Now button.

What can we do to overcome the tyranny of the now? The easy answer is to say we need more long-term thinking. But here’s the problem: almost nobody really knows what it is.

In researching my latest book, The Good Ancestor: How to Think Long Term in a Short-Term World, I spoke to dozens of experts — psychologists, futurists, economists, public officials, investors — who were all convinced of the need for more long-term thinking to overcome the pathological short-termism of the modern world, but few of them could give me a clear sense of what it means, how it works, what time horizons are involved and what steps we must take to make it the norm. This intellectual vacuum amounts to nothing less than a conceptual emergency.

Let’s start with the question, ‘how long is long-term?’ Forget the corporate vision of ‘long-term’, which rarely extends beyond a decade. Instead, consider a hundred years as a minimum threshold for long-term thinking. This is the current length of a long human lifespan, taking us beyond the ego boundary of our own mortality, so we begin to imagine futures that we can influence but not participate in ourselves. Where possible we should attempt to think longer, for instance taking inspiration from cultural endeavours like the 10,000 Year Clock (the Long Now Foundation’s flagship project), which is being designed to stay accurate for ten millennia. At the very least, when you aim to think ‘long-term’, take a deep breath and think ‘a hundred years and more’.

The Tug of War for Time

It is just as crucial to equip ourselves with a mental framework that identifies different forms of long-term thinking. My own approach is represented in a graphic I call ‘The Tug of War for Time’ (see below). On one side, six drivers of short-termism threaten to drag us over the edge of civilizational breakdown. On the other, six ways to think long-term are drawing us towards a culture of longer time horizons and responsibility for the future of humankind.

[Graphic: The Tug of War for Time]

These six ways to think long are not a simplistic blueprint for a new economic or political system, but rather comprise a cognitive toolkit for challenging our obsession with the here and now. They offer conceptual scaffolding for answering what I consider to be the most important question of our time: How can we be good ancestors?

The tug of war for time is the defining struggle of our generation. It is going on both inside our own minds and in our societies. Its outcome will affect the fate of the billions upon billions of people who will inhabit the future. In other words, it matters. So let’s unpack it a little.

Drivers of Short-termism

Amongst the six drivers of short-termism, we all know about the power of digital distraction to immerse us in a here-and-now addiction of clicks, swipes and scrolls. A deeper driver has been the growing tyranny of the clock since the Middle Ages. The mechanical clock was the key machine of the Industrial Revolution, regimenting and speeding up time itself, bringing the future ever-nearer: by 01700 most clocks had minute hands and by 01800 second hands were standard. And it still dominates our daily lives, strapped to our wrists and etched onto our screens.

Speculative capitalism has been a source of boom-bust turbulence at least since the Dutch Tulip Bubble of 01637, through to the 02008 financial crash and the next one waiting around the corner. Electoral cycles also play their part, generating a myopic political presentism where politicians can barely see beyond the next poll or the latest tweet. Such short-termism is amplified by a world of networked uncertainty, where events and risks are increasingly interdependent and globalised, raising the prospect of rapid contagion effects and rendering even the near-term future almost unreadable.

Looming behind it all is our obsession with perpetual progress, especially the pursuit of endless GDP growth, which pushes the Earth system over critical thresholds of carbon emissions, biodiversity loss and other planetary boundaries. We are like a kid who believes they can keep blowing up the balloon, bigger and bigger, without any prospect that it could ever burst.

Put these six drivers together and you get a toxic cocktail of short-termism that could send us into a blind-drunk civilizational freefall. As Jared Diamond argues, ‘short-term decision making’ coupled with an absence of ‘courageous long-term thinking’ has been at the root of civilizational collapse for centuries. A stark warning, and one that prompts us to unpack the six ways to think long.

Six Ways to Think Long-term

1. Deep-Time Humility: grasp we are an eyeblink in cosmic time

Deep-time humility is about recognising that the two hundred thousand years that humankind has graced the earth is a mere eyeblink in the cosmic story. As John McPhee (who coined the concept of deep time in 01980) put it: ‘Consider the earth’s history as the old measure of the English yard, the distance from the king’s nose to the tip of his outstretched hand. One stroke of a nail file on his middle finger erases human history.’

But just as there is deep time behind us, there is also deep time ahead. In six billion years, any creatures that will be around to see our sun die will be as different from us as we are from the first single-celled bacteria.

Yet why exactly do long-term thinkers need this sense of temporal humility? Deep time prompts us to consider the consequences of our actions far beyond our own lifetimes, and puts us back in touch with the long-term cycles of the living world like the carbon cycle. But it also helps us grasp our destructive potential: in an incredibly short period of time — only a couple of centuries — we have endangered a world that took billions of years to evolve. We are just a tiny link in the great chain of living organisms, so who are we to put it all in jeopardy with our ecological blindness and deadly technologies? Don’t we have an obligation to our planetary future and the generations of humans and other species to come?

2. Legacy Mindset: be remembered well by posterity

We are the inheritors of extraordinary legacies from the past — from those who planted the first seeds, built the cities where we now live, and made the medical discoveries we benefit from. But alongside the good ancestors are the ‘bad ancestors’, such as those who bequeathed us colonial and slavery-era racism and prejudice that deeply permeate today’s criminal justice systems. This raises the question of what legacies we will leave to future generations: how do we want to be remembered by posterity?

The challenge is to leave a legacy that goes beyond egoistic legacy (like a Russian oligarch who wants a wing of an art gallery named after them) or even familial legacy (like wishing to pass on property or cultural traditions to our children). If we hope to be good ancestors, we need to develop a transcendent ‘legacy mindset’, where we aim to be remembered well by the generations we will never know, by the universal strangers of the future.

We might look for inspiration in many places. The Māori concept of whakapapa (‘genealogy’) describes a continuous lifeline that connects an individual to the past, present and future, and generates a sense of respect for the traditions of previous generations while being mindful of those yet to come. In Katie Paterson’s art project Future Library, every year for a hundred years a famous writer (the first was Margaret Atwood) is depositing a new work, which will remain unread until 02114, when they will all be printed on paper made from a thousand trees that have been planted in a forest outside Oslo. Then there are activists like Wangari Maathai, the first African woman to win the Nobel Peace Prize. In 01977 she founded the Green Belt Movement in Kenya, which by the time of her death in 02011 had trained more than 25,000 women in forestry skills and planted 40 million trees. That’s how to pass on a legacy gift to the future.

3. Intergenerational Justice: consider the seventh generation ahead

“Why should I care about future generations? What have they ever done for me?” This clever quip attributed to Groucho Marx highlights the issue of intergenerational justice. This is not the legacy question of how we will be remembered, but the moral question of what responsibilities we have to the ‘futureholders’ — the generations who will succeed us.

One approach, rooted in utilitarian philosophy, is to recognise that at least in terms of sheer numbers, the current population is easily outweighed by all those who will come after us. In a calculation made by writer Richard Fisher, around 100 billion people have lived and died in the past 50,000 years. But they, together with the 7.7 billion people currently alive, are far outweighed by the estimated 6.75 trillion people who will be born over the next 50,000 years, if this century’s birth rate is maintained (see graphic below). Even in just the next millennium, more than 135 billion people are likely to be born. How could we possibly ignore their wellbeing, and think that our own is of such greater value?

[Graphic: estimated numbers of past, present and future people]

Such thinking is embodied in the idea of ‘seventh-generation decision making’, an ethic of ecological stewardship practised amongst some Native American peoples such as the Oglala Lakota Nation in South Dakota: community decisions take into account the impacts seven generations from the present. This ideal is fast becoming a cornerstone of the growing global intergenerational justice movement, inspiring groups such as Our Children’s Trust (fighting for the legal rights of future generations in the US) and Future Design in Japan (which promotes citizens’ assemblies for city planning, where residents imagine themselves being from future generations).

4. Cathedral thinking: plan projects beyond a human lifetime

Cathedral thinking is the practice of envisaging and embarking on projects with time horizons stretching decades and even centuries into the future, just like medieval cathedral builders who began despite knowing they were unlikely to see construction finished within their own lifetimes. Greta Thunberg has said that it will take ‘cathedral thinking’ to tackle the climate crisis.

Historically, cathedral thinking has taken different forms. Apart from religious buildings, there are public works projects such as the sewers built in Victorian London after the ‘Great Stink’ of 01858, which are still in use today (we might call this ‘sewer thinking’ rather than ‘cathedral thinking’). Scientific endeavours include the Svalbard Global Seed Vault in the remote Arctic, which contains over one million seeds from more than 6,000 species, and intends to keep them safe in an indestructible rock bunker for at least a thousand years. We should also include social and political movements with long time horizons, such as the Suffragettes, who formed their first organisation in Manchester in 01867 and didn’t achieve their aim of votes for women for over half a century.

Inspiring stuff. But remember that cathedral thinking can be directed towards narrow and self-serving ends. Hitler hoped to create a Thousand Year Reich. Dictators have sought to preserve their power and privilege for their progeny through the generations: just look at North Korea. In the corporate world, Gus Levy, former head of investment bank Goldman Sachs, once proudly declared, ‘We’re greedy, but long-term greedy, not short-term greedy’.

That’s why cathedral thinking alone is not enough to create a long-term civilization that respects the interests of future generations. It needs to be guided by other approaches, such as intergenerational justice and a transcendent goal (see below).

5. Holistic Forecasting: envision multiple pathways for civilization

Numerous studies demonstrate that most forecasting professionals tend to have a poor record at predicting future events. Yet we must still try to map out the possible long-term trajectories of human civilization itself — what I call holistic forecasting — otherwise we will end up only dealing with crises as they hit us in the present. Experts in the fields of global risk studies and scenario planning have identified three broad pathways, which I call Breakdown, Reform and Transformation (see graphic below).

[Graphic: Breakdown, Reform and Transformation pathways]

Breakdown is the path of business-as-usual. We continue striving for the old twentieth-century goal of material economic progress but soon reach a point of societal and institutional collapse in the near term as we fail to respond to rampant ecological and technological crises, and cross dangerous civilizational tipping points (think Cormac McCarthy’s The Road).

A more likely trajectory is Reform, where we respond to global crises such as climate change but in an inadequate and piecemeal way that merely extends the Breakdown curve outwards, to a greater or lesser extent. Here governments put their faith in reformist ideals such as ‘green growth’, ‘reinventing capitalism’, or a belief that technological solutions are just around the corner.

A third trajectory is Transformation, where we see a radical shift in the values and institutions of society towards a more long-term sustainable civilization. For instance, we jump off the Breakdown curve onto a new pathway dominated by post-growth economic models such as Doughnut Economics or a Green New Deal.

Note the crucial line of Disruptions. These are disruptive innovations or events that offer an opportunity to switch from one curve onto another. It could be a new technology like blockchain, the rise of a political movement like Black Lives Matter, or a global pandemic like COVID-19. Successful long-term thinking requires turning these disruptions towards Transformative change and ensuring they are not captured by the old system.

6. Transcendent Goal: strive for one-planet thriving

Every society, wrote astronomer Carl Sagan, needs a ‘telos’ to guide it — ‘a long-term goal and a sacred project’. What are the options? While the goal of material progress served us well in the past, we now know too much about its collateral damage: fossil fuels and material waste have pushed us into the Anthropocene, the perilous new era characterised by a steep upward trend in damaging planetary indicators called the Great Acceleration (see graphic).

[Graphic: the Great Acceleration]
See an enlarged version of this graphic here.

An alternative transcendent goal is to see our destiny in the stars: the only way to guarantee the survival of our species is to escape the confines of Earth and colonise other worlds. Yet terraforming somewhere like Mars to make it habitable could take centuries — if it could be done at all. Additionally, the more we set our sights on escaping to other worlds, the less likely we are to look after our existing one. As cosmologist Martin Rees points out, ‘It’s a dangerous delusion to think that space offers an escape from Earth’s problems. We’ve got to solve these problems here.’

That’s why our primary goal should be to learn to live within the biocapacity of the only planet we know that sustains life. This is the fundamental principle of the field of ecological economics developed by visionary thinkers such as Herman Daly: don’t use more resources than the earth can naturally regenerate (for instance, only harvest timber as fast as it can grow back), and don’t create more wastes than it can naturally absorb (so avoid burning fossil fuels that can’t be absorbed by the oceans and other carbon sinks).

Once we’ve learned to do this, we can do as much terraforming of Mars as we like: as any mountaineer knows, make sure your basecamp is in order with ample supplies before you tackle a risky summit. But according to the Global Footprint Network, we are not even close and currently use up the equivalent of around 1.6 planet Earths each year. That’s short-termism of the most deadly kind. A transcendent goal of one-planet thriving is our best guarantee of a long-term future. And we do it by caring about place as much as rethinking time.

Bring on the Time Rebellion

So there is a brief overview of a cognitive toolkit we could draw on to survive and thrive into the centuries and millennia to come. None of these six ways is enough alone to create a long-term revolution of the human mind — a fundamental shift in our perception of time. But together — and when practised by a critical mass of people and organisations — a new age of long-term thinking could emerge out of their synergy.

Is this a likely prospect? Can we win the tug of war against short-termism?

‘Only a crisis — actual or perceived — produces real change,’ wrote economist Milton Friedman. Out of the ashes of World War Two came pioneering long-term institutions such as the World Health Organisation, the European Union and welfare states. So too out of the global crisis of COVID-19 could emerge the long-term institutions we need to tackle the challenges of our own time: climate change, technology threats, the racism and inequality structured into our political and economic systems. Now is the moment for expanding our time horizons into a longer now. Now is the moment to become a time rebel.


Roman Krznaric is a public philosopher, research fellow of the Long Now Foundation, and founder of the world’s first Empathy Museum. His latest book is The Good Ancestor: How to Think Long Term in a Short-Term World. He lives in Oxford, UK. @romankrznaric

Note: All graphics from The Good Ancestor: How to Think Long Term in a Short-Term World by Roman Krznaric. Graphic design by Nigel Hawtin. Licensed under CC BY-NC-ND.

Worse Than FailureMega-Agile

A long time ago, way back in 2009, Bruce W worked for the Mega-Bureaucracy. It was a slog of endless forms, endless meetings, endless projects that just never hit a final ship date. The Mega-Bureaucracy felt that the organization which manages best manages the most, and ensured that there were six tons of management overhead attached to the smallest project.

After eight years in that position, Bruce finally left for another division in the same company.

But during those eight years, Bruce learned a few things about dealing with the Mega-Bureaucracy. His division was a small division, and while Bruce needed to interface with the Mega-Bureaucracy, he could shield the other developers on his team from it, as much as possible. This let them get embedded into the business unit, working closely with the end users, revising requirements on the fly based on rapid feedback and a quick release cycle. It was, in a word, "Agile", in the most realistic version of the term: focus on delivering value to your users, and build processes which support that. They were a small team, and there were many layers of management above them, which served to blunt and filter some of the mandates of the Mega-Bureaucracy, and that let them stay Agile.

Nothing, however, protects against management excess like a track record of success. They had a reputation for being dangerous heretics: they released to test continuously and to production once a month, and they changed requirements as needs changed, meaning what they delivered was almost never what they specced, but it was what their users needed. Worst of all, their software beat all the key Mega-Bureaucracy metrics: it performed better, it had fewer reported defects, and its return-on-investment figures showed it saved the division millions of dollars in operating costs.

The Mega-Bureaucracy seethed at these heretics, but the C-level of the company just saw a high functioning team. There was nothing that the Bureaucracy could do to bring them in line-

-at least until someone opened up a trade magazine, skimmed the buzzwords, and said, "Maybe our processes are too cumbersome. We should do Agile. Company wide, let's lay out an Agile Process."

There's a huge difference between the "agile" created by a self-organizing team, that grows based on learning what works best for the team and their users, and the kind of "agile" that's imposed from the corporate overlords.

First, you couldn't do Agile without adopting the Agile Process, which in Mega-Bureaucracy-speak meant "we're doing a very specific flavor of scrum". This meant morning standups were mandated. You needed a scrum-master on the team, which would be a resource drawn from the project management office, and well, they'd also pull double duty as the project manager. The word "requirements" was forbidden, you had to write User Stories, and then estimate those User Stories as taking a certain number of hours. Then you could hold your Sprint Planning meeting, where you gathered a bucket of stories that would fit within your next sprint, which would be a 4-week cadence, but that was just the sprint planning cadence. Releases to production would happen only quarterly. Once user stories were written, they were never to be changed, just potentially replaced with a new story, but once a story was added to a sprint, you were expected to implement it, as written. No changes based on user feedback. At the end of the sprint, you'd have a whopping big sprint retrospective, and since this was a new process, instead of letting the team self-evaluate in private and make adjustments, management from all levels of the company would sit in on the retrospectives to be "informed" about the "challenges" in adopting the new process.

The resulting changes pleased nearly no one. The developers hated it, the users, especially in Bruce's division, hated it, management hated it. But the Mega-Bureaucracy had won; the dangerous heretics who didn't follow the process now were following the process. They were Agile.

That is what motivated Bruce to transfer to a new position.

Two years later, he attended an all-IT webcast. The CIO announced that they'd spun up a new pilot development team. This new team would get embedded into the business unit, work closely with the end user, revise requirements on the fly based on rapid feedback and a continuous release cycle. "This is something brand new for our company, and we're excited to see where it goes!"


,

Sam VargheseThe Indian Government cheated my late father of Rs 332,775

Back in 1976, the Indian Government, for whom my father, Ipe Samuel Varghese, worked in Colombo, cheated him of Rs 13,500 – the gratuity that he was supposed to be paid when he was dismissed from the Indian High Commission (the equivalent of the embassy) in Colombo.

That sum, adjusted for inflation, works out to Rs 332,775 in today’s rupees.

But he was not paid this amount because the embassy said he had contravened rules by working at a second job, something which everyone at the embassy was doing, because what people were paid was basically a starvation wage. But my father had rubbed against powerful interests in the embassy who were making money by taking bribes from poor Sri Lankan Tamils who were applying for Indian passports to return to India.

But let me start at the beginning. My father went to Sri Lanka (then known as Ceylon) in 1947, looking for employment after the war. He took up a job as a teacher, something which was his first love. But in 1956, when the Sri Lankan Government nationalised the teaching profession, he was left without a job.

It was then that he began working for the Indian High Commission which was located in Colpetty, and later in Fort. As he was a local recruit, he was not given diplomatic status. The one benefit was that our family did not need visas to stay in Sri Lanka – we were all Indian citizens – but only needed to obtain passports once we reached the age of 14.

As my father had six children, the pay from the High Commission was not enough to provide for the household. He would tutor some students, either at our house, or else at their houses. He was very strict about his work, and was unwilling to compromise on any rules.

There were numerous people who worked alongside him and they would occasionally take a bribe from here and there and push the case of some person or the other for a passport. The Tamils, who had gone to Sri Lanka to work on the tea plantations, were being repatriated under a pact negotiated by Sirima Bandaranaike, the Sri Lankan prime minister, and Lal Bahadur Shastri, her Indian counterpart. It was thus known as the Sirima-Shastri pact.

There was a lot of anti-Tamil sentiment brewing in Sri Lanka at the time, feelings that blew up into the civil war from 1983 onwards, a conflict that only ended in May 2009. Thus, many Tamils were anxious and wanted to do whatever it took to get an Indian passport.

And in this, they found many High Commission employees more than willing to accept bribes in order to push their cases. But they came up against a brick wall in my father. There was another gentleman who was an impediment too, a man named Navamoni. The others used to call him Koranga Moonji Dorai – meaning monkey face man – as he was a wizened old man. He would lose his temper and shout at people when they tried to mollify him with this or that to push their cases.

Thus, it was only a matter of time before some of my father’s colleagues went to the higher-ups and complained that he was earning money outside his High Commission job. They all were as well, but nobody had rubbed up against the powers-that-be. By then, due to his competence, my father had been put in charge of the passport section, a very powerful post, because he could approve or turn down any application.

The men who wanted to make money through bribes found him a terrible obstacle. One day in May, when my mother called up the High Commission, she was told that my father no longer worked there. Shocked, she waited until he came home to find out the truth. We had no telephone at home.

The family was not in the best financial position at the time. We had a few weeks to return to India as we had been staying in Sri Lanka on the strength of my father’s employment. And then came the biggest shock: the money my father had worked for all those 20 years was denied to him.

We came back to India by train and ferry; we could not afford to fly back. It was a miserable journey and for many years after that we suffered financial hardship because we had no money to tide us over that period.

Many years later, after I migrated to Australia, I went to the Indian Consulate in Coburg, a suburb of Melbourne, to get a new passport. I happened to speak to the consul and asked him what I should do with my old passport. He made my blood boil by telling me that it was my patriotic duty to send it by post to the Indian embassy in Canberra. I told him that I owed India nothing considering the manner in which it had treated my father. And I added that if the Indian authorities wanted my old passport, then they could damn well pay for the postage. He was not happy with my reply.

India is the only country in the world which will not restore a person’s citizenship if he asks for it in his later years for sentimental reasons, just so that he can die in the land of his birth. India is also the only country that insists its own former citizens obtain a visa to enter what is their own homeland. Money is not the only thing for the Indian Government; it is everything.

Every other country will restore a person’s citizenship in their later years if they ask for it for sentimental reasons. Not India.

,

Kevin RuddInterview: UN Youth Australia

INTERVIEW VIDEO
UN YOUTH AUSTRALIA
PUBLISHED 18 JULY 2020

The post Interview: UN Youth Australia appeared first on Kevin Rudd.

Kevin RuddSMH: Stimulus Opportunity Knocks for Climate Action

By Kevin Rudd and Patrick Suckling

As the International Monetary Fund recently underlined in sharply revising down global growth prospects, recovering from the biggest peacetime shock to the global economy since the Great Depression will be a long haul.

There is a global imperative to put in place the strongest, most durable economic recovery. This is not a time for governments to retreat. Recovery will require massive and sustained support.

At the same time, spending decisions by governments now will shape our economic future for decades to come. In other words, we have a once-in-a-generation opportunity and can’t blow it.

But it’s looking like we might. This is because too few stimulus packages globally are reaping the double-dividend of both investing in growth and jobs, and in the transition to low emissions, more climate-resilient economies. And in Australia, this means we risk lagging even further behind the rest of the world as a result.

As Australia’s summer of hell demonstrated, climate change is only getting worse. It remains the greatest threat to our future welfare and economic prosperity. And while the world has legitimately been preoccupied with COVID-19, few have noticed that this year is on track to be the warmest in recorded history. Perhaps even fewer still have also made the connection between climate and biodiversity habitat loss and the outbreak of infectious diseases.

Stimulus decisions that do not address this climate threat therefore don’t just sell us short; they sell us out. And they cut against the grain of the global economy. This is the irreducible logic that flows from the 2015 Paris Agreement, to which the world – including Australia – signed up.

Unfortunately, as things stand today, many of the biggest stimulus efforts around the world are in danger of failing this logic.

For example, the US economic recovery is heavily focused on high emitting industries. The same is true in China, India, Japan and the large South-East Asian economies.

In fact, Beijing is approving plans for new coal-fired power plants at the fastest rate since 2015. And whether these plants are now actually built by China’s regional and provincial governments is increasingly becoming the global bellwether for whether we will emerge from this crisis better or worse off in the global fight against climate change.

And for our own part, Australia’s COVID Recovery Commission has placed limited emphasis on renewables despite advances in energy storage technologies and plummeting costs.

But as we know from our experience in the Global Financial Crisis a decade ago, it is entirely possible to design an economic recovery that is also good for the planet. This means investing in clean energy, energy efficiency systems, new transport systems, more sustainable homes and buildings and improved agricultural production, water and waste management. In fact, as McKinsey recently found, government spending on renewable energy technologies creates five more jobs per million dollars than spending on fossil fuels.

Despite these cautionary tales, there are thankfully also bright spots.

Take the European Union and its massive 750 billion Euro stimulus package. It will invest heavily in areas like energy efficiency, turbocharging renewable energy, accelerating hydrogen technologies, rolling out clean transport and promoting the circular economy.

To be fair, China is also emphasising new infrastructure like electric transport in its US$500 billion stimulus package. India is doubling down on its world-leading renewable energy investments. Indonesia has announced a major investment in solar energy. And Japan and South Korea are now announcing climate transition spending. But whether these are just bright spots amongst a dark haze of pollution, or genuinely light the way is the key question that confronts these economies.

In Australia, the government has confirmed significant investment in pumped hydro-power for “Snowy 2.0.” The government has also indicated acceleration of important projects such as the Marinus Link to ensure more renewable energy from Tasmania for the mainland. But much more is now needed.

An obvious starting point could be a nation-building stimulus investment around our decrepit energy system. By now the federal and state governments have a much stronger grasp of what we need for success, as is encouragingly evident in the recent $2 billion Commonwealth-NSW government package for better access, security and affordability.

Turbocharging this with a stimulus package for more renewable energy and storage of all sorts (including hydrogen), accompanying extension and stabilisation technologies for our electricity grid, and investment in dramatically improving energy efficiency would – literally and figuratively – power our economy forward.

In the aftermath of our drought and bushfires, another obvious area for nation-building investment is our land sector. Farm productivity can be dramatically improved by precision agriculture and regenerative farming technologies while building resilience to drought. New sources of revenue for farmers can be created through soil carbon and forest carbon farming – with carbon trading from these activities internationally set to be worth hundreds of billions of dollars over the coming decade.

Importantly, the Australian business community is not just calling for policy certainty, but actively ushering in change themselves. The Australian Industry Group, for instance, has called for a stronger climate-focused recovery. And in recent days, our largest private energy generator, AGL, has announced a significant strengthening of its commitment to climate transition, linking performance pay to progress in the company’s goal of achieving net-zero emissions by 2050 – a goal that BHP, Qantas and every Australian state and territory have signed up to. HESTA has also announced it will be the first major Australian superannuation fund to align its investment portfolio to this end as well.

These sorts of decisions are being replicated at a growing rate by companies around the world and show that business is leading; increasingly it is time for governments – including our own – to do the same. Whether we can use this crisis as an opportunity to emerge in a better place to tackle other global challenges remains to be seen, but rests on many of the decisions that will continue to be taken in the months to come.

Kevin Rudd is a former Prime Minister of Australia and now President of the Asia Society Policy Institute in New York. 

Patrick Suckling is a Senior Fellow at the Asia Society Policy Institute; Senior Partner at Pollination (pollinationgroup.com), a specialist climate investment and advisory firm; and was Australia’s Ambassador for the Environment.

 

First published in the Sydney Morning Herald and The Age on 18 February 2020.

The post SMH: Stimulus Opportunity Knocks for Climate Action appeared first on Kevin Rudd.

,

CryptogramFriday Squid Blogging: Squid Found on Provincetown Sandbar

Headline: "Dozens of squid found on Provincetown sandbar." Slow news day.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramTwitter Hackers May Have Bribed an Insider

Motherboard is reporting that this week's Twitter hack involved a bribed insider. Twitter has denied it.

I have been taking press calls all day about this. And while I know everyone wants to speculate about the details of the hack, we just don't know -- and probably won't for a couple of weeks.

Dave HallIf You’re not Using YAML for CloudFormation Templates, You’re Doing it Wrong

In my last blog post, I promised a rant about using YAML for CloudFormation templates. Here it is. If you persevere to the end I’ll also show you how to convert your existing JSON based templates to YAML.

Many of the points I raise below don’t just apply to CloudFormation. They are general comments about why you should use YAML over JSON for configuration when you have a choice.

One criticism of YAML is its reliance on indentation. A lot of the code I write these days is Python, so indentation being significant is normal. Use a decent editor or IDE and this isn’t a problem. It doesn’t matter if you’re using JSON or YAML, you will want to validate and lint your files anyway. How else will you find that trailing comma in your JSON object?
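
Any validator will do; as one sketch (assuming the AWS CLI and the separately installed cfn-lint tool are available), the following catches both syntax errors and bad resource properties:

# Ask CloudFormation itself whether the template parses
aws cloudformation validate-template --template-body file://template.yaml

# Lint for invalid resource properties and common mistakes
cfn-lint template.yaml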

Now we’ve got that out of the way, let me try to convince you to use YAML.

As developers we are regularly told that we need to document our code. CloudFormation is Infrastructure as Code. If it is code, then we need to document it. That starts with the Description property at the top of the file. If you use JSON for your templates, that’s it, you have no other opportunity to document your templates. On the other hand, if you use YAML you can add inline comments. Anywhere you need a comment, drop in a hash # and your comment. Your team mates will thank you.
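
For example, a commented resource might look like this (a hypothetical S3 bucket, purely to show the comments):

Resources:
  AssetsBucket:    # logical ID referenced elsewhere in the template
    Type: AWS::S3::Bucket
    Properties:
      # One bucket per stack keeps environments isolated
      BucketName: !Sub "${AWS::StackName}-assets"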

JSON templates don’t support multiline strings. These days many developers have 4K or ultrawide monitors, but we don’t want a string that spans the full width of a 34” screen. Text becomes harder to read once you exceed that “90ish” character limit. With JSON your multiline string becomes "[90ish-characters]\n[another-90ish-characters]\n[and-so-on]". If you opt for YAML, you can use the greater than symbol (>) and then start your multiline string like so:

Description: >
  This is the first line of my Description
  and it continues on my second line
  and I'll finish it on my third line.

As you can see, it is much easier to work with multiline strings in YAML than in JSON.

“Folded blocks” like the one above are created using the greater than symbol (>); they replace new lines with spaces. This allows you to format your text in a more readable way while still letting a machine use it as intended. If you want to preserve the new lines, use the pipe (|) to create a “literal block”. This is great for inline Lambda functions, where the code remains readable and maintainable.

  APIFunction:
    Type: AWS::Lambda::Function
    Properties:
      Code:
        ZipFile: |
          import json
          import random


          def lambda_handler(event, context):
              return {"statusCode": 200, "body": json.dumps({"value": random.random()})}
      FunctionName: "GetRandom"
      Handler: "index.lambda_handler"
      MemorySize: 128
      Role: !GetAtt LambdaServiceRole.Arn
      Runtime: "python3.7"
      Timeout: 5

Both JSON and YAML require you to escape multibyte characters. That’s less of an issue with CloudFormation templates as generally you’re only using the ASCII character set.

In a YAML file you generally don’t need to quote your strings, but in JSON double quotes are used everywhere: keys, string values and so on. If your string contains a quote you need to escape it. The same goes for tabs, new lines, backslashes and so on. JSON based CloudFormation templates can be hard to read because of all the escaping. It also makes it harder to handcraft your JSON when your code is a long escaped string on a single line.
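
As a quick illustration, take a made-up description value that contains quotes and backslashes. In JSON every one of them has to be escaped:

  "Description": "Says \"hello\" and writes the log to C:\\temp\\cfn.log"

The same value as a plain, unquoted YAML scalar needs no escaping at all:

  Description: Says "hello" and writes the log to C:\temp\cfn.log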

Some configuration in CloudFormation can only be expressed as JSON. Step Functions and some of the AppSync objects in CloudFormation only allow inline JSON configuration. You can still use a YAML template, and working with these objects is easier if you do.

The JSON-only configuration needs to be inlined in your template. If you’re using JSON you have to supply this as an escaped string, rather than nested objects. If you’re using YAML you can inline it as a literal block. Both YAML and JSON templates support functions such as Sub being applied to these strings, but it is so much more readable with YAML. See this Step Function example lifted from the AWS documentation:

MyStateMachine:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    DefinitionString:
      !Sub |
        {
          "Comment": "A simple AWS Step Functions state machine that automates a call center support session.",
          "StartAt": "Open Case",
          "States": {
            "Open Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:open_case",
              "Next": "Assign Case"
            }, 
            "Assign Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:assign_case",
              "Next": "Work on Case"
            },
            "Work on Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:work_on_case",
              "Next": "Is Case Resolved"
            },
            "Is Case Resolved": {
                "Type" : "Choice",
                "Choices": [ 
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 1,
                    "Next": "Close Case"
                  },
                  {
                    "Variable": "$.Status",
                    "NumericEquals": 0,
                    "Next": "Escalate Case"
                  }
              ]
            },
             "Close Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:close_case",
              "End": true
            },
            "Escalate Case": {
              "Type": "Task",
              "Resource": "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:escalate_case",
              "Next": "Fail"
            },
            "Fail": {
              "Type": "Fail",
              "Cause": "Engage Tier 2 Support."    }   
          }
        }

If you’re feeling lazy you can use inline JSON for IAM policies that you’ve copied from elsewhere. It’s quicker than converting them to YAML.
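
And because JSON syntax is also valid YAML, a pasted policy document can be dropped into a YAML template unchanged. Here is a hypothetical, trimmed-down role for the Lambda example above:

  LambdaServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument: {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
          }
        ]
      }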

YAML templates are smaller and more compact than the same configuration stored in a JSON-based template. Smaller yet more readable is a win all round in my book.

If you’re still not convinced that you should use YAML for your CloudFormation templates, go read Amazon’s blog post from 2017 advocating the use of YAML-based templates.

Amazon makes it easy to convert your existing templates from JSON to YAML. cfn-flip is a Python-based AWS Labs tool for converting CloudFormation templates between JSON and YAML. Once you’ve installed cfn-flip, converting your templates with some automated cleanups is just a command away:

cfn-flip --clean template.json template.yaml

git rm the old JSON file, git add the new one, then git commit and git push your changes. Now you’re all set for your new life using YAML-based CloudFormation templates.

If you want to learn more about YAML files in general, I recommend you check out Learn X in Y Minutes’ Guide to YAML. If you want to learn more about YAML-based CloudFormation templates, check out Amazon’s Guide to CloudFormation Templates.

LongNowLong Now partners with Avenues: The World School for year-long, online program on the future of invention

“The best way to predict the future is to invent it.” – Alan Kay

The Long Now Foundation has partnered with Avenues: The World School to offer a program on the past, present, and future of innovation. A fully online program for ages 17 and above, the Avenues Mastery Year is designed to equip aspiring inventors with the ability to: 

  • Conceive original ideas and translate those ideas into inventions through design and prototyping, 
  • Communicate the impact of the invention with an effective pitch deck and business plan, 
  • Ultimately file for and receive patent pending status with the United States Patent and Trademark Office. 

Applicants select a concentration in either Making and Design or Future Sustainability.

Participants will hack, reverse engineer, and re-invent a series of world-changing technologies such as the smartphone, bioplastics, and the photovoltaic cell, all while immersing themselves in curated readings about the origins and possible trajectories of great inventions. 

The Long Now Foundation will host monthly fireside chats for participants where special guests offer feedback, spark new ideas and insights, and share advice and wisdom. Confirmed guests include Kim Polese (Long Now Board Member), Alexander Rose (Long Now Executive Director and Board Member), Samo Burja (Long Now Research Fellow), Jason Crawford (Roots of Progress), and Nick Pinkston (Volition). Additional guests from the Long Now Board and community are being finalized over the coming weeks.

The goal of Avenues Mastery Year is to equip aspiring inventors with the technical skills and long-term perspective needed to envision and invent the future. Visit Avenues Mastery Year to learn more, or get in touch directly by writing to ama@avenues.org.

Worse Than FailureError'd: Not Applicable

"Why yes, I have always pictured myself as not applicable," Olivia T. wrote.

 

"Hey Amazon, now I'm no doctor, but you may need to reconsider your 'Choice' of Acetaminophen as a 'stool softener'," writes Peter.

 

Ivan K. wrote, "Initially, I balked at the price of my new broadband plan, but the speed is just so good that sometimes it's so fast that the reply packets arrive before I even send a request!"

 

"I wanted to check if a site was being slow and, well, I figured it was good time to go read a book," Tero P. writes.

 

Robin L. writes, "I just can't wait to try Edge!"

 

"Yeah, one car stays in the garage, the other is out there tailgating Starman," Keith wrote.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

,

Krebs on SecurityWho’s Behind Wednesday’s Epic Twitter Hack?

Twitter was thrown into chaos on Wednesday after accounts for some of the world’s most recognizable public figures, executives and celebrities started tweeting out links to bitcoin scams. Twitter says the attack happened because someone tricked or coerced an employee into providing access to internal Twitter administrative tools. This post is an attempt to lay out some of the timeline of the attack, and point to clues about who may have been behind it.

The first public signs of the intrusion came around 3 PM EDT, when the Twitter account for the cryptocurrency exchange Binance tweeted a message saying it had partnered with “CryptoForHealth” to give back 5000 bitcoin to the community, with a link where people could donate or send money.

Minutes after that, similar tweets went out from the accounts of other cryptocurrency exchanges, and from the Twitter accounts for Democratic presidential candidate Joe Biden, Amazon CEO Jeff Bezos, former President Barack Obama, Tesla CEO Elon Musk, former New York Mayor Michael Bloomberg and investment mogul Warren Buffett.

While it may sound ridiculous that anyone would be fooled into sending bitcoin in response to these tweets, an analysis of the BTC wallet promoted by many of the hacked Twitter profiles shows that over the past 24 hours the account has processed 383 transactions and received almost 13 bitcoin — or approximately USD $117,000.

Twitter issued a statement saying it detected “a coordinated social engineering attack by people who successfully targeted some of our employees with access to internal systems and tools. We know they used this access to take control of many highly-visible (including verified) accounts and Tweet on their behalf. We’re looking into what other malicious activity they may have conducted or information they may have accessed and will share more here as we have it.”

There are strong indications that this attack was perpetrated by individuals who’ve traditionally specialized in hijacking social media accounts via “SIM swapping,” an increasingly rampant form of crime that involves bribing, hacking or coercing employees at mobile phone and social media companies into providing access to a target’s account.

People within the SIM swapping community are obsessed with hijacking so-called “OG” social media accounts. Short for “original gangster,” OG accounts typically are those with short profile names (such as @B or @joe). Possession of these OG accounts confers a measure of status and perceived influence and wealth in SIM swapping circles, as such accounts can often fetch thousands of dollars when resold in the underground.

In the days leading up to Wednesday’s attack on Twitter, there were signs that some actors in the SIM swapping community were selling the ability to change an email address tied to any Twitter account. In a post on OGusers — a forum dedicated to account hijacking — a user named “Chaewon” advertised they could change the email address tied to any Twitter account for $250, and provide direct access to accounts for between $2,000 and $3,000 apiece.

The OGUsers forum user “Chaewon” taking requests to modify the email address tied to any Twitter account.

“This is NOT a method, you will be given a full refund if for any reason you aren’t given the email/@, however if it is revered/suspended I will not be held accountable,” Chaewon wrote in their sales thread, which was titled “Pulling email for any Twitter/Taking Requests.”

Hours before any of the Twitter accounts for cryptocurrency platforms or public figures began blasting out bitcoin scams on Wednesday, the attackers appear to have focused their attention on hijacking a handful of OG accounts, including “@6.”

That Twitter account was formerly owned by Adrian Lamo — the now-deceased “homeless hacker” perhaps best known for breaking into the New York Times’s network and for reporting Chelsea Manning‘s theft of classified documents. @6 is now controlled by Lamo’s longtime friend, a security researcher and phone phreaker who asked to be identified in this story only by his Twitter nickname, “Lucky225.”

Lucky225 said that just before 2 p.m. EDT on Wednesday, he received a password reset confirmation code via Google Voice for the @6 Twitter account. Lucky said he’d previously disabled SMS notifications as a means of receiving multi-factor codes from Twitter, opting instead to have one-time codes generated by a mobile authentication app.

But because the attackers were able to change the email address tied to the @6 account and disable multi-factor authentication, the one-time authentication code was sent to both his Google Voice account and to the new email address added by the attackers.

“The way the attack worked was that within Twitter’s admin tools, apparently you can update the email address of any Twitter user, and it does this without sending any kind of notification to the user,” Lucky told KrebsOnSecurity. “So [the attackers] could avoid detection by updating the email address on the account first, and then turning off 2FA.”

Lucky said he hasn’t been able to review whether any tweets were sent from his account during the time it was hijacked because he still doesn’t have access to it (he has put together a breakdown of the entire episode at this Medium post).

But around the same time @6 was hijacked, another OG account – @B — was swiped. Someone then began tweeting out pictures of Twitter’s internal tools panel showing the @B account.

A screenshot of the hijacked OG Twitter account “@B,” shows the hijackers logged in to Twitter’s internal account tools interface.

Twitter responded by removing any tweets across its platform that included screenshots of its internal tools, and in some cases temporarily suspended the ability of those accounts to tweet further.

Another Twitter account — @shinji — also was tweeting out screenshots of Twitter’s internal tools. Minutes before Twitter terminated the @shinji account, it was seen publishing a tweet saying “follow @6,” referring to the account hijacked from Lucky225.

The account “@shinji” tweeting a screenshot of Twitter’s internal tools interface.

Cached copies of @Shinji’s tweets prior to Wednesday’s attack on Twitter are available here and here from the Internet Archive. Those caches show Shinji claims ownership of two OG accounts on Instagram — “j0e” and “dead.”

KrebsOnSecurity heard from a source who works in security at one of the largest U.S.-based mobile carriers, who said the “j0e” and “dead” Instagram accounts are tied to a notorious SIM swapper who goes by the nickname “PlugWalkJoe.” Investigators have been tracking PlugWalkJoe because he is thought to have been involved in multiple SIM swapping attacks over the years that preceded high-dollar bitcoin heists.

Archived copies of the @Shinji account on Twitter show one of Joe’s OG Instagram accounts, “Dead.”

Now look at the profile image in the other Archive.org index of the @shinji Twitter account (pictured below). It is the same image as the one included in the @Shinji screenshot above from Wednesday in which Joseph/@Shinji was tweeting out pictures of Twitter’s internal tools.

Image: Archive.org

This individual, the source said, was a key participant in a group of SIM swappers that adopted the nickname “ChucklingSquad,” and was thought to be behind the hijacking of Twitter CEO Jack Dorsey‘s Twitter account last year. As Wired.com recounted, @jack was hijacked after the attackers conducted a SIM swap attack against AT&T, the mobile provider for the phone number tied to Dorsey’s Twitter account.

A tweet sent out from Twitter CEO Jack Dorsey’s account while it was hijacked shouted out to PlugWalkJoe and other Chuckling Squad members.

The mobile industry security source told KrebsOnSecurity that PlugWalkJoe in real life is a 21-year-old from Liverpool, U.K. named Joseph James O’Connor. The source said PlugWalkJoe is in Spain where he was attending a university until earlier this year. He added that PlugWalkJoe has been unable to return home on account of travel restrictions due to the COVID-19 pandemic.

The mobile industry source said PlugWalkJoe was the subject of an investigation in which a female investigator was hired to strike up a conversation with PlugWalkJoe and convince him to agree to a video chat. The source further explained that a video which they recorded of that chat showed a distinctive swimming pool in the background.

According to that same source, the pool pictured on PlugWalkJoe’s Instagram account (instagram.com/j0e) is the same one they saw in their video chat with him.

If PlugWalkJoe was in fact pivotal to this Twitter compromise, it’s perhaps fitting that he was identified in part via social engineering. Maybe we should all be grateful the perpetrators of this attack on Twitter did not set their sights on more ambitious aims, such as disrupting an election or the stock market, or attempting to start a war by issuing false, inflammatory tweets from world leaders.

Also, it seems clear that this Twitter hack could have let the attackers view the direct messages of anyone on Twitter, information that is difficult to put a price on but which nevertheless would be of great interest to a variety of parties, from nation states to corporate spies and blackmailers.

This is a fast-moving story. There were multiple people involved in the Twitter heist. Please stay tuned for further updates. KrebsOnSecurity would like to thank Unit 221B for their assistance in connecting some of the dots in this story.

Worse Than FailureCodeSOD: Because of the Implication

Even when you’re using TypeScript, you’re still bound by JavaScript’s type system. You’re also stuck with its object system, which means that each object is really just a dict, and there’s no guarantee that any object has any given key at runtime.

Madison sends us some TypeScript code that is, perhaps not strictly bad, in and of itself, though it certainly contains some badness. It is more of a symptom. It implies a WTF.

    private _filterEmptyValues(value: any): any {
        const filteredValue = {};
        Object.keys(value)
            .filter(key => {
                const v = value[key];

                if (v === null) {
                    return false;
                }
                if (v.von !== undefined || v.bis !== undefined) {
                    return (v.von !== null && v.von !== 'undefined' && v.von !== '') ||
                        (v.bis !== null && v.bis !== 'undefined' && v.bis !== '');
                }
                return (v !== 'undefined' && v !== '');

            }).forEach(key => {
            filteredValue[key] = value[key];
        });
        return filteredValue;
    }

At a guess, this code is meant to be used as part of prepping objects for being part of a request: clean out unused keys before sending or storing them. And as a core methodology, it’s not wrong, and it’s pretty similar to your standard StackOverflow solution to the problem. It’s just… forcing me to ask some questions.

Let’s trace through it. We start by doing an Object.keys to get all the fields on the object. We then filter to remove the “empty” ones.

First, if the value is null, that’s empty. That makes sense.

Then, if the value is an object which contains a von or bis property, we’ll do some more checks. This is a weird definition of “empty”, but fine. We’ll check that at least one of them is non-null, not an empty string, and not… 'undefined'.

Uh oh.

We then do a similar check on the value itself, to ensure it’s not an empty string, and not 'undefined'.

What this is telling me is that somewhere in processing, sometimes, the actual string “undefined” can be stored, and it’s meant to be treated as JavaScript’s type undefined. That probably shouldn’t be happening, and implies a WTF somewhere else.
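
As a guess at the upstream culprit: the usual way the literal string “undefined” ends up in data like this is some earlier code stringifying a value that was never set. A minimal hypothetical sketch (none of this is from Madison’s codebase):

// hypothetical upstream form-handling code
const form: { von?: string } = {};
const filters: { [key: string]: string } = {};

filters['von'] = `${form.von}`;    // template literal turns undefined into the string "undefined"
filters['bis'] = String(form.von); // String() does the same

console.log(filters);              // { von: 'undefined', bis: 'undefined' }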

Similarly, the von and bis check has to raise a few eyebrows. If an object contains these fields, at least one of them must contain a value to pass this check. Why? I have no idea.

In the end, this code isn’t the WTF itself, it’s all the questions that it raises that tell me the shape of the WTF. It’s like looking at a black hole: I can’t see the object itself, I can only see the effect it has on the space around it.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

MEWindows 10 on Debian under KVM

Here are some things that you need to do to get Windows 10 running on a Debian host under KVM.

UEFI Booting

UEFI is big and complex, but most of what it does isn’t needed at all. If all you want to do is boot from an image of a disk with a GPT partition table then you just install the package ovmf and add something like the following to your KVM start script:

UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"

Note that some of the documentation on this doesn’t have the OVMF_VARS.fd file set to readonly. Allowing writes to that file means that the VM boot process (and possibly the OS later on) can change EFI variables that affect later boots and other VMs if they all share the same file. For a basic boot you don’t need to change variables, so you want it read-only. Having it read-only is also necessary if you want to run KVM as non-root.
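
If a VM does need to write EFI variables (for example to keep its own boot entries across reboots), the usual workaround is to give each VM a private, writable copy of the variables file while the code file stays shared and read-only. Here is a minimal sketch, assuming the same start-script style as above and a made-up /vmstore path:

# one-off setup: give this VM its own writable copy of the EFI variables
cp /usr/share/OVMF/OVMF_VARS.fd /vmstore/windows10_VARS.fd

# OVMF_CODE.fd stays shared and read-only, the VARS copy is per-VM and writable
UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,file=/vmstore/windows10_VARS.fd"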

As an experiment I tried booting without the OVMF_VARS.fd file; it didn’t boot, and even after configuring it to use the OVMF_VARS.fd file again, Windows gave a boot error about the “boot configuration data file” that required booting from recovery media. Apparently configuration mistakes with EFI can mess up the Windows installation, so be careful and back up the Windows installation regularly!

Linux can boot from EFI but you generally don’t want to unless the boot device is larger than 2TB. It’s relatively easy to convert a Linux installation on a GPT disk to a virtual image on a DOS partition table disk or on block devices without partition tables, and that gives a faster boot. If the same person runs the host hardware and the VMs then the best choice for Linux is to have no partition tables, just one filesystem per block device (which makes resizing much easier), and have the kernel passed as a parameter to kvm. So booting a VM from EFI is probably only useful for booting Windows VMs and for Linux boot loader development and testing.

As an aside, the Debian Wiki page about Secure Boot on a VM [4] was useful for this. It’s unfortunate that it and so much of the documentation about UEFI is about secure boot which isn’t so useful if you just want to boot a system without regard to the secure boot features.

Emulated IDE Disks

Debian kernels (and probably kernels from many other distributions) are compiled with the paravirtualised storage device drivers. Windows by default doesn’t support such devices so you need to emulate an IDE/SATA disk so you can boot Windows and install the paravirtualised storage driver. The following configuration snippet has a commented line for paravirtualised IO (which is fast) and an uncommented line for a virtual IDE/SATA disk that will allow an unmodified Windows 10 installation to boot.

#DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"
DRIVE="-drive id=disk,format=raw,file=/home/kvm/windows10,if=none -device ahci,id=ahci -device ide-drive,drive=disk,bus=ahci.0"

Spice Video

Spice is an alternative to VNC; here is the main web site for Spice [1]. Spice has many features that could be really useful for some people, like audio, sharing USB devices from the client, and streaming video support. I don’t have a need for those features right now but it’s handy to have options. My main reason for choosing Spice over VNC is that with ssvnc the mouse cursor doesn’t follow the actual mouse, which makes it difficult or impossible to click on items near the edges of the screen.

The following configuration will make the QEMU code listen with SSL on port 1234 on all IPv4 addresses. Note that this exposes the Spice password to anyone who can run ps on the KVM server; I’ve filed Debian bug #965061 requesting the option of a password file to address this. Also note that the “qxl” virtual video hardware is VGA compatible and can be expected to work with OS images that haven’t been modified for virtualisation, but it works better with special video drivers.

KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl

To connect to the Spice server I installed the spice-client-gtk package in Debian and ran the following command:

spicy -h kvm.example.com -s 1234 -w xxxxxxxx

Note that this exposes the Spice password to anyone who can run ps on the system used as a client for Spice; I’ve filed Debian bug #965060 requesting the option of a password file to address this.

This configuration with an unmodified Windows 10 image only supported 800*600 resolution VGA display.

Networking

To set up bridged networking as non-root you need to do something like the following as root:

chgrp kvm /usr/lib/qemu/qemu-bridge-helper
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper
mkdir -p /etc/qemu
echo "allow all" > /etc/qemu/bridge.conf
chgrp kvm /etc/qemu/bridge.conf
chmod 640 /etc/qemu/bridge.conf

Windows 10 supports the emulated Intel E1000 network card. Configuration like the following configures networking on a bridge named br0 with an emulated E1000 card. MAC addresses that have a 1 in the second least significant bit of the first octet are “locally administered” (like IPv4 addresses starting with “10.”), see the Wikipedia page about MAC Address for details.

The following is an example of network configuration where $ID is an ID number for the virtual machine. So far I haven’t come close to 256 VMs on one network so I’ve only needed one octet.

NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"

Final KVM Settings

KEYDIR=/etc/letsencrypt/live/kvm.example.com-0001
SPICE="-spice password=xxxxxxxx,x509-cacert-file=$KEYDIR/chain.pem,x509-key-file=$KEYDIR/privkey.pem,x509-cert-file=$KEYDIR/cert.pem,tls-port=1234,tls-channel=main -vga qxl"

UEFI="-drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_CODE.fd -drive if=pflash,format=raw,readonly,file=/usr/share/OVMF/OVMF_VARS.fd"

DRIVE="-drive format=raw,file=/home/kvm/windows10,if=virtio"

NET="-device e1000,netdev=net0,mac=02:00:00:00:01:$ID -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper,br=br0"

kvm -m 4000 -smp 2 $SPICE $UEFI $DRIVE $NET

Windows Settings

The Spice Download page has a link for “spice-guest-tools” which includes the QXL video driver among other things [2]. This seems to be needed for resolutions greater than 800*600.

The Virt-Manager Download page has a link for “virt-viewer” which is the Spice client for Windows systems [3], they have MSI files for both i386 and AMD64 Windows.

It’s probably a good idea to set the display and system sleep timeouts to “never” (I haven’t tested what happens if you don’t do that, but there’s no benefit in sleeping). Before uploading an image I disabled the pagefile and set the partition to the minimum size so I had less data to upload.

Problems

Here are some things I haven’t solved yet.

The aSpice Android client for the Spice protocol fails to connect with the QEMU code at the server giving the following message on stderr: “error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:../ssl/record/rec_layer_s3.c:1544:SSL alert number 48“.

Spice is supposed to support dynamic changes to screen resolution on the VM to match the window size at the client, but this doesn’t work for me, not even with the Red Hat QXL drivers installed.

The Windows Spice client doesn’t seem to support TLS, I guess running some sort of proxy for TLS would work but I haven’t tried that yet.

,

CryptogramNSA on Securing VPNs

The NSA's Central Security Service -- that's the part that's supposed to work on defense -- has released two documents (a full and an abridged version) on securing virtual private networks. Some of it is basic, but it contains good information.

Maintaining a secure VPN tunnel can be complex and requires regular maintenance. To maintain a secure VPN, network administrators should perform the following tasks on a regular basis:

  • Reduce the VPN gateway attack surface
  • Verify that cryptographic algorithms are Committee on National Security Systems Policy (CNSSP) 15-compliant
  • Avoid using default VPN settings
  • Remove unused or non-compliant cryptography suites
  • Apply vendor-provided updates (i.e. patches) for VPN gateways and clients

Worse Than FailureCodeSOD: Dates by the Dozen

Before our regularly scheduled programming, Code & Supply, a developer community group we've collaborated with in the past, is running a salary survey, to gauge the state of the industry. More responses are always helpful, so I encourage you to take a few minutes and pitch in.

Cid was recently combing through an inherited Java codebase, and it predates Java 8. That’s a fancy way of saying “there were no good date builtins, just a mess of cruddy APIs”. That’s not to say that there weren’t date builtins prior to Java 8; they were just bad.

Bad, but better than this. Cid sent along a lot of code, and instead of going through it all, let’s get to some of the “highlights”. Much of this is stuff we’ve seen variations on before, but it has been combined in ways that really elevate the badness. There are dozens of these methods, of which we are only going to look at a sample.

Let’s start with the String getLocalDate() method, which attempts to construct a timestamp in the form yyyyMMdd. As you can already predict, it does a bunch of string munging to get there, with blocks like:

switch (calendar.get(Calendar.MONTH)){
      case Calendar.JANUARY:
        sb.append("01");
        break;
      case Calendar.FEBRUARY:
        sb.append("02");
        break;
      …
}

Plus, we get the added bonus of one of those delightful “how do I pad an integer out to two digits?” blocks:

if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
  sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
}
else {
  sb.append(calendar.get(Calendar.DAY_OF_MONTH));
}

Elsewhere, they expect a timestamp to be in the form yyyyMMddHHmmssZ, so they wrote a handy void checkTimestamp method. Wait, void you say? Shouldn’t it be boolean?

Well here’s the full signature:

public static void checkTimestamp(String timestamp, String name)
  throws IOException

Why return a boolean when you can throw an exception on bad input? Unless the bad input is a null, in which case:

if (timestamp == null) {
  return;
}

Nulls are valid timestamps, which is useful to know. We next get a lovely block of checking each character to ensure that they’re digits, and a second check to ensure that the last is the letter Z, which turns out to be double work, since the very next step is:

int year = Integer.parseInt(timestamp.substring(0,4));
int month = Integer.parseInt(timestamp.substring(4,6));
int day = Integer.parseInt(timestamp.substring(6,8));
int hour = Integer.parseInt(timestamp.substring(8,10));
int minute = Integer.parseInt(timestamp.substring(10,12));
int second = Integer.parseInt(timestamp.substring(12,14));

Followed by a validation check for day and month:

if (day < 1) {
  throw new IOException(msg);
}
if ((month < 1) || (month > 12)) {
  throw new IOException(msg);
}
if (month == 2) {
  if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
    if (day > 29) {
      throw new IOException(msg);
    }
  }
  else {
    if (day > 28) {
      throw new IOException(msg);
  }
  }
}
if (month == 1 || month == 3 || month == 5 || month == 7
|| month == 8 || month == 10 || month == 12) {
  if (day > 31) {
    throw new IOException(msg);
  }
}
if (month == 4 || month == 6 || month == 9 || month == 11) {
  if (day > 30) {
    throw new IOException(msg);
  }
"

The upshot is they at least got the logic right.

What’s fun about this is that the original developer never once considered “maybe I need an intermediate data structure besides a string to manipulate dates”. Nope, we’re just gonna munge that string all day. And that is our entire plan for all date operations, which brings us to the real exciting part, where this transcends from “just regular old bad date code” into full-on WTF territory.

Would you like to see how they handle adding units of time? Like days?

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

The only thing I can say is that here they realized that “hey, wait, maybe I can modularize” and figured out how to stuff their “how many days are in a month” logic into getDaysOfMonth, which you can see invoked above.

Beyond that, they manually handle carrying, and never once pause to think, “hey, maybe there’s a better way”.

And speaking of repeating code, guess what- there’s also a public static String additionOfSeconds(String timestamp, int intervall) method, too.

There are dozens of similar methods; Cid has only provided us a sample. Cid adds:

This particular developer didn’t trust in too fine modularization and code reusing (DRY!). So for every of this dozen of methods, he has implemented these date parsing/formatting algorithms again and again! And no, not just copy/paste; every time it is a real wheel-reinvention. The code blocks and the position of single code lines look different for every method.

Once Cid got too frustrated by this code, they went and reimplemented it in modern Java date APIs, shrinking the codebase by hundreds of lines.
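
For a sense of the difference, here is a rough sketch of the same operations done with java.time; the class and method names are illustrative only, not Cid’s actual replacement:

import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class Timestamps {
    // the yyyyMMddHHmmssZ format used throughout the old code
    private static final DateTimeFormatter STAMP =
            DateTimeFormatter.ofPattern("yyyyMMddHHmmss'Z'");

    // replaces getLocalDate(): today's date as yyyyMMdd, no switch statement required
    static String localDate() {
        return LocalDate.now().format(DateTimeFormatter.BASIC_ISO_DATE);
    }

    // replaces checkTimestamp(), additionOfDays() and additionOfSeconds():
    // parse() throws DateTimeParseException on bad input, while plusDays() and
    // plusSeconds() handle all the month-length and carry logic
    static String addDays(String timestamp, int days) {
        return LocalDateTime.parse(timestamp, STAMP).plusDays(days).format(STAMP);
    }

    static String addSeconds(String timestamp, int seconds) {
        return LocalDateTime.parse(timestamp, STAMP).plusSeconds(seconds).format(STAMP);
    }
}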

The full blob of code Cid sent in follows, for your “enjoyment”:

public static String getLocalDate() {
  TimeZone tz = TimeZone.getDefault();
  GregorianCalendar calendar = new GregorianCalendar(tz);
  calendar.setTime(new Date());
  StringBuffer sb = new StringBuffer();
  sb.append(calendar.get(Calendar.YEAR));
  switch (calendar.get(Calendar.MONTH)){
    case Calendar.JANUARY:
      sb.append("01");
      break;
    case Calendar.FEBRUARY:
      sb.append("02");
      break;
    case Calendar.MARCH:
      sb.append("03");
      break;
    case Calendar.APRIL:
      sb.append("04");
      break;
    case Calendar.MAY:
      sb.append("05");
      break;
    case Calendar.JUNE:
      sb.append("06");
      break;
    case Calendar.JULY:
      sb.append("07");
      break;
    case Calendar.AUGUST:
      sb.append("08");
      break;
    case Calendar.SEPTEMBER:
      sb.append("09");
      break;
    case Calendar.OCTOBER:
      sb.append("10");
      break;
    case Calendar.NOVEMBER:
      sb.append("11");
      break;
    case Calendar.DECEMBER:
      sb.append("12");
      break;
  }
  if (calendar.get(Calendar.DAY_OF_MONTH) < 10) {
    sb.append("0" + calendar.get(Calendar.DAY_OF_MONTH));
  }
  else {
    sb.append(calendar.get(Calendar.DAY_OF_MONTH));
  }
  return sb.toString();
}

public static void checkTimestamp(String timestamp, String name)
throws IOException {
  if (timestamp == null) {
    return;
  }
  String msg = new String(
      "Wrong date or time. (" + name + "=\"" + timestamp + "\")");
  int len = timestamp.length();
  if (len != 15) {
    throw new IOException(msg);
  }
  for (int i = 0; i < (len - 1); i++) {
    if (! Character.isDigit(timestamp.charAt(i))) {
      throw new IOException(msg);
    }
  }
  if (timestamp.charAt(len - 1) != 'Z') {
    throw new IOException(msg);
  }
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  if (day < 1) {
    throw new IOException(msg);
  }
  if ((month < 1) || (month > 12)) {
    throw new IOException(msg);
  }
  if (month == 2) {
    if ((year %4 == 0 && year%100 != 0) || year%400 == 0) {
      if (day > 29) {
        throw new IOException(msg);
      }
    }
    else {
      if (day > 28) {
        throw new IOException(msg);
    }
    }
  }
  if (month == 1 || month == 3 || month == 5 || month == 7
  || month == 8 || month == 10 || month == 12) {
    if (day > 31) {
      throw new IOException(msg);
    }
  }
  if (month == 4 || month == 6 || month == 9 || month == 11) {
    if (day > 30) {
      throw new IOException(msg);
    }
  }
  if ((hour < 0) || (hour > 24)) {
    throw new IOException(msg);
  }
  if ((minute < 0) || (minute > 59)) {
    throw new IOException(msg);
  }
  if ((second < 0) || (second > 59)) {
    throw new IOException(msg);
  }
}

public static String additionOfDays(String timestamp, int intervall) {
  int year = Integer.parseInt(timestamp.substring(0,4));
  int month = Integer.parseInt(timestamp.substring(4,6));
  int day = Integer.parseInt(timestamp.substring(6,8));
  int len = timestamp.length();
  String timestamp_rest = timestamp.substring(8, len);
  int lastDayOfMonth = 31;
  int current_intervall = intervall;
  while (current_intervall > 0) {
    lastDayOfMonth = getDaysOfMonth(year, month);
    if (day + current_intervall > lastDayOfMonth) {
      current_intervall = current_intervall - (lastDayOfMonth - day);
      if (month < 12) {
        month++;
      }
      else {
        year++;
        month = 1;
      }
      day = 0;
    }
    else {
      day = day + current_intervall;
      current_intervall = 0;
    }
  }
  String new_year = "" + year + "";
  String new_month = null;
  if (month < 10) {
    new_month = "0" + month + "";
  }
  else {
    new_month = "" + month + "";
  }
  String new_day = null;
  if (day < 10) {
    new_day = "0" + day + "";
  }
  else {
    new_day = "" + day + "";
  }
  return new String(new_year + new_month + new_day + timestamp_rest);
}

public static String additionOfSeconds(String timestamp, int intervall) {
  int hour = Integer.parseInt(timestamp.substring(8,10));
  int minute = Integer.parseInt(timestamp.substring(10,12));
  int second = Integer.parseInt(timestamp.substring(12,14));
  int new_second = (second + intervall) % 60;
  int minute_intervall = (second + intervall) / 60;
  int new_minute = (minute + minute_intervall) % 60;
  int hour_intervall = (minute + minute_intervall) / 60;
  int new_hour = (hour + hour_intervall) % 24;
  int day_intervall = (hour + hour_intervall) / 24;
  StringBuffer new_time = new StringBuffer();
  if (new_hour < 10) {
    new_time.append("0" + new_hour + "");
  }
  else {
    new_time.append("" + new_hour + "");
  }
  if (new_minute < 10) {
    new_time.append("0" + new_minute + "");
  }
  else {
    new_time.append("" + new_minute + "");
  }
  if (new_second < 10) {
    new_time.append("0" + new_second + "");
  }
  else {
    new_time.append("" + new_second + "");
  }
  if (day_intervall > 0) {
    return additionOfDays(timestamp.substring(0,8) + new_time.toString() + "Z", day_intervall);
  }
  else {
    return (timestamp.substring(0,8) + new_time.toString() + "Z");
  }
}

public static int getDaysOfMonth(int year, int month) {
  int lastDayOfMonth = 31;
  switch (month) {
    case 1: case 3: case 5: case 7: case 8: case 10: case 12:
      lastDayOfMonth = 31;
      break;
    case 2:
      if ((year % 4 == 0 && year % 100 != 0) || year %400 == 0) {
        lastDayOfMonth = 29;
      }
      else {
        lastDayOfMonth = 28;
      }
      break;
    case 4: case 6: case 9: case 11:
      lastDayOfMonth = 30;
      break;
  }
  return lastDayOfMonth;
}
[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

CryptogramEFF's 30th Anniversary Livestream

It's the EFF's 30th birthday, and the organization is having a celebratory livestream today from 3:00 to 10:00 pm PDT.

There are a lot of interesting discussions and things. I am having a fireside chat at 4:10 pm PDT to talk about the Crypto Wars and more.

Stop by. And thank you for supporting EFF.

EDITED TO ADD: This event is over, but you can watch a recorded version on YouTube.

,

Krebs on Security‘Wormable’ Flaw Leads July Microsoft Patches

Microsoft today released updates to plug a whopping 123 security holes in Windows and related software, including fixes for a critical, “wormable” flaw in Windows Server versions that Microsoft says is likely to be exploited soon. While this particular weakness mainly affects enterprises, July’s care package from Redmond has a little something for everyone. So if you’re a Windows (ab)user, it’s time once again to back up and patch up (preferably in that order).

Top of the heap this month in terms of outright scariness is CVE-2020-1350, which concerns a remotely exploitable bug in more or less all versions of Windows Server that attackers could use to install malicious software simply by sending a specially crafted DNS request.

Microsoft said it is not aware of reports that anyone is exploiting the weakness (yet), but the flaw has been assigned a CVSS score of 10, which translates to “easy to attack” and “likely to be exploited.”

“We consider this to be a wormable vulnerability, meaning that it has the potential to spread via malware between vulnerable computers without user interaction,” Microsoft wrote in its documentation of CVE-2020-1350. “DNS is a foundational networking component and commonly installed on Domain Controllers, so a compromise could lead to significant service interruptions and the compromise of high level domain accounts.”

CVE-2020-1350 is just the latest worry for enterprise system administrators in charge of patching dangerous bugs in widely-used software. Over the past couple of weeks, fixes for flaws with high severity ratings have been released for a broad array of software products typically used by businesses, including Citrix, F5, Juniper, Oracle and SAP. This at a time when many organizations are already short-staffed and dealing with employees working remotely thanks to the COVID-19 pandemic.

The Windows Server vulnerability isn’t the only nasty one addressed this month that malware or malcontents can use to break into systems without any help from users. A full 17 other critical flaws fixed in this release tackle security weaknesses that Microsoft assigned its most dire “critical” rating, such as in Office, Internet Exploder, SharePoint, Visual Studio, and Microsoft’s .NET Framework.

Some of the more eyebrow-raising critical bugs addressed this month include CVE-2020-1410, which according to Recorded Future concerns the Windows Address Book and could be exploited via a malicious vcard file. Then there’s CVE-2020-1421, which protects against potentially malicious .LNK files (think Stuxnet) that could be exploited via an infected removable drive or remote share. And we have the dynamic duo of CVE-2020-1435 and CVE-2020-1436, which involve problems with the way Windows handles images and fonts that could both be exploited to install malware just by getting a user to click a booby-trapped link or document.

Not to say flaws rated “important” as opposed to critical aren’t also a concern. Chief among those is CVE-2020-1463, a problem within Windows 10 and Server 2016 or later that was detailed publicly prior to this month’s Patch Tuesday.

Before you update with this month’s patch batch, please make sure you have backed up your system and/or important files. It’s not uncommon for a particular Windows update to hose one’s system or prevent it from booting properly, and some updates even have been known to erase or corrupt files. Last month’s bundle of joy from Microsoft sent my Windows 10 system into a perpetual crash state. Thankfully, I was able to restore from a recent backup.

So do yourself a favor and backup before installing any patches. Windows 10 even has some built-in tools to help you do that, either on a per-file/folder basis or by making a complete and bootable copy of your hard drive all at once.

Also, keep in mind that Windows 10 is set to apply patches on its own schedule, which means if you delay backing up you could be in for a wild ride. If you wish to ensure the operating system has been set to pause updating so you can back up your files and/or system before the operating system decides to reboot and install patches whenever it sees fit, see this guide.

As always, if you experience glitches or problems installing any of these patches this month, please consider leaving a comment about it below; there’s a better-than-even chance other readers have experienced the same and may chime in here with some helpful tips. Also, keep an eye on the AskWoody blog from Woody Leonhard, who keeps a reliable lookout for buggy Microsoft updates each month.

CryptogramEnigma Machine for Sale

A four-rotor Enigma machine -- with rotors -- is up for auction.

CryptogramHalf a Million IoT Passwords Leaked

It is amazing that this sort of thing can still happen:

...the list was compiled by scanning the entire internet for devices that were exposing their Telnet port. The hacker then tried using (1) factory-set default usernames and passwords, or (2) custom, but easy-to-guess password combinations.

Telnet? Default passwords? In 2020?

We have a long way to go to secure the IoT.

EDITED TO ADD (7/14): Apologies, but I previously blogged this story in January.

Worse Than FailureRepresentative Line: An Exceptional Leader

IniTech’s IniTest division makes a number of hardware products, like a protocol analyzer which you can plug into a network and use to monitor data in transport. As you can imagine, it involves a fair bit of software, and it involves a fair bit of hardware. Since it’s a testing and debugging tool, reliability, accuracy, and stability are the watchwords of the day.

Which is why the software development process was overseen by Russel. Russel was the “Alpha Geek”, blessed by the C-level to make sure that the software was up to snuff. This led to some conflict (Russel had a bad habit of shoulder-surfing his fellow developers and telling them what to type) but otherwise worked very well. Foibles aside, Russel was technically competent, knew the problem domain well, and had a clean, precise, and readable coding style which all the other developers tried to imitate.

It was that last bit which got Ashleigh’s attention. Because, scattered throughout the entire C# codebase, there are exception handlers which look like this:

try
{
	// some code, doesn't matter what
	// ...
}
catch (Exception ex)
{
   ex = ex;
}

This isn’t the sort of thing which one developer did. Nearly everyone on the team had a commit like that, and when Ashleigh asked about it, she was told “It’s just a best practice. We’re following Russel’s lead. It’s for debugging.”

Ashleigh asked Russel about it, but he just grumbled and had no interest in talking about it beyond, “Just… do it if it makes sense to you, or ignore it. It’s not necessary.”

If it wasn’t necessary, why was it so common in the codebase? Why was everyone “following Russel’s lead”?

Ashleigh tracked down the original commit which started this pattern. It was made by Russel, but the exception handler had one tiny, important difference:

catch (Exception ex)
{
   ex = ex; //putting this here to set a breakpoint
}

Yes, this was just a bit of debugging code. It was never meant to be committed. Russel pushed it into the main history by accident, and the other developers saw it, and thought to themselves, “If Russel does it, it must be the right thing to do,” and started copying him.

By the time Russel noticed what was going on, it was too late. The standard had been set while he wasn’t looking, and whether it was ego or cowardice, Russel just could never get the team to follow his lead away from the pointless pattern.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

MEDebian PPC64EL Emulation

In my post on Debian S390X Emulation [1] I mentioned having problems booting a Debian PPC64EL kernel under QEMU. Giovanni commented that they had PPC64EL working and gave a link to their site with Debian QEMU images for various architectures [2]. I tried their image, which worked, then tried mine again, which also worked – it seemed that a recent update in Debian/Unstable fixed the bug that made QEMU not work with the PPC64EL kernel.

Here are the instructions on how to do it.

First you need to create a filesystem in an image file with commands like the following:

truncate -s 4g /vmstore/ppc
mkfs.ext4 /vmstore/ppc
mount -o loop /vmstore/ppc /mnt/tmp

Then visit the Debian Netinst page [3] to download the PPC64EL net install ISO. Then loopback mount it somewhere convenient like /mnt/tmp2.

The package qemu-system-ppc has the program for emulating a PPC64LE system; the qemu-user-static package has the program for emulating PPC64LE for a single program (i.e. a statically linked program or a chroot environment), which you need in order to run debootstrap. The following commands should be most of what you need.

apt install qemu-system-ppc qemu-user-static

update-binfmts --display

# qemu ppc64 needs exec stack to solve "Could not allocate dynamic translator buffer"
# so enable that on SE Linux systems
setsebool -P allow_execstack 1

debootstrap --foreign --arch=ppc64el --no-check-gpg buster /mnt/tmp file:///mnt/tmp2
chroot /mnt/tmp /debootstrap/debootstrap --second-stage

cat << END > /mnt/tmp/etc/apt/sources.list
deb http://mirror.internode.on.net/pub/debian/ buster main
deb http://security.debian.org/ buster/updates main
END
echo "APT::Install-Recommends False;" > /mnt/tmp/etc/apt/apt.conf

echo ppc64 > /mnt/tmp/etc/hostname

# /usr/bin/awk: error while loading shared libraries: cannot restore segment prot after reloc: Permission denied
# only needed for chroot
setsebool allow_execmod 1

chroot /mnt/tmp apt update
# why aren't they in the default install?
chroot /mnt/tmp apt install perl dialog
chroot /mnt/tmp apt dist-upgrade
chroot /mnt/tmp apt install bash-completion locales man-db openssh-server build-essential systemd-sysv ifupdown vim ca-certificates gnupg
# install kernel last because systemd install rebuilds initrd
chroot /mnt/tmp apt install linux-image-ppc64el
chroot /mnt/tmp dpkg-reconfigure locales
chroot /mnt/tmp passwd

cat << END > /mnt/tmp/etc/fstab
/dev/vda / ext4 noatime 0 0
#/dev/vdb none swap defaults 0 0
END

mkdir /mnt/tmp/root/.ssh
chmod 700 /mnt/tmp/root/.ssh
cp ~/.ssh/id_rsa.pub /mnt/tmp/root/.ssh/authorized_keys
chmod 600 /mnt/tmp/root/.ssh/authorized_keys

rm /mnt/tmp/vmlinux* /mnt/tmp/initrd*
mkdir /boot/ppc64
cp /mnt/tmp/boot/[vi]* /boot/ppc64

# clean up
umount /mnt/tmp
umount /mnt/tmp2

# setcap binary for starting bridged networking
setcap cap_net_admin+ep /usr/lib/qemu/qemu-bridge-helper

# afterwards set the access on /etc/qemu/bridge.conf so it can only
# be read by the user/group permitted to start qemu/kvm
echo "allow all" > /etc/qemu/bridge.conf

Here is an example script for starting kvm. It can be run by any user that can read /etc/qemu/bridge.conf.

#!/bin/bash
set -e

KERN="kernel /boot/ppc64/vmlinux-4.19.0-9-powerpc64le -initrd /boot/ppc64/initrd.img-4.19.0-9-powerpc64le"

# single network device, can have multiple
NET="-device e1000,netdev=net0,mac=02:02:00:00:01:04 -netdev tap,id=net0,helper=/usr/lib/qemu/qemu-bridge-helper"

# random number generator for fast start of sshd etc
RNG="-object rng-random,filename=/dev/urandom,id=rng0 -device virtio-rng-pci,rng=rng0"

# I have lockdown because it does no harm now and is good for future kernels
# I enable SE Linux everywhere
KERNCMD="net.ifnames=0 noresume security=selinux root=/dev/vda ro lockdown=confidentiality"

kvm -drive format=raw,file=/vmstore/ppc,if=virtio $RNG -nographic -m 1024 -smp 2 $KERN -curses -append "$KERNCMD" $NET