Planet Russell


Planet Linux Australia: Jonathan Adamczewski: Watch as the OS rewrites my buggy program.

I didn’t know that SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT) was a thing, until I wrote a bad test that wouldn’t crash.

Digging into it, I found that a movaps instruction was being rewritten as movups, which was a thoroughly confusing thing to see.

The one clue I had was that a fault due to an unaligned load had been observed in non-test code, but did not reproduce when written as a test using the google-test framework. A short hunt later (including a failed attempt at writing a small repro case), I found an explanation: google test suppresses this class of failure.

The code below will successfully demonstrate the behavior, printing out the SIMD load instruction before and after calling the function with an unaligned pointer.

[Gist]

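As a rough illustration of the same idea (a sketch only, not the original gist: the function, the misaligned buffer and the number of bytes dumped are invented here, and whether the compiler emits movaps at all depends on the compiler and its flags), a self-contained demonstration could look like this:

// Sketch only: build with MSVC on 64-bit Windows, e.g. cl /O2 alignment_demo.cpp
#include <windows.h>
#include <xmmintrin.h>
#include <cstdint>
#include <cstdio>

// A function the optimizer is expected (not guaranteed) to compile with an
// aligned 16-byte load (movaps); noinline so its machine code can be dumped.
__declspec(noinline) float sum4(const float* p) {
    __m128 v = _mm_load_ps(p);                      // aligned load: faults if p is misaligned
    __m128 t = _mm_add_ps(v, _mm_movehl_ps(v, v));  // lanes 0+2, 1+3
    t = _mm_add_ss(t, _mm_shuffle_ps(t, t, 1));     // add the remaining lane
    return _mm_cvtss_f32(t);
}

// Hex-dump the first n bytes of a function (with incremental linking this may
// point at a jump thunk, so a real disassembler is more reliable).
static void dump_code(const void* fn, size_t n) {
    const uint8_t* b = static_cast<const uint8_t*>(fn);
    for (size_t i = 0; i < n; ++i) printf("%02x ", b[i]);
    printf("\n");
}

int main() {
    alignas(16) float storage[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    const float* unaligned =
        reinterpret_cast<const float*>(reinterpret_cast<const char*>(storage) + 4);

    // Ask the OS to fix up alignment faults instead of raising them; per the
    // post, google-test effectively runs tests in this mode.
    SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT);

    const void* code = reinterpret_cast<const void*>(&sum4);
    printf("before: "); dump_code(code, 16);
    printf("sum = %f\n", sum4(unaligned));          // would fault without the error mode
    printf("after:  "); dump_code(code, 16);
    return 0;
}

If the rewrite described above happens, the aligned-load opcode bytes (0f 28 for movaps) in the first dump should show up as their unaligned counterpart (0f 10, movups) in the second.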

Planet Debian: Keith Packard: CompositeAcceleration

Composite acceleration in the X server

One of the persistent problems with the modern X desktop is the number of moving parts required to display application content. Consider a simple PresentPixmap call as made by the Vulkan WSI or GL using DRI3:

  1. Application calls PresentPixmap with new contents for its window

  2. X server receives that call and pends any operation until the target frame

  3. At the target frame, the X server copies the new contents into the window pixmap and delivers a Damage event to the compositor

  4. The compositor responds to the damage event by copying the window pixmap contents into the next screen pixmap

  5. The compositor calls PresentPixmap with the new screen contents

  6. The X server receives that call and either posts a Swap call to the kernel or delays any action until the target frame

This sequence has a number of issues:

  • The operation is serialized between three processes with at least three context switches involved.

  • There is no traceable relation between when the application asked for the frame to be shown and when it is finally presented. Nor do we even have any way to tell the application what time that was.

  • There are at least two copies of the application contents, from DRI3 buffer to window pixmap and from window pixmap to screen pixmap.

We'd also like to be able to take advantage of the multi-plane capabilities in the display engine (where available) to directly display the application contents.

Previous Attempts

I've tried to come up with solutions to this issue a couple of times in the past.

Composite Redirection

My first attempt to solve (some of) this problem was through composite redirection. The idea there was to directly pass the Present'd pixmap to the compositor and let it copy the contents directly from there in constructing the new screen pixmap image. With some additional hand waving, the idea was that we could associate that final presentation with all of the associated redirected compositing operations and at least provide applications with accurate information about when their images were presented.

This fell apart when I tried to figure out how to plumb the necessary events through to the compositor and back. With that, and the realization that we still weren't solving problems inherent with the three-process dance, nor providing any path to using overlays, this solution just didn't seem worth pursuing further.

Automatic Compositing

More recently, Eric Anholt and I have been discussing how to have the X server do all of the compositing work by natively supporting ARGB window content. By changing compositors to place all screen content in windows, the X server could then generate the screen image by itself and not require any external compositing manager assistance for each frame.

Given that a primitive form of automatic compositing is already supported, extending that to support ARGB windows and having the X server manage the stack seemed pretty tractable. We would extend the driver interface so that drivers could perform the compositing themselves using a mixture of GPU operations and overlays.

This runs up against five hard problems though.

  1. Making transitions between Manual and Automatic compositing seamless. We've seen how well the current compositing environment works when flipping compositing on and off to allow full-screen applications to use page flipping. Lots of screen flashing and application repaints.

  2. Dealing with RGB windows with ARGB decorations. Right now, the window frame can be an ARGB window with the client being RGB; painting the client into the frame yields an ARGB result with the A values being 1 everywhere the client window is present.

  3. Mesa currently allocates buffers exactly the size of the target drawable and assumes that the upper left corner of the buffer is the upper left corner of the drawable. If we want to place window manager decorations in the same buffer as the client and not need to copy the client contents, we would need to allocate a buffer large enough for both client and decorations, and then offset the client within that larger buffer.

  4. Synchronizing window configuration and content updates with the screen presentation. One of the major features of a compositing manager is that it can construct complete and consistent frames for display; partial updates to application windows need never be shown to the user, nor does the user ever need to see the window tree partially reconfigured. To make this work with automatic compositing, we'd need to both codify frame markers within the 2D rendering stream and provide some method for collecting window configuration operations together.

  5. Existing compositing managers don't do this today. Compositing managers are currently free to paint whatever they like into the screen image; requiring that they place all screen content into windows would mean they'd have to buy in to the new mechanism completely. That could still work with older X servers, but the additional overhead of more windows containing decoration content would slow performance with those systems, making migration less attractive.

I can think of plausible ways to solve the first three of these without requiring application changes, but the last two require significant systemic changes to compositing managers. Ick.

Semi-Automatic Compositing

I was up visiting Pierre-Loup at Valve recently and we sat down for a few hours to consider how to help applications regularly present content at known times, and to always know precisely when content was actually presented. That names just one of the above issues, but when you consider the additional work required by pure manual compositing, solving that one issue is likely best achieved by solving all three.

I presented the Automatic Compositing plan and we discussed the range of issues. Pierre-Loup focused on the last problem -- getting existing Compositing Managers to adopt whatever solution we came up with. Without any easy migration path for them, it seemed like a lot to ask.

He suggested that we come up with a mechanism which would allow Compositing Managers to ease into the new architecture and slowly improve things for applications. Towards that, we focused on a much simpler problem:

How can we get a single application at the top of the window stack to reliably display frames at the desired time, and to know when that doesn't occur?

Coming up with a solution for this led to a good discussion and a possible path to a broader solution in the future.

Steady-state Behavior

Let's start by ignoring how we start and stop this new mode and look at how we want applications to work when things are stable:

  1. Windows not moving around
  2. Other applications idle

Let's get a picture I can use to describe this:

In this picture, the compositing manager is triple buffered (as is normal for a page flipping application) with three buffers:

  1. Scanout. The image currently on the screen

  2. Queued. The image queued to be displayed next

  3. Render. The image being constructed from various window pixmaps and other elements.

The contents of the Scanout and Queued buffers are identical with the exception of the orange window.

The application is double buffered:

  1. Current. What it has displayed for the last frame

  2. Next. What it is constructing for the next frame

Ok, so in the steady state, here's what we want to happen:

  1. Application calls PresentPixmap with 'Next' for its window

  2. X server receives that call and copies Next to Queued.

  3. X server posts a Page Flip to the kernel with the Queued buffer

  4. Once the flip happens, the X server swaps the names of the Scanout and Queued buffers.

If the X server supports Overlays, then the sequence can look like:

  1. Application calls PresentPixmap

  2. X server receives that call and posts a Page Flip for the overlay

  3. When the page flip completes, the X server notifies the client that the previous Current buffer is now idle.

When the Compositing Manager has content to update outside of the orange window, it will:

  1. Compositing Manager calls PresentPixmap

  2. X server receives that call and paints the Current client image into the Render buffer

  3. X server swaps Render and Queued buffers

  4. X server posts Page Flip for the Queued buffer

  5. When the page flip occurs, the server can mark the Scanout buffer as idle and notify the Compositing Manager

If the orange window is in an overlay, then the X server can skip step 2.

The Auto List

To give the Compositing Manager control over the presentation of all windows, each call to PresentPixmap by the Compositing Manager will be associated with the list of windows, the "Auto List", for which the X server will be responsible for providing suitable content. Transitioning from manual to automatic compositing can therefore be performed on a window-by-window basis, and each frame provided by the Compositing Manager will separately control how that happens.

The Steady State behavior above would be represented by having the same set of windows in the Auto List for the Scanout and Queued buffers, and when the Compositing Manager presents the Render buffer, it would also provide the same Auto List for that.

Importantly, the Auto List need not contain only children of the screen Root window. Any descendant window at all can be included, and its contents drawn into the image using appropriate clipping. This allows the Compositing Manager to draw the window manager frame while the client window is drawn by the X server.

Any window at all can be in the Auto List. Windows with PresentPixmap contents available would be drawn from those. Other windows would be drawn from their window pixmaps.

Transitioning from Manual to Auto

To transition a window from Manual mode to Auto mode, the Compositing Manager would add it to the Auto List for the Render image, and associate that Auto List with the PresentPixmap request for that image. For the first frame, the X server may not have received a PresentPixmap for the client window, and so the window contents would have to come from the Window Pixmap for the client.

I'm not sure how we'd get the Compositing Manager to provide another matching image that the X server can use for subsequent client frames; perhaps it would just create one itself?

Transitioning from Auto to Manual

To transition a window from Auto mode to Manual mode, the Compositing manager would remove it from the Auto List for the Render image and then paint the window contents into the render image itself. To do that, the X server would have to paint any PresentPixmap data from the client into the window pixmap; that would be done when the Compositing Manager called GetWindowPixmap.

New Messages Required

For this to work, we need some way for the Compositing Manager to discover windows that are suitable for Auto compositing. Normally, these will be windows managed by the Window Manager, but it's possible for them to be nested further within the application hierarchy, depending on how the application is constructed.

I think what we want is to tag Damage events with the source window, and perhaps additional information to help Compositing Managers determine whether it should be automatically presenting those source windows or a parent of them. Perhaps it would be helpful to also know whether the Damage event was actually caused by a PresentPixmap for the whole window?

To notify the server about the Auto List, a new request will be needed in the Present extension to set the value for a subsequent PresentPixmap request.

Actually Drawing Frames

The DRM module in the Linux kernel doesn't provide any mechanism to remove or replace a Page Flip request. While this may get fixed at some point, we need to deal with how it works today, if only to provide reasonable support for existing kernels.

I think about the best we can do is to set a timer to fire a suitable time before vblank and have the X server wake up and execute any necessary drawing and Page Flip kernel calls. We can use feedback from the kernel to know how much slack time there was between any drawing and the vblank and adjust the timer as needed.
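As a rough sketch of that idea with today's kernel interfaces (not actual X server code: the DRM file descriptor, crtc_id, fb_id, the predicted vblank time and the slack value are all assumed to come from elsewhere), one could arm a timerfd for an absolute deadline and queue the flip when it fires:

// Sketch only: compile against libdrm (e.g. -I/usr/include/libdrm -ldrm).
#include <xf86drm.h>
#include <xf86drmMode.h>
#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <ctime>

// Wake up slack_ns before the predicted vblank, do any outstanding drawing,
// then queue a page flip; the flip-completion event carries the real vblank
// timestamp, which can be fed back in to tune slack_ns for the next frame.
void present_before_vblank(int drm_fd, uint32_t crtc_id, uint32_t fb_id,
                           struct timespec predicted_vblank, long slack_ns) {
    struct timespec wake = predicted_vblank;
    wake.tv_nsec -= slack_ns;
    if (wake.tv_nsec < 0) { wake.tv_nsec += 1000000000L; wake.tv_sec -= 1; }

    int tfd = timerfd_create(CLOCK_MONOTONIC, 0);
    struct itimerspec its = {};
    its.it_value = wake;                              // one-shot, absolute deadline
    timerfd_settime(tfd, TFD_TIMER_ABSTIME, &its, nullptr);

    uint64_t expirations;
    read(tfd, &expirations, sizeof expirations);      // sleep until the deadline
    close(tfd);

    // ... perform any pending compositing into fb_id here ...

    if (drmModePageFlip(drm_fd, crtc_id, fb_id,
                        DRM_MODE_PAGE_FLIP_EVENT, nullptr) != 0)
        perror("drmModePageFlip");
}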

Given that the goal is to provide for reliable display of the client window, it might actually be sufficient to let the client PresentPixmap request drive the display; if the Compositing Manager provides new content for a frame where the client does not, we can schedule that for display using a timer before vblank. When the Compositing Manager provides new content after the client, it would be delayed until the next frame.

Changes in Compositing Managers

As described above, one explicit goal is to ease the burden on Compositing Managers by making them able to opt-in to this new mechanism for a limited set of windows and only for a limited set of frames. Any time they need to take control over the screen presentation, a new frame can be constructed with an empty Auto List.

Implementation Plans

This post is the first step in developing these ideas to the point where a prototype can be built. The next step will be to take feedback and adapt the design to suit. Of course, there's always the possibility that this design will also prove unworkable in practice, but I'm hoping that this third attempt will actually succeed.


Planet Debian: Dirk Eddelbuettel: RVowpalWabbit 0.0.12

And yet another boring little RVowpalWabbit package update, now to version 0.0.12, and still in response to the CRAN request of not writing files where we should not (as caught by new tests added by Kurt). I had misinterpreted one flag and actually instructed the examples and tests to write model files back to the installed directory. Oops. Now fixed. I also added a reusable script for such tests in the repo for everybody's perusal (but it will require Linux and bindfs).

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch. I am also thinking that rvw extensions / work may make for a good GSoC 2018 project (and wrote up a short note). Again, if interested, please get in touch.

More information is on the RVowpalWabbit page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cryptogram: Friday Squid Blogging: Kraken Pie

Pretty, but contains no actual squid ingredients.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: Antoine Beaupré: January 2018 report: LTS

I have already published a yearly report which covers all of 2017 but also some of my January 2018 work, so I'll try to keep this short.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I was happy to switch to the new Git repository for the security tracker this month. It feels like some operations (namely pull / push) are a little slower, but others, like commits or log inspection, are much faster. So I think it is a net win.

jQuery

I did some work on trying to clean up a situation with the jQuery package, which I explained in more detail in a long post. It turns out there are multiple databases out there that track security issues in web development environments (like Javascript, Ruby or Python) but do not follow the normal CVE tracking system. This means that Debian had a few vulnerabilities in its jQuery packages that were not tracked by the security team, in particular three that were only on Snyk.io (CVE-2012-6708, CVE-2015-9251 and CVE-2016-10707). The resulting discussion was interesting and is worth reading in full.

A more worrying aspect is that this problem is not limited to flashy new web frameworks. Ben Hutchings estimated that almost half of the Linux kernel vulnerabilities are not tracked by CVE. It seems the consensus is that we want to try to follow the CVE process, and Mitre has been helpful in distributing this work by letting other entities, called CVE Numbering Authorities or CNAs, issue their own CVEs. After contacting Snyk, it turns out that they have started the process of becoming a CNA and are trying to make this part of their workflow, so that's a good sign.

LTS meta-work

I've done some documentation and architecture work on the LTS project itself, mostly around updating the wiki with current practices.

OpenSSH DLA

I've done a quick security update of OpenSSH for LTS, which resulted in DLA-1257-1. Unfortunately, after a discussion with the security researcher who published that CVE, it turned out that this was only a "self-DoS", i.e. that the NEWKEYS attack would only make the SSH client terminate its own connection, and therefore not impact the rest of the server. One has to wonder, in that case, why this was issued a CVE at all: presumably the vulnerability could be leveraged somehow, but I did not look deeply enough into it to figure that out.

Hopefully the patch won't introduce a regression: I tested this summarily and it didn't seem to cause any issues at first glance.

An interesting attack (CVE-2017-18078) was discovered against systemd where the "tmpfiles" feature could be abused to bypass filesystem access restrictions through hardlinks. The trick is that the attack is possible only if kernel hardening (specifically fs.protected_hardlinks) is turned off. That feature is available in the Linux kernel since the 3.6 release, but was actually turned off by default in 3.7. In the commit message, Linus Torvalds explained the change was breaking some userland applications, which is a huge taboo in Linux, and recommended that distros configure this at boot instead. Debian took the reverse approach and Hutchings issued a patch which reverts to the more secure default. But this means users of custom kernels are still vulnerable to this issue.

But, more importantly, this affects more than systemd. The vulnerability also happens when using plain old chown with hardening turned off, when running a simple command like this:

chown -R non-root /path/owned/by/non-root

I didn't realize this, but hardlinks share permissions: if you change permissions on file a that's hardlinked to file b, both files have the new permissions. This is especially nasty if users can hardlink to critical files like /etc/passwd or suid binaries, which is why the hardening was introduced in the first place.
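A tiny POSIX sketch (the file names are placeholders) makes that concrete: the permission bits live on the shared inode rather than on the directory entry, so changing them through one name changes them for every hard link.

// Sketch only: run in a scratch directory on any POSIX system.
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    int fd = open("a", O_CREAT | O_WRONLY, 0644);       // create "a" with mode 0644
    if (fd < 0) { perror("open"); return 1; }
    close(fd);

    unlink("b");                                        // ignore failure if "b" does not exist
    if (link("a", "b") != 0) { perror("link"); return 1; }

    chmod("a", 0600);                                   // change permissions via one name...

    struct stat st;
    if (stat("b", &st) == 0)
        printf("mode of b: %o\n", st.st_mode & 07777);  // ...and the other name prints 600
    return 0;
}

The same sharing applies to ownership, which is what makes a recursive chown dangerous when an attacker has managed to plant a hard link inside the tree being chowned.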

In Debian, this is especially an issue in maintainer scripts, which often call chown -R on arbitrary, non-root directories. Daniel Kahn Gillmor had reported this to the Debian security team all the way back in 2011, but it didn't get traction back then. He now opened Debian bug #889066 to at least enable a warning in lintian and an issue was also opened on colord Debian bug #889060, as an example, but many more packages are vulnerable. Again, this is only if hardening is somewhat turned off.

Normally, systemd is supposed to turn that hardening on, which should protect custom kernels, but this was turned off in Debian. Anyway, Debian still supports non-systemd init systems (although those users probably mostly migrated to Devuan), so the fix wouldn't be complete. I have therefore filed Debian bug #889098 against procps (which owns /etc/sysctl.conf and related files) to try and fix the issue more broadly there.

And to be fair, this was very explicitly mentioned in the jessie release notes, so those people without the protection kind of get what they deserve here...

p7zip

Lastly, I did a fairly trivial update of the p7zip package, which resulted in DLA-1268-1. The patch was sent upstream and went through a few iterations, including coordination with the security researcher.

Unfortunately, the latter wasn't willing to share the proof of concept (PoC) so that we could test the patch. We are therefore trusting the researcher that the fix works, which is too bad because they do not trust us with the PoC...

Other free software work

I probably did more stuff in January that wasn't documented in the previous report. But I don't know if it's worth my time going through all this. Expect a report in February instead! :)

Have a happy new year and all that stuff.

Planet Debian: Joachim Breitner: The magic “Just do it” type class

One of the great strengths of strongly typed functional programming is that it allows type-driven development. When I have some non-trivial function to write, I first write its type signature, and then writing the implementation is often very obvious.

Once more, I am feeling silly

In fact, it often is completely mechanical. Consider the following function:

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))

This is somewhat like the bind for a combination of the error monad and the reader monad, and remembers the intermediate result, but that doesn’t really matter now. What matters is that once I wrote that type signature, I feel silly having to also write the code, because there isn’t really anything interesting about that.

Instead, I’d like to tell the compiler to just do it for me! I want to be able to write

foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

And now I can! Assuming I am using GHC HEAD (or eventually GHC 8.6), I can run cabal install ghc-justdoit, and then the following code actually works:

{-# OPTIONS_GHC -fplugin=GHC.JustDoIt.Plugin #-}
import GHC.JustDoIt
foo :: (r -> Either e a) -> (a -> (r -> Either e b)) -> (r -> Either e (a,b))
foo = justDoIt

What is this justDoIt?

*GHC.LJT GHC.JustDoIt> :browse GHC.JustDoIt
class JustDoIt a
justDoIt :: JustDoIt a => a
(…) :: JustDoIt a => a

Note that there are no instances for the JustDoIt class -- they are created, on the fly, by the GHC plugin GHC.JustDoIt.Plugin. During type-checking, it looks at these JustDoIt t constraints and tries to construct a term of type t. It is based on LJT proof search in intuitionistic propositional calculus, which I have implemented to work directly on GHC’s types and terms (and I find it pretty slick). Those who like Unicode can write (…) instead.

What is supported right now?

Because I am working directly in GHC’s representation, it is pretty easy to support user-defined data types and newtypes. So it works just as well for

data Result a b = Failure a | Success b
newtype ErrRead r e a = ErrRead { unErrRead :: r -> Result e a }
foo2 :: ErrRead r e a -> (a -> ErrRead r e b) -> ErrRead r e (a,b)
foo2 = (…)

It doesn’t infer coercions or type arguments or any of that fancy stuff, and carefully steps around anything that looks like it might be recursive.

How do I know that it creates a sensible implementation?

You can check the generated Core using -ddump-simpl of course. But it is much more convenient to use inspection-testing to test such things, as I am doing in the Demo file, which you can skim to see a few more examples of justDoIt in action. I very much enjoyed reaping the benefits of the work I put into inspection-testing, as this is so much more convenient than manually checking the output.

Is this for real? Should I use it?

Of course you are welcome to play around with it, and it will not launch any missiles, but at the moment, I consider this a prototype that I created for two purposes:

  • To demonstrate that you can use type checker plugins for program synthesis. Depending on what you need, this might allow you to provide a smoother user experience than the alternatives, which are:

    • Preprocessors
    • Template Haskell
    • Generic programming together with type-level computation (e.g. generic-lens)
    • GHC Core-to-Core plugins

    In order to make this viable, I slightly changed the API for type checker plugins, which are now free to produce arbitrary Core terms as they solve constraints.

  • To advertise the idea of taking type-driven computation to its logical conclusion and free users from having to implement functions that they have already specified sufficiently precisely by their type.

What needs to happen for this to become real?

A bunch of things:

  • The LJT implementation is somewhat neat, but I probably did not implement backtracking properly, and there might be more bugs.
  • The implementation is very much unoptimized.
  • For this to be practically useful, the user needs to be able to use it with confidence. In particular, the user should be able to predict what code comes out. If there are multiple possible implementations, there needs to be a clear specification of which implementations are more desirable than others, and it should probably fail if there is ambiguity.
  • It ignores any recursive type, so it cannot do anything with lists. It would be much more useful if it could do some best-effort thing here as well.

If someone wants to pick it up from here, that’d be great!

I have seen this before…

Indeed, the idea is not new.

Most famous in the Haskell world is certainly Lennart Augustsson’s Djinn tool, which creates Haskell source expressions based on types. Alejandro Serrano has connected that to GHC in the library djinn-ghc, but I couldn’t use this because it was still outputting Haskell source terms (and it is easier to re-implement LJT rather than to implement type inference).

Lennart Spitzner’s exference is a much more sophisticated tool that also takes library API functions into account.

In the Scala world, Sergei Winitzki very recently presented the pretty neat curryhoward library that uses Scala macros. He seems to have some good ideas about ordering solutions by likely desirability.

And in Idris, Joomy Korkut has created hezarfen.

Planet Debian: Joey Hess: improving powertop autotuning

I'm wondering about improving powertop's auto-tuning. Currently the situation is that, if you want to tune your laptop's power consumption, you can run powertop and turn on all the tunables and try it for a while to see if anything breaks. The breakage might be something subtle.

Then after a while you reboot and your laptop is using too much power again until you remember to run powertop again. This happens a half dozen or so times. You then automate running powertop --auto-tune or individual tuning commands on boot, probably using instructions you find in the Arch wiki.

Everyone has to do this separately, which is a lot of duplicated and rather technical effort for users, while developers are left with a lot of work to manually collect information, like Hans de Goede is doing now for enabling PSR by default.

To improve this, powertop could come with a service file to start it on boot, read a config file, and apply tunings if enabled.

There could be a simple GUI to configure it, where the user can report when it's causing a problem. In case the problem prevents booting, there would need to be a boot option that disables the autotuning too.

When the user reports a problem, the GUI could optionally walk them through a bisection to find the problematic tuning, which would probably take only 4 or so steps.

Information could be uploaded anonymously to a hardware tunings database. Developers could then use that to find and whitelist safe tunings. Powertop could also query that to avoid tunings that are known to cause problems on the laptop.

I don't know if this is a new idea, but if it's not been tried before, it seems worth working on.

Krebs on Security: Attackers Exploiting Unpatched Flaw in Flash

Adobe warned on Thursday that attackers are exploiting a previously unknown security hole in its Flash Player software to break into Microsoft Windows computers. Adobe said it plans to issue a fix for the flaw in the next few days, but now might be a good time to check your exposure to this still-ubiquitous program and harden your defenses.

Adobe said a critical vulnerability (CVE-2018-4878) exists in Adobe Flash Player 28.0.0.137 and earlier versions. Successful exploitation could allow an attacker to take control of the affected system.

The software company warns that an exploit for the flaw is being used in the wild, and that so far the attacks leverage Microsoft Office documents with embedded malicious Flash content. Adobe said it plans to address this vulnerability in a release planned for the week of February 5.

According to Adobe’s advisory, beginning with Flash Player 27, administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

Hopefully, most readers here have taken my longstanding advice to disable or at least hobble Flash, a buggy and insecure component that nonetheless ships by default with Google Chrome and Internet Explorer. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale kicking Flash to the curb is to keep it installed in a browser that you don’t normally use, and then to use that browser only on sites that require Flash.

Cryptogram: Signed Malware

Stuxnet famously used legitimate digital certificates to sign its malware. A research paper from last year found that the practice is much more common than previously thought.

Now, researchers have presented proof that digitally signed malware is much more common than previously believed. What's more, it predated Stuxnet, with the first known instance occurring in 2003. The researchers said they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid. The researchers, who presented their findings Wednesday at the ACM Conference on Computer and Communications Security, found another 136 malware samples signed by legitimate CA-issued certificates, although the signatures were malformed.

The results are significant because digitally signed software is often able to bypass User Account Control and other Windows measures designed to prevent malicious code from being installed. Forged signatures also represent a significant breach of trust because certificates provide what's supposed to be an unassailable assurance to end users that the software was developed by the company named in the certificate and hasn't been modified by anyone else. The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren't valid.

Worse Than Failure: Error'd: The Biggest Loser

"I don't know what's more surprising - losing $2,000,000 or that Yahoo! thought I had $2,000,000 to lose," writes Bruce W.

 

"Autodesk sent out an email about my account's password being changed recently. Now it's up to me to guess which $productName it is!" wrote Tom G.

 

Kurt C. writes, "I kept repeating my mantra: 'Must not click forbidden radio buttons...'"

 

"My son boarded a bus in Toronto and got a free ride when the driver showed him this crash message," Ari S. writes.

 

"For those who are in denial about global warming, may I please direct you to conditions in Wisconsin," wrote Chelsie S.

 

Billie J. wrote, "Sorry there, Walmart, but that's not how math works."

 


Planet Debian: Wouter Verhelst: Day four of the pre-FOSDEM Debconf Videoteam sprint

Day four was a day of pancakes and stew. Oh, and some video work, too.

Nattie

Did more documentation review. She finished the SReview documentation and got started on documenting the examples in our ansible repository.

Kyle

Finished splitting out the ansible configuration from the ansible code repository. The code repository now includes an example configuration that is well documented for getting started, whereas our production configuration lives in a separate repository.

Stefano

Spent much time on the debconf website, mostly working on a new upstream release of wafer.

He also helped review Kyle's documentation, and spent some time together with Tzafrir debugging our ansible test setup.

Tzafrir

Worked on documentation, and did a test run of the ansible repository. Found and fixed issues that cropped up during that.

Wouter

Spent much time trying to figure out why SReview was not doing what he was expecting it to do. Side note: I hate video codecs. Things are working now, though, and most of the fixes were implemented in a way that makes it reusable for other conferences.

There's one more day coming up today. Hopefully won't forget to blog about it tonight.

Planet Debian: Daniel Pocock: Everything you didn't know about FSFE in a picture

As FSFE's community begins exploring our future, I thought it would be helpful to start with a visual guide to the current structure.

All the information I've gathered here is publicly available but people rarely see it in one place, hence the heading. There is no suggestion that anything has been deliberately hidden.

The drawing at the bottom includes Venn diagrams to show the overlapping relationships clearly and visually. For example, in the circle for the General Assembly, all the numbers add up to 29, the total number of people listed on the "People" page of the web site. In the circle for Council, there are 4 people in total and in the circle for Staff, there are 6 people, 2 of them also in Council and 4 of them in the GA but not council.

The financial figures on this diagram are taken from the 2016 financial summary. The summary published by FSFE is very basic, so the exact amount paid in salaries is not clear; the number in the Staff circle probably covers a lot more than just salaries, and I feel FSFE gets good value for this money.

Some observations about the numbers:

  • The volunteers don't elect any representatives to the GA, although some GA members are also volunteers
  • From 1,700 fellowship members, only 2 are elected in 2 of the 29 GA seats yet they provide over thirty percent of the funding through recurring payments
  • Out of 6 staff, all 6 are members of the GA (6 out of 29) since a decision to accept them at the last GA meeting
  • Only the 29 people in the General Assembly are full (legal) members of the FSFE e.V. association with the right to vote on things like constitutional changes. Those people are all above the dotted line on the page. All the people below the line have been referred to by other names, like fellow, supporter, community, donor and volunteer.

If you ever clicked the "Join the FSFE" button or filled out the "Join the FSFE" form on the FSFE web site and made a payment, did you believe you were becoming a member with an equal vote? If so, one procedure you can follow is to contact the president as described here and ask to be recognized as a full member. I feel it is important for everybody who filled out the "Join" form to clarify their rights and their status before the constitutional change is made.

I have not presented these figures to create conflict between staff and volunteers. Both have an important role to play. Many people who contribute time or money to FSFE are very satisfied with the concept that "somebody else" (the staff) can do difficult and sometimes tedious work for the organization's mission and software freedom in general. As I've been elected as a fellowship representative, I feel a responsibility to ensure the people I represent are adequately informed about the organization and their relationship with it and I hope these notes and the diagram helps to fulfil that responsibility.

Therefore, this diagram is presented to the community not for the purpose of criticizing anybody but for the purpose of helping make sure everybody is on the same page about our current structure before it changes.

If anybody has time to make a better diagram it would be very welcome.

Planet Debian: John Goerzen: How are you handling building local Debian/Ubuntu packages?

I’m in the middle of some conversations about Debian/Ubuntu repositories, and I’m curious how others are handling this.

How are people maintaining repos for an organization? Are you integrating them with a git/CI (github/gitlab, jenkins/travis, etc) workflow? How do packages propagate into repos? How do you separate prod from testing? Is anyone running buildd locally, or integrating with more common CI tools?

I’m also interested in how people handle local modifications of packages — anything from newer versions of C libraries to newer interpreters. Do you just use the regular Debian toolchain, packaging them up for (potentially) the multiple distros/versions that you have in production? Pin them in apt?

Just curious what’s out there.

Some Googling has so far turned up just one relevant hit: Michael Prokop’s DebConf15 slides, “Continuous Delivery of Debian packages”. Looks very interesting, and discusses jenkins-debian-glue.

Some tangentially-related but interesting items:

Edit 2018-02-02: I should have also mentioned BuildStream

Planet Debian: Bits from Debian: Debian welcomes its Outreachy interns

We'd like to welcome our three Outreachy interns for this round, lasting from December 2017 to March 2018.

Juliana Oliveira is working on reproducible builds for Debian and free software.

Kira Obrezkova is working on bringing open-source mobile technologies to a new level with Debian (Osmocom).

Renata D'Avila is working on a calendar database of social events and conferences for free software developers.

Congratulations, Juliana, Kira and Renata!

From the official website: Outreachy provides three-month internships for people from groups traditionally underrepresented in tech. Interns work remotely with mentors from Free and Open Source Software (FOSS) communities on projects ranging from programming, user experience, documentation, illustration and graphical design, to data science.

The Outreachy programme is possible in Debian thanks to the efforts of Debian developers and contributors who dedicate their free time to mentor students and outreach tasks, and the Software Freedom Conservancy's administrative support, as well as the continued support of Debian's donors, who provide funding for the internships.

Debian will also participate this summer in the next round for Outreachy, and is currently applying as mentoring organisation for the Google Summer of Code 2018 programme. Have a look at the projects wiki page and contact the Debian Outreach Team mailing list to join as a mentor or welcome applicants into the Outreachy or GSoC programme.

Join us and help extend Debian!

Planet Linux Australia: OpenSTEM: Welcome Back!

Well, most of our schools are back, or about to start the new year. Did you know that there are schools using OpenSTEM materials in every state and territory of Australia? Our wide range of resources, especially those on Australian history, give detailed information about the history of all our states and territories. We pride […]


Cory Doctorow: The 2017 Locus List: a must-read list of the best science fiction and fantasy of the past year

Every year, Locus Magazine’s panel of editors reviews the entire field of science fiction and fantasy and produces its Recommended Reading List; the 2017 list is now out, and I’m proud to say that it features my novel Walkaway, in excellent company with dozens of other works I enjoyed in the past year.


2017 Locus Recommended Reading List
[Locus Magazine]

Planet Debian: Ritesh Raj Sarraf: Laptop Mode Tools 1.72

What a way to make a gift!

I'm pleased to announce the 1.72 release of Laptop Mode Tools. Major changes include the port of the GUI configuration utility to Python 3 and PyQt5. Some tweaks, fixes and enhancements in current modules. Extending {black,white}list of devices to types other than USB. Listing of devices by their devtype attribute.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository. 

Source tarball, Fedora/SUSE RPM packages available at:
https://github.com/rickysarraf/laptop-mode-tools/releases

Debian packages will be available soon in Unstable.

Homepage: https://github.com/rickysarraf/laptop-mode-tools/wiki
Mailing List: https://groups.google.com/d/forum/laptop-mode-tools

 

 

1.72 - Thu Feb  1 21:59:24 IST 2018
    * Switch to PyQt5 and Python3
    * Add btrfs to list of filesystems for which we can set commit interval
    * Add pkexec invocation script
    * Add desktop file to upstream repo and invoke script
    * Update installer to includes gui wrappers
    * Install new SVG pixmap
    * Control all available cards in radeon-dpm
    * Prefer to use the new runtime pm autosuspend_delay_ms interface
    * tolerate broken device interfaces quietly
    * runtime-pm: Make {black,white}lists work with non-USB devices
    * send echo errors to verbose log
    * Extend blacklist by device types of devtype

 

What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 .
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking
 .
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power

PS: This release took around 13 months. A lot of things changed, for me, personally. Some life lessons learnt. Some idiots uncovered. But the best of 2017, I got married. I am hopeful to keep work-life balanced, including time for FOSS.


Planet Debian: Raphaël Hertzog: My Free Software Activities in January 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

While I continue to manage the administrative side of Debian LTS, I’m taking a break from the technical work (i.e. preparing and releasing security updates). The hope is that it will help me focus more on my book, which (still) needs to be updated for stretch. In truth, this did not happen in January but I hope to do better in the upcoming months.

Salsa and related

The switch to salsa.debian.org is a major event in our community. Last month I started with the QA team and the distro-tracker repository as an experiment. This month I took this opportunity to bring to fruition a merge between the pkg-security team and the forensics team that I already proposed in the past and that we postponed because it was deemed busy work for no gains. Now that both teams had to migrate anyway, it was easier to migrate everything at once under a single project.

All our repositories are now managed under the same team in salsa: https://salsa.debian.org/pkg-security-team/ But for the mailing list we are still waiting for the new list to be created on lists.debian.org (#888136).

As part of this work, I contributed some fixes to the scripts maintained by Mehdi Dogguy. I also filed a wishlist request for a new script to make it easy to share repositories with the Debian group.

With the expected demise of alioth mailing lists, there’s some interest in getting the Debian package tracker to host the official maintainer email. As the central hub for most emails related to packages, it seems natural indeed. We made some progress lately on making it possible to use @packages.debian.org emails (with the downside of receiving duplicate emails currently) but that’s not really an option when you maintain many packages and want to see them grouped under the same maintainer email. Furthermore it doesn’t allow for automatic association of a package to its maintainer team. So I implemented a team+slug@tracker.debian.org email that works for each team registered on the package tracker and that will automatically associate the package to its team. The email is just a black hole for now (not really a problem as most automatic emails are already received through another email) but I expect to forward non-automatic mails to team members to make it useful as a way for team members to discuss things.

The package tracker also learned to recognize commit mails generated by GitLab and it will now forward them to the source package whose name matches the name of the GitLab project that generated them (see #886114).

Misc Debian stuff

Distro Tracker. I got my first two merge requests, which I reviewed and merged. One adds native HTML support to toggle action items (i.e. without javascript on recent browsers) and the other improves some of the messages shown by the vcswatch integration. In #886450, we discussed how to better filter build failure mails sent by the build daemons. New headers have been added.

Bug reports and patches. I forwarded and/or got moving a couple of bugs that we encountered in Kali (glibc: new data brought to #820826, raspi3-firmware: #887062, glibc: tracking down #886506 to a glibc regression affecting busybox, gr-fcdproplus: #888853 new watch file, gjs: upstream bug #33). I also needed a new feature in live-build so I filed #888507 which I implemented almost immediately (but released only in Kali because it’s not documented yet and can possibly be improved a bit further).

While doing my yearly accounting, I opened an issue on tryton and pushed a fix after approval. While running unit tests on distro-tracker, I got an unexpected warning that seems to be caused by virtualenv (see upstream issue #1120).

Debian Packaging. I uploaded zim 0.68~rc1-1 to experimental.

Thanks

See you next month for a new summary of my activities.


Planet Debian: Shirish Agarwal: webmail saga continues

I was pleased to see a reply from Daniel as a reaction to my post. I read and re-read the blog a couple of times yesterday and once more today to question my own understanding and see if there is any way I could make life easier and simpler for myself and the people I interact with, but I am finding it somewhat of an uphill task. I will not be limiting myself to e-mail alone, as I feel that unless we get and share the big picture it would remain incomplete.

Allow me to share a few observations below –

1. The first one is probably cultural in nature (whether it is specific to India or worldwide, I have no contextual information). Very early in my professional and personal life I understood that e-mails are leaky by design. By leaky I mean liable to be leaked by individuals for profit or some similar motive.

E-mails are and were also used as misinformation tools by companies and individuals, then and now, often quoting a subset or superset of them without providing the contextual information in which they were written. While this could be construed as a straw man, I do not know any other way to put it. So the best way, at least for me, is to construct e-mails in such a way that even if some information is leaked, I'm OK with it being leaked or being in the public domain. It just hurts less. I could probably give 10-15 examples of high-profile public outings in the last 2-3 years alone. And these are millionaires and billionaires, people on whom many others rely for their livelihoods, who should have known better. Indian companies do have specific clauses for communications, where any communication you had with them is subject to privacy and if you share it with somebody you would be prosecuted; on the other hand, if the company does it, it gets a free pass.

2. Because of my own experiences I have been pretty circumspect/slightly paranoid about anybody promising or selling the kool-aid of total privacy. Another example, of slightly more recent vintage, which pains me even today, was a Mozilla add-on for which I had done an RFP (Request for Package), which a person from pkg-mozext-maintainers@lists.alioth.debian.org (which will probably be moved to salsa in the near future) packaged, and I thanked him/her for it. Two years later it came to the fore that under the guise of protecting us from bad cookies or whatever the add-on was supposed to do, it was actually tracking us and selling this information to third parties.

This was found out by a security researcher (not Mozilla) casually auditing the code two years down the line, and was then confirmed by other security researchers as well. It was a moment of anguish for me, as so many people's privacy had been invaded even though there were good intentions from my side.

It was also a bit sad as I had assumed (perhaps incorrectly) that Debian does do some automated security audit along with the hardening flags that it uses when a package is built. This isn't to show Debian in a bad light but to understand and realize that Debian has its own shortcomings in many ways. I did hear recently that a lot of packages from Kali would make it into Debian core; hopefully some of those packages could also serve as additional tools to look at packages when they are being built 🙂

I do know it’s a lot to ask for as Debian is a volunteer effort. I am happy to test or whichever way I can contribute to Debian if in doing so we can raise the bar for intended or unintended malicious apps. to go through. I am not a programmer but still I’m sure there might be somehow I could add strength to the effort.

3. The other part is that I don't deny that Google is intrusive. Google is intrusive not just in e-mail but in every way: every page that uses Google Analytics or the Google search spider can be used to track where you are and what you are doing. The way they have embedded themselves in web pages means it has become almost impossible to view web pages (some exceptions remain) without allowing google.com to see what you are seeing. I use requestpolicy-continued to know what third-party domains are on a web page, and I see fonts.googleapis.com, google.com and some of the others almost all the time. The problem there is that we also don't know how much information Google gathers. For example, even if I don't use the Google search engine, if I am searching on any particular topic and 3-4 of the websites involved use Google in any form or manner, it would be easy to work out what information I'm looking for and the line of inquiry I'm following. That is actually as much of a problem to me as e-mail, if not more, and I have no solution for it. Tor and torbrowser-launcher are and were supposed to be an answer to this problem, but most big CDNs (Content Delivery Networks) like cloudflare.com are against it, so privacy remains an elusive dream there as well.

5. It becomes all the more dangerous in the mobile space, where Google Android is the only vendor. The rise of carrier-handset locking, which is prevalent in the west, has also started making inroads in India. In the manufacturer-carrier-operating system complex such things will become more common. I have no idea about other vendors, but from what I have seen I think the majority are probably doing the same. The iPhone is also supposed to have a lot of nastiness when it comes to surveillance.

6. My main worry with Protonmail or any other vendor is whether we should just take them at face value, or whether there is some other way for people around the world to be assured, and, in case things take a turn for the worse, to be able to file a claim for damages if those terms and conditions are not met. I asked this question in my previous post and looked for an answer, but didn't find an appropriate one in your post. The only way out that I see is decentralized networks and apps, but they too leave much to be desired. Two examples I can share of the latter: Diaspora started with the idea that I could have my profile in one pod and, if for some reason I didn't like the pod, I could take all the info to another pod, with all the messages, everything, in an instant. As of a few months back, when I tried to migrate to another pod, I found that feature doesn't work / is still a work in progress.

Similarly, zeronet.io is another service which claimed to use decentralization, but for the last year or so I haven't been able to send a single e-mail to another user.

I used both these examples as both are FOSS and both have considerable communities and traction built around them. Security and/or anonymity still lag behind, though.

I hope I was able to share where I’m coming from.

Planet Debian: Daniel Pocock: Our future relationship with FSFE

Below is an email that has been distributed to the FSFE community today. FSFE aims to be an open organization and people are welcome to discuss it through the main discussion group (join, thread and reply) whether you are a member or not.

For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site. The "No Cloud" stickers and the Public Money Public Code campaign are examples of initiatives started by FSFE - you can request free stickers and posters by filling in this form.


Dear FSFE Community,

I'm writing to you today as one of your elected fellowship representatives rather than to convey my own views, which you may have already encountered in my blog or mailing list discussions.

The recent meeting of the General Assembly (GA) decided that the annual elections will be abolished but this change has not yet been ratified in the constitution.

Personally, I support an overhaul of FSFE's democratic processes and the bulk of the reasons for this change are quite valid. One of the reasons proposed for the change, the suggestion that the election was a popularity contest, is an argument I don't agree with: the same argument could be used to abolish elections anywhere.

One point that came up in discussions about the elections is that people don't need to wait for the elections to be considered for GA membership. Matthias Kirschner, our president, has emphasized this to me personally as well, he looks at each new request with an open mind and forwards it to all of the GA for discussion. According to our constitution, anybody can write to the president at any time and request to join the GA. In practice, the president and the existing GA members will probably need to have seen some of your activities in one of the FSFE teams or local groups before accepting you as a member. I want to encourage people to become familiar with the GA membership process and discuss it within their teams and local groups and think about whether you or anybody you know may be a good candidate.

According to the minutes of the last GA meeting, several new members were already accepted this way in the last year. It is particularly important for the organization to increase diversity in the GA at this time.

The response rate for the last fellowship election was lower than in previous years and there is also concern that emails don't reach everybody thanks to spam filters or the Google Promotions tab (if you use gmail). If you had problems receiving emails about the last election, please consider sharing that feedback on the discussion list.

Understanding where the organization will go beyond the extinction of the fellowship representative is critical. The Identity review process, championed by Jonas Oberg and Kristi Progri, is actively looking at these questions. Please contact Kristi if you wish to participate and look out for updates about this process in emails and Planet FSFE. Kristi will be at FOSDEM this weekend if you want to speak to her personally.

I'll be at FOSDEM this weekend and would welcome the opportunity to meet with you personally. I will be visiting many different parts of FOSDEM at different times, including the FSFE booth, the Debian booth, the real-time lounge (K-building) and the Real-Time Communications (RTC) dev-room on Sunday, where I'm giving a talk. Many other members of the FSFE community will also be present, if you don't know where to start, simply come to the FSFE booth. The next European event I visit after FOSDEM will potentially be OSCAL in Tirana, it is in May and I would highly recommend this event for anybody who doesn't regularly travel to events outside their own region.

Changing the world begins with the change we make ourselves. If you only do one thing for free software this year and you are not sure what it is going to be, then I would recommend this: visit an event that you never visited before, in a city or country you never visited before. It doesn't necessarily have to be a free software or IT event. In 2017 I attended OSCAL in Tirana and the Digital-Born Media Carnival in Kotor for the first time. You can ask FSFE to send you some free stickers and posters (online request with optional donation) to give to the new friends you meet on your travels. Change starts with each of us doing something new or different and I hope our paths may cross in one of these places.


For more information about joining FSFE, local groups, campaigns and other activities please visit the FSFE web site.

Please feel free to discuss this through the FSFE discussion group (join, thread and reply)

CryptogramJackpotting Attacks Against US ATMs

Brian Krebs is reporting sophisticated jackpotting attacks against US ATMs. The attacker gains physical access to the ATM, plants malware using specialized electronics, and then later returns and forces the machine to dispense all the cash it has inside.

The Secret Service alert explains that the attackers typically use an endoscope -- a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body -- to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM's computer.

"Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers," reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

"In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds," the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

Lots of details in the article.

Worse Than FailureWe Sell Bonds!

We Sell Bonds!

The quaint, brick-faced downtown office building was exactly the sort of place Alexis wanted her first real programming job to be. She took a moment to just soak in the big picture. The building's façade oozed history, and culture. The busy downtown crowd flowed around her like a tranquil stream. And this was where she landed right out of college-- if this interview went well.

Alexis went inside, got a really groovy start-up vibe from the place. The lobby slash waiting room slash employee lounge slash kitchen slash receptionist desk was jam packed full of boxes of paperwork waiting to be unpacked and filed (once a filing cabinet was bought). The walls, still the color of unpainted drywall, accented with spats of plaster and glue-tape. Everything was permeated with chaotic beginnings and untapped potential.

Her interviewer, Mr. Chen, the CEO of the company, led her into the main conference room, which she suspected was the main conference room by virtue of being the only conference room. The faux-wood table, though moderately sized, still barely left room for herself and the five interviewers, crammed into a mish-mash of conference-room chairs, office chairs and one barstool. At least this room's walls had seen a coat of paint-- if only a single coat. Framed artwork sat on the ground, leaned up gently against the wall. She shared the artwork's anticipation-- waiting for the last coat of paint and touch-ups to dry, to hang proudly for all to see, fulfilling their destiny as the company grew and evolved around them.

"Thank you for coming in," said Mr. Chen as he sat at the head of the conference table.

"Thank you for having me," Alexis replied, sitting opposite him, flanked by the five other interviewers. She was glad she'd decided to play cautious and wear her formal 'Interview Suit'. She fit right in with the suits and ties everyone else was wearing. "I really dig the office space. How long have you been here?"

"Five years," Mr. Chen answered.

Her contextual awareness barely had time to register the whiplash of unpainted walls and unhung pictures in a long occupied office-- not that she had time to process that thought anyways.

"Let the interview begin now," Mr. Chen said abruptly. "Tell me your qualifications."

"I-- uh, okay," Alexis sat up straight and opened her leather folder, "Does everyone have a copy of my resume? I printed extra in case-- "

"We are a green company," Mr. Chen stated.

Alexis froze for a moment, her hand inches from the stack of resumes. She folded her hands over her own copy, smiled, and recovered. "Okay. My qualifications..." She filled them in on the usual details-- college education, GPA, co-op jobs, known languages and frameworks, contributions to open source projects. It was as natural as any practice interview she'd ever had. Smile. Talk clearly. Make eye contact with each member of the interview team for an appropriate length of time before looking at the next person.

Though doing so made her acutely aware that she had no idea who the other people were. They'd never been introduced. They were just-- there.

As soon as she'd finished her last qualification point, Mr. Chen spoke. "Are you familiar with the bonds market?"

She'd done some cursory Wikipedia research before her interview, but admitted, "An introductory familiarity."

"You are not expected to know it," Mr. Chen said, "The bond market is complicated. Very complicated. Even experienced brokers who know about futures contracts, forward contractions, options, swaps and warrants often have no idea how bonds work. But their customers still want to buy a bond. The brokers are our customers, and allowing them to buy bonds is the sole purpose of 'We Sell Bonds!'."

Though Mr. Chen had a distinctly dry and matter-of-fact way of speaking, she could viscerally HEAR the exclamation point in the company's name.

"Very interesting," Alexis said. Always be sure to compliment the interviewer at some point. "What sort of programming projects would I be working on?"

"The system is very complicated," Mr. Chen retorted. "Benny is our programmer."

One of the suited individuals to her left nodded, and she smiled back at him. At least now she knew one other person's name.

"He will train you before you may modify the system. It is very important the system be working properly, and any development must be done carefully. At least six months of training. But the system gathers lots of data, from markets, and from our customers. That data must be imported into the system. That will be part of your duties."

Again, Alexis found herself sitting behind a default smile while her brain processed. The ad she'd answered had clearly been for a junior developer. It had asked for developer skills, listed must-know programming languages, and even been titled 'Junior Developer'. Without breaking the smile, she asked, "What would you say the ratio of data handling to programming would be?"

"I would say close to one hundred percent."

Alexis' heart sank, and she curled her toes to keep any physical sign of her disappointment from showing. She mentally looked to her silver-linings view. Sure, it was data entry-- but she'd still be hands-on with a complicated financial system. She'd get training. Six months of training, which would be a breeze compared to full-time college. And if there really was that much data entry, then the job would be perfect for a fresh mind. There'd be TONS of room for introducing automation and efficiency. What more could a junior developer want?

"That sounds great," Alexis said, enthusiastic as ever.

"Good," Mr. Chen said. "The job starts on Monday."

Her whiplash systems had already long gone offline from overload. Was that a job offer?

"That-- sounds great!" Alexis repeated.

"Good. Nadine will email your paperwork. Email it back promptly."

And now Alexis knew a second person's name. "I look forward to meeting the whole company," she said aloud.

"You have," Mr. Chen replied, gesturing to the others at the table. "We will return to work now. Good day."

Alexis found herself back on the sidewalk outside the quaint brick-faced downtown office building, gainfully employed and not sure if she actually understood what the heck had just happened. But that was a problem for Monday.

#

Alexis arrived fifteen minutes early for her first day at the quaint brick-faced downtown office-- no, make that HER quaint brick-faced downtown office.

Fourteen minutes later, Mr. Chen unlocked the front-door from the inside, and let her in.

"You're early," he stated, locking the door behind her.

"The early bird gets the worm," she clichéd.

"You don't need to be early if you are punctual. Follow."

Mr. Chen led her through the lobby, and once again into the main boardroom. As before, five people sat around the conference table. Alexis figured there'd be formalities and paperwork to file before she got a desk. HER desk! The whole company (all six of them-- though now it was seven) were here to greet her. And, for some reason, they'd brought their laptops.

"You will sit beside Benny," Mr. Chen said, taking his seat.

"I-- huh?"

Next to Benny, there was an empty chair, and an unoccupied laptop. Alexis slunk around the other chairs, careful not to knock over the framed posters that were still propped against the wall, and sat beside the lead programmer.

"Morning meeting before getting down to work, huh?" she said, smiling at him.

Benny gave her a sideways glance. "We are working."

Alexis wasn't sure what he meant-- and then she noticed, for the first time, that everyone was heads down, looking at their screens, typing away. This wasn't just a boardroom. This was her desk. This was everyone's desk.

Over the morning, Benny gave her his split attention-- interspersing his work with muttering instructions to her; how to log in, where the data files were, how to install Excel. He would only talk to her in-between steps of his task; never interrupting a step to give her attention. Sometimes she just sat there and watched him watch a progress bar. She gathered he was upgrading a server's instance of SQL Server from version "way too old" to version "still way too old, but not as much".

After lunch (also eaten at the shared desk), Benny actually looked at her.

"Time for your first task," he said, giving her a sheet of paper. "We have a new financial institution buying bonds from us. They will use our Single SignOn Solution. You will need to create these accounts."

She took the sheet of paper, a simple printed table with first name, last name, company name, username and password.

Alexis was recently enough out of college that "Advanced Security Practices 401" was still fresh in her mind-- and seeing a plaintext password made her bones nearly crawl out of her skin.

"I-- um-- are there supposed to be passwords here?"

Benny nodded. "Yes. To facilitate single sign-on, usernames and passwords in 'We Sell Bonds!' website must exactly match those used in the broker's own system. Their company signs up for 'We Sell Bonds!', and they are provided with a website skinned to look like their own. The company's employees are given a seamless experience. Most don't even know they are on a different site."

Her skin gnawed on her bones to keep them in place. "But, if the passwords here are in plaintext, that is their real, actual password?"

Benny gave her the same nod. "They must be. Otherwise we could not log in to test their account."

That either made perfect sense, or had dumbfounded all the sense out of Alexis, so she just said "Ok." The rest of the day was spent creating accounts through an ASP interface, then logging into the site to test them.

When she arrived at the quaint brick-faced office building the next day, there was a large stack of papers at her spot at the communal desk. Benny said, "Mr. Chen was happy with your data entry yesterday."

Mr. Chen, who was seated at the head of the shared desk, didn't look up from his laptop screen. "You are allowed to enter this data too."

"Thank you?" Alexis settled in, and got to work. For every record she entered, a different way of optimizing this system would flitter through her mind. A better entry form, maybe auto-focus the first field? How about an XML dump on a USB disk? Or a SOAP service that could interface directly with the database? There could be a whole validation layer to check the data and pre-emptively catch data errors.

Data errors like the one she was looking at right now. She waited patiently for Benny to complete whatever step of his task he was on, and showed him the offending records.

"I don't see the problem," Benny said, shortly.

"John Smith and Jon Smith both have the same username, jsmith" she said, not sure how to make it more clear.

"Yes, they do," Benny confirmed.

"They can't both have the same username."

"They can!" Mr. Chen's sudden interjection startled her-- though she wasn't sure if it was because of the sharpness of his tone, or because she hadn't actually heard him speak for a day and a half. "Do you not see that they have different passwords?"

"Uh," Alexis checked, "They do. But the username is still the same."

There was no response. Mr. Chen was already looking back at his screen. Benny was looking at her expectantly.

"So users are differentiated by their-- password?" she said, trying to grasp what the implications of that would be. "What if someone changes their password to be-- "

"Users don't change passwords," Benny replied. "That would break single sign-on. If a user changes their password in their home system, their company will submit a change request to us to modify the password on 'We Sell Bonds!'."

Alexis blinked-- this time certain that this made no sense, and she was actually dumbfounded. But Benny must have taken her silence as 'question answered', and immediately started his next task. It made no sense, but she was still a junior developer, fresh out of school; full of ideas but short on experience. Maybe-- yeah, maybe there was a reason to do it this way. One that made sense once she learned to stop thinking so academically. That must be it.

She dutifully entered two records for jsmith and kept working on the pile.

#

Friday. The end of her first real work week. Such a long, long week of data entry, interspersed with being allowed to witness a small piece of the system as Benny worked on his upgrades. At least she knew now which version of SQL Server was in use; and that Benny avoided the braces-versus-no-braces argument by just using vbscript which was "pure and brace-free"; and that stored procedures were allowed because raw SQL was too advanced to trust to human hands.

Alexis stood in front of the quaint brick-faced office building. It was familiar enough now, after even just a week, that she could see the discoloured mortar, and cracked bricks near the foundation, and the smatterings of dirt and debris that came with being on a busy downtown street.

She went into the office, and sat down at the desk. Another stack of papers for her to enter, just like the day before, just like every day this week. Though something was different. In the middle of the table, there was a box of donuts from the local bakery.

"Well, that's nice," she said as she sat down. "Happy Friday?"

Everyone looked up at her at the same time.

"No," Mr. Chen stated, "Friday is not a celebration; please do not detract from Benny's success."

She felt like she wanted to apologize, but she didn't know why. "What's the occasion, Benny?"

"He has completed the upgrade of the database. We celebrate his success."

That seemed reasonable enough. Mr. Chen opened the box. There was an assortment of donuts. Seven donuts. Exactly seven donuts. Not a dozen. Not half a dozen. Seven. Who buys seven donuts?

Mr. Chen selected one, and then the box was passed around. Alexis didn't know much about her coworkers (a fact that, upon thinking about it, was not normal)-- but she did know enough about their positions to recognize the box being passed around in order of seniority. She took the last one, a plain cake donut.

Of course.

"Well," she said, making a silver lining out of a plain donut, "Congratulations, Benny. Cheers to you."

"Thank you," he said, "I was finally able to successfully update the server for the first time last night."

"Nice. When do we roll it out to the live website?"

Benny looked at her blankly. "The website is live."

"Yeah, I know," Alexis said, swallowing the last bit of donut. It landed hard on the weird feeling she had in her stomach. "But, y'know-- you upgraded whatever environment you were experimenting on, right? So now that that's done, are you, like-- going to upgrade the live, production server over the weekend or something-- like, off hours?"

"I have upgraded the live, production server. That is our server. That is where we do all the work."

Alexis became acutely aware that the weird feeling in her stomach was a perfectly normal and natural reaction to thinking about doing work directly on a live, production server that served hundreds of customers handling millions of dollars.

"Oh."

Mr. Chen finished his donut and said, "Benny is a proper, careful worker. There is no need to waste resources on many environments when one can just do the job correctly in the first place. Again, good work, Benny, and now the day begins."

Everyone turned to their laptops, and so did Alexis, reflexively. She started in on the first stack of papers to enter into the database-- the live, production database she was interfacing directly with-- when she heard a sound she'd never heard before.

A phone rang.

The man beside Mr. Chen-- Trevor, she thought his name was, stood up and excused himself to the lobby to answer the phone. He returned after a few moments, and put a piece of paper on top of her pile.

"That request should be queued at the bottom of her pile," Mr. Chen said as soon as Trevor's hand was off the paper.

"I believe this may be a case of priority," Trevor replied. He had a nice voice. Why hadn't she heard her co-worker's voice after a week of working here? "A user cannot log in."

She glanced down at the paper. There was a username and password jotted down. When she looked back up at Mr. Chen, he waved her to proceed. Alexis pulled up "We Sell Bonds!" home page, and tried to log in as "a.sanders"

The logged-in page popped up. "Huh, seems to be working now."

"No," Trevor said, "You should be logged in as Andrew Sanders from Initech Bonds and Holdings, not Andrew Sanders from Fiduciary Interests."

"But I am logged in as a.sanders from Initiech, see?" she brought up the account details to show Trevor.

"No, I tried it myself. I will show you." Trevor took her laptop, repeated the login steps. "There."

"Huh." Alexis stared at the account information for Andrew Sanders from Fiduciary Interests. "Maybe one of us is typing in the wrong password?"

Alexis tried again, and Trevor tried-- and this time got the results reversed. They tried a new browser session, and both got Initech. Then they tried different browsers and Trevor got Initech twice in a row. They copied and pasted usernames and passwords to and from Notepad. No matter what they tried, they couldn't consistently reproduce which Andrew Sanders they got.

As Alexis tried over and over to come up with something or anything to explain it, Benny was frantically running through code, adding Response.Write("<!-- some debugging message -->") anywhere he could think might help.

By this point the whole company was watching them. While that shouldn't be noteworthy since the entire company was in the same room, being paid attention to by this particular group of coworkers was extremely noticeable.

And of all the looks that fell on her, the most disconcerting was Mr. Chen's gaze.

"Determine the cause of this disruption to our website," he said flatly.

"I don't get it," Alexis said, "This doesn't make any sense. We should be able to determine what's causing this bug, but-- um-- hang on."

Determine-- the word tugged at her, begging to be noticed. Or begging her to notice something. Something she'd seen on Benny's screen. A SQL query. It reminded her of a term from one of her Database Management exams. Deterministic. Yes, of course!

"Benny, go back to that query you had on screen!" she exclaimed! "Yes, that one!"

As she pointed at Benny's screen, Mr. Chen was already on his feet, heading over. A perfect chance for her to finally prove her worth as a developer.

"That query, right there, for getting the user record. It's using a view and-- may I?" she took over Benny's laptop, focused on the SQL Management Studio, but excitedly talking aloud as she went.

"Programmability... views... VIEW_ALL_USERS... aha! Check it out."

SELECT TOP 100 PERCENT *
FROM TABLE_USERS
ORDER BY UserCreatedDate

"Which," she clicked back to the query, "Is used here..."

SELECT *
FROM TABLE_USERS
WHERE username=@Username and password=@Password

"... and we only use the first record we return, but I've read about this! Okay, like, the select without an ORDER BY returns in random order-- no no no, NON-DETERMINISTIC order, basically however the optimizer felt like returning them, based on which ones returned faster, or what it had for breakfast that day, but totally non-deterministic. No "ORDER BY" means no order. Or at least it is supposed to, but, like, SQL Server 2000 had this bug in the optimizer, which became this epic 'undocumented feature'. When you did TOP 100 PERCENT along with an ORDER BY in a view, the optimizer just bugged the fudge out and did the sorting wrong, but did the sorting wrong in a deterministic way. In effect, it would obey the ORDER BY inside the view, but only by accident. But, like I said, that was a bug in SQL Server 2000, and Benny, WE JUST UPGRADED TO SQL SERVER 2005!"

She held her hands out, the solution at last. Mr. Chen was standing right there. Okay, perfect-- because what had Logical Thinking and Troubleshooting 302 taught her? Don't just identify a PROBLEM, also present a SOLUTION!

"Okay, look-- I bet if I query for users with this username and this password-- " she typed the query in frantically-- "see, right there, that's Andrew Sanders from Initech AND Andrew Sanders from Fiduciary Interests. They both have the same username and password, so they're both returned. I bet no one ever noticed before. That other guy has no activity on his account. So all we really have to do is put the same ORDER BY into the query itself-- and-- click click, copy paste-- there! Log in and-- there's Mr. Initech. Log out, log in, log out, log in-- I could do this all day and we'll get the same results. Tah-dah!"

She sat back in her chair, grinning at her captive audience. But they weren't grinning back. Instead they were averting their gaze. Everyone-- except for Mr. Chen. There was no doubt he was staring right at her. Glaring.

"Undo that immediately," he said, in an extremely calm voice that did not match his reddening face.

"I, uh-- okay?" she reached for the keyboard.

"BENNY!" Mr. Chen rebuked, "Benny, you undo those changes."

Benny snatched the laptop away, and with a barrage of CTRL-Z wiped away her change.

"But-- that fixes the bug-- "

"No," Mr. Chen stated, "The CORRECT fix is to delete the second record, and inform Fiduciary Interest that their Andrew Sanders may not have access to this system until he changes his password to something unique. Then there is no code change needed."

"But but-- " Alexis stumbled, "It's a documented Microsoft bug, and if-- "

"The code of 'We Sell Bonds!' is properly and carefully written. We do not change it to conform to someone else's mistakes. This complex code change you unilaterally impose is unknown, untested, unreliable and utterly unacceptable. You would determine the course of a financial business based on an outrageous outside case?"

"But, it's happening and causing a problem now and-- "

Benny pointed at his screen, where he'd entered a query with a GROUP BY and HAVING. "Only eight usernames are duplicated like that."

"Vanishingly small," Mr. Chen said, "Benny, print out those users, and then delete them. Nadine, contact those companies and inform them those users will not have access to the website until their information is corrected. With that solved, we can all resume work."

Everyone at the company returned to their tasks. Alexis stared at her screen for a moment, at the ASP management screen that waited for her data to be entered. It didn't implement any change. It didn't introduce any progress. It was just an ASP form for data entry. And that was her job.

She entered her data.

At lunch, when everyone in the company got up to take their break, Mr. Chen motioned for her to sit back down. After the rest of the company filed out, he spoke.

"Alexis, although your ability to interface with the system is adequate, I am afraid your inability to focus on your task is not. I require a worker who is careful and proper, and you are not. Thank you for your time. You will be paid for the remainder of the day, but may go home now. I will see you out."

Alexis erred on the better side of valor, and did not shout in his face that he can't fire her, because she was quitting.

Mr. Chen ushered her out the front door, and locked it behind her.

Alexis stood on the busy sidewalk, the lunchtime crowd pushing and shoving their way past her. She looked back on the quaint, brick-faced office building. On the surface, it had been exactly what she'd wanted from her first programming job. She only got one "first" job, and it had ended up being-- that.

She wallowed for a moment, and then pulled herself back together. No. Data entry did not a programming job make. Her real first programming job was still ahead of her, somewhere. And next time, when she thought she'd found it, she would first look-- properly and carefully-- past the quaint surface to what lay beneath.


Planet Linux AustraliaCraige McWhirter: Querying Installed Package Versions Across An Openstack Cloud

AKA: The Joy of juju run

Package upgrades across an OpenStack cloud do not always happen at the same time. In most cases they may happen within an hour or so across your cloud, but for a variety of reasons, some upgrades may be applied inconsistently, delayed or blocked on some servers.

As these packages may be rolling out a much needed patch or perhaps carrying a bug, you may wish to know which services are impacted in fairly short order.

If your OpenStack cloud is running Ubuntu and managed by Juju and MAAS, here's where juju run can come to the rescue.

For example, perhaps there's an update to the Corosync library libcpg4 and you wish to know which of your HA clusters have what version installed.

From your Juju controller, create a list of servers managed by Juju:

Juju 1.x:

$ juju stat --format tabular > jsft.out

Now you could fashion a query like this, utilising juju run:

$ for i in $(egrep -o '[a-z]+-hacluster/[0-9]+' jsft.out | cut -d/ -f1 | sort -u);
do juju run --timeout 30s --service $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.3-1ubuntu4 == ceilometer-hacluster/1
2.3.3-1ubuntu4 == ceilometer-hacluster/0
2.3.3-1ubuntu4 == ceilometer-hacluster/2
2.3.3-1ubuntu4 == cinder-hacluster/0
2.3.3-1ubuntu4 == cinder-hacluster/1
2.3.3-1ubuntu4 == cinder-hacluster/2
2.3.3-1ubuntu4 == glance-hacluster/3
2.3.3-1ubuntu4 == glance-hacluster/4
2.3.3-1ubuntu4 == glance-hacluster/5
2.3.3-1ubuntu4 == keystone-hacluster/1
2.3.3-1ubuntu4 == keystone-hacluster/0
2.3.3-1ubuntu4 == keystone-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/1
2.3.3-1ubuntu4 == mysql-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/1
2.3.3-1ubuntu4 == ncc-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/1
2.3.3-1ubuntu4 == neutron-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/1
2.3.3-1ubuntu4 == osd-hacluster/2
2.3.3-1ubuntu4 == swift-hacluster/1
2.3.3-1ubuntu4 == swift-hacluster/0
2.3.3-1ubuntu4 == swift-hacluster/2

Juju 2.x:

$ juju status > jsft.out

Now you could fashion a query like this:

$ for i in $(egrep -o 'hacluster-[a-z]+/[0-9]+' jsft.out | cut -d/ -f1 |sort -u);
do juju run --timeout 30s --application $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.5-3ubuntu2 == hacluster-ceilometer/1
2.3.5-3ubuntu2 == hacluster-ceilometer/0
2.3.5-3ubuntu2 == hacluster-ceilometer/2
2.3.5-3ubuntu2 == hacluster-cinder/1
2.3.5-3ubuntu2 == hacluster-cinder/0
2.3.5-3ubuntu2 == hacluster-cinder/2
2.3.5-3ubuntu2 == hacluster-glance/0
2.3.5-3ubuntu2 == hacluster-glance/1
2.3.5-3ubuntu2 == hacluster-glance/2
2.3.5-3ubuntu2 == hacluster-heat/0
2.3.5-3ubuntu2 == hacluster-heat/1
2.3.5-3ubuntu2 == hacluster-heat/2
2.3.5-3ubuntu2 == hacluster-horizon/0
2.3.5-3ubuntu2 == hacluster-horizon/1
2.3.5-3ubuntu2 == hacluster-horizon/2
2.3.5-3ubuntu2 == hacluster-keystone/0
2.3.5-3ubuntu2 == hacluster-keystone/1
2.3.5-3ubuntu2 == hacluster-keystone/2
2.3.5-3ubuntu2 == hacluster-mysql/0
2.3.5-3ubuntu2 == hacluster-mysql/1
2.3.5-3ubuntu2 == hacluster-mysql/2
2.3.5-3ubuntu2 == hacluster-neutron/0
2.3.5-3ubuntu2 == hacluster-neutron/2
2.3.5-3ubuntu2 == hacluster-neutron/1
2.3.5-3ubuntu2 == hacluster-nova/1
2.3.5-3ubuntu2 == hacluster-nova/2
2.3.5-3ubuntu2 == hacluster-nova/0

You can of course substitute libcpg4 in the above query for any package that you need to check.
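
For a one-off check against a single application, you don't even need the loop. A minimal sketch (assuming Juju 2.x and the application name hacluster-keystone from the output above) would be the following; it prints the raw YAML, which you can feed through the same Python snippet if you want it condensed:

$ juju run --timeout 30s --application hacluster-keystone \
    "dpkg-query -W -f='\${Package} \${Version}\n' libcpg4"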

Far and away my favourite feature of Juju at present, juju run reminds me of knife ssh, which is unsurprisingly one of my favourite features of Chef.

Sociological ImagesSelling the Sport Spectacle

That large (and largely trademarked) sporting event is this weekend. In honor of its reputation for massive advertising, Lisa Wade tipped me off about this interesting content analysis of last year’s event by the Media Education Foundation.

MEF watched last year’s big game and tallied just how much time was devoted to playing and how much was devoted to ads and other branded content during the game. According to their data, the ball was only in play “for a mere 18 minutes and 43 seconds, or roughly 8% of the entire broadcast.”

MEF used a pie chart to illustrate their findings, but readers can get better information from comparing different heights instead of different angles. Using their data, I quickly made this chart to more easily compare branded and non-branded content.

Data Source: Media Education Foundation, 2018

One surprising thing that jumps out of this data is that, for all the hubbub about commercials, far and away the most time is devoted to replays, shots of the crowd, and shots of the field without the ball in play. We know “the big game” is a big sell, but it is interesting to see how the thing it sells the most is the spectacle of the event itself.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianPaul Wise: FLOSS Activities January 2018

Changes

Issues

Review

Administration

  • Debian: try to regain OOB access to a host, try to connect with a hoster, restart bacula after db restart, provide some details to a hoster, add debsnap to snapshot host, debug external email issue, redirect users to support channels
  • Debian mentors: redirect to sponsors, teach someone about dput .upload files, check why a package disappeared
  • Debian wiki: unblacklist IP address, whitelist email addresses, whitelist email domain, investigate DocBook output crash

Communication

  • Initiate discussion about ingestion of more security issue feeds
  • Invite LinuxCNC to the Debian derivatives census

Sponsors

I renewed my support of Software Freedom Conservancy.

The Discord related uploads (harmony, librecaptcha, purple-discord) and the Debian fakeupstream change were sponsored by my employer. All other work was done on a volunteer basis.

,

Planet DebianChris Lamb: Free software activities in January 2018

Here is my monthly update covering what I have been doing in the free software world in January 2018 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:



I also made the following changes to our tooling:

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • New features:
    • Compare JSON files using the jsondiff module. (#888112)
    • Report differences in extended file attributes when comparing files. (#888401)
    • Show extended filesystem metadata when directly comparing two files not just when we specify two directories. (#888402)
    • Do some fuzzy parsing to detect JSON files not named .json. [...]
  • Bug fixes:
    • Return unknown if we can't parse the readelf version number for (eg.) FreeBSD. (#886963)
    • If the LLVM disassembler does not work, try the internal one. (#886736)
  • Misc:
    • Explicitly depend on e2fsprogs. (#887180)
    • Clarify Unidentified file log message as we did try and lookup via the comparators first. [...]

I also fixed an issue in the "trydiffoscope" command-line client that was preventing installation on non-Debian systems (#888882).


disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.

  • Correct "explicitly" typo in disorderfs.1.txt. [...]
  • Bump Standards-Version to 4.1.3. [...]
  • Drop trailing whitespace in debian/control. [...]


Debian

My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

In addition to this, I:

  • Published whydoesaptnotusehttps.com, an overview of why APT does not rely solely on SSL for validation of downloaded packages as I noticed it was being asked a lot on support forums.
  • Reported a number of issues for the mentors.debian.net review service.

Patches contributed

  • dput: Suggest --force if package has already been uploaded. (#886829)
  • linux: Add link to the Firmware page on the wiki to failed to load log messages. (#888405)
  • markdown: Make markdown exit with a non-zero exit code if cannot open input file. (#886032)
  • spectre-meltdown-checker: Return a sensible exit code. (#887077)

Debian LTS


This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:

  • Initial draft of a script to automatically detect when CVEs should be assigned to multiple source packages in the case of legacy renames, duplicates or embedded code copies.
  • Issued DLA 1228-1 for the poppler PDF library to fix an overflow vulnerability.
  • Issued DLA 1229-1 for imagemagick correcting two potential denial-of-service attacks.
  • Issued DLA 1233-1 for gifsicle — a command-line tool for manipulating GIF images — to fix a use-after-free vulnerability.
  • Issued DLA 1234-1 to fix multiple integer overflows in the GTK gdk-pixbuf graphics library.
  • Issued DLA 1247-1 for rsync, fixing a command-injection vulnerability.
  • Issued DLA 1248-1 for libgd2 to prevent a potential infinite loop caused by signedness confusion.
  • Issued DLA 1249-1 for smarty3 fixing an arbitrary code execution vulnerability.
  • "Frontdesk" duties, triaging CVEs, etc.

Uploads

  • adminer (4.5.0-1) — New upstream release.
  • bfs (1.2-1) — New upstream release.
  • dbus-cpp (5.0.0+18.04.20171031-1) — Initial upload to Debian.
  • installation-birthday (7) — Add e2fsprogs to Depends so it can drop Essential: yes. (#887275)
  • process-cpp:
    • 3.0.1-1 — Initial upload to Debian.
    • 3.0.1-2 — Fix FTBFS due to symbol versioning.
  • python-django (1:1.11.9-1 & 2:2.0.1-1) — New upstream releases.
  • python-gflags (1.5.1-4) — Always use SOURCE_DATE_EPOCH from the environment.
  • redis:
    • 5:4.0.6-3 — Use --clients argument to runtest to force single-threaded operation over using taskset.
    • 5:4.0.6-4 — Re-add procps to Build-Depends. (#887075)
    • 5:4.0.6-5 — Fix a dangling symlink (and thus a broken package). (#884321)
    • 5:4.0.7-1 — New upstream release.
  • redisearch (1.0.3-1, 1.0.4-1 & 1.0.5-1) — New upstream releases.
  • trydiffoscope (67.0.0) — New upstream release.

I also sponsored the following uploads:


Debian bugs filed

  • gdebi: Invalid gnome-mime-application-x-deb icon in AppStream metadata. (#887056)
  • git-buildpackage: Please make gbp clone not quieten the output by default. (#886992)
  • git-buildpackage: Please word-wrap generated changelog lines. (#887055)
  • isort: Don't install test_isort.py to global Python namespace. (#887816)
  • restrictedpython: Please add Homepage. (#888759)
  • xcal: Missing patches due to 00List != 00list. (#888542)

I also filed 4 bugs against packages with patches missing due to incomplete quilt conversions: cernlib, geant321, mclibs & paw.


RC bugs

  • gnome-shell-extension-tilix-shortcut: Invalid date in debian/changelog. (#886950)
  • python-qrencode: Missing PIL dependencies due to use of Python 2 substvars in Python 3 package. (#887811)


I also filed 7 FTBFS bugs against lintian, netsniff-ng, node-coveralls, node-macaddress, node-timed-out, python-pyocr & sleepyhead.


FTP Team


As a Debian FTP assistant I ACCEPTed 173 packages: appmenu-gtk-module, atlas-cpp, canid, check-manifest, cider, citation-style-language-locales, citation-style-language-styles, cloudkitty, coreapi, coreschema, cypari2, dablin, dconf, debian-dad, deepin-icon-theme, dh-dlang, django-js-reverse, flask-security, fpylll, gcc-8, gcc-8-cross, gdbm, gitlint, gnome-tweaks, gnupg-pkcs11-scd, gnustep-back, golang-github-juju-ansiterm, golang-github-juju-httprequest, golang-github-juju-schema, golang-github-juju-testing, golang-github-juju-webbrowser, golang-github-posener-complete, golang-gopkg-juju-environschema.v1, golang-gopkg-macaroon-bakery.v2, golang-gopkg-macaroon.v2, harmony, hellfire, hoel, iem-plugin-suite, ignore-me, itypes, json-tricks, jstimezonedetect.js, libcdio, libfuture-asyncawait-perl, libgig, libjs-cssrelpreload, liblxi, libmail-box-imap4-perl, libmail-box-pop3-perl, libmail-message-perl, libmatekbd, libmoosex-traitfor-meta-class-betteranonclassnames-perl, libmoosex-util-perl, libpath-iter-perl, libplacebo, librecaptcha, libsyntax-keyword-try-perl, libt3highlight, libt3key, libt3widget, libtree-r-perl, liburcu, linux, mali-midgard-driver, mate-panel, memleax, movit, mpfr4, mstch, multitime, mwclient, network-manager-fortisslvpn, node-babel-preset-airbnb, node-babel-preset-env, node-boxen, node-browserslist, node-caniuse-lite, node-cli-boxes, node-clone-deep, node-d3-axis, node-d3-brush, node-d3-dsv, node-d3-force, node-d3-hierarchy, node-d3-request, node-d3-scale, node-d3-transition, node-d3-zoom, node-fbjs, node-fetch, node-grunt-webpack, node-gulp-flatten, node-gulp-rename, node-handlebars, node-ip, node-is-npm, node-isomorphic-fetch, node-js-beautify, node-js-cookie, node-jschardet, node-json-buffer, node-json3, node-latest-version, node-npm-bundled, node-plugin-error, node-postcss, node-postcss-value-parser, node-preact, node-prop-types, node-qw, node-sellside-emitter, node-stream-to-observable, node-strict-uri-encode, node-vue-template-compiler, ntl, olivetti-mode, org-mode-doc, otb, othman, papirus-icon-theme, pgq-node, php7.2, piu-piu, prometheus-sql-exporter, py-radix, pyparted, pytest-salt, pytest-tempdir, python-backports.tempfile, python-backports.weakref, python-certbot, python-certbot-apache, python-certbot-nginx, python-cloudkittyclient, python-josepy, python-jsondiff, python-magic, python-nose-random, python-pygerrit2, python-static3, r-cran-broom, r-cran-cli, r-cran-dbplyr, r-cran-devtools, r-cran-dt, r-cran-ggvis, r-cran-git2r, r-cran-pillar, r-cran-plotly, r-cran-psych, r-cran-rhandsontable, r-cran-rlist, r-cran-shinydashboard, r-cran-utf8, r-cran-whisker, r-cran-wordcloud, recoll, restrictedpython, rkt, rtklib, ruby-handlebars-assets, sasmodels, spectre-meltdown-checker, sphinx-gallery, stepic, tilde, togl, ums2net, vala-panel, vprerex, wafw00f & wireguard.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against: fpylll, gnome-tweaks, org-mode-doc & py-radix.

CryptogramIsraeli Scientists Accidentally Reveal Classified Information

According to this story (non-paywall English version here), Israeli scientists released some information to the public they shouldn't have.

Defense establishment officials are now trying to erase any trace of the secret information from the web, but they have run into difficulties because the information was copied and is found on a number of platforms.

Those officials have managed to ensure that the Haaretz article doesn't have any actual information about the information. I have reason to believe the information is related to Internet security. Does anyone know more?

Planet DebianWouter Verhelst: Day three of the pre-FOSDEM Debconf Videoteam sprint

This should really have been the "day two" post, but I forgot to do that yesterday, and now it's the end of day three already, so let's just do the two together for now.

Kyle

Has been hacking on the opsis so we can get audio through it, but so far without much success. In addition, he's been working a bit more on documentation, as well as splitting up some data that's currently in our ansible repository into a separate one so that other people can use our ansible configuration more easily, without having to fork too much.

Tzafrir

Did some tests on the ansible setup, and did some documentation work, and worked on a kodi plugin for parsing the metadata that we've generated.

Stefano

Did some work on the DebConf website. This wasn't meant to be much, but yak shaving sucks. Additionally, he's been doing some work on the youtube uploader as well.

Nattie

Did more work reviewing our documentation, and has been working on rewording some of the more awkward bits.

Wouter

Spent much time on improving the SReview installation for FOSDEM. While at it, fixed a number of bugs in some of the newer code that were exposed by full tests of the FOSDEM installation. Additionally, added code to SReview to generate metadata files that can be handed to Stefano's youtube uploader.

Pollo

Although he had less time yesterday than he did on Monday (and apparently no time today) to sprint remotely, Pollo still managed to add a basic CI infrastructure to lint our ansible playbooks.

Planet DebianAbhijith PA: Swatantra17

It's very late, but here it goes..

Swatantra

Last month Thiruvananthapuram witnessed one of the biggest Free and Open Source Software conferences, Swatantra17. Swatantra is ICFOSS's flagship FOSS conference, traditionally held every three years (though from now on the organizers have decided to hold it every two years). This year there were more than 30 speakers from all around the world. The event was held 20-21 December at the Mascot hotel, Thiruvananthapuram. I was one of the community volunteers for the event and was excited from the day it was announced :) .

Pinarayi Vijayan

Kerala's current Chief Minister, Pinarayi Vijayan, inaugurated Swatantra17. The first day's session started with a keynote from Software Freedom Conservancy executive director Karen Sandler. Karen talked about the safety of medical devices, like defibrillators, that run proprietary software. After that there were many parallel talks about various free software projects, technologies and tools. This edition of Swatantra focused more on art, and it was good to learn more about artists' free software stacks. The most amazing thing is that throughout the conference I met so many people from FSCI whom I had only known through Matrix/IRC/email.

Karen Sandler

The first day's talks ended at 6PM. After that, the Oorali band performed for us. This band is well-known in Kerala because they speak up on many social and political issues, which makes them a great match for a free software conference's cultural program :). Their songs are mainly about birds, forests and freedom, and we danced to many of them.

Oorali

On the evening of the last day there was a kind of BoF with Benjamin Mako Hill of the FSF. Halfway through I came to know that he is also a Debian Developer :D. Unfortunately this BoF stopped when he was called away for a panel discussion. After the panel discussion all of us Debian people gathered and had a chat.

Panel Discussion

Planet DebianDaniel Leidert: Migrating the debichem group subversion repository to Git - Part 1: svn-all-fast-export basics

With the deprecation of alioth.debian.org the subversion service hosted there will be shut down too. According to lintian the estimated date is May 1st 2018 and there are currently more than 1500 source packages affected. In the debichem group we've used the subversion service since 2006. Our repository contains around 7500 commits done by around 20 different alioth user accounts and the packaging history of around 70 to 80 packages, including packaging attempts. I've spent the last few days preparing the Git migration, comparing different tools, checking the created repositories and testing possibilities to automate the process as much as possible. The resulting scripts can currently be found here.

Of course I began as described at the Debian Wiki. But following this guide, using git-svn and converting the tags with the script supplied under the rubric Convert remote tags and branches to local ones, gave me really weird results: the tags were pointing to the wrong commit IDs. I thought that git-svn was to blame and reported this as bug #887881. In the following mail exchange Andreas Kaesorg explained to me that the issue is caused by so-called mixed-revision tags in our repository, as shown in the following example:


$ svn log -v -r7405
------------------------------------------------------------------------
r7405 | dleidert | 2018-01-17 18:14:57 +0100 (Mi, 17. Jan 2018) | 1 Zeile
Geänderte Pfade:
A /tags/shelxle/1.0.888-1 (von /unstable/shelxle:7396)
R /tags/shelxle/1.0.888-1/debian/changelog (von /unstable/shelxle/debian/changelog:7404)
R /tags/shelxle/1.0.888-1/debian/control (von /unstable/shelxle/debian/control:7403)
D /tags/shelxle/1.0.888-1/debian/patches/qt5.patch
R /tags/shelxle/1.0.888-1/debian/patches/series (von /unstable/shelxle/debian/patches/series:7402)
R /tags/shelxle/1.0.888-1/debian/rules (von /unstable/shelxle/debian/rules:7403)

[svn-buildpackage] Tagging shelxle 1.0.888-1
------------------------------------------------------------------------

Looking into the git log, the tags determined by git-svn are really not in their right place in the history line, even before running the script to convert the branches into real Git tags. So IMHO git-svn is not able to cope with this kind of situation. Because it also cannot handle our branch model, where we use /branch/package/, I began to look for different tools and found svn-all-fast-export, a tool created (by KDE?) to convert even large subversion repositories based on a ruleset. My attempt at using this tool was so successful (not to speak of how fast it is) that I want to describe it in more detail. Maybe it will prove useful for others as well, and it won't hurt to give some more information about this poorly documented tool :)

Step 1: Setting up a local subversion mirror

First I suggest setting up a local copy of the subversion repository to migrate, that is kept in sync with the remote repository. This can be achieved using the svnsync command. There are several howtos for this, so I won't describe this step here. Please check out this guide. In my case I have such a copy in /srv/svn/debichem.
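
For completeness, here is a minimal sketch of such a mirror setup with svnsync; the SVN_URL placeholder stands for the remote repository's URL (it is not the actual debichem URL) and the local path matches the one used throughout this article:

# create the local mirror repository
svnadmin create /srv/svn/debichem
# svnsync needs to set revision properties, so the mirror must allow that via a hook
cat > /srv/svn/debichem/hooks/pre-revprop-change <<'EOF'
#!/bin/sh
exit 0
EOF
chmod +x /srv/svn/debichem/hooks/pre-revprop-change
# initialise the mirror and pull in all revisions; re-run the sync to stay up to date
svnsync init file:///srv/svn/debichem SVN_URL
svnsync sync file:///srv/svn/debichem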

Step 2: Creating the identity map

svn-all-fast-export needs at least two files to work. One is the so-called identity map. This file contains the mapping between subversion user IDs (login names) and the (Git) committer info, like real name and mail address. The format is the same as that used by git-svn:

loginname = author name <mail address>

e.g.

dleidert = Daniel Leidert <dleidert@debian.org>

The list of subversion user IDs can be obtained the same way as described in the Wiki:

svn log SVN_URL | awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u

Just replace the placeholder SVN_URL with your subversion URL. Here is the complete file for the debichem group.
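
If you want a head start on that file, a small sketch like the following (run against the local mirror from step 1) pre-fills one FIXME entry per login, which you then complete by hand:

svn log file:///srv/svn/debichem | \
  awk -F'|' '/^r[0-9]+/ { print $2 }' | sort -u | \
  sed 's/^ *//;s/ *$//;s/$/ = FIXME <FIXME@example.org>/' > ../authors.txt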

Step 3: Creating the rules

The most important thing is the second file, which contains the processing rules. There is really not much documentation out there, so when in doubt, one has to read the source file src/ruleparser.cpp. I'll describe what I have already found out. If you are impatient, here is my result so far.

The basic rules are:


create repository REPOSITORY
...
end repository

and


match PATTERN
...
end match

The first rule creates a bare git repository with the name you've chosen (above represented by REPOSITORY). It can have one child element, the repository description, which is put into the repository's description file. There are AFAIK no other elements allowed here. So in the case of e.g. ShelXle the rule might look like this:


create repository shelxle
description packaging of ShelXle, a graphical user interface for SHELXL
end repository

You'll have to create every repository before you can put something into it, else svn-all-fast-export will exit with an error. JFTR: it won't complain if you create a repository but don't put anything into it; you will just end up with an empty Git repository.
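
Because every repository has to exist before the matching rules can push anything into it, a small helper sketch like this can print one stanza per top-level package directory of the local mirror, ready to be pasted into the rules file (assuming one package directory per top level, as in the standard layout described in the next paragraph, and the mirror path used further below):

for p in $(svn ls file:///srv/svn/mymirror/); do
    printf 'create repository %s\nend repository\n\n' "${p%/}"
done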

Now the second type of rule is the most important one. Based on regular expression match patterns (above represented by PATTERN), one can define actions, including the possibility to limit these actions to repositories, branches and revisions. The patterns are applied in their order of appearance. Thus if a matching pattern is found, other patterns that also match but appear later in the rules file won't apply! So a special rule should always be put above a general rule. The patterns that can be used seem to be of type QRegExp and look like basic Perl regular expressions, including e.g. capturing, backreferences and lookahead capabilities. For a multi-package subversion repository with a standard layout (that is /PACKAGE/{trunk,tags,branches}/), clean naming and a clean subversion history, the rules could be:


match /([^/]+)/trunk/
repository \1
branch master
end match

match /([^/]+)/tags/([^/]+)/
repository \1
branch refs/tags/debian/\2
annotated true
end match

match /([^/]+)/branches/([^/]+)/
repository \1
branch \2
end match

The first rule captures the (source) package name from the path and puts it into the backreference \1. It applies to the trunk directory history and will put everything it finds there into the repository named after the directory - here we simply use the backreference \1 as that name - and there into the master branch. Note that svn-all-fast-export will error out if it tries to access a repository which has not been created, so make sure all repositories are created as shown with the create repository rule. The second rule captures the (source) package name from the path too and puts it into the backreference \1. But in backreference \2 it further captures (and applies to) all the tag directories under the /tags/ directory. Usually these have a Debian package version as their name. With the branch statement as shown in this rule, the tags, which are really just branches in subversion, are automatically converted to annotated Git tags (another advantage of svn-all-fast-export over git-svn). Without enabling the annotated statement, the tags created will be lightweight tags. So the tag name (here: debian/VERSION) is determined via backreference \2. The third rule is almost the same, except that everything found in the matching path will be pushed into a Git branch named after the top-level directory captured from the subversion path.

Now in an ideal world this might be enough, and the actual conversion can be done. The command should only be executed in an empty directory. I'll assume that the identity map is called authors.txt, that the rules file is called debichem.rules, and that both are in the parent directory. I'll also assume that the local subversion mirror of the packaging repository is at /srv/svn/mymirror. So ...

svn-all-fast-export --stats --identity-map=../authors.txt --rules=../debichem.rules /srv/svn/mymirror

... will create one or more bare Git repositories (depending on your rules file) in the current directory. After the command succeeded, you can test the results ...


git -C REPOSITORY/ --bare show-ref
git -C REPOSITORY/ --bare log --all --graph

... and you will find your repository's description (if you added one to the rules file) in REPOSITORY/description:

cat REPOSITORY/description

Please note that not all Debian version strings are well-formed Git reference names, and those therefore need fixing. There might also be gaps shown in the Git history log. Or maybe the command didn't even succeed, or complained (without you noticing it), or you ended up with an empty repository although the matching rules applied. I encountered all of these issues and I'll describe the causes and fixes in the next blog article.

But if everything went well (you have no history gaps, the tags are in their right place within the linearized history and the repository looks fine) and you can and want to proceed, you might want to skip to the next step.

In the debichem group we used a different layout. The packaging directories were under /{unstable,experimental,wheezy,lenny,non-free}/PACKAGE/. This translates to /unstable/PACKAGE/ and /non-free/PACKAGE/ being the trunk directories and the others being the branches. The tags are in /tags/PACKAGE/. And packages that are yet to be uploaded are located in /wnpp/PACKAGE/. With this layout, the basic rules are:


# trunk handling
# e.g. /unstable/espresso/
# e.g. /non-free/molden/
match /(?:unstable|non-free)/([^/]+)/
repository \1
branch master
end match

# handling wnpp
# e.g. /wnpp/osra/
match /(wnpp)/([^/]+)/
repository \2
branch \1
end match

# branch handling
# e.g. /wheezy/espresso/
match /(lenny|wheezy|experimental)/([^/]+)/
repository \2
branch \1
end match

# tags handling
# e.g. /tags/espresso/VERSION/
match /tags/([^/]+)/([^/]+)/
repository \1
annotated true
branch refs/tags/debian/\2
substitute branch s/~/_/
substitute branch s/:/_/
end match

In the first rule, there is a non-capturing expression (?: ... ), which simply means that the rule applies to /unstable/ and /non-free/. Thus the backreference \1 refers to the second part of the path, the package directory name. The contents found are pushed to the master branch. In the second rule, the contents from the wnpp directory are not pushed to master, but instead to a branch called wnpp. This was necessary because of overlaps between the /unstable/ and /wnpp/ history and already shows that the repository's history makes things complicated. In the third rule, the first backreference \1 determines the branch (note the capturing expression, in contrast to the first rule) and the second backreference \2 the package repository to act on. The last rule is similar, but now \1 determines the package repository and \2 the tag name (the Debian package version) based on the matching path. The example also shows another issue, which I'd like to explain more in the next article: some characters we use in Debian package versions, e.g. the tilde sign and the colon, are not allowed within Git tag names and must therefore be substituted, which is done by the substitute branch EXPRESSION instructions.

Step 4: Cleaning the bare repository

The tool documentation suggests running ...

git -C REPOSITORY/ repack -a -d -f

... before you upload this bare repository to another location. But Stuart Prescott told me on the debichem list that this might not be enough and might still leave some garbage behind. I'm not experienced enough to judge here, but his suggestion is to clone the repository, either as a bare clone, or by cloning it and then initializing a new bare repository. I used the first approach:


git clone --bare REPOSITORY/ REPOSITORY.git
git -C REPOSITORY.git/ repack -a -d -f

Please note that this won't copy the repository's description file. You'll have to copy it manually if you want to keep it. The resulting bare repository can be uploaded (e.g. to git.debian.org as a personal repository):


cp REPOSITORY/description REPOSITORY.git/description
touch REPOSITORY.git/git-daemon-export-ok
rsync -avz REPOSITORY.git git.debian.org:~/public_git/

Or you clone the repository, add a remote origin and push everything there. It is even possible to use the GitLab API at salsa.debian.org to create a project and push there. I'll save the latter for another post. If you are in a hurry, you'll find a script here.

CryptogramAfter Section 702 Reauthorization

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We've just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and domestic mass surveillance became effectively a permanent part of US law.

Section 702 was initially passed in 2008, as an amendment to the Foreign Intelligence Surveillance Act of 1978. As the title of that law says, it was billed as a way for the NSA to spy on non-Americans located outside the United States. It was supposed to be an efficiency and cost-saving measure: the NSA was already permitted to tap communications cables located outside the country, and it was already permitted to tap communications cables from one foreign country to another that passed through the United States. Section 702 allowed it to tap those cables from inside the United States, where it was easier. It also allowed the NSA to request surveillance data directly from Internet companies under a program called PRISM.

The problem is that this authority also gave the NSA the ability to collect foreign communications and data in a way that inherently and intentionally also swept up Americans' communications as well, without a warrant. Other law enforcement agencies are allowed to ask the NSA to search those communications, give their contents to the FBI and other agencies and then lie about their origins in court.

In 1978, after Watergate had revealed the Nixon administration's abuses of power, we erected a wall between intelligence and law enforcement that prevented precisely this kind of sharing of surveillance data under any authority less restrictive than the Fourth Amendment. Weakening that wall is incredibly dangerous, and the NSA should never have been given this authority in the first place.

Arguably, it never was. The NSA had been doing this type of surveillance illegally for years, something that was first made public in 2006. Section 702 was secretly used as a way to paper over that illegal collection, but nothing in the text of the later amendment gives the NSA this authority. We didn't know that the NSA was using this law as the statutory basis for this surveillance until Edward Snowden showed us in 2013.

Civil libertarians have been battling this law in both Congress and the courts ever since it was proposed, and the NSA's domestic surveillance activities even longer. What this most recent vote tells me is that we've lost that fight.

Section 702 was passed under George W. Bush in 2008, reauthorized under Barack Obama in 2012, and now reauthorized again under Trump. In all three cases, congressional support was bipartisan. It has survived multiple lawsuits by the Electronic Frontier Foundation, the ACLU, and others. It has survived the revelations by Snowden that it was being used far more extensively than Congress or the public believed, and numerous public reports of violations of the law. It has even survived Trump's belief that he was being personally spied on by the intelligence community, as well as any congressional fears that Trump could abuse the authority in the coming years. And though this extension lasts only six years, it's inconceivable to me that it will ever be repealed at this point.

So what do we do? If we can't fight this particular statutory authority, where's the new front on surveillance? There are, it turns out, reasonable modifications that target surveillance more generally, and not in terms of any particular statutory authority. We need to look at US surveillance law more generally.

First, we need to strengthen the minimization procedures to limit incidental collection. Since the Internet was developed, all the world's communications travel around in a single global network. It's impossible to collect only foreign communications, because they're invariably mixed in with domestic communications. This is called "incidental" collection, but that's a misleading name. It's collected knowingly, and searched regularly. The intelligence community needs much stronger restrictions on which American communications channels it can access without a court order, and rules that require they delete the data if they inadvertently collect it. More importantly, "collection" should be defined as the point at which the NSA takes a copy of the communications, and not later when they search their databases.

Second, we need to limit how other law enforcement agencies can use incidentally collected information. Today, those agencies can query a database of incidental collection on Americans. The NSA can legally pass information to those other agencies. This has to stop. Data collected by the NSA under its foreign surveillance authority should not be used as a vehicle for domestic surveillance.

The most recent reauthorization modified this lightly, forcing the FBI to obtain a court order when querying the 702 data for a criminal investigation. There are still exceptions and loopholes, though.

Third, we need to end what's called "parallel construction." Today, when a law enforcement agency uses evidence found in this NSA database to arrest someone, it doesn't have to disclose that fact in court. It can reconstruct the evidence in some other manner once it knows about it, and then pretend it learned of it that way. This right to lie to the judge and the defense is corrosive to liberty, and it must end.

Pressure to reform the NSA will probably first come from Europe. Already, European Union courts have pointed to warrantless NSA surveillance as a reason to keep Europeans' data out of US hands. Right now, there is a fragile agreement between the EU and the United States -- called "Privacy Shield" -- that requires Americans to maintain certain safeguards for international data flows. NSA surveillance goes against that, and it's only a matter of time before EU courts start ruling this way. That'll have significant effects on both government and corporate surveillance of Europeans and, by extension, the entire world.

Further pressure will come from the increased surveillance coming from the Internet of Things. When your home, car, and body are awash in sensors, privacy from both governments and corporations will become increasingly important. Sooner or later, society will reach a tipping point where it's all too much. When that happens, we're going to see significant pushback against surveillance of all kinds. That's when we'll get new laws that revise all government authorities in this area: a clean sweep for a new world, one with new norms and new fears.

It's possible that a federal court will rule on Section 702. Although there have been many lawsuits challenging the legality of what the NSA is doing and the constitutionality of the 702 program, no court has ever ruled on those questions. The Bush and Obama administrations successfully argued that defendants don't have legal standing to sue. That is, they have no right to sue because they don't know they're being targeted. If any of the lawsuits can get past that, things might change dramatically.

Meanwhile, much of this is the responsibility of the tech sector. This problem exists primarily because Internet companies collect and retain so much personal data and allow it to be sent across the network with minimal security. Since the government has abdicated its responsibility to protect our privacy and security, these companies need to step up: Minimize data collection. Don't save data longer than absolutely necessary. Encrypt what has to be saved. Well-designed Internet services will safeguard users, regardless of government surveillance authority.

For the rest of us concerned about this, it's important not to give up hope. Everything we do to keep the issue in the public eye -- and not just when the authority comes up for reauthorization again in 2024 -- hastens the day when we will reaffirm our rights to privacy in the digital age.

This essay previously appeared in the Washington Post.

Planet DebianJohn Goerzen: An old DOS BBS in a Docker container

Awhile back, I wrote about my Debian Docker base images. I decided to extend this concept a bit further: to running DOS applications in Docker.

But first, a screenshot:

It turns out this is possible, but difficult. I went through all three major DOS emulators available (dosbox, qemu, and dosemu). I got them all running inside the Docker container, but had a number of, er, fun issues to resolve.

The general thing one has to do here is present a fake modem to the DOS environment. This needs to be exposed outside the container as a TCP port. That much is possible in various ways — I wound up using tcpser. dosbox had a TCP modem interface, but it turned out to be too buggy for this purpose.

The challenge comes in where you want to be able to accept more than one incoming telnet (or TCP) connection at a time. DOS was not a multitasking operating system, so there were any number of hackish solutions back then. One might have had multiple physical computers, one for each incoming phone line. Or they might have run multiple pseudo-DOS instances under a multitasking layer like DESQview, OS/2, or even Windows 3.1.

(Side note: I just learned of DESQview/X, which integrated DESQview with X11R5 and replaced the Windows 3 drivers to allow running Windows as an X application).

For various reasons, I didn’t want to try running one of those systems inside Docker. That left me with emulating the original multiple physical node setup. In theory, pretty easy — spin up a bunch of DOS boxes, each using at most 1MB of emulated RAM, and go to town. But here came the challenge.

In a multiple-physical-node setup, you need some sort of file sharing, because your nodes have to access the shared message and file store. There were a myriad of clunky ways to do this in the old DOS days – Netware, LAN manager, even some PC NFS clients. I didn’t have access to Netware. I tried the Microsoft LM client in DOS, talking to a Samba server running inside the Docker container. This I got working, but the LM client used so much RAM that, even with various high memory tricks, BBS software wasn’t going to run. I couldn’t just mount an underlying filesystem in multiple dosbox instances either, because dosbox did caching that wasn’t going to be compatible.

This is why I wound up using dosemu. Besides being a more complete emulator than dosbox, it had a way of sharing the host’s filesystems that was going to work.

So, all of this wound up with this: jgoerzen/docker-bbs-renegade.

I also prepared building blocks for others that want to do something similar: docker-dos-bbs and the lower-level docker-dosemu.

As a side bonus, I also attempted running this under Joyent’s Triton (SmartOS, Solaris-based). I was pleasantly impressed that I got it all almost working there. So yes, a Renegade DOS BBS running under a Linux-based DOS emulator in a container on a Solaris machine.

Worse Than FailureCodeSOD: The Pythonic Wheel Reinvention

Starting with Java, a robust built-in class library is practically a default feature of modern programming languages. Why struggle with OS-specific behaviors, or with writing your own code, or with managing a third-party library to handle problems like accessing files or network resources?

One common class of WTF is the developer who steadfastly refuses to use it. They inevitably reinvent the wheel as a triangle with no axle. Another is the developer who is simply ignorant of what the language offers, and is too lazy to Google it. They don’t know what a wheel is, so they invent a coffee-table instead.

My personal favorite, though, is the rare person who knows about the class library, that uses the class library… to reinvent methods which exist in the class library. They’ve seen a wheel, they know what a wheel is for, and they still insist on inventing a coffee-table.

Anneke sends us one such method.

The method in question is called thus:

if output_exists("/some/path.dat"):
    do_something()

I want to stress, this is the only use of this method. The purpose is to check if a file containing output from a different process exists. If you’re familiar with Python, you might be thinking, “Wait, isn’t that just os.path.exists?”

Of course not.

def output_exists(full_path):
    path = os.path.dirname(full_path) + "/*"
    filename2=full_path.split('/')[-1]
    filename = '%s' % filename2
    files = glob.glob(path)
    back = []
    for f in re.findall(filename, " ".join(files)):
        back.append(os.path.join(os.path.dirname(full_path), f))
    return back

Now, in general, most of your directory-tree manipulating functions live in the os.path package, and you can see os.path.dirname used. That splits off the directory-only part. Then they throw a glob on it. I could, at this point, bring up the importance of os.path.join for that sort of operation, but why bother?

They knew enough to use os.path.dirname to get the directory portion of the path, but not os.path.split which can pick off the file portion of the path. The “Pythonic” way of writing that line would be (path, filename) = os.path.split(full_path). Wait, I misspoke: the “Pythonic” way would be to not write any part of this method.

'%s' % filename2 is Python’s version of printf-style formatting, and I cannot for the life of me guess why it’s being done here. A misguided attempt at doing an strcpy-type operation?

glob.glob isn’t just the best method name in anything, it also does a filesystem search using globs, so files contains a list of all files in that directory.

" ".join(files) is the Python idiom for joining an array, so we turn the list of files into an array and search it using re.findall… which uses a regex for searching. Note that they’re using the filename for the regex, and they haven’t placed any guards around it, so if the input file is “foo.c”, and the directory contains “foo.cpp”, this will think that’s fine.

And then last but not least, it returns the array of matches, relying on the fact that an empty array in Python is false.

To write this code required at least some familiarity with three different major packages in the class library - os.path, glob, and re - but just one ounce more familiarity with os.path would have replaced the entire thing with a simple call to os.path.exists. Which is what Anneke did.
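
For the record, the whole thing collapses to something like this (a trivial sketch of the replacement described above, reusing the call site from the start of the article):

import os.path

# The built-in call does everything output_exists() was trying to do.
if os.path.exists("/some/path.dat"):
    do_something()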

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

Sam VargheseBlack money continues to pour in to IPL

A little more than a year ago, Indian Prime Minister Narendra Modi announced that 500 and 1000 rupee notes would be removed from circulation as a step to flushing out all the black money in the country.

He made the announcement on TV in prime time on 8 November 2016 and gave people four hours' time to be ready for the change!

But judging by the amounts which cricketers were bought for in the Indian Premier League Twenty20 auction last week, there is more black money than ever in the country.

Otherwise, sums like US$1.5 million would not be available for the Kolkata Knight Riders to buy a cricketer like Mitchell Starc. This is black money being flushed out and made ready to be used as legal tender, which is the main reason why the Indian Government turns a blind eye to the process.

Former Indian spin king Bishen Singh Bedi accused the IPL of being a centre for money-laundering and he may not be far off the mark.

A little history will help explain India’s black money problem: Back in 1967, the then Indian finance minister Morarji Desai had the brilliant idea of raising taxes well beyond their existing level; the maximum marginal tax rate was raised as high as 97.75 percent.

Desai, who was better known for drinking his own urine, reasoned that people would pay up and that India’s budgetary problems would become more manageable.

Instead, the reverse happened. India has always had a problem with undeclared wealth, a kind of parallel economy. The amount of black money increased by leaps and bounds after Desai’s ridiculous laws were promulgated.

Seven years later, in 1974, the new finance minister Y.B. Chavan brought down rates by some 20 percentage points, but by then the damage had been done. The amount of black money in India today is estimated to be anything from 30 to 100 times the national budget.

The IPL attracts the best cricketers from around the world because of the money on offer. The amounts that are bid are paid for three years, and the player has to play for two months every year, with some additional promotional activity also involved.

The competition is in its 11th season and it has been dogged by controversy; in 2015, two teams were suspended for match-fixing and betting, with the incidents taking place in 2012 and 2013.

So, despite all the platitudes from Modi, don’t expect anything to change in India as far as black money is concerned. If anything, the amount will increase now that people know that these kinds of measures will be announced at the drop of a hat. They will be ready the next time Modi or anyone else comes up with some crazy initiative like this.

Planet Linux AustraliaDonna Benjamin: Inkscape at linux.conf.au Sydney 2018

Planet DebianRuss Allbery: Review: My Grandmother Asked Me to Tell You She's Sorry

Review: My Grandmother Asked Me to Tell You She's Sorry, by Fredrik Backman

Series: Britt-Marie #1
Translator: Henning Koch
Publisher: Washington Square
Copyright: 2014
Printing: April 2016
ISBN: 1-5011-1507-3
Format: Trade paperback
Pages: 372

Elsa is seven, going on eight. She's not very good at it; she knows she's different and annoying, which is why she gets chased and bullied constantly at school and why her only friend is her grandmother. But Granny is a superhero, who's also very bad at being old. Her superpowers are lifesaving and driving people nuts. She made a career of being a doctor in crisis zones; now she makes a second career of, well, this sort of thing:

Or that time she made a snowman in Britt-Marie and Kent's garden right under their balcony and dressed it up in grown-up clothes so it looked as if a person had fallen from the roof. Or that time those prim men wearing spectacles started ringing all the doorbells and wanted to talk about God and Jesus and heaven, and Granny stood on her balcony with her dressing gown flapping open, shooting at them with her paintball gun

The other thing Granny is good at is telling fairy tales. She's been telling Elsa fairy tales since she was small and her mom and dad had just gotten divorced and Elsa was having trouble sleeping. The fairy tales are all about Miamas and the other kingdoms of the Land-of-Almost-Awake, where the fearsome War-Without-End was fought against the shadows. Miamas is the land from which all fairy tales come, and Granny has endless stories from there, featuring princesses and knights, sorrows and victories, and kingdoms like Miploris where all the sorrows are stored.

Granny and Miamas and the Land-of-Almost-Awake make Elsa's life not too bad, even though she has no other friends and she's chased at school. But then Granny dies, right after giving Elsa one final quest, her greatest quest. It starts with a letter and a key, addressed to the Monster who lives downstairs. (Elsa calls him that because he's a huge man who only seems to come out at night.) And Granny's words:

"Promise you won't hate me when you find out who I've been. And promise me you'll protect the castle. Protect your friends."

My Grandmother Asked Me to Tell You She's Sorry is written in third person, but it's close third person focused on Elsa and her perspective on the world. She's a precocious seven-year-old who I thought was nearly perfect (rare praise for me for children in books), which probably means some folks will think she's a little too precocious. But she has a wonderful voice, a combination of creative imagination, thoughtfulness, and good taste in literature (particularly Harry Potter and Marvel Comics). The book is all about what it's like to be seven, going on eight, with a complicated family situation and an awful time at school, but enough strong emotional support from her family that she's still full of stubbornness, curiosity, and fire.

Her grandmother's quest gets her to meet the other residents of the apartment building she lives in, turning them into more than the backdrop of her life. That, in turn, adds new depth to the fairy tales her Granny told her. Their events turn out to not be pure fabrication. They were about people, the many people in her Granny's life, reshaped by Granny's wild imagination and seen through the lens of a child. They leave Elsa surprisingly well-equipped to navigate and start to untangle the complicated relationships surrounding her.

This is where Backman pulls off the triumph of this book. Elsa's discoveries that her childhood fairy tales are about the people around her, people with a long history with her grandmother, could have been disillusioning. This could be the story of magic fading into reality and thereby losing its luster. And at first Elsa is quite angry that other people have this deep connection to things she thought were hers, shared with her favorite person. But Backman perfectly walks that line, letting Elsa keep her imaginative view of the world while intelligently mapping her new discoveries onto it. The Miamas framework withstands serious weight in this story because Elsa is flexible, thoughtful, and knows how to hold on to the pieces of her story that carry deeper truth. She sees the people around her more clearly than anyone else because she has a deep grasp of her grandmother's highly perceptive, if chaotic, wisdom, baked into all the stories she grew up with.

This book starts out extremely funny, turns heartwarming and touching, and develops real suspense by the end. It starts out as Elsa nearly alone against the world and ends with a complicated matrix of friends and family, some of whom were always supporting each other beneath Elsa's notice and some of whom are re-learning the knack. It's a beautiful story, and for the second half of the book I could barely put it down.

I am, as a side note, once again struck by the subtle difference in stories from cultures with a functional safety net. I caught my American brain puzzling through ways that some of the people in this book could still be alive and living in this apartment building since they don't seem capable of holding down jobs, before realizing this story is not set in a brutal Hobbesian jungle of all against all like the United States. The existence of this safety net plays no significant role in this book apart from putting a floor under how far people can fall, and yet it makes all the difference in the world and in some ways makes Backman's plot possible. Perhaps publishers should market Swedish literary novels as utopian science fiction in the US.

This is great stuff. The back and forth between fairy tales and Elsa's resilient and slightly sarcastic life can take a bit to get used to, but stick with it. All the details of the fairy tales matter, and are tied back together wonderfully by the end of the book. Highly recommended. In its own way, this is fully as good as A Man Called Ove.

There is a subsequent book, Britt-Marie Was Here, that follows one of the supporting characters of this novel, but My Grandmother Asked Me to Tell You She's Sorry stands alone and reaches a very satisfying conclusion (including for that character).

Rating: 10 out of 10

Planet DebianJeremy Bicha: logo.png for default avatar for GitLab repos

Debian and GNOME have both recently adopted self-hosted GitLab for their git hosting. GNOME’s service is named simply https://gitlab.gnome.org/ ; Debian’s has the more intriguing name https://salsa.debian.org/ . If you ask the Salsa sysadmins, they’ll explain that they were in a Mexican restaurant when they needed to decide on a name!

There’s a useful under-documented feature I found. If you place a logo.png in the root of your repository, it will be automatically used as the default “avatar” for your project (in other words, the logo that shows up on the web page next to your project).

I added a logo.png to GNOME Tweaks at GNOME and it automatically showed up in Salsa when I imported the new version.

Other Notes

I first tried with a symlink to my app icon, but it didn’t work. I had to actually copy the icon.

The logo.png convention doesn’t seem to be supported at GitHub currently.

,

Planet DebianDaniel Pocock: Fair communication requires mutual consent

I was pleased to read Shirish Agarwal's blog in reply to the blog I posted last week Do the little things matter?

Given the militaristic theme used in my own post, I was also somewhat amused to see news this week of the Strava app leaking locations and layouts of secret US military facilities like Area 51. What a way to mark International Data Privacy Day. Maybe rather than inadvertently misleading people to wonder if I was suggesting that Gmail users don't make their beds, I should have emphasized that Admiral McRaven's boot camp regime for Navy SEALS needs to incorporate some of my suggestions about data privacy?

Strava leaks layouts and locations of secret US bases like Area 51

A highlight of Agarwal's blog is his comment "I usually wait for a day or more when I feel myself getting inflamed/heated", and I wish this had occurred in some of the other places where my ideas were discussed. Even though my ideas are sometimes provocative, I would kindly ask people to keep point 2 of the Debian Code of Conduct in mind: "Assume good faith".

One thing that became clear to me after reading Agarwal's blog is that some people saw my example one-line change to Postfix's configuration as a suggestion that people need to run their own mail server. In fact, I had seen such comments before but I hadn't realized why people were reaching the conclusion that I expect everybody to run a mail server. The purpose of that line was simply to emphasize the content of the proposed bounce message, to help people understand that the receiver of an email may never have agreed to Google's non-privacy policy: if you do use Gmail and send them a message from a Gmail account, you impose that surveillance regime on them, and not just on yourself.

Communication requires mutual agreement about the medium. Think about it another way: if you go to a meeting with your doctor and some stranger in a foreign military uniform is in the room, you might choose to leave and find another doctor rather than communicate under surveillance.

As it turns out, many people are using alternative email services, even if they only want a web interface. There is already a feature request discussion in ProtonMail about letting users choose to opt-out of receiving messages monitored by Google and send back the bounce message suggested in my blog. Would you like to have that choice, even if you didn't use it immediately? You can vote for that issue or leave your own feedback comments in there too.

Planet DebianDaniel Pocock: Imagine the world's biggest Kanban / Scrumboard

Imagine a Kanban board that could aggregate issues from multiple backends, including your CalDAV task list, Bugzilla systems (Fedora, Mozilla, GNOME communities), Github issue lists and the Debian Bug Tracking System, visualize them together and coordinate your upstream fixes and packaging fixes in a single sprint.

It is not so farfetched - all of those systems already provide read access using iCalendar URLs as described in my earlier blog. There are REST APIs to manipulate most of them too. Why not write a front end to poll them and merge the content into a Kanban board view?

We've added this as a potential GSoC project using Python and PyQt.
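
To make the idea a little more concrete, here is a minimal sketch (in Python, using the requests library) of what polling just one backend could look like. It only talks to the public GitHub REST API; the repository name and the column logic are invented for illustration, and a real front end would poll several backends and render the resulting cards in a PyQt view:

import requests

def fetch_github_cards(owner, repo):
    # Pull the open issues of one GitHub repository and normalize them
    # into simple "card" dictionaries for a Kanban-style view.
    url = "https://api.github.com/repos/%s/%s/issues" % (owner, repo)
    cards = []
    for issue in requests.get(url, params={"state": "open"}).json():
        cards.append({
            "source": "github",
            "title": issue["title"],
            "url": issue["html_url"],
            "column": "In progress" if issue.get("assignee") else "Backlog",
        })
    return cards

if __name__ == "__main__":
    # Hypothetical repository, purely for illustration.
    for card in fetch_github_cards("someuser", "someproject"):
        print("%-12s %s" % (card["column"], card["title"]))

Cards from the Debian BTS or a CalDAV task list would be handled the same way: fetch, normalize into the common card format, then let the board view group everything by column.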

If you'd like to see this or any of the other proposed projects go ahead, you don't need to be a Debian Developer to suggest ideas, refer a student or be a co-mentor. Many of our projects have relevance in multiple communities. Feel free to get in touch with us through the debian-outreach mailing list.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #144

Here's what happened in the Reproducible Builds effort between Sunday January 21 and Saturday January 27 2018:

Media coverage

Development and fixes in key packages

  • Mattia uploaded dpkg (1.19.0.5.0~reproducible1) to our experimental toolchain.

  • cpython-3.7 now has .pyc files without timestamps. Most work happening in PEP 552 but older Python versions probably still need variants of the mtime patch because the new .pyc format is not compatible.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

35 package reviews have been added, 37 have been updated and 91 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (24)
  • Niels Thykier (8)

diffoscope development

reproducible-website development

jenkins.debian.net development

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityDrugs Tripped Up Suspects In First Known ATM “Jackpotting” Attacks in the US

On Jan. 27, 2018, KrebsOnSecurity published what this author thought was a scoop about the first known incidence of U.S. ATMs being hit with “jackpotting” attacks, a crime in which thieves deploy malware that forces cash machines to spit out money like a loose Las Vegas slot machine. As it happens, the first known jackpotting attacks in the United States were reported in November 2017 by local media on the west coast, although the reporters in those cases seem to have completely buried the lede.

Isaac Rafael Jorge Romero, Jose Alejandro Osorio Echegaray, and Elio Moren Gozalez have been charged with carrying out ATM “jackpotting” attacks that force ATMs to spit out cash like a Las Vegas casino.

On Nov. 20, 2017, Oil City News — a community publication in Wyoming — reported on the arrest of three Venezuelan nationals who were busted on charges of marijuana possession after being stopped by police.

After pulling over the van the men were driving, police on the scene reportedly detected the unmistakable aroma of pot smoke wafting from the vehicle. When the cops searched the van, they discovered small amounts of pot, THC edible gummy candies, and several backpacks full of cash.

FBI agents had already been looking for the men, who were allegedly caught on surveillance footage tinkering with cash machines in Wyoming, Colorado and Utah, shortly before those ATMs were relieved of tens of thousands of dollars.

According to a complaint filed in the U.S. District Court for the District of Colorado, the men first hit an ATM at a credit union in Parker, Colo. on October 10, 2017. The robbery occurred after business hours, but the cash machine in question was located in a vestibule available to customers 24/7.

The complaint says surveillance videos showed the men opening the top of the ATM, which housed the computer and hard drive for the ATM — but not the secured vault where the cash was stored. The video showed the subjects reaching into the ATM, and then closing it and exiting the vestibule. On the video, one of the subjects appears to be carrying an object consistent with the size and appearance of the hard drive from the ATM.

Approximately ten minutes later, the subjects returned and opened up the cash machine again. Then they closed the top of the ATM and appeared to wait while the ATM computer restarted. After that, both subjects could be seen on the video using their mobile phones. One of the subjects reportedly appeared to be holding a small wireless mini-computer keyboard.

Soon after, the ATM began spitting out cash, netting the thieves more than $24,000. When they were done, the suspects allegedly retrieved their equipment from the ATM and left.

Forensic analysis of the ATM hard drive determined that the thieves installed the Ploutus.D malware on the cash machine’s hard drive. Ploutus.D is an advanced malware strain that lets crooks interact directly with the ATM’s computer and force it to dispense money.

“Often the malware requires entering of codes to dispense cash,” reads an FBI affidavit (PDF). “These codes can be obtained by a third party, not at the location, who then provides the codes to the subjects at the ATM. This allows the third party to know how much cash is dispensed from the ATM, preventing those who are physically at the ATM from keeping cash for themselves instead of providing it to the criminal organization. The use of mobile phones is often used to obtain these dispensing codes.”

In November 2017, similar ATM jackpotting attacks were discovered in the Saint George, Utah area. Surveillance footage from those ATMs showed the same subjects were at work.

The FBI’s investigation determined that the vehicles used by the suspects in the Utah thefts were rented by Venezuelan nationals.

On Nov. 16, Isaac Rafael Jorge Romero, 29, Jose Alejandro Osorio Echegaray, 36, and two other Venezuelan nationals were detained in Teton County, Wyo. for drug possession. Two other suspects in the Utah theft were arrested in San Diego when they tried to return a rental car that was caught on surveillance camera at one of the hacked ATMs.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

All of the known ATM jackpotting attacks in the U.S. so far appear to be targeting a handful of older model cash machines manufactured by ATM giant Diebold Nixdorf. However, security firm FireEye notes that — with minor modifications to the malware code — Ploutus.D could be used to target software that runs on 40 different ATM vendors in 80 countries.

Diebold’s advisory on hardening ATMs against jackpotting attacks is available here (PDF).

Jackpotting is not a new crime: Indeed, it has been a problem for ATM operators in most of the world for many years now. But for some reason, jackpotting attacks have until recently eluded U.S. ATM operators.

Jackpotting has been a real threat to ATM owners and manufacturers since at least 2010, when the late security researcher Barnaby Michael Douglas Jack (known to most as simply “Barnaby Jack”) demonstrated the attack to a cheering audience at the Black Hat security conference. A recording of that presentation is below.

Planet DebianNorbert Preining: 2018 and CD burning still painful

Ok, we are in 2018, and for the first time in ages I wanted to burn a Audio CD, and dared to think about CD_TEXT. You know what – it is hard as in impossible for a normal user. And that in 2018. Debian, you could try to do better.

Before I start I guess it is necessary to make clear that this is a newly installed system, less than a few months old, and that, as a Debian Developer, I am not completely new to system administration. Furthermore, the user trying to burn is a member of the cdrom group.

The problem with most GUI frontends is that they rely on wodim, a member of the cdrkit family. And wodim itself simply doesn’t work:

$ wodim -dummy  -v speed=16 dev=/dev/sr0 -audio track*
wodim: No write mode specified.
wodim: Assuming -tao mode.
wodim: Future versions of wodim may have different drive dependent defaults.
TOC Type: 0 = CD-DA
wodim: Operation not permitted. Warning: Cannot raise RLIMIT_MEMLOCK limits.
wodim: Resource temporarily unavailable. Cannot get mmap for 12587008 Bytes on /dev/zero.
$

Yes I know, the easy solution is to make wodim setuid root, but this is not what I want. Unfortunately cdrecord, the parent of wodim (and, in contrast to cdrkit/wodim, still in development), works, but only because it is setuid root after a standard installation.

That is all complicated by the fact that the main front-ends are, well, broken:

  • K3b: feels completely broken: it cannot open its own saved project files, the .inf files generated for an audio CD project are completely broken and void of any content, and it hangs regularly without any response
  • Nautilus DVD/CD burning: incapable of doing audio CDs, it only offers data CDs
  • Brasero: terminates with “ejecting disc” and “An unknown error occurred”; the log file shows that, again, wodim is the culprit. But Brasero could be a bit more helpful! Additional minus point: no CDDB interface.

The only exception I found was Xfburn, which managed to burn the CD without any hitch or problem. Wow – except that it doesn’t support CD_TEXT, which is also not optimal.

A solution for burning CD_TEXT

So in case you really want to burn CD_TEXT, there is at the moment, as far as I see, only one option, and that is the command line using Cdrdao. Thanks to this excellent article I managed to burn using the following command (as user, not root, nothing special):

cdrdao write --device /dev/sr0 --driver generic-mmc:0x10 -v 2 -n --eject mycd.toc

The format of the .toc file is a bit complicated but documented; see the linked article.
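
To give a rough idea, a minimal .toc file for a single audio track with CD_TEXT might look like the following (titles and file name are placeholders; check the cdrdao documentation for the full syntax):

CD_DA

CD_TEXT {
  LANGUAGE_MAP { 0 : EN }
  LANGUAGE 0 {
    TITLE "Album title"
    PERFORMER "Album artist"
  }
}

TRACK AUDIO
CD_TEXT {
  LANGUAGE 0 {
    TITLE "Track title"
    PERFORMER "Track artist"
  }
}
FILE "track01.wav" 0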

All in all a very depressing situation I have to say, especially for being in 2018 …

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 03


Here’s part three of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

CryptogramSubway Elevators and Movie-Plot Threats

Local residents are opposing adding an elevator to a subway station because terrorists might use it to detonate a bomb. No, really. There's no actual threat analysis, only fear:

"The idea that people can then ride in on the subway with a bomb or whatever and come straight up in an elevator is awful to me," said Claudia Ward, who lives in 15 Broad Street and was among a group of neighbors who denounced the plan at a recent meeting of the local community board. "It's too easy for someone to slip through. And I just don't want my family and my neighbors to be the collateral on that."

[...]

Local residents plan to continue to fight, said Ms. Gerstman, noting that her building's board decided against putting decorative planters at the building's entrance over fears that shards could injure people in the event of a blast.

"Knowing that, and then seeing the proposal for giant glass structures in front of my building ­- ding ding ding! -- what does a giant glass structure become in the event of an explosion?" she said.

In 2005, I coined the term "movie-plot threat" to denote a threat scenario that caused undue fear solely because of its specificity. Longtime readers of this blog will remember my annual Movie-Plot Threat Contests. I ended the contest in 2015 because I thought the meme had played itself out. Clearly there's more work to be done.

Worse Than FailureRepresentative Line: As the Clock Terns

“Hydranix” can’t provide much detail about today’s code, because they’re under a “strict NDA”. All they could tell us was that it’s C++, and it’s part of a “mission critical” front end package. Honestly, I think this line speaks for itself:

(mil == 999 ? (!(mil = 0) && (sec == 59 ? 
  (!(sec = 0) && (min == 59 ? 
    (!(min = 0) && (++hou)) : ++min)) : ++sec)) : ++mil);

“Hydranix” suspects that this powers some sort of stopwatch, but they’re not really certain what its role is.
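
If the stopwatch guess is right, the nested ternaries are nothing but a cascaded roll-over counter. A rough Python rendering of what the line appears to do (assumed behaviour, reconstructed from the snippet, with min/hou spelled out to avoid shadowing a builtin) would be:

def tick(mil, sec, minutes, hours):
    # Advance the millisecond counter and cascade each roll-over upwards,
    # mirroring the mil == 999 / sec == 59 / min == 59 checks above.
    mil += 1
    if mil == 1000:
        mil = 0
        sec += 1
        if sec == 60:
            sec = 0
            minutes += 1
            if minutes == 60:
                minutes = 0
                hours += 1
    return mil, sec, minutes, hours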

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianWouter Verhelst: Day one of the pre-FOSDEM Debconf Videoteam sprint

I'm at the Linux Belgium training center, where this last week before FOSDEM the DebConf video team is holding a sprint. The nice folks of Linux Belgium made us feel pretty welcome:

Linux Belgium message

Yesterday was the first day of that sprint, where I had planned to blog about things, but I forgot, so here goes (first thing this morning)

Nattie and Tzafrir

Nattie and Tzafrir have been spending much of their time proofreading our documentation, and giving us feedback to improve their readability and accuracy.

Stefano

Spent some time working on his youtube uploader. He didn't finish it to a committable state yet, but more is to be expected today.

He also worked on landing a gstreamer pipeline change that was suggested at LCA last week (which he also visited), and did some work on setting up the debconf18 dev website.

Finally, he fixed the irker config on salsa so that it would actually work and send commit messages to IRC after a push.

Kyle

Wrote a lot of extra documentation on the opsis that we use and various other subjects, and also fixed some of the templates of the documentation, so that things would look better and link correctly.

Wouter

I spent much of my time working on the FOSDEM SReview instance, which will be used next weekend; that also allowed me to improve the code quality of some of the newer stuff that I wrote over the past few months. In between things, being the local guy here, I also drove around getting a bit of stuff that we needed.

Pollo

Pollo isn't here, but he's sprinting remotely from home. He spent some time setting up gitlab-ci so that it would build our documentation after pushing to salsa.

Planet DebianRuss Allbery: Review: Reap the Wild Wind

Review: Reap the Wild Wind, by Julie E. Czerneda

Series: Stratification #1
Publisher: DAW
Copyright: 2007
Printing: September 2008
ISBN: 0-7564-0487-8
Format: Mass market
Pages: 459

Reap the Wild Wind is the first book in the Stratification series. This is set in the same universe as the Trade Pact series (which starts with A Thousand Words for Stranger), but goes back in time, telling the story of the Om'ray before they left Cersi to become the Clan. You may have more interest in this series if you read and enjoyed the Trade Pact trilogy, but it's not a prerequisite. It's been over ten years since I read that series, I've forgotten nearly everything about it except the weird gender roles, and I didn't have any trouble following the story.

Aryl Sarc is member of the Yena Clan, who live a precarious existence in the trees above a vast swamp filled with swarms of carnivorous creatures. They are one of several isolated clans of Om'ray on the planet Cersi. Everything about the clans is tightly constrained by an agreement between the Om'ray, the Tiktik, and the Oud to maintain a wary peace. The agreement calls for nothing about the nature of the world or its three species to ever change.

Reap the Wild Wind opens with the annual dresel harvest: every fall, a great, dry wind called the M'hir flows down the mountains and across the forest in which the Yena live, blowing free the ripe dresel for collection at the treetops. Dresel is so deeply a part of Aryl's world that the book never explains it, but the reader can intuit that it contains some essential nutrient without which all the Yena would die. But disaster strikes while Aryl is watching the dresel harvest, disaster in the form of a strange flying vehicle no one has seen before and an explosion that kills many of the Yena and ruins the harvest essential to life.

The early part of the book is the emotional and political fallout of this disaster. Aryl discovers an unknown new talent, saving the man she's in love with (although they're too young to psychically join in the way of the Om'ray) at the cost of her brother. There's a lot of angst, a lot of cliched descriptions of internal psychic chaos (the M'hir that will be familiar to readers of the Trade Pact books), and a lot of her mother being nasty and abusive in ways that Aryl doesn't recognize as abuse. I struggled to get into the story; Aryl was an aimless mess, and none of the other characters were appealing. The saving grace for me in the early going were the interludes with Enris, an Om'ray from a far different clan, a metalworker whose primary dealings are with the Oud instead of the Tiktik.

This stage of the story thankfully doesn't last. Aryl eventually ends up among the Tiktik, struggling to understand their far different perspective on the world, and then meets the visitors who caused the disaster. They're not only from outside of Aryl's limited experience; they shouldn't even exist by the rules of Aryl's world. As Aryl slowly tries to understand what they're doing, the scope of the story expands, with hints that Aryl's world is far more complicated than she realized.

Czerneda sticks with a tight viewpoint focus on Aryl and Enris. That's frustrating when Aryl is uninterested in, or cannot understand, key pieces of the larger picture that the reader wants to know. But it creates a sense of slow discovery from an alien viewpoint that occasionally reminded me of Rosemary Kirstein's Steerswoman series. Steerswoman is much better, but it's much better than almost everything, and Aryl's growing understanding of her world is still fun. I particularly liked how Aryl's psychic species defines the world by the sensed locations of the Om'ray clans, making it extremely hard for her to understand geography in the traditional sense.

I was also happy to see Czerneda undermine the strict sexual dimorphism of Clan society a tiny bit with an Om'ray who doesn't want to participate in the pair-bonding of Choosing. She painted herself into a corner with the extreme gender roles in the Trade Pact series and there's still a lot of that here, but at least a few questions raised about that structure.

Reap the Wild Wind is all setup with little payoff. By the end of the book, we still just have hints of the history of Cersi, the goals of the Oud or Tiktik, or the true shape of what the visitors are investigating. But it had grabbed my interest, mostly because of Aryl's consistent, thoughtful curiosity. I wish this first book had gotten into the interesting meat of the story faster and had gotten farther, but this is good enough that I'll probably keep reading.

Followed by Riders of the Storm.

Rating: 7 out of 10

Planet DebianErich Schubert: Homepage reboot

I haven’t blogged in a long time, and that probably won’t change.

Yet, I wanted to reboot my website on a different technology underneath.

I just didn’t want to have to touch the old XSLT scripts powering the old website anymore. I now converted all my XML input to Markdown instead.

If you notice anything broken, let me know by my usual email addresses.

,

Planet DebianJeremy Bicha: GNOME Tweaks 3.28 Progress Report 1

A few days ago, I released GNOME Tweaks 3.27.4, a development snapshot on the way to the next stable version 3.28 which will be released alongside GNOME 3.28 in March. Here are some highlights of what’s changed since 3.26.

New Name (Part 2)

For 3.26, we renamed GNOME Tweak Tool to GNOME Tweaks. It was only a partial rename since many underlying parts still used the gnome-tweak-tool name. For 3.28, we have completed the rename. We have renamed the binary, the source tarball releases, the git repository, the .desktop, and app icons. For upgrade compatibility, the autostart file and helper script for the Suspend on Lid Close inhibitor keeps the old name.

New Home

GNOME Tweaks has moved from the classic GNOME Git and Bugzilla to the new GNOME-hosted gitlab.gnome.org. The new hosting includes git hosting, a bug tracker and merge requests. Much of GNOME Core has moved this cycle, and I expect many more projects will move for the 3.30 cycle later this year.

Dark Theme Switch Removed

As promised, the Global Dark Theme switch has been removed. Read my previous post for more explanation of why it’s removed and a brief mention of how theme developers should adapt (provide a separate Dark theme!).

Improved Theme Handling

The theme chooser has been improved in several small ways. Now that it’s quite possible to have a GNOME desktop without any gtk2 apps, it doesn’t make sense to require that a theme provide a gtk2 version to show up in the theme chooser so that requirement has been dropped.

The theme chooser will no longer show the same theme name multiple times if you have a system-wide installed theme and a theme in your user theme directory with the same name. Additionally, GNOME Tweaks does better at supporting the  XDG_DATA_DIRS standard in case you use custom locations to store your themes or gsettings overrides.

GNOME Tweaks 3.27.4 with the HighContrastInverse theme

Finally, gtk3 still offers a HighContrastInverse theme but most people probably weren’t aware of that since it didn’t show up in Tweaks. It does now! It is much darker than Adwaita Dark.

Several of these theme improvements (including HighContrastInverse) have also been included in 3.26.4.

For more details about what’s changed and who’s done the changing, see the project NEWS file.

Planet DebianVincent Bernat: L3 routing to the hypervisor with BGP

On layer 2 networks, high availability can be achieved by:

Layer 2 networks need very little configuration but come with a major drawback in highly available scenarios: an incident is likely to bring the whole network down.2 Therefore, it is safer to limit the scope of a single layer 2 network by, for example, using one distinct network in each rack and connecting them together with layer 3 routing. Incidents are unlikely to impact a whole IP network.

In the illustration below, top of the rack switches provide a default gateway for hosts. To provide redundancy, they use an MC-LAG implementation. Layer 2 fault domains are scoped to a rack. Each IP subnet is bound to a specific rack and routing information is shared between top of the rack switches and core routers using a routing protocol like OSPF.

Legacy L2 design

There are two main issues with this design:

  1. The L2 domains are still large. A rack could host several dozen hypervisors and several thousand virtual guests. Therefore, a network incident will have a large impact.

  2. IP subnets are pinned to each rack. A virtual guest cannot move to another rack and unused IP addresses in a rack cannot be used in another one.

To solve both these problems, it is possible to push L3 routing further to the south, turning each hypervisor into a L3 router. However, we need to ensure customer virtual guests are blind to this change: they should keep getting their configuration from DHCP (IP, subnet and gateway).

Hypervisor as a router

In a nutshell, for a guest with an IPv4 address:

  • the hosting hypervisor configures a /32 route with the virtual interface as next-hop, and
  • this route is distributed to other hypervisors (and routers) using BGP.

Our design also handles two routing domains: a public one (hosting virtual guests from multiple tenants with direct Internet access) and a private one (used by our own infrastructure, hypervisors included). Each hypervisor uses two routing tables for this purpose.

The following illustration shows the configuration of an hypervisor with 5 guests. No bridge is needed.

L3 routing inside an hypervisor

The complete configuration described below is also available on GitHub. In real life, a piece of software is needed to update the hypervisor configuration when an instance is added or removed. It would listen to notifications from your cloud orchestrator.

Calico is a project fulfilling the same objective (L3 routing to the hypervisor) with mostly the same ideas (except it heavily relies on Netfilter to ensure separation between administrative domains). It provides an agent (Felix) to serve as an interface with orchestrators like OpenStack or Kubernetes. Check it if you want a turnkey solution.

Routing setup

Using IP rules, each interface is “attached” to a routing table:

$ ip rule show
0:  from all lookup local
20: from all iif lo lookup main
21: from all iif lo lookup local-out
30: from all iif eth0.private lookup private
30: from all iif eth1.private lookup private
30: from all iif vnet8 lookup private
30: from all iif vnet9 lookup private
40: from all lookup public

The most important rules are the highlighted ones (priority 30 and 40): any traffic coming from a “private” interface uses the private table. Any remaining traffic uses the public table.

The two iif lo rules manage routing for packets originated from the hypervisor itself. The local-out table is a mix of the private and public tables. The hypervisor mostly needs the routes from the private table but also needs to contact local virtual guests (for example, to answer to a ping request) using the public table. Both tables contain a default route (no chaining possible), so we build a third table, local-out, by copying all routes from the private table and directly connected routes from the public table.
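
As a crude illustration only (assuming the table names used above, with no error handling, and ignoring "File exists" complaints for routes already present), the local-out table could be seeded by hand like this; in practice a small script or the routing daemon would keep it in sync:

# Copy every route from the private table, then the directly connected
# (scope link) routes from the public table, into local-out.
ip route show table private | while read -r route; do
  ip route add $route table local-out
done
ip route show table public scope link | while read -r route; do
  ip route add $route table local-out
done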

To avoid an accidental leak of traffic, public, private and local-out routing tables contain a default last-resort route with a large metric.3 On normal operations, these routes should be shadowed by a regular default route:

ip route add blackhole default metric 4294967294 table public
ip route add blackhole default metric 4294967294 table private
ip route add blackhole default metric 4294967294 table local-out
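
The public, private and local-out table names are not built into the kernel; they are assumed here to be mapped to table numbers in /etc/iproute2/rt_tables. The numbers below are illustrative, except 90 which matches the kernel table used by the BIRD configuration shown later:

# Map symbolic table names to kernel table numbers for the ip command
cat >> /etc/iproute2/rt_tables <<EOF
90 public
91 private
92 local-out
EOF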

IPv6 is far simpler as we only have one routing domain. While we keep a public table, there is no need for a local-out table:

$ ip -6 rule show
0:  from all lookup local
20: from all lookup main
40: from all lookup public

As a last step, forwarding is enabled and the maximum number of IPv6 routes is increased (the default is only 4096):

sysctl -qw net.ipv4.conf.all.forwarding=1
sysctl -qw net.ipv6.conf.all.forwarding=1
sysctl -qw net.ipv6.route.max_size=4194304

Guest routes

The second step is to configure routes to each guest. For IPv6, we use the link-local address, derived from the remote MAC address, as next-hop:

ip -6 route add 2001:db8:cb00:7100:5254:33ff:fe00:f/128 \
    via fe80::5254:33ff:fe00:f dev vnet6 \
    table public

Assigning several IP addresses (or subnets) to each guest can be done by adding more routes:

ip -6 route add 2001:db8:cb00:7107::/64 \
    via fe80::5254:33ff:fe00:f dev vnet6 \
    table public

For IPv4, the route uses the guest interface as a next-hop. Linux will issue an ARP request before being able to forward the packet:4

ip route add 192.0.2.15/32 dev vnet6 \
  table public
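
As mentioned in the footnote, the initial ARP resolution can also be avoided by installing a permanent neighbor entry for the guest. This is only a sketch and the MAC address below is a placeholder:

# Pre-populate the neighbor table so no ARP request is needed
ip neigh replace 192.0.2.15 lladdr 52:54:00:00:00:0f dev vnet6 nud permanent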

Additional IP addresses and subnets can be configured the same way, but each IP address would have to answer ARP requests. To avoid this, it is possible to route additional subnets through the first IP address:5

ip route add 203.0.113.128/28 \
  via 192.0.2.15 dev vnet6 onlink \
  table public

BGP setup

The third step is to share routes between hypervisors, through BGP. This part depends on how hypervisors are connected to each other.

Fabric design

Several designs are possible to connect hypervisors. The most obvious one is to use a full L3 leaf-spine fabric:

Full L3 fabric design

Each hypervisor establishes an eBGP session to each of the leaf top-of-the-rack routers. These routers establish an iBGP session with their neighbor and an eBGP session with each spine router. This solution can be expensive because the spine routers need to handle all routes; with the current generation of switches/routers, their route capacity puts a limit on the achievable density.6 IP and BGP configuration can also be tedious unless some uncommon autoconfiguration mechanisms are used. On the other hand, leaf routers (and hypervisors) may optionally learn fewer routes, as they can push non-local traffic north.

Another potential design is to use an L2 fabric. This may sound surprising after bad-mouthing L2 networks for their unreliability, but we don’t need them to be highly available. They can provide a very scalable and cost-efficient design:7

L2 fabric design

Each hypervisor is connected to 4 distinct L2 networks, limiting the scope of a single failure to a quarter of the available bandwidth. In this design, only iBGP is used. To avoid a full-mesh topology between all hypervisors, route reflectors are used. Each hypervisor has an iBGP session with one or several route reflectors from each of the L2 networks. Route reflectors on the same L2 network share their routes using iBGP. Calico documents this design in more detail.

This is the solution described below. Public and private domains share the same infrastructure but use distinct VLANs.

Route reflectors

Route reflectors are BGP-speaking boxes acting as a hub for all routes on a given L2 network but not routing any traffic. We need at least one of them on each L2 network. We can use more for redundancy.

Here is an example of configuration for JunOS:8

protocols {
    bgp {
        group public-v4 {
            family inet {
                unicast {
                    no-install; # ❶
                }
            }
            type internal;
            cluster 198.51.100.126; # ❷
            allow 198.51.100.0/25; # ❸
            neighbor 198.51.100.127;
        }
        group public-v6 {
            family inet6 {
                unicast {
                    no-install;
                }
            }
            type internal;
            cluster 198.51.100.126;
            allow 2001:db8:c633:6401::/64;
            neighbor 2001:db8:c633:6401::198.51.100.127;
        }
        ttl 255;
        bfd-liveness-detection { # ❹
            minimum-interval 100;
            multiplier 5;
        }
    }
}
routing-options {
    router-id 198.51.100.126;
    autonomous-system 65000;
}

This route reflector accepts and redistributes IPv4 and IPv6 routes. In ❶, we ensure received routes are not installed in the FIB: route reflectors are not routers.

Each route reflector needs to be assigned a cluster identifier, which is used in loop detection. In our case, we use the IPv4 address for this purpose (in ❷). Having a different cluster identifier for each route reflector on the same network ensures they share the routes they receive—increasing resiliency.

Instead of explicitly declaring all hypervisors allowed to connect to this route reflector, a whole subnet is authorized in ❸.9 We also declare the second route reflector for the same network as a neighbor to ensure they connect to each other.

Another important point of this setup is how to react quickly to unavailable paths. With directly connected BGP sessions, a faulty link may be detected immediately and the associated BGP sessions will be brought down. This may not always be reliable. Moreover, in our case, BGP sessions are established over several switches: a link down on a path may be left undetected until the hold timer expires. Therefore, in ❹, we enable BFD, a protocol to quickly detect faults in the path between two BGP peers (RFC 5880).

A last point to consider is whether you want to allow anycast on your network: if an IP is advertised from more than one hypervisor, you may want to:

  • send all flows to only one hypervisor, or
  • load-balance flows between hypervisors.

The second choice provides a scalable L3 load-balancer. With the above configuration, for each prefix, route reflectors choose one path and distribute it. Therefore, only one hypervisor will receive packets. To get load-balancing, you need to enable advertisement of multiple paths in BGP (RFC 7911):10

set protocols bgp group public-v4 family inet  unicast add-path send path-count 4
set protocols bgp group public-v6 family inet6 unicast add-path send path-count 4

Here is an excerpt of show route exhibiting “simple” routes as well as an anycast route:

> show route protocol bgp
inet.0: 6 destinations, 7 routes (7 active, 1 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

0.0.0.0/0        *[BGP/170] 00:09:01, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.1 via em1.90
192.0.2.15/32    *[BGP/170] 00:09:00, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.101 via em1.90
203.0.113.1/32   *[BGP/170] 00:09:00, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.101 via em1.90
203.0.113.6/32   *[BGP/170] 00:09:00, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.102 via em1.90
203.0.113.18/32  *[BGP/170] 00:09:00, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.103 via em1.90
203.0.113.10/32  *[BGP/170] 00:09:00, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.101 via em1.90
                  [BGP/170] 00:09:00, localpref 100
                    AS path: I, validation-state: unverified
                  > to 198.51.100.102 via em1.90

Complete configuration is available on GitHub. Configurations for GoBGP, BIRD or FRR (running on Cumulus Linux) are also available.11 The configuration for the private routing domain is similar. To avoid dedicated boxes, a solution is to run the route reflectors on some of the top of the rack switches.

Hypervisor configuration

So, let’s tackle the last step: the hypervisor configuration. We use BIRD (1.6.x) as a BGP daemon. It maintains three internal routing tables (public, private and local-out). We use a template with the common properties to connect to a route reflector:

template bgp rr_client {
  local as 65000;   # Local ASN matches route reflector ASN
  import all;       # Accept all received routes
  export all;       # Send all routes to remote peer
  next hop self;    # Modify next-hop with the IP used for this BGP session
  bfd yes;          # Enable BFD
  direct;           # Not a multi-hop BGP session
  ttl security yes; # GTSM is enabled
  add paths rx;     # Enable ADD-PATH reception (for anycast)

  # Low timers to establish sessions faster
  connect delay time 1;
  connect retry time 5;
  error wait time 1,5;
  error forget time 10;
}

table public;
protocol bgp RR1_public from rr_client {
  neighbor 198.51.100.126 as 65000;
  table public;
}
# […]

With the above configuration, all routes in BIRD’s public table are sent to the route reflector 198.51.100.126. Any route from the route reflector is accepted. We also need to connect BIRD’s public table to the kernel’s one:12

protocol kernel kernel_public {
  persist;
  scan time 10;
  import filter {
    # Take any route from kernel,
    # except our last-resort default route
    if krt_metric < 4294967294 then accept;
    reject;
  };
  export all;      # Put all routes into kernel
  learn;           # Learn routes not added by BIRD
  merge paths yes; # Build ECMP routes if possible
  table public;    # BIRD's table name
  kernel table 90; # Kernel table number
}

We also need to enable BFD on all interfaces:

protocol bfd {
  interface "*" {
    interval 100ms;
    multiplier 5;
  };
}

To avoid losing BFD packets when the conntrack table is full, it is safer to disable connection tracking for these datagrams:

ip46tables -t raw -A PREROUTING -p udp --dport 3784 \
  -m addrtype --dst-type LOCAL -j CT --notrack
ip46tables -t raw -A OUTPUT -p udp --dport 3784 \
  -m addrtype --src-type LOCAL -j CT --notrack
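
ip46tables is not a standard binary; it is assumed here to be a small wrapper applying the same rule with both iptables and ip6tables, along these lines:

#!/bin/sh
# ip46tables: run the same rule through both the IPv4 and IPv6 firewalls
iptables "$@"
ip6tables "$@"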

Some missing bits are:

Once the BGP sessions have been established, we can query the kernel for the installed routes:

$ ip route show table public proto bird
default
        nexthop via 198.51.100.1 dev eth0.public weight 1
        nexthop via 198.51.100.254 dev eth1.public weight 1
203.0.113.6
        nexthop via 198.51.100.102 dev eth0.public weight 1
        nexthop via 198.51.100.202 dev eth1.public weight 1
203.0.113.18
        nexthop via 198.51.100.103 dev eth0.public weight 1
        nexthop via 198.51.100.203 dev eth1.public weight 1
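
The BGP and BFD sessions themselves can be checked from BIRD’s command-line client. Output is omitted here; with BIRD 1.6, birdc6 would be used for the IPv6 daemon:

# List protocol states (BGP sessions, BFD, kernel protocols)
birdc show protocols
# Count the routes received in the public table
birdc show route table public count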

Performance

You may be worried about how much memory Linux uses when handling many routes. Well, don’t be:

  • 128 MiB can fit 1 million IPv4 routes, and
  • 512 MiB can fit 1 million IPv6 routes.

BIRD uses about the same amount of memory for its own purposes. As for lookup times, performance is excellent with IPv4 and still quite good with IPv6:

  • 30 ns per lookup with 1 million IPv4 routes, and
  • 1.25 µs per lookup with 1 million IPv6 routes.

Therefore, the impact of letting Linux handle many routes is very low. For more details, see “IPv4 route lookup on Linux” and “IPv6 route lookup on Linux.”

Reverse path filtering

To avoid spoofing, reverse path filtering is enabled on virtual guest interfaces: Linux will verify that the source address is legitimate by checking whether the answer would use the incoming interface as the outgoing interface. This effectively prevents any spoofing from guests.

For IPv4, reverse path filtering can be enabled either through a per-interface sysctl14 or through the rpfilter match of Netfilter. For IPv6, only the second method is available.

# For IPv6, use NetFilter
ip6tables -t raw -N RPFILTER
ip6tables -t raw -A RPFILTER -m rpfilter -j RETURN
ip6tables -t raw -A RPFILTER -m rpfilter --accept-local \
  -m addrtype --dst-type MULTICAST -j DROP
ip6tables -t raw -A RPFILTER -m limit --limit 5/s --limit-burst 5 \
  -j LOG --log-prefix "NF: rpfilter: " --log-level warning
ip6tables -t raw -A RPFILTER -j DROP
ip6tables -t raw -A PREROUTING -i vnet+ -j RPFILTER

# For IPv4, use sysctls
sysctl -qw net.ipv4.conf.all.rp_filter=0
for iface in /sys/class/net/vnet*; do
    sysctl -qw net.ipv4.conf.${iface##*/}.rp_filter=1
done

There is no need to prevent L2 spoofing as there is no gain for the attacker.

Keeping guests in the dark

An important aspect of the solution is to ensure guests believe they are attached to a classic L2 network (with an IP in a subnet).

The first step is to provide them with a working default gateway. On the hypervisor, this can be done by assigning the default gateway IP directly to the guest interface:

ip addr add 203.0.113.254/32 dev vnet5 scope link

Our main goal is to ensure Linux answers ARP requests for the gateway IP. Configuring a /32 is enough for this, and we do not want to configure a larger subnet: by default, Linux would install a route for the subnet to this interface, which would be incorrect.15

For IPv6, this is not needed as we rely on link-local addresses instead.

A guest may also try to speak with other guests on the same subnet. The hypervisor will answer ARP requests on their behalf. Once it starts receiving IP traffic, it will route it to the appropriate interface. This can be done by enabling ARP proxying on the guest interface:

sysctl -qw net.ipv4.conf.vnet5.proxy_arp=1
sysctl -qw net.ipv4.neigh.vnet5.proxy_delay=0

For IPv6, Linux NDP proxying is far less convenient. Instead, ndppd can handle this task. For each interface, we use the following configuration snippet:

proxy vnet5 {
  rule 2001:db8:cb00:7100::/64 {
    static
  }
}

For DHCP, some daemons may have difficulty handling this odd configuration (with the /32 IP address on the interface), but Dnsmasq accepts such an oddity (a minimal sketch follows the radvd example below). For IPv6, assuming the assigned IP address is an EUI-64 one, radvd works with the following configuration on each interface:

interface vnet5 {
  AdvSendAdvert on;
  prefix 2001:db8:cb00:7100::/64 {
    AdvOnLink on;
    AdvAutonomous on;
    AdvRouterAddr on;
  };
};
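
For the DHCP part mentioned above, a minimal Dnsmasq snippet could look like the following; the guest MAC address and assigned IP are placeholders, and the netmask handed out is the subnet the guest believes it is on, not the /32 configured on the interface:

# Static lease for the guest behind vnet5, pointing it at the gateway IP
interface=vnet5
dhcp-range=203.0.113.0,static
dhcp-host=52:54:00:00:00:05,203.0.113.10
dhcp-option=option:router,203.0.113.254
dhcp-option=option:netmask,255.255.255.0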

Conclusion and future work

This setup should work with BIRD 1.6.3 and a Linux 3.15+ kernel. Compared to legacy L2 networks, it brings flexibility and resiliency while keeping guests unaware of the change. By handing routing over to Linux, this design is also cheap as existing equipment can be reused. Still, operating such a solution is simple enough once the basic concepts are understood—IP rules, routing tables and BGP sessions.

There are several potential improvements:

using VRF
Starting from Linux 4.3, L3 VRF domains enable binding interfaces to routing tables. We could have three VRFs: public, private and local-out. This would improve performance by removing most IP rules (but until Linux 4.8, performance is crippled because offloading features are not enabled; see commit 7889681f4a6c). For more information, have a look at the kernel documentation. A rough sketch follows this list.
full L3 routing
The BGP setup can be enhanced and simplified by using an L3 fabric and using some autoconfiguration features. Cumulus published a nice book, “BGP in the datacenter,” on this topic. However, this requires all BGP speakers to support these features. On the hypervisors, this would mean using FRR while the various network equipments would need to run Cumulus Linux.
BGP resiliency with BGP LLGR
Using short BFD timers makes our network react quickly to any disruption by invalidating faulty paths without relying on link status. However, under load or congestion, BFD packets may be lost, making the whole hypervisor unreachable until BGP sessions can be brought up again. Some BGP implementations support Long-Lived BGP Graceful Restart, an extension allowing stale routes to be retained with a lower priority (see draft-uttaro-idr-bgp-persistence-03). This is an ideal solution to our problem: these stale routes are used only as a last resort, after all links have failed. Currently, no open source implementation supports this draft.
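
As a rough illustration of the VRF approach mentioned above, interfaces would be enslaved to a VRF device bound to an existing routing table. This is only a sketch assuming Linux 4.3+; the interface names and the table number 90 reuse the values from earlier sections:

# Create a VRF device bound to the public table and move public-facing
# interfaces (and public guest interfaces) into it
ip link add vrf-public type vrf table 90
ip link set vrf-public up
ip link set eth0.public master vrf-public
ip link set vnet6 master vrf-public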

  1. MC-LAG has been standardized in IEEE 802.1AX-2014. However, most vendors are likely to stick with their implementations. With MC-LAG, control planes remain independent. 

  2. An incident can stem from an operator mistake, but also from software bugs, which are more likely to happen in complex implementations during infrequent operations, like a software upgrade. 

  3. These routes should not be distributed over BGP. Hypervisors should receive a default route with a lower metric from edge routers. 

  4. A static ARP entry can also be added with the remote MAC address. 

  5. For example, a Juniper QFX5100 supports about 200k IPv4 routes (about $10,000, with Broadcom Trident II chipset). On the other hand, an Arista 7208SR supports 1.2M IPv4 routes (about $20,000, with Broadcom Jericho chipset), through the use of an external TCAM. A Juniper MX240 would support more than 2M IPv4 routes (about $30,000 for an empty chassis with two routing engines, with Juniper Trio chipset) with a lower density. 

  6. From a scalability point of view, with switches able to handle 32k MAC addresses, the fabric can host more than 8,000 hypervisors (more than 5 million virtual guests). Therefore, cost-effective switches can be used as both leaves and spines. Each hypervisor has to handle all routes, an easy task for Linux. 

  7. Use of routing instances would enable hosting several route reflectors on the same box. This is not used in this example but should be considered to reduce costs. 

  8. This prevents the use of any authentication mechanism: BGP usually relies on TCP MD5 signatures (RFC 2385) to authenticate BGP sessions. On most OSes, this requires knowing the allowed peers. To tighten security a bit in the absence of authentication, we use the Generalized TTL Security Mechanism (RFC 5082). For JunOS, the configuration presented here (with ttl 255) is incomplete: a firewall filter is also needed.

  9. Unfortunately, for no good reason, JunOS doesn’t support the BGP add-path extension in a routing instance. Such a configuration is possible with Cumulus Linux. 

  10. Only BIRD comes with BFD support out of the box but it does not support implicit peers. FRR needs Cumulus’ PTMD. If you don’t care about BFD, GoBGP is really nice as a route reflector. 

  11. Kernel tables are numbered. ip can use names declared in /etc/iproute2/rt_tables.

  12. It should be noted that ECMP with IPv6 only works from BIRD 1.6.1. Moreover, when using Linux 4.11 or more recent, you need to apply commit 98bb80a243b5.

  13. For a given interface, Linux uses the maximum value between the sysctl for all and the one for the interface. 

  14. It is possible to prevent Linux from installing a connected route by using the noprefixroute flag. However, this flag is only available since Linux 4.4 for IPv4. Only use this flag if your DHCP server is giving you a hard time, as it may trigger other issues (related to the promotion of secondary addresses).

CryptogramLocating Secret Military Bases via Fitness Data

In November, the company Strava released an anonymous data-visualization map showing all the fitness activity by everyone using the app.

Over this weekend, someone realized that it could be used to locate secret military bases: just look for repeated fitness activity in the middle of nowhere.

News article.

Planet DebianShirish Agarwal: Webmail and whole class of problems.

Yesterday I was reading Daniel Pocock’s ‘Do the little things matter’ and while I agree with parts of his assessment, I feel it is incomplete unless taken from a user’s perspective, one with limited resources, knowledge, etc. I am a gmail user, so I am trying to add a bit of perspective here. I usually wait a day or more when I feel myself getting inflamed/heated, as the post seemed to me a bit arrogant in perspective, implying gmail users don’t have any sense of privacy. While he is perfectly entitled to his opinion, I *think* just blaming gmail is an easy way out; the problems are multi-faceted. Allow me to explain what I mean.

The problems he has shared I do not think are Gmail’s alone but all webmail providers, those providing services free of cost as well as those providing services for a fee. Regardless of what you think, the same questions arise whether you use one provider or the other. Almost all webmail providers give you a mailbox, an e-mail id and a web interface to interact with the mails you get.

The first problem which Daniel essentially tries to convey is the deficit of trust. I *think* that applies to all webmail providers. Until and unless you can audit and inspect the code you are just ‘trusting’ somebody else to provide you a service. What pained me while reading his blog post is that he could have gone much further but chose not to. What happens when webmail providers break your trust was not explored at all.

Most of the webmail providers I know are outside my geographical jurisdiction. While in one way it is good that the government of the day cannot directly order them to check my mails, it also means that I have no means to file a suit or prosecute the company in case breaches do occur. I am talking here as an everyday user, a student, and not a corporation which can negotiate, make iron-clad agreements and have some sort of liability claim for unforeseen circumstances. So no matter how you skin it, most users, or to put it more bluntly, almost all non-corporate users, are at a disadvantage when negotiating the terms of a contract with their mail provider.

So whether the user uses one webmail provider or another, it’s the same thing. Even startups like riseup, who updated/shared the canary, show that even they are vulnerable. Also, it probably is easier for webmail services to have backdoors as they can be pressured by one government or another.

So the only way to really solve it is having your own mail server, which truthfully is no solution as it’s a full-time job. The reason is that you are responsible for everything. For each new vulnerability you come to know of, you are supposed to either patch it, get it patched, or have some sort of workaround. In the last 4-5 years alone, it has become more complex as more and more security features have been added, as each new vulnerability or class of vulnerabilities has revealed itself. Add to that, a mail server should at the very least have something like RAID 1 to lessen data corruption. While I have seen friends who have the space and the money to invest in and maintain a mail server, most people won’t have the time, energy and space to do the needful. I don’t see that changing in the near future at least.

Add to that, over the years when I worked for companies, I often found I needed more than one e-mail client, as emails in a professional setting need to be responded to quickly, and most GUI-based mail clients have subtle bugs which you only come to know while using them.

A couple of years back I was working with Hamaralinux. They have their own mail server. Without going into any technical details about the features needed and wanted by both parties, I started out using Thunderbird. I was using stable releases of Thunderbird. Even then, I used to see subtle bugs which sometimes corrupted the mail database or caused one problem or another. I had to resort to using Evolution, which provided comparable features, and even there I found bugs, so most of the time I ended up hopping between the two mail clients.

Now if you look at the history of the two clients, you would assume that most of the bugs should not be there, but surprisingly they were. At least for Thunderbird, I remember Gecko used to create a lot of problems, besides other things. I did report the bugs I encountered, and while some of them were worked upon, fixes often took days and sometimes even weeks. The case was somewhat similar with Evolution. At times I also witnessed broken formatting and things like that, but that is out of the purview of this topic.

Crudely, AFAIK these are the basic functions an email client absolutely needs to perform –

a. Authenticate the user to the mail server
b. If the user is genuine, go ahead to next step or reject the user at this stage itself.
c. If the user is genuine. let them go to their mailbox.
d. Once you enter the mailbox (mbox), it probably looks at the time-stamp of when the last mail was delivered and sees if any new mail has arrived by looking at the diff between times (either using GMT or using epoch+GMT).
e. If any new mail has come it starts transferring those mails to your box.
f. If there are any new mails which need to be sent it would transfer them at this point.
g. If there are any automatic acknowledgments of mails received and that feature is available, it would do that as well.
h. Ideally you should be able to view and compose replies offline at will.

In reality, at times I used to see transfers not completed, meaning that the mail server still had mails but for some reason the connection got broken (maybe due to some path in-between or something else entirely).

At times, even notifications of new mails did not come.

Sometimes, offline, Thunderbird used to lock mails or the mbox at my end, and I had to either use Evolution or some third-party tool to read the mails and rely on webmail to send my reply.

Notice in all this I haven’t mentioned ssh or any sort of encryption or anything like that.

It took me long time to figure out https://wiki.mozilla.org/MailNews:Logging but as you can see it deviates you from the work you wanted to do in the first place.

I am sure some people would suggest either Emacs or alpine or some other tool which works, and I’m sure it worked right off the bat for them; for me, I wanted something with a GUI that I didn’t have to think too much about. It also points to the reason why Thunderbird was eventually moved out of Mozilla, in a sense, so that the community could do feature work and bug-fixing faster than Mozilla did or had the resources or the will to do.

From a user perspective, I find webmail more compelling, even with leakages as Daniel described, because even though it’s ‘free’ it also has built-in redundancy. AFAIK they have enough redundant copies of the mail database so that even if the node where my mails live dies, it will simply resurrect them from the other copies and give them to me in a timely fashion.

While I do hope that in the long run we get better tools, in the short-to-medium term, at least from my perspective, it’s more about which compromises you are able to live with.

While I’m too small and too common a citizen for the government to take notice of me, I think it’s too easy to blame ‘X’ or ‘Y’. I believe the real revolution will only begin when there are universal data protection laws for all citizens, irrespective of country, and companies and governments are made answerable and liable for any sort of interactive digital service provided. Unless we raise people’s consciousness about security in general and have some sort of multi-stakeholder meetings and understanding in real life, including people from security, e-mail providers, general users, free software hackers, regulators and, if possible, even people from the legislature, I believe we will just be running around in circles.

TEDLee Cronin’s ongoing quest for print-your-own medicine, and more news from TED speakers

Behold, your recap of TED-related news:

Print your own pharmaceutical factory. As part of an ongoing quest to make pharmaceuticals easier to manufacture, chemist Lee Cronin and his team at the University of Glasgow have designed a way to 3D-print a portable “factory” for the complicated and multi-step chemical reactions needed to create useful drugs. It’s a grouping of vessels about the size of water bottles; each vessel houses a different type of chemical reaction. Pharmacists or doctors could create specific drugs by adding the right ingredients to each vessel from pre-measured cartridges, following a simple step-by-step recipe. The process could help replace or supplement large chemical factories, and bring helpful drugs to new markets. (Watch Cronin’s TED Talk)

How Amit Sood’s TED Talk spawned the Google Art selfie craze. While Amit Sood was preparing for his 2016 TED Talk about Google’s Cultural Institute and Art Project, his co-presenter Cyril Diagne, a digital interaction artist, suggested that he include a prototype of a project they’d been playing with, one that matched selfies to famous pieces of art. Amit added the prototype to his talk, in which he matched live video of Cyril’s face to classic artworks — and when the TED audience saw it, they broke out in spontaneous applause. Inspired, Amit decided to develop the feature and add it to the Google Arts & Culture app. The new feature launched in December 2017, and it went viral in January 2018. Just like the live TED audience before it, online users loved it so much that the app shot to the number one spot in both the Android and iOS app stores. (Watch Sood’s TED Talk)

A lyrical film about the very real danger of climate change. Funded by a Kickstarter campaign by director and producer Matthieu Rytz, the documentary Anote’s Ark focuses on the clear and present danger of global warming to the Pacific Island nation of Kiribati (population: 100,000). As sea levels rise, the small, low-lying islands that make up Kiribati will soon be entirely covered by the ocean, displacing the population and their culture. Former president Anote Tong, who’s long been fighting global warming to save his country and his constituents, provides one of two central stories within the documentary. The film (here’s the trailer) premiered at the 2018 Sundance Festival in late January, and will be available more widely soon; follow on Facebook for news. (Watch Tong’s TED Talk)

An animated series about global challenges. Sometimes the best way to understand a big idea is on a whiteboard. Throughout 2018, Rabbi Jonathan Sacks and his team are producing a six-part whiteboard-animation series that explains key themes in his theology and philosophy around contemporary global issues. The first video, called “The Politics of Hope,” examines political strife in the West, and ways to change the culture from the politics of anger to the politics of hope. Future whiteboard videos will delve into integrated diversity, the relationship between religion and science, the dignity of difference, confronting religious violence, and the ethics of responsibility. (Watch Rabbi Sacks’ TED Talk)

Nobody wins the Google Lunar X Prize competition :( Launched in 2007, the Google Lunar X Prize competition challenged entrepreneurs and engineers to design low-cost ways to explore space. The premise, if not the work itself, was simple — the first privately funded team to get a robotic spacecraft to the moon, send high-resolution photos and video back to Earth, and move the spacecraft 500 meters would win a $20 million prize, from a total prize fund of $30 million. The deadline was set for 2012, and was pushed back four times; the latest deadline was set to be March 31, 2018. On January 23, X Prize founder and executive chair Peter Diamandis and CEO Marcus Shingles announced that the competition was over: It was clear none of the five remaining teams stood a chance of launching by March 31. Of course, the teams may continue to compete without the incentive of this cash prize, and some plan to. (Watch Diamandis’ TED Talk)

15 photos of America’s journey towards inclusivity. Art historian Sarah Lewis took control of the New Yorker photo team’s Instagram last week, sharing pictures that answered the timely question: “What are 15 images that chronicle America’s journey toward a more inclusive level of citizenship?” Among the iconic images from Gordon Parks and Carrie Mae Weems, Lewis also includes an image of her grandfather, who was expelled from the eleventh grade in 1926 for asking why history books ignored his own history. In the caption, she tells how he became a jazz musician and an artist, “inserting images of African Americans in scenes where he thought they should—and knew they did—exist.” All the photos are ones she uses in her “Vision and Justice” course at Harvard, which focuses on art, race and justice. (Watch Lewis’ TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Krebs on SecurityFile Your Taxes Before Scammers Do It For You

Today, Jan. 29, is officially the first day of the 2018 tax-filing season, also known as the day fraudsters start requesting phony tax refunds in the names of identity theft victims. Want to minimize the chances of getting hit by tax refund fraud this year? File your taxes before the bad guys can!

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

According to the IRS, consumer complaints over tax refund fraud have been declining steadily over the years as the IRS and states enact more stringent measures for screening potentially fraudulent applications.

If you file your taxes electronically and the return is rejected, and if you were the victim of identity theft (e.g., if your Social Security number and other information was leaked in the Equifax breach last year), you should submit an Identity Theft Affidavit (Form 14039). The IRS advises that if you suspect you are a victim of identity theft, continue to pay your taxes and file your tax return, even if you must do so by paper.

If the IRS believes you were likely the victim of tax refund fraud in the previous tax year they will likely send you a special filing PIN that needs to be entered along with this year’s return before the filing will be accepted by the IRS electronically. This year marks the third out of the last five that I’ve received one of these PINs from the IRS.

Of course, filing your taxes early to beat the fraudsters requires one to have all of the tax forms needed to do so. As a sole proprietor, this is a great challenge because many companies take their sweet time sending out 1099 forms and such (even though they’re required to do so by Jan. 31).

A great many companies are now turning to online services to deliver tax forms to contractors, employees and others. For example, I have received several notices via email regarding the availability of 1099 forms online; most say they are sending the forms in snail mail, but that if I need them sooner I can get them online if I just create an account or enter some personal information at some third-party site.

Having seen how so many of these sites handle personal information, I’m not terribly interested in volunteering more of it. According to Bankrate, taxpayers can still file their returns even if they don’t yet have all of their 1099s — as long as you have the correct information about how much you earned.

“Unlike a W-2, you generally don’t have to attach 1099s to your tax return,” Bankrate explains. “They are just issued so you’ll know how much to report, with copies going to the IRS so return processors can double-check your entries. As long as you have the correct information, you can put it on your tax form without having the statement in hand.”

In past tax years, identity thieves have used data gleaned from a variety of third-party and government Web sites to file phony tax refund requests — including from the IRS itself! One of their perennial favorites was the IRS’s Get Transcript service, which previously had fairly lax authentication measures.

After hundreds of thousands of taxpayers had their tax data accessed through the online tool, the IRS took it offline for a bit and then brought it back online but requiring a host of new data elements.

But many of those elements — such as your personal account number from a credit card, mortgage, home equity loan, home equity line of credit or car loan — can be gathered from multiple locations online with almost no authentication. For example, earlier this week I heard from Jason, a longtime reader who was shocked at how little information was required to get a copy of his 2017 mortgage interest statement from his former lender.

“I called our old mortgage company (Chase) to retrieve our 1098 from an old loan today,” Jason wrote. “After I provided the last four digits of the social security # to their IVR [interactive voice response system] that was enough to validate me to request a fax of the tax form, which would have included sensitive information. I asked for a supervisor who explained to me that it was sufficient to check the SSN last 4 + the caller id phone number to validate the account.”

If you’ve taken my advice and placed a security freeze on your credit file with the major credit bureaus, you don’t have to worry about thieves somehow bypassing the security on the IRS’s Get Transcript site. That’s because the IRS uses Experian to ask a series of knowledge-based authentication questions before an online account can even be created at the IRS’s site to access the transcript.

Now, anyone who reads this site regularly should know I’ve been highly critical of these KBA questions as a means of authentication. But the upshot here is that if you have a freeze in place at Experian (and I sincerely hope you do), Experian won’t even be able to ask those questions. Thus, thieves should not be able to create an account in your name at the IRS’s site (unless of course thieves manage to successfully request your freeze PIN from Experian’s site, in which case all bets are off).

While you’re getting your taxes in order this filing season, be on guard against fake emails or Web sites that may try to phish your personal or tax data. The IRS stresses that it will never initiate contact with taxpayers about a bill or refund. If you receive a phishing email that spoofs the IRS, consider forwarding it to phishing@irs.gov.

Finally, tax season also is when the phone-based tax scams kick into high gear, with fraudsters threatening taxpayers with arrest, deportation and other penalties if they don’t make an immediate payment over the phone. If you care for older parents or relatives, this may be a good time to remind them about these and other phone-based scams.

CryptogramEstimating the Cost of Internet Insecurity

It's really hard to estimate the cost of an insecure Internet. Studies are all over the map. A methodical study by RAND is the best work I've seen at trying to put a number on this. The results are, well, all over the map:

"Estimating the Global Cost of Cyber Risk: Methodology and Examples":

Abstract: There is marked variability from study to study in the estimated direct and systemic costs of cyber incidents, which is further complicated by the considerable variation in cyber risk in different countries and industry sectors. This report shares a transparent and adaptable methodology for estimating present and future global costs of cyber risk that acknowledges the considerable uncertainty in the frequencies and costs of cyber incidents. Specifically, this methodology (1) identifies the value at risk by country and industry sector; (2) computes direct costs by considering multiple financial exposures for each industry sector and the fraction of each exposure that is potentially at risk to cyber incidents; and (3) computes the systemic costs of cyber risk between industry sectors using Organisation for Economic Co-operation and Development input, output, and value-added data across sectors in more than 60 countries. The report has a companion Excel-based modeling and simulation platform that allows users to alter assumptions and investigate a wide variety of research questions. The authors used a literature review and data to create multiple sample sets of parameters. They then ran a set of case studies to show the model's functionality and to compare the results against those in the existing literature. The resulting values are highly sensitive to input parameters; for instance, the global cost of cyber crime has direct gross domestic product (GDP) costs of $275 billion to $6.6 trillion and total GDP costs (direct plus systemic) of $799 billion to $22.5 trillion (1.1 to 32.4 percent of GDP).

Here's Rand's risk calculator, if you want to play with the parameters yourself.

Note: I was an advisor to the project.

Separately, Symantec has published a new cybercrime report with their own statistics.

Worse Than FailureRepresentative Line: The Mystery of the SmallInt

PT didn’t provide very much information about today’s Representative Line.

Clearly bits and bytes was not something studied in this SQL stored procedure author. Additionally, Source control versions are managed with comments. OVER 90 Thousand!

        --Declare @PrevNumber smallint
                --2015/11/18 - SMALLINT causes overflow error when it goes over 32000 something 
                -- - this sp errors but can only see that when
                -- code is run in SQL Query analyzer 
                -- - we should also check if it goes higher than 99999
        DECLARE @PrevNumber int         --2015/11/18

Fortunately, I am Remy Poirot, the world’s greatest code detective. To your untrained eyes, you simply see the kind of comment which would annoy you. But I, an expert, with experience of the worst sorts of code man may imagine, can piece together the entire lineage of this code.

Let us begin with the facts: no source control is in use. Version history is managed in the comments. From this, we can deduce a number of things: the database where this code runs is also where it is stored. Changes are almost certainly made directly in production.

A statue of Hercule Poirot in Belgium

Which means that when those changes fail, they may only be detected when the “code is run in SQL Query Analyzer”. This ties in with the “changes in production/no source control” deduction, but it also tells us that it is possible to run this code, have it fail, and have no one notice. This means this code must be part of an unattended process, a batch job of some kind. Even an overflow error vanishes into the ether.

This code also, according to the comments, should “also check if [@PrevNumber] goes higher than 99999”. This is our most vital clue, for it tells us that the content of the value has a maximum width- more than 5 characters to represent it is a problem. This obviously means that the target system is a mainframe with a flat-file storage model.

Already, from one line and a handful of comments, we’ve learned a great deal about this code, but one need not be the world’s greatest code detective to figure out this much. Let’s see what else we can tease out.

@PrevNumber must tie to some ID in the database, likely the “last processed ID” from the previous run of the batch job. The confusion over smallint and need to enforce a five-digit limit implies that this database isn’t actually in control of its data. Either the data comes from a front-end with no validation- certainly possible- or it comes from an external system. But a value greater than 99999 isn’t invalid in the database- otherwise they could enforce that restriction via a constraint. This means the database holds data coming from and going to different places- it’s a “business integration” database.

With these clues, we can assemble the final picture of the crime scene.

In a dark corner of a datacenter are the components of a mainframe system. The original install was likely in the 70s, and while it saw updates and maintenance for the next twenty years, starting in the 90s it was put on life support. “It’s going away, any day now…” Unfortunately, huge swathes of the business depended on it and no one is entirely certain what it does. They can’t simply port business processes to a new system, because no one knows what the business processes are. They’re encoded as millions of lines of code in a dialect of COBOL no one understands.

A conga-line of managers have passed through the organization, trying to kill the mainframe. Over time, they’ve brought in ERP systems from Oracle or SAP. Maybe they’ve taken some home-grown ERP written by users in Access and tried to extend them to an enterprise solution. Worse, over the years, acquisitions come along, bringing new systems from other vendors under the umbrella of one corporate IT group.

All these silos need to talk to each other. They also have to be integrated into the suite of home-grown reporting tools, automation, and Excel spreadsheets that run the business. Shipping can only pull from the mainframe, while the P/L dashboard the executives use can only pull from SAP. At first, it’s a zoo of ETL jobs, until someone gets the bright idea to centralize it.

They launch a project, and give it a slick three-letter-acronym, like “QPR” or “LMP”. Or, since there are only so many TLAs one can invent, they probably reuse an existing one, like “GPS” or “FBI”. The goal: have a central datastore that integrates data from every silo in the organization, and then the silos can pull back out the data they want for their processes. This project has a multi-million dollar budget, and has exceeded that budget twice over, and is not yet finished.

The code PT supplied for us is but one slice of that architecture. It’s pulling data from one of those centralized tables in the business integration database, and massaging it into a format it can pass off to the mainframe. Like a murder reconstructed from just a bit of fingernail, we’ve laid bare the entire crime with our powers of observation, our experience, and our knowledge of bad code.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianSean Whitton: Debian Policy call for participation -- January 2018

Looks like there won’t be a release of Policy this month, but please consider contributing so we can get one out next month. Here’s a selection of bugs:

Consensus has been reached and help is needed to write a patch

#685746 debian-policy Consider clarifying the use of recommends

#749826 [multiarch] please document the use of Multi-Arch field in debian/c…

#757760 please document build profiles

#759316 Document the use of /etc/default for cron jobs

#761219 document versioned Provides

#767839 Linking documentation of arch:any package to arch:all

#770440 policy should mention systemd timers

#793499 The Installed-Size algorithm is out-of-date

#823256 Update maintscript arguments with dpkg >= 1.18.5

#835451 Building as root should be discouraged

#853779 Clarify requirements about update-rc.d and invoke-rc.d usage in mai…

#874206 allow a trailing comma in package relationship fields

Wording proposed, awaiting review from anyone and/or seconds by DDs

#582109 document triggers where appropriate

#610083 Remove requirement to document upstream source location in debian/c…

#649530 [copyright-format] clearer definitions and more consistent License:…

#662998 stripping static libraries

#682347 mark ‘editor’ virtual package name as obsolete

#737796 copyright-format: support Files: paragraph with both abbreviated na…

#742364 Document debian/missing-sources

#756835 Extension of the syntax of the Packages-List field.

#786470 [copyright-format] Add an optional “License-Grant” field

#835451 Building as root should be discouraged

#845255 Include best practices for packaging database applications

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

Merged for the next release

#299007 Transitioning perms of /usr/local

#515856 remove get-orig-source

#886890 Fix for found typos

#888437 Several example scripts are not valid.

Planet DebianRuss Allbery: Review: Roads and Bridges

Review: Roads and Bridges, by Nadia Eghbal

Publisher: Ford Foundation
Copyright: July 2016
Format: Epub
Pages: 143

Subtitled The Unseen Labor Behind Our Digital Infrastructure, Roads and Bridges isn't a book. It's a long report for the Ford Foundation, available for free from their web site. But I read it like a book, so you get a review anyway.

If, like me, you've spent years in the free software community, you'll know much of this already. Eghbal starts with a survey of the history of free software and open source (skewed towards the practical and economic open source analysis, and essentially silent on ethics), and then a survey of how projects are currently organized. The emphasis, consistent with the rest of the report, is on how these free software building blocks underlie nearly all the consumer software and services used today. Eghbal singles out OpenSSL as her example of lack of infrastructure support due to Heartbleed and the subsequent discussion of how vital OpenSSL is but how little funding it received.

Eghbal hit her stride for me at the end of the third section, which tries to understand why people contribute to open source without being paid. I'm a bit dubious that many people contribute to build their personal reputation, since that's not a commonly stated reason in my areas of the free software community, but Eghbal's analysis of the risk of this motive from the infrastructure perspective seemed on point if this is becoming common. Better was her second motive: "the project became unexpectedly popular, and the maintainer feels obligated to support it." Yes. There is so much of this in free software, and it's a real threat to the sustainability of projects because it's a description of the trajectory of burnout. It's also a place where a volunteer culture and the unfairness of unpaid labor come into uncomfortable tension. Eghbal very correctly portrays her third reason, "a labor of love," as not that obviously distinct from that feeling of obligation.

The following discussion of challenges rightfully focuses on the projects that are larger than a weekend hobby but smaller than Linux:

However, many projects are trapped somewhere in the middle: large enough to require significant maintenance, but not quite so large that corporations are clamoring to offer support. These are the stories that go unnoticed and untold. From both sides, these maintainers are told they are the problem: Small project maintainers think mid-sized maintainers should just learn to cope, and large project maintainers think if the project were "good enough," institutional support would have already come to them.

Eghbal includes a thoughtful analysis of the problems posed by the vast increase in the number of programmers and the new economic focus on software development. This should be a good thing for free software, and in some ways it is, but the nature of software and human psychology tends towards fragmentation. It's more fun to write a new thing than do hard maintenance work on an old code base. The money is also almost entirely in building new things while spending as little time as possible on existing components. Industry perception is that open source accelerates new business models by allowing someone to build their new work on top of a solid foundation, but this use is mostly parasitic in practice, and the solid foundation doesn't stay solid if no one contributes to it.

Eghbal closes with a survey of current funding models for open source software, from corporate sponsorship to crowdfunding to foundations, and some tentative suggestions for principles of successful funding. This is primarily a problem report, though; don't expect much in the way of solutions. Putting together an effective funding model is difficult, community-specific, and requires thoughtful understanding of what resources are most needed and who can answer that need. It's also socially fraught. A lot of people work on these projects because they're not part of the capitalist system of money-seeking, and don't want to deal with the conflicts and overhead that funding brings.

I was hoping Eghbal would propose more solutions than she did, but I'm not surprised. I've been through several of these funding conversations in various communities. The problem is very hard, on both economic and social levels. But despite the lack of solutions, the whole report was more interesting than I was expecting given how familiar I already am with this problem. Eghbal's background is in venture capital, so she looks at infrastructure primarily through the company-building lens, but she's not blind to the infrastructure gaps those companies leave behind or even make worse. It's a different and clarifying angle on the problem than mine.

I realized, reading this, that while I think of myself as working on infrastructure, nearly all of my contributions have been in the small project. Only INN (back in the heyday of Usenet), OpenAFS (which I'm no longer involved in), and Debian rise to the level of significant infrastructure projects that might benefit from funding. Debian is large enough that, while it has resource challenges, it's partly transitioned into the lower echelons of institutional support. And INN is back in weekend project territory, since Usenet isn't what it was.

This report made me want to get involved in some more significant infrastructure project in need of this kind of support, but simultaneously made it clear how difficult it is to do this on a hobby basis. And it's remarkably hard to find corporate sponsorship for this sort of work that doesn't involve so much complexity and uncertainty that it's hard to justify leaving a stable and well-understood job. Which, of course, is much of Eghbal's point.

Eghbal also surfaces the significant tension between the volunteer, interest-based allocation of resources native to free software, and the money-based allocation of resources of a surrounding capitalist society. Projects are usually the healthiest and the happiest when they function as volunteer communities: they spontaneously develop governance structures that work for people as volunteers, they tend towards governance where investment of effort translates into influence (not without its problems, but generally better than other models), and each contributor has a volunteer's freedom to stop doing things they aren't enjoying (although one shouldn't underestimate the obligation factor of working on a project used by other people). But since nearly everyone has to spend the majority of their time on paying work, it's very difficult to find sustained and significant resources for volunteer projects. You need funding, so that people can be paid, but once people are paid they're no longer volunteers, and that fundamentally alters the social structure of the project. Those changes are rarely for the better, since the motives of those paying are both external to the project (not part of the collaborative decision-making process) and potentially overriding given how vital they are to the project.

It's a hard problem. I avoided it for years by living in the academic world, which is much better at reconciling these elements than for-profit companies, but the academic world doesn't have enough total resources, or the right incentives, to maintain this type of infrastructure.

The largest oversight I saw in this report was the lack of discussion of the international nature of open source development coupled with the huge discrepancy in cost of living in different parts of the world. This poses strange and significant fairness issues for project funding that I'm quite surprised Eghbal didn't notice: for the same money required to support full-time work by a current maintainer who lives in New York or San Francisco, one could fund two or three (or even more) developers in, say, some eastern European or southeast Asian countries with much lower costs of living and average incomes. Eghbal doesn't say a word about the social challenges this creates.

Other than that, though, this is a thoughtful and well-written survey of the resource problems facing the foundations of nearly all of our digital world. Free software developers will be annoyed but unsurprised by the near-total disregard of ethical considerations, but here the economic and ethical case arrive at roughly the same conclusion: nearly all the resources are going to companies and projects that are parasitical on a free software foundation, that foundation is nowhere near as healthy as people think it is, and the charity-based funding and occasional corporate sponsorship is inadequate and concentrated on the most visible large projects. For every Linux, with full-time paid developers, heavy corporate sponsorship, and sustained development with a wide variety of funding models, there are dozens of key libraries or programs developed by a single person in their scant free time, and dozens of core frameworks that are barely maintained and draw more vitriol than useful assistance.

Worth a read if you have an interest in free software governance or funding models, particularly since it's free.

Rating: 7 out of 10

TEDTalks from TEDNYC Idea Search 2018

Cloe Shasha and Kelly Stoetzel hosted the fast-paced TED Idea Search 2018 program on January 24, 2018 at TED HQ in New York, NY. (Photo: Ryan Lash / TED)

TED is always looking for new voices with fresh ideas — and earlier this winter, we opened a challenge to the world: make a one-minute audition video that makes the case for your TED Talk. More than 1,200 people applied to be a part of the Idea Search program this year, and on Wednesday night at our New York headquarters, 13 audition finalists shared their ideas in a fast-paced program. Here are voices you may not have heard before — but that you’ll want to hear more from soon.

Ruth Morgan shares her work preventing the misinterpretation of forensic evidence. (Photo: Ryan Lash / TED)

Forensic evidence isn’t as clear-cut as you think. For years, forensic science research has focused on making it easier and more accurate to figure out what a trace — such as DNA or a jacket fiber — is and who it came from, but that doesn’t help us interpret what the evidence means. “What we need to know if we find your DNA on a weapon or gunshot residue on you is how did it get there and when did it get there,” says forensic scientist Ruth Morgan. These gaps in understanding have real consequences: forensic evidence is often misinterpreted and used to convict people of crimes they didn’t commit. Morgan and her team are committed to finding ways to answer the why and how, such as determining whether it’s possible to get trace evidence on you during normal daily activities (it is) and how trace DNA can be transferred. “We need to dramatically reduce the chance of forensic evidence being misinterpreted,” she says. “We need that to happen so that you never have to be that innocent person in the dock.”

The intersection of our senses. An experienced composer and filmmaker, Philip Clemo has been on a quest to determine if people can experience imagery with the same depth that they experience music. Research has shown that sound can impact how we perceive visuals, but can visuals have a similarly profound impact? In his live performances, Clemo and his band use abstract imagery in addition to abstract music to create a visual experience for the audience. He hopes that people can have these same experiences in their everyday lives by quieting their minds to fully experience the “visual music” of our surrounding environment — and improve our connection to our world.

Reading the Bible … without omission. At a time when he was a recovering fundamentalist and longtime atheist, David Ellis Dickerson received a job offer as a quiz question writer and Bible fact-checker for the game show The American Bible Challenge. Among his responsibilities: coming up with questions that conveniently ignored the sections of the Bible that mention slavery, concubines and incest. The omission expectations he was faced with made him realize that evangelicals read the Bible in the same way they watch reality television: “with a willing, even loving, suspension of disbelief.” Now, he invites devout Christians to read the Bible in its most unedited form, to recognize its internal contradictions and to grapple with its imperfections.

Danielle Bilot suggests three simple but productive actions we can take to help bees: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food. (Photo: Ryan Lash / TED)

To bee or not to bee? The most famous bee species of recent memory has overwhelmingly been the honey bee. For years, their concerning disappearance has made international news and been the center of research. Environmental designer Danielle Bilot believes that the honey bee should share the spotlight with the 3,600 other species that pollinate much of the food we eat every day in the US, such as blueberries, tomatoes and eggplants. Honey bees, she says, aren’t even native to North America (they were originally brought over from Europe) and therefore have a tougher time successfully pollinating these and many other indigenous crops. Regardless of species, human activity is harming them, and Bilot suggests three simple but productive actions we can take to make their lives easier and revive their populations: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food.

What if technology protected your hearing? Computers that talk to us and voice-enabled technology like Siri, Alexa and Google Home are changing the importance of hearing, says ear surgeon Matthew Bromwich. And with more than 500 million people suffering from disabling hearing loss globally, the importance of democratizing hearing health care is more relevant than ever. “How do we use our technology to improve the clarity of our communication?” Bromwich asks. He and his team have created a hearing testing technology called “SHOEBOX,” which gives hearing health care access to more than 70,000 people in 32 countries. He proposes using technology to help prevent this disability, amplify sound clarity, and paint a new future for hearing.

Welcome to the 2025 Technology Design Awards, with your host, Tom Wujec. Rocking a glittery dinner jacket, design guru Tom Wujec presents a science-fiction-y awards show from eight years into the future, honoring the designs that made the biggest impact in technology, consumer goods and transportation — capped off by a grand impact award chosen live onstage by an AI. While the designs seem fictional — a hacked auto, a self-rising house, a cutting-edge prosthetic — the kicker to Tom’s future award show is that everything he shows is in development right now.

In collaboration with the MIT Media Lab, Nilay Kulkarni used his skills as a self-taught programmer to build a simple tech solution to prevent human stampedes during the Kumbh Mela, one of the world’s largest crowd gatherings, in India. (Photo: Ryan Lash / TED)

A 15-year-old solves a deadly problem with a low-cost device. Every four years, more than 30 million Hindus gather for the Kumbh Mela, the world’s largest religious gathering, in order to wash themselves of their sins. Once every 12 years, it takes place in Nashik, a city in western India that is ordinarily home to 1.5 million residents. With such a massive crowd in a small space, stampedes inevitably happen, and in 2003, 39 people were killed during the festival in Nashik. In 2014, then-15-year-old Nilay Kulkarni decided he wanted to find a solution. He recalls: “It seemed like a mission impossible, a dream too big.” After much trial and error, he and collaborators at the MIT Media Lab came up with a cheap, portable, effective stampede stopper called “Ashioto” (meaning footstep in Japanese): a pressure-sensor-equipped mat which counts the number of people stepping on it and sends the data over the internet to authorities so they can monitor the flow of people in real time. Five mats were deployed at the 2015 Nashik Kumbh Mela, and thanks to their use and other innovations, for the first time ever no stampedes occurred there. Much of the code is now available to the public to use for free, and Kulkarni is trying to improve the device. His dream: for Ashiotos to be used at all large gatherings, like the other Kumbh Melas, the Hajj and even at major concerts and sports events.

A new way to think about health care. Though doctors and nurses dominate people’s minds when it comes to health care, Tamekia MizLadi Smith is more interested in the roles of non-clinical staff in creating effective and positive patient experiences. As an educator and spoken word poet, Smith uses the acronym “G.R.A.C.E.D.” to empower non-clinical staff to be accountable for data collection and to provide compassionate care to patients. Under the belief that compassionate care doesn’t begin and end with just clinicians, Smith asks that desk specialists, parking attendants and other non-clinical staff be trained and treated as integral parts of well-functioning health care systems.

Mad? Try some humor. “The world seems humor-impaired,” says comedian Judy Carter. “It just seems like everyone is going around angry: going, ‘Politics is making me so mad; my boss is just making me so mad.'” In a sharp, zippy talk, Carter makes the case that no one can actually make you mad — you always have a choice of how to respond — and that anger actually might be the wrong way to react. Instead, she suggests trying humor. “Comedy rips negativity to shreds,” she says.

Want a happier, healthier life? Look to your friends. In the relationship pantheon, friends tend to place third in importance (after spouses and blood relatives). But we should make them a priority in our lives, argues science writer Lydia Denworth. “The science of friendship suggests we should invest in relationships that deliver strong bonds. We should value people who are positive, stable, cooperative forces,” Denworth says. While friendship was long ignored by academics, researchers are now studying it and finding it provides us with strong physical and emotional benefits. “In this time when we’re struggling with an epidemic of loneliness and bitter political divisions, [friendships] remind us what joy and connection look like and why they matter,” Denworth says.

An accessible musical wonderland. With quick but impressive musical interludes, Matan Berkowitz introduces the “Airstrument” — a new type of instrument that allows anyone to create music in a matter of minutes by translating movement into sound. This technology is part of a series of devices Berkowitz has developed to enable musicians with disabilities (and anyone who wants to make music) to express themselves in non-traditional ways. “Creation with intention,” he raps, accompanied by a song created on the Airstrument. “Now it’s up to us to wake up and act.”

Divya Chander shares her work studying what human brains look like when they lose and regain consciousness. (Photo: Ryan Lash / TED)

Where does consciousness go when you’re under anesthesia? Divya Chander is an anesthesiologist, delivering specialized doses of drugs that put people in an altered state before surgery. She often wonders: Where do people’s brains go while they’re under? What do they perceive? The question has led her into a deeper exploration of perception, awareness and consciousness itself. In a thoughtful talk, she suggests that we have a lot to learn about consciousness … that we could learn by studying unconsciousness.

The art of creation without preparation. To close out the night, vocalist and improviser Lex Empress creates coherent lyrics from words written by audience members on paper airplanes thrown onto the stage. Empress is accompanied by virtuoso pianist and producer Gilian Baracs, who also improvises everything he plays. Their music reminds us to enjoy the great improvisation that is life.

,

Planet DebianDaniel Pocock: Let's talk about Hacking (EPFL, Lausanne, 20 February 2018)

I've been very fortunate to have the support from several free software organizations to travel to events around the world and share what I do with other people. It's an important mission in a world where technology is having an increasing impact on our lives. With that in mind, I'm always looking for ways to improve my presentations and my presentation skills. As part of mentoring programs like GSoC and Outreachy, I'm also looking for ways to help newcomers in our industry to maximize their skills in communicating about the great work they do when they attend their first event.

To that end, one of the initiatives I've taken this year is participating in the Toastmasters organization. I've attended several meetings of the Toastmasters group at EPFL and on 20 February 2018, I'll give my first talk there on the subject of Hacking.

If you live in the area, please come along. Entrance is free, there is plenty of parking available in the evening and it is close to the metro too. Please try to arrive early so as not to disrupt the first speaker. Location map, Add to your calendar.

The Toastmasters system encourages participants to deliver a series of ten short (5-7 minute) speeches, practicing a new skill each time.

The first of these, The Ice Breaker, encourages speakers to begin using their existing skills and experience. When I read that in the guide, I couldn't help wondering if that is a cue to unleash some gadget on the audience.

Every group provides a system for positive feedback, support and mentoring for speakers at every level. It is really wonderful to see the impact that this positive environment has for everybody. At the EPFL meetings, I've met a range of people, some with far more speaking experience than me but most of them are working their way through the first ten speeches.

One of the longest running threads on the FSFE discussion list in 2017 saw several people arguing that it is impossible to share ideas without social media. If you have an important topic you want to share with the world, could public speaking be one way to go about it, and does this possibility refute the argument that we "need" social media to share ideas? Is it more valuable to learn how to engage with a small audience for five minutes than to have an audience of hundreds on Twitter who scroll past you in half a second as they search for cat photos? If you are not in Lausanne, you can easily find a Toastmasters club near you anywhere in the world.

Planet DebianDirk Eddelbuettel: digest 0.6.15

And yet another small maintenance release, now at version 0.6.15, of the digest package arrived on CRAN and in Debian today.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'sha-512', 'crc32', 'xxhash32', 'xxhash64' and 'murmur32' algorithms) permitting easy comparison of R language objects.

Just like release 0.6.13 in December, and release 0.6.14 two weeks ago, this release accommodates a request by R Core. This time it was Kurt who improved POSIXlt extraction yesterday, which required a one-line change to sha1() summaries---which he kindly sent along. We also already had a change by Thierry, who had generalized sha1() to accept a new argument allowing sha256 and sha512 summaries to be created.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.11

Another boring little RVowpalWabbit package update to version 0.0.11 came in response to another CRAN request: We were writing temporary output (a cache file for the fit/prediction, to be precise) to a non-temporary directory, which is now being caught by new tests added by Kurt. And as this is frowned upon, we made the requested change.

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch. I am also thinking that rvw extensions / work may make for a good GSoC 2018 project. Again, if interested, please get in touch.

More information is on the RVowpalWabbit page. Issues and bug reports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMichael Stapelberg: pristine-tar considered harmful

If you want to follow along at home, clone this repository:

% GBP_CONF_FILES=:debian/gbp.conf gbp clone https://anonscm.debian.org/git/pkg-go/packages/golang-github-go-macaron-inject.git

Now, in the golang-github-go-macaron-inject directory, I’m aware of three ways to obtain an orig tarball (please correct me if there are more):

  1. Run gbp buildpackage, creating an orig tarball from git (upstream/0.0_git20160627.0.d8a0b86)
    The resulting sha1sum is d085a04b7b35856be24f8cc4a9a6d9799cdb59b4.
  2. Run pristine-tar checkout
    The resulting sha1sum is d51575c0b00db5fe2bbf8eea65bc7c4f767ee954.
  3. Run origtargz
    The resulting sha1sum is d51575c0b00db5fe2bbf8eea65bc7c4f767ee954.

Have a look at the archive’s golang-github-go-macaron-inject_0.0~git20160627.0.d8a0b86-2.dsc, however: the file entry for the orig tarball reads:

f5d5941c7b77e8941498910b64542f3db6daa3c2 7688 golang-github-go-macaron-inject_0.0~git20160627.0.d8a0b86.orig.tar.xz
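
As a quick aside, here is a minimal Python sketch for checking which locally produced tarball matches that entry (the glob pattern is illustrative, and the plain sha1sum tool works just as well):

#!/usr/bin/env python3
# Compute the sha1sum of candidate orig tarballs so they can be compared
# against the .dsc entry shown above.
import glob
import hashlib
for path in sorted(glob.glob("../golang-github-go-macaron-inject_*.orig.tar.xz")):
    with open(path, "rb") as f:
        print(hashlib.sha1(f.read()).hexdigest(), path)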

So, why did we get a different tarball? Let’s go through the methods:

  1. The uploader must not have used gbp buildpackage to create their tarball. Perhaps they imported from a tarball created by dh-make-golang, or created manually, and then left that tarball in place (which is a perfectly fine, normal workflow).
  2. I’m not entirely sure why pristine-tar resulted in a different tarball than what’s in the archive. I think the most likely theory is that the uploader had to go back and modify the tarball, but forgot to update (or made a mistake while updating) the pristine-tar branch.
  3. origtargz, when it detects pristine-tar data, uses pristine-tar, hence the same tarball as ②.

Had we not used pristine-tar for this repository at all, origtargz would have pulled the correct tarball from the archive.

The above anecdote illustrates the fragility of the pristine-tar approach. In my experience from the pkg-go team, when the pristine-tar branch doesn’t contain outright incorrect data, it is often outdated. Even when everything is working correctly, a number of packagers are disgruntled about the extra work/mental complexity.

In the pkg-go team, we have (independently of this specific anecdote) collectively decided to have the upstream branch track the upstream remote’s master (or similar) branch directly, and get rid of pristine-tar in our repositories. This should result in method ① and ③ working correctly.

In conclusion, my recommendation for any repository is: don’t bother with pristine-tar. Instead, configure origtargz as a git-buildpackage postclone hook in your ~/.gbp.conf to always work with archive orig tarballs:

[clone]
# Ensure the correct orig tarball is present.
postclone=origtargz

[buildpackage]
# Pick up the orig tarballs created by the origtargz postclone hook.
tarball-dir = ..

Planet DebianShirish Agarwal: Economic Migration, Unemployment, Retirement benefits in advanced countries etc.

After my South African DebConf experience, and especially the Doha, Qatar layover soon after my return, a friend from Kerala sent me a link to a movie called Pathemari. For various reasons I could not watch the movie until I came back from the hospital a few months ago. I would recommend that everybody see it if they want to understand the issues from a blue-collar migrant worker's point of view.

Before I venture further, I think a lot of people confuse economic migration with immigration. As can be seen in the movie, the idea of economic migrants is to work and then come back to their own country, while immigration is more about political asylum, freedom of expression and those kinds of ideas. The difference between the two can be seen starkly in one of my favorite movies of all time, 'Moscow on Hudson'.

I have had quite a few discussions with friends from Kerala, last year and in the years before seeing this movie, and had been somewhat flabbergasted by the answers they shared with me at the time. Most of them were along the lines of 'we don't want/need any development. I/We would go to X (any oil-producing country) or the West to make money and then come back home. Then why should we have industry?' While this comes from personal anecdotal experience, including my time in the hospital, I saw similar observations online as well. For example, Northwestern did an article years ago which explains some of the complexity. More recently there has been an IIM Bangalore working paper which corroborates the importance of nursing to Kerala, the state, as well as to the Indian economy as a whole. It's a pretty interesting paper, specifically for those wishing to understand aspects of outward Indian migration (nursing) and some of the expectations placed on such migrants who want to join the labor markets in the Netherlands and Denmark (local language, culture, adaptation etc., all of which is good).

In hindsight, I now agree with parts of the reasoning shared by my colleagues and peers from Kerala, in the context of what has happened in Goa in the recent past and how that affected the state's tourism. While it has been a few years since I last stayed in Goa for two weeks or more, I have always found it to be a little piece of paradise tucked away in a corner.

Similarly, in the context of the rising median age of Americans, which was shared in the previous article, I don't see them replenishing their own ranks with young blood. The baby boom years for America seem to be over (for now and a bit into the future). On the medicine side, since we have been talking about nursing, another observation is that the American government seems set to cut off a whole lot of Americans from medical care, which Mr. Trump did a few days back. The statements shared therein seem like largely a spin story, as no numbers were shared or anything. There was a report I read last year which tells how an urban middle-class American family might suffer depending on how much Medicare is cut.

I have seen something very similar happening in Pune, India, with quite a few insurance companies, medical practitioners, staff etc. prescribing needless medicines or tests, more so if you have insurance. Of course, after you have availed of it, your individual premium will rise as the 'risk' has increased, but this is veering off the main story. Quite a few patients shared their horror stories and lessons with me during my stay in the hospital.

On the labor front, I don't see a way for Americans to fill this work. For example, Patels (a caste and a community) went to the States and found that most Americans do not, or did not, like maintaining motels, so they took over that service, partly because it's a risky business and partly because most motels are run-down etc. Apart from the spin being put on both legal and illegal immigration in the States, it seems, at least to me, that there would be more undocumented people living in America than those coming legally, and that America would suffer economically because of it.

You can see Qatar doing it already, as well as Saudi Arabia trying to be more open, while the States seems to be dancing to another beat altogether.

Coming to the Indian perspective –

Note – Mrs. Sushma Swaraj, Minister of External Affairs, India, has been particularly active and robust in seeking welfare for Indian brethren trying to find work abroad.

One of the few good things the present government has done is have a pro-active foreign policy minister who has been given a free hand to operate; she also seems to trust herself and others to do the right thing. Although she hasn't done much apart from picking the lowest-hanging fruit that had been ripe for the taking for years, it is also a reminder of the apathy most governments have had towards foreign policy, partly due to the socialist structure and culture in education, culture and even the affairs of the state.

While I was reading on the subject I came across I, Daniel Blake. I saw the movie and shared it with my mother. We were both shocked as we watched the trials the gentleman had to go through and, eventually, his passing away. We thought that only the bureaucracy in India was bad; now we know it's the same, at least as far as the UK is concerned.

https://www.theguardian.com/society/2018/jan/19/esther-mcvey-makes-disability-benefits-u-turn-over-payments

https://www.theguardian.com/society/2017/nov/17/benefit-claimants-underpaid-employment-support-allowance

https://www.theguardian.com/society/2016/mar/29/employment-and-support-allowance-the-disability-benefit-cuts-you-have-not-heard-about

https://www.theguardian.com/commentisfree/2018/jan/16/government-policy-poor-people-debt-benefits-universal-credit

http://www.dailymail.co.uk/news/article-3138853/Britain-s-mid-life-crisis-UK-average-age-hits-40-time-population-jumps-500-000-64-6-million.html

https://www.theguardian.com/business/2018/jan/28/freedom-great-deal-of-that-inside-the-eu-brexit

The articles above give some understanding of why the UK opted for Brexit and of the fallout that will probably follow.

Before I end, I want to give a shout-out and kudos to Daniel Echeverry for porting guake to gtk3, with a dark theme. I really like the theme and hope more themes follow in the coming days, weeks and months.

Guake, dark theme and gtk3

Also, another shout-out to Timo Aaltonen for getting a newer snapshot of xserver-xorg-video-intel out for testing.

I do hope to explore the new system a bit more and see what the new CPU and GPU can do in the coming days and weeks. I did some exploration of libsdl1.2 recently (http://lists.alioth.debian.org/pipermail/pkg-sdl-maintainers/2018-January/002711.html) and hope to at least get some know-how about where the newer integrated graphics and power options would become more useful in the short and medium term.

I was also thinking about the impending python3 transition, and it seems that 90% of the big libraries are ready to make it. The biggest laggard seems to be Mozilla, which I guess is still trying to deal with the fallout from Firefox 57.0, the whole WebExtensions bit etc.

At the moment it seems a huge setback for Mozilla; whether they will be able to survive depends entirely on the third-party add-on developers. If that ecosystem doesn't get enriched back to the state it was in before the transition, we could see Firefox losing a lot of users, at least in the short and medium term.

Lastly, I did try to add a new USB device to the USB ID database at https://usb-ids.gowdy.us/read/UD/1ecb/02e2, but there doesn't seem to be a way to know whether the entry got accepted or not 😦

Planet DebianRenata D'Avila: The right to be included in the conversation

"To be GOVERNED is to be watched, inspected, spied upon, directed, law-driven, numbered, regulated, enrolled, indoctrinated, preached at, controlled, checked, estimated, valued, censured, commanded, by creatures who have neither the right nor the wisdom nor the virtue to do so. To be GOVERNED is to be at every operation, at every transaction noted, registered, counted, taxed, stamped, measured, numbered, assessed, licensed, authorized, admonished, prevented, forbidden, reformed, corrected, punished. It is, under pretext of public utility, and in the name of the general interest, to be placed under contribution, drilled, fleeced, exploited, monopolized, extorted from, squeezed, hoaxed, robbed; then, at the slightest resistance, the first word of complaint, to be repressed, fined, vilified, harassed, hunted down, abused, clubbed, disarmed, bound, choked, imprisoned, judged, condemned, shot, deported, sacrificed, sold, betrayed; and to crown all, mocked, ridiculed, derided, outraged, dishonored. That is government; that is its justice; that is its morality."

Pierre-Joseph Proudhon

Lately, to be on the internet is to be governed. A lot of those actions Proudhon mentions might as well refer to our interactions with the companies that 'rule' the web.

A while back, I applied to a tech event made for women, but I didn't get selected to attend it. I really, really wanted to attend, and because it wasn't clear to me why I hadn't been selected (and what I should do next time to get to attend), I sent an email asking. Their answer? Even though registration was put up on a public website for the event and anyone could sign up there, the spots had been reserved for girls who had filled in the form the group had put up on its Facebook page. I didn't use Facebook and I wasn't one of those people, so I had been excluded. Okay.

When I began PyLadies Porto Alegre, I insisted to Liliane that we not make the same mistake: any information about our meetings should be open to anyone who wanted to join. That is why we set up a website which, hopefully, any person with internet access can visit to get the next meeting date. We also created a mailing list (admittedly, with the third-party company MailChimp) which you only need an e-mail address to subscribe to, and which delivers the next meeting date to your inbox. We publish event dates on Quitter.se (an instance of GNU Social), which then replicates them to Twitter.

But the feedback we got on Python Brasil after presenting the work we've been doing with PyLadies Porto Alegre for the past year was the following:

"I didn't even know PyLadies Porto Alegre was so active, that it had organized this much stuff. You people don't divulge much."

"What do you mean?"

"Well, I always check PyLadies Brazil Facebook and there is always stuff about other groups there, but there isn't anything about PyLadies Porto Alegre".

"Yes, well, all our stuff is public in our website, that anyone can access without needing a Facebook account. We publish to GNU Social and Twitter. Our website link is on the PyLadies Brazil site. But if your only source of news is Facebook..."

If your only source of news is Facebook, you will miss tons of the amazing things that are happening out there and that are not on Facebook. The internet and the web itself are way more than Facebook (even though, depending on your access, it might be a bit complicated to get past this; that is not the case for the privileged people I usually talk to, who simply chose to have Facebook as their 'internet portal/newsfeed').

The point is: regardless of which service the people looking to engage with PyLadies Porto Alegre had agreed to provide data to (MailChimp, Twitter or their email provider) - or even if they chose not to provide data at all, simply visiting our site using a proxy, a VPN, Tor or script blockers that allowed them to anonymize their location - we tried to ensure that anyone could get the information about the meetings.

Because Facebook actively excludes people that don't have an account there (requiring login to even look at some posts or events), we didn't bother with a profile there at first (even though we do have a Facebook page now -.-).

Even with all that, it sometimes feels like a one-woman crusade to ensure that the information and participation stay open and free. Because it's a sad reality that if I don't take part and don't stir this conversation in the group, all these efforts are quickly forgotten and people easily fall back into the closed options "that everyone uses" (namely Facebook, WhatsApp, Google Hangouts).

It's important to say that it's not just companies that exclude people. We exclude people from the conversation, from the ability to interact with us, when we choose proprietary (and closed) services to communicate with each other.

In the name of using 'easy' and 'free' (as in beer) tools, people simply overlook that all these closed options require that you (and everyone else who wants to contact you and has to use those platforms) sign a contract with a company to use these services. And it's not even a contract whose terms you can negotiate; it's a contract whose terms the company dictates, terms that can be changed at any point in time, always to benefit these companies.

A Google Map with several spots marked, linked by red lines. Those are places I had visited in my city during a whole month.

This is a picture of all the data Google collected about my movements during one month in Porto Alegre, simply because I owned an Android smartphone. If a person, a company or a government knowing and cataloging every movement you make doesn't freak you out, I honestly don't know what would.

People overlook that all those 'free' services aren't really free. They have a cost. A cost that is being paid with our actual freedom as human beings. They take away our freedom of having a really free internet, and we are complicit. We are allowing them to do that. By giving away our privacy (and the privacy of other fellow human beings), we are allowing these companies (and, a lot of times, governments) to watch and know every step we take, to choose what we read and which websites we access, to shape how we think, to limit our freedom of expression, our freedom of choosing not to have our data in their databases and even our freedom of being. A lot of times, they act without us even knowing what they are doing.

If there is one wish I have for this Data Privacy Day, it is for people to start considering the services they are using and how this affects everyone else around them. I do not choose to have my phone number indexed by Google; you do that for me when you add me to your contacts. I do not choose to have my face identified and indexed by Facebook; it's you who do that every time you upload a picture with me to your timeline. But, most of all, it's not me who chooses to be 'out of reach', 'not to participate in your community or your meeting', 'to isolate myself from communicating on the internet' (even though I am constantly online). It's you who choose to hide behind proprietary services with terms I cannot consciously agree to.

Planet DebianAntoine Beaupré: 4TB+ large disk price review

For my personal backups, I am now looking at 4TB+ single-disk long-term storage. I currently have 3.5TB of offline storage, split across two disks: this is rather inconvenient as I need to plug both into a toaster-like SATA enclosure which gathers dust and performs like crap. Now I'm looking at hosting offline backups at a friend's place, so I need to store everything on a single drive to save space.

This means I need at least 4TB of storage, and those needs are going to continuously expand in the future. Since this is going to be offsite, swapping the drive isn't really convenient (especially because syncing all that data takes a long time), so I figured I would also look at more than 4 TB.

So I built those neat little tables. I took the prices from Newegg.ca, or Newegg.com as a fallback when the item wasn't available in Canada. I used to order from NCIX because it was "more" local, but they unfortunately went bankrupt, and in the worst possible way: the website is still up and you can order stuff, but those orders never ship. Sad to see a 20-year-old institution go out like that; I blame Jeff Bezos.

I also used failure rate figures from the latest Backblaze review, although those should always be taken with a grain of salt. For example, the apparently stellar 0.00% failure rates are all on sample sizes too small to be statistically significant (<100 drives).

All prices are in CAD, sometimes after conversion from USD for items that are not on newegg.ca, as of today.

8TB

Brand Model Price $/TB fail% Notes
HGST 0S04012 280$ 35$ N/A
Seagate ST8000NM0055 320$ 40$ 1.04%
WD WD80EZFX 364$ 46$ N/A
Seagate ST8000DM002 380$ 48$ 0.72%
HGST HUH728080ALE600 791$ 99$ 0.00%

6TB

Brand Model Price $/TB fail% Notes
HGST 0S04007 220$ 37$ N/A
Seagate ST6000DX000 ~222$ 56$ 0.42% not on .ca, refurbished
Seagate ST6000AS0002 230$ 38$ N/A
WD WD60EFRX 280$ 47$ 1.80%
Seagate STBD6000100 343$ 58$ N/A

4TB

Brand Model Price $/TB fail% Notes
Seagate ST4000DM004 125$ 31$ N/A
Seagate ST4000DM000 150$ 38$ 3.28%
WD WD40EFRX 155$ 39$ 0.00%
HGST HMS5C4040BLE640 ~242$ 61$ 0.36% not on .ca
Toshiba MB04ABA400V ~300$ 74$ 0.00% not on .ca
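
As a side note, the $/TB column is simply the price divided by the capacity; here is a tiny Python sketch recomputing a few of the rows above (the drive list is abridged and purely illustrative):

# Recompute the $/TB column from price (CAD) and capacity (TB).
drives = [
    ("HGST 0S04012", 8, 280),
    ("Seagate ST8000NM0055", 8, 320),
    ("Seagate ST4000DM004", 4, 125),
]
for model, capacity_tb, price_cad in drives:
    print(f"{model}: {price_cad / capacity_tb:.0f}$/TB")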

Conclusion

Cheapest per-TB costs seem to be in the 4TB range, but the 8TB HGST comes really close. Reliability for this drive could be an issue, however - I can't explain why it is so cheap compared to other devices... But I guess we'll see where it goes, as I'll just order the darn thing and try it out.

,

Krebs on SecurityFirst ‘Jackpotting’ Attacks Hit U.S. ATMs

ATM “jackpotting” — a sophisticated crime in which thieves install malicious software and/or hardware at ATMs that forces the machines to spit out huge volumes of cash on demand — has long been a threat for banks in Europe and Asia, yet these attacks somehow have eluded U.S. ATM operators. But all that changed this week after the U.S. Secret Service quietly began warning financial institutions that jackpotting attacks have now been spotted targeting cash machines here in the United States.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

A keyboard attached to the ATM port. Image: FireEye

On Jan. 21, 2018, KrebsOnSecurity began hearing rumblings about jackpotting attacks, also known as “logical attacks,” hitting U.S. ATM operators. I quickly reached out to ATM giant NCR Corp. to see if they’d heard anything. NCR said at the time it had received unconfirmed reports, but nothing solid yet.

On Jan. 26, NCR sent an advisory to its customers saying it had received reports from the Secret Service and other sources about jackpotting attacks against ATMs in the United States.

“While at present these appear focused on non-NCR ATMs, logical attacks are an industry-wide issue,” the NCR alert reads. “This represents the first confirmed cases of losses due to logical attacks in the US. This should be treated as a call to action to take appropriate steps to protect their ATMs against these forms of attack and mitigate any consequences.”

The NCR memo does not mention the type of jackpotting malware used against U.S. ATMs. But a source close to the matter said the Secret Service is warning that organized criminal gangs have been attacking stand-alone ATMs in the United States using “Ploutus.D,” an advanced strain of jackpotting malware first spotted in 2013.

According to that source — who asked to remain anonymous because he was not authorized to speak on the record — the Secret Service has received credible information that crooks are activating so-called “cash out crews” to attack front-loading ATMs manufactured by ATM vendor Diebold Nixdorf.

The source said the Secret Service is warning that thieves appear to be targeting Opteva 500 and 700 series Diebold ATMs using the Ploutus.D malware in a series of coordinated attacks over the past 10 days, and that there is evidence that further attacks are being planned across the country.

“The targeted stand-alone ATMs are routinely located in pharmacies, big box retailers, and drive-thru ATMs,” reads a confidential Secret Service alert sent to multiple financial institutions and obtained by KrebsOnSecurity. “During previous attacks, fraudsters dressed as ATM technicians and attached a laptop computer with a mirror image of the ATMs operating system along with a mobile device to the targeted ATM.”

Reached for comment, Diebold shared an alert it sent to customers Friday warning of potential jackpotting attacks in the United States. Diebold’s alert confirms the attacks so far appear to be targeting front-loaded Opteva cash machines.

“As in Mexico last year, the attack mode involves a series of different steps to overcome security mechanism and the authorization process for setting the communication with the [cash] dispenser,” the Diebold security alert reads. A copy of the entire Diebold alert, complete with advice on how to mitigate these attacks, is available here (PDF).

The Secret Service alert explains that the attackers typically use an endoscope — a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body — to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM’s computer.

An endoscope made to work in tandem with a mobile device. Source: gadgetsforgeeks.com.au

“Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers,” reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

A 2017 analysis of Ploutus.D by security firm FireEye called it “one of the most advanced ATM malware families we’ve seen in the last few years.”

“Discovered for the first time in Mexico back in 2013, Ploutus enabled criminals to empty ATMs using either an external keyboard attached to the machine or via SMS message, a technique that had never been seen before,” FireEye’s Daniel Regalado wrote.

According to FireEye, the Ploutus attacks seen so far require thieves to somehow gain physical access to an ATM — either by picking its locks, using a stolen master key or otherwise removing or destroying part of the machine.

Regalado says the crime gangs typically responsible for these attacks deploy “money mules” to conduct the attacks and siphon cash from ATMs. The term refers to low-level operators within a criminal organization who are assigned high-risk jobs, such as installing ATM skimmers and otherwise physically tampering with cash machines.

“From there, the attackers can attach a physical keyboard to connect to the machine, and [use] an activation code provided by the boss in charge of the operation in order to dispense money from the ATM,” he wrote. “Once deployed to an ATM, Ploutus makes it possible for criminals to obtain thousands of dollars in minutes. While there are some risks of the money mule being caught by cameras, the speed in which the operation is carried out minimizes the mule’s risk.”

Indeed, the Secret Service memo shared by my source says the cash out crew/money mules typically take the dispensed cash and place it in a large bag. After the cash is taken from the ATM and the mule leaves, the phony technician(s) return to the site and remove their equipment from the compromised ATM.

“The last thing the fraudsters do before leaving the site is to plug the Ethernet cable back in,” the alert notes.

FireEye said all of the samples of Ploutus.D it examined targeted Diebold ATMs, but it warned that small changes to the malware’s code could enable it to be used against 40 different ATM vendors in 80 countries.

The Secret Service alert says ATMs still running on Windows XP are particularly vulnerable, and it urged ATM operators to update to a version of Windows 7 to defeat this specific type of attack.

This is a quickly developing story and may be updated multiple times over the next few days as more information becomes available.

Planet DebianAntoine Beaupré: A summary of my 2017 work

New years are strange things: for mostly arbitrary reasons, around January 1st we reset a bunch of stuff, change calendars and forget about work for a while. This is also when I forget to do my monthly report and then procrastinate until I figure out I might as well do a year report while I'm at it, and then do nothing at all for a while.

So this is my humble attempt at fixing this, about a month late. I'll try to cover December as well, but since not much has happened then, I figured I could also review the last year and think back on the trends there. Oh, and you'll get chocolate cookies of course. Hang on to your eyeballs, this won't hurt a bit.

Debian Long Term Support (LTS)

Those of you used to reading those reports might be tempted to skip this part, but wait! I actually don't have much to report here and instead you will find an incredibly insightful and relevant rant.

So I didn't actually do any LTS work in December. I actually reduced my available hours to focus on writing (more on that later). Overall, I ended up working about 11 hours per month on LTS in 2017. That is less than the 16-20 hours I was available during that time. Part of that is me regularly procrastinating, but another part is that finding work to do is sometimes difficult. The "easy" tasks often get picked and dispatched quickly, so the stuff that remains, when you're not constantly looking, is often very difficult packages.

I especially remember the pain of working on libreoffice, the KRACK update, more tiff, GraphicsMagick and ImageMagick vulnerabilities than I care to remember, and, ugh, Ruby... Masochists (also known as "security researchers") can find the details of those excruciating experiments in debian-lts for the monthly reports.

I don't want to sound like an old idiot, but I must admit, after working on LTS for two years, that working on patching old software for security bugs is hard work, and not particularly pleasant on top of it. You're basically always dealing with other people's garbage: badly written code that hasn't been touched in years, sometimes decades, that no one wants to take care of.

Yet someone needs to take care of it. A large part of the technical community considers Linux distributions in general, and LTS releases in particular, as "too old to care for". As if our elders, once they passed a certain age, should just be rolled out to the nearest dumpster or just left rotting on the curb. I suspect most people don't realize that Debian "stable" (stretch) was released less than a year ago, and "oldstable" (jessie) is a little over two years old. LTS (wheezy), our oldest supported release, is only four years old now, and will become unsupported this summer, on its fifth year anniversary. Five years may seem like a long time in computing but really, there's a whole universe out there and five years is absolutely nothing in the range of changes I'm interested in: politics, society and the environment range much beyond that shortsightedness.

To put things in perspective, some people I know still run their office on an Apple II, which celebrated its 40th anniversary this year. That is "old". And the fact that the damn thing still works should command respect and admiration, more than contempt. In comparison, the phone I have, an LG G3, is running an unpatched, vulnerable version of Android because it cannot be updated, because it's locked out of the telcos networks, because it was found in a taxi and reported "lost or stolen" (same thing, right?). And DRM protections in the bootloader keep me from doing the right thing and unbricking this device.

We should build devices that last decades. Instead we fill junkyards with tons and tons of precious computing devices that have more precious metals than most people carry as jewelry. We are wasting generations of programmers, hardware engineers, human robots and precious, rare metals on speculative, useless devices that are destroying our society. Working on supporting LTS is a small part in trying to fix the problem, but right now I can't help but think we have a problem upstream, in the way we build those tools in the first place. It's just depressing to be at the receiving end of the billions of lines of code that get created every year. Hopefully, the death of Moore's law could change that, but I'm afraid it's going to take another generation before programmers figure out how far away from their roots they have strayed. Maybe too long to keep ourselves from a civilization collapse.

LWN publications

With that gloomy conclusion, let's switch gears and talk about something happier. So as I mentioned, in December, I reduced my LTS hours and focused instead on finishing my coverage of KubeCon Austin for LWN.net. Three articles have already been published on the blog here:

... and two more articles, about Prometheus, are currently published as exclusives by LWN:

I was surprised to see that the container runtimes article got such traction. It wasn't the most important debate in the whole conference, but there were some amazingly juicy bits, some of which we didn't even cover because they were... uh... rather controversial, and we want the community to stay sane. Or saner, if that word can be applied at all to the container community at this point.

I ended up publishing 16 articles at LWN this year. I'm really happy about that: I just love writing and even if it's in English (my native language is French), it's still better than rambling on my own like I do here. My editors allow me to publish well polished articles, and I am hugely grateful for the privilege. Each article takes about 13 hours to write, on average. I'm less happy about that: I wish delivery was more streamlined, and I spare you the miserable story of the last-minute major changes I sent in some recent articles, for which I again apologize profusely to my editors.

I'm often at a loss when I need to explain to friends and family what I write about. I often give the example of the password series: I wrote a whole article about just how to pick a passphrase, then a review of two geeky password managers and then a review of something that's not quite a password manager and that you shouldn't be using. And on top of that, I even wrote a history of those, but by that time my editors were sick and tired of passwords and understandably made me go away. At this point, neophytes are just scratching their heads and I remind them of the TL;DR:

  1. choose a proper password with a bunch of words picked at random (really random, check out Diceware! A minimal sketch follows this list.)

  2. use a password manager so you have to remember only one good password

  3. watch out where you type those damn things
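
To make the first point concrete, here is a minimal Python sketch of the Diceware idea; the six-word list is a hypothetical stand-in for a real Diceware list (which has 7776 words, one per roll of five dice):

# Pick a passphrase from truly random words; the secrets module uses the
# OS CSPRNG, unlike the default random module.  WORDS is a tiny
# illustrative stand-in for a real Diceware word list.
import secrets
WORDS = ["correct", "horse", "battery", "staple", "orchid", "tundra"]
def passphrase(n_words=6):
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))
print(passphrase())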

I covered two other conferences this year as well: one was the NetDev conference, for which I wrote 4 articles (1, 2, 3, 4). It turned out I couldn't cover NetDev in Korea even though I wanted to, but hopefully that is just "partie remise" (merely postponed), as we say in French... I also covered DebConf in Montreal, but that ended up being much harder than I thought: I got involved in networking and volunteered all over the place. By the time the conference started, I was too exhausted to actually write anything, even though I took notes like crazy and ran around trying to attend everything. I found it's harder to write about topics that are close to home: nothing is new, so you don't get as excited. I still enjoyed writing about the supposed decline of copyleft, which was based on a talk by FSF executive director John Sullivan, and I ended up writing about offline PGP key storage strategies and cryptographic keycards, after buying a token from friendly gniibe at DebConf.

I also wrote about Alioth moving to Pagure, unknowingly joining a long tradition of failed predictions at LWN: a few months later, the tide turned and Debian launched the Alioth replacement as a beta running... GitLab. Go figure - maybe this is a version of the quantum observer effect applied to journalism?

Two articles seemed to have been less successful. The GitHub TOS update was less controversial than I expected it would be and didn't seem to have a significant impact, although GitHub did rephrase some bits of their TOS eventually. The ROCA review didn't seem to bring excited crowds either, maybe because no one actually understood anything I was saying (including myself).

Still, 2017 has been a great ride in LWN-land: I'm hoping to publish even more during the next year and encourage people to subscribe to the magazine, as it helps us publish new articles, if you like what you're reading here of course.

Free software work

Last but not least is my free software work. This was just nuts.

New programs

I have written a bunch of completely new programs:

  • Stressant - a small wrapper script to stress-test new machines. no idea if anyone's actually using the darn thing, but I have found it useful from time to time.

  • Wallabako - a bridge between Wallabag and my e-reader. This is probably one of my most popular programs ever. I get random strangers asking me about it in random places, which is quite nice. Also my first Golang program, something I am quite excited about and wish I was doing more of.

  • Ecdysis - a pile of (Python) code snippets, documentation and standard community practices I reuse across projects. Ended up being really useful when bootstrapping new projects, but probably just for me.

  • numpy-stats - a dumb commandline tool to extract stats from streams. didn't really reuse it so maybe not so useful. interestingly, found similar tools called multitime and hyperfine that will be useful for future benchmarks

  • feed2exec - a new feed reader (just that) which I have been using ever since for many different purposes. I have now replaced feed2imap and feed2tweet with that simple tool, and have added support for storing my articles on https://archive.org/, checking for dead links with linkchecker (below) and pushing to the growing Mastodon federation

  • undertime - a simple tool to show possible meeting times across different timezones. a must if you are working with people internationally!

If I count this right (and I'm omitting a bunch of smaller, less general purpose programs), that is six new software projects, just this year. This seems crazy, but that's what the numbers say. I guess I like programming too, which is arguably a form of writing. Talk about contributing to the pile of lines of code...

New maintainerships

I also got more or less deeply involved in various communities:

And those are just the major ones... I have about 100 repositories active on GitHub, most of which are forks of existing repositories, so actual contributions to existing free software projects. Hard numbers for this are annoyingly hard to come by as well, especially in terms of issues vs commits and so on. GitHub says I have made about 600 contributions in the last year, which is an interesting figure as well.

Debian contributions

I also did a bunch of things in the Debian project, apart from my LTS involvement:

  • Gave up on debmans, a tool I had written to rebuild https://manpages.debian.org, in the face of the overwhelming superiority of the Golang alternative. This is one of the things which lead me to finally try the language and write Wallabako. So: net win.

  • Proposed standard procedures for third-party repositories, which didn't seem to have caught on significantly in the real world. Hopefully just a matter of time...

  • Co-hosted a bug squashing party for the Debian stretch release, also as a way to have the DebConf team meet up.

  • That lead to a two hour workshop at Montreal DebConf which was packed and really appreciated. I'm thinking of organizing this at every DebConf I attend, in a (so far) secret plot to standardize packaging practices by evangelizing new package maintainers to my peculiar ways. I hope to teach again in Taiwan this year, but I'm not sure I'll make it that far across the globe...

  • And of course, I did a lot of regular package maintenance. I don't have good numbers on the exact activity stats here (any way to pull that out easily?) but I now directly maintain 34 Debian packages, a somewhat manageable number.

What's next?

This year, I'll need to figure out what to do with legacy projects. Gameclock and Monkeysign both need to be ported away from GTK2, which is deprecated. I will probably abandon the GUI in Monkeysign, but gameclock will probably need a rewrite of its GUI. This raises the question of how we can maintain software in the long term if even the graphical interface (even Xorg is going away!) is swept out from under our feet all the time. Without this change, both programs could have kept going for another decade without trouble. But now, I need to spend time just to keep those tools from failing to build at all.

Wallabako seems to be doing well on its own, but I'd like to fix the refresh issues that sometimes make the reader unstable: maybe I can write directly to the SQLite database? I tried statically linking sqlite to do some tests about that, but that apparently isn't possible, and the attempt failed.

Feed2exec just works for me. I'm not very proud of the design, but it does its job well. I'll fix bugs and maybe push out a 1.0 release when a long enough delay goes by without any critical issues coming up. So try it out and report back!

As for the other projects, I'm not sure how it's going to go. It's possible that my involvement in paid work means I cannot commit as much to general free software work, but I can't help but just doing those drive-by contributions all the time. There's just too much stuff broken out there to sit by and watch the dumpster fire burn down the whole city.

I'll try to keep doing those reports, of which you can find an archive in monthly-report. Your comments, encouragements, and support make this worth it, so keep those coming!

Happy new year everyone: may it be better than the last, shouldn't be too hard...

PS: Here is the promised chocolate cookie: 🍪 Well, technically, that is a plain cookie, but the only chocolate-related symbol was 🍫 (chocolate bar): modernity is to be expected with technology...

Cory DoctorowI’m speaking at UCSD on Feb 9!

I’m appearing at UCSD on February 9, with a talk called “Scarcity, Abundance and the Finite Planet: Nothing Exceeds Like Excess,” in which I’ll discuss the potentials for scarcity and abundance — and bright-green vs austere-green futurism — drawing on my novels Walkaway, Makers and Down and Out in the Magic Kingdom.


The talk is free and open to the public; the organizers would appreciate an RSVP to galleryinfo@calit2.net.


The lecture will take place in the Calit2 Auditorium in the Qualcomm Institute’s Atkinson Hall headquarters. The talk will begin at 5:00 p.m., and it will be moderated by Visual Arts professor Jordan Crandall, who chairs the gallery@calit2’s 2017-2018 faculty committee. Following the talk and Q&A session, attendees are invited to stay for a public reception.


Doctorow will discuss the economics, material science, psychology and politics of scarcity and abundance as described in his novels WALKAWAY, MAKERS and DOWN AND OUT IN THE MAGIC KINGDOM. Together, they represent what he calls “a 15-year literature of technology, fabrication and fairness.”

Among the questions he’ll pose: How can everyone in the world live like an American when we need six planets’ worth of materials to realize that dream? Doctorow also asks, “Can fully automated luxury communism get us there, or will our futures be miserable austerity-ecology hairshirts where we all make do with less?”


Author Cory Doctorow to Speak at UC San Diego on Scarcity, Abundance and the Finite Planet [Doug Ramsey/UCSD News]

Planet DebianSteinar H. Gunderson: Exploring minimax polynomials with Sollya

Following Fabian Giesen's advice, I took a look at Sollya—I'm not really that much into numerics (and Sollya, like the other stuff that comes out of the same group, is really written by hardcore numerics nerds), but approximation is often useful.

A simple example: When converting linear light values to sRGB, you need to be able to compute the formula f(x) = ((x + ɑ - 1) / ɑ)^ɣ for a given (non-integer) ɑ and ɣ. (Movit frequently needs this. For the specific case of sRGB, GPUs often have hard-coded lookup tables, but they are not always applicable, for instance if the data comes from Y'CbCr.) However, even after simplifications, the exponentiation is rather expensive to run for every pixel, so we'd like some sort of approximation.

If you've done any calculus, you may have heard of Taylor series, which look at the derivatives at a certain point and create a polynomial from them. One of the perhaps most famous is arctan(x) = x - 1/3 x³ + 1/5 x⁵ - 1/7 x⁷ + ..., which gives rise to a simple formula for approximating pi if you set x=1 (since arctan(1) = pi/4). However, for practical approximation, Taylor series are fairly useless; they're accurate near the origin point of the expansion, but don't care at all about what happens far from it. Minimax polynomials are better; they minimize the maximum error over the range of interest.
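
To make this concrete, here is roughly what asking Sollya for such a polynomial looks like. This is only a sketch from memory, using the sRGB constants ɑ = 1.055 and ɣ = 2.4, so check the exact syntax against the current documentation:

/* degree-4 minimax polynomial for the sRGB curve on [0;1], and its maximum error */
f = ((x + 0.055) / 1.055)^2.4;
p = remez(f, 4, [0;1]);
print("polynomial:    ", p);
print("max abs error: ", dirtyinfnorm(p - f, [0;1]));

The related fpminimax command goes a step further and rounds the coefficients to a chosen precision, which is the round-off-aware optimization mentioned in the last point below.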

In the past, I've been using Maple for this (I never liked Mathematica much); it's non-free, but not particularly expensive for a personal license, and it can do pretty much everything I expect from a computer algebra system. However, it would be interesting to see if Sollya could do better. After toying around a bit, it seems there are pros and cons:

  • Sollya appears to be faster. I haven't made any formal benchmarks, but I just feel like I have to wait a lot less for it.
  • I find Sollya's syntax maybe a bit more obscure (e.g., [| to start a list), although this is probably partially personal preference. Its syntax error handling is also a lot less friendly.
  • Sollya appears to be a lot more robust towards actually terminating with a working result. E.g., Maple just fails on optimizing sqrt(x) over 0..1 (a surprisingly hard case), whereas I haven't really been able to make Sollya fail yet except in the case of malformed problems (e.g. asking it to optimize the relative error of a function which is zero at certain points). Granted, I haven't pushed it that hard.
  • Maple supports a much wider range of functions. This is a killer for me; I frequently need something as simple as piecewise functions, and Sollya simply doesn't appear to support them.
  • Maple supports rational expansions, i.e., two polynomials divided by each other (which can often increase performance dramatically—although the execution cost also balloons, of course). Sollya doesn't. On the other hand, Sollya supports expansion over given base functions, e.g. if you happen to have sin(x) computed for whatever obscure reason, you can get an expansion of the type f(x) = a + bsin(x) + cx + dsin(x)² + ex².
  • Maple supports arbitrary weighting of the error (e.g. if you care more about errors at the endpoints)—I find this super-useful, especially if you are dealing with transformed variables or piecewise approximations. Sollya only supports relative and absolute errors, which is more limiting.
  • Sollya can seemingly be embedded as a library. Useful for some, not really relevant for me.
  • And finally, Sollya doesn't optimize coefficients over arbitrary precision; you tell it what accuracy you have to deal with (number of bits in floating or fixed point) and it optimizes the coefficients with that round-off error in mind. (I don't know if it also deals with intermediate roundoff errors when evaluating the polynomial.) Fabian makes a big deal of this, but for fp32, it doesn't really seem to matter much; I did some tests relative to what I had already gotten out of Maple, and the difference in maximum error was microscopic.

So, the verdict? Sollya is certainly good, and I can see myself using it in the future, but for me, it's more of an augmentation than replacing Maple for this use.

Planet Linux AustraliaDonna Benjamin: Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes I feel that free software does not love me.
 
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
 
Tis a mystery. Or is it?
 
It’s fair to say that the Free Software community is still largely made up of blokes who are software developers. The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all: “patches welcome”, they say, “go fix it yourself”.
 
The web community on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives, is better.
 
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
 
Let’s say a customer walks into a hardware store and says they want a drill.  Or perhaps they walk in and stand in front of a shelf simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job they wonder. Should I get a cordless one? Will I really need diamond tipped drill bits? 
 
There's a technique called the 5 Why's that's useful to get under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
 
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
 
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home.  So telling them all about the voltage of the drill, and the huge range of drill bits available, may have helped them choose the right drill for the job, but if we stop to understand the job in the first place, we’re more likely to be able to help that person get what they need to get their job done.
 
User stories are one way we can explore the “Why” behind the software we build. Check out my talk from the Developers Developers miniconf at linux.conf.au on Monday “Turning stories, into software.”
 

 

References

 

Photo by Josh Simmons

,

CryptogramFriday Squid Blogging: Squid that Mate, Die, and Then Sink

The mating and death characteristics of some squid are fascinating.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityRegistered at SSA.GOV? Good for You, But Keep Your Guard Up

KrebsOnSecurity has long warned readers to plant your own flag at the my Social Security online portal of the U.S. Social Security Administration (SSA) — even if you are not yet drawing benefits from the agency — because identity thieves have been registering accounts in peoples’ names and siphoning retirement and/or disability funds. This is the story of a Midwest couple that took all the right precautions and still got hit by ID thieves who impersonated them to the SSA directly over the phone.

In mid-December 2017 this author heard from Ed Eckenstein, a longtime reader in Oklahoma whose wife Ruth had just received a snail mail letter from the SSA about successfully applying to withdraw benefits. The letter confirmed she’d requested a one-time transfer of more than $11,000 from her SSA account. The couple said they were perplexed because both previously had taken my advice and registered accounts with MySocialSecurity, even though Ruth had not yet chosen to start receiving SSA benefits.

The fraudulent one-time payment that scammers tried to siphon from Ruth Eckenstein’s Social Security account.

Sure enough, when Ruth logged into her MySocialSecurity account online, there was a pending $11,665 withdrawal destined to be deposited into a Green Dot prepaid debit card account (funds deposited onto a Green Dot card can be spent like cash at any store that accepts credit or debit cards). The $11,665 amount was available for a one-time transfer because it was intended to retroactively cover monthly retirement payments back to her 65th birthday.

The letter the Eckensteins received from the SSA indicated that the benefits had been requested over the phone, meaning the crook(s) had called the SSA pretending to be Ruth and supplied them with enough information about her to enroll her to begin receiving benefits. Ed said he and his wife immediately called the SSA to notify them of fraudulent enrollment and pending withdrawal, and they were instructed to appear in person at an SSA office in Oklahoma City.

The SSA ultimately put a hold on the fraudulent $11,665 transfer, but Ed said it took more than four hours at the SSA office to sort it all out. Mr. Eckenstein said the agency also informed them that the thieves had signed his wife up for disability payments. In addition, her profile at the SSA had been changed to include a phone number in the 786 area code (Miami, Fla.).

“They didn’t change the physical address perhaps thinking that would trigger a letter to be sent to us,” Ed explained.

Thankfully, the SSA sent a letter anyway. Ed said many additional hours spent researching the matter with SSA personnel revealed that in order to open the claim on Ruth’s retirement benefits, the thieves had to supply the SSA with a short list of static identifiers about her, including her birthday, place of birth, mother’s maiden name, current address and phone number.

Unfortunately, most (if not all) of this data is available on a broad swath of the American populace for free online (think Zillow, Ancestry.com, Facebook, etc.) or else for sale in the cybercrime underground for about the cost of a latte at Starbucks.

The Eckensteins thought the matter had been resolved until Jan. 14, when Ruth received a 1099 form from the SSA indicating they’d reported to the IRS that she had in fact received an $11,665 payment.

“We’ve emailed our tax guy for guidance on how to deal with this on our taxes,” Mr. Eckenstein wrote in an email to KrebsOnSecurity. “My wife logged into SSA portal and there was a note indicating that corrected/updated 1099s would be available at the end of the month. She’s not sure whether that message was specific to her or whether everyone’s seeing that.”

NOT SMALL IF IT HAPPENS TO YOU

Identity thieves have been exploiting authentication weaknesses to divert retirement account funds almost since the SSA launched its portal eight years ago. But the crime really picked up in 2013, around the same time KrebsOnSecurity first began warning readers to register their own accounts at the MySSA portal. That uptick coincided with a move by the U.S. Treasury to start requiring that all beneficiaries receive payments through direct deposit (though the SSA says paper checks are still available to some beneficiaries under limited circumstances).

More than 34 million Americans now conduct business with the Social Security Administration (SSA) online. A story this week from Reuters says the SSA doesn’t track data on the prevalence of identity theft. Nevertheless, the agency assured the news outlet that its anti-fraud efforts have made the problem “very rare.”

But Reuters notes that a 2015 investigation by the SSA’s Office of Inspector General identified more than 30,000 suspicious MySSA registrations, and more than 58,000 allegations of fraud related to MySSA accounts from February 2013 to February 2016.

“Those figures are small in the context of overall MySSA activity – but it will not seem small if it happens to you,” writes Mark Miller for Reuters.

The SSA has not yet responded to a request for comment.

Ed and Ruth’s experience notwithstanding, it’s still a good idea to set up a MySSA account — particularly if you or your spouse will be eligible to withdraw benefits soon. The agency has been trying to beef up online authentication for citizens logging into its MySSA portal. Last summer the SSA began requiring all users to enter a username and password in addition to a one-time security code sent to their email or phone, although as previously reported here that authentication process could be far more robust.

The Reuters story reminds readers to periodically use the MySSA portal to make sure that your personal information – such as date of birth and mailing address – are correct. “For current beneficiaries, if you notice that a monthly payment has not arrived, you should notify the SSA immediately via the agency’s toll-free line (1-800-772-1213) or at your local field office,” Miller advised. “In most cases, the SSA will make you whole if the theft is reported quickly.”

Another option is to use the SSA’s “Block Electronic Access” feature, which blocks any automatic telephone or online access to your Social Security record – including by you (although it’s unclear if blocking access this way would have stopped ID thieves who manage to speak with a live SSA representative). To restore electronic access, you’ll need to contact the Social Security Administration and provide proof of your identity.

Planet DebianEddy Petrișor: Detecting binary files in the history of a git repository

Git, VCSes and binary files

Git is famous and has become popular even in enterprise/commercial environments. But Git is also infamous regarding storage of large and/or binary files that change often, in spite of the fact that they can be stored efficiently. For large files there have been several attempts to fix the issue, with varying degrees of success, the most successful being git-lfs and git-annex.

My personal view is that, contrary to common practice, it is a bad idea to store binaries in any VCS. Still, this practice has been and still is in use in many projects, especially closed source ones. I won't go into the reasons and how legitimate they are; let's just say that we might finally convince people that binaries should be removed from the VCS, git in particular.

Since the purpose of a VCS is to make sure all versions of the stored objects are never lost, Linus designed git in such a way that knowing the exact hash of the tip/head of your git branch, it is guaranteed the whole history of that branch hasn't changed even if the repository was stored in a non-trusted location (I will ignore hash collisions, for practical reasons).

The consequence of this is that if the history is changed one bit, all commit hashes and history after that change will change also. This is what people refer to when they say they rewrite the (git) history, most often, in the context of a rebase.

But did you know that you could use git rebase to traverse the history of a branch and do all sorts of operations such as detecting all binary files that were ever stored in the branch?

Detecting any binary files, only in the current commit

As with everything on *nix, we start with some building blocks, and construct our solution on top of them. Let's first find all files, except the ones in .git:

find . -type f -print | grep -v '^\.\/\.git\/'
Then we can use the 'file' utility to look for non-text files:
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text'
And if there is any such file, it means the current git commit is one that needs our attention, otherwise, we're fine.
(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
Of course, we assume here that the work tree is clean.

Checking all commits in a branch

Since we want to make this an efficient process and we only care if the history contains binaries, and branches are cheap in git, we can use a temporary branch that can be thrown away after our processing is finalized.
Making a new branch for some experiments is also a good idea to avoid losing the history, in case we do some stupid mistakes during our experiment.

Hence, we first create a new branch which points to the exact same tip the branch to be checked points to, and move to it:
git checkout -b test_bins
Git has many commands that facilitate automation, and in my case I want to basically run the chain of commands on all commits. For this we can put our chain of commands in a script:

cat > ../check_file_text.sh
#!/bin/sh

(find . -type f -print | grep -v '^\.\/\.git\/' | xargs file )| egrep -v '(ASCII|Unicode) text' && (echo 'ERROR:' && git show --oneline -s) || echo OK
then (ab)use 'git rebase' to execute that for us for all commits:
git rebase --exec="sh ../check_file_text.sh" -i $startcommit
After we execute this, the editor window will pop up, just save and exit. Assuming $startcommit is the hash of the first commit we know to be clean or beyond which we don't care to search for binaries, this will look in all commits since then.

Here is an example output when checking the newest 5 commits:

$ git rebase --exec="sh ../check_file_text.sh" -i HEAD~5
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Executing: sh ../check_file_text.sh
OK
Successfully rebased and updated refs/heads/test_bins.

Please note this process can change the history on the test_bins branch, but that is why we used a throw-away branch anyway, right? After we're done, we can go back to another branch and delete the test branch.

$ git co master
Switched to branch 'master'

Your branch is up-to-date with 'origin/master'
$ git branch -D test_bins
Deleted branch test_bins (was 6358b91).
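
As an aside, if all you need is a quick list of candidate binaries over the whole history, without creating any branches, the --numstat output of git log can help: it reports the added/removed line counts for binary blobs as "-". A small sketch (assuming no path names contain tabs):

# list every path that ever appeared as a binary blob in the current branch
git log --numstat --format= | awk -F'\t' '$1 == "-" && $2 == "-" { print $3 }' | sort -u
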
Enjoy!

CryptogramThe Effects of the Spectre and Meltdown Vulnerabilities

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors' manufacturers, and patched -- at least to the extent possible.

This news isn't really any different from the usual endless stream of security vulnerabilities and patches, but it's also a harbinger of the sorts of security problems we're going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn't possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren't anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They're the future of security -- and it doesn't look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications -- or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn't supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you're visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they're going to receive and execute instructions based on that. If the guess turns out to be correct, it's a performance win. If it's wrong, the microprocessors throw away what they've done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it's a flaw in the very concept of speculative execution. There's no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can't make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user's perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we're only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel's Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they're guarded.

Third, these vulnerabilities will affect computers' functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn't work. Sometimes it's because computers are in cheap products that don't have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets -- groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it's because a computer chip's functionality is so core to a computer's design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world's computer hardware is the new normal. It's going to leave us all much more vulnerable in the future.

This essay previously appeared on TheAtlantic.com.

Planet DebianDirk Eddelbuettel: prrd 0.0.2: Many improvements

The prrd package was introduced recently, and made it to CRAN shortly thereafter. The idea of prrd is simple, and described in some more detail on its webpage and its GitHub repo. Reverse dependency checks are an important part of package development and are easily done in a (serial) loop. But these checks are also generally embarrassingly parallel as there is no or little interdependency between them (besides maybe shared build dependencies). See the following screenshot (running six parallel workers, arranged in a split byobu session).

This note announces the second, and much improved, release. The package now runs on all operating systems supported by R and no longer has external system requirements. Several functions were improved, two new helper functions were added in a so-far still preliminary form, and everything is more robust now.

The release is summarised in the NEWS entry:

Changes in prrd version 0.0.2 (2018-01-24)

  • The package no longer requires wget.

  • Enhanced sanity checker function.

  • Expanded and improved dequeue function.

  • No longer use $HOME in xvfb-run-safe (#2).

  • The use of xvfb-run is now conditional on the OS (#3).

  • The set of available packages is no longer constrained to CRAN, but could be via the local setup script (#4).

  • The dequeue() function now uses system2().

  • The enqueue() function checks if no reverse dependencies are found and stops (#6).

  • The enqueue() function checks for repository information being set (#5).

CRANberries provides the usual summary of changes to the previous version. See the aforementioned webpage and its repo for details. For more questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureError'd: #TITLE_OF_ERRORD2#

Joe P. wrote, "When I tried to buy a coffee at the airport with my contactless VISA card, it apparently thought my name was '%s'."

 

"Instead of outsourcing to Eastern Europe or the Asian subcontinent, companies should be hiring from Malta. Just look at these people! They speak fluent base64!" writes Michael J.

 

Raffael wrote, "While I can proudly say that I am working on bugs, the Salesforce Chatter site should probably consider doing the same."

 

"Wow! Thanks! Happy Null Year to you too!" Alexander K. writes.

 

Joel B. wrote, "Yesterday was the first time I've ever seen a phone with a 'License Violation'. Phone still works, so I guess there's that."

 

"They missed me so much, they decided to give me...nothing," writes Timothy.

 

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Light Talks and Close

Lightning Talk

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway. Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive -matching donations
  • Abusing Docker

Closing

  • LCA 2019 will be in Christchurch, New Zealand – http://lca2019.linux.org.au
  • 700 Attendees at 2018
  • 400 talk and 36 Miniconf submissions

 

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed Inside Google 2014 and Chrome / mobile apps
    • Improved performance: Youtube rebuffers 15-18% , Google search latency 3.6 – 8 %
    • 35% of Google’s egress traffic (7% of Internet)
    • Working group started in 2016 to standardized QUIC
    • Turned off at the start of 2016 due to a security problem
    • Doubled in Sept 2016 when it was turned on for the YouTube app
  • Technology
    • Previously – IP -> TCP -> TLS -> HTTP/2
    • With QUIC – IP -> UDP -> QUIC -> HTTP over QUIC
    • Includes crypto and tcp handshake
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered, and is being updated to take them into account
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round-trips if crypto keys are not new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolveable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT-rebinding ( UDP Nat-mapping changes often, maybe every few seconds)
  • UDP is not a transport, you put something in top of UDP to build a transport
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol in middle boxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improve 8% , mobile 3.6 %
    • Low latency: Desktop 1% , Mobile none
    • Highest Latency 90-99% of users: Desktop & mobile 15-16%
    • Video similar
    • Big gain is from 0 RTT handshake
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2 – 3.5 %
    • India – 188ms – 5 – 13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC

Remote Work: My first decade working from the far end of the earth John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son both in 2015, wanted to start in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017 working with High Performance Computing at University Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though was on other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out

Challenges

  • Discipline
    • Feels he is not organised. Doesn’t keep planner uptodate or todo lists etc
    • “You can spend a lot of time reading about time management without actually doing it”
    • Do you need to have the minimum level
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Need to work to hook in with Hobart tech community, Goes to meetups. Plays D&D with friends.
    • Considering going to coworking space. sometimes goes to Cafes etc
  • Setting Boundries
    • Hard to Leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritise Coverage over cost these days for mobile.
    • Sometimes fixed provider go down, need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Aware of particular cultural differences
    • More chances of miscommunication

Opportunities

  • Access to companies and jobs and technologies that you couldn’t get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Job Bof
  • stackoverflow.com/jobs can filter
  • weworkremotely.com

Making it work

  • Be visible
  • Go home at the end of the day
  • Remember real people are at the end of the email

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • High art form of tech writers
    • Content as code
    • Only works when compiled
    • Favoured by tech writers, translated. Up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • eg Users, future-self, Sysadmins, experts, End users, installers
  • Writing Basics
    • Sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • User style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image ( “set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” is us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organising your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage average user to input
  • Knowledge based / CMS
    • Useful to community that known what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick, write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Pre Requisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff that affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Option
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (arch, problem space,discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (roads maps, projects plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – Unique ID, what caused, What mitigation, optional: Link to report

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Work at Microsoft on Open Source and containers, specifically on kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel.
    • Namespaces
    • Cgroups
    • AppArmour in LSM (prevent mounting, writing to /proc etc) (or SELinux)
    • Seccomp (syscall filters, which allowed or denied) – Prevent 150 other syscalls which are uncommon or dangerous.
      • Got list from testing all of dockerhub
      • eg CLONE, UNSHARE
      • NoNewPrivs (exposed as “AllowPrivilegeEsculation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear containers are really VMs

History of Containers

  • OpenVZ – released 2005
  • Linux-Vserver (2008)
  • LXC ( 2008)
  • Docker ( 2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of Open container Initiative
  • Container runtimes are like the new Javascript frameworks

Are Containers Secure

  • Yes
  • and I can prove it
  • VMs / Zones and Jails are like Lego pieces that are already glued together
  • With containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Every container in k8s shares PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though
  • https://contained.af/
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You’d be better off attacking the app, but you are still running a containers default seccomp filters

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on gentoo, emerge all the way down

What if we applied the the same defaults to programming languages?

  • Generate seccomp filters at build-time
    • Previously tried at run time, doesn’t work that well, something always missed
    • At build time we can ensure all code is included in the filter
    • The go compiler writes the assembly for all the syscalls, you can hijack and grab the list of these, create a seccomp filter
      • Not quite that simple
      • plugins
      • exec external stuff
      • can directly exec a syscall in go code, the name passed in via arguments at runtime
  • metaparticle.io
    • Library for cloud-native applications

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs of what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – eg CoreOs
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do, default deny in k8s is a good start
  • DNS – Needs to be namespaced properly or turned off. option: kube-dns as a sidecar
  • Authentication and Authorisation – rbac
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • Making sure everything else is "very dumb" to its surroundings

 

Share

,

Planet DebianDaniel Pocock: Do the little things matter?

In a widely shared video, US Admiral McRaven addressing University of Texas at Austin's Class of 2014 chooses to deliver a simple message: make your bed every day.

A highlight of this talk is the quote The little things in life matter. If you can't do the little things right, you'll never be able to do the big things right.

In the world of free software engineering, we have lofty goals: the FSF's High Priority Project list identifies goals like private real-time communication, security and diversity in our communities. Those deploying free software in industry have equally high ambitions, ranging from self-driving cars to beating the stock market.

Yet over and over again, we can see people taking little shortcuts and compromises. If Admiral McRaven is right, our failure to take care of little decisions, like how we choose an email provider, may be the reason those big projects, like privacy or diversity, appear to be no more than a pie-in-the-sky.

The IT industry has relatively few regulations compared to other fields such as aviation, medicine or even hospitality. Consider a doctor who re-uses a syringe - how many laws would he be breaking? Would he be violating conditions of his insurance? Yet if an IT worker overlooks the contempt for the privacy of Gmail users and their correspondents that is dripping off the pages of the so-called "privacy" policy, nobody questions them. Many people will applaud their IT staff for choices or recommendations like this, because, of course, "it works". A used syringe "just works" too, but who would want one of those?

Google's CEO Eric Schmidt tells us that if you don't have anything to hide, you don't need to worry.

Compare this to the advice of Sun Tzu, author of the indispensable book on strategy, The Art of War. The very first chapter is dedicated to estimating, calculating and planning: what we might call data science today. Tzu unambiguously advises to deceive your opponent, not to let him know the truth about your strengths and weaknesses.

In the third chapter, Offense, Tzu starts out that The best policy is to take a state intact ... to subdue the enemy without fighting is the supreme excellence. Surely this is only possible in theory and not in the real world? Yet when I speak to a group of people new to free software and they tell me "everybody uses Windows in our country", Tzu's words take on meaning he never could have imagined 2,500 years ago.

In many tech startups and even some teams in larger organizations, the oft-repeated mantra is "take the shortcut". But the shortcuts and the things you get without paying anything, without satisfying the conditions of genuinely free software, compromises such as Gmail, frequently involve giving up a little bit too much information about yourself: otherwise, why would they leave the bait out for you? As Mr Tzu puts it, you have just been subdued without fighting.

In one community that has taken a prominent role in addressing the challenges of diversity, one of the leaders recently expressed serious concern that their efforts had been subdued in another way: Gmail's Promotions Tab. Essential emails dispatched to people who had committed to their program were routinely being shunted into the Promotions Tab along with all that marketing nonsense that most people never asked for, so the recipients never saw them.

I pointed out many people have concerns about Gmail and that I had been having thoughts about simply blocking it at my mail server. It is quite easy to configure a mail server to send an official bounce message, for example, in Postfix, it is just one line in the /etc/postfix/access file:

gmail.com   REJECT  The person you are trying to contact hasn't accepted Gmail's privacy policy.  Please try sending the email from a regular email provider.
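
For that entry to actually be consulted, the access table also has to be compiled and referenced from a sender restriction; a minimal sketch for a stock Postfix setup (merge the restriction with whatever smtpd_sender_restrictions you already have) would be:

# point Postfix at the table, rebuild it, and reload the daemon
postconf -e 'smtpd_sender_restrictions = check_sender_access hash:/etc/postfix/access'
postmap /etc/postfix/access
postfix reload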

(NOTE: some people read this and thought I meant everybody should run their own email server, but the above code is just an example to encourage discussion. There is discussion about adding a similar feature to block messages from Gmail to ProtonMail webmail accounts, so anybody can do this without their own server and take back control over their privacy)

Some communities could go further, refusing to accept Gmail addresses on mailing lists or registration forms: would that be the lesser evil compared to a miserable fate in Promotions Tab limbo?

I was quite astounded at the response: several people complained that this was too much for participants to comply with (the vast majority register with a Gmail address) or that it was even showing all Gmail users contempt (can't they smell the contempt for users in the aforementioned Gmail "privacy" policy?). Nobody seemed to think participants could cope with that and if we hope these people are going to be the future of diversity, that is really, really scary.

Personally, I have far higher hopes for them: just as Admiral McRaven's Navy SEALS are conditioned to make their bed every day at boot camp, people entering IT, especially those from under-represented groups, need to take pride in small victories for privacy and security, like saying "No" each and every time they have the choice to give up some privacy and get something "free", before they will ever hope to accomplish big projects and change the world.

If they don't learn these lessons at the outset, like the survival and success habits drilled into soldiers during boot-camp, will they ever? If programs just concentrate on some "job skills" and gloss over the questions of privacy and survival in the information age, how can they ever deliver the power shift that is necessary for diversity to mean something?

Come and share your thoughts on the FSFE discussion list (join, thread and reply).

Please also see the subsequent blog on this topic, Fair communication requires mutual consent

Sociological ImagesChildren Learn Rules for Romance in Preschool

Originally Posted at TSP Discoveries

Photo by oddharmonic, Flickr CC

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn the rules and beliefs associated with romantic relationships and sexuality much earlier.

Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them.

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between opposite gender children was romantically-motivated and these teachers talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” On the other hand, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children ever suggest that girls played the role of “dad” or boys played the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.

(View original at https://thesocietypages.org/socimages)

CryptogramWhatsApp Vulnerability

A new vulnerability in WhatsApp has been discovered:

...the researchers unearthed far more significant gaps in WhatsApp's security: They say that anyone who controls WhatsApp's servers could effortlessly insert new people into an otherwise private group, even without the permission of the administrator who ostensibly controls access to that conversation.

Matthew Green has a good description:

If all you want is the TL;DR, here's the headline finding: due to flaws in both Signal and WhatsApp (which I single out because I use them), it's theoretically possible for strangers to add themselves to an encrypted group chat. However, the caveat is that these attacks are extremely difficult to pull off in practice, so nobody needs to panic. But both issues are very avoidable, and tend to undermine the logic of having an end-to-end encryption protocol in the first place.

Here's the research paper.

Worse Than FailureThe More Things Change: Fortran Edition

Technology improves over time. Storage capacity increases. Spinning platters are replaced with memory chips. CPUs and memory get faster. Moore's Law. Compilers and languages get better. More language features become available. But do these changes actually improve things? Fifty years ago, meteorologists used the best mainframes of the time, and got the weather wrong more than they got it right. Today, they have a global network of satellites and supercomputers, yet they're wrong more than they're right (we just had a snowstorm in NJ that was forecast as 2-4", but got 16" before drifting).

As with most other languages, FORTRAN also added structure, better flow control and so forth. The problem with languages undergoing such a massive improvement is that occasionally, coding styles live for a very long time.

Imagine a programmer who learned to code using FORTRAN IV (variable names up to 6 characters, integers implicitly start with "I" through "N" and reals start with any other letter - unless explicitly declared, flow control via GOTO, etc) writing a program in 2000 (using a then-current compiler but with FORTRAN IV style). Now imagine some PhD candidate coming along in 2017 to maintain and enhance this FORTRAN IV-style code with the latest FORTRAN compiler.
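
For readers who never met that style, here is a small, made-up sketch of what FORTRAN IV-flavoured code looks like when compiled as fixed-form source by a modern compiler: names are implicitly typed by their first letter (I through N are INTEGER, anything else REAL), and flow control is done with labels and GOTO.

C     ITOTAL, N AND I ARE IMPLICITLY INTEGER; AVG IS IMPLICITLY REAL
      DIMENSION IVALS(5)
      DATA IVALS /2, 4, 6, 8, 10/
      N = 5
      ITOTAL = 0
      DO 10 I = 1, N
        ITOTAL = ITOTAL + IVALS(I)
   10 CONTINUE
      AVG = FLOAT(ITOTAL) / FLOAT(N)
      IF (AVG .GT. 5.0) GOTO 20
      WRITE(*,*) 'SMALL AVERAGE: ', AVG
      GOTO 30
   20 WRITE(*,*) 'LARGE AVERAGE: ', AVG
   30 CONTINUE
      END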

A.B. was working at a university with just such a scientific software project as part of earning a PhD. These are just a couple of the things that caused a few head-desk moments.

Include statements. The first variant only allows code to be included. The second allows preprocessor directives (like #define).

    INCLUDE  'path/file'

    #include 'path/file'

Variables. Since the only data types/structures originally available were character, logical, integer, real*4, real*8 and arrays, you had to shoehorn your data into the closest fit. This led to declarations sections that included hundreds of basic declarations. This hasn't improved today as people still use one data type to hold something that really should be implemented as something else. Also, while the compilers of today support encapsulation/modules, back then, everything was pretty much global.

Data structures. The only thing supported back then was multidimensional arrays. If you needed something like a map, you needed to roll your own. This looks odd to someone who cut their teeth on a version of the language where these features are built-in.

Inlining. FORTRAN subroutines support local subroutines and functions which are inlined, which is useful to provide implied visibility scoping. Prudent use allows you to DRY your code. This feature isn't even used, so the same code is duplicated over and over again inline. Any of you folks see that pattern in your new-fangled modern systems?

Joel Spolsky commented about the value of keeping old code around. While there is much truth in his words, the main problem is that the original programmers invariably move on, and as he points out, it is much harder to read (someone else's) code than to write your own; maintenance of ancient code is a real world issue. When code lives across too many language version improvements, it becomes inherently more difficult to maintain as its original form becomes more obsolete.

To give you an idea, take a look at just the declaration section of one module that A.B. inherited (except for a 2-line comment at the top of the file, there were no comments). FWIW, when I did FORTRAN at the start of my career, I used to document the meaning of every. single. abbreviated. variable. name.

      subroutine thesub(xfac,casign,
     &     update,xret,typret,
     &     meop1,meop2,meop12,meoptp,
     &     traop1, traop2, tra12,
     &     iblk1,iblk2,iblk12,iblktp,
     &     idoff1,idoff2,idof12,
     &     cntinf,reoinf,
     &     strinf,mapinf,orbinf)
      implicit none
      include 'routes.h'
      include 'contr_times.h'
      include 'opdim.h'
      include 'stdunit.h'
      include 'ioparam.h'
      include 'multd2h.h'
      include 'def_operator.h'
      include 'def_me_list.h'
      include 'def_orbinf.h'
      include 'def_graph.h'
      include 'def_strinf.h'
      include 'def_filinf.h'
      include 'def_strmapinf.h'
      include 'def_reorder_info.h'
      include 'def_contraction_info.h'
      include 'ifc_memman.h'
      include 'ifc_operators.h'
      include 'hpvxseq.h'

      integer, parameter ::
     &     ntest = 000
      logical, intent(in) ::
     &     update
      real(8), intent(in) ::
     &     xfac, casign
      real(8), intent(inout), target ::
     &     xret(1)
      type(coninf), target ::
     &     cntinf
      integer, intent(in) ::
     &     typret,
     &     iblk1, iblk2, iblk12, iblktp,
     &     idoff1,idoff2,idof12
      logical, intent(in) ::
     &     traop1, traop2, tra12
      type(me_list), intent(in) ::
     &     meop1, meop2, meop12, meoptp
      type(strinf), intent(in) ::
     &     strinf
      type(mapinf), intent(inout) ::
     &     mapinf
      type(orbinf), intent(in) ::
     &     orbinf
      type(reoinf), intent(in), target ::
     &     reoinf

      logical ::
     &     bufop1, bufop2, buf12, 
     &     first1, first2, first3, first4, first5,
     &     msfix1, msfix2, msfx12, msfxtp,
     &     reject, fixsc1, fixsc2, fxsc12,
     &     reo12, non1, useher,
     &     traop1, traop2
      integer ::
     &     mstop1,mstop2,mst12,
     &     igmtp1,igmtp2,igmt12,
     &     nc_op1, na_op1, nc_op2, na_op2,
     &     nc_ex1, na_ex1, nc_ex2, na_ex2, 
     &     ncop12, naop12,
     &     nc12tp, na12tp,
     &     nc_cnt, na_cnt, idxres,
     &     nsym, isym, ifree, lenscr, lenblk, lenbuf,
     &     buftp1, buftp1, bftp12,
     &     idxst1, idxst2, idxt12,
     &     ioff1, ioff2, ioff12,
     &     idxop1, idxop2, idop12,
     &     lenop1, lenop2, len12,
     &     idxm12, ig12ls,
     &     mscmxa, mscmxc, msc_ac, msc_a, msc_c,
     &     msex1a, msex1c, msex2a, msex2c,
     &     igmcac, igamca, igamcc,
     &     igx1a, igx1c, igx2a, igx2c,
     &     idxms, idxdis, lenmap, lbuf12, lb12tp,
     &     idxd12, idx, ii, maxidx
      integer ::
     &     ncblk1, nablk1, ncbka1, ncbkx1, 
     &     ncblk2, nablk2, ncbka2, ncbkx2, 
     &     ncbk12, nabk12, ncb12t, nab12t, 
     &     ncblkc, nablkc,
     &     ncbk12, nabk12,
     &     ncro12, naro12,
     &     iblkof
      type(filinf), pointer ::
     &     ffop1,ffop2,ff12
      type(operator), pointer ::
     &     op1, op2, op1op2, op12tp
      integer, pointer ::
     &     cinf1c(:,:),cinf1a(:,:),
     &     cinf2c(:,:),cinf2a(:,:),
     &     cif12c(:,:),
     &     cif12a(:,:),
     &     cf12tc(:,:),
     &     cf12ta(:,:),
     &     cfx1c(:,:),cfx1a(:,:),
     &     cfx2c(:,:),cfx2a(:,:),
     &     cfcntc(:,:),cfcnta(:,:),
     &     inf1c(:),
     &     inf1a(:),
     &     inf2c(:),
     &     inf2a(:),
     &     inf12c(:),
     &     inf12a(:),
     &     dmap1c(:),dmap1a(:),
     &     dmap2c(:),dmap2a(:),
     &     dm12tc(:),dm12ta(:)

      real(8) ::
     &     xnrm, facscl, fcscl0, facab, xretls
      real(8) ::
     &     cpu, sys, cpu0, sys0, cpu00, sys00
      real(8), pointer ::
     &     xop1(:), xop2(:), xop12(:), xscr(:)
      real(8), pointer ::
     &     xbf1(:), xbf2(:), xbf12(:), xbf12(:), x12blk(:)

      integer ::
     &     msbnd(2,3), igabnd(2,3),
     &     ms12ia(3), ms12ic(3), ig12ia(3), ig12ic(3),
     &     ig12rw(3)

      integer, pointer ::
     &     gm1dc(:), gm1da(:),
     &     gm2dc(:), gm2da(:),
     &     gmx1dc(:), gmx1da(:),
     &     gmx2dc(:), gmx2da(:),
     &     gmcdsc (:), gmcdsa (:),
     &     gmidsc (:), gmidsa (:),
     &     ms1dsc(:), ms1dsa(:),
     &     ms2dsc(:), ms2dsa(:),
     &     msx1dc(:), msx1da(:),
     &     msx2dc(:), msx2da(:),
     &     mscdsc (:), mscdsa (:),
     &     msidsc (:), msidsa (:),
     &     idm1ds(:), idxm1a(:),
     &     idm2ds(:), idxm2a(:),
     &     idx1ds(:), ixms1a(:),
     &     idx2ds(:), ixms2d(:),
     &     idxdc (:), idxdsa (:),
     &     idxmdc (:),idxmda (:),
     &     lstrx1(:),lstrx2(:),lstcnt(:),
     &     lstr1(:), lstr2(:), lst12t(:)

      integer, pointer ::
     &     mex12a(:), m12c(:),
     &     mex1ca(:), mx1cc(:),
     &     mex2ca(:), mx2cc(:)

      integer, pointer ::
     &     ndis1(:,:), dgms1(:,:,:), gms1(:,:),
     &     lngms1(:,:),
     &     ndis2(:,:), dgms2(:,:,:), gms2(:,:),
     &     lngms2(:,:),
     &     nds12t(:,:), dgms12(:,:,:),
     &     gms12(:,:),
     &     lgms12(:,:),
     &     lg12tp(:,:), lm12tp(:,:,:)

      integer, pointer ::
     &     cir12c(:,:), cir12a(:,:),
     &     ci12c(:,:),  ci12a(:,:),
     &     mire1c(:),   mire1a(:),
     &     mire2c(:),   mire2a(:),
     &     mca12(:), didx12(:), dca12(:),
     &     mca1(:),  didx1(:),  dca1(:),
     &     mca2(:),  didx2(:),  dca2(:),
     &     lenstr_array(:,:,:)

c dbg
      integer, pointer ::
     &     dum1c(:), dum1a(:), hpvx1c(:), hpvx1a(:),
     &     dum2c(:), dum2a(:), hpvx2c(:), hpvx2a(:)
      integer ::
     &     msd1(ngastp,2,meop1%op%njoined),
     &     msd2(ngastp,2,meop2%op%njoined),
     &     jdx, totc, tota
c dbg

      type(graph), pointer ::
     &     graphs(:)

      integer, external ::
     &     ielsum, ielprd, imdst2, glnmp, idxlst,
     &     mxdsbk, m2ims4
      logical, external ::
     &     nxtdis, nxtds2, lstcmp,
     &     nndibk, nndids
      real(8), external ::
     &     ddot

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 3

Insights – solving every problem for good Paul Wayper

Sysadmins

  • Too much to check, too little time
  • What does this message mean again
  • Too reactive

How Sysadmins fix problems

  • Read text files and command output
  • Look at them for information
  • Check this information against the knowledge
  • Decide on appropriate solution

Insights

  • Reads text files and outputs
  • Process them into information
  • Use information in rules
  • Rules provide information about Solution

Examples

  • Simple rule – check “localhost” is in /etc/hosts (a rough sketch of this check appears just after this list)
  • Rule 2 – chronyd refuses to fix server’s time since it is out by more than 1000s
    • Checks /var/log/message for error message from chrony
  • Insights rolls up all the checks against messages, so it is only done once
  • Rule 3 – rsyslog dropping messages
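
To make the rule idea concrete, here is a minimal standalone Python sketch of the kind of check described above; it is not the actual Insights rule API, just the general shape of a parse-then-check rule:

    # Minimal sketch of an Insights-style rule: parse a file, apply a check,
    # report a finding.  Illustrative only; this is not the real Insights API.

    def parse_hosts(path="/etc/hosts"):
        """Return the set of hostnames defined in an /etc/hosts-style file."""
        names = set()
        with open(path) as f:
            for line in f:
                line = line.split("#", 1)[0].strip()   # drop comments
                parts = line.split()
                if len(parts) >= 2:                     # address + one or more names
                    names.update(parts[1:])
        return names

    def rule_localhost_present(hostnames):
        """Rule: 'localhost' must appear somewhere in /etc/hosts."""
        if "localhost" not in hostnames:
            return {"rule": "localhost_missing",
                    "solution": "add a '127.0.0.1 localhost' line to /etc/hosts"}
        return None            # no finding, nothing to report

    if __name__ == "__main__":
        finding = rule_localhost_present(parse_hosts())
        print(finding or "OK: localhost is present in /etc/hosts")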

Website

http://red.ht/demo_rules

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 2

Personalisation at Scale: A “Cookie Cutter” Approach Jim O’Halloran

  • Impact of site performance on conversion is huge
  • Magento
    • LAMP stack + Redis or memcached
    • Generally the app is CPU bound
    • Routing / Rendering still time consuming
  • Varnish full page caching (FPC)
  • But what about personalised content?
  • Edge Side Includes (ESIs)
    • But ESIs run in series, so it is slow when you have many
    • Content is not cacheable, expensive to calculate, significant render time
    • ESI therefore undermines much advantage of FPC
  • Ajax
    • Make ajax request and fetch personalised content
    • Still load on backend
    • ESI limitations plus added network latency
  • Cookie Cutter
    • When an event occurs that modifies personalisation state, send a cookie containing the required data with the response.
    • In the browser, use the content of that cookie to update the page (a rough server-side sketch in Python follows this list)
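
As a rough illustration of the technique (not the Magento/Varnish implementation discussed in the talk), here is a minimal Python/Flask sketch of the server side; the cookie name, route names and data shape are made up:

    # Minimal sketch of the "cookie cutter" idea: cacheable pages stay generic,
    # and any state-changing request sets a small JSON cookie that client-side
    # JavaScript reads to personalise the page.  Names here are illustrative.
    import json
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/")
    def home():
        # Generic page, safe for Varnish to cache and serve to everyone.
        resp = make_response("<html><body><!-- JS personalises this --></body></html>")
        resp.headers["Cache-Control"] = "public, max-age=300"
        return resp

    @app.route("/login", methods=["POST"])
    def login():
        # Uncacheable, state-changing request: return the personalisation data
        # the front end needs (well under the ~4096-byte cookie limit) as a cookie.
        data = {"logged_in": True, "name": "Alice", "cart_items": 2}
        resp = make_response("", 204)
        resp.set_cookie("personalisation", json.dumps(data))
        return resp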

Example

  • Goto www.example.com
    • Probably cached in varnish
    • I don’t have a cookie
    • If I login, uncachable request, I am changing login state
    • Response includes Set-Cookie header creating a personalised cookie
  • Advantages
    • No backend requests
    • Page data served is cached always
  • How big can cookies be?
    • RFC 6265 has limits but in reality
    • Actual limit ~4096 bytes per cookie
    • Some older browsers also limit to ~4096 bytes total per domain

Potential issues

  • Request Size
    • Keep cookies small
      • Store small values only, No pre-rendered markup, No larger data structures
    • Serve static assets via CDN
    • Lot of stuff in cart can get huge
  • Information leakage
    • Final URLs leaked to users who are not logged in
  • Large Scale changes
    • Page needs to look completely different to different users
    • Vary headers might be an option
  • Formkeys
    • XSRF protection workarounds
  • What about cache misses
    • Magento assembles all its pages from a series of blocks
    • Most parts of page are relatively static (block cache)
    • Aligent_CacheObserver – Magento extension that adds cache tags to blocks that should be cached but were not picked up as cacheable by default
    • Aoe_TemplateHints – Visibility into Block cache
    • Caching != Performance Optimisation – Aoe_Profiler

Availability

  • Plugin available for Magento 1
    • Varnish CookieCutter
  • Magento 2 has native Varnish support
    • But has limitations
    • Maybe some of the CookieCutter approach could improve it

Future

  • localStorage instead of cookies


 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Session 1

Panel: Meltdown, Spectre, and the free-software community Jonathan Corbet, Andrew ‘bunnie’ Huang, Benno Rice, Jess Frazelle, Katie McLaughlin, Kees Cook

  • FreeBSD only heard 11 days beforehand. Would have liked more notice
  • Got people involved from the Kernel Summit in Oct
  • Hosting company only heard once it went official, been busy patching since
  • Likely to be class-action lawsuit for $billions. That might make chip makers more paranoid about documentation and disclosure.
  • Thoughts on the embargo
    • People noticed strange patches going in beforehand.
    • Only broke 6 days early, had been going for 6 months
    • “Linus is happy with this, something is terribly wrong”
    • Sad that the 2nd-tier cloud providers didn’t know. Exclusive club and lines as to who got informed were not clear
    • Projects that don’t have explicit relationship with Intel didn’t get informed
  • Thoughts on other vendors
    • This class of bugs could affect anybody, open hardware would probably not fix
    • More open hardware could enable people to review the processors and find these from the design rather than poking around
    • Hard to guarantee the shipped hardware matches the design
    • Software people can build everything at home and check. FABs don’t work at home.
  • Speculative execution warned about years ago. Danger ignored. How to make sure the next one isn’t ignored?
    • We always have to do some risky stuff
    • The research on this built up slowly over the years
    • Even if you have only found impractical attacks against something doesn’t mean the practical one doesn’t exist.
  • What criteria do we use to decide who is in?
    • Mechanisms do exist, they were mainly not used. Perhaps because they were for software vulnerabilities
  • Did people move providers?
    • No but Containers made things easier to reboot stuff and shuffle
  • Are there similar vulnerabilities ( similar or general hardware ) coming along?
    • The Kernel page-table patches were fairly general, should cover many similar ones
    • All these performance-optimising bits of your CPU are now attack surfaces
    • What are people going to do if this slows down hardware too much?
  • How do we explain problems like these to politicians etc
    • Legos
    • We still have kernel devs getting their laptops
  • Can we use CPUs that don’t have speculative execution?
    • Not really. Back to 486s
  • Who are we protecting against with the embargo?
    • Everybody
    • The longer period let better fixes get in
    • The meltdown fix could be done in semi-public so had better quality

What is the most common street name in Australia? Rachel Bunder

  • Why?
    • Saw a map with most common name by US street
  • Just looking at name, not end bit “park” , “road”
  • Data
    • PSMA Geocoded national address file – Great but came out after project
    • Use Open Street Maps
  • Started with Common Name in Sydney
    • Used Metro Extracts – site closing down soon
    • Format is geojson
    • Road files separately provided
  • Procedure
    • Used python; R also has good features and libraries (a tiny geopandas sketch appears at the end of these notes)
    • geopandas
    • Had some paths with no names
    • What is a road? – “Something with a name I can drive a car on”
  • Sydney
    • Full street name
      • Victoria Road
      • Pacific Highway
      • oops, looks like names are being counted twice
    • Tried merging them together
    • Road segment ends don’t match 100%. Added a function to fuzzy-merge roads whose ends are within 100m
    • Still some weird ones but probably won’t affect top
    • Second attempt
      • Short st, George st, William st, John st, Church st
  • Now with just the “name bit”
    • Tried taking out just the last name. Ended up with “the” as most common.
    • Started with “The” = whole name
    • Single word = whole name
    • name – descriptor – suffix
    • lots of weird names
    • name list – Park, Victoria, Railway, William, Short
  • Wouldn’t work in many other countries
  • Now for all over Australia
    • overpass data
    • Downloaded in 50km x 50km squares
  • Lessons
    • Start small
    • Choose something familiar
    • Check your bias (different naming conventions)
    • Constant vigilance
    • Know your problem
  • Common plant names
    • Wattle – 15th – 385
  • Other name
    • “The Esplanade” more common than “The Avenue”
  • Top names
    • 5th – Victoria
    • 4th – Church – 497
    • 3rd – George –  551
    • 2nd – Railway
    • 1st – Park – 693
  • By State
    • WA – Forest
    • SA – Railway
    • Vic – Park
    • Tas – Esplanade
    • NT – Smith/Stuart
    • NSW – Park
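
As a rough illustration of the counting step described in these notes, here is a tiny Python/geopandas sketch; the file name is hypothetical, the assumption that the OSM extract carries a "name" property is mine, and the talk's fuzzy merging of nearly-touching segment ends is left out:

    # Tiny sketch of the basic counting step: load OSM roads from GeoJSON and
    # count full street names.  The file name is a placeholder and the fuzzy
    # merging of nearly-touching road segments from the talk is not shown.
    import geopandas as gpd

    roads = gpd.read_file("sydney_roads.geojson")   # hypothetical Metro Extract
    roads = roads[roads["name"].notna()]            # drop unnamed paths/ways

    # Naive count: one row per OSM way, so a long road split into several ways
    # is counted more than once (the Victoria Road / Pacific Highway problem).
    print(roads["name"].value_counts().head(10))

    # Merging ways that share a name before counting removes that double-count.
    merged = roads.dissolve(by="name")
    print(f"{len(roads)} ways collapse to {len(merged)} distinct street names")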

 


Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 4 – Keynote – Hugh Blemings

Wandering through the Commons

Reflections on Free and Open Source Software/Hardware in Australia, New Zealand and beyond

  • Past Linux.conf.au’s reviewed
  • FOSS in Aus and NZ
    • Above per capita
  • List of Aus / NZ people and their contributions
    • John Lions , Lions book on Unix
    • Pia Andrews/Waugh/Smith – Open Government, GovHack, Linux Australia, Open Data
    • Vik Oliver – 3D Printing
    • Clare Curran – Open Government in NZ
    • plus a bunch of others

Working in Free Software and Open Hardware

  • The basics
    • Be visible in projects of relevance
      • You will be typed into Google, looked at in GitHub
    • Be yourself
      • But be business Friendly
    • Linkedin is a thing, really
  • Need an accurate basic presence
  • Finding a new job
    • Networks
    • Local user groups
    • Conferences
    • The projects you work on
  • Application and negotiation
    • Be professional, courteous
    • Do homework about company and culture
    • Talk to people that work there
    • Spend time on interview prep
      • Know your stuff, if you don’t know, say so
    • Think about Salary expectations and stick to them
      • Val Aurora’s page on this is excellent
    • Ask to keep copyright on your code
    • Should be a no-brainer for a FOSS/OH company
  • In the Job
    • Takes time to get into groove, don’t sweat it
    • Get out every now and then, particularly if working from home
    • Work/life balance
    • Know when to jump
      • Poisonous workplaces
    • An aside to People’s managers
      • Bring your best or don’t be a people manager
      • Take your reports’ welfare seriously

Looking after You

  • Ours is in the main a sedentary and solitary pursuit
    • exercise
  • Sitting and standing in front of a desk all day is bad
    • take breaks
  • Depression is a real thing
  • Eat more vegetables
  • Find friends/colleagues to exercise with

Working in FOSS / OH – Staying Current

  • Look over a colleagues shoulder
  • Do something that is not part of your regular job
    • low level programming
    • Larger systems, OpenStack
  • Stay up to date with security blogs and the like
    • Many of the attack vectors have generic relevance
  • Take the lid off, tinker with hardware
    • Lots of videos online to help or just watch

Make Hay while the Sun Shines

  • Save some money for rainy day
  • Keep networks Open
  • Even when you have a job

You’re fired … Now What? – In a moment

  • Don’t panic
    • Going out in a twitter storm won’t help anyone
  • It’s not personal
    • It is the position that is no longer needed, not you
  • If you think it an unfair dismissal, seek legal advice before signing anything
  • It is normal to feel rubbish
  • Beware of imposter syndrome
  • Try to keep 2-3 opportunities in the pipeline
  • Don’t assume people will remember you
    • It’s not personal, everyone gets busy
    • It’s okay to (politely naturally) follow up periodically
  • Keep search a little narrow for the first week or two
    • Then expand widely
  • Balance taking “something/everything” against waiting for your dream job

Dream Job

  • Power 9 CPU
    • 14nm process
    • 4GHz, 24 cores
    • 25km of wires
    • 8 billion transistors
    • 3900 official chip pins
    • ~19,000 connections from die to the pin

Conclusions

  • Part of a vibrant FOSS/OH community both here and abroad
  • We have accomplished much
  • The most exciting (in both senses) things lie before us
  • We need all of you to take part at every level of the stack
  • Look forward to working with you…


,

Planet DebianSteinar H. Gunderson: Movit 1.6.0 released

I just released version 1.6.0 of Movit, my GPU-based video filter library.

The full changelog is below, but what's more interesting is maybe what isn't in it, namely the compute shader version of the high-quality resampling filter I blogged about earlier. It turned out that my benchmark setup was wrong in a sort-of subtle way, and unfortunately biased towards the compute shader. Fixing that negated the speed difference—it was actually usually a few percent slower than the fragment shader version, despite a fair amount of earlier tweaks. (It did use less CPU when setting up new parameters, which was nice for things like continuous zooms, but probably not enough to justify the GPU slowdown.)

Which means that after a month or so of testing and performance tuning, I had to scrap it—it's sad to notice so late (I only realized that something was wrong as I started writing up the final documentation, and figured I couldn't actually justify why I would let one of them chain with other effects and the other one not), but it's a sunk cost, and keeping it in based on known-bad benchmarks would have helped nobody. I've left it in a git branch in case the world should change.

I still believe there are useful gains from compute shaders—in particular, the deinterlacer shines—but it's increasingly clear to me that fragment shaders should remain the default go-to tool for graphics on the GPU. (I guess the next natural target would be the convolution/FFT operators, but they're not all that much used.)

The full changelog reads:

Movit 1.6.0, January 24th, 2018

  - Support for effects that work as compute shaders. Compute shaders are
    generally slower than fragment shaders for the same algorithm,
    but allow some forms of communication between shader invocations
    and have more flexible output, which can enable more efficient algorithms.
    See effect.h for more details. Note that the fastest rendering API on
    EffectChain is now to a texture if possible, not to an FBO. This will
    only matter if the last effect is a compute shader.

  - Movit now includes a compute shader implementation of DeinterlaceEffect,
    which is automatically used instead of the fragment shader implementation
    if your GPU and OpenGL driver supports it (in practice, this means on
    all platforms except on macOS). The compute shader version is typically
    20–80% faster than the fragment shader version, depending on your GPU
    and other factors.

    A compute shader implementation of ResampleEffect was written but
    ultimately failed to be faster, and so is not included.

  - Support for microbenchmarks of effects through the Google microbenchmarking
    framework (optional). Currently, DeinterlaceEffect and ResampleEffect has
    benchmarks; enable them by running the unit test with --benchmark (also try
    --benchmark --help).

  - Effects can now explicitly request _not_ to have mipmaps, which means they
    can do so without needing to request bounce and fiddling with the sampler
    state. Note that this is an API change for effects.

  - Movit now requires C++11, both to build and to #include the header files.
    Support for SDL1 has been dropped; unit tests and the demo program now need
    SDL2.

  - Various smaller bugfixes and optimizations.

Debian packages are on their way up through the NEW queue (there's a soname bump).

Planet DebianShirish Agarwal: The Pune Metro 1st anniversary celebrations

Pune Metro facebook, twitter friends

This will be long. First and foremost, a couple of days ago, I got the following direct message on my twitter handle –

Hi Shirish,

We are glad to inform you that we are celebrating the 1st anniversary of Pune Metro Rail Project & the incorporation of both Nagpur Metro and Pune Metro into Maharashtra Metro Rail Corporation Limited(MahaMetro) on 23rd January at 13:00 hrs followed by the lunch.

On this occasion we would like to invite you to accept a small token of appreciation for your immense support & continued valuable interaction on our social media channels for the project at the hands of Dr. Brijesh Dixit, Managing Director, MahaMetro.

Venue: Hotel Citrus, Opposite PCMC Building, Pimpri-Chinchwad.
Time: 13:00 Hrs
Lunch: 14:00 hrs

Kindly confirm your attendance. Looking forward to meet you.

Regards & Thanks, Pune Metro Team

I went and had an interaction with Mr. Dixit and was gifted a gift card which can be redeemed.

I shared it on facebook. Some people have asked me privately as to what I did.

First of all, let me be very clear. I did not enter into any competition or put up any queries with getting any sort of monetary benefit at all. I have been a user of public transport both out of necessity and choice and do feel the need for a fast, secure, reasonable mode of transport. I am also immensely passionate and curious about public transport as a whole.

Just to share a couple of facts, and I’m sure most of you will agree with me: taking public transport takes more than twice the time, at least in India. Part of it is due to drivers not keeping the GPS on, and to people/users not asking or pushing for GPS and the location-based info it provides, which could be used to work out when the next bus is going to come. For instance, my journey to PCMC was roughly 20 km and took about 45 minutes, but waiting took almost twice that time, and this was not rush hour, when it could easily have taken double the time. Hence people opt for private vehicles even though they know it’s harmful for the environment as well as for themselves and their loved ones.

PMC (Pune Municipal Corporation) has had a plan for over a decade to use GPS to give the travelling public a tentative idea of when the next bus would arrive, but it has stalled for a number of reasons (corruption, lack of training, awareness, discipline), all of which hamper citizens’ productivity; people are forced to get private vehicles and it becomes a zero-sum game. There is much more, but I don’t want to go into that here.

Now people have asked me what sort of suggestions I gave or am giving –

After seeing MahaMetro’s interaction with the press yesterday, it seems the press or media has a very poor understanding of the dynamics and is not really interested in enriching citizens’ understanding of either the Pune Metro or the idea of the Integrated Transport Initiative, which has been in the making for some time now. Part of the issue also seems to lie with Pune Metro not sharing knowledge as much as it could, given the opportunities that digital media/space provides at very low cost.

Suggestions and Queries –

1. One of the first things that Pune Metro could make is an animation of how a DPR (Detailed Project Report) is made. I don’t think any of the people from the press, especially the English-language press, have seen the DPR, otherwise many of the questions would have been answered.

http://www.punemetrorail.org/download/PuneMetro-DPR.pdf

The simplest analogy I can give is this: let’s say you want to build a hospital, but the land on which you have to build it belongs to 2-3 parties, so how will you build it? Also, you don’t have the money. The DPR is different only in the scale of things, and construction of the routes is not done by a single contractor but by multiple contractors. A route, say A – B, is divided into 5 parts, and companies/contractors are asked to submit tenders for the packet they are interested in.

The way I see it, the DPR has to figure out the right of way where construction of the spans has to happen, where the stations have to be built, where electricity and water have to come from, where the maintenance depot will be (usually the depot is at the end), the casting yard for the spans/pillars etc.

There is a pre-qualification round so that only eligible bidders with a history of doing similar-scale work take part, and then they bid as to who can do it at the lowest cost against a set reserve price. If no bidder comes forward, for example because the reserve price is too high from the contractors’ point of view, then the reserve price is lowered. The idea is simply to have price discovery arrived at by a just and fair method.

The press seemed more interested in making a tiff between the Pune Metro/MahaMetro chief and Guardian Minister Shri Girish Bapat out of something which, to my mind, is a non-issue at this juncture.

Mr. Dixit was absolutely correct in saying that he can’t comment on when the extension to Nigdi will happen unless the DPR for the extension to Nigdi is made, land is found, the various cost heads and expenses are approved by the State and Central Governments, and funding from multiple sources is arranged.

The other question raised by the press was the razing of the BRTS in Pune, even though the press knew it is neither Mr. Dixit’s place nor his responsibility to comment on whatever happens to the BRTS. He can’t even comment, as that would come under the Pune Urban Transport Ministry.

As far as I understand Mr. Dixit’s obligations, they are to build the Pune Metro safely and quickly, using good materials and good signage, and to give us an efficient public transit service that we Puneites can be proud of.

2. The site http://www.punemetrorail.org/ really needs an update and an upgrade. You could use something like WordPress, where you are able to change themes every now and then. Every 3-6 months the theme should be tweaked so the content remains, or at least looks, fresh.

3. There are no timestamps on any of the videos. At the very least they should have timestamps so some sort of contextual information is available.

4. There is no way to know if there is news. News should be highlighted and more information shared. For instance, there should have been more info about this particular item –

MoU signed between Dr. Brijesh Dixit, MD MahaMetro, Mrs. Prerna Deshbhratar, Addl Municipal Commissioner(Spl), PMC and Mr Kong Wy Mun, CEO, Singapore Cooperation Enterprise for the Urban Management(Multi-Modal Transport) during a program at Yashada, Pune.

from http://www.punemetrorail.org/projectupdate.aspx

It would have been interesting to know what it means and how the Singapore Government would help us in achieving a unified multi-modal transport model.

There were/are tons of questions that the press could have asked but didn’t as above and more below.

5. The DPR was made in November 2015 and now it is 2018. Prices probably need to be adjusted to take into account changes over those 3 years, and they would probably keep changing until 2021.

6. Similarly, there are time gaps between plans and their execution, and we Puneites don’t even know what the plan is.

I would urge Pune Metro to have a dynamic plan which shows, with blinking lights, the areas in which work is progressing, so we know which areas are active and which are not. They could be a source of inspiration and a trail-blazer on this.

7. Similarly, another idea would be to have a single photograph taken every day at, say, 1200 hrs at all the work areas at 640×480 resolution, uploaded to the site and published onto a separate web page, which over days and weeks could be turned into a time-lapse video, similar to what was achieved for the first viaduct video shot over a day or two –

If you want to make it more interesting and challenging, you could invite students from Symbiosis to build it with something like a Raspberry Pi 2/3 or some other SBC (Single Board Computer), a camera lens, a solar cell and a modem, with instructions to stagger uploads to the Pune Metro rail portal in case other web traffic is already there. A specific port (not port 80) could be used. (A rough sketch of such a capture-and-upload script follows below.)

Later on, making a time-lapse video would be as simple as stitching all those photographs together and adding some nice music as filler, something which has already been done once for the viaduct video, as seen above.
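
Purely as an illustration of how small such a capture-and-upload script could be, here is a rough Python sketch; the upload URL and form fields are made up, and it assumes a Pi camera module with the picamera library:

    # Rough sketch: capture one 640x480 photo per day on a Raspberry Pi and
    # upload it.  The upload URL and form fields are made-up placeholders.
    import time
    from datetime import date

    import requests
    from picamera import PiCamera

    UPLOAD_URL = "https://example.org/timelapse/upload"   # hypothetical endpoint

    def capture_and_upload():
        path = "/home/pi/%s.jpg" % date.today().isoformat()
        camera = PiCamera(resolution=(640, 480))
        time.sleep(2)                        # let the sensor settle on exposure
        camera.capture(path)
        camera.close()
        with open(path, "rb") as img:
            requests.post(UPLOAD_URL, files={"photo": img},
                          data={"site": "reach-1"}, timeout=60)

    if __name__ == "__main__":
        # Run once a day from cron, e.g.:  0 12 * * *  python3 daily_photo.py
        capture_and_upload()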

8. Tracking planned versus real-time progress – While Mr. Dixit has time and again assured us that things are progressing well, it would be far easier to trust this if there were a web service which told us whether things are going according to schedule or are a bit off. It overlaps a bit with my earlier suggestion, but there are many development projects around the world which show tentative and actual progress.

9. Apart from traffic diversion news in major newspapers, it would be nice to also have a section on the site, with blinkers or something similar, about road diversions which are in effect.

10. Another would be to have an RSS feed of all news found by various search-engine crawlers, with duplicate links removed, sharing the news and views for people to click through and see for themselves.

11. Statistics on jobs (both direct and indirect) created by Pune Metro works, displayed prominently.

12. Have a glossary of terms, which could easily be gathered by having an average 10th-12th standard student go through, say, the DPR and note which terms they have problems with.

The simplest example is the word ‘Reach’ which has been used in a different context in Pune Metro than what is usually understood.

13. Are there, and if so how many, Indian SMEs entrusted, via joint venture or otherwise, with ensuring knowledge transfer for making and maintaining the rakes, cars/bogies, track etc.?

14. Has any performance and load guarantee been asked of the various packet holders? If yes, what are the guarantees and for what duration?

These are all low-hanging fruit. Also, I’m no web developer, although I am a bit of a content producer (as can be seen) and a voracious consumer of the web. I do have a few friends, though, if required, who understand the medium in a far better and more intimate way than the crude manner I shared above.

A student who believes democracy needs work, and that it takes effort to make democracy work. If citizens themselves do not ask these questions, who will?

Krebs on SecurityChronicle: A Meteor Aimed At Planet Threat Intel?

Alphabet Inc., the parent company of Google, said today it is in the process of rolling out a new service designed to help companies more quickly make sense of and act on the mountains of threat data produced each day by cybersecurity tools.

Countless organizations rely on a hodgepodge of security software, hardware and services to find and detect cybersecurity intrusions before an incursion by malicious software or hackers has the chance to metastasize into a full-blown data breach.

The problem is that the sheer volume of data produced by these tools is staggering and increasing each day, meaning already-stretched IT staff often miss key signs of an intrusion until it’s too late.

Enter “Chronicle,” a nascent platform that graduated from the tech giant’s “X” division, which is a separate entity tasked with tackling hard-to-solve problems with an eye toward leveraging the company’s core strengths: Massive data analytics and storage capabilities, machine learning and custom search capabilities.

“We want to 10x the speed and impact of security teams’ work by making it much easier, faster and more cost-effective for them to capture and analyze security signals that have previously been too difficult and expensive to find,” wrote Stephen Gillett, CEO of the new venture.

Few details have been released yet about how exactly Chronicle will work, although the company did say it would draw in part on data from VirusTotal, a free service acquired by Google in 2012 that allows users to scan suspicious files against dozens of commercial antivirus tools simultaneously.

Gillett said his division is already trialing the service with several Fortune 500 firms to test the preview release of Chronicle, but the company declined to name any of those participating.

ANALYSIS

It’s not terribly clear from Gillett’s post or another blog post from Alphabet’s X division by Astro Teller how exactly Chronicle will differentiate itself in such a crowded market for cybersecurity offerings. But it’s worth considering the impact that VirusTotal has had over the years.

Currently, VirusTotal handles approximately one million submissions each day. The results of each submission get shared back with the entire community of antivirus vendors who lend their tools to the service — which allows each vendor to benefit by adding malware signatures for new variants that their tools missed but that a preponderance of other tools flagged as malicious.

Naturally, cybercriminals have responded by creating their own criminal versions of VirusTotal: So-called “no distribute” scanners. These services cater to malware authors, and use the same stable of antivirus tools, except they prevent these tools from phoning home to the antivirus companies about new, unknown variants.

On balance, it’s difficult to know whether the benefit that antivirus companies — and by extension their customers — gain by partnering with VirusTotal outweighs the mayhem enabled by these no-distribute scanners. But it seems clear that VirusTotal has helped antivirus companies and their customers do a better job focusing on threats that really matter, as opposed to chasing after (or cleaning up after) so-called “false positives,” — benign files that erroneously get flagged as malicious.

And this is precisely the signal-to-noise challenge created by the proliferation of security tools used in a typical organization today: How to spend more of your scarce cybersecurity workforce, budget and time identifying and stopping the threats that matter and less time sifting through noisy but otherwise time-wasting alerts triggered by non-threats.

I’m not a big listener of podcasts, but I do find myself increasingly making time to listen to Risky Business, a podcast produced by Australian cybersecurity journalist Patrick Gray. Responding to today’s announcement on Chronicle, Gray said he likewise had few details about it but was looking forward to learning more.

“Google has so much data and so many amazing internal resources that my gut reaction is to think this new company could be a meteor aimed at planet Threat Intel™️,” Gray quipped on Twitter, referring to the burgeoning industry of companies competing to help companies trying to identify new threats and attack trends. “Imagine if other companies spin out their tools…Netflix, Amazon, Facebook etc. That could be a fundamentally reshaped industry.”

Well said. I also look forward to hearing more about how Chronicle works and, more importantly, if it works.

Full disclosure: Since September 2016, KrebsOnSecurity has received protection against massive online attacks from Project Shield, a free anti-distributed denial-of-service (DDoS) offering provided by Jigsaw — another subsidiary of Google’s parent company. Project Shield provides DDoS protection for news, human rights, and elections monitoring Web sites.

Rondam RamblingsA Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Planet DebianRenata D'Avila: Ideas for the project architecture and short term goals

There have been many discussions about planning for the FOSS calendar. In this post, I report on some of the ideas.

How I first thought of the FOSS Events calendar

Back in December, when I was just making sense of my surroundings and trying to find a way to start the internship, I drew this diagram to picture in my head how everything would work:

A diagram showing the schema that will be described below. Each item is connected to the next using arrows, except for the relationship between user interface and API, where data flows both ways.

  1. There would be a "crawler.py" module, which would access each site on a determined list (It could be Facebook, Meetup or any other site such as another calendar) that have events information. This module would pull the event data from those sites.

  2. A validator.py would check if the data was good and if there was data. Once this module verified this, it would dump all info into a dirty_events database.

  3. The dirty_events database would be accessed by the module parser.py, which would clean and organize the data to be properly stored in the events database.

  4. An API.py module would query the events database and return the proper data, formatted into JSON, ical and/or some other formats.

  5. There would be a user interface to get data from API.py and to display this data. It should also be possible to add (properly formatted) events to the database using this interface. [If we were talking about a plugin to merely display the events in MoinMoin or Wordpress or some other system, that plugin would fall into this category.] A skeleton of this pipeline is sketched just after this list.
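
The following is only a skeleton, with in-memory lists standing in for the dirty_events and events databases, to show how the modules above would hand data to each other; every name is a placeholder taken from the diagram rather than real project code:

    # Skeleton of the pipeline described above.  In-memory lists stand in for
    # the dirty_events and events databases; all names are placeholders.
    import json

    dirty_events, events = [], []

    def crawler(sources):
        """Pull raw event data from each configured site."""
        return [{"source": s, "raw": "fetched from " + s} for s in sources]

    def validator(raw_items):
        """Keep only items that actually contain data."""
        dirty_events.extend(item for item in raw_items if item.get("raw"))

    def parser():
        """Clean and organise dirty events into the events 'database'."""
        for item in dirty_events:
            events.append({"title": item["raw"], "source": item["source"]})

    def api():
        """Return the stored events; here only as JSON, for brevity."""
        return json.dumps(events, indent=2)

    validator(crawler(["meetup", "facebook", "another-calendar"]))
    parser()
    print(api())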

The ideas that Daniel put on paper

Then, when I shared it with my mentors, Daniel came up with this:

Another diagram. On the left, the plugins boxes, they connect to an aggregator, with input towards the storage. The storage then outputs to reports and data dump, represented on the right.

Daniel proposed that modules or plugins could be developed or improved (there are some of them already, but they might not support iCalendar URLs) for MoinMoin, Drupal, Wordpress that would allow the data each of these systems have about events to be aggregated. Information from the Meetup and the Facebook APIs could be converted to ical to be aggregated. This aggregation process could happen through a cron job - and I believe daily is enough, because people don't usually publish an event to happen on the very next day (they need time for people to acknowledge it). If the time frame ends up not being the ideal, this can be reviewed and changed later.

Once all this data is gathered, it would then be stored, inserting it or updating it in what could be a PostgreSQL or NoSQL solution.

Using the database with events information, it should be possible to do a data dump with all the information or to give "reports" of the event data, whether the user wants to access the data in iCalendar format (for Thunderbird or GNOME Evolution) or just HTML for viewing in the browser.

Short term goals

Creating a FOSS events calendar is a big project that will most certainly continue beyond my Outreachy internship.

Therefore, along with my mentors, we have established that my short term goal will be to contribute a bit to it by working on the MoinMoin EventCalendar so the events can be exported to the iCalendar format.

I have been studying and playing around with the EventCalendar code and, so far, I've concluded that the best way to do this might be by writing a function for it. Just as there are other functions in this plugin to change the display of the calendar, there could be a function that formats the data as iCalendar and allows downloading the file.
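
As a rough idea of what the exporting boils down to, here is a standalone Python sketch using the third-party icalendar package; the event fields are made-up examples, and the real plugin would of course pull its data from MoinMoin wiki pages rather than a hard-coded list:

    # Rough sketch of turning event data into an iCalendar file with the
    # third-party `icalendar` package.  The sample event is made up; the real
    # EventCalendar data would come from MoinMoin wiki pages.
    from datetime import datetime
    from icalendar import Calendar, Event

    def events_to_ical(event_dicts):
        cal = Calendar()
        cal.add("prodid", "-//FOSS events calendar//example//")
        cal.add("version", "2.0")
        for e in event_dicts:
            ev = Event()
            ev.add("summary", e["title"])
            ev.add("dtstart", e["start"])
            ev.add("dtend", e["end"])
            ev.add("description", e.get("description", ""))
            cal.add_component(ev)
        return cal.to_ical()        # bytes, ready to serve as a .ics download

    sample = [{"title": "MiniDebConf", "start": datetime(2018, 4, 14, 9, 0),
               "end": datetime(2018, 4, 14, 18, 0)}]
    with open("events.ics", "wb") as f:
        f.write(events_to_ical(sample))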

Krebs on SecurityExpert: IoT Botnets the Work of a ‘Vast Minority’

In December 2017, the U.S. Department of Justice announced indictments and guilty pleas by three men in the United States responsible for creating and using Mirai, a malware strain that enslaves poorly-secured “Internet of Things” or IoT devices like security cameras and digital video recorders for use in large-scale cyberattacks.

The FBI and the DOJ had help in their investigation from many security experts, but this post focuses on one expert whose research into the Dark Web and its various malefactors was especially useful in that case. Allison Nixon is director of security research at Flashpoint, a cyber intelligence firm based in New York City. Nixon spoke with KrebsOnSecurity at length about her perspectives on IoT security and the vital role of law enforcement in this fight.

Brian Krebs (BK): Where are we today with respect to IoT security? Are we better off than were a year ago, or is the problem only worse?

Allison Nixon (AN): In some aspects we’re better off. The arrests that happened over the last year in the DDoS space, I would call that a good start, but we’re not out of the woods yet and we’re nowhere near the end of anything.

BK: Why not?

AN: Ultimately, what’s going with these IoT botnets is crime. People are talking about these cybersecurity problems — problems with the devices, etc. — but at the end of the day it’s crime and private citizens don’t have the power to make these bad actors stop.

BK: Certainly security professionals like yourself and others can be diligent about tracking the worst actors and the crime machines they’re using, and in reporting those systems when it’s advantageous to do so?

AN: That’s a fair argument. I can send abuse complaints to servers being used maliciously. And people can write articles that name individuals. However, it’s still a limited kind of impact. I’ve seen people get named in public and instead of stopping, what they do is improve their opsec [operational security measures] and keep doing the same thing but just sneakier. In the private sector, we can frustrate things, but we can’t actually stop them in the permanent, sanctioned way that law enforcement can. We don’t really have that kind of control.

BK: How are we not better off?

AN: I would say that as time progresses, the community that practices DDoS and malicious hacking and these pointless destructive attacks get more technically proficient when they’re executing attacks, and they just become a more difficult adversary.

BK: A more difficult adversary?

AN: Well, if you look at the individuals that were the subject of the announcement this month, and you look in their past, you can see they’ve been active in the hacking community a long time. Litespeed [the nickname used by Josiah White, one of the men who pleaded guilty to authoring Mirai] has been credited with lots of code.  He’s had years to develop and as far as I could tell he didn’t stop doing criminal activity until he got picked up by law enforcement.

BK: It seems to me that the Mirai authors probably would not have been caught had they never released the source code for their malware. They said they were doing so because multiple law enforcement agencies and security researchers were hot on their trail and they didn’t want to be the only ones holding the source code when the cops showed up at their door. But if that was really their goal in releasing it, doing so seems to have had the exact opposite effect. What’s your take on that?

AN: You are absolutely, 100 million percent correct. If they just shut everything down and left, they’d be fine now. The fact that they dumped the source was a tipping point of sorts. The damages they caused at that time were massive, but when they dumped the source code the amount of damage their actions contributed to ballooned [due to the proliferation of copycat Mirai botnets]. The charges against them specified their actions in infecting the machines they controlled, but when it comes to what interested researchers in the private sector, the moment they dumped the source code — that’s the most harmful act they did out of the entire thing.

BK: Do you believe their claimed reason for releasing the code?

AN: I believe it. They claimed they released it because they wanted to hamper investigative efforts to find them. The problem is that not only is it incorrect, it also doesn’t take into account the researchers on the other end of the spectrum who have to pick from many targets to spend their time looking at. Releasing the source code changed that dramatically. It was like catnip to researchers, and was just a new thing for researchers to look at and play with and wonder who wrote it.

If they really wanted to stay off law enforcement’s radar, they would be as low profile as they could and not be interesting. But they did everything wrong: They dumped the source code and attacked a security researcher using tools that are interesting to security researchers. That’s like attacking a dog with a steak. I’m going to wave this big juicy steak at a dog and that will teach him. They made every single mistake in the book.

BK: What do you think it is about these guys that leads them to this kind of behavior? Is it just a kind of inertia that inexorably leads them down a slippery slope if they don’t have some kind of intervention?

AN: These people go down a life path that does not lead them to a legitimate livelihood. They keep doing this and get better at it and they start to do these things that really can threaten the Internet as a whole. In the case of these DDoS botnets, it’s worrying that these individuals are allowed to go this deep before law enforcement catches them.

BK: There was a narrative that got a lot of play recently, and it was spun by a self-described Internet vigilante who calls himself “the Janitor.” He claimed to have been finding zero-day exploits in IoT devices so that he could shut down insecure IoT things that can’t really be secured before or maybe even after they have been compromised by IoT threats like Mirai. The Janitor says he released a bunch of his code because he’s tired of being the unrecognized superhero that he is, and many in the media seem to have eaten this up and taken his manifesto as gospel. What’s your take on the Janitor, and his so-called “bricker bot” project?

AN: I have to think about how to choose my words, because I don’t want to give anyone bad ideas. But one thing to keep in mind is that his method of bricking IoT devices doesn’t work, and it potentially makes the problem worse.

BK: What do you mean exactly?

AN: The reason is sometimes IoT malware like Mirai will try to close the door behind it, by crashing the telnet process that was used to infect the device [after the malware is successfully installed]. This can block other telnet-based malware from getting on the machine. And there’s a lot of this type of King of the Hill stuff going on in the IoT ecosystem right now.

But what [this bricker bot] malware does is a lot of times it reboots a machine, and when the device is in that state the vulnerable telnet service goes back up. It used to be a lot of devices were infected with the very first Mirai, and when the [control center] for that botnet went down they were orphaned. We had a bunch of Mirai infections phoning home to nowhere. So there’s a real risk of taking the machine that was in this weird state and making it vulnerable again.

BK: Hrm. That’s a very different story from the one told by the Bricker bot author. According to him, he spent several years of his life saving the world from certain doom at the hands of IoT devices. He even took credit for foiling the Mirai attacks on Deutsche Telekom. Could this just be a case of researcher exaggerating his accomplishments? Do you think his Bricker bot code ever really spread that far?

AN: I don’t have any evidence that there was mass exploitation by Bricker bot. I know his code was published. But when I talk to anyone running an IoT honeypot [a collection of virtual or vulnerable IoT devices designed to attract and record novel attacks against the devices] they have never seen it. The consensus is that regardless of peoples’ opinion on it we haven’t seen it in our honeypots. And considering the diversity of IoT honeypots out there today, if it was out there in real life we would have seen it by now.

BK: A lot of people believe that we’re focusing on the wrong solutions to IoT security — that having consumers lock down IoT devices security-wise or expecting law enforcement agencies to fix this problem for us for me are pollyannish ideas that in any case don’t address the root cause: Which is that there are a lot of companies producing crap IoT products that have virtually no security. What’s your take?

AN: The way I approach this problem is I see law enforcement as the ultimate end goal for all of these efforts. When I look at the IoT DDoS activity and the actual human beings doing this, the vast majority of Mirai attacks, attack infrastructure, malware variants and new exploits are coming from a vast minority of people doing this. That said, the way I perceive the underground ecosystem is probably different than the way most people perceive it.

BK: What’s the popular perception, do you think?

AN: It’s that, “Oh hey, one guy got arrested, great, but another guy will just take his place.” People compare it to a drug dealer on the street corner, but I don’t think that’s accurate in this case. The difference is when you’re looking at advanced criminal hacking campaigns, there’s not usually a replacement person waiting in the wings. These are incredibly deep skills developed over years. The people doing innovations in DDoS attacks and those who are driving the field forward are actually very few. So when you can ID them and attach behavior to the perpetrator, you realize there’s only a dozen people I need to care about and the world suddenly becomes a lot smaller.

BK: So do you think the efforts to force manufacturers to harden their products are a waste of time?

AN: I want to make it clear that all these different ways to tackle the problem…I don’t want to say one is more important than the other. I just happened to be working on one component of it. There’s definitely a lot of disagreement on this. I totally recognize this as a legitimate approach. A lot of people think the way forward is to focus on making sure the devices are secure. And there are efforts ongoing to help device manufacturers create more secure devices that are more resistant to these efforts.

And a lot is changing, although slowly. Do you remember way back when you bought a Wi-Fi router and it was open by default? Because the end user was obligated to change the default password, we had open Wi-Fi networks everywhere. As years passed, many manufacturers started making them more secure. For example, many of these devices now have customers refer to a sticker on the machine that has a unique Wi-Fi password. That type of shift may be an example of what we can see in the future of IoT security.

BK: In the wake of the huge attacks from Mirai in 2016 and 2017, several lawmakers have proposed solutions. What do you think of the idea that it doesn’t matter what laws we pass in the United States that might require more security by IoT makers, that those makers are just going to keep on ignoring best practices when it comes to security?

AN: It’s easy to get cynical about this and a lot of people definitely feel like these these companies don’t sell directly to the U.S. and therefore don’t care about such efforts. Maybe in the short term that might be true, but in the long term I think it ends up biting them if they continue to not care.

Ultimately, these things just catch up with you if you have a reputation for making a poor product. What if you had a reputation for making a device that if you put it on the Internet it would reboot every five minutes because it’s getting attacked? Even if we did enact security requirements for IoT that manufacturers not in the U.S. wouldn’t have to follow, it would still in their best interests to care, because they are going to care sooner or later.

BK: I was on a Justice Department conference call with other journalists on the day they announced the Mirai author arrests and guilty pleas, and someone asked why this case was prosecuted out of Alaska. The answer that came back was that a great many of the machines infected with Mirai were in Alaska. But it seems more likely that it was because there was an FBI agent there who decided this was an important case but who actually had a very difficult time finding enough infected systems to reach the threshold needed to prosecute the case. What’s your read on that?

AN: I think that this case is probably going to set precedent in terms of the procedures and processes used to go after cybercrime. I’m sure you finished reading The Wired article about the Alaska investigation into Mirai: It goes into detail about some of the difficult things that the Alaska FBI field office had to do to satisfy the legal requirements to take the case. Just to prove they had jurisdiction, they had to find a certain number of infected machines in Alaska.

Those were not easy to find, and in fact the FBI traveled far and wide in order to find these machines in Alaska. There are all kinds of barriers big and small that slow down the legal process for prosecuting cases like this, some of which are legitimate and some that I think are going to end up being streamlined after a case like this. And every time a successful case like this goes through [to a guilty plea], it makes it more possible for future cases to succeed.

This one group [that was the subject of the Mirai investigation] was the worst of the worst in this problem area. And right now it’s a huge victory for law enforcement to take down one group that is the worst of the worst in one problem area. Hopefully, it will lead to the takedown of many groups causing damage and harming people.

But the concept that in order for cybercriminals to get law enforcement attention they need to make international headlines and cause massive damage needs to change. Most cybercriminals probably think that what they’re doing nobody is going to notice, and in a sense they’re correct because there is so much obvious criminal activity blatantly connected to specific individuals. And that needs to change.

BK: Is there anything we didn’t talk about related to IoT security, the law enforcement investigations into Mirai, or anything else you’d like to add?

AN: I want to extend my gratitude to the people in the security industry and network operator community who recognized the gravity of this threat early on. There are a lot of people who were not named [in the stories and law enforcement press releases about the Mirai arrests], and want to say thank you for all the help. This couldn’t have happened without you.

Worse Than FailureSponsor Post: Make Your Apps Better with Raygun

I once inherited an application which had a bug in it. Okay, I’ve inherited a lot of applications like that. In this case, though, I didn’t know that there was a bug, until months later, when I sat next to a user and was shocked to discover that they had evolved a complex work-around to bypass the bug which took about twice as long, but actually worked.

“Why didn’t you open a ticket? This shouldn’t be like this.”

“Enh… it’s fine. And I hate dealing with tickets.”

In their defense, our ticketing system at that office was a godawful nightmare, and nobody liked dealing with it.

The fact is, awful ticket tracking aside, 99% of users don’t report problems in software. Adding logging can only help so much- eventually you have a giant haystack filled with needles you don’t even know are there. You have no way to see what your users are experiencing out in the wild.

But what if you could? What if you could build, test and deploy software with a real-time feedback loop on any problems the users were experiencing?

Our new sponsor, Raygun, gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool.

You're probably using software and services today that rely on Raygun to identify when users have a poor experience: Domino's Pizza, Coca-Cola, Microsoft and Unity all use it, along with many others.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Worse Than FailureAll Saints' Day

Cathedral Antwerp July 2015-1

Oh, PHP. It's the butt of any number of jokes in the programming community. Those who do PHP often lie and pretend they don't, just to avoid the social stigma. Today's submitter not only works in PHP, but they also freelance: the bottom of the bottom of the development hierarchy.

Last year, Ilya was working on a Joomla upgrade as well as adjusting several components on a big, obscure website. As he was poking around in the custom code, he found today's submission. You see, the website is in Italian. At the top of the page, it shows not only the date, but also the saint of the day. This is a Catholic thing: every day of the year has a patron saint, and in certain cultures, you might even be named for the saint whose day you were born on. A full list can be found on this Italian Wikipedia page.

Every day, the website was supposed to display text like "18 luglio: santi Sinforosa e sette compagni" (July 18: Sinforosa and the Seven Companions). But the code that generated this string had broken. It wasn't Ilya's task to fix it, but he chose to do so anyway, because why not?

His first suspect for where this text came from was this mess of Javascript embedded in the head:

     function getDataGiorno(){
     data = new Date();
     ora =data.getHours();
     minuti=data.getMinutes();
     secondi=data.getSeconds();
     giorno = data.getDay();
     mese = data.getMonth();
     date= data.getDate();
     year= data.getYear();
     if(minuti< 10)minuti="0"+minuti;
     if(secondi< 10)secondi="0"+secondi;
     if(year<1900)year=year+1900;
     if(ora<10)ora="0"+ora;
     if(giorno == 0) giorno = " Domenica ";
     if(giorno == 1) giorno = " Lunedì ";
     if(giorno == 2) giorno = " Martedì ";
     if(giorno == 3) giorno = " Mercoledì ";
     if(giorno == 4) giorno = " Giovedì ";
     if(giorno == 5) giorno = " Venerdì ";
     if(giorno == 6) giorno = " Sabato ";
     if(mese == 0) mese = "gennaio ";
     if(mese ==1) mese = "febbraio ";
     if(mese ==2) mese = "marzo ";
     if(mese ==3) mese = "aprile ";
     if(mese ==4) mese = "maggio ";
     if(mese ==5) mese = "giugno ";
     if(mese ==6) mese = "luglio ";
     if(mese ==7) mese = "agosto ";
     if(mese ==8) mese = "settembre ";
     if(mese ==9) mese = "ottobre ";
     if(mese ==10) mese = "novembre ";
     if(mese ==11) mese = "dicembre";
     var dt=date+" "+mese+" "+year;
     var gm =date+"_"+mese;

     return gm.replace(/^\s+|\s+$/g,""); ;
     }

     function getXMLHttp() {
     var xmlhttp = null;
     if (window.ActiveXObject) {
       if (navigator.userAgent.toLowerCase().indexOf("msie 5") != -1) {
         xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
       } else {
           xmlhttp = new ActiveXObject("Msxml2.XMLHTTP");
       }
     }
     if (!xmlhttp && typeof(XMLHttpRequest) != 'undefined') {
       xmlhttp = new XMLHttpRequest()
     }
     return xmlhttp
     }

     function elaboraRisposta() {
      var dt=getDataGiorno();
      var data = dt.replace('_',' ');
      if (dt.indexOf('1_')==0){
          dt.replace('1_','%C2%BA');
      }
       // alert("*"+dt+"*");
     var temp = new Array();
     temp = objHTTP.responseText.split(dt);
     //alert(temp[1]);

     var temp1=new Array();
     temp1=temp[1].split(":");
     temp=temp1[1].split("");


      if (objHTTP.readyState == 4) {
      santi=temp[0].split(",");
      //var app = new Array();
      //app=santi[0].split(";");
      santo=santi[0];
      //alert(santo);

        // document.write(data+" - "+santo.replace(/\/wiki\//g,"http://it.wikipedia.org/wiki/"));
        document.write(data+" - "+santo);
      }else {

      }

     }
     function loadDati() {
      objHTTP = getXMLHttp();
      objHTTP.open("GET", "calendario.html" , false);

      objHTTP.onreadystatechange = function() {elaboraRisposta()}


     objHTTP.send(null)
     }

If you've never seen Joomla before, do note that most templates use jQuery. There's no need to use ActiveXObject here.

"calendario.html" contained very familiar text: a saved copy of the Wikipedia page linked above. This ought to be splitting the text with the date, avoiding parsing HTML with regex by using String.prototype.split(), and then parsing the HTML to get the saint for that date to inject into the HTML.

But what if a new saint gets canonized, and the calendar changes? By caching a local copy, you ensure that the calendar will get out of date unless meticulously maintained. Therefore, the code to call this Javascript was commented out entirely in the including page:

<div class="santoForm">
<!--  <?php echo "<script>loadDati()</script>" ;  ?> -->
<?php ...

Instead, it had been replaced with this:

       setlocale(LC_TIME, 'ita', 'it_IT.utf8');
       $gg = ltrim(strftime("%d"), '0');

       if ($gg=='1'){
        $dt=ltrim(strftime("1º %B"), '0');
       }else{
         $dt=ltrim(strftime("%d %B"), '0');
       }
       //$dt='4 febbraio';
     $html = file_get_contents('http://it.wikipedia.org/wiki/Calendario_dei_santi');
     $doc = new DOMDocument();
     $doc->loadHTML($html);
     $elements = $doc->getElementsByTagName('li');
     foreach ($elements as $element) {
        if (strpos($element->nodeValue,utf8_encode($dt))!== false){
         list($santo,$after)= split(';',$element->nodeValue);
         break ;
        }else {}
     }
       list($santo,$after)= split(',',utf8_decode($santo));
       if (strlen ( $santo) > 55) {
         $santo=substr($santo, 0, 55)."...";
       }

This migrates the logic to the backend—the One True Place for all such logic—and uses standard library routines, just as it should. Of course, this being PHP, it breaks horribly if you look at it cross-eyed, or—like Ilya—don't have the Italian locale installed on your development machine. And of course, it'll also break if the live Wikipedia page about the saints gets reformatted. But what's the likelihood of that? Plus, it's not cached in the least, letting every visitor see updates in real time. After all, next time they canonize a saint, everyone will rush right to this site to verify that the day changed. That's how the Internet works, right?
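
If the daily lookup were kept, even a tiny cache would remove the per-visitor fetch the article complains about. Here is a minimal sketch in Python rather than the site's PHP; the cache path is hypothetical and the actual fetching/parsing is passed in as a function, since that part is best left to the imagination:

import datetime
import json
import pathlib

def cached_daily(cache_path, fetch):
    """Return today's value, calling fetch() at most once per day.

    cache_path is a hypothetical location; fetch() would wrap whatever
    actually retrieves and parses the saint-of-the-day string.
    """
    cache = pathlib.Path(cache_path)
    today = datetime.date.today().isoformat()
    if cache.exists():
        data = json.loads(cache.read_text())
        if data.get("date") == today:
            return data["value"]
    value = fetch()
    cache.write_text(json.dumps({"date": today, "value": value}))
    return value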

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

CryptogramDetecting Drone Surveillance with Traffic Analysis

This is clever:

Researchers at Ben Gurion University in Beer Sheva, Israel have built a proof-of-concept system for counter-surveillance against spy drones that demonstrates a clever, if not exactly simple, way to determine whether a certain person or object is under aerial surveillance. They first generate a recognizable pattern on whatever subject­ -- a window, say -- someone might want to guard from potential surveillance. Then they remotely intercept a drone's radio signals to look for that pattern in the streaming video the drone sends back to its operator. If they spot it, they can determine that the drone is looking at their subject.

In other words, they can see what the drone sees, pulling out their recognizable pattern from the radio signal, even without breaking the drone's encrypted video.

The details have to do with the way drone video is compressed:

The researchers' technique takes advantage of an efficiency feature streaming video has used for years, known as "delta frames." Instead of encoding video as a series of raw images, it's compressed into a series of changes from the previous image in the video. That means when a streaming video shows a still object, it transmits fewer bytes of data than when it shows one that moves or changes color.

That compression feature can reveal key information about the content of the video to someone who's intercepting the streaming data, security researchers have shown in recent research, even when the data is encrypted.
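
As a rough illustration of the idea (not the researchers' actual code), one could bucket the intercepted packet sizes into per-second bitrates and correlate them with the known on/off schedule of the injected pattern; a strong correlation suggests the drone's camera has the stimulus in frame. A sketch, assuming you already have packet timestamps and sizes from the radio capture:

import numpy as np

def stimulus_correlation(packet_times, packet_sizes, stimulus_on, period_s=1.0):
    """Correlate per-second video bitrate with a known on/off stimulus schedule.

    packet_times/packet_sizes come from the intercepted (still encrypted) stream;
    stimulus_on is the flicker schedule we generated, one 0/1 value per period.
    """
    packet_times = np.asarray(packet_times, dtype=float)
    packet_sizes = np.asarray(packet_sizes, dtype=float)
    n = int(np.ceil(packet_times.max() / period_s))
    assert len(stimulus_on) >= n, "stimulus schedule must cover the capture window"
    bitrate = np.zeros(n)
    for t, size in zip(packet_times, packet_sizes):
        bitrate[min(int(t // period_s), n - 1)] += size
    stim = np.asarray(stimulus_on[:n], dtype=float)
    bitrate = (bitrate - bitrate.mean()) / (bitrate.std() + 1e-9)
    stim = (stim - stim.mean()) / (stim.std() + 1e-9)
    return float(np.mean(bitrate * stim))  # values near 1.0 suggest the subject is being watched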

Research paper and video.

Planet DebianDaniel Pocock: apt-get install more contributors

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer and Synaptic?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH, PGP, calls apt to install necessary tools, procmail or similar filters, join IRC channels, creates static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?
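
To give a feel for how small the automatable core of step 3 is, here is a minimal sketch in Python; the function name is made up, the GnuPG flags vary between versions, and a real wizard would obviously do much more (domain name, mail filters, blog, IRC):

import subprocess
from pathlib import Path

def setup_contributor_keys(name, email):
    """Generate the SSH and PGP keys a first-time contributor needs."""
    ssh_key = Path.home() / ".ssh" / "id_ed25519"
    ssh_key.parent.mkdir(mode=0o700, exist_ok=True)
    if not ssh_key.exists():
        subprocess.run(
            ["ssh-keygen", "-t", "ed25519", "-N", "", "-f", str(ssh_key), "-C", email],
            check=True)
    # GnuPG >= 2.1; some versions also want --pinentry-mode loopback here
    subprocess.run(
        ["gpg", "--batch", "--passphrase", "", "--quick-generate-key",
         f"{name} <{email}>", "default", "default", "never"],
        check=True)

setup_contributor_keys("New Contributor", "new@example.org")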

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer and if you are able to be a co-mentor for this or any of the other projects (or even proposing your own topic) now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either and several of these projects are widely useful outside Debian.

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 3 – Booting

Securing the Linux boot process Matthew Garrett

  • Without boot security there is no other security
  • MBR Attacks – previously common, still work sometimes
  • Bootloader attacks – Seen in the wild
  • Malicious initrd attacks
    • RAM disk, does stuff like decrypt hard drive
    • Attack captures disk passphrase when typed in
  • How do we fix these?
    • UEFI Secure boot
    • Microsoft required it in machines shipped after mid-2012
    • sign objects, firmware trusts some certs, boots things correctly signed
    • Problem solved! Nope
    • initrds are not signed
  • initrds
    • contain local changes
    • do a lot of security stuff
  • TPMs
    • devices on system motherboards
    • slow but inexpensive
    • Not under control of the CPU
    • Set of registers “platform configuration registers”, list of hashes of objects booted in boot process. Measurements
    • PCR can enforce things, stop boots if stuff doesn’t match
    • But stuff changes all the time, eg update firmware . Can brick machine
  • Microsoft to the rescue
    • Tie Secure boot into measured boot
    • Measure signing keys rather than the actual files themselves
    • But initrds are not signed
  • Systemd to the rescue
    • systemd boot stub (not the systemd boot loader)
    • Embed initrd and the kernel into a single image with a single signature
    • But initrds contain local information
    • End users should not be signing stuff
  • Kernel can be handed multiple initramfs images (via cpio) – see the sketch after these notes
    • each unpacked in turn
    • Each will over-write the previous one
    • configuration can be over-written by the signed image, perhaps safely, so that if config is changed, stuff fails
    • unpack config first, code second
  • Kernel command line is also security sensitive
    • eg turn off iommu and dump RAM to extract keys
    • Have a secure command line turning on all security features, append what the user sends
  • Proof of device state
    • Can show you a number after boot based on the TPM. Can compare to a 2FA device to make sure it is securely booted. Safe to type in passwords
  • Secure Provision of secrets
    • Know a remote machine is booted safely and not been subverted before sending it secret stuff.
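
A rough sketch of the concatenation trick mentioned above, in Python with made-up file names. It relies on the kernel's documented behaviour of unpacking concatenated cpio archives in order, so the signed code archive placed last wins any conflict with the local configuration archive:

def build_initrd(config_cpio_gz, signed_code_cpio_gz, output_path):
    """Concatenate a local config archive and a signed code archive into one initrd.

    The kernel unpacks each cpio archive in turn, so entries in the second
    (signed) archive overwrite anything the unsigned config archive provided.
    """
    with open(output_path, "wb") as out:
        for part in (config_cpio_gz, signed_code_cpio_gz):
            with open(part, "rb") as src:
                out.write(src.read())

build_initrd("local-config.cpio.gz", "signed-code.cpio.gz", "initrd.img")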

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 2

Dealing with Contributor Overload Holden Karau

  • Developer Advocate at Google
  • Apache Spark, Contributor to BEAM

Some people from big projects, some from projects hoping to get big

  • Remember it’s okay to not fix it all
  • The fun of a small project
    • Simple communication
    • Aligned incentives
    • Easy to tell who knows what
    • Tight community
  • The fun of a large project
    • More people to do the work
    • More impact and people thanking you
    • Lots of ideas and experiences
    • If $s then fun conferences
    • Get paid to work on it.
  • Is my project on Fire? or just lots of people on it.
    • Measurable
      • User questions spike
      • issue spike
    • Less measurable
      • Non-explicit stuff not being passed on
  • Classic Pipeline
    • Users -> contributors -> committers -> PMC
    • Each stage takes time
    • Very leaky pipeline, perhaps it leaks too much
  • With hyper-growth project can quickly go south
    • Committer:user ratio can’t get too far out.
  • Even without hyper-growth: sadness
    • Same thing happens, but slower
  • Overload – Mitigation
    • You don’t have to answer everyone, this can be hard
    • Stackoverflow
    • Are your answers easily searchable
    • Knowledge base – “do you mean”
    • Take time and look for patterns in questions
    • Find people who like writing and get to to write a book
      • Don’t go for core committers, they will have no time for anything else
  • Issue overload
    • Try and get rid of duplicate tickets
    • Autoclose tickets – mixed results
  • How to deal with a spike
    • Raise the bar
    • Make it easier
    • Get Perl to solve the problem
  • Raising the bar
    • Reject trivial changes – reduces the onramp
    • Add weird system – more posts on how to contribute
  • What can Perl solve
    • Style guide
    • bot bot bots
    • make it faster to merge
    • Improve PR + reviewer notice
    • Can increase productivity
  • Add more committers
    • Takes time and effort
    • People can be shy
    • Make a guide for new folks to follow
    • Have a safe space for people to ask questions
  • Reduce overhead for contributing well
    • Have doc on how to contribute next to the code, not elsewhere that people have to search for.

The Open Sourcing of Infrastructure Elizabeth K. Joseph

The recent history of infrastructure

  • 1998
    • To make a server you used Solaris or NT, bought off the shelf
    • Linux seen as Cheap Unix
    • Lots of FUD

Got a Junior Sysadmin Job

  • 2004
    • Had to tell people the basics “What is free software?”  , “Using Open Source Web Applications to Produce Business Results”
    • Turning point LAMP stack
    • Flood of changes in how customers interacted with software over the years since
      • Reluctance to be locked-in by a vendor
      • Concerns of security
      • Ability to fix bugs ourselves
      • Innovation stifled when software developed in isolation

Last 10 years

  • Changes in how people interacted with software
    • Downtime un-acceptable
    • Reliance on scaling and automation
    • Servers as Pets -> cattle
    • Large focus on data

Open Source is now Ubiquitous

  • Even Microsoft is using it a lot and interacting with the community

Operations tools were not as Open Sourced

  • Configuration Management
    • puppet modules, chef playbooks
  • Open application definitions – Juju charms, DC/OS Universe Catalog
  • Full disk images
    • Dockerhub

The Cloud

  • Cloud is the new proprietary
  • EC2-only infrastructure
  • Questions you should ask beforehand
    • Is your service adhering to open standards or am I locked in?
    • Recourse if the company goes out of business
    • Does vendor have a history of communicating about downtime and security problems?
    • Does the vendor respond to bugs and feature requests?
    • Will the vendor use data in a way I’m not comfortable with?
    • Initial costs may be low, but do you have a plan to handle long term, growing costs
  • Alternatives
    • Openstack, Kubernetes, Docker Swarm, DC/OS with Apache Mesos

Hybrid Cloud

  • Tooling can be platform agnostic
  • Hard but can be done

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Session 1 – k8s @ home and bad buses

How to run Kubernetes on your spare hardware at home, and save the world Angus Lees

  • Mainframe ->
  • PC ->
  • Rackmount PC
    • But the rackmount PC, even with built-in redundancy, will still fail. Or the location will go offline, or your data spreads across multiple machines
  • Since you need to have distributed/redundancy anyway. New model (2005). Grid computing. Clever software, dumb hardware. Loosely coupled servers
    • Libraries -> RPC / Microservices
    • Threadpool -> hadoop
    • SQL -> key/store
    • NFS -> Object store
    • In-place upgrades -> “Immutable” image-based build from scratch
  • Computers in clouds
    • No cases. No redundant Power, journaling on filesystems turned off, etc
  • Everything is in clouds – Secondary effects
    • Corporate driven
    • Apache license over GPL
    • Centralised services rather than federated protocols
    • Profit-driven rather than scratching itches
  • Summary
    • Problem
      • Distributed Systems hard to configure
      • Solutions scale down poorly
      • Most homes don’t have racks of servers
    • Implication
      • Home Free Software “stuck” at single-machine architecture
  • Kubernetes (lots of stuff, but I use it already so just doing unique bits)
    • “Unix Process as a service”
    • Inverts the stack. Data is most important, then the app; kernel and hardware are unimportant.
    • Easy upgrades, everything is an upgrade
    • Declarative API , command line interface
  • “We’ve conducted this experiment for decades now, and I have news for you, Hardware fails”

Hardware at Home

  • Raid used to be “enterprise” now normal for home
  • Elastic compute for home too
  • Kubernetes for Home
    • Budget $100
      • ARM master nodes
      • Mixed architecture
    • Assume single layer-2 home ethernet
    • Worker nodes – old $500 laptops
      • x86-64
      • CoreOS
      • Broken screens, dead batteries
    • 3 * $30 Banana pis
      • Raspberry Pi2
      • armv7a
      • containOS
    • Persistentvolumes
      • NFS mount from RAID server
    • Service – keepalived-vip
    • Ingress
      • keepalived and nginx-ingress , letsEncrypt
      • Wildcard DNS
    • Status
      • Works!
      • Printing works
      • Install: PXE boot and run coreos-install
    • Status – ungood
      • Banana PIs a bit too slow.
    • github.com/anguslees/k8s-home

Is the 370 the worst bus route in Sydney? Katie Bell

  • The 370 bus
    • Goes to UNSW and Sydney University. Goes around the city
  • If bus runs every 15 minutes, you should not be able to see 3 at once
  • Newspaper articles and Facebook group about how bad it is.
  • Two Questions
    • Bus privatisation better or worse
    • Is the 370 really the worst
  • Data provided
    • Lots of stuff but nothing on reliability
    • But they do have realtime data eg for the Tripetime app (done via a 3rd party)
    • They have a API and Key with standard format via GTFS
  • But they only publish “realtime” data, not the old data
    • So collected the realtime data, once a minute for 4 months
    • 557 GB
  • Format
    • zipfile of csv files
    • IDs sometimes ephemeral
    • Had to match timetable data and realtime data
    • Data had to be tidied up – lots
  • Processing realtime data
    • Download 1 minute
    • Parse
    • Match each of around ~7000 trips in timetable (across all of NSW)
    • Write ~20000 realtime updates to the DB
    • Running 5 EC2 instances at peak
    • Writing up to 40MB/s to the DB
  • Is the 370 the worst?
    • Define “worst”
    • Found NSW definition of what an on-time bus is.
    • No more than 5:59 late or 1:59 early. Measured start/middle/end
    • Victoria definition stricter
    • She defined (see the sketch after these notes):
      • Early: more than 2min early
      • On time: 2m early – 5 min late
      • Late: more than 5m late
      • Very late: more than 20m late
    • Across all trips
      • 3.7 million trips
      • On time 31%
      • More than 20m late 2.86%
    • Best routes
      • Nightime buses
      • Outside of Sydney
      • Shorter routes
      • 86% – 97% or better
    • Worst
      • Less than 5% on time
      • Longer routes
      • 370 is the 22nd worst
        • 8.79% on time
    • Worst routes ( percent > 20 min late)
      • 23% of 370 trips (6th worst)
      • Lots of Wollongong
    • Worst agencies
      • No obvious difference between agencies and private companies
    • Conclusion
      • Privatisation could go either way
      • 370 is close to the worst (277 could be worse) in Sydney
    • bus-shaming.com
    • github.com/katharosada/bus-shaming
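
A tiny sketch of the classification rule described above, applied to a list of delays in seconds (negative means early); the thresholds are Katie's categories, everything else is made up:

from collections import Counter

def classify(delay_s):
    """Bucket a single delay (seconds; negative = early) per the talk's definition."""
    if delay_s < -120:
        return "early"
    if delay_s <= 5 * 60:
        return "on time"
    if delay_s <= 20 * 60:
        return "late"
    return "very late"

delays = [-300, -30, 240, 700, 1500]          # hypothetical observations for one route
share = Counter(classify(d) for d in delays)
print({k: f"{100 * v / len(delays):.0f}%" for k, v in share.items()})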

Questions

  • Used Spot instances to keep cost down
  • $200 month on AWS
  • Buses better/worse according to time? Not checked yet
  • Wanted to calculate the “wait time” , not done yet.
  • Another feed of bus locations and some other data out there too.
  • Lots of other questions

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 3 – Keynote – Karen Sandler

Executive director of Software Freedom Conservancy

Previously spoke at LCA 2012 about closed-source software on her heart implant. Since then has pivoted her career towards more open-source advocacy.

  • DMCA exemption for medical device research
  • When you ask your doctor about safety of devices you sound like a conspiracy theorist
  • Various problems have been highlighted, some progress
  • Some companies addressing them

Initially published paper highlighting problem without saying she had the device

  • Got pushback from groups who thought she was scaremongering
  • Companies thinking about liability issues
  • After told story in 2012 things improved

Had to get new device recently.

  • Needed the wireless access disabled since her job pisses off hackers sometimes
  • All manufacturers said they could not disable wireless access
  • Finally found a single model that could be disabled made by a European manufacturer

 

Note: This is a quick summary; lots more was covered but it is hard to capture here. Video should be good. Her slides were broken through much of the talk but she still delivered a great talk.

Share

,

Google AdsenseLet machine learning create your In-feed ads


Last year we launched AdSense Native ads, a new family of ads created to match the look and feel of your site. If your site has an editorial feed (a list of articles or news) or a listings feed (a list of products or services), then Native In-feed ads are a great option to give your users a better experience.

Now we've brought the power of machine learning to In-feed ads, saving you time. If you're not quite sure what fonts, colors, and styles will work best for your site, you can let Google's machine learning help you decide. 

How it works: 

  1. Create a new Native In-Feed ad and select "Let Google suggest a style." 
  2. Enter the URL of a page with a feed you’d like to monetize. AdSense will scan your page to find the best placement. 
  3. Select which feed element you’d like your In-feed ad to match.
  4. Your ad is automatically created – simply place the piece of code into your feed, and you’re done! 

By the way, this method is optional, so if you prefer, you can create your ads manually. 

Posted by: 

Faris Zerdoudi, AdSense Tech Lead 
Violetta Kalathaki, AdSense Product Manager 


Sociological ImagesPod Panic & Social Problems

My gut reaction was that nobody is actually eating the freaking Tide Pods.

Despite the explosion of jokes—yes, mostly just jokes—about eating detergent packets, sociologists have long known about media-fueled cultural panics about problems that aren’t actually problems. Joel Best’s groundbreaking research on these cases is a classic example. Check out these short video interviews with Best on kids not really being poisoned by Halloween candy and the panic over “sex bracelets.”

Click here to view the embedded video.

In a tainted Halloween candy study, Best and Horiuchi followed up on media accounts to confirm cases of actual poisoning or serious injury, and they found many cases couldn’t be confirmed or were greatly exaggerated. So, I followed the data on detergent digestion.

It turns out, there is a small trend. According to a report from the American Association of Poison Control Centers,

…in 2016 and 2017, poison control centers handled thirty-nine and fifty-three cases of intentional exposures, respectively, among thirteen to nineteen year olds. In the first fifteen days of 2018 alone, centers have already handled thirty-nine such intentional cases among the same age demographic.

That said, this trend is only relative to previous years and cannot predict future incidents. The life cycle of internet jokes is fairly short, rotating quickly with an eye toward the attention economy. It wouldn’t be too surprising if people moved on from the pods long before the panic dies out.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main February 2018 Meeting: Linux.conf.au report

Feb 6 2018 18:30
Feb 6 2018 20:30
Location: 
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

PLEASE NOTE NEW LOCATION

Tuesday, February 6, 2018
6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000

Speakers:

  • Russell Coker and others, LCA conference report

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.

Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

February 6, 2018 - 18:30

CryptogramNew Malware Hijacks Cryptocurrency Mining

This is a clever attack.

After gaining control of the coin-mining software, the malware replaces the wallet address the computer owner uses to collect newly minted currency with an address controlled by the attacker. From then on, the attacker receives all coins generated, and owners are none the wiser unless they take time to manually inspect their software configuration.

So far it hasn't been very profitable, but it -- or some later version -- eventually will be.

Worse Than FailureCoded Smorgasbord: Archive This

Michael W came into the office to a hair-on-fire freakout: the midnight jobs failed. The entire company ran on batch processes to integrate data across a dozen ERPs, mainframes, CRMs, PDQs, OMGWTFBBQs, etc.: each business unit ran its own choice of enterprise software, but then needed to share data. If they couldn’t share data, business ground to a halt.

Business had ground to a halt, and it was because the archiver job had failed to copy some files. Michael owned the archiver program, not by choice, but because he got the short end of that particular stick.

The original developer liked logging. Pretty much every method looked something like this:

public int execute(Map arg0, PrintWriter arg1) throws Exception {
    Logger=new Logger(Properties.getString("LOGGER_NAME"));
    Log=new Logger(arg1);
    .
    .
    .
catch (Exception e) {
    e.printStackTrace();
    Logger.error("Monitor: Incorrect arguments");
    Log.printError("Monitor: Incorrect arguments");
    arg1.write("In Correct Argument Passed to Method.Please Check the Arguments passed \n \r");
    System.out.println("Monitor: Incorrect arguments");
}

Sometimes, to make the logging even more thorough, the catch block might look more like this:

catch(Exception e){
    e.printStackTrace();
    Logger.error("An exception happened during SFTP movement/import. " + (String)e.getMessage());
}

Java added Generics in 2004. This code was written in 2014. Does it use generics? Of course not. Every Hashtable is stringly-typed:

Hashtable attributes;
.
.
.
if (((String) attributes.get(key)).compareTo("1") == 0 | ((String) attributes.get(key)).compareTo("0") == 0) { … }

And since everything is stringly-typed, you have to worry about case-sensitive comparisons, but don’t worry, the previous developer makes sure everything’s case-insensitive, even when comparing numbers:

if (flag.equalsIgnoreCase("1") ) { … }

And don’t forget to handle Booleans…

public boolean convertToBoolean(String data) {
    if (data.compareToIgnoreCase("1") == 0)
        return true;
    else
        return false;
}

And empty strings…

if(!TO.equalsIgnoreCase("") && TO !=null) { … }

Actually, since types are so confusing, let’s make sure we’re casting to known-safe types.

catch (Exception e) {
    Logger.error((Object)this, e.getStackTraceAsString(), null, null);
}

Yes, they really are casting this to Object.

Since everything is stringly typed, we need this code, which checks to see if a String parameter is really sure that it’s a string…

protected void moveFile(String strSourceFolder, Object strSourceObject,
                     String strDestFolder) {
    if (strSourceObject.getClass().getName().compareToIgnoreCase("java.lang.String") == 0) { … }
    …
}

Now, that all was enough to get Michael’s blood pressure up, but none of that had anything to do with his actual problem. Why did the copy fail? The logs were useless, as they were spammed with messages with no particular organization. The code was bad, sure, so it wasn’t surprising that it crashed. For a little while, Michael thought it might be the getFiles method, which was supposed to identify which files needed to be copied. It did a recursive directory search (with no depth checking, so one symlink could send it into an infinite loop), and it didn’t actually filter files that it didn’t care about. It just made an ArrayList of every file in the directory structure and then decided which ones to copy.

He spent some time really investigating the copy method, to see if that would help him understand what went wrong:

sourceFileLength = sourceFile.length();
newPath = sourceFile.getCanonicalPath();
newPath = newPath.replace(".lock", "");
newFile = new File(newPath);
sourceFile.renameTo(newFile);                    
destFileLength = newFile.length();
while(sourceFileLength!=destFileLength)
{
    //Copy In Progress
}
//Remy: I didn't elide any code from the inside of that while loop- that is exactly how it's written, as an empty loop.

Hey, out of curiosity, what does the JavaDoc have to say about renameTo?

Many aspects of the behavior of this method are inherently platform-dependent: The rename operation might not be able to move a file from one filesystem to another, it might not be atomic, and it might not succeed if a file with the destination abstract pathname already exists. The return value should always be checked to make sure that the rename operation was successful.

It only throws exceptions if you don’t supply a destination, or if you don’t have permissions to the files. Otherwise, it just returns false on a failure.

So… if the renameTo operation fails, the archiver program will drop into an infinite loop. Unlogged. Undetected. That might seem like the root cause of the failure, but it wasn’t.

As it turned out, the root cause was that someone in ops hit “Ok” on a security update, which triggered a reboot, disrupting all the scheduled jobs.

Michael still wanted to fix the archiver program, but there was another problem with that. He owned the InventoryArchiver.jar. There was also OrdersArchiver.jar, and HRArchiver.jar, and so on. They had all been “written” by the same developer. They all did basically the same job. So they were all mostly copy-and-paste jobs with different hard-coded strings to specify where they ran. But they weren’t exactly copy-and-paste jobs, so each one had to be analyzed, line by line, to see where the logic differences might possibly crop up.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 2 – Keynote – Matthew Todd

Collaborating with Everybody: Open Source Drug Discovery

  • Term used is a bit undefined. Open Source, Free Drugs?
  • First Open Source Project – Praziquantel
    • Molecule has 2 mirror image forms. One does the job, the other tastes awful. Pills were previously a mix
    • Project to just have pill with the single form
      • Created discussion
      • Online Lab Notebook
      • 75% of contributions were from private sector (especially Syncom)
      • Ended up finding an approach that worked, different from what was originally proposed, from feedback.
      • Similar method found by private company that was also doing the work
  • Conventional Drug discovery
    • Find drug that kills something bad – Hit
    • Test it and see if it is suitable – Lead
    • 13,500 molecules in public domain that kill the malaria parasite
  • 6 Laws of Open Science
    • All data is open and all ideas are shared
    • Anyone can take part at any level of the project
  • Openness increasingly seen as key
  • Open Source Malaria
    • 4 campaigns
    • Work on a molecule, park it when doesn’t seem promising
    • But all data is still public
  • What it actually is
    • Electronic lab book (80% of scientists still use paper)
    • Using Labtrove, changing to labarchives
    • Everything you do goes up every day
    • Todo list
      • Tried stuff, ended up using issue list on github
      • Not using most other github stuff
    • Data on a Google Sheet
    • Light Website, twitter feed
  • Lab vs Code
  • Have a promising molecule – works well in mice
    • Would probably be a patentable state
    • Not sure yet exactly how it works
  • Competition – Predictive model
    • Lots of solutions submitted, not good enough to use
    • Hopeful a model will be created
  • Tried a known-working molecule from elsewhere, but couldn’t get it to work
    • This is out in the open. Lots of discussion
  • School group able to recreate Daraprim, a high-priced US drug
  • Public Domain science is now accepted for publications
  • Need to make computers understand molecule diagrams and convert them to a representative format which can then be searched.
  • Missing
    • Automated links to databases in tickets
    • Basic web page stuff, auto-porting of data, newsletter, become non-profit, stickers
    • Stuff is not folded back into the Wiki
  • OS Mycetoma – New Project
    • Fungus with no treatment
    • Working on possible molecule to treat
  • Some ideas on how to get products created this way to market – eg “data exclusivity”

 

Share

,

CryptogramSkygofree: New Government Malware for Android

Kaspersky Labs is reporting on a new piece of sophisticated malware:

We observed many web landing pages that mimic the sites of mobile operators and which are used to spread the Android implants. These domains have been registered by the attackers since 2015. According to our telemetry, that was the year the distribution campaign was at its most active. The activities continue: the most recently observed domain was registered on October 31, 2017. Based on our KSN statistics, there are several infected individuals, exclusively in Italy.

Moreover, as we dived deeper into the investigation, we discovered several spyware tools for Windows that form an implant for exfiltrating sensitive data on a targeted machine. The version we found was built at the beginning of 2017, and at the moment we are not sure whether this implant has been used in the wild.

It seems to be Italian. Ars Technica speculates that it is related to Hacking Team:

That's not to say the malware is perfect. The various versions examined by Kaspersky Lab contained several artifacts that provide valuable clues about the people who may have developed and maintained the code. Traces include the domain name h3g.co, which was registered by Italian IT firm Negg International. Negg officials didn't respond to an email requesting comment for this post. The malware may be filling a void left after the epic hack in 2015 of Hacking Team, another Italy-based developer of spyware.

BoingBoing post.

Cory DoctorowMy keynote from ConveyUX 2017: “I Can’t Let You Do That, Dave.”

“The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

—Cory Doctorow

ConveyUX is pleased to feature author and activist Cory Doctorow to close out our 2017 event. Cory’s body of work includes fascinating science fiction and engaging non-fiction about the relationship between society and technology. His most recent book is Information Doesn’t Want to be Free: Laws for the Internet Age. Cory will delve into some of the issues expressed in that book and talk about issues that affect all of us now and in the future. Cory will be on hand for Q&A and a post-session book signing.

CryptogramDark Caracal: Global Espionage Malware from Lebanon

The EFF and Lookout are reporting on a new piece of spyware operating out of Lebanon. It primarily targets mobile devices compromised by fake secure messaging clients like Signal and WhatsApp.

From the Lookout announcement:

Dark Caracal has operated a series of multi-platform campaigns starting from at least January 2012, according to our research. The campaigns span across 21+ countries and thousands of victims. Types of data stolen include documents, call records, audio recordings, secure messaging client content, contact information, text messages, photos, and account data. We believe this actor is operating their campaigns from a building belonging to the Lebanese General Security Directorate (GDGS) in Beirut.

It looks like a complex infrastructure that's been well-developed, and continually upgraded and maintained. It appears that a cyberweapons arms manufacturer is selling this tool to different countries. From the full report:

Dark Caracal is using the same infrastructure as was previously seen in the Operation Manul campaign, which targeted journalists, lawyers, and dissidents critical of the government of Kazakhstan.

There's a lot in the full report. It's worth reading.

Three news articles.

Worse Than FailureAlien Code Reuse

“Probably the best thing to do is try and reorganize the project some,” Tim, “Alien”’s new boss, said. “It’s a bit of a mess, so a little refactoring will help you understand how the code all fits together.”

“Alien” grabbed the code from git, and started walking through the code. As promised, it was a bit of a mess, but partially that mess came from their business needs. There was a bunch of common functionality in a Common module, but for each region they did business in- Asia, North America, Europe, etc.- there was a region specific deployable, each in its own module. Each region had its own build target that would include the Common module as part of the build process.

The region-specific modules were vaguely organized into sub-modules, and that’s where “Alien” settled in to start reorganizing. Since Asia was the largest, most complicated module, they started there, on a sub-module called InventoryManagement. They moved some files around, set up the module and sub-modules in Maven, and then rebuilt.

The Common library failed to build. This gave “Alien” some pause, as they hadn’t touched anything pertaining to the Common project. Specifically, Common failed to build because it was looking for some files in the Asia.InventoryManagement sub-module. Cue the dive into the error trace and the vagaries of the build process. Was there a dependency between Common and Asia that had gone unnoticed? No. Was there a build-order issue? No. Was Maven just being… well, Maven? Yes, but that wasn’t the problem.

After hunting around through all the obvious places, “Alien” eventually ran an ls -al.

~/messy-app/base/Common/src/com/mycompany > ls -al
lrwxrwxrwx 1 alien  alien    39 Jan  4 19:10 InventoryManagement -> ../../../../../Asia/InventoryManagement/src/com/mycompany/IM/
drwxr-x--- 3 alien  alien  4096 Jan  4 19:10 core/

Yes, that is a symbolic link. A long-ago predecessor discovered that the Asia.InventoryManagement sub-module contained some code that was useful across all modules. Actually moving that code into Common would have involved refactoring Asia, which was the largest, most complicated module. Presumably, that sounded like work, so instead they just added a sym-link. The files actually lived in Asia, but were compiled into Common.

“Alien” writes, “This is the first time in my over–20-year working life I see people reuse source code like this.”

They fixed this, and then went hunting, only to find a dozen more examples of this kind of code “reuse”.
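
Hunting for the remaining cases can be mostly automated. A sketch of the idea: walk each module and report any symlink whose target resolves outside that module (the paths and repo layout here are assumptions, not the real project):

import os

def find_escaping_symlinks(module_dir):
    """Print symlinks inside module_dir whose targets live outside it."""
    root = os.path.realpath(module_dir)
    for dirpath, dirnames, filenames in os.walk(module_dir):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path):
                target = os.path.realpath(path)
                if not target.startswith(root + os.sep):
                    print(f"{path} -> {target}")

for module in ("Common", "Asia", "NorthAmerica", "Europe"):
    find_escaping_symlinks(module)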

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaJames Morris: LCA 2018 Kernel Miniconf – SELinux Namespacing Slides

I gave a short talk on SELinux namespacing today at the Linux.conf.au Kernel Miniconf in Sydney — the slides from the talk are here: http://namei.org/presentations/selinux_namespacing_lca2018.pdf

This is a work in progress to which I’ve been contributing, following on from initial discussions at Linux Plumbers 2017.

In brief, there’s a growing need to be able to provide SELinux confinement within containers: typically, SELinux appears disabled within a container on Fedora-based systems, as a workaround for a lack of container support.  Underlying this is a requirement to provide per-namespace SELinux instances,  where each container has its own SELinux policy and private kernel SELinux APIs.

A prototype for SELinux namespacing was developed by Stephen Smalley, who released the code via https://github.com/stephensmalley/selinux-kernel/tree/selinuxns.  There were and still are many TODO items.  I’ve since been working on providing namespacing support to on-disk inode labels, which are represented by security xattrs.  See the v0.2 patch post for more details.

Much of this work will be of interest to other LSMs such as Smack, and many architectural and technical issues remain to be solved.  For those interested in this work, please see the slides, which include a couple of overflow pages detailing some known but as yet unsolved issues (supplied by Stephen Smalley).

I anticipate discussions on this and related topics (LSM stacking, core namespaces) later in the year at Plumbers and the Linux Security Summit(s), at least.

The session was live streamed — I gather a standalone video will be available soon!

ETA: the video is up! See:

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 3 – Developers, Developers Miniconf

Beyond Web 2.0 Russell Keith-Magee

  • Django guy
  • Back in 2005 when Django first came out
    • Web was fairly simple, click something and something happened
    • model, views, templates, forms, url routing
  • The web c 2016
    • Rich client
    • API
    • mobile clients, native apps
    • realtime channels
  • Rich client frameworks
    • response to the increased complexity that is required
    • Complex client-side and complex server-side code
  • Isomorphic Javascript development
    • Same code on both client and server
    • Only works with javascript really
    • hacks to work with other languages but not great
  • Isomorphic javascript development
    • Requirements
    • Need something in-between server and browser
    • Was once done with Java based web clients
    • model, view, controller
  • API-first development
  • How does it work with high-latency or no-connection?
  • Part of the controller and some of the model needed in the client
    • If you have python on the server you need python on the client
    • Brython, Skulpt, pypy.js
    • <script type=”text/python”>
    • Note: Not Python being compiled into JavaScript. Python is run in the browser (see the sketch after these notes)
    • Need to download full python interpreter though (500k-15M)
    • Fairly fast
  • Do we need a full python interpreter?
    • Maybe something just to run the bytecode
    • Batavia
    • Javascript implementation of python virtual machine
    • 10KB
    • Downside – slower than cpython on the same machine
  • WASM
    • Like assembly but for the web
    • Benefits from 70y of experience with assembly languages
    • Close to Cpython speed
    • But
      • Not quite on browsers
      • No garbage collection
      • Cannot manipulate DOM
      • But both coming soon
  • Example: http://bit.ly/covered-in-bees
  • But “possible isn’t enough”
  • pybee.org
  • pybee.org/bee/join
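
For a feel of the Brython approach mentioned above, a minimal sketch; the element ids are made up, and it assumes brython.js is already loaded and this code sits in a <script type="text/python"> block:

from browser import document


def greet(event):
    # runs in the browser; no hand-written JavaScript involved
    document["output"].text = "Hello from Python running in the page"


document["hello-button"].bind("click", greet)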

Using “old skool” Free tools to easily publish API documentation – Alec Clew

  • https://github.com/alecthegeek/doc-api-old-skool
  • Your API is successful if people are using it
  • High Quality and easy to use
  • Provide great docs (might cut down on support tickets)
  • Who are you writing for?
    • Might not have english as first language
    • New to the API
    • Might have different tech expertise (different languages)
    • Different tooling
  • Can be hard work
  • Make better docs
    • Use diagrams
    • Show real code (complete and working)
  • Keep your sentences simple
  • Keep the docs current
  • Treat documentation like code
    • Fix bugs
    • add features
    • refactor
    • automatic builds
    • Cross platform support
    • “Everything” is text and under version control
  • Demo using pandoc
  • Tools
  • pandoc, plantuml, Graphviz, M4, make, bash/sed/python/etc
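
A minimal driver for the pandoc workflow sketched in these notes might look like this (directory names are assumptions; in practice pandoc and plantuml would usually be wired together with make):

import subprocess
from pathlib import Path

def build_api_docs(src_dir="docs", out_dir="site"):
    """Render every Markdown page to standalone HTML with pandoc."""
    Path(out_dir).mkdir(exist_ok=True)
    for md in sorted(Path(src_dir).glob("*.md")):
        html = Path(out_dir) / (md.stem + ".html")
        subprocess.run(["pandoc", str(md), "--standalone", "-o", str(html)], check=True)

build_api_docs()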

 

Lightning Talks

  • Nic – Alt attribute
    • need to be added to images
    • Don’t have alts when images as links
    • http://bit.ly/Nic-slides
  • Vaibhav Sager – Travis-CI
    • Builds codes
    • Can build websites
    • Uses to build Resume
    • Build presentations
  • Steve Ellis
    • Openshift Origin Demo
  • Alec Clews
    • Python vs C vs PHP vs Java vs Go for small case study
    • Implemented simple xmlrpc client in 5 languages
    • Python and Go were straightforward, each had one simple trick (40-50 lines)
    • C was 100 lines. A lot harder. Conversions, etc all manual
    • PHP wasn’t too hard. easier in modern vs older PHP
  • Daurn
    • Lua
    • Fengari.io – Lua in the browser
  • Alistair
    • How not to docker ( don’t trust the Internet)
    • Don’t run privileged
    • Don’t expose your docker socket
    • Don’t use host network mode
    • Know where your code is FROM
    • Make sure your kernel on your host is secure
  • Daniel
    • Put proxy in front of the docker socket
    • You can use it to limit what non-privileged users with access to the docker socket can do

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 2

Manage all your tasks with TaskWarrior Paul ‘@pjf’ Fenwick

  • Lots of task management software out there
    • Tried lots
    • Doesn’t like proprietary ones, but unable to add features he wants
    • Likes command line
  • Disclaimer: “Most systems do not work for most people”
  • TaskWarrior
    • Lots of features
    • Learning cliff

Intro to TaskWarrior

  • Command line
  • Simple level can be just a todo list
  • Can add tags
    • unstructured many to many
    • Added just put putting “+whatever” on command
    • Great for searching
    • Can put all people or all types of jobs together
  • Meta Tags
    • Automatic date related (eg due this week or today)
  • Project
    • A bunch of tasks
    • Can be strung together
    • eg Travel project, projects for each trip inside them
  • Contexts (show only some projects and tasks)
    • Work tasks
    • Tasks for just a client
    • Home stuff
  • Annotation (Taking notes)
    • $ task 31 annotate “extra stuff”
    • has an auto timestamp
    • shown by default, or just show a count of them
  • Tasks associated with dates
    • “wait”
    • Don’t show task until a date (approx)
    • Hide a task for an amount of time
    • Scheduled tasks urgency boosted at a specific date
  • Until
    • delete a task after a certain date
  • Relative to other tasks
    • eg book flights 30 days before a conference
    • good for scripting, create a whole bunch of related tasks for a project
  • due dates
    • All sorts of things (see above) give tasks higher priority
    • Tasks can be manually changed
  • Tools and plugins
    • Taskopen – Opens resources in annotations (eg website, editor)
  • Working with others
    • Bugwarrior – interfaces with GitHub, Trello, Gmail, JIRA, Trac, Bugzilla and lots of other things
    • Lots of settings
    • Keeps all in sync
  • Lots of extra stuff
    • Paul updates his shell prompt to remind him things are busy
  • Also has
    • Graphical reports: burndown, calendar
    • Hooks: Eg hooks to run all sort of stuff
    • Online Sync
    • Android client
    • Web client
  • Reminder it has a steep learning curve.

Love thy future self: making your systems ops-friendly Matt Palmer

  • Instrumentation
  • Instrumenting incoming requests (see the sketch after these notes)
    • Count of the total number of requests (broken down by requestor)
    • Count of responses (broken down by request/error)
    • How long it took (broken down by success/errors)
    • How many right now
  • Get number of in-progress requests, average time etc
  • Instrumenting outgoing requests
    • For each downstream component
    • Number of requests sent
    • How many responses we’ve received (broken down by success/error)
    • How long it took to get the response (broken down by request/error)
    • How many right now
  • Gives you
    • incoming/outgoing ratio
    • error rate = problem is downstream
  • Logs
    • Logs cost tends to be more than instrumentation
  • Three Log priorities
    • Error
      • Need a full stack trace
      • Add info don’t replace it
      • Capture all the relevant variables
      • Structure
    • Information
      • Startup messages
      • Basic request info
      • Sampling
    • Debug
      • printf debugging at webscale
      • tag with module/method
      • unique id for each request
      • late-bind log data if possible.
      • Allow selective activation at runtime (feature flag, special url, signals)
    • Summary
      • Visibility required
      • Fault isolation
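
A minimal sketch of the request-side measurements described in these notes, using the Python prometheus_client library (metric and handler names are made up; the talk did not prescribe a particular tool):

from prometheus_client import Counter, Gauge, Histogram, start_http_server

REQUESTS = Counter("myapp_requests_total", "Incoming requests", ["handler"])
RESPONSES = Counter("myapp_responses_total", "Responses sent", ["handler", "outcome"])
LATENCY = Histogram("myapp_request_seconds", "Time spent handling requests", ["handler"])
IN_FLIGHT = Gauge("myapp_requests_in_progress", "Requests being handled right now")

def instrumented(handler_name, work):
    """Wrap a request handler with the four measurements from the talk."""
    REQUESTS.labels(handler_name).inc()
    IN_FLIGHT.inc()
    try:
        with LATENCY.labels(handler_name).time():
            result = work()
        RESPONSES.labels(handler_name, "success").inc()
        return result
    except Exception:
        RESPONSES.labels(handler_name, "error").inc()
        raise
    finally:
        IN_FLIGHT.dec()

start_http_server(8000)  # expose /metrics for scraping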

 

Share

Planet Linux AustraliaSimon Lyall: Linux.conf.au 2018 – Day 1 – Session 1 – Kernel Miniconf

Look out for what’s in the security pipeline – Casey Schaufler

Old Protocols

  • SELinux
    • Not much changing
  • Smack
    • Network configuration improvements and catching up with how the netlabel code wants things to be done.
  • AppArmor
    • Labeled objects
    • Networking
    • Policy stacking

New Security Modules

  • Some people think existing security modules don’t work well with what they are doing
  • Landlock
    • eBPF extension to SECMARK
    • Kills processes when it goes outside of what it should be doing
  • PTAGS
    • General purpose process tags
    • For application use (app can decide what it wants based on tags, not something external to the process enforcing things)
  • HardChroot
    • Limits on chroot jail
    • mount restrictions
  • Safename
    • Prevents creation of unsafe files names
    • start, middle or end characters
  • SimpleFlow
    • Tracks tainted data

Security Module Stacking

  • Problems with incompatibility of module labeling
  • People want different security policy and mechanism in containers than from the base OS
  • Netfilter problems between Smack and AppArmor

Container

  • Containers are a little bit undefined right now. Not a kernel construct
  • But while not kernel constructs, need to work with and support them

Hardening

  • Printing pointers (eg in syslog)
  • Usercopy

 

Share

,

Planet Linux AustraliaBen Martin: 4cm thick wood cnc project: shelf

The lighter wood is about 4cm thick. Both of the sides are cut from a single plank of timber which left the feet with a slight weak point at the back. Given a larger bit of timber I would have tapered the legs outward from the back more gradually. But the design is restricted by the timber at hand.


The shelves are plywood which turned out fairly well after a few coats of poly. I knocked the extreme sharp edges off the ply so it hurts a little rather than a lot if you accidentally poke the edge. This is a mixed machine and human build; the back of the plywood that meets the uprights was knocked off using a bandsaw.

Being able to CNC thick timber like this opens up more bold designs. Currently I have to use a 1/2 inch bit to get this reach. Stay tuned for more CNC timber fun!


,

Don MartiMore brand safety bullshit

There's enough bullshit on the Internet already, but I'm afraid I'm going to quote some more. This time from Ilyse Liffreing at IBM.

The reality is none of us can say with certainty that anywhere in the world, we are [brand] safe. Look what just happened with YouTube. They are working on fixing it, but even Facebook and Google themselves have said there’s not much they can do about it. I mean, it’s hard. It’s not black and white. We are putting a lot of money in it, and pull back on channels where we have concerns. We’ve had good talks with the YouTube teams.

Bullshit.

One important part of this decision is black and white.

Either you give money to Nazis.

Or you don't give money to Nazis.

If Nazis are better at "programmatic" than the resting-and-vesting chill bros at the programmatic ad firms (and, face it, Nazis kick ass at programmatic), then the choice to spend ad money in a we're-kind-of-not-sure-if-this-goes-to-Nazis-or-not way is a choice that puts your brand on the wrong side of a black and white line.

There are plenty of Nazi-free places for brands to run ads. They might not be the cheapest. But I know which side of the line I buy from.

,

CryptogramFriday Squid Blogging: Te Papa Colossal Squid Exhibition Is Being Renovated

The New Zealand home of the colossal squid exhibit is being renovated.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Breaches Don't Affect Stock Price

Interesting research: "Long-term market implications of data breaches, not," by Russell Lange and Eric W. Burger.

Abstract: This report assesses the impact disclosure of data breaches has on the total returns and volatility of the affected companies' stock, with a focus on the results relative to the performance of the firms' peer industries, as represented through selected indices rather than the market as a whole. Financial performance is considered over a range of dates from 3 days post-breach through 6 months post-breach, in order to provide a longer-term perspective on the impact of the breach announcement.

Key findings:

  • While the difference in stock price between the sampled breached companies and their peers was negative (1.13%) in the first 3 days following announcement of a breach, by the 14th day the return difference had rebounded to + 0.05%, and on average remained positive through the period assessed.

  • For the differences in the breached companies' betas and the beta of their peer sets, the differences in the means of 8 months pre-breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • For the differences in the breached companies' beta correlations against the peer indices pre- and post-breach, the difference in the means of the rolling 60 day correlation 8 months pre- breach versus post-breach was not meaningful at 90, 180, and 360 day post-breach periods.

  • In regression analysis, use of the number of accessed records, date, data sensitivity, and malicious versus accidental leak as variables failed to yield an R2 greater than 16.15% for response variables of 3, 14, 60, and 90 day return differential, excess beta differential, and rolling beta correlation differential, indicating that the financial impact on breached companies was highly idiosyncratic.

  • Based on returns, the most impacted industries at the 3 day post-breach date were U.S. Financial Services, Transportation, and Global Telecom. At the 90 day post-breach date, the three most impacted industries were U.S. Financial Services, U.S. Healthcare, and Global Telecom.

The market isn't going to fix this. If we want better security, we need to regulate the market.

Note: The article is behind a paywall. An older version is here. A similar article is here.

Worse Than FailureError'd: Alphabetical Soup

"I appreciate that TIAA doesn't want to fully recognize that the country once known as Burma now calls itself Myanmar, but I don't think that this is the way to handle it," Bruce R. writes.

 

"MSI Installed an update - but I wonder what else it decided to update in the process? The status bar just kept going and going..." writes Jon T.

 

Paul J. wrote, "Apparently my occupation could be 'All Other Persons' on this credit card application!"

 

Geoff wrote, "So I need to commit the changes I didn't make, and my options are 'don't commit' or 'don't commit'?"

 

David writes, "This was after a 15 minute period where I watched a timer spin frantically."

 

"It's as if DealeXtreme says 'three stars, I think you meant to say FIVE stars'," writes Henry N.

 

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

,

Sociological ImagesBros and Beer Snobs

The rise of craft beer in the United States gives us more options than ever at happy hour. Choices in beer are closely tied to social class, and the market often veers into the world of pointlessly gendered products. Classic work in sociology has long studied how people use different cultural tastes to signal social status, but where once very particular tastes showed membership in the upper class—like a preference for fine wine and classical music—a world with more options offers status to people who consume a little bit of everything.

Photo Credit: Brian Gonzalez (Flickr CC)

But who gets to be an omnivore in the beer world? New research published in Social Currents by Helana Darwin shows how the new culture of craft beer still leans on old assumptions about gender and social status. In 2014, Darwin collected posts using gendered language from fifty beer blogs. She then visited four craft beer bars around New York City, surveying 93 patrons about the kinds of beer they would expect men and women to consume. Together, the results confirmed that customers tend to define “feminine” beer as light and fruity and “masculine” beer as strong, heavy, and darker.

Two interesting findings about what people do with these assumptions stand out. First, patrons admired women who drank masculine beer, but looked down on those who stuck to the feminine choices. Men, however, could have it both ways. Patrons described their choice to drink feminine beer as open-mindedness—the mark of a beer geek who could enjoy everything. Gender determined who got “credit” for having a broad range of taste.

Second, just like other exclusive markers of social status, the India Pale Ale held a hallowed place in craft brew culture to signify a select group of drinkers. Just like fancy wine, Darwin writes,

IPA constitutes an elite preference precisely because it is an acquired taste…inaccessible to those who lack the time, money, and desire to cultivate an appreciation for the taste.

Sociology can get a bad rap for being a buzzkill, and, if you’re going to partake, you should drink whatever you like. But this research provides an important look at how we build big assumptions about people into judgments about the smallest choices.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

TEDNew clues about the most mysterious star in the universe, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

New clues about the most mysterious star in the universe. KIC 8462852 (often called “Tabby’s star,” after the astronomer Tabetha Boyajian, who led the first study of the star) intermittently dims as much as 22% and then brightens again, for a reason no one has yet quite figured out. This bizarre occurrence led astronomers to propose over a dozen theories for why the star might be dimming, including the fringe theory that it was caused by an alien civilization using the star’s energy. Now, new data shows that the dimming isn’t fully opaque; certain colors of light are blocked more than others. This suggests that what’s causing the star to dim is dust. After all, if an opaque object — like a planet or alien megastructure — was passing in front of the star, all of the light would be blocked equally. Tabby’s star is due to become visible again in late February or early March of 2018. (Watch Boyajian’s TED Talk)

TED’s new video series celebrates the genius design of everyday objects. What do the hoodie, the London Tube Map, the hyperlink, and the button have in common? They’re everyday objects, often overlooked, that have profoundly influenced the world around us. Each 3- to 4- minute episode of TED’s original video series Small Thing Big Idea celebrates one of these objects, with a well-known name in design explaining what exactly makes it so great. First up is Michael Bierut on the London Tube Map. (Watch the first episode here and tune in weekly on Tuesday for more.)

The science of black holes. In the new PBS special Black Hole Apocalypse, astrophysicist Janna Levin explores the science of black holes, what they are, why they are so powerful and destructive, and what they might tell us about the very origin of our existence. Dubbing them the world’s greatest mystery, Levin and her fellow scientists, including astronomer Andrea Ghez and experimental physicist Rainer Weiss, embark on a journey to portray the magnitude and importance of these voids that were long left unexplored and unexplained. (Watch Levin’s TED Talk, Ghez’s TED Talk, and read Weiss’ Ideas piece.)

An organized crime thriller with non-fiction roots. McMafia, a television show starring James Norton, premiered in the UK in early January. The show is a fictionalized account of Misha Glenny’s 2008 non-fiction book of the same name. The show focuses on Alex Goldman, the son of an exiled Mafia boss who wants to put his family’s history behind him. Unfortunately, a murder foils his plans and to protect his family, he must face up to various international crime syndicates. (Watch Glenny’s TED Talk)

Inside the African-American anti-abortion movement. In her new documentary for PBS’ Frontline, Yoruba Richen examines the complexities of the abortion debate as it relates to the US’s racial history. Richen speaks with African-American members of both the pro-choice and the anti-abortion movements, as her short doc follows a group of anti-abortion activists as they work in the black community. (Watch Richen’s TED Talk.)

Have a news item to share? Write us at contact@ted.com and you may see it included in this weekly round-up.

Worse Than FailureCodeSOD: The Least of the Max

Adding assertions and sanity checks to your code is important, especially when you’re working in a loosely-typed language like JavaScript. Never assume the input parameters are correct; assert what they must be. Done correctly, they not only make your code safer, but also easier to understand.

Matthias’s co-worker… doesn’t exactly do that.

      function checkPriceRangeTo(x, min, max) {
        if (max == 0) {
          max = valuesPriceRange.max
        }
        min = Math.min(min, max);
        max = Math.max(min, max);
        x = parseInt(x)
        if (x == 0) {
          x = 50000
        }

        //console.log(x, 'min:', min, 'max:', max);
        return x >= min && x <= max
      }

This code isn’t bad, per se. I knew a kid, Marcus, in middle school that wore the same green sweatshirt every day, and had a musty 19th Century science textbook that discussed phlogiston in his backpack. Over lunch, he was happy to strike up a conversation with you about the superiority of phlogiston theory over Relativity. He wasn’t bad, but he was annoying and not half as smart as he thought he was.

This code is the same. Sure, x might not be a numeric value, so let’s parseInt first… which might return NaN. But we don’t check for NaN, we check for 0. If x is 0, then make it 50,000. Why? No idea.

The real treat, though, is the flipping of min/max. If the calling code did this wrong (min=6,max=1) then instead of swapping them, which is obviously the intent, it instead makes them both equal to the lowest of the two.
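
For contrast, here is a minimal sketch of what the function presumably intended: actually swap min and max when they arrive reversed, and guard against parseInt returning NaN. The valuesPriceRange fallback and the unexplained 50,000 default are carried over from the snippet above as assumptions, not a claim about how the real codebase should behave.

      // A sketch only: same shape as the original, but with a real swap and a NaN check.
      function checkPriceRangeTo(x, min, max) {
        if (max == 0) {
          max = valuesPriceRange.max;   // same global fallback as the original snippet
        }
        if (min > max) {
          [min, max] = [max, min];      // swap, instead of collapsing both to the minimum
        }
        x = parseInt(x, 10);
        if (Number.isNaN(x) || x == 0) {
          x = 50000;                    // the original's unexplained default, kept as-is
        }
        return x >= min && x <= max;
      }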

In the end, Matthias has one advantage in dealing with this pest, that I didn’t have in dealing with Marcus. He could actually make it go away. I just had to wait until the next year, when we didn’t have lunch at the same time.

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

,

Valerie AuroraGetting free of toxic tech culture

This post was co-authored by Valerie Aurora and Susan Wu, and cross-posted on both our blogs.

Marginalized people leave tech jobs in droves, yet we rarely write or talk publicly about the emotional and mental process of deciding to leave tech. It feels almost traitorous to publicly discuss leaving tech when you’re a member of a marginalized group – much less actually go through with it.

There are many reasons we feel this way, but a major reason is that the “diversity problem in tech” is often framed as being caused by marginalized people not “wanting” to be in tech enough: not taking the right classes as teenagers, not working hard enough in university, not “leaning in” hard enough at our tech jobs. In this model, it is the moral responsibility of marginalized people to tolerate unfair pay, underpromotion, harassment, and assault in order to serve as role models and mentors to the next generation of marginalized people entering tech. With this framing, if marginalized people end up leaving tech to protect ourselves, it’s our duty to at least keep quiet about it, and not scare off other marginalized people by sharing our bad experiences.

A printer converted to a planter (photo CC BY-SA Ben Stanfield, https://flic.kr/p/2CjHL)

Under that model, this post is doubly taboo: it’s a description of how we (Susan and Valerie) went through the process of leaving toxic tech culture, as a guide to other marginalized people looking for a way out. We say “toxic tech culture” because we want to distinguish between leaving tech entirely, and leaving areas of tech which are abusive and harmful. Toxic tech culture comes in many forms: the part of Silicon Valley VC hypergrowth culture that deifies founders as “white, male, nerds who’ve dropped out of Harvard or Stanford,” the open source software ecosystem that so often exploits and drives away its best contributors, and the scam-riddled cryptocurrency community, to name just three.

What is toxic tech culture? Toxic tech cultures are those that demean and devalue you as holistic, multifaceted human beings. Toxic tech cultures are those that prioritize profits and growth over human and societal well being. Toxic tech cultures are those that treat you as replaceable cogs within a system of constant churn and burnout.

But within tech there are exceptions to the rule: technology teams, organizations, and communities where marginalized people can feel a degree of safety, belonging, and purpose. You may be thinking about leaving all of tech, or leaving a particular toxic tech culture for a different, better tech culture; either way, we hope this post will be useful to you.

A little about us: Valerie spent more than ten years working as a software engineer, specializing in file systems, Linux, and operating systems. Susan grew up on the Internet, and spent 25 years as a software developer, a community builder, an investor, and a VC-backed Silicon Valley founder. We were both overachievers who advanced quickly in our fields – until we could no longer tolerate the way we were treated, or be complicit in a system that did not match our values. Valerie quit her job as a programmer to co-found a tech-related non-profit for women, and now teaches ally skills to tech workers. Susan relocated to France and Australia, co-founded Project Include, a nonprofit dedicated to improving diversity and inclusion in tech, and is now launching a new education system. We are both still involved in tech to various degrees, but on our own terms, and we are much happier now.

We disagree that marginalized people should stay silent about how and why they left toxic tech culture. When, for example, more than 50% of women in tech leave after 12 years, there is an undeniable need for sharing experience and hard-learned lessons. Staying silent about the unfairness that 37% of underrepresented people of color cite as a reason they left tech helps no one.

We reject the idea that it is the “responsibility” of marginalized people to stay in toxic tech culture despite abuse and discrimination, solely to improve the diversity of tech. Marginalized people have already had to overcompensate for systemic sexist, ableist, and racist biases in order to earn their roles in tech. We believe people with power and privilege are responsible for changing toxic tech culture to be more inclusive and fair to marginalized people. If you want more diversity in tech, don’t ask marginalized people to be silent, to endure often grievous discrimination, or to take on additional unpaid, unrecognized labor – ask the privileged to take action.

For many marginalized people, our experience of being in tech includes traumatic experience(s) which we may not have yet fully come to terms with and that influenced our decisions to leave. Sometimes we don’t make a direct connection between the traumatic experiences and our decision to leave. We just find that we are “bored” and are no longer excited about our work, or start avoiding situations that used to be rewarding, like conferences, speaking, and social events. Often we don’t realize traumatic events are even traumatic until months or years later. If you’ve experienced trauma, processing the trauma is necessary, whether or not you decide to leave toxic tech culture.

This post doesn’t assume that you are sure that you want to leave your current area of tech, or tech as a whole. We ourselves aren’t “sure” we want to permanently leave the toxic tech cultures we were part of even now – maybe things will get better enough that we will be willing to return. You can take the steps described in this post and stay in your current area of tech for as long as you want – you’ll just be more centered, grounded, and happy.

The steps we took are described in roughly the order we took them, but they all overlapped and intermixed with each other. Don’t feel like you need to do things in a particular order or way; this is just to give you some ideas on what you could do to work through your feelings about leaving tech and any related trauma.

Step 1: Deprogram yourself from the cult of tech

The first step is to start deprogramming yourself from the cult of tech. Being part of toxic tech culture has a lot in common with being part of a cult. How often have you heard a Silicon Valley CEO talk about how his (it’s almost always a he) startup is going to change the world? The refrain of how a startup CEO is going to save humanity is so common that it’s actually uncommon for a CEO to not use saviour language when describing their startup. Cult leaders do the same thing: they create a unique philosophy, imbued with some sort of special message that they alone can see or hear, convince people that only they have the answers for what ails humanity, and use that influence to control the people around them.

Consider this list of how to identify a cult, and how closely this list mirrors patterns we can observe in Silicon Valley tech:

  • “Be wary of any leader who proclaims him or herself as having special powers or special insight.” How often have you heard a Silicon Valley founder or CEO proclaimed as some sort of genius, and they alone can figure out how to invent XYZ? Nearly every day, there’s some deific tribute to Elon Musk or Mark Zuckerberg in the media.
  • “The group is closed, so in other words, although there may be outside followers, there’s usually an inner circle that follows the leader without question, and that maintains a tremendous amount of secrecy.” The Information just published a database summarizing how secretive, how protective, how insular the boards are for the top 30 private companies in tech. Here’s what they report: “Despite their enormous size and influence, the biggest privately held technology companies eschew some basic corporate governance standards, blocking outside voices, limiting decision making to small groups of mostly white men and holding back on public disclosures, an in-depth analysis by The Information shows.”
  • “A very important aspect of cult is the idea that if you leave the cult, horrible things will happen to you.” There’s an insidious reason why your unicorn startup provides you with a free cafeteria, gym, yoga rooms, and all night snack bars: they never want you to leave. And if you do leave the building, you can stay engaged with Slack, IM, SMS, and every other possible communications tool so that you can never disconnect. They then layer over this with purported positive cultural messaging around how lucky, how fortunate you are to have landed this job — you were the special one selected out of thousands of candidates. Nobody else has it as good as we do here. Nobody else is as smart, as capable, as special as our team. Nobody else is building the best, most impactful solutions to solve humanity’s problems. If you fall off this treadmill, you will become irrelevant, you’ll be an outsider, a consumer instead of a builder, you’ll never be first on the list for the Singularity, when it happens. You’ll be at the shit end of the income inequality distribution funnel.

Given how similar toxic tech culture (and especially Silicon Valley tech culture) is to cult culture, leaving tech often requires something like cult-deprogramming techniques. We found the following steps especially useful for deprogramming ourselves from the cult of tech: recognizing our unconscious beliefs, experimenting with our identity, avoiding people who don’t support us, and making friendships that aren’t dependent on tech.

Recognize your unconscious beliefs

One cult-like aspect of toxic tech culture is a strong moral us-vs-them dichotomy: either you’re “in tech,” and you’re important and smart and hardworking and valuable, or you are not “in tech” because you are ignorant and untalented and lazy and irrelevant. (What are the boundaries of “in tech?” Well, the more privileged you are, the more likely people will define you as “in tech” – so be generous to yourself if you are part of a marginalized group. Or read more about the fractal nature of the gender binary and how it shows up in tech.)

We didn’t realize how strongly we’d unconsciously adopted this belief that people in tech were better than those who weren’t until we started to imagine ourselves leaving tech and felt a wave of self-judgment and fear. Early on, Valerie realized that she unconsciously thought of literally every single job other than software engineer as “for people who weren’t good enough to be a software engineer” – and that she thought this because other software engineers had been telling her that for her entire career. Even now, as Susan is launching a new education startup in Australia, she’s trying to be careful to not assume that just because people are doing things in a “non Silicon Valley, lean startup, agile way,” it’s automatically wrong. In reality, the best way in which to do things is probably not based on any particular dogma, but one that reflects a healthy balance of diverse perspectives and styles.

The first step to ridding yourself of the harmful belief that only people who are “in tech” or doing things in a “startup style” are good or smart or valuable is surfacing the unconscious belief to the conscious level, so you can respond to it. Recognize and name that belief when it comes up: when you think about leaving your job and feel fear, when you meet a new person and immediately lose interest when you learn their job is not “technical,” when you notice yourself trying to decide if someone is “technical enough.” Say to yourself, “I am experiencing the belief that only people I consider technical are valuable. This isn’t true. I believe everyone is valuable regardless of their job or level of technical knowledge.”

Experiment with your self-identity

The next step is to experiment with your own self-identity. Begin thinking of yourself as having different non-tech jobs or self-descriptions, and see what thoughts come up. React to those thoughts as though you were reacting to a friend you care about who was saying those things about them. Try to find positive things to think and say about your theoretical new job and new life. Think about people you know with that job and ask yourself if you would say negative things about their job to them. Some painful thoughts and experiences will come up during this time; aim to recognize them consciously and process them, rather than trying to stuff them down or make them go away.

When you live in Silicon Valley, it’s easy for your work life to consume 95% of your waking hours — this is how startups are designed, after all, with their endless perks and pressures to socialize within the tribe. Often times, promotions go hand in hand with socializing successfully within the startup scene. What can you do to carve out several hours a week just for yourself, and an alternate identity that isn’t defined by success within toxic tech culture? How do you make space for self care? For example, Susan began to take online writing courses, and found that the outlet of interacting with poets and fiction writers helped ground her.

If necessary, change the branding of your personal life. Stop wearing tech t-shirts and get shirts that reflect some other part of your self. Get a different print for your office wall. Move the tech books into one out-of-the-way shelf and donate any you don’t use right now (especially the ones that you have been planning to read but never got around to). Donate most of your conference schwag and stop accepting new schwag. Pack away the shelf of tech-themed tchotchkes or even (gasp) throw them away. Valerie went to a “burn party” on Ocean Beach, where everyone brought symbols of old jobs that they were happy to be free of and symbolically burned them in a beach bonfire. You might consider a similar ritual.

De-emphasize tech in your self-presentation. Change any usernames that reference your tech interests. Rewrite any online bios or descriptions to emphasize non-tech parts of your life. Start introducing yourself by talking about your non-tech hobbies and interests rather than your job. You might even try introducing yourself to new people as someone whose primary job isn’t tech. Valerie, who had been writing professionally for several years, started introducing herself as a writer at tech events in San Francisco. People who would have talked to her had she introduced herself as a Linux kernel developer would immediately turn away without a second word. Counterintuitively, this made her more determined to leave her job, when she saw how inconsiderate her colleagues were when she did not make use of her technical privilege.

Avoid unsupportive people

Identify any people in your life who are consistently unsupportive of you, or only supportive when you perform to their satisfaction, and reduce your emotional and financial dependence on them. If you have friends or idols who are unhelpfully critical or judgemental, take steps to see or hear from them less often. Don’t seek out their opinion and don’t stoke your admiration for them. This will be difficult the closer and more dependent you are on the person; if your spouse or manager is one of these people, you have our sympathy. For more on this dynamic and how to end it, see this series of posts about narcissism, co-narcissism, and tech.

Depressingly often, we especially seek the approval of people who give approval sparingly (think about the popularity of Dr. House, who is a total jerk). If you find yourself yearning for the approval of someone in tech who has been described as an “asshole,” this is a great time to stop. Some helpful tips to stop seeking the approval of an asshole: make a list of cruel things they’ve done, make a list of times they were wrong, stop reading their writing or listening to their talks, filter them out of your daily reading, talk to people who don’t know who that person is or care what they think, listen to people who have been hurt by them, and spend more time with people who are kind and nurturing.

At the same time, seek out and spend more time with people who are generally supportive of you, especially people who encourage experimentation and personal change. You may already have many of these people in your life, but don’t spend much time thinking about them because you can depend on their friendship and support. Reach out to them and renew your relationship.

Make friendships that don’t depend on tech

If your current social circle consists entirely of people who are fully bought into toxic tech culture, you may not have anyone in your life willing to support a career change. To help solve this, make friendships that aren’t dependent on your identity as a person in tech. The goal is to have a lot of friendships that aren’t dependent on your being in tech, so that if you decide to leave, you won’t lose all your friends at the same time as your job. Being friends with people who aren’t in tech will help you get an outside perspective on the kind of tech culture you are part of. It also helps you envision a future for yourself that doesn’t depend on being in toxic tech culture. You can still have lots of friends in tech, you are just aiming for diversity in your friendships.

One way to make this easier is to focus on your existing friendships that are “near tech,” such as people working in adjacent fields that sometimes attend tech conferences, but aren’t “in tech” themselves. Try also getting a new hobby, being more open to invitations to social events, and contacting old friends you’ve fallen out of touch with. Spend less time attending tech-related events, especially if you currently travel to a lot of tech conferences. It’s hard to start and maintain new local friendships when you’re constantly out of town or working overtime to prepare a talk for a conference. If you have a set of conferences you attend every year, it will feel scary the first time you miss one of them, but you’ll notice how much more time you have to spend with your local social circle.

Making friends outside of your familiar context (tech co-workers, tech conferences, online tech forums) is challenging for most people. If you learned how to socialize entirely in tech culture, you may also need to learn new norms and conventions (such as how to have a conversation that isn’t about competing to show who knows more about a subject). Both Valerie and Susan experienced this when we started trying to make friends outside of toxic tech culture: all we knew how to talk about was startups, technology, video games, science fiction, scientific research, and (ugh) libertarian economic philosophy. We discovered people outside toxic tech culture wanted to talk about a wider range of topics, and often in a less confrontational way. And after a lifetime of socialization to distrust and discount everyone who wasn’t a man, we learned to seek out and value friendships with women and non-binary people.

If making new friends sounds intimidating, we recommend checking out Captain Awkward’s practical advice on making friends. Making new friends takes work and willingness to be rejected, but you’ll thank yourself for it later on.

Step 2: Make room for a career change

If you are already in a place where you have the freedom to make a big career change, congratulations! But if changing careers seems impossibly hard right now, that’s okay too. You can make room for a career change while still working in tech. Even if you end up deciding to stay in your current job, you will likely appreciate the freedom and flexibility that you’ve opened up for yourself.

Find a career counselor

The most useful action you can take is to find a career counselor who is right for you, and be honest with them about your fears, goals, and desires. Finding a career counselor is a lot like finding a dentist or a therapist: ask your friends for recommendations, read online reviews, look for directories or lists, and make an appointment for a free first meeting. If your first meeting doesn’t click, go ahead and try another career counselor until you find someone you can work with. A good career counselor will get a comprehensive view of your entire life (including family and friends) and your goals (not just job-related goals), and give you concrete steps to take to bring you closer to your goals.

Sometimes a career counselor’s job is explaining to you how the job you want but thought was impossible to get is actually possible. Valerie started seeing a career counselor about two years before she quit her last job as a software engineer and co-founded a non-profit. It took her about five years to get everything she listed as part of what she thought was an unattainable dream job (except for the “view of the water from her office,” which she is still working on). All the rest of this section is a high-level generic version of the advice a good career counselor will give you.

Improve your financial situation

Many tech jobs pay relatively well, but many people in tech would still have a hard time switching careers tomorrow because they don’t have enough money saved or couldn’t take a pay cut (hello, overheated rental markets and supporting your extended family). Don’t assume you’ll have to take a pay cut if you leave tech or your particular part of toxic tech culture, but it gives you more flexibility if you don’t have to immediately start making the same amount of money in a different job.

Look for ways to change your lifestyle or your expectations in ways that let you save money or lower your bills. Status symbols and class markers will probably loom large here and it’s worth thinking about which things are most valuable to you and which ones you can let go. You might find it is a relief to no longer have an expensive car with all its attendant maintenance and worries and fear, but that you really value the weekly exercise class that makes you feel happier and more energetic the rest of the week. Making these changes will often be painful in the short term but pay off in the long term. Valerie ended up temporarily moving out of the San Francisco Bay Area to a cheaper area near her family, which let her save up money and spend less while she was planning a career change. She moved back to the Bay Area when she was established in her new career, into a smaller, cheaper apartment she could afford on her new salary. Today she is making more money than she ever did as a programmer.

Take stock of your transferrable skills

Figure out what you actually like to do and how much of that is transferrable to other fields or jobs. One way to do this is to look back at, say, the top seven projects you most enjoyed doing in your life, either for your job or as a volunteer. What skills were useful to you in getting those projects done? What parts of doing that project did you enjoy the most? For example, being able to quickly read and understand a lot of information is a transferrable skill that many people enjoy using. The ability to persuade people is another such skill, useful for selling gym memberships, convincing people to recycle more, teaching, getting funding, and many other jobs. Once you have an idea of what it is that you enjoy doing and that is transferrable to other jobs, you can figure out what jobs you might enjoy and would be reasonably good at from the beginning.

Think carefully before signing up for new education

This is not necessarily the time to start taking career-related classes or going back to university in a serious way! If you start taking classes without first figuring out what you enjoy, what your skills are, and what your goals are, you are likely to be wasting your time and money and making it more difficult to find your new career. We highly recommend working with a career counselor before spending serious money or time on new training or classes. However, it makes sense to take low-cost, low-time commitment classes to explore what you enjoy doing, open your mind to new possibilities, or meet new people. This might look like a pottery class at the local community college, learning to 3D print objects at the local hackerspace, or taking an online course in African history.

Recognise there are many different paths in tech

The good news about software finally eating the world is that there are now many ways in which you can work in and around technology, without having to be part of toxic tech culture. Every industry needs tech expertise, and nearly every country around the world is trying to cultivate its own startup ecosystem. Many of these are much saner, kinder places to work than the toxic tech culture you may currently be part of, and a few of these involve industries that are more inclusive and welcoming of marginalized groups. Some of our friends have left the tech industry to work in innovation or technology related jobs in government, education, advocacy, policy, and arts. Though there are no great industries, and no ideal safe places for marginalized groups nearly anywhere in the world, there are varying degrees of toxicity and you can seek out areas with less toxicity. Try not to be swayed by the narrative that the only tech worth doing is the tech that’s written about in the media or receiving significant VC funding.

Step 3: Take care of yourself

Since being part of toxic tech culture is harmful to you as a person, simply focusing on taking care of yourself will help you put tech culture in its proper perspective, leaving you the freedom to be part of tech or not as you choose.

Prioritize self-care

Self-care means doing things that are kind or nurturing for yourself, whatever that looks like for you. Being in toxic tech culture means that many things take priority over self-care: fixing that last bug instead of taking a walk, going to an evening work-related meetup instead of staying home and getting to sleep on time, flying to yet another tech conference instead of spending time with family and friends. For Susan, prioritizing self-care looked like taking a road trip up the Pacific Coast Highway for the weekend instead of going to an industry fundraiser, or eating lunch by herself with a book instead of meeting up with another VC. One of the few constants in life is that you will always be stuck with your own self – so take care of it!

Learn to say no and enforce boundaries

We found that we were saying yes to too many things. The tech industry depends on extracting free or low-cost labor from many people in different ways: everything from salaried employees working 60-hour weeks to writing and giving talks in your “free time” – all of which are considered required for your career to advance. Marginalized people in tech are often expected to work an additional second (third?) shift of diversity-related work for free: giving recruiting advice, mentoring other marginalized people, or providing free counseling to more privileged people.

FOMO (fear of missing out) plays an important role too. It’s hard to cut down on free work when you are wondering, what if this is the conference where you’ll meet the person who will get you that venture capital job you’ve always wanted? What if serving on this conference program committee will get you that promotion? What if going to lunch with this powerful person so they can “pick your brain” for free will get you a new job? Early in your tech career, these kinds of investments often pay off but later on they have diminishing returns. The first time you attend a conference in your field, you will probably meet dozens of people who are helpful to your career. The twentieth conference – not so much.

For Valerie, switching from a salaried job to hourly consulting taught her the value of her time and just how many hours she was spending on unpaid work for the Linux and file systems communities. She taped a note reading “JUST SAY NO” to the wall behind her computer, and then sent a bunch of emails quitting various unpaid responsibilities she had accumulated. A few months later, she found she had made too many commitments again, and had to send another round of emails backing out of commitments. It was painful and embarrassing, but not being constantly frazzled and stressed out was worth it.

When you start saying no to unpaid work, some people will be upset and push back. After all, they are used to getting free work from you which gives them some personal advantage, and many people won’t be happy with this. They may try to make you feel guilty, shame you, or threaten you. Learning to enforce boundaries in the face of opposition is an important part of this step. If this is hard for you, try reading books, practicing with a friend, or working with a therapist. If you are worried about making mistakes when going against external pressure, keep in mind that simply exercising some control over your life choices and career path will often increase your personal happiness, regardless of the outcome.

Care for your mental health

Let’s be brutally honest: toxic tech culture is highly abusive, and there’s an excellent chance you are suffering from depression, trauma, chronic stress, or other serious psychological difficulties. The solution that works for many people is to work with a good therapist or counselor. A good licensed therapist is literally an expert in helping people work through these problems. Even if you don’t think your issues reach the level of seriousness that requires a therapist, a good therapist can help you with processing guilt, fear, anxiety, or other emotions that come up around the idea of leaving toxic tech culture.

Whether or not you work with a therapist, you can make use of many other forms of mental health care: meditation, support groups, mindfulness apps, walking, self-help books, spending time in nature, various spiritual practices, doing exercises in workbooks, doing something creative, getting alone time, and many more. Try a bunch of different things and pick what works for you – everyone is different. For Susan, practicing yoga four times a week, meditating, and working in her vegetable garden instead of reading Hacker News gave her much needed perspective and space.

Finding a therapist can be intimidating for many people, which is why Valerie wrote “HOWTO therapy: what psychotherapy is, how to find a therapist, and when to fire your therapist.” It has some tips on getting low-cost or free therapy if that’s what you need. You can also read Tiffany Howard‘s list of free and low-cost mental health resources which covers a wide range of different options, including apps, peer support groups, and low-cost therapy.

Process your grief

Even if you are certain you want to leave toxic tech culture, actually leaving is a loss – if nothing else, a loss of what you thought your career and future would look like. Grief is an appropriate response to any major life change, even if it is for the better. Give yourself permission to grieve and be sad, for whatever it is that you are sad about. A few of the things we grieved for: the meritocracy we thought we were participating in, our vision for where our careers would be in five years, the good times we had with friends at conferences, a sense of being part of something exciting and world-changing, all the good people who left before us, our relationships with people we thought would support us but didn’t, and the people we were leaving behind to suffer without us.

Step 4: Give yourself time

If you do decide to leave toxic tech culture, give yourself a few years to do it, and many more years to process your feelings about it. Valerie decided to stop being a programmer two years before she actually quit her programming job, and then she worked as a file systems consultant on and off for five years after that. Seven years later, she finally feels mostly at peace about being driven out of her chosen career (though she still occasionally has nightmares about being at a Linux conference). Susan’s process of extricating herself from the most toxic parts of tech culture and reinvesting in her own identity and well being has taken many years as well. Her partner (who knows nothing about technology) and her two kids help her feel much more balanced. Because Susan grew up on the Internet and has been building in tech for 25 years, she feels like she’ll probably always be doing something in tech, or tech-related, but wants to use her knowledge and skills to do this on her own terms, and to use her hard won know-how to benefit other marginalized folks to successfully reshape the industry.

An invitation to share your story

We hope this post was helpful to other people thinking about leaving toxic tech culture. There is so much more to say on this topic, and so many more points of view we want to hear about. If you feel safe doing so, we would love to read your story of leaving toxic tech culture. And wherever you are in your journey, we see you and support you, even if you don’t feel safe sharing your story or thoughts.

Krebs on SecuritySome Basic Rules for Securing Your IoT Stuff

Most readers here have likely heard or read various prognostications about the impending doom from the proliferation of poorly-secured “Internet of Things” or IoT devices. Loosely defined as any gadget or gizmo that connects to the Internet but which most consumers probably wouldn’t begin to know how to secure, IoT encompasses everything from security cameras, routers and digital video recorders to printers, wearable devices and “smart” lightbulbs.

Throughout 2016 and 2017, attacks from massive botnets made up entirely of hacked IoT devices had many experts warning of a dire outlook for Internet security. But the future of IoT doesn’t have to be so bleak. Here’s a primer on minimizing the chances that your IoT things become a security liability for you or for the Internet at large.

-Rule #1: Avoid connecting your devices directly to the Internet — either without a firewall or in front of it, by poking holes in your firewall so you can access them remotely. Putting your devices in front of your firewall is generally a bad idea because many IoT products were simply not designed with security in mind and making these things accessible over the public Internet could invite attackers into your network. If you have a router, chances are it also comes with a built-in firewall. Keep your IoT devices behind the firewall as best you can.

-Rule #2: If you can, change the thing’s default credentials to a complex password that only you will know and can remember. And if you do happen to forget the password, it’s not the end of the world: Most devices have a recessed reset switch that can be used to restore the thing to its factory-default settings (and credentials). Here’s some advice on picking better ones.

I say “if you can,” at the beginning of Rule #2 because very often IoT devices — particularly security cameras and DVRs — are so poorly designed from a security perspective that even changing the default password to the thing’s built-in Web interface does nothing to prevent the things from being reachable and vulnerable once connected to the Internet.

Also, many of these devices are found to have hidden, undocumented “backdoor” accounts that attackers can use to remotely control the devices. That’s why Rule #1 is so important.

-Rule #3: Update the firmware. Hardware vendors sometimes make available security updates for the software that powers their consumer devices (known as “firmware”). It’s a good idea to visit the vendor’s Web site and check for any firmware updates before putting your IoT things to use, and to check back periodically for any new updates.

-Rule #4: Check the defaults, and make sure features you may not want or need like UPnP (Universal Plug and Play — which can easily poke holes in your firewall without you knowing it) — are disabled.

Want to know if something has poked a hole in your router’s firewall? Censys has a decent scanner that may give you clues about any cracks in your firewall. Browse to whatismyipaddress.com, then cut and paste the resulting address into the text box at Censys.io, select “IPv4 hosts” from the drop-down menu, and hit “search.”

If that sounds too complicated (or if your ISP’s addresses are on Censys’s blacklist), check out Steve Gibson’s Shields Up page, which features a point-and-click tool that can give you information about which network doorways or “ports” may be open or exposed on your network. A quick Internet search on exposed port number(s) can often yield useful results indicating which of your devices may have poked a hole.
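
If a Web-based scanner feels like too much overhead, a rough do-it-yourself probe is possible from any machine with Node.js installed. The sketch below is only a hedged example, not something from this article's recommendations: the host comes from the command line, the port list and two-second timeout are arbitrary choices, and results gathered from inside your own LAN may not match what an outside attacker actually sees, so run it from a different network for a more honest picture.

      // quick-port-check.js -- a rough sketch, not a substitute for Censys or Shields Up.
      // Probes a few ports commonly exposed by IoT gear on the host you name.
      const net = require('net');

      const host = process.argv[2];                     // e.g. node quick-port-check.js 203.0.113.7
      const ports = [21, 22, 23, 80, 443, 554, 8080];   // FTP, SSH, Telnet, HTTP(S), RTSP, alt-HTTP

      if (!host) {
        console.error('usage: node quick-port-check.js <host>');
        process.exit(1);
      }

      ports.forEach((port) => {
        const socket = net.createConnection({ host, port });
        socket.setTimeout(2000);                        // give up after two seconds
        socket.on('connect', () => {
          console.log(`port ${port}: OPEN`);
          socket.destroy();
        });
        socket.on('timeout', () => {
          console.log(`port ${port}: no response`);
          socket.destroy();
        });
        socket.on('error', () => {
          console.log(`port ${port}: closed or filtered`);
        });
      });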

If you run antivirus software on your computer, consider upgrading to a “network security” or “Internet security” version of these products, which ship with more full-featured software firewalls that can make it easier to block traffic going into and out of specific ports.

Alternatively, Glasswire is a useful tool that offers a full-featured firewall as well as the ability to tell which of your applications and devices are using the most bandwidth on your network. Glasswire recently came in handy to help me determine which application was using gigabytes worth of bandwidth each day (it turned out to be a version of Amazon Music’s software client that had a glitchy updater).

-Rule #5: Avoid IoT devices that advertise Peer-to-Peer (P2P) capabilities built-in. P2P IoT devices are notoriously difficult to secure, and research has repeatedly shown that they can be reachable even through a firewall remotely over the Internet because they’re configured to continuously find ways to connect to a global, shared network so that people can access them remotely. For examples of this, see previous stories here, including This is Why People Fear the Internet of Things, and Researchers Find Fresh Fodder for IoT Attack Cannons.

-Rule #6: Consider the cost. Bear in mind that when it comes to IoT devices, cheaper usually is not better. There is no direct correlation between price and security, but history has shown the devices that tend to be toward the lower end of the price ranges for their class tend to have the most vulnerabilities and backdoors, with the least amount of vendor upkeep or support.

In the wake of last month’s guilty pleas by several individuals who created Mirai — one of the biggest IoT malware threats ever — the U.S. Justice Department released a series of tips on securing IoT devices.

One final note: I realize that the people who probably need to be reading these tips the most likely won’t ever know they need to care enough to act on them. But at least by taking proactive steps, you can reduce the likelihood that your IoT things will contribute to the global IoT security problem.

CryptogramArticle from a Former Chinese PLA General on Cyber Sovereignty

Interesting article by Major General Hao Yeli, Chinese People's Liberation Army (ret.), a senior advisor at the China International Institute for Strategic Society, Vice President of China Institute for Innovation and Development Strategy, and the Chair of the Guanchao Cyber Forum.

Against the background of globalization and the internet era, the emerging cyber sovereignty concept calls for breaking through the limitations of physical space and avoiding misunderstandings based on perceptions of binary opposition. Reinforcing a cyberspace community with a common destiny, it reconciles the tension between exclusivity and transferability, leading to a comprehensive perspective. China insists on its cyber sovereignty, meanwhile, it transfers segments of its cyber sovereignty reasonably. China rightly attaches importance to its national security, meanwhile, it promotes international cooperation and open development.

China has never been opposed to multi-party governance when appropriate, but rejects the denial of government's proper role and responsibilities with respect to major issues. The multilateral and multiparty models are complementary rather than exclusive. Governments and multi-stakeholders can play different leading roles at the different levels of cyberspace.

In the internet era, the law of the jungle should give way to solidarity and shared responsibilities. Restricted connections should give way to openness and sharing. Intolerance should be replaced by understanding. And unilateral values should yield to respect for differences while recognizing the importance of diversity.

Worse Than FailureIn $BANK We Trust

During the few months after getting my BS and before starting my MS, I worked for a bank that held lots of securities - and gold - in trust for others. There was a massive vault with multiple layers of steel doors, iron door grates, security access cards, armed guards, and signature comparisons (live vs pre-registered). It was a bit unnerving to get in there, so deep below ground, but once in, it looked very much like the Fort Knox vault scene in Goldfinger.

Someone planning things on a whiteboard

At that point, PCs weren't yet available to the masses and I had very little exposure to mainframes. I had been hired as an assistant to one of their drones who had been assigned to find all of the paper-driven-changes that had gone awry and get their books up to date.

To this end, I spent about a month talking to everyone involved in taking a customer order to take or transfer ownership of something, and processing the ledger entries to reflect the transaction. From this, I drew a simple flow chart, listing each task, the person(s) responsible, and the possible decision tree at each point.

Then I went back to each person and asked them to list all the things that could and did go wrong with transaction processing at their junction in the flow.

What had been essentially straight-line processing with a few small decision branches turned out to be enough to fill a 30 foot long by 8 foot high wall of undesirable branches. This became absolutely unmanageable on physical paper, and I didn't know of any charting programs on the mainframe at that time, so I wrote the whole thing up with an index card at each junction. The "good" path was in green marker, and everything else was yellow (one level of "wrong") or red (wtf-level of "wrong").

By the time it was fully documented, the wall-o-index-cards had become a running joke. I invited the people (who had given me all of the information) in to view their problems in the larger context, and verify that the problems were accurately documented.

Then management was called in to view the true scope of their problems. The reason that the books were so snafu'd was that there were simply too many manual tasks that were being done incorrectly, cascading to deeply nested levels of errors.

Once we knew where to look, it became much easier to track transactions backward through the diagram to the last known valid junction and push them forward until they were both correct and current. A rather large contingent of analysts were then put onto this task to fix all of the transactions for all of the customers of the bank.

It was about the time that I was to leave and go back to school that they were talking about taking the sub-processes off the mainframe and distributing detailed step-by-step instructions for people to follow manually at each junction to ensure that the work flow proceeded properly. Obviously, more manual steps would reduce the chance for errors to creep in!

A few years later when I got my MS, I ran into one of the people that was still working there and discovered that the more-manual procedures had not only not cured the problem, but that entirely new avenues of problems had cropped up as a result.

[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Google AdsenseReceiving your payment via EFT (Electronic Funds Transfer)


Electronic Funds Transfer (EFT) is our fastest, most secure, and environmentally friendly payment method. It is available across most countries and you can check if this payment method is available to you here.

To use this payment method we first need to verify your bank account to ensure that you will receive your payment. This involves entering specific bank account information and receiving a small test deposit.

Some of our publishers found this process confusing and we want to guide you through it. Our latest video will guide you through adding EFT as a payment method, from start to finish.
If you didn’t receive your test deposit, you can watch this video to understand why. If you have more questions, visit our Help Center.
Posted by: The AdSense Support Team

,

LongNowStewart Brand Gives In-Depth and Personal Interview to Tim Ferriss

Tim Ferriss, who wrote The Four Hour Work Week and gave a Long Now talk on accelerated learning in 02011, recently interviewed Long Now co-founder Stewart Brand on his podcast, “The Tim Ferriss Show”. The interview is wide-ranging, in-depth, and among the most personal Brand has given to date. Over the course of nearly three hours, Brand touches on everything from the Whole Earth Catalog, why he gave up skydiving, how he deals with depression, his early experiences with psychedelics, the influence of Marshall McLuhan and Buckminster Fuller on his thinking, his recent CrossFit regimen, and the ongoing debate between artificial intelligence and intelligence augmentation. He also discusses the ideas and projects of The Long Now Foundation.

Brand frames The Long Now Foundation as a way to augment social intelligence:

The idea of the Long Now Foundation is to give encouragement and permission to society that is rewarded for thinking very, very rapidly, in business terms and, indeed, in scientific terms, of rapid turnaround, and getting inside the adversaries’ loop, move fast and break things, [to think long term]. Long term thinking might be proposing that some things you don’t want to break. They might involve moving slow, and steadily.

The Pace Layer diagram.

He introduces the pace layer diagram as a tool to approach global scale challenges:

What we’re proposing is there are a lot of problems, a lot of issues and a lot of quite wonderful things in that category of being big and slow moving and so I wound up with Brian Eno developing a pace layer diagram of civilization where there’s the fast moving parts like fashion and commerce, and then it goes slower when you get to infrastructure and then things move really slow in how governance changes, and then you go down to culture and language and religion move really slowly and then nature, the tectonic forces in climate change and so on move really big and slow. And what’s interesting about that is that the fast parts get all the attention, but the slow parts have all the power. And if you want to really deal with the powerful forces in the world, bear relation to seeing what can be done with appreciating and maybe helping adjust the big slow things.

Stewart Brand and ecosystem ecologist Elena Bennett during the Q&A of her November 02017 SALT Talk. Photo: Gary Wilson.

Ferriss admits that in the last few months he’s been pulled out of the current of long-term thinking by the “rip tide of noise,” and asks Brand for a “homework list” of SALT talks that can help provide him with perspective. Brand recommends Jared Diamond’s 02005 talk on How Societies Fail (And Sometimes Succeed), Matt Ridley’s 02011 talk on Deep Optimism, and Ian Morris’ 02011 talk on Why The West Rules (For Now).

Brand also discusses Revive & Restore’s efforts to bring back the Woolly Mammoth, and addresses the fear many have of meddling with complex systems through de-extinction.

Long-term thinking has figured prominently in Tim Ferriss’ podcast in recent months. In addition to his interview with Brand, Ferriss has also interviewed Long Now board member Kevin Kelly and Long Now speaker Tim O’Reilly.

Listen to the podcast in full here.

TED: TED debuts “Small Thing Big Idea” original video series on Facebook Watch

Today we’re debuting a new original video series on Facebook Watch called Small Thing Big Idea: Designs That Changed the World.

Each 3- to 4-minute weekly episode takes a brief but delightful look at the lasting genius of one everyday object – a pencil, for example, or a hoodie – and explains how it is so perfectly designed that it’s actually changed the world around it.

The series features some of design’s biggest names, including fashion designer Isaac Mizrahi, museum curator Paola Antonelli, and graphic designer Michael Bierut sharing their infectious obsession with good design.

To watch the first episode of Small Thing Big Idea (about the little-celebrated brilliance of subway maps!), tune in here, and check back every Tuesday for new episodes.

Cory Doctorow: The Man Who Sold the Moon, Part 02


Here’s part two of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.

MP3

Cryptogram: Jim Risen Writes about Reporting Government Secrets

Jim Risen writes a long and interesting article about his battles with the US government and the New York Times to report government secrets.

Worse Than Failure: Why Medical Insurance Is So Expensive


At the end of 2016, Ian S. accepted a contract position at a large medical conglomerate. He was joining a team of 6 developers on a project to automate what was normally a 10,000-hour manual process of cross-checking spreadsheets and data files. The end result would be a Django server offering a RESTful API and MySQL backend.

"You probably won't be doing anything much for the first week, maybe even the first month," Ian's interviewer informed him.

Ian ignored the red flag and accepted the offer. He needed the experience, and the job seemed reasonable enough. Besides, there were only 2 layers of management to deal with: his boss Daniel, who led the team, and his boss' boss Jim.

The office was in a lavish downtown location. The first thing Ian learned was that nobody had assigned desks. Each day, everyone had to clean out their desks and return their computers and peripherals to lockers. Because team members needed to work closely together, everyone claimed the same desk every day anyway. This policy only resulted in frustration and lost time.

As if that weren't bad enough, the computers were also heavily locked down. Ian had to go through the company's own "app store" to install anything. This was followed by an approval process that could take a few days based on how often Jim went through his pending approvals. The one exception was VMWare Workstation. Because this app cost money, it involved a 2-week approval process. In the middle of December, everyone was off on holiday, making it impossible for Ian's team to get approvals or talk to anyone helpful. Thus Ian's only contributions that month were a couple of Visio diagrams and a Django "hello world" that Daniel had requested. (It wasn't as if Daniel could check his work, though. He didn't know anything about Python, Django, REST, MySQL, MVC, or any other technology relevant to the project.)

The company provided Ian a copy of Agile for Dummies, which seemed ironic in retrospect, as the team was forced to spend the entire first week of January breaking the next 6 months into 2-week sprints. They weren't allowed to leave sprints empty, and had to allocate 36-40 hours each week. They could only make stories for features, so no time was penciled in for bug fixes or paying off technical debt. These stories were then chopped into meaningless pieces ("Part 1", "Part 2", etc.) so they'd fit into their arbitrary timelines.

"This is why medical insurance is so expensive", Daniel remarked at one point, either trying to lighten the mood or stave off his pending insanity.

Later in January, Ian arrived one morning to find the rest of his team standing around confused. Their project was now dead at the hands of a VP who'd had it in for Jim. The company had a tenure process, so the VP couldn't just fire Jim, but he could make his life miserable. He reassigned all of Jim's teams that he didn't outright terminate, exiled Jim to New Jersey, and gave him nothing to do but approve timesheets. Meanwhile, Daniel was told not to bother coming in again.

"Don't worry," the powers-that-be said. "We don't usually terminate people here."

Ian's gapingly empty schedule was filled with a completely different task: "shadowing" someone in another state by screen-sharing and watching them work. The main problem with this arrangement was that Ian's disciple was a systems analyst, not a programmer.

Come February, Ian's new team was also terminated.

"We don't have a culture of layoffs," the powers-that-be assured him.

They were still intent on shoving Ian into a systems analyst position despite his lack of the requisite experience. It was at that point that he gave up and moved on. He later heard that within a few months, the entire division had been fired.


Don Marti: Remove all the tracking widgets? Maybe not.

Good one from Mark Pilipczuk: Publisher Advice From a Buyer.

Remove all the tracking widgets from your site. That Facebook “Like” button only serves to exfiltrate your valuable data to an entity that doesn’t have your best interests at heart. If you’ve got a valuable audience, why would you want to help the ad tech industry which promises “I can find the same and bigger audience over here for $2 CPM, so don’t buy from the publisher?” Sticking your own head in the noose is never a good idea.

That advice makes sense for the Facebook "like button." That button is just a data shoplifter. The others, though? All those extra trackers come in as side effects of ad deals, and they're likely to be contractually required to make ads on the site saleable.

Yes, those trackers feed bots and data leakage, and yes, they're even terrible at fighting adfraud. Augustine Fou points out that Fraud filters don't work. "In some cases it's worse when filter is on."

So in an ideal world you would be able to pull all the third-party trackers, but as far as day-to-day operations go, user tracking is a Chesterton's Fence problem. What happens if a legit site unilaterally takes down the third-party trackers? All the targeted ad impressions that would have given that site a (small) payment end up going to bots.

So what can a site do? Understand that the real fix has to happen on the browser end, and nudge the users to either make their browsers less data-leaky, or switch to browsers that are leakage-resistant out of the box.

Start A/B testing some notifications to remind users to turn on tracking protection.

  • Can you get users who are already choosing "Do Not Track" to turn on real protection if you inform them that sites ignore their DNT choice?

  • If a user is running an ad blocker with a paid whitelisting scheme, can you inform them about it to get them to switch to a better tool, or at least add a second layer of protection that limits the damage that paid whitelisting can do?

  • When users visit privacy pages or opt-out of a marketing program, are they also willing to check their browser privacy settings?

Every site's audience is different. It's hard to know in advance how users will respond to different calls to action to turn up their privacy and create a win-win for legit sites and legit brands. We do know that users are concerned and confused about web advertising, and the good news is that the JavaScript needed to collect data and administer nudges is as easy to add as yet another tracker.
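
To give a sense of how little code such a nudge takes, here is a minimal server-side sketch in PHP, assuming a PHP-backed site; it checks the DNT request header rather than doing the detection in client-side JavaScript, and the function name, markup, and wording are invented for illustration:

// Hypothetical sketch, not from the article: detect visitors who send the
// "DNT: 1" header so the site can show them a tracking-protection nudge.
// PHP exposes that header (when the browser sends it) as $_SERVER['HTTP_DNT'].
function shouldShowDntNudge(array $server): bool
{
    return isset($server['HTTP_DNT']) && $server['HTTP_DNT'] === '1';
}

if (shouldShowDntNudge($_SERVER)) {
    // The wording, styling, and any A/B bucketing are up to the site.
    echo '<div class="privacy-nudge">'
       . 'You have Do Not Track switched on, but most ad networks ignore it. '
       . 'Consider enabling real tracking protection in your browser.'
       . '</div>';
}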

More on what sites can do that might be more effective than just removing trackers: What The Verge can do to help save web advertising

Planet Linux Australia: Russell Coker: More About the Thinkpad X301

Last month I blogged about the Thinkpad X301 I got from a rubbish pile [1]. One thing I didn’t realise when writing that post is that the X301 doesn’t have the keyboard light that the T420 has. With the T420 I could press the bottom left (FN) and top right (PgUp from memory) keys on the keyboard to turn on a light that illuminates the keyboard. This is really good for typing at night. While I can touch type, the small keyboard on a laptop makes it a little difficult, so the light is a feature I found useful. I wrote my review of the X301 before having to use it at night.

Another problem I noticed is that it crashes after running Memtest86+ for between 30 minutes and 4 hours. Memtest86+ doesn’t report any memory errors; the system just entirely locks up. I have 2 DIMMs for it (2G and 4G); I tried installing them in both orders, and I tried with each of them in the first slot (the system won’t boot if only the second slot is filled). Nothing changed. Now it is possible that this is something that might not happen in real use. For example, it might only happen due to heat when the system is under sustained load, which isn’t something I planned for with that laptop. I would discard a desktop system that had such a problem because I get lots of free desktop PCs, but I’m prepared to live with a laptop that has such a problem to avoid paying for another laptop.

Last night the laptop battery suddenly stopped working entirely. I had it unplugged for about 5 minutes when it abruptly went off (no flashing light to warn that the battery was low or anything). Now when I plug it in, the battery light flashes orange. A quick Google search indicates that this might mean that a fuse inside the battery pack has blown or that there might be a problem with the system board. Replacing the system board would cost much more than the laptop is worth, and even replacing the battery will probably cost more than it’s worth. I previously bought a Thinkpad T420 at auction because it didn’t cost much more than getting a new battery and PSU for a T61 [2], and I expect I can find a similar deal if I poll the auction sites for a while.

Using an X series Thinkpad has been a good experience and I’ll definitely consider an X series for my next laptop. My previous history of laptops involved going from ones with a small screen that were heavy and clunky (what was available with 90s technology and cost less than a car) to ones that had a large screen and were less clunky but still heavy. I hadn’t tried small and light with technology from the last decade; it’s something I could really get used to!

By today’s standards the X301 is deficient in a number of ways. It has 64G of storage (the same as my most recent phones) which isn’t much for software development, 6G of RAM which isn’t too bad but is small by today’s standards (16G is a common factory option nowadays), a 1440*900 screen which looks bad in any comparison (less than the last 3 phones I’ve owned), and a slow CPU. No two of these limits would be enough to make me consider replacing that laptop. Even with the possibility of crashing under load it was still a useful system. But the lack of a usable battery in combination with all the other issues makes the entire system unsuitable for my needs. I would be very happy to use a fast laptop with a high resolution screen even without a battery, but not with this list of issues.

Next week I’m going to a conference and there’s no possibility of buying a new laptop before then. So for a week when I need to use a laptop a lot I will have a sub-standard laptop.

It really sucks to have a laptop develop a problem that makes me want to replace it so soon after I got it.

Krebs on Security: Serial SWATter Tyler “SWAuTistic” Barriss Charged with Involuntary Manslaughter

Tyler Raj Barriss, a 25-year-old serial “swatter” whose phony emergency call to Kansas police last month triggered a fatal shooting, has been charged with involuntary manslaughter and faces up to eleven years in prison.

Tyler Raj Barriss, in an undated selfie.

Barriss’s online alias — “SWAuTistic” — is a nod to a dangerous hoax known as “swatting,” in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with potentially deadly force.

Barriss was arrested in Los Angeles this month for alerting authorities in Kansas to a fake hostage situation at an address in Wichita, Kansas on Dec. 28, 2017.

Police responding to the alert surrounded the home at the address Barriss provided and shot 28-year-old Andrew Finch as he emerged from the doorway of his mother’s home. Finch, a father of two, was unarmed and died shortly after being shot by police.

The officer who fired the shot that killed Finch has been identified as a seven-year veteran with the Wichita department. He has been placed on administrative leave pending an internal investigation.

Following his arrest, Barriss was extradited to a Wichita jail, where he had his first court appearance via video on Friday. The Los Angeles Times reports that Barriss was charged with involuntary manslaughter and could face up to 11 years and three months in prison if convicted.

The moment that police in Kansas fired a single shot that killed Andrew Finch (in doorway of his mother’s home).

Barriss also was charged with making a false alarm — a felony offense in Kansas. His bond was set at $500,000.

Sedgwick County District Attorney Marc Bennett told The LA Times that Barriss made the fake emergency call at the urging of several other individuals, and that authorities have identified other “potential suspects” who may also face criminal charges.

Barriss sought an interview with KrebsOnSecurity on Dec. 29, just hours after his hoax turned tragic. In that interview, Barriss said he routinely called in bomb threats and fake hostage situations across the country in exchange for money, and that he began doing it after his own home was swatted.

Barriss told KrebsOnSecurity that he felt bad about the incident, but that it wasn’t he who pulled the trigger. He also enthused about the rush that he got from evading police.

“Bomb threats are more fun and cooler than swats in my opinion and I should have just stuck to that,” he wrote in an instant message conversation with this author.

In a jailhouse interview Friday with local Wichita news station KWCH, Barriss said he feels “a little remorse for what happened.”

“I never intended for anyone to get shot and killed,” he reportedly told the news station. “I don’t think during any attempted swatting anyone’s intentions are for someone to get shot and killed.”

The Wichita Eagle reports that Barriss also has been charged in Calgary, Canada with public mischief, fraud and mischief for allegedly making a similar swatting call to authorities there. However, no one was hurt or killed in that incident.

Barriss was convicted in 2016 for calling in a bomb threat to an ABC affiliate in Los Angeles. He was sentenced to two years in prison for that stunt, but was released in January 2017.

Using his SWAuTistic alias, Barriss claimed credit for more than a hundred fake calls to authorities across the nation. In an exclusive story published here on Jan. 2, KrebsOnSecurity dissected several months’ worth of tweets from SWAuTistic’s account before those messages were deleted. In those tweets, SWAuTistic claimed responsibility for calling in bogus hostage situations and bomb threats at roughly 100 schools and at least 10 residences.

In his public tweets, SWAuTistic claimed credit for bomb threats against a convention center in Dallas and a high school in Florida, as well as an incident that disrupted a much-watched meeting at the U.S. Federal Communications Commission (FCC) in November.

But in private online messages shared by his online friends and acquaintances, SWAuTistic can be seen bragging about his escapades, claiming to have called in fake emergencies at approximately 100 schools and 10 homes.

The serial swatter known as “SWAuTistic” claimed in private conversations to have carried out swattings or bomb threats against 100 schools and 10 homes.


Krebs on Security: Canadian Police Charge Operator of Hacked Password Service Leakedsource.com

Canadian authorities have arrested and charged a 27-year-old Ontario man for allegedly selling billions of stolen passwords online through the now-defunct service Leakedsource.com.

The now-defunct Leakedsource service.

On Dec. 22, 2017, the Royal Canadian Mounted Police (RCMP) charged Jordan Evan Bloom of Thornhill, Ontario for trafficking in identity information, unauthorized use of a computer, mischief to data, and possession of property obtained by crime. Bloom is expected to make his first court appearance today.

According to a statement from the RCMP, “Project Adoration” began in 2016 when the RCMP learned that LeakedSource.com was being hosted by servers located in Quebec.

“This investigation is related to claims about a website operator alleged to have made hundreds of thousands of dollars selling personal information,” said Rafael Alvarado, the officer in charge of the RCMP Cybercrime Investigative Team. “The RCMP will continue to work diligently with our domestic and international law enforcement partners to prosecute online criminality.”

In January 2017, multiple news outlets reported that unspecified law enforcement officials had seized the servers for Leakedsource.com, perhaps the largest online collection of usernames and passwords leaked or stolen in some of the worst data breaches — including three billion credentials for accounts at top sites like LinkedIn and Myspace.

Jordan Evan Bloom. Photo: RCMP.

LeakedSource in October 2015 began selling access to passwords stolen in high-profile breaches. Enter any email address on the site’s search page and it would tell you if it had a password corresponding to that address. However, users had to select a payment plan before viewing any passwords.

The RCMP alleges that Jordan Evan Bloom was responsible for administering the LeakedSource.com website, and earned approximately $247,000 from trafficking identity information.

A February 2017 story here at KrebsOnSecurity examined clues that LeakedSource was administered by an individual in the United States.  Multiple sources suggested that one of the administrators of LeakedSource also was the admin of abusewith[dot]us, a site unabashedly dedicated to helping people hack email and online gaming accounts.

That story traced those clues back to a Michigan man who ultimately admitted to running Abusewith[dot]us, but who denied being the owner of LeakedSource.

The RCMP said it had help in the investigation from The Dutch National Police and the FBI. The FBI could not be immediately reached for comment.

LeakedSource was a curiosity to many, and for some journalists a potential source of news about new breaches. But unlike services such as BreachAlarm and HaveIBeenPwned.com, LeakedSource did nothing to validate users.

This fact, critics charged, showed that the proprietors of LeakedSource were purely interested in making money and helping others pillage accounts.

Since the demise of LeakedSource.com, multiple, competing new services have moved in to fill the void. These services — which are primarily useful because they expose when people re-use passwords across multiple accounts — are popular among those involved in a variety of cybercriminal activities, particularly account takeovers and email hacking.

Cryptogram: Fighting Ransomware

No More Ransom is a central repository of keys and applications for ransomware, so people can recover their data without paying. It's not complete, of course, but is pretty good against older strains of ransomware. The site is a joint effort by Europol, the Dutch police, Kaspersky, and McAfee.

Worse Than Failure: Representative Line: Tern Back

In the process of resolving a ticket, Pedro C found this representative line, which has nothing to do with the bug he was fixing, but was just something he couldn’t leave un-fixed:

$categories = (isset($categoryMap[$product['department']]) ?
                            (isset($categoryMap[$product['department']][$product['classification']])
                                        ?
                                    $categoryMap[$product['department']][$product['classification']]
                                        : NULL) : NULL);

Yes, the venerable ternary expression, used once again to obfuscate and confuse.

It took Pedro a few readings before he even understood what it did, and then it took him a few more readings to wonder about why anyone would solve the problem this way. Then, he fixed it.

$department = $product['department'];
$classification = $product['classification'];
$categories = NULL;
//ED: isset never triggers an error with an undefined expression, but simply returns false, because PHP
if( isset($categoryMap[$department][$classification]) ) { 
    $categories = $categoryMap[$department][$classification];
}

He submitted the change for code review, but it was kicked back. You see, Pedro had fixed the bug, which had a ticket associated with it. There were to be no code changes without a ticket from a business user, and since this cleanup wasn’t strictly related to the bug, he couldn’t submit it.
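
Incidentally, on PHP 7 and later the same lookup collapses into a single expression with the null coalescing operator; this is a minimal sketch of that alternative, not the change Pedro actually submitted:

// ?? behaves like the isset() guard: it falls back to NULL without a notice
// when either index is missing.
$categories = $categoryMap[$product['department']][$product['classification']] ?? NULL;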



Don Marti: Easy question with too many wrong answers

Content warning: Godwin's Law.

Here's a marketing question that should be easy.

How much of my brand's ad budget goes to Nazis?

Here's the right answer.

Zero.

And here's a guy who still seems to be having some trouble answering it: Dear Google (GOOG): Please stop using my advertising dollars to monetize hate speech.

If you're responsible for a brand and somewhere in the mysterious tubes of adtech your money is finding its way to Nazis, what is the right course of action?

One wrong answer is to write a "please help me" letter to a company that will just ignore it. That's just admitting to knowingly sending money to Nazis, which is clearly wrong.

Here's another wrong idea, from the upcoming IAB Annual Leadership Meeting session on "brand safety" (which is the nice, sanitary professional-sounding term for "trying not to sponsor Nazis, but not too hard.")

Threats to brand safety arise internally and externally, in your control and out of your control—and the stakes have never been higher. Learn how to minimize brand safety risks and maximize odds of survival when your brand takes a hit (spoiler alert: overreacting is as bad as underreacting). Best Buy and Starcom share best practices based on real-world encounters with brand safety issues.

Really, people? Overreacting is as bad as underreacting? The IAB wants you to come to a deluxe conference about how it's fine to send a few bucks to Nazis here and there as long as it keeps their whole adtech/adfraud gravy train running on time.

I disagree. If Best Buy is fine with (indirectly of course) paying the occasional Nazi so that the IAB companies can keep sending them valuable eyeballs from the cheapest possible sites, then I can shop elsewhere.

Any nationalist extremist movement has its obvious supporters, who wear the outfits and get the tattoos and go march in the streets and all that stuff, and also the quiet supporters, who come up with the money and make nice with the powers that be. The supporters who can keep it deniable.

Can I, as a potential customer from the outside, tell the difference between quiet Nazi supporters and people who are just bad at online advertising and end up supporting Nazis by mistake? Of course not. Do I care? Of course not. If you're not willing to put the basic "don't pay Nazis to do Nazi stuff" rule ahead of a few ad clicks, I don't want your brand anyway. And I'll make sure to install and use the tracking protection tools that help keep my good data away from bad sites.


Cryptogram: Friday Squid Blogging: Japanese "Dude Food" Includes Squid

This seems to be a trend.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological Images: Screen Capping the News Shows Different Stories for Different Folks

During a year marked by social and political turmoil, the media has found itself under scrutiny from politicians, academics, the general public, and increasingly self-reflexive journalists and editors. Fake news has entered our lexicon both as a form of political meddling from foreign powers and a dismissive insult directed towards any less-than-complimentary news coverage of the current administration.

Paying attention to where people are getting their news and what that news is telling them is an important step toward understanding our increasingly polarized society and our seeming inability to talk across political divides. The insight can also help us get at those important and oh-too-common questions of “how could they think that?!?” or “how could they support that politician?!?”

My interest in this topic was sparked a few months ago when I began paying attention to the top four stories and single video that magically appear whenever I swipe left on my iPhone. The stories compiled by the Apple News App provide a snapshot of what the dominant media sources consider the newsworthy happenings of the day. After paying an almost obsessive attention to my newsfeed for a few weeks—and increasingly annoying my friends and colleagues by telling them about the compelling patterns I was seeing—I started to take screenshots of the suggested news stories on a daily or twice daily basis. The images below were gathered over the past two months.

It is worth noting that the Apple News App adapts to a user’s interests to ensure that it provides “the stories you really care about.” To minimize this complicating factor I avoided clicking on any of the suggested stories and would occasionally verify that my news feed had remained neutral through comparing the stories with other iPhone users whenever possible.

Some of the differences were to be expected—people simply cannot get enough of celebrity pregnancies and royal weddings. The Washington Post, The New York Times, and CNN frequently feature stories that are critical of the current administration, and Fox News is generally supportive of President Trump and antagonistic towards enemies of the Republican Party.


However, there are two trends that I would like to highlight:

1) A significant number of Fox News headlines offer direct critiques of other media sites and their coverage of key news stories. Rather than offering an alternative reading of an event or counter-coverage, the feature story undercuts the journalistic work of other news sources by highlighting errors and making accusations of partisan motivations. In some cases, this even takes the form of attacking left-leaning celebrities as a proxy for a larger movement or idea. Neither of these tactics was employed by any of the other news sources during my observation period.


2) Fox News often featured coverage of vile, treacherous, or criminal acts committed by individuals as well as horrifying accidents. This type of story stood out both due to the high frequency and the juxtaposition to coverage of important political events of the time—murderous pigs next to Senate resignations and sexually predatory high school teachers next to massively destructive California wildfires. In a sense, Fox News is effectively cultivating an “asociological” imagination by shifting attention to the individual rather than larger political processes and structural changes. In addition, the repetitious coverage of the evil and devious certainly contributes to a fear-based society and confirms the general loss of morality and decline of conservative values.


It is worth noting that this move away from the big stories of the day also occurs through a surprising amount of celebrity coverage.


From the screen captures I have gathered over the past two months, it seems apparent that we are not just consuming different interpretations of the same event, but rather we are hearing different stories altogether. This effectively makes the conversation across political affiliation (or more importantly, news source affiliation) that much more difficult if not impossible.

I recommend taking time to look through the images that I have provided on your own. There are a number of patterns I did not discuss in this piece for the sake of brevity, and even more to be discovered. And, for those of us who spend our time at the front of the classroom, the screenshot approach could provide the basis for a great teaching activity in which the class collectively takes part in both gathering the data and conducting the analysis.

Kyle Green is an Assistant Professor of Sociology at Utica College. He is a proud TSP alumnus and the co-author /co-host of Give Methods a Chance.

(View original at https://thesocietypages.org/socimages)

Worse Than Failure: Error'd: Hamilton, Hamilton, Hamilton, Hamilton

"Good news! I can get my order shipped anywhere I want...So long as the city is named Hamilton," Daniel wrote.

 

"I might have forgotten my username, but at least I didn't forget to change the email template code in Production," writes Paul T.

 

Jamie M. wrote, "Using Lee Hecht Harrison's job search functionality is very meta."

 

"When I decided to go to Cineworld, wasn't sure what I wanted to watch," writes Andy P., "The trailer for 'System Restore' looks good, but it's got a bad rating on Rotten Tomatoes."

 

Mattias writes, "I get the feeling that Visual Studio really doesn't like this error."

 

"While traveling in Philadelphia's airport, I was pleased to see Macs competing in the dumb error category too," Ken L. writes.

 
