Planet Russell


Cryptogram: Yet Another Biometric: Ear Shape

This acoustic technology identifies individuals by their ear shapes. No information about either false positives or false negatives.

Worse Than Failure: Representative Line: The Truth About Comparisons

We often point to dates as an example of a data type so complicated that most developers can’t understand it. This is unfair, as pretty much every data type has weird quirks and edge cases which make for unexpected behaviors: floating point rounding, integer overflows and underflows, various types of string representation…

But file-not-founds excepted, people have to understand Booleans, right?

Of course not. We’ve all seen code like:

if (booleanFunction() == true) …

Or:

if (booleanValue == true) {
    return true;
} else {
    return false;
}

Someone doesn’t understand what booleans are for, or perhaps what the return statement is for. But Paul T sends us a representative line which constitutes a new twist on an old chestnut.

if ( Boolean.TRUE.equals(functionReturningBooleat(pa, isUpdateNetRevenue)) ) …

This is the second most Peak Java Way to test if a value is true. The Peak Java version, of course, would be to use an AbstractComparatorFactoryFactory<Boolean> to construct a Comparator instance with an injected EqualityComparison object. But this is pretty close: take the Boolean.TRUE constant, use the equals method every object inherits (which means transparently boxing the boolean returned from the function into an object type), and then execute the comparison.

The if (boolean == true) return true; pattern is my personal nails-on-the-chalkboard code block. It’s not awful, it just makes me cringe. Paul’s submission is like an angle-grinder on a chalkboard.


Planet Debian: Vincent Bernat: A more privacy-friendly blog

When I started this blog, I embraced some free services, like Disqus or Google Analytics. These services are quite invasive for users’ privacy. Over the years, I have tried to correct this to reach a point where I do not rely on any “privacy-hostile” services.

Analytics

Google Analytics is the ubiquitous way to get powerful analytics for free. It’s also a great way to provide data about your visitors to Google—also for free. There are self-hosted alternatives like Matomo—previously Piwik.

I opted for a simpler solution: no analytics. It also enables me to think that my blog attracts thousands of visitors every day.

Fonts

Google Fonts is a very popular font library and hosting service, which relies on the generic Google Privacy Policy. The google-webfonts-helper service makes it easy to self-host any font from Google Fonts. Moreover, with help from pyftsubset, I include only the characters used in this blog. The font files are lighter and more complete: no problem spelling “Antonín Dvořák”.
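
For illustration, here is roughly what that subsetting step can look like when driven from Python through fontTools (pyftsubset is its command-line front end). The file names and the WOFF2 flavor are placeholders, not this blog’s actual build configuration:

# Hypothetical subsetting step: keep only the characters listed in
# used-characters.txt and emit a compressed WOFF2 file.
# Requires fonttools (and brotli for WOFF2 output).
from fontTools import subset

subset.main([
    "font.ttf",                          # placeholder source font
    "--text-file=used-characters.txt",   # characters actually used on the site
    "--flavor=woff2",                    # compress the output as WOFF2
    "--output-file=fonts/font.woff2",
])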

Videos

  • Before: YouTube
  • After: self-hosted

Some articles are supported by a video (like “OPL2LPT: an AdLib sound card for the parallel port”). In the past, I was using YouTube, mostly because it was the only free platform with an option to disable ads. Streaming on-demand videos is usually deemed quite difficult. For example, if you just use the <video> tag, you may push too big a video to people with a slow connection. However, it is not that hard, thanks to hls.js, which makes it possible to deliver video sliced into segments available at different bitrates. Users with JavaScript disabled still get a progressive-download version of medium quality.

In “Self-hosted videos with HLS”, I explain this approach in more detail.

Comments

Disqus is a popular comment solution for static websites. They were recently acquired by Zeta Global, a marketing company, and their business model is supported only by advertisements. On the technical side, Disqus also loads several hundred kilobytes of resources. Therefore, many websites load Disqus on demand. That’s what I did. This doesn’t solve the privacy problem, and I had the feeling people were less eager to leave a comment if they had to perform an additional action.

For some time, I thought about implementing my own comment system around Atom feeds. Each page would get its own feed of comments. A piece of JavaScript would turn these feeds into HTML, and comments could still be read without JavaScript, thanks to the default rendering provided by browsers. People could also subscribe to these feeds: no need for mail notifications! The feeds would be served as static files and updated on new comments by a small piece of server-side code. Again, this could work without JavaScript.
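
To give an idea of how small that server-side piece could be, here is a rough sketch that regenerates a per-page Atom feed from a JSON store of comments. The feedgen library, the file layout and the URLs are my own assumptions; the idea above does not name any particular tooling:

# Hypothetical sketch: append a comment to a JSON store and regenerate the
# page's Atom feed as a static file (feedgen, paths and URLs are assumptions).
import json
from datetime import datetime, timezone
from feedgen.feed import FeedGenerator

def add_comment(page, author, text):
    store = "comments/%s.json" % page
    try:
        with open(store) as f:
            comments = json.load(f)
    except FileNotFoundError:
        comments = []
    comments.append({"author": author, "text": text,
                     "date": datetime.now(timezone.utc).isoformat()})
    with open(store, "w") as f:
        json.dump(comments, f)

    fg = FeedGenerator()
    fg.id("https://example.com/%s#comments" % page)
    fg.title("Comments on %s" % page)
    fg.updated(datetime.now(timezone.utc))
    for i, c in enumerate(comments):
        fe = fg.add_entry()
        fe.id("https://example.com/%s#comment-%d" % (page, i))
        fe.title("Comment by %s" % c["author"])
        fe.content(c["text"])
        fe.updated(c["date"])
    fg.atom_file("static/comments/%s.xml" % page)  # served as a static file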

Fowl Language Comics: “Day Planner” (the real reason why I didn't code a new comment system).

I still think this is a great idea. But I didn’t feel like developing and maintaining a new comment system. There are several self-hosted alternatives, notably Isso and Commento. Isso is a bit more featureful, including an (imperfect) import from Disqus. Both are struggling with maintenance and are trying to become sustainable with a hosted version. Commento is more privacy-friendly as it doesn’t use cookies at all. However, cookies from Isso are not essential and can be filtered with nginx:

proxy_hide_header Set-Cookie;
proxy_hide_header X-Set-Cookie;
proxy_ignore_headers Set-Cookie;

Isso currently has no mail notifications, but I have added an Atom feed for each comment thread.

Another option would have been to not provide comments anymore. However, I had some great contributions as comments in the past and I also think they can work as some kind of peer review for blog articles: they are a weak guarantee that the content is not totally wrong.

Search engine

A way to provide a search engine for a personal blog is to use a form pointing at a public search engine, like Google. That’s what I did. I also slapped some JavaScript on top to make it look like it was not Google.

The solution here is easy: switch to DuckDuckGo, which lets you customize the search experience a bit:

<form id="lf-search" action="https://duckduckgo.com/">
  <input type="hidden" name="kf" value="-1">
  <input type="hidden" name="kaf" value="1">
  <input type="hidden" name="k1" value="-1">
  <input type="hidden" name="sites" value="vincent.bernat.im/en">
  <input type="submit" value="">
  <input type="text" name="q" value="" autocomplete="off" aria-label="Search">
</form>

The JavaScript part is also removed, as DuckDuckGo doesn’t provide an API. As it is unlikely that more than three people will use the search engine in a year, it seems wise not to spend too much time on this non-essential feature.

Newsletter

  • Before: RSS feed
  • After: still RSS feed but also a MailChimp newsletter

Nowadays, RSS feeds are far less popular than they used to be. I am still baffled as to why a technical audience wouldn’t use RSS, but some readers prefer to receive updates by mail.

MailChimp is a common solution to send newsletters. It provides a simple integration with RSS feeds to trigger a mail each time new items are added to the feed. From a privacy point of view, MailChimp seems a good citizen: data collection is mainly limited to the amount needed to operate the service. Privacy-conscious users can still avoid this service and use the RSS feed.

Less JavaScript

  • Before: third-party JavaScript code
  • After: self-hosted JavaScript code

Many privacy-conscious people are disabling JavaScript or using extensions like uMatrix or NoScript. Except for comments, I was using JavaScript only for non-essential stuff:

For mathematical formulae, I have switched from MathJax to KaTeX. The latter is faster but also enables server-side rendering: it produces the same output regardless of the browser. Therefore, client-side JavaScript is not needed anymore.

For sidenotes, I have turned the JavaScript code doing the transformation into Python code, with pyquery. No more client-side JavaScript for this aspect either.
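
As an illustration of that kind of build-time transformation, here is a minimal pyquery sketch; the ".footnote"/"sidenote" markup is made up for the example and is not this blog’s actual structure:

# Hypothetical build-time transformation with pyquery instead of client-side
# JavaScript: turn footnote spans into <aside> sidenotes (markup is made up).
from pyquery import PyQuery as pq

def render_sidenotes(html):
    doc = pq(html)
    for i, note in enumerate(doc("span.footnote").items(), start=1):
        # Insert a sidenote right after each footnote reference.
        note.after('<aside class="sidenote" id="sn-%d">%s</aside>'
                   % (i, note.html()))
    return str(doc)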

The remaining code is still here but is self-hosted.

Memento: CSP

The HTTP Content-Security-Policy header controls the resources that a user agent is allowed to load for a given page. It is a safeguard and a memento for the external resources a site will use. Mine is moderately complex and shows what to expect from a privacy point of view [2]:

Content-Security-Policy:
  default-src 'self' blob:;
  script-src  'self' blob: https://d1g3mdmxf8zbo9.cloudfront.net/js/;
  object-src  'self' https://d1g3mdmxf8zbo9.cloudfront.net/images/;
  img-src     'self' data: https://d1g3mdmxf8zbo9.cloudfront.net/images/;
  frame-src   https://d1g3mdmxf8zbo9.cloudfront.net/images/;
  style-src   'self' 'unsafe-inline' https://d1g3mdmxf8zbo9.cloudfront.net/css/;
  font-src    'self' about: data: https://d1g3mdmxf8zbo9.cloudfront.net/fonts/;
  worker-src  blob:;
  media-src   'self' blob: https://luffy-video.sos-ch-dk-2.exo.io;
  connect-src 'self' https://luffy-video.sos-ch-dk-2.exo.io https://comments.luffy.cx;
  frame-ancestors 'none';
  block-all-mixed-content;

I am quite happy having been able to reach this result. 😊


  1. You may have noticed I am a footnote sicko and use them all the time for pointless stuff. ↩︎

  2. I don’t have an issue with using a CDN like CloudFront: it is a paid service and Amazon AWS is not in the business of tracking users. ↩︎

Planet Linux Australia: Michael Still: pyconau 2018 call for proposals now open


The pyconau call for proposals is now open, and runs until 28 May. I took my teenagers to pyconau last year and they greatly enjoyed it. I hadn’t been to a pyconau in ages, and ended up really enjoying thinking about things from topic areas I don’t normally need to think about. I think expanding one’s horizons is generally a good idea.

Should I propose something for this year? I am unsure. Some random ideas that immediately spring to mind:

  • something about privsep: I think a generalised way to make privileged calls in unprivileged code is quite interesting, especially in a language which is often used for systems management and integration tasks. That said, perhaps it’s too OpenStacky given how uninterested in OpenStack talks most python people seem to be.
  • nova-warts: for a long time my hobby has been cleaning up historical mistakes made in OpenStack Nova that won’t ever rate as a major feature change. What lessons can other projects learn from a well funded and heavily staffed project that still thought that exec() was a great way to do important work? There’s definitely an overlap with the privsep talk above, but this would be more general.
  • a talk about how I had to manage some code which only worked in python2, and some other code that only worked in python3 and in the end gave up on venvs and decided that Docker containers are like the ultimate venvs. That said, I suspect this is old hat and was obvious to everyone except me.
  • something else I haven’t thought of.

Anyways, I’m undecided. Comments welcome.

Also, here’s an image for this post. It’s the stone henge we found at Guerilla Bay last weekend. I assume it’s in frequent use for tiny tiny druids.



Planet Debian: Joachim Breitner: Verifying local definitions in Coq

TL;DR: We can give top-level names to local definitions, so that we can state and prove stuff about them without having to rewrite the programs.

When a Haskeller writes Coq

Imagine you teach Coq to a Haskell programmer, and give them the task of pairing each element in a list with its index. The Haskell programmer might have

addIndex :: [a] -> [(Integer, a)]
addIndex xs = go 0 xs
  where go n [] = []
        go n (x:xs) = (n,x) : go (n+1) xs

in mind and write this Gallina function (Gallina is the programming language of Coq):

Require Import Coq.Lists.List.
Import ListNotations.

Definition addIndex {a} (xs : list a) : list (nat * a) :=
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (S n) xs
                     end
  in go 0 xs.

Alternatively, imagine you are using hs-to-coq to mechanically convert the Haskell definition into Coq.

When a Coq user tries to verify that

Now your task is to prove something about this function, for example

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.

If you just have learned Coq, you will think “I can do this, this surely holds by induction on xs.” But if you have a bit more experience, you will already see a problem with this (if you do not see the problem yet, I encourage you to stop reading, copy the few lines above, and try to prove it).

The problem is that – as so often – you have to generalize the statement for the induction to go through. The theorem as stated says something about addIndex or, in other words, about go 0. But in the inductive case, you will need some information about go 1. In fact, you need a lemma like this:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.

But go is not a (top-level) function! How can we fix that?

  • We can try to awkwardly work around not having a name for go in our proofs, and essentially prove go_spec inside the proof of addIndex_spec. Might work in this small case, but does not scale up to larger proofs.
  • We can ask the programmer to avoid using local functions, and first define go as a top-level fixed point. But maybe we don’t want to bother them because of that. (Or, more likely, we are using hs-to-coq and that tool stubbornly tries to make the output as similar to the given Haskell code as possible.)
  • We can copy’n’paste the definition of go and make a separate, after-the-fact top-level definition. But this is not nice from a maintenance point of view: If the code changes, we have to update this copy.
  • Or we apply this one weird trick...

The weird trick

We can define go after-the-fact, but instead of copy’n’pasting the definition, we can use Coq’s tactics to define it. Here it goes:

Definition go {a} := ltac:(
  let e := eval cbv beta delta [addIndex] in (@addIndex a []) in
  (* idtac e; *)
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Let us take it apart:

  1. We define go, and give the parameters that go depends upon. Note that of the two parameters of addIndex, the definition of go only depends on (“captures”) a, but not xs.
  2. We do not give a type to go. We could, but that would again just be copying information that is already there.
  3. We define go via an ltac expression: Instead of a term we give a tactic that calculates the term.
  4. This tactic first binds e to the body of addIndex. To do so, it needs to pass enough arguments to addIndex. The concrete value of the list argument does not matter, so we pass []. The term @addIndex a [] is now evaluated with the evaluation flags eval cbv beta delta [addIndex], which says “unfold addIndex and do beta reduction, but nothing else”. In particular, we do not do zeta reduction, which would reduce the let go := … definition. (The user manual very briefly describes these flags.)
  5. The idtac e line can be used to peek at e, for example when the next tactic fails. We can use this to check that e really is of the form let fix go := … in ….
  6. The lazymatch line matches e against the pattern let x := ?def in _, and binds the definition of go to the name def.
  7. And the exact def tactic tells Coq to use def as the definition of go.

We now have defined go, of type go : forall {a}, nat -> list a -> list (nat * a), and can state and prove the auxiliary lemma:

Lemma go_spec:
  forall {a} n m k (xs : list a), k = n + m ->
  nth n (map fst (go m xs)) k = k.
Proof.
  intros ?????.
  revert n m k.
  induction xs; intros; destruct n; subst; simpl.
  1-3:reflexivity.
  apply IHxs; lia.
Qed.

When we come to the theorem about addIndex, we can play a little trick with fold to make the proof goal pretty:

Theorem addIndex_spec:
  forall {a} n (xs : list a),
  nth n (map fst (addIndex xs)) n = n.
Proof.
  intros.
  unfold addIndex.
  fold (@go a).
  (* goal here: nth n (map fst (go 0 xs)) n = n *)
  apply go_spec; lia.
Qed.

Multiple local definitions

The trick extends to multiple local definitions, but needs some extra considerations to ensure that terms are closed. A bit contrived, but let us assume that we have this function definition:

Definition addIndex' {a} (xs : list a) : list (nat * a) :=
  let inc := length xs in
  let fix go n xs := match xs with
                     | []    => []
                     | x::xs => (n, x) :: go (inc + n) xs
                     end in
  go 0 xs.

We now want to give names to inc and to go. I like to use a section to collect the common parameters, but that is not essential here. The trick above works flawlessly for `inc':

Section addIndex'.
Context {a} (xs : list a).

Definition inc := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

But if we try it for go', like this:

Definition go' := ltac:(
  let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
  lazymatch e with | let x := _ in let y := ?def in _ =>
    exact def
  end).

we get “Ltac variable def depends on pattern variable name x which is not bound in current context”. To fix this, we write

    exact (let x := inc in def)

instead. We have now defined both inc and go' and can use them in proofs about addIndex':

Theorem addIndex_spec':
  forall n, nth n (map fst (addIndex' xs)) n = n * length xs.
Proof.
  intros.
  unfold addIndex'.
  fold inc go'. (* order matters! *)
  (* goal here: nth n (map fst (go' 0 xs)) n = n * inc *)

Reaching into a match

This trick also works when the local definition we care about is inside a match statement. Consider:

Definition addIndex_weird {a} (oxs : option (list a))
  := match oxs with
     | None => []
     | Some xs =>
       let fix go n xs := match xs with
                          | []    => []
                          | x::xs => (n, x) :: go (S n) xs
                          end in
       go 0 xs
     end.

Definition go_weird {a} := ltac:(
  let e := eval cbv beta match delta [addIndex_weird]
           in (@addIndex_weird a (Some [])) in
  idtac e;
  lazymatch e with | let x := ?def in _ =>
    exact def
  end).

Note the addition of match to the list of evaluation flags passed to cbv.

Conclusion

While local definitions are idiomatic in Haskell (in particular thanks to the where syntax), they are usually avoided in Coq, because they get in the way of verification. If, for some reason, one is stuck with such definitions, then this trick presents a reasonable way out.

Planet Debian: Sam Hartman: Shaving the DJ Software Yak

I'm getting married this June. (For the Debian folks, the Ghillie shirt and vest just arrived to go with the kilt. My thanks go out to the lunch table at Debconf that made that suggestion. Formal Scottish dress would not have fit, but I wanted something to go with the kilt.)
Music and dance have been an important part of my spiritual journey. Dance has also been an important part of the best weddings I attended. So I wanted dance to be a special part of our celebration. I put together a play list for my 40th birthday; it was special and helped set the mood for the event. Unfortunately, as I started looking at what I wanted to play for the wedding, I realized I needed to do better. Some of the songs were too long. Some of them really felt like they needed a transition. I wanted a continuous mix, not a play list.
I'm blind. I certainly could use two turntables and a mixer--or at least I could learn how to do so. However, I'm a kid of the electronic generation, and that's not my style. So, I started looking at DJ software. With one exception, everything I found was too visual for me to use.
I've used Nama before to put together a mashup. It seemed like Nama offered almost everything I needed. Unfortunately, there were a couple of problems. Nama would not be a great fit for a live mix: you cannot add tracks or effects into the chain without restarting the engine. I didn't strictly need live production for this project, but I wanted to look at that long-term. At the time of my analysis, I thought that Nama didn't support tempo-scaling tracks. For that reason, I decided I was going to have to write my own software. Later I learned that you can adjust the sample rate on a track import, which is more or less good enough for tempo scaling. By that point I already had working code.
I wanted a command line interface. I wanted BPM and key detection; it looked like Mixxx was open-source software with good support for that. Based on my previous work, I chose Csound as the realtime sound backend.

Where I got


I'm proud of what I came up with. I managed to stay focused on my art rather than falling into the trap of focusing too much on the software. I got something that allows me to quickly explore the music I want to mix, but also managed to accomplish my goal and come up with a mix that I'm really happy with. As a result, at the current time, my software is probably only useful to me. However, it is available under the GPL V3. If anyone else would be interested in hacking on it, I'd be happy to spend some time documenting and working with them.
Here's a basic description of the approach.

  • You are editing a timeline that stores the transformations necessary to turn the input tracks into the output mix.
  • There are 10 mixer stereo channels that will be mixed down into a master output.
  • There are an unlimited number of input tracks. Each track is associated with a given mixer channel. Tracks are placed on the timeline at a given start point (starting from a given cue point in the track) and run for a given length. During this time, the track is mixed into the mixer channel. Associated with each track is a gain (volume) that controls how the track is mixed into the mixer channel. Volumes are constant per track.
  • Between the mixer channel and the master is a volume fader and an effect chain.
  • Effects are written in Csound code. Being able to easily write Csound effects is one of the things that made me more interested in writing my own than in looking at adding better tempo scaling/BPM detection to Nama.
  • Associated with each effect are three sliders that give inputs to the effect. Changing the number of mixer channels and effect sliders is an easy code change. However it'd be somewhat tricky to be able to change that dynamically during a performance. Effects also get an arbitrary number of constant parameters.
  • Sliders and volume faders can be manipulated on the timeline. You can ask for a linear change from the current value to a target over a given duration starting at some point. So I can ask for the amplitude to move from 0 to 127 at the point where I want to mix in a track, say across 2 seconds. You express slider manipulation in terms of the global timeline. However, it is stored relative to the start of the track. That is, if you have a track fade out at a particular musical phrase, the fade out will stay with that phrase even if you move the cue point of the track or move where the track starts on the timeline. This is not what you want all the time, but my experience with Nama (which does it using global time) suggests that I at least save a lot of time with this approach. A small sketch of this idea follows this list.
  • There is a global effect chain between the output of the final mixer and the master output. This allows you to apply distortion, equalization or compression to the mix as a whole. The sliders for effects on this effect chain are against global time not a specific track.
  • There's a hard-coded compressor on the final output. I'm lazy and I needed it before I had effect chains.
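
To make the track-relative storage concrete, here is a small illustrative sketch; this is not the project's actual code, and all the names are invented:

# Illustrative only: a ramp requested in global timeline time is stored
# relative to the track's start, so it follows the track when it is moved.
from dataclasses import dataclass

@dataclass
class Ramp:
    offset: float        # seconds after the track's start on the timeline
    duration: float      # seconds
    start_value: float
    target_value: float

def make_ramp(global_time, track_start, duration, start_value, target_value):
    # Convert a request expressed in global time into track-relative storage.
    return Ramp(global_time - track_start, duration, start_value, target_value)

def ramp_value(ramp, track_start, global_time):
    # Linearly interpolate the slider value at a given global time.
    t = global_time - track_start - ramp.offset
    if t <= 0:
        return ramp.start_value
    if t >= ramp.duration:
        return ramp.target_value
    return ramp.start_value + (
        ramp.target_value - ramp.start_value) * t / ramp.duration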

There's some preliminary support for a MIDI controller I was given, but I realized that coding that wasn't actually going to save me time, so I left it. This was a really fun project. I managed to tell a story for my wedding that is really important for me to tell. I learned a lot about what goes into putting together a mix. It's amazing how many decisions go into even simple things like a pan slider. It was also great that there is free software for me to build on top of. I got to focus on the part of the problem I wanted to solve. I was able to reuse components for the realtime sound work and for analysis like BPM detection.

Planet Debian: Wouter Verhelst: host detection in bash

There are many tools to implement this, and yeah, this is not the fastest. But the advantage is that you don't need extra tools beyond "bash" and "ping"...

# Probe each address in 192.168.0.0/24 with a single ping and a 1 second timeout.
for i in $(seq 1 254); do
  if ping -W 1 -c 1 192.168.0.$i > /dev/null 2>&1; then
    HOST[$i]=1
  fi
done
# The array indices that got set are the final octets of the hosts that answered.
echo ${!HOST[@]}

will give you the host part of the addresses of the machines that are live on the given network...

Planet Linux Australia: Michael Still: Caliban’s War


This is the second book in the Leviathan Wakes series by James SA Corey. Just as good as the first, this is a story about how much a father loves his daughter, moral choices, and politics — just as much as it is the continuation of the story arc around the alien visitor. I haven’t seen this far in the Netflix series, but I sure hope they get this right, because it’s a very good story so far.

Caliban's War
James S. A. Corey
Fiction
Orbit Books
April 30, 2013
624 pages

For someone who didn't intend to wreck the solar system's fragile balance of power, Jim Holden did a pretty good job of it. While Earth and Mars have stopped shooting each other, the core alliance is shattered. The outer planets and the Belt are uncertain in their new - possibly temporary - autonomy. Then, on one of Jupiter's moons, a single super-soldier attacks, slaughtering soldiers of Earth and Mars indiscriminately and reigniting the war. The race is on to discover whether this is the vanguard of an alien army, or if the danger lies closer to home.


Planet Debian: Norbert Preining: Specification and Verification of Software with CafeOBJ – Part 2 – Basics of CafeOBJ

This blog continues Part 1 of our series on software specification and verification with CafeOBJ.

Availability of CafeOBJ

CafeOBJ can be obtained from the website cafeobj.org. The site provides binary packages built for Linux, MacOS, and Windows, as well as the source code for those who want to build the interpreter themselves. Other services provided are tutorial pages and all kinds of documentation (reference manual, wiki, user manual).

What is CafeOBJ

Let us recall some of the items mentioned in the previous blog. CafeOBJ is an algebraic specification language, as well as a verification and programming language. This means that specifications written in CafeOBJ can be verified right within the system, without the need to resort to external utilities.

As an algebraic specification language it is built upon the logical foundation formed by the following items: (i) order sorted algebras, (ii) co-algebras (or hidden algebras), and (iii) rewriting logic. As a verification and programming language it provides the user with an executable semantics of the equational theory, a rewrite engine that supports conditional, order-sorted, AC (associative and commutative) rewriting, a sophisticated module system including parametrization and inheritance, and, last but not least, a completely free syntax.

The algebraic semantics can be represented by the CafeOBJ cube, exhibiting the various extensions starting from many sorted algebras:

For the algebraically inclined audience we just mention that all the systems and morphisms are formalized as institutions and institution morphisms.

Let us now go through some of the logical foundations of CafeOBJ:

Term rewriting

Term rewriting is concerned with systems of rules to replace certain parts of an expression with another expression. A very simple example of a rewrite system is:

  append(nil, ys)    → ys
  append(x : xs, ys) → x : append(xs, ys)

Here the first rule says that an expression append(nil, ys), where ys can be any list, can be rewritten to ys itself. And the second rule states how to rewrite an expression when the first list is not empty.

A typical reduction sequence – that is, an application of these rules – would be:

append(1 ∶ 2 ∶ 3 ∶ nil, 4 ∶ 5 ∶ nil) → 1 ∶ append(2 ∶ 3 ∶ nil, 4 ∶ 5 ∶ nil)
                                     → 1 ∶ 2 ∶ append(3 ∶ nil, 4 ∶ 5 ∶ nil)
                                     → 1 ∶ 2 ∶ 3 ∶ append(nil, 4 ∶ 5 ∶ nil)
                                     → 1 ∶ 2 ∶ 3 ∶ 4 ∶ 5 ∶ nil
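
The same two rules can also be written down directly as a recursive function; here is a small Python sketch (using Python lists in place of the cons-style lists above) that applies whichever rule matches at each step:

# The append rewrite rules as a recursive Python function (illustrative sketch).
def append(xs, ys):
    if not xs:                    # append(nil, ys)    -> ys
        return ys
    x, rest = xs[0], xs[1:]       # append(x : xs, ys) -> x : append(xs, ys)
    return [x] + append(rest, ys)

print(append([1, 2, 3], [4, 5]))  # [1, 2, 3, 4, 5]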

Term rewriting is used in two different ways in CafeOBJ: first, as the execution engine that considers equations as directed rules and uses them to reduce expressions; and at the same time, rewriting logic is included in the language specification, allowing for reasoning about transitions.

Order sorted algebras

Most algebras we learn in school or even at the university are single sorted, that is all objects in the algebra are of the same type (e.g., integers, reals, function space). In this case an operation is determined by its arity, that is the number of arguments.

In the many sorted and order sorted case the simple number of arguments of a function is not enough: we need to know, for each argument, its type, and also the type of the value the function returns. Thus, we assume a signature (S,F) is given, such that S is a set of sorts, or simply sort names, and F is a set of operations f: s1, s2, ..., sk → s where all the s are sorts.

As an example assume we have two sorts, String and Int, one possible function would be

  substr: String, Int, Int → String

which would tell us that the function substr takes three arguments, the first of sort String, the others of sort Int, and it returns again a value of sort String.

In case the sorts are (partially) ordered, we call the corresponding algebra an order sorted algebra.

Using order sorted algebras has several advantages compared to other algebraic systems:

  • polymorphism (parametric, subsort) and overloading are natural consequences of ordered sorts;
  • error definition and handling via subsorts;
  • multiple inheritance;
  • rigorous model-theoretic semantics based on institutions;
  • operational semantics that executes equations as rewrite rules (executable specifications).

We want to close this blog post with a short history of CafeOBJ and a short sample list of specifications that have been carried out with CafeOBJ.

History, background, relatives, and examples

CafeOBJ, as an algebraic specification language based on equational theory, has its roots in CLEAR (Burstall and Goguen, early 70s) and the OBJ language (Goguen et al, 70-80s SRI and UC San Diego). The successor OBJ2 was developed by Futatsugi, Goguen, Jouannaud, and Meseguer at UC San Diego in 1984, based on Horn logic, sub-sorts, and parametrized modules.

The developers then moved on to different languages or extensions: Meseguer started to develop Maude, Jouannaud moved on to develop Coq, an unrelated language, and Futatsugi built upon the OBJ3 language by Kirchner et al to create CafeOBJ.

Example specifications carried out in CafeOBJ are authentication protocols (NSLPK, STS, Otway-Rees), key secrecy PACE protocol (German passport), e-commerce protocols (SET), real time algorithms (Fischer’s mutual exclusion protocol), UML semantics, formal fault tree analysis.


In the next blog post we will make first steps with the CafeOBJ interpreter and see how to define modules, the basic building blocks, and carry out simple computations.


Planet Debian: Benjamin Mako Hill: Mako Hate

I recently discovered a prolific and sustained community of meme-makers on Tumblr dedicated to expressing their strong dislike for “Mako.”

Two tags with examples are #mako hate and #anti mako but there are many others.

“even Mako hates Mako…” meme. Found on this forum thread.

I’ve also discovered Tumblrs entirely dedicated to the topic!

For example, Let’s Roast Mako describes itself as “A place to beat up Mako. In peace. It’s an inspiration to everyone!”

The second is the Fuck Mako Blog, which describes itself with a series of tag-lines including “Mako can fuck right off and we’re really not sorry about that,” “Welcome aboard the SS Fuck-Mako;” and “Because Mako is unnecessary.” Sub-pages of the site include:

I’ll admit I’m a little disquieted.

Planet Linux Australia: Pia Waugh: Exploring change and how to scale it

Over the past decade I have been involved in several efforts trying to make governments better. A key challenge I repeatedly see is people trying to change things without an idea of what they are trying to change to, trying to fix individual problems (a deficit view) rather than recognising and fixing the systems that created the problems in the first place. So you end up getting a lot of symptomatic relief and iterative improvements of antiquated paradigms without necessarily getting transformation of the systems that generated the problems. A lot of the effort is put into applying traditional models of working which often result in the same old results, so we also need to consider new ways to work, not just what needs to be done.

With life getting faster and (arguably) exponentially more complicated, we need to take a whole of system view if we are to improve ‘the system’ for people. People sometimes balk when I say this thinking it too hard, too big or too embedded. But we made this, we can remake it, and if it isn’t working for us, we need to adapt like we always have.

I also see a lot of slogans used without the nuanced discussion they invite. Such (often ideological) assumptions can subtly play out without evidence, discussion or agreement on common purpose. For instance, whenever people say smaller or bigger government I try to ask what they think the role of government is, to have a discussion. Size is assumed to correlate to services, productivity, or waste depending on your view, but shouldn’t we talk about what the public service should do, and then the size is whatever is appropriate to do what is needed? People don’t talk about a bigger or smaller jacket or shoes, they get the right one for their needs and the size can change over time as the need changes. Indeed, perhaps the public service of the future could be a dramatically different workforce comprised of a smaller group of professional public servants complemented with a large demographically representative group of part time citizens doing their self nominated and paid "civic duty year of service" as a form of participatory democracy, which would bring new skills and perspectives into governance, policy and programs.

We need urgently to think about the big picture, to collectively talk about the 50 or 100 year view for society, and only then can we confidently plan and transform the structures, roles, programs and approaches around us. This doesn’t mean we have to all agree to all things, but we do need to identify the common scaffolding upon which we can all build.

This blog post challenges you to think systemically, critically and practically about five things:

    • What future do you want? Not what could be a bit better, or what the next few years might hold, or how that shiny new toy you have could solve the world’s problems (policy innovation, data, blockchain, genomics or any tool or method). What is the future you want to work towards, and what does good look like? Forget about your particular passion or area of interest for a moment. What does your better life look like for all people, not just people like you?
    • What do we need to get there? What concepts, cultural values, paradigm, assumptions should we take with us and what should we leave behind? What new tools do we need and how do we collectively design where we are going?
    • What is the role of gov, academia, other sectors and people in that future? If we could create a better collective understanding of our roles in society and some of the future ideals we are heading towards, then we would see a natural convergence of effort, goals and strategy across the community.
    • What will you do today? Seriously. Are you differentiating between symptomatic relief and causal factors? Are you perpetuating the status quo or challenging it? Are you being critically aware of your bias, of the system around you, of the people affected by your work? Are you reaching out to collaborate with others outside your team, outside your organisation and outside your comfort zone? Are you finding natural partners in what you are doing, and are you differentiating between activities worthy of collaboration versus activities only of value to you (the former being ripe for collaboration and the latter less so).
    • How do we scale change? I believe we need to consider how to best scale “innovation” and “transformation”. Scaling innovation is about scaling how we do things differently, such as the ability to take a more agile, experimental, evidence based, creative and collaborative approach to the design, delivery and continuous improvement of stuff, be it policy, legislation or services. Scaling transformation is about how we create systemic and structural change that naturally drives and motivates better societal outcomes. Each without the other is not sustainable or practical.

How to scale innovation and transformation?

I’ll focus the rest of this post on the question of scaling. I wrote this in the context of scaling innovation and transformation in government, but it applies to any large system. I also believe that empowering people is the greatest way to scale anything.

  • I’ll firstly say that openness is key to scaling everything. It is how we influence the system, how we inspire and enable people to individually engage with and take responsibility for better outcomes and innovate at a grassroots level. It is how we ensure our work is evidence based, better informed and better tested, through public peer review. Being open not only influences the entire public service, but the rest of the economy and society. It is how we build trust, improve collaboration, send indicators to vendors and influence academics. Working openly, open sourcing our research and code, being public about projects that would benefit from collaboration, and sharing most of what we do (because most of the work of the public service is not secretive by any stretch) is one of the greatest tools in trying to scale our work, our influence and our impact. Openness is also the best way to ensure both a better supply chain as well as a better demand for things that are demonstrably better.

A quick side note to those who argue that transparency isn’t an answer because not all people have the tools to understand data/information/etc. to hold others accountable: it doesn’t mean you don’t do transparency at all. There will always be groups or people naturally motivated to hold you to account, whether it is your competitors, clients, the media, citizens or even your own staff. Transparency is partly about accountability and partly about reinforcing a natural motivation to do the right thing.

Scaling innovation – some ideas:

  • The necessity of neutral, safe, well resourced and collaborative sandpits is critical for agencies to quickly test and experiment outside the limitations of their agencies (technical, structural, political, functional and procurement). Such places should be engaged with the sectors around them. Neutral spaces that take a systems view also start to normalise a systems view across agencies in their other work, which has huge ramifications for transformation as well as innovation.
  • Seeking and sharing – sharing knowledge, reusable systems/code, research, infrastructure and basically making it easier for people to build on the shoulders of each other rather than every single team starting from scratch every single time. We already have some communities of practice but we need to prioritise sharing things people can actually use and apply in their work. We also need to extend this approach across sectors to raise all boats. Imagine if there was a broad commons across all society to share and benefit from each others efforts. We’ve seen the success and benefits of Open Source Software, of Wikipedia, of the Data Commons project in New Zealand, and yet we keep building sector or organisational silos for things that could be public assets for public good.
  • Require user research in budget bids – this would require agencies to do user research before bidding for money, which would create an incentive to build things people actually need which would drive both a user centred approach to programs and would also drive innovation as necessary to shift from current practices :) Treasury would require user research experts and a user research hub to contrast and compare over time.
  • Staff mobility – people should be supported to move around departments and business units to get different experiences and to share and learn. Not everyone will want to, but when people stay in the same job for 20 years, it can be harder to engage in new thinking. Exchange programs are good but again, if the outcomes and lessons are not broadly shared, then they are linear in impact (individuals) rather than scalable (beyond the individuals).
  • Support operational leadership – not everyone wants to be a leader, disruptor, maker, innovator or intrapreneur. We need to have a program to support such people in the context of operational leadership that isn’t reliant upon their managers putting them forward or approving. Even just recognising leadership as something that doesn’t happen exclusively in senior management would be a huge cultural shift. Many managers will naturally want to keep great people to themselves which can become stifling and eventually we lose them. When people can work on meaningful great stuff, they stay in the public service.
  • A public ‘Innovation Hub’ – if we had a simple public platform for people to register projects that they want to collaborate on, from any sector, we could stimulate and support innovation across the public sector (things for which collaboration could help would be surfaced, publicly visible, and inviting of others to engage in) so it would support and encourage innovation across government, but also provides a good pipeline for investment as well as a way to stimulate and support real collaboration across sectors, which is substantially lacking at the moment.
  • Emerging tech and big vision guidance - we need a team, I suggest cross agency and cross sector, of operational people who keep their fingers on the pulse of technology to create ongoing guidance for New Zealand on emerging technologies, trends and ideas that anyone can draw from. For government, this would help agencies engage constructively with new opportunities rather than no one ever having time or motivation until emerging technologies come crashing down as urgent change programs. This could be captured on a constantly updating toolkit with distributed authorship to keep it real.

Scaling transformation – some ideas:

  • Convergence of effort across sectors – right now in many countries every organisation and to a lesser degree, many sectors, are diverging on their purpose and efforts because there is no shared vision to converge on. We have myriad strategies, papers, guidance, but no overarching vision. If there were an overarching vision for New Zealand Aotearoa for instance, co-developed with all sectors and the community, one that looks at what sort of society we want into the future and what role different entities have in achieving that ends, then we would have the possibility of natural convergence on effort and strategy.
    • Obviously when you have a cohesive vision, then you can align all your organisational and other strategies to that vision, so our (government) guidance and practices would need to align over time. For the public sector the Digital Service Standard would be a critical thing to get right, as is how we implement the Higher Living Standards Framework, both of which would drive some significant transformation in culture, behaviours, incentives and approaches across government.
  • Funding “Digital Public Infrastructure” – technology is currently funded as projects with start and end dates, and almost all tech projects across government are bespoke to particular agency requirements or motivations, so we build loads of technologies but very little infrastructure that others can rely upon. If we took all the models we have for funding other forms of public infrastructure (roads, health, education) and saw some types of digital infrastructure as public infrastructure, perhaps they could be built and funded in ways that are more beneficial to the entire economy (and society).
  • Agile budgeting – we need to fund small experiments that inform business cases, rather than starting with big business cases. Ideally we need to not have multi 100 million dollar projects at all because technology projects simply don’t cost that anymore, and anyone saying otherwise is trying to sell you something :) If we collectively took an agile budgeting process, it would create a systemic impact on motivations, on design and development, or implementation, on procurement, on myriad things. It would also put more responsibility on agencies for the outcomes of their work in short, sharp cycles, and would create the possibility of pivoting early to avoid throwing bad money after good (as it were). This is key, as no transformative project truly survives the current budgeting model.
  • Gov as a platform/API/enabler (closely related to DPI above) – obviously making all government data, content, business rules (inc but not just legislation) and transactional systems available as APIs for building upon across the economy is key. This is how we scale transformation across the public sector because agencies are naturally motivated to deliver what they need to cheaper, faster and better, so when there are genuinely useful reusable components, agencies will reuse them. Agencies are now more naturally motivated to take an API driven modular architecture which creates the bedrock for government as an API. Digital legislation (which is necessary for service delivery to be integrated across agency boundaries) would also create huge transformation in regulatory and compliance transformation, as well as for government automation and AI.
  • Exchange programs across sectors – to share knowledge but all done openly so as to not create perverse incentives or commercial capture. We need to also consider the fact that large companies can often afford to jump through hoops and provide spare capacity, but small to medium sized companies cannot, so we’d need a pool for funding exchange programs with experts in the large proportion of industry.
  • All of system service delivery evidence base – what you measure drives how you behave. Agencies are motivated to do only what they need to within their mandates and have very few all of system motivations. If we have an all of government anonymised evidence base of user research, service analytics and other service delivery indicators, it would create an accountability to all of system which would drive all of system behaviours. In New Zealand we already have the IDI (an awesome statistical evidence base) but what other evidence do we need? Shared user research, deidentified service analytics, reporting from major projects, etc. And how do we make that evidence more publicly transparent (where possible) and available beyond the walls of government to be used by other sectors?  More broadly, having an all of government evidence base beyond services would help ensure a greater evidence based approach to investment, strategic planning and behaviours.

Planet Debian: Joey Hess: my haskell controlled offgrid fridge

I'm preparing for a fridge upgrade, away from the tiny propane fridge to a chest freezer conversion. My home computer will be monitoring the fridge temperature and the state of my offgrid energy system, and turning the fridge on and off using a relay and the inverter control board I built earlier.

This kind of automation is a perfect fit for Functional Reactive Programming (FRP) since it's all about time-varying behaviors and events being combined together.

Of course, I want the control code to be as robust as possible, well tested, and easy to modify without making mistakes. Pure functional Haskell code.

There are many Haskell libraries for FRP, and I have not looked at most of them in any detail. I settled on reactive-banana because it has a good reputation and amazing testimonials.

"In the programming-language world, one rule of survival is simple: dance or die. This library makes dancing easy." – Simon Banana Jones

But, it's mostly used for GUI programming, or maybe some musical live-coding. There were no libraries for using reactive-banana for the more staid task of home automation, or anything like that. Also, using it involves a whole lot of IO code, so not great for testing.

So I built reactive-banana-automation on top of it to address my needs. I think it's a pretty good library, although I don't have a deep enough grokking of FRP to say that for sure.

Anyway, it's plenty flexible for my fridge automation needs, and I also wrote a motion-controlled light automation with it to make sure it could be used for something else (and to partly tackle the problem of using real-world time events when the underlying FRP library uses its own notion of time).

The code for my fridge is a work in progress since the fridge has not arrived yet, and because the question of in which situations an offgrid fridge should optimally run and not run is really rather complicated.

Here's a simpler example, for a non-offgrid fridge.

fridge :: Automation Sensors Actuators
fridge sensors actuators = do
        -- Create a Behavior that reflects the most recently reported
        -- temperature of the fridge.
        btemperature <- sensedBehavior (fridgeTemperature sensors)
        -- Calculate when the fridge should turn on and off.
        let bpowerchange = calcpowerchange <$> btemperature
        onBehaviorChangeMaybe bpowerchange (actuators . FridgePower)
  where
        calcpowerchange (Sensed temp)
                | temp `belowRange` allowedtemp = Just PowerOff
                | temp `aboveRange` allowedtemp = Just PowerOn
                | otherwise = Nothing
        calcpowerchange SensorUnavailable = Nothing
        allowedtemp = Range 1 4

And here the code is being tested in a reproducible fashion:

> runner <- observeAutomation fridge mkSensors
> runner $ \sensors -> fridgeTemperature sensors =: 6
[FridgePower PowerOn]
> runner $ \sensors -> fridgeTemperature sensors =: 3
[]
> runner $ \sensors -> fridgeTemperature sensors =: 0.5
[FridgePower PowerOff]

BTW, building a 400 line library and writing reams of control code for a fridge that has not been installed yet is what we Haskell programmers call "laziness".

TED: The world takes us exactly where we should be: 4 questions with Meagan Fallone

Cartier believes in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with entrepreneur, designer and CEO of Barefoot College International, Meagan Fallone.

TED: Tell us who you are.
Meagan Fallone: I am an entrepreneur, a designer, a passionate mountaineer and a champion of women in the developing world and all women whose voices and potential remain unheard and unrealized. I am a mother and am grounded in the understanding that of all the things I may ever do in my life, it is the only one that truly will define me or endure. I am immovable in my intolerance to injustice in all its forms.

TED: What’s a bold move you’ve made in your career?
MF: I decided to leave the two for-profit companies I started and grow a nonprofit social enterprise.

TED: Tell us about a woman who inspires you.
MF: The women in my family who were risk-takers in their own individual ways: they are always with me and inspire me. My female friends who push me always to dig deeper within myself, to use my power and skills for ever bigger and better impact in the world. I am inspired always by every woman who has ever accepted to come to train with us at Barefoot College. They place their trust in us, leave their community and everyone they love to make an unimaginable journey on every level. It is the bravest thing I have ever seen.

TED: If you could go back in time, what would you tell your 18-year-old self?
MF: I would tell myself not to take myself so seriously. I would tell myself to trust that the world takes us exactly where we should be. It took me far too long to learn to laugh at how ridiculous I am sometimes. It took me even longer to accept that the path that was written for me was not exactly the one I envisaged for myself. Within the things I never imagined lay all the beauty and wonder of my journey so far — and the promise of what I have yet to impact.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

TED: Playlist: 10 TEDWomen talks for Earth Day

Earlier this week, I had the privilege and honor to plant trees with the daughter and granddaughter of environmentalist Wangari Maathai. In recognition of her life’s work promoting “sustainable development, democracy and peace,” Maathai received the 2004 Nobel Peace Prize. She was a lifelong activist who founded the Green Belt Movement in 1977.

At that time, rural women in Kenya petitioned the government for help. They explained that their streams were drying up, making their food supplies less secure and forcing longer walks to fetch firewood. Maathai established the Green Belt Movement and encouraged the women of Kenya to work together to grow seedlings and plant trees to bind the soil, store rainwater, provide food and firewood, and receive a small monetary token for their work. Through her efforts, over 51 million trees have been planted in Kenya. Although Maathai died in 2011, her daughter Wanjira continues her work improving the livelihoods of the women of Kenya and striving for a “cleaner, greener world.”

This Earth Day, the work of Professor Maathai and the Green Belt Movement is an inspiration and a “testament to the power of grassroots organizing, proof that one person’s simple idea — that a community should come together to plant trees, can make a difference.”

With that in mind, here are 10 TEDWomen talks from over the years that highlight innovative ideas, cutting-edge science, and the power that each of us has to safeguard our planet and make our world better for everyone.

1. Climate change is unfair. While rich countries can fight against rising oceans and dying farm fields, poor people around the world are already having their lives upended — and their human rights threatened — by killer storms, starvation and the loss of their own lands. Mary Robinson asks us to join the movement for worldwide climate justice.

2. Ocean expert Nancy Rabalais tracks the ominously named “dead zone” in the Gulf of Mexico — where there isn’t enough oxygen in the water to support life. The Gulf has the second largest dead zone in the world; on top of killing fish and crustaceans, it’s also killing fisheries in these waters. Rabalais tells us about what’s causing it — and how we can reverse its harmful effects and restore one of America’s natural treasures.

3. Filmmaker Penelope Jagessar Chaffer was curious about the chemicals she was exposed to while pregnant: Could they affect her unborn child? So she asked scientist Tyrone Hayes to brief her on one he studied closely: atrazine, a herbicide used on corn. (Hayes, an expert on amphibians, is a critic of atrazine, which displays a disturbing effect on frog development.) Onstage together at TEDWomen, Hayes and Chaffer tell their story.

4. Deepika Kurup has been determined to solve the global water crisis since she was 14 years old, after she saw kids outside her grandparents’ house in India drinking water that looked too dirty even to touch. Her research began in her family kitchen — and eventually led to a major science prize. Hear how this teenage scientist developed a cost-effective, eco-friendly way to purify water.

5. Days before this talk, journalist Naomi Klein was on a boat in the Gulf of Mexico, looking at the catastrophic results of BP’s risky pursuit of oil. Our societies have become addicted to extreme risk in finding new energy, new financial instruments and more … and too often, we’re left to clean up a mess afterward. Klein’s question: What’s the backup plan?

6. The water hyacinth may look like a harmless, even beautiful flowering plant — but it’s actually an invasive aquatic weed that clogs waterways, stopping trade, interrupting schooling and disrupting everyday life. In this scourge, green entrepreneur Achenyo Idachaba saw opportunity. Follow her journey as she turns weeds into woven wonders.

7. A skyscraper that channels the breeze … a building that creates community around a hearth … Jeanne Gang uses architecture to build relationships. In this engaging tour of her work, Gang invites us into buildings large and small, from a surprising local community center to a landmark Chicago skyscraper. “Through architecture, we can do much more than create buildings,” she says. “We can help steady this planet we all share.”

8. Architect Kate Orff sees the oyster as an agent of urban change. Bundled into beds and sunk into city rivers, oysters slurp up pollution and make legendarily dirty waters clean — thus driving even more innovation in “oyster-tecture.” Orff shares her vision for an urban landscape that links nature and humanity for mutual benefit.

9. Beverly + Dereck Joubert live in the bush, filming and photographing lions and leopards in their natural habitat. With stunning footage (some never before seen), they discuss their personal relationships with these majestic animals — and their quest to save the big cats from human threats.

10. Artist and poet Cleo Wade shares some truths about growing up (and speaking up) and reflects on the wisdom of a life well-lived, leaving us with a simple yet enduring takeaway: be good to yourself, be good to others, be good to the earth. “The world will say to you, ‘Be a better person,'” Wade says. “Do not be afraid to say, ‘Yes.'”

TEDWomen 2018 Updates

If you’re interested in attending TEDWomen later this year in Palm Springs, California, on November 28–30, we encourage you to sign up for our email newsletter now to stay up to date. We will be adding details on venue, sessions themes, guest curators and speakers soon. Don’t miss the news!

Planet DebianLisandro Damián Nicanor Pérez Meyer: moving Qt 4 from Debian testing (aka Buster): some statistics, update II

As in my previous blogpost I'm taking a look at our Qt4 removal wiki page.

Of a total of 438 filed bugs:

  • 181 bugs (41.32%) have already been fixed, either by porting the app/library to Qt 5 or by removing the package from the archive. In most cases the code has been ported; most of the removals are due to Qt 5 replacements already being available in the archive, and some are due to dead upstreams (i.e., no Qt 5 port available).
  • 257 bugs (58.68%) still need a fix or are fixed in experimental.
  • 35 of the remaining bugs (8% of the total, 13% of the remaining) are maintained inside the Qt/KDE team.

We started filing bugs around September 9. That means roughly 32 weeks, which gives us around 5.65 packages fixed per week, or about 0.8 packages per day. Obviously not as good as when we started (remaining bugs tend to be more complicated), but still quite good.
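
If you want to re-check these numbers, or update them as more bugs get closed, the arithmetic is easy to reproduce. Here is a small Python sketch using the counts quoted above:

# Reproduce the statistics quoted above (counts as of this post).
total = 438
fixed = 181
remaining = total - fixed   # 257
team = 35                   # remaining bugs maintained inside the Qt/KDE team
weeks = 32                  # roughly, since early September

print(f"fixed:               {fixed / total:.2%}")        # ~41.32%
print(f"remaining:           {remaining / total:.2%}")    # ~58.68%
print(f"team share (total):  {team / total:.2%}")         # ~8%
print(f"team share (remain): {team / remaining:.2%}")     # ~13.6%
print(f"fixed per week:      {fixed / weeks:.2f}")        # ~5.7
print(f"fixed per day:       {fixed / (weeks * 7):.2f}")  # ~0.8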

So, how can you help?

If you are a maintainer of any of the packages still affected, try to get upstream to make a port and package it.

If you are not a maintainer you might want to take a look at the list of packages in our wiki page and try to create a patch for them. If you can submit it directly to upstream, even better. Or maybe it's time for you to become the package's upstream or maintainer!



Planet DebianVincent Bernat: OPL2 Audio Board: an AdLib sound card for Arduino

In a previous article, I presented the OPL2LPT, a sound card for the parallel port featuring a Yamaha YM3812 chip, also known as OPL2—the chip of the AdLib sound card. The OPL2 Audio Board for Arduino is another indie sound card using this chip. However, instead of relying on a parallel port, it uses a serial interface, which can be driven from an Arduino board or a Raspberry Pi. While the OPL2LPT targets retrogamers with real hardware, the OPL2 Audio Board cannot be used in the same way. Nonetheless, it can also be operated from ScummVM and DOSBox!

OPL2 Audio Board for Arduino
The OPL2 Audio Board over a “Grim Fandango” box.

Unboxing🔗

The OPL2 Audio Board can be purchased on Tindie, either as a kit or fully assembled. I have paired it with a cheap clone of the Arduino Nano. A library to drive the board is available on GitHub, along with some examples.

One of them is DemoTune.ino. It plays a short tune on three channels. It can be compiled and uploaded to the Arduino with PlatformIO—installable with pip install platformio—using the following command:1

$ platformio ci \
    --board nanoatmega328 \
    --lib ../../src \
    --project-option="targets=upload" \
    --project-option="upload_port=/dev/ttyUSB0" \
    DemoTune.ino
[...]
PLATFORM: Atmel AVR > Arduino Nano ATmega328
SYSTEM: ATMEGA328P 16MHz 2KB RAM (30KB Flash)
Converting DemoTune.ino
[...]
Configuring upload protocol...
AVAILABLE: arduino
CURRENT: upload_protocol = arduino
Looking for upload port...
Use manually specified: /dev/ttyUSB0
Uploading .pioenvs/nanoatmega328/firmware.hex
[...]
avrdude: 6618 bytes of flash written
[...]
===== [SUCCESS] Took 5.94 seconds =====

Immediately after the upload, the Arduino plays the tune. 🎶

The next interesting example is SerialIface.ino. It turns the audio board into a sound card over serial port. Once the code has been pushed to the Arduino, you can use the play.py program in the same directory to play VGM files. They are a sample-accurate sound format for many sound chips. They log the exact commands sent. There are many of them on VGMRips. Be sure to choose the ones for the YM3812/OPL2! Here is a small selection:

The OPL2 Audio Board playing some VGM files. It is connected to an Arduino Nano. You can see the LEDs blinking when the Arduino receives the commands from the serial port.

Usage with DOSBox & ScummVM🔗

Notice

The support for the serial protocol used in this section has not been merged yet. In the meantime, grab SerialIface.ino from the pull request: git checkout 50e1717.

When the Arduino is flashed with SerialIface.ino, the board can be driven through a simple protocol over the serial port. By patching DOSBox and ScummVM, we can make them use this unusual sound card. Here are some examples of games:

  • 0:00, with DOSBox, the first level of Doom 🎮
  • 1:06, with DOSBox, the introduction of Loom 🎼
  • 2:38, with DOSBox, the first level of Lemmings 🐹
  • 3:32, with DOSBox, the introduction of Legend of Kyrandia 🃏
  • 6:47, with ScummVM, the introduction of Day of the Tentacle ☢️
  • 11:10, with DOSBox, the introduction of Another World2 🐅

DOSBox🔗

The serial protocol is described in the SerialIface.ino file:

/*
 * A very simple serial protocol is used.
 *
 * - Initial 3-way handshake to overcome reset delay / serial noise issues.
 * - 5-byte binary commands to write registers.
 *   - (uint8)  OPL2 register address
 *   - (uint8)  OPL2 register data
 *   - (int16)  delay (milliseconds); negative -> pre-delay; positive -> post-delay
 *   - (uint8)  delay (microseconds / 4)
 *
 * Example session:
 *
 * Arduino: HLO!
 * PC:      BUF?
 * Arduino: 256 (switches to binary mode)
 * PC:      0xb80a014f02 (write OPL register and delay)
 * Arduino: k
 *
 * A variant of this protocol is available without the delays. In this
 * case, the BUF? command should be sent as B0F? The binary protocol
 * is now using 2-byte binary commands:
 *   - (uint8)  OPL2 register address
 *   - (uint8)  OPL2 register data
 */

Adding support for this protocol in DOSBox is relatively simple (patch). For best performance, we use the 2-byte variant (5000 ops/s). The binary commands are pipelined and a dedicated thread collects the acknowledgments. A semaphore captures the number of free slots in the receive buffer. As it is not possible to read registers, we rely on DOSBox to emulate the timers, which are mostly used to let the various games detect the OPL2.
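
To make that flow control concrete, here is a rough Python sketch of a host-side client for the 2-byte variant. Treat it as an illustration only: it assumes pyserial, and the port name, baud rate, register values and the exact shape of the handshake replies are guesses of mine—this is not the code used by play.py or by the DOSBox patch (which is C++).

# Hedged sketch of a host-side client for the 2-byte protocol variant.
# Assumptions: pyserial is installed, the board answers "HLO!" and the
# buffer size as single lines, and each command is acknowledged with "k".
import threading
import time

import serial  # pyserial


class Opl2Board:
    def __init__(self, port="/dev/ttyUSB0", baud=115200):
        self.ser = serial.Serial(port, baud, timeout=2)
        time.sleep(2)                                  # wait out the Arduino reset
        assert self.ser.readline().strip() == b"HLO!"  # board greets us
        self.ser.write(b"B0F?")                        # request the no-delay variant
        bufsize = int(self.ser.readline())             # e.g. 256; binary mode from now on
        # Each command is 2 bytes; bound the number of commands in flight
        # so the Arduino receive buffer never overflows.
        self.free_slots = threading.Semaphore(bufsize // 2)
        threading.Thread(target=self._collect_acks, daemon=True).start()

    def _collect_acks(self):
        while True:
            if self.ser.read(1) == b"k":   # one acknowledgment per command
                self.free_slots.release()

    def write_reg(self, addr, data):
        self.free_slots.acquire()          # blocks while the buffer is full
        self.ser.write(bytes([addr, data]))


board = Opl2Board()
board.write_reg(0x01, 0x20)  # example: set the OPL2 waveform-select enable bit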

The patch is tested only on Linux but should work on any POSIX system—not Windows. To test it, you need to build DOSBox from source:

$ sudo apt build-dep dosbox
$ git clone https://github.com/vincentbernat/dosbox.git -b feature/opl2audioboard
$ cd dosbox
$ ./autogen.sh
$ ./configure && make

Replace the sblaster section of ~/.dosbox/dosbox-SVN.conf:

[sblaster]
sbtype=none
oplmode=opl2
oplrate=49716
oplemu=opl2arduino
opl2arduino=/dev/ttyUSB0

Then, run DOSBox with ./src/dosbox. That’s it!

You will likely get the “OPL2Arduino: too slow, consider increasing buffer” message a lot. To fix this, you need to recompile SerialIface.ino with a bigger receive buffer:

$ platformio ci \
    --board nanoatmega328 \
    --lib ../../src \
    --project-option="targets=upload" \
    --project-option="upload_port=/dev/ttyUSB0" \
    --project-option="build_flags=-DSERIAL_RX_BUFFER_SIZE=512" \
    SerialIface.ino

ScummVM🔗

The same code can be adapted for ScummVM (patch). To test, build it from source:

$ sudo apt build-dep scummvm
$ git clone https://github.com/vincentbernat/scummvm.git -b feature/opl2audioboard
$ cd scummvm
$ ./configure --disable-all-engines --enable-engine=scumm && make

Then, you can start ScummVM with ./scummvm. Select “AdLib Emulator” as the music device and “OPL2 Arduino” as the AdLib emulator.3 Like for DOSBox, watch the console to check if you need a larger receive buffer.

Enjoy! 😍


  1. This command is valid for an Arduino Nano. For another board, take a look at the output of platformio boards arduino↩︎

  2. Another World (also known as Out of This World), released in 1991, designed by Éric Chahi, is using sampled sounds at 5 kHz or 10 kHz. With a serial port operating at 115,200 bits/s, the 5 kHz option is just within our reach. However, I have no idea if the rendering is faithful. It doesn’t sound like a SoundBlaster, but it sounds analogous to the rendering of the OPL2LPT which sounds similar to the SoundBlaster when using the 10 kHz option. DOSBox’ AdLib emulation using Nuked OPL3—which is considered to be the best—sounds worse. ↩︎

  3. If you need to specify a serial port other than /dev/ttyUSB0, add a line opl2arduino_device= in the ~/.scummvmrc configuration file. ↩︎

Planet Linux AustraliaDavid Rowe: WaveNet and Codec 2

Yesterday my friend and fellow open source speech coder Jean-Marc Valin (of Speex and Opus fame) emailed me with some exciting news. W. Bastiaan Kleijn and friends have published a paper called “Wavenet based low rate speech coding“. Basically they take the bit stream of Codec 2 running at 2400 bit/s, and replace the Codec 2 decoder with the WaveNet deep learning generative model.

What is amazing is the quality – it sounds as good as an 8000 bit/s wideband speech codec! They have generated wideband audio from the narrowband Codec 2 model parameters. Here are the samples – compare “Parametrics WaveNet” to Codec 2!

This is a game changer for low bit rate speech coding.

I’m also happy that Codec 2 has been useful for academic research (Yay open source), and that the MOS scores in the paper show it’s close to MELP at 2400 bit/s. Last year we discovered Codec 2 is better than MELP at 600 bit/s. Not bad for an open source codec written (more or less) by one person.

Now I need to do some reading on Deep Learning!

Reading Further

Wavenet based low rate speech coding
Wavenet Speech Samples
AMBE+2 and MELPe 600 Compared to Codec 2

,

Planet DebianBenjamin Mako Hill: Hyak on Hyak

I recently fulfilled a yearslong dream of launching a job on Hyak* on Hyak.

Hyak on Hyak

 


* Hyak is the University of Washington’s supercomputer which my research group uses for most of our computation-intensive research.
M/V Hyak is a Super-class ferry operated by the Washington State Ferry System.

CryptogramFriday Squid Blogging: Squid Prices Rise as Catch Decreases

In Japan:

Last year's haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.

Lighter catches have been blamed on changing sea temperatures, which impedes the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.

Wholesale prices of flying squid have climbed as a result. Last year's average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJonathan Dowland: Twitter 10th anniversary

Tomorrow marks my 10th anniversary on Twitter. I have mixed feelings about the occasion. Twitter has been both a terrific success and a horrific failure. I've enjoyed it, I've discovered interesting people via Twitter and had some great interactions. I certainly prefer it to Facebook, but that's not a high benchmark.

Back in the early days I tried to engage with Twitter the way a hacker would. I worked out a scheme to archive my own tweets. I wrote a twitter bot. But Twitter became more and more hostile to that kind of interaction, so I no longer bother. Anything I put on Twitter I consider ephemeral. I've given up backing up my own tweets, conversations, or favourites. I deleted the bot. I keep a "sliding window" of recent tweets, outside of which I delete (via tweetdelete). My window started out a year wide; now it's down to three months.

Aside from the general hostility to third parties wanting to build on the Twitter platform, they've also done a really poor job of managing bad actors. Of the tools they do offer, they save the best for people with "verified" status: ostensibly a system for preventing fakes, now considered by some a status symbol. Twitter have done nothing to counter this, in fact they've actively encouraged it, by withdrawing it in at least one case from a notorious troll as an ad-hoc form of punishment. For the rest of us, the tools are woefully inadequate. If you find yourself on the receiving end of even a small pocket of bad attention, Twitter becomes effectively unusable for hours or days on end. Finally, the troll-in-chief (and now President of the US) is inexplicably still permitted on Twitter despite repeatedly and egregiously violating their terms of service, demonstrating that there are different rules for some folks than for the rest of us.

(By the way, I thoroughly recommend looking at Block Lists/Bots. I'm blocking thousands of accounts, although the system I've been using appears to have been abandoned. It might be worth a look at blocktogether.org; I intend to at some point.)

To some extent Twitter is responsible for—if not the death—the mortal wounding of blogging. Back in the dim-and-distant, we'd write blog posts for the idle thoughts (e.g.), and they've migrated quite comfortably to tweets, but it seems to have had a sapping effect on people writing even longer-form stuff. Twitter isn't the only culprit: Google sunsetting Reader in 2013 was an even bigger blow, and I've still not managed to find something to replace it. (Plenty of alternatives exist, but the habit has died.)

One of the well-meaning, spontaneous things that came from the Twitter community was the notion of "Follow Friday": on Fridays, folks would nominate other interesting folks that you might like to follow. In that spirit, and wishing to try to boost the idea of blogging again, I'd like to nominate some interesting blogs that you might enjoy. (Feel free to recommend me some more blogs to read in the comments!):

  • Vicky Lai first came up on my radar via Her One Bag, documenting her nomadic lifestyle (Hello UltraNav keyboard, and Stanley travel mug!), but her main site is worth following, too. Most recently she's written up how she makes her twitter ephemeral using AWS Lambda.
  • Alex Beal, who I have already mentioned.
  • Chris Siebenmann, a UNIX systems administrator at the University of Toronto. Siebenmann's blog feels to me like it comes from a parallel Universe where I stuck it out as a sysadmin, and got institutional support to do the job justice (I didn't, and I didn't.)
  • Darren Wilkinson writes about Statistics, computing, data science, Bayes, stochastic modelling, systems biology and bioinformatics
  • Friend of the family Mina writes candidly and brilliantly about her journey beating Lymphoma as a new mum at Lymphoma, Raphi and me
  • Ashley Pomeroy writes infrequently, eclectically (and surreally) on a range of topics, from the history of the Playstation 3, running old games on modern machines, photography and Thinkpads.

A couple of blogs from non-Debian/Linux OS developers. It's always nice to see what the other grass is like.

Finally, a more pleasing decennial: this year marks 10 years since my first uploaded package for Debian.

Krebs on SecurityIs Facebook’s Anti-Abuse System Broken?

Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company’s “community standards,” the social media giant’s technology appears to be woefully inadequate.

Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.

However, the titles, images and postings available on each group’s front page left little doubt about their true purpose: Selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.

To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing via email a spreadsheet detailing each group, which concluded that the average length of time the groups had been active on Facebook was two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.

That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.

In short order, some of the groups I reported that had been removed re-established themselves within hours of Facebook’s action. I decided that, instead of contacting Facebook’s public relations arm directly, I would report those resurrected groups and others using Facebook’s stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here’s a snippet from those replies:

Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter’s reporting system to successfully sideline accounts of specific targets.

Or it could be that I simply didn’t click the proper sequence of buttons when reporting these groups. The closest matches I could find in Facebook’s abuse reporting system were “Doesn’t belong on Facebook” and “Purchase or sale of drugs, guns or regulated products.” There was/is no option for “selling hacked accounts, credit cards and identities,” or anything of that sort.

In any case, one thing seems clear: Naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook’s own formal abuse reporting process. So that’s what I did on Thursday. Here’s an example:

Within minutes of my tweeting about this, the group was gone. I also tweeted about “Best of the Best,” which was selling accounts from many different e-commerce vendors, including Amazon and eBay:

That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today. But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they’d received the same “does not violate our community standards” reply from Facebook after reporting other groups that clearly flouted the company’s standards.

Pete Voss, Facebook’s communications manager, apologized for the oversight.

“We’re sorry about this mistake,” Voss said. “Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user’s reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from.”

Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn’t doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content. It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year.

“But right now, those moderators can only react to posts Facebook users have flagged,” writes Will Knight, for Technologyreview.com.

Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or “AI” technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by a perception that doing otherwise is simply bad for its business model.

Update, 1:32 p.m. ET: Several readers pointed my attention to a Huffington Post story just three days ago, “Facebook Didn’t Seem To Care I Was Being Sexually Harassed Until I Decided To Write About It,” about a journalist whose reports of extreme personal harassment on Facebook were met with a similar response about not violating the company’s Community Standards. That is, until she told Facebook that she planned to write about it.

CryptogramSecuring Elections

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers. The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers' conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It's important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn't be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They're computers -- often ancient computers running operating systems no longer supported by the manufacturers -- and they don't have any magical security technology that the rest of the industry isn't privy to. If anything, they're less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.

We're not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can't use the security systems available to banking and other high-value applications.

We can securely bank online, but can't securely vote online. If we could do away with anonymity -- if everyone could check that their vote was counted correctly -- then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can't, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let's start with the voter rolls. We know they've already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That's just one possibility. A well-executed attack that deletes, for example, one in five voters at random -- or changes their addresses -- would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment.

Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-once media like a DVD. Copies of that DVD, or -- even better -- a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with. Additionally, we need better coordination and communications when incidents occur.

It's vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it's easy to agree on strong security. But after the vote, someone is the presumptive winner -- and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it's too late to agree on anything.

The politicians running in the election shouldn't have to argue their challenges in court. Getting elections right is in the interest of all citizens. Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don't do that in the US.

Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because a few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don't go nearly far enough. The constitution delegates elections to the states but allows Congress to "make or alter such Regulations". In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Worse Than FailureError'd: Placeholders-a-Plenty

"On my admittedly old and cheap phone, Google Maps seems to have confused the definition of the word 'trip'," writes Ivan.

 

"When you're Gas Networks Ireland, and don't have anything nice to say, I guess you just say lorem ipsum," wrote Gabe.

 

Daniel D. writes, "Google may not know how I got 100 GB, but they seem pretty sure that it's expiring soon."

 

Peter S. wrote, "F1 finally did it. The quantum driver Lastname is driving a Ferrari and chasing him- or herself in Red Bull."

 

Hrish B. writes, "I hope my last name is not an example as well."

 

Peter S. wrote, "Not sure what IEEE wants me to vote for. But I would vote for hiring better coders."

 

"Well, at least they got my name right, half of the time," Peter S. writes.

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet Linux AustraliaOpenSTEM: NAPLAN and vocabulary

It is the time of year when the thoughts of teachers of students in years 3, 5, 7 and 9 turn (not so) lightly to NAPLAN. I’m sure many of you are aware of the controversial review of NAPLAN by Les Perelman, a retired professor from MIT in the United States. Perelman conducted a similar […]

Planet Linux AustraliaFrancois Marier: Using a Kenwood TH-D72A with Pat on Linux and ax25

Here is how I managed to get my Kenwood TH-D72A radio working with Pat on Linux using the built-in TNC and the AX.25 mode.

Installing Pat

First of all, download and install the latest Pat package from the GitHub project page.

dpkg -i pat_x.y.z_amd64.deb

Then, follow the installation instructions for the AX.25 mode and install the necessary packages:

apt install ax25-tools ax25-apps

along with the systemd script that comes with Pat:

/usr/share/pat/ax25/install-systemd-ax25-unit.bash

Configuration

Once the packages are installed, it's time to configure everything correctly:

  1. Power cycle the radio.
  2. Enable TNC in packet12 mode (band A*).
  3. Tune band A to VECTOR channel 420 (or 421 if you can't reach VA7EOC on simplex).
  4. Put the following in /etc/ax25/axports (replacing CALLSIGN with your own callsign):

     wl2k    CALLSIGN    9600    128    4    Winlink
    
  5. Set HBAUD to 1200 in /etc/default/ax25.

  6. Download and compile the tmd710_tncsetup script mentioned in a comment in /etc/default/ax25:

     gcc -o tmd710_tncsetup tmd710_tncsetup.c
    
  7. Add the tmd710_tncsetup script in /etc/default/ax25 and use these command line parameters (-B 0 specifies band A, use -B 1 for band B):

     tmd710_tncsetup -B 0 -S $DEV -b $HBAUD -s
    
  8. Start ax25 driver:

     systemctl start ax25.service
    

Connecting to a winlink gateway

To monitor what is being received and transmitted:

axlisten -cart

Then create aliases like these in ~/.wl2k/config.json:

{
  "connect_aliases": {
    "ax25-VA7EOC": "ax25://wl2k/VA7EOC-10",
    "ax25-VE7LAN": "ax25://wl2k/VE7LAN-10"
  },
}

and use them to connect to your preferred Winlink gateways.

Troubleshooting

If it doesn't look like ax25 can talk to the radio (i.e. the TX light doesn't turn ON), then it's possible that the tmd710_tncsetup script isn't being run at all, in which case the TNC isn't initialized correctly.

On the other hand, if you can see the radio transmitting but are not seeing any incoming packets in axlisten then double check that the speed is set correctly:

  • HBAUD in /etc/default/ax25 should be set to 1200
  • line speed in /etc/ax25/axports should be set to 9600
  • SERIAL_SPEED in tmd710_tncsetup should be set to 9600
  • radio displays packet12 in the top-left corner, not packet96

If you can establish a connection, but it's very unreliable, make sure that you have enabled software flow control (the -s option in tmd710_tncsetup).

If you can't connect to VA7EOC-10 on UHF, you could also try the VHF BCFM repeater on Mt Seymour, VE7LAN (VECTOR channel 65).

Planet DebianGunnar Wolf: 15.010958904109589041

Gregor's post made me think...

And yes! On April 15, I passed the 15-year-mark as a Debian Developer.

So, today I am 15.010958904109589041 years old in the project, give or take some seconds.

And, quoting my dear and admired friend, I deeply feel I belong to this community. Being part of Debian has defined the way I have shaped my career, has brought me beautiful friendships I will surely keep for many many more years, has helped me decide in which direction I should push to improve the world. I feel welcome and very recognized among people I highly value and admire, and that's the best collective present I could get.

Debian has grown and matured tremendously since the time I decided to join, and I'm very proud to be a part of that process.

Thanks, and let's keep it going for the next decade.

Planet DebianKees Cook: UEFI booting and RAID1

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running EFI/BOOT/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
      ...
      1, 1.0, 1.1, 1.2 default
             Use the new version-1 format superblock.  This has fewer
             restrictions.  It can easily be moved between hosts with
             different endian-ness, and a recovery operation can be
             checkpointed and restarted.  The different sub-versions store
             the superblock at different locations on the device, either
             at the end (for 1.0), at the start (for 1.1) or 4K from the
             start (for 1.2).  "1" is equivalent to "1.2" (the commonly
             preferred 1.x format).  "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Move the RAID knowledge up a level and keep a collection of separate filesystems manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need a redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the latter option. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /dev/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

,

Planet DebianEnrico Zini: Detect a UEFI partition

Today I had to implement a check to see if a disk contains a UEFI ESP partition.

Here it is, it also works on disk image files instead of devices:

def get_uefi_partition(self, disk_dev):
    """
    Return the partition device of the UEFI ESP partition for the device in
    disk_dev.

    Returns None if disk_dev contains no UEFI ESP partition.
    """
    import parted
    pdev = parted.getDevice(disk_dev)
    pdisk = parted.newDisk(pdev)
    if pdisk.type != "gpt":
        log.error("device %s has partition table type %s instead of gpt", disk_dev, pdisk.type)
        return None
    for part in pdisk.partitions:
        # Flag 18 corresponds to PED_PARTITION_ESP in libparted,
        # i.e. the GPT "EFI System Partition" flag.
        if part.getFlag(18):
            log.info("Found ESP partition in %s", part.path)
            return part.path
    log.info("No ESP partition found in %s", disk_dev)
    return None
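
To try this outside of its original class, here is a hedged usage sketch: self is not used in the function body, so passing None is enough, and log only needs to be a standard logger (both assumptions on my part, since the snippet above clearly comes from a larger module). It assumes pyparted is installed and that you have read access to the device or image.

# Hedged usage sketch for the helper above. `self` is not used inside
# the function, so None is passed for it; `log`, which the function
# expects, is provided here as a standard logger.
import logging
import sys

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

if __name__ == "__main__":
    # e.g. python3 detect_esp.py /dev/sda  (or a GPT disk image file)
    esp = get_uefi_partition(None, sys.argv[1])
    print(esp if esp else "no ESP partition found")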

Planet DebianRhonda D'Vine: Diversity Update

I have to excuse for being silent for that long. Way too many things happened. In fact I already wrote most of this last fall, but then something happened that did impact me too much to finalize this entry. And with that I want to go a bit into details how I write my blog entries:
I start to write them in English, I like to cross-reference things, and after I'm done I go over it and write it again in German. That process helps me proof-reading the English part, but it also means that it takes a fair amount of time. And the longer the entries get the more energy the translation and proof reading part takes, too. That's mostly also the reason why I tend to write longer entries when I find the energy and time for it.

Anyway, the first thing that I want to mention here finally happened last June: I officially got changed my name and gender/sex marker in my papers! That was a very happy moment in so many ways. A week later I got my new passport, finally managed to book my flight to Debconf in my name. Yay me, I exist!

Then, Stretch was released. I have to admit I had very little to do, wasn't involved in the release process, neither from the website team nor anywhere else because ...

... because I was packing my stuff that weekend, because on June 21st, a second thing finally happened: I got the keys to my flat in the Que[e]rbau!! Yes, I'm aware that we still need to work on the website. The building company actually did make a big event out of it, called every single person onto stage and handed over the keys. And it made me happy to be able to receive my key in my name and not one I don't relate to since a long while anymore. It did hurt seeing that happening to someone else from our house, even though they knew what the Que[e]rbau is about ... And: I moved right in the same day. Gave up my old flat the following week, even though I didn't have much furniture nor a kitchen but I was waiting way too long to be able to not be there. And just watch that sunset from my balcony. <3

And I mentioned it in the last blog post already, the European Lesbian* Conference organization needed more and more work, too. The program for it started to finalize, but there were still more than enough things to do. I totally fell into this, this was the first time I really felt what intersectionality means and that it's not just a label but an internal part of this conference. The energy going on in the team on that grounds is really outstanding, and I'm totally happy to be part of this effort.

And then came along Debconf17 in Montreal. It was nice to be with a fair amount of people that grew on me like a family over the years. And interestingly I got the notice that there was a Trans March going on, so I joined that. It was a pleasure meeting Sophie LaBelle and Chase Ross there. I wasn't aware that Chase was from Montreal, so that part was a surprise. Sophie I knew, and I brought her back to Vienna in November, right before the Transgender Day of Remembrance. :)

But one of the two moving speeches at the march was from Charlie Rose, titled My Gender Is Black. I managed to get a recording of this and another great speech from another Black Lives Matter activist, and hope I'll be able to put them online at some point. For the time being the link to the text should be able to help.

And then Debconf itself started. And I held the Debian Diversity Round Table. While the title might had been misleading, because this group isn't officially formed yet, it turned out to get a fair amount of interest. I started off with why I called for it, that I intentionally chose to not have it video taped for people to be able to speak more freely and after a short introduction round with names, pronouns and other things people wanted to share we had some interesting discussions on why people think this is a good idea, what direction to move. A few ideas did spring up, and then ... time ran out. So actually we scheduled a continuation BoF to further enhance the topic. At the end of that we came up with a pretty good consensual view on how to move forward. Unfortunately I didn't manage yet to follow up on that and feel quite bad about it. :/

Because, after returning, getting back into work, and needing a bit more time for EL*C I started to feel serious pain in my back and my leg which seems to be a slipped disc and was on sick leave for about two months. The pain was too much, I even had to stay at the hospital for two weeks because my stomach acted up too.

At the end of October we had a grand opening: We have a community space in our Que[e]rbau in which we built sort of a bar, with cooking facility and hi-fi equipment. And we intentionally opened it up to the public. It's name is Yella Yella! Nachbar_innentreff. We named it after Yella Hertzka who was an important feminist at the start of the 20th century. The park on the other side of the street is called Yella Hertzka park, so the pun in the name with the connection to the arabic proverb Yalla Yalla is intentional.

With the Yella Yella a fair amount of internal discussions emerged, we all only started to live together, so naturally this took a fair amount of energy and discussions. Things take time to get a feeling for all the people. There were several interviews made, and events to get organized to get it running.

And then out of the sudden it turned 2018 and I still haven't published this post. I'm sorry 'bout that, but sometimes there are other things needing time. And here I am. Time move on even if we don't look at it.

A recent project that I had the honor to be part of is my movement is limitless [trans_non-binary short]. It was interesting to think about the topic whether gender identity affects the way you dance. And to seen and hear other people's approach to it.

At the upcoming Linuxtage Graz there will be a session about Common misconceptions about names and spaces and communities because they were enforcing a realname policy – at a community event. Not only is this a huge issue for trans people but also works against privacy researchers or people from the community that noone really knows by the name in their papers. The discussions that happened on twitter or in the background were partly a fair bit disturbing. Let's hope that we'll manage to make a good panel.

Which brings us to a panel for the upcoming Debconf in Taiwan. There is a suggestion to have a Gender Forum at the Openday. I'm still not completely sure what it should cover or what is expected for it and I guess it's still open for suggestions. There will be a plan, let's see to make it diverse and great!

I won't promise to send the next update sooner, but I'll try to get back into it. Right now I'm also working on a (German language) submission for a non-binary YouTube project and it would be great to see that thing lift off. I'll be more verbose on that front.

Thanks for reading so far, and read you soon. :)

/personal | permanent link | Comments: 0 | Flattr this

Planet DebianGregor Herrmann: 10 years + 1 day

yesterday 10 years ago I became a Debian Developer.
& I still feel that I belong to this community.
& it took me one more day to write this tiny blog post about it.
so tonight I can celebrate 10 years plus 1 day :)

Planet DebianSteinar H. Gunderson: MySQL 8.0 released

(This post is not endorsed by Oracle, and I do not speak for them.)

MySQL 8.0.11 GA (General Availability) is out today—for those not used to Oracle's idiosyncratic versioning, this essentially means “MySQL 8.0 is released” (8.0.1 and so forth were various stages of alpha and beta). This marks the end of three years of development, of which I've been on board for two or so of them.

It feels a bit strange to be working on a product you don't use yourself (my personal datastore of choice firmly remains PostgreSQL), but that's how things go—back when I worked in Google, there were also products that I felt more or less deeply about, although I generally felt more connected to them and less “it's just a job”. I do pride myself on having a neutral assessment of my employer's products, though—I don't use a product just because my employer makes it (e.g., I didn't use Gmail or Google Plus personally when I worked at Google, as I don't find them very good products, but I did use Maps and Search, which I both find excellent).

Being new on the team, it's hard to dive directly in and make major contributions—MySQL is a product with a lot of legacy, despite extensive cleanups over the last few years (especially after Oracle bought Sun and the original developers left), and the amount of documentation is rather varying. So what did I do? I removed stuff. Tons of it.

I removed warnings for newer compilers (in many rounds, and so did many others). I removed tons of unneeded includes. I removed unneeded usages of Boost. I removed the abomination that was my_global.h (there used to be a single global header file that everything was supposed to #include, and in turn #included half the world). I removed home-grown atomics, since C++11 includes its own in the standard. I removed DTrace probes that nobody used. I removed bad names. I removed the custom bool type (yes, seriously, MySQL had its own type for bool, which caused rather subtle bugs). I removed PAD SPACE behavior from the default collation, which enabled a few important optimizations (and NO PAD makes so much more sense in a Unicode world). I removed lots of internal header files from the default installation. I removed the home-grown TLS system (which sped up everything by a few percent). I removed the home-grown quicksort code. I removed radixsort, which was only a win because the old home-grown quicksort code was so slow. I removed the home-grown hash table HASH which was not type-safe, slower than the C++11 unordered_map and fairly buggy. I removed the dreaded query cache! (I didn't remove the embedded server, which doubled the compile time and nobody ever used, but I pushed pretty hard for doing so.) I removed ambiguous includes. I removed unused code from binaries, shrinking the distribution by a few megabytes. I removed compiler flag ricing that doesn't help in 2018. I removed a home-grown printf that was buggy. I removed a lot of C legacy, since MySQL no longer needs to be bound to a world without C++. I removed Sql_alloc, a magical class you'd inherit from and get surprising memory-allocation behavior. I removed some waiting on the compiler. I removed MySQL's custom and weird coding style (well, at least its formatting). And I removed obsolete SQL modes (together with others), which as far as I know is the first time anyone's removed an SQL mode in MySQL.

Of course, I didn't only remove stuff; I also added things like more efficient sorting of strings, added a new microbenchmark framework, sped up the new Unicode 9.0 collations by 20x, and probably added some legacy on my own. :-) But sometimes, it's good to remove. Remove some code today!

Planet DebianJulien Danjou: Lessons from OpenStack Telemetry: Deflation


This post is the second and final episode of Lessons from OpenStack Telemetry. If you have missed the first post, you can read it here.

Splitting

At some point, the rules on adding new projects were relaxed with the Big Tent initiative, allowing us to rename ourselves to the OpenStack Telemetry team and to split Ceilometer into several subprojects: Aodh (alarm evaluation functionality) and Panko (events storage). Gnocchi was able to join the OpenStack Telemetry party for its first anniversary.

Finally being able to split Ceilometer into several independent pieces of software allowed us to tackle technical debt more rapidly. We built autonomous teams for each project and gave them the same liberty they had in Ceilometer. The cost of migrating the code base to several projects was higher than we wanted it to be, but we managed to build a clear migration path nonetheless.

Gnocchi Shamble

With Gnocchi in town, we stopped all efforts on Ceilometer storage and API and expected people to adopt Gnocchi. What we underestimated was the unwillingness of many operators to think about telemetry. They did not want to deploy anything to have telemetry features in the first place, so adding yet another component (a timeseries database) to get proper metric features was seen as a burden – and sometimes not seen at all.
Indeed, we also did not communicate enough on our vision for that transition. After two years of existence, many operators were asking what Gnocchi was and what they needed it for. They deployed Ceilometer and its bogus storage and API and were confused about needing yet another piece of software.

It took us more than two years to deprecate the Ceilometer storage and API, which is way too long.

Deflation

In the meantime, people were leaving the OpenStack boat. Soon enough, we started to feel the shortage of human resources. Smartly, we never followed the OpenStack trend of imposing blueprints, specs, bug reports or any process on contributors, obeying my list of open source best practices. This flexibility allowed us to iterate more rapidly: compared to other OpenStack projects, we were going faster in proportion to the size of our contributor base.


Nonetheless, we felt like we were bailing out a sinking ship. Our contributors were disappearing while we were swamped with technical debt: half-baked features, unfinished migrations, legacy choices and temporary hacks. After the big party that happened, we had to wash the dishes and sweep the floor.

Being part of OpenStack started to feel like a burden in many ways. The inertia of OpenStack being a big project was beginning to surface, so we put in a lot of effort to dodge most of its implications. Consequently, the team was perceived as an outlier, which does not help, especially when you have to interact a lot with your neighbors.

The OpenStack Foundation never understood the organization of our team. They would refer to us as "Ceilometer" whereas we had formally renamed ourselves to "Telemetry", since we encompassed four server projects and a few libraries. For example, while Gnocchi was an OpenStack project for two years before leaving, it was never listed on the project navigator maintained by the foundation.

That's a funny anecdote that demonstrates the peculiarity of our team, and how it has been both a strength and a weakness.

Competition

Nobody was trying to do what we were doing when we started Ceilometer. We filled the space of metering OpenStack. However, as the number of companies involved increased, and the friction along with it, some people grew unhappy. The race to have a seat at the table of the feast and become a Project Team Leader was strong, so some people preferred to create their own project rather than trying to play the contribution game. In many areas, including ours, that divided the effort up to a ridiculous point where several teams were doing exactly the same thing, or were trying to step on each other's toes to kill the competitors.

We spent a significant amount of time trying to bring other teams into the Telemetry scope, to unify our efforts, without much success. Some companies were not embracing open source because of their cultural differences, while others had no interest in joining a project where they would not be seen as the leader.

That fragmentation did not help us, but also did not do much harm in the end. Most of those projects are now either dead or becoming irrelevant as the rest of the world caught up on what they were trying to do.

Epilogue

As of 2018, I'm the PTL for Telemetry – because nobody else ran. The official list of maintainers for the telemetry projects is five people: two are inactive, and three are part-time. During the latest development cycle (Queens), 48 people committed to Ceilometer, though only three developers made impactful contributions. The code size has been halved since its peak: Ceilometer is now 25k lines of code.

Panko and Aodh have no active developers. A Red Hat colleague and I are keeping the projects afloat so they keep working.

Gnocchi has quietly thrived since it left OpenStack. The stains from having been part of OpenStack are not yet all gone. It has a small community, but users see its real value and enjoy using it.

Those last six years have been intense, and riding the OpenStack train has been amazing. As I concluded in the first blog post of this series, most of us had a great time overall; the point of those writings is not to complain, but to reflect.

I find it fascinating to see how the evolution of a piece of software and the metamorphosis of its community are entangled. The amount of politics that a corporately-backed project of this size generates is majestic and has a prominent influence on the outcome of software development.

So, what's next? Well, as far as Ceilometer is concerned, we still have ideas and plans to keep shrinking its footprint to a minimum. We hope that one day Ceilometer will become irrelevant – at least that's what we're trying to achieve, so we don't have anything to maintain. That mainly depends on how the myriad of OpenStack projects choose to address their metering.

We don't see any future for Panko nor Aodh.

Gnocchi, now blooming outside of OpenStack, is still young and promising. We've plenty of ideas and every new release brings new fancy features. The storage of timeseries at large scale is exciting. Users are happy, and the ecosystem is growing.

We'll see how all of that concludes, but I'm sure there will be new lessons to learn and write about in six years!

CryptogramLifting a Fingerprint from a Photo

Police in the UK were able to read a fingerprint from a photo of a hand:

Staff from the unit's specialist imaging team were able to enhance a picture of a hand holding a number of tablets, which was taken from a mobile phone, before fingerprint experts were able to positively identify that the hand was that of Elliott Morris.

[...]

Speaking about the pioneering techniques used in the case, Dave Thomas, forensic operations manager at the Scientific Support Unit, added: "Specialist staff within the JSIU fully utilised their expert image-enhancing skills which enabled them to provide something that the unit's fingerprint identification experts could work. Despite being provided with only a very small section of the fingerprint which was visible in the photograph, the team were able to successfully identify the individual."

Planet DebianSven Hoexter: logstash 5.6.9 logstash-input-udp 3.3.1 br0ken

While the memory leak is fixed in logstash 5.6.9 the logstash-input-udp plugin is broken. A fixed plugin got released as version 3.3.2.

The code change is https://github.com/logstash-plugins/logstash-input-udp/commit/7ecec49a3f1a0f8b51c77bd9243b8cc0dbebaeb8.

The discussion is at https://discuss.elastic.co/t/udp-input-is-crashing/128485.

So instead of fiddling again with plugin updates and offline bundles, we decided to just go down the ugly road of abusing Ansible and installing a copy of the fixed udp.rb file. This is horrible, but it works.

- name: check for br0ken logstash udp input plugin version
  shell: /usr/share/logstash/bin/logstash-plugin list --verbose logstash-input-udp | grep -E '3\.3\.1'
  register: logstash_udp_plugin_check
  ignore_errors: True
  tags:
    - "skip_ansible_lint"

- name: install fixed udp input plugin
  copy:
    src: "hacks/udp.rb"
    dest: "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-udp-3.3.1/lib/logstash/inputs/udp.rb"
    owner: "root"
    group: "root"
    mode: 0644
  when: logstash_udp_plugin_check.rc == 0
  notify: restart logstash

Kudos to Martin and Paul for handling this one swiftly.

Worse Than FailureCodeSOD: A Problematic Place

In programming, sometimes the ordering of your data matters. And sometimes the ordering doesn’t matter and it can be completely random. And sometimes… well, El Dorko found a case where it apparently matters that it doesn’t matter:

DirectoryInfo di = new DirectoryInfo(directory);
FileInfo[] files = di.GetFiles();
DirectoryInfo[] subdirs = di.GetDirectories();

// shuffle subdirs to avoid problematic places
Random rnd = new Random();
for( int i = subdirs.Length - 1; i > 0; i-- )
{
    int n = rnd.Next( i + 1 );
    DirectoryInfo tmp = subdirs[i];
    subdirs[i] = subdirs[n];
    subdirs[n] = tmp;
}

foreach (DirectoryInfo dir in subdirs)
{
   // process files in directory
}

This code does some processing on a list of directories. Apparently while coding this, the author found themself in a “problematic place”. We all have our ways of avoiding problematic places, but this programmer decided the best way was to introduce some randomness into the equation. By randomizing the order of the list, they seem to have completely mitigated… well, it’s not entirely clear what they’ve mitigated. And while their choice of shuffling algorithm is commendable, maybe next time they could leave us a comment elaborating on the problematic place they found themself in.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet Linux AustraliaMichael Still: Art with condiments


Mr 15 just made me watch this video, it's pretty awesome…

You’re welcome.


Planet DebianNorbert Preining: Analysing Debian packages with Neo4j – Part 2 UDD and Graph DB Schema

In the first part of this series of articles on analyzing Debian packages with Neo4j we gave a short introduction to Debian and the life time and structure of Debian packages.

The current second part first describes the Ultimate Debian Database UDD and how to map the information presented here from the UDD into a Graph Database by developing the database scheme, that is the set of nodes and relations, together with their attributes, from the inherent properties of Debian packages.

The next part will describe how to get the data from the UDD into Neo4j, give some sample queries, and discuss further work.

The Ultimate Debian Database UDD

The Ultimate Debian Database UDD gathers a lot of data about various aspects of Debian in the same SQL database. It allows users to easily access and combine all these data.

Data currently being imported include: Packages and Sources files, from Debian and Ubuntu, Bugs from the Debian BTS, Popularity contest, History of uploads, History of migrations to testing, Lintian, Orphaned packages, Carnivore, Debtags, Ubuntu bugs (from Launchpad), Packages in NEW queue, DDTP translations.
Debian Wiki

Collecting all this information, and obviously having grown over time, the database exhibits a highly de-normalized structure with ample duplication of the same information. As a consequence, the SQL code fetching data from the UDD and presenting it in a coherent interface tends to be highly convoluted.

This led us to the project of putting (parts of) the UDD into a graph database, removing all the duplication along the way and representing the connections between the entities in a natural graph way.

Developing the database schema

Recall from the first part that there are source packages and binary packages in Debian, and that the same binary package can be built in different versions from different source packages. Thus we decided to have both source and binary packages as separate entities, that is, nodes of the graph, with the two being connected via a binary relation builds.

Considering dependencies between Debian packages we recall the fact that there are versioned and unversioned dependencies. We thus decide to have again different entities for versioned source and binary packages, and unversioned source and binary packages.

The above considerations leads to the following sets of nodes and relations:


vsp -[:is_instance_of]-> sp
vbp -[:is_instance_of]-> bp
sp -[:builds]-> bp
vbp -[:next]-> vbp
vsp -[:next]-> vsp

where vsp stands for versioned source package, sp for (unversioned) source package, and analogously for binary packages. The versioned variants carry, besides the name attribute, also a version attribute in the node.

The relations are is_instance_of between versioned and unversioned packages, builds between versioned source and versioned binary packages, and next that defines an order on the versions.

An example of a simple graph is the one for the binary package luasseq, which was originally built from the source package luasseq but was then taken over into the TeX Live packages and built from a different source.

Next we want to register suites, that is associating which package has been included in which release of Debian. Thus we add a new node type suite and a new relation contains which connects suites and versioned binary packages vbp:

suite -[:contains]-> vbp

Nodes of type suite contain only one attribute, name. We could add release dates etc., but refrained from it for now. Adding the suites to the above diagram we obtain the following:

Next we add maintainers. The new node type mnt has two attributes: name and email. Here it would be nice to add alternative email addresses as well as alternative spellings of the name, something that is quite common. We add a relation maintains to versioned source and binary packages only since, as we have seen, the maintainership can change over the history of a package:

mnt -[:maintains]-> vbp
mnt -[:maintains]-> vsp

This leads us to the following graph:

This concludes the first (easy) part with basic node types and relations. We now turn to the more complicated part to represent dependencies between packages in the graph.

Representing dependencies

For simple dependencies (versioned or unversioned, but no alternatives) we represent the dependency relation with two attributes reltype and relversion, specifying the relation type (<<, <=, ==, >=, >>) and the version as a string. For unversioned relations we use reltype=none and relversion=1:


vbp -[:depends reltype: TYPE, relversion: VERS]-> bp

Adding all the dependencies to the above graph, we obtain the following graph:

Our last step is dealing with alternative dependencies. Recall from the first blog that a relation between two Debian packages can have alternative targets like in

Depends: musixtex (>= 1:0.98-1) | texlive-music
which means that either musixtex or texlive-music needs to be installed to satisfy this dependency.

We treat this kind of dependency by introducing a new node type altdep and a new relation is_satisfied_by between altdep nodes and versioned or unversioned binary packages (vbp, bp).

The following slice of our graph shows the binary package pmx, which has alternative dependencies as above:

Summary of nodes, relations, and attributes

Let us summarize the node types, relation types, and respective attributes we have deduced from the requirements and data in the Debian packages; a small example of how this schema could be instantiated follows the lists:

Nodes and attributes

  • mnt: name, email
  • bp, sp, suite, altdeps: name
  • vbp, vsp: name, version

Relations and attributes

  • breaks, build_conflicts, build_conflicts_indep, build_depends, build_depends_indep, conflicts, depends, enhances, is_satisfied_by, pre_depends, provides, recommends, replaces, suggests
    Attributes: reltype, relversion
  • builds, contains, is_instance_of, maintains, next: no attributes
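
To make the schema concrete, here is a minimal sketch of how a small slice of such a graph could be created with the official Neo4j Python driver. The server URI, the credentials and the sample package data are assumptions for illustration only; the node labels, relation types and attributes follow the summary above.

from neo4j import GraphDatabase

# Assumed local Neo4j server and made-up credentials.
driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "secret"))

def add_sample(tx):
    # One versioned binary package, its unversioned counterpart, the suite
    # containing it, its maintainer, and one unversioned dependency,
    # following the node and relation types summarized above.
    tx.run("""
        MERGE (bp:bp   {name: $bin})
        MERGE (vbp:vbp {name: $bin, version: $ver})
        MERGE (dep:bp  {name: $dep})
        MERGE (s:suite {name: $suite})
        MERGE (m:mnt   {name: $mname, email: $memail})
        MERGE (vbp)-[:is_instance_of]->(bp)
        MERGE (s)-[:contains]->(vbp)
        MERGE (m)-[:maintains]->(vbp)
        MERGE (vbp)-[:depends {reltype: 'none', relversion: '1'}]->(dep)
        """,
        bin="minetest", ver="0.4.16+repack-4", dep="libleveldb1v5",
        suite="sid", mname="Some Maintainer", memail="someone@example.org")

with driver.session() as session:
    session.write_transaction(add_sample)
driver.close()

MERGE is used instead of CREATE so that re-running such an import does not create duplicate nodes or relations.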

Next is …

Now that we have the graph database schema set up, we need to pull data from the UDD and put them into the Graph database. This will be discussed in the next entry of this series.

Planet DebianHideki Yamane: Improve debootstrap time a bit, without local mirror

I've introduced two features to improve debootstrap time: auto proxy detection via squid-deb-proxy-client (by Michael Vogt) and cache directory support. They reduce the time to create a chroot environment without needing a huge local mirror.

Let's create a chroot environment without any of the new features.
$ time sudo debootstrap sid sid-chroot
I: Target architecture can be executed     
I: Retrieving InRelease                     
I: Checking Release signature           
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages             
I: Validating Packages             
(snip)
I: Base system installed successfully. 
real    8m27.624s
user    1m52.732s
sys     0m10.786s

Then, use the --cache-dir option.
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid-chroot
E: /home/henrich/tmp/cache: No such directory

Yes, we should create the cache directory first.
$ mkdir ~/tmp/cache
Let's go.
$ time sudo debootstrap --cache-dir=/home/henrich/tmp/cache sid sid-chroot
I: Target architecture can be executed
I: Retrieving InRelease             
I: Checking Release signature         
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages                 
I: Validating Packages                   
(snip)
I: Base system installed successfully. 
real    2m10.180s
user    1m47.428s
sys     0m8.196s
It cuts about 6 minutes! (Of course, it depends on the mirror you choose.) Then, let's try the proxy feature.
$ sudo apt install squid-deb-proxy-client
(snip)
$ time sudo debootstrap sid sid-chroot
Using auto-detected proxy: http://192.168.10.13:8000/
I: Target architecture can be executed     
I: Retrieving InRelease                     
I: Checking Release signature           
I: Valid Release signature (key id 126C0D24BD8A2942CC7DF8AC7638D0442B90D010)
I: Retrieving Packages             
I: Validating Packages             
(snip)
I: Configuring systemd...
I: Base system installed successfully.
Can you see the words "Using auto-detected proxy: http://192.168.10.13:8000/"? It detects the package proxy and uses it. And the result is
real    2m15.995s
user    1m49.737s
sys     0m8.778s

Conclusion: If you already run squid-deb-proxy on some machine in your local network, then install squid-deb-proxy-client and debootstrap will automatically use it; or you can use the --cache-dir option to speed up creating chroot environments via debootstrap. Especially if you don't have good network connectivity, both features will help without much effort.


Oh, and one more thing... Thomas Lange has proposed patches to improve debootstrap which make it much faster. If you're interested, please look into them.

Planet DebianSteve Kemp: A filesystem for known_hosts

The other day I had an idea that wouldn't go away, a filesystem that exported the contents of ~/.ssh/known_hosts.

I can't think of a single useful use for it, beyond simple shell-scripting, and yet I couldn't resist.

 $ go get -u github.com/skx/knownfs
 $ go install github.com/skx/knownfs

Now make it work:

 $ mkdir ~/knownfs
 $ knownfs ~/knownfs

Beneath our mount-point we can expect one directory for each known host. So we'll see entries:

 ~/knownfs $ ls | grep \.vpn
 builder.vpn
 deagol.vpn
 master.vpn
 www.vpn

 ~/knownfs $ ls | grep steve
 blog.steve.fi
 builder.steve.org.uk
 git.steve.org.uk
 mail.steve.org.uk
 master.steve.org.uk
 scatha.steve.fi
 www.steve.fi
 www.steve.org.uk

The host-specific entries will each contain a single file, fingerprint, with the fingerprint of the remote host:

 ~/knownfs $ cd www.steve.fi
 ~/knownfs/www.steve.fi $ ls
 fingerprint
 frodo ~/knownfs/www.steve.fi $ cat fingerprint
 98:85:30:f9:f4:39:09:f7:06:e6:73:24:88:4a:2c:01

I've used it in a few shell-loops to run commands against hosts matching a pattern, but beyond that I'm struggling to think of a use for it.
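
For what it's worth, the same kind of loop is easy to write in a few lines of Python as well. A minimal sketch, assuming knownfs is mounted at ~/knownfs as above; the script name and the pattern argument are made up for illustration:

import os
import sys

# Walk a knownfs mount and print host/fingerprint pairs for hosts whose
# name contains the given substring (all hosts if no argument is given).
mount = os.path.expanduser("~/knownfs")
pattern = sys.argv[1] if len(sys.argv) > 1 else ""

for host in sorted(os.listdir(mount)):
    if pattern not in host:
        continue
    try:
        with open(os.path.join(mount, host, "fingerprint")) as f:
            print(host, f.read().strip())
    except OSError:
        # skip entries without a readable fingerprint file
        pass

Invoked as "python3 knownfs-list.py steve" it mimics the "ls | grep steve" example above.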

If you like the idea I guess have a play:

It was perhaps more useful and productive than my other recent work - which involves porting an existing network-testing program from Ruby to golang, and in the process making it much more uniform and self-consistent.

The resulting network tester is pretty good, and can now notify via MQ to provide better decoupling too. The downside is of course that nobody changes network-testing solutions on a whim, and so these things are basically always in-house only.

Planet DebianShirish Agarwal: getting libleveldb1v5 fixed

Please treat this as a child's fantasy until the information is approved or corrected by a DD/DM, who obviously has much more info and experience in dealing with the matters below.

It had been quite a few years since I last played Minetest, a voxel-based game similar to and yet different from its more famous brethren, Minecraft.

I wanted to install and play it but found that one of the libraries it needs is libleveldb1v5, a fast key-value storage library which, according to #877773, has been marked with a grave bug report because of missing info on the soname bump.

I saw that somebody had also reported it upstream, and the bug has been fixed there, with some more optimizations done to the library as well. The description of the library reminded me so much of SQLite, which has almost the same feature-set (used by Mozilla for bookmarks and password management, if I'm not mistaken).

I was wondering why, if this had been fixed quite some time back, the maintainer didn't put the fixed version in sid and then testing. I realized it might be because the new version has a soname bump, which means it would need to be transitioned, probably with proper Breaks and everything.

A quick check via

$ apt-rdepends -r libleveldb1v5 | wc -l
Reading package lists... Done
Building dependency tree
Reading state information... Done
195

revealed that almost 190 packages will be directly or indirectly affected by the transition. I then tried to find where the VCS is located by doing –

$ apt-cache showsrc libleveldb1v5 | grep Vcs-Git
Vcs-Git: git://anonscm.debian.org/collab-maint/leveldb.git
Vcs-Git: git://anonscm.debian.org/collab-maint/leveldb.git

Then I cloned the repo to my system to see if the maintainer had done any recent changes and saw :-


b$ git log --graph --pretty=format:'%Cred%h%Creset -%C(yellow)%d%Creset %s %Cgreen(%cr)' --abbrev-commit | head -15
* 7465515 - (HEAD -> master, tag: debian/1.20-2, origin/master, origin/HEAD) Packaging cleanup (4 months ago)
* f85b876 - Remove libleveldb-dbg package and use the auto-generated one (4 months ago)
* acac71f - Update Standards-Version to 4.1.2 (4 months ago)
* e281654 - Update debhelper level to 11 (4 months ago)
* df015eb - Don't run self-test parallel (4 months ago)
* ba81cc9 - (tag: debian/1.20-1) Update debhelper level to 10 (7 months ago)
* cb84f26 - Update Standards-Version to 4.1.0 (7 months ago)
* be0ef22 - Convert markdown documentation to HTML (7 months ago)
* ab8faa7 - Start 1.20-1 changelog (7 months ago)
* 03641f7 - Updated version 1.20 from 'upstream/1.20' (7 months ago)
|\
| * 59c75ca - (tag: upstream/1.20, origin/upstream) New upstream version 1.20 (7 months ago)
* | a21bcbc - (tag: debian/1.19-2) Add the missing ReadMemoryBarrier and WriteMemoryBarrier functions for mips* (1 year, 5 months ago)
* | 70c6e38 - Add myself to debian/copyright (1 year, 5 months ago)
* | 1ba7231 - Update source URL (1 year, 5 months ago)

There is probably a much simpler way to get the same output but for now that would have to suffice.

Anyways, there are many variations of the command I used, using git log --pretty and git log --decorate etc. Maybe one of those could give the same output; it would need the time diff as shared above.

Trivia – I am usually more interested in commit messages and in when the commits were done, and I know enough git to find out the author of a particular commit even if only the abbreviated commit hash is there, so I can thank her(im) for the work done on that package or for a particular commit which addresses some annoying bug that I had. /Trivia

Although the best I have hankered for is to have some sort of visualization tool about projects that I like

something like an Andrews plot or a C-chart for visualization purposes, but to date I haven't found anything which would render it into those visuals straightaway. Maybe a feature for a future git version, who knows 🙂

I know that in itself is a Pandora's box, as some people might just like to have a visualization of only when releases of an upstream project were made, while there will be others, like me, who would enjoy and be fascinated to see the amount of time between each commit on a project. I have seen quite a few projects rise, wane and rise again, and having such visualizations may possibly help in getting people more involved with a project/library/whatever.

Andrews plot example - Wikipedia - CC-0

All the commits for the said library are done by the maintainer Laszlo Boszormenyi, so it seems that the maintainer is interested in maintaining it. At least the last 10-12 commit messages, going back almost 1.5 years, show that he is/was active until at least 4 months ago, which brings me to another one of my pet issues.

There aren’t any ways to figure out how recently a DD or DM committed on Debian somewhere. People usually try the MIA team (Missing in Action) and many a times you feel you are taking the team’s time especially when it turns out to be a false positive. If users had more tools than probably MIA’s workload would be much lesser than before.

The only other way is to look at all the packages a particular DD/DM is maintaining, and if you are lucky then s(he) has made a release of a package or something that you can look into and know for certain that the person is active.

The other, longer way is to download all the VCS repositories of a DD/DM, cycle through all of them using something like the git log command above (or the sketch below) to see when the last commit was done on each of her(is) repos, and then come to a conclusion one way or the other. If s(he) is really MIA then tell the MIA team so they can try to connect with the person concerned, and if s(he) doesn't respond in a reasonable time-frame then orphan the packages.
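
Under the assumption that all those repositories have already been cloned into one directory, a rough sketch of that cycle in Python could look like this (the directory name is made up for illustration):

import os
import subprocess

REPOS_DIR = os.path.expanduser("~/dd-repos")

for name in sorted(os.listdir(REPOS_DIR)):
    repo = os.path.join(REPOS_DIR, name)
    if not os.path.isdir(os.path.join(repo, ".git")):
        continue
    # %ci prints the committer date of the newest commit
    date = subprocess.check_output(
        ["git", "-C", repo, "log", "-1", "--format=%ci"],
        universal_newlines=True).strip()
    print(date, name)

Sorting that output would then directly show the most recently touched repository.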

If a DD/DM has not committed for more than a year or two for any of her(is) projects I guess it’s reasonable to expect that the person concerned is MIA.

Anyways, it would be nice if the present maintainer is able to get the new release out so the other 190 packages that depend on it could also work. When I was churning this over in my head, I thought: why couldn't the DDs have some sort of CI infrastructure which may automate things a bit and make life somewhat easier?

I have seen the Debian travis ci instance but know that’s limited to upstream projects hosted on github.

For those who might not know, Travis CI is one of many such solutions. They are continuous integration software, and there are quite a few of them.

What they do is try to build the project/application/library etc. after each and every commit, taking into account any parameters programmed into it. There may be times when upstream makes an incompatible change or some mistake while committing; because the CI builds the application or whatever automatically, a failed build forces the developer to see where they messed up. At the end you have a slightly better application, as at least the obvious bugs are ironed out.

I do remember reading about gitlab-ci somewhere, maybe in the thread where DDs were discussing various alternatives to alioth, or somewhere else. I dunno if it would be just a matter of turning it on or if that part is still not open-sourced yet, no idea.

If that happens, it would probably save the DDs/DMs some computational time, apart from letting them know whether things are going well or not.

I know GitLab had shared (paraphrasing here) that they may make some of the things more open-source if Debian were to adopt the product. Now that Debian has, I and I guess most of the community would be hoping so, as a lot of hard work and tears have gone into getting things ported from alioth to salsa, especially in the last month or so.

I do know that we have the autobuilder network but from what I understand, it’s for a slightly different use-case. This is more to see if the package builds on all the 10-11 official architectures and maybe some of the unofficial architectures.

While I was reading it, I was unable to find out whether, just like people all around the world run mirrors (full or partial depending on the resources they have and the kind of pickup they are seeing), people can be part of the autobuilder network to give additional computational power to it. The name does say 'autobuilder network', so maybe that possibility exists, maybe it does not.

I did consult the documentation on the topic and it seems it’s a bit of work, see the workflow shared in wiki for transitions.

After reading that, you really wonder at the patience of the people who slog through all this.

I did try to connect with him on the bug mentioned but he hasn’t got back, perhaps he’s busy IRL.

https://bugs.debian.org/cgi-bin/pkgreport.cgi?package=release.debian.org#_0_17_4

Till later.

Note – I have not talked about */debian/control, */debian/changelog.Debian, */debian/changelog or any of the other files because once those are made, they probably just need to be fiddled with a bit. The control file will probably list newer versions of dependencies and may or may not have newer build dependencies. changelog.Debian would document the changes the DD/DM had to make in order for the binary to be built successfully and put into the archive, and changelog will just document the point up to which upstream's work was taken.

,

Planet DebianVincent Bernat: Self-hosted videos with HLS

Note

This article was first published on Exoscale blog with some minor modifications.

Hosting videos on YouTube is convenient for several reasons: pretty good player, free bandwidth, mobile-friendly, network effect and, at your discretion, no ads.1 On the other hand, this is one of the less privacy-friendly solutions. Most other providers share the same characteristics—except the ability to disable ads for free.

With the <video> tag, self-hosting a video is simple:2

<video controls>
  <source src="../videos/big_buck_bunny.webm" type="video/webm">
  <source src="../videos/big_buck_bunny.mp4" type="video/mp4">
</video>

However, while it is possible to provide different videos depending on the screen width, adapting the video to the available bandwidth is trickier. There are two solutions:

  • HLS (HTTP Live Streaming), and
  • MPEG-DASH.

They are both adaptive bitrate streaming protocols: the video is sliced in small segments and made available at a variety of different bitrates. Depending on current network conditions, the player automatically selects the appropriate bitrate to download the next segment.

HLS was initially implemented by Apple but is now also supported natively by Microsoft Edge and Chrome on Android. hls.js is a JavaScript library bringing HLS support to other browsers. MPEG-DASH is technically superior (codec-agnostic) but only works through a JavaScript library, like dash.js. In both cases, support of the Media Source Extensions is needed when native support is absent. Safari on iOS doesn’t have this feature and cannot use MPEG-DASH. Consequently, the most compatible solution is currently HLS.

Encoding🔗

To serve HLS videos, you need three kinds of files:

  • the media segments (encoded with different bitrates/resolutions),
  • a media playlist for each variant, listing the media segments, and
  • a master playlist, listing the media playlists.

Media segments can come in two formats:

  • MPEG-2 Transport Streams (TS), or
  • Fragmented MP4.

Fragmented MP4 media segments are supported since iOS 10. They are a bit more efficient and can be reused to serve the same content as MPEG-DASH (only the playlists are different). Also, they can be served from the same file with range requests. However, if you want to target older versions of iOS, you need to stick with MPEG-2 TS.3

FFmpeg is able to convert a video to media segments and generate the associated media playlists. Peer5’s documentation explains the suitable commands. I have put together a handy (Python 3.6) script, video2hls, stitching together all the steps. After executing it on your target video, you get a directory containing:

  • media segments for each resolution (1080p_1_001.ts, 720p_2_001.ts, …)
  • media playlists for each resolution (1080p_1.m3u8, 720p_2.m3u8, …)
  • master playlist (index.m3u8)
  • progressive (streamable) MP4 version of your video (progressive.mp4)
  • poster (poster.jpg)

The script accepts a lot of options for customization. Use the --help flag to discover them. Run it with --debug to get the ffmpeg commands executed with an explanation for each flag. For example, the poster is built with this command:

ffmpeg \
  `# seek to the given position (5%)` \
   -ss 4 \
  `# load input file` \
   -i ../2018-self-hosted-videos.mp4 \
  `# take only one frame` \
   -frames:v 1 \
  `# filter to select an I-frame and scale` \
   -vf 'select=eq(pict_type\,I),scale=1280:720' \
  `# request a JPEG quality ~ 10` \
   -qscale:v 28 \
  `# output file` \
   poster.jpg

Serving🔗

So, we got a bunch of static files we can upload anywhere. Yet two details are important:

  • When serving from another domain, CORS needs to be configured to allow GET requests. Adding Access-Control-Allow-Origin: * to response headers is enough.4
  • Some clients may be picky about the MIME types. Ensure files are served with the ones in the table below.
Kind Extension MIME type
Playlists .m3u8 application/vnd.apple.mpegurl
MPEG2-TS segments .ts video/mp2t
fMP4 segments .mp4 video/mp4
Progressive MP4 .mp4 video/mp4
Poster .jpg image/jpeg

Let’s host our files on Exoscale’s Object Storage which is compatible with S3 and located in Switzerland. As an example, the Caminandes 3: Llamigos video is about 213 MiB (five sizes for HLS and one progressive MP4). It would cost us less than 0.01 € per month for storage and 1.42 € for bandwidth if 1000 people watch the 1080p version from beginning to end—unlikely.5

We use s3cmd to upload files. First, you need to recover your API credentials from the portal and put them in ~/.s3cfg:

[default]
host_base = sos-ch-dk-2.exo.io
host_bucket = %(bucket)s.sos-ch-dk-2.exo.io
access_key = EXO.....
secret_key = ....
use_https = True
bucket_location = ch-dk-2

The second step is to create a bucket:

$ s3cmd mb s3://hls-videos
Bucket 's3://hls-videos/' created

You need to configure the CORS policy for this bucket. First, define the policy in a cors.xml file (you may want to restrict the allowed origin):

<CORSConfiguration>
 <CORSRule>
   <AllowedOrigin>*</AllowedOrigin>
   <AllowedMethod>GET</AllowedMethod>
 </CORSRule>
</CORSConfiguration>

Then, apply it to the bucket:

$ s3cmd setcors cors.xml s3://hls-videos

The last step is to copy the static files. Playlists are served compressed to save a bit of bandwidth. For each video, inside the directory containing all the generated files, use the following command:

while read extension mime gz; do
  [ -z "$gz" ] || {
    # gzip compression (if not already done)
    for f in *.${extension}; do
      ! gunzip -t $f 2> /dev/null || continue
      gzip $f
      mv $f.gz $f
    done
  }
  s3cmd --no-preserve -F -P \
        ${gz:+--add-header=Content-Encoding:gzip} \
        --mime-type=${mime} \
        --encoding=UTF-8 \
        --exclude=* --include=*.${extension} \
        --delete-removed \
    sync . s3://hls-videos/video1/
done <<EOF
m3u8  application/vnd.apple.mpegurl true
jpg   image/jpeg
mp4   video/mp4
ts    video/mp2t
EOF

The files are now available at https://hls-videos.sos-ch-dk-2.exo.io/video1/.

HTML🔗

We can insert our video in a document with the following markup:

<video poster="https://hls-videos.sos-ch-dk-2.exo.io/video1/poster.jpg"
       controls preload="none">
  <source src="https://hls-videos.sos-ch-dk-2.exo.io/video1/index.m3u8"
          type="application/vnd.apple.mpegurl">
  <source src="https://hls-videos.sos-ch-dk-2.exo.io/video1/progressive.mp4"
          type='video/mp4; codecs="avc1.4d401f, mp4a.40.2"'>
</video>

Browsers with native support use the HLS version while others would fall back to the progressive MP4 version. However, with the help of hls.js, we can ensure most browsers benefit from the HLS version too:

<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
<script>
    if(Hls.isSupported()) {
        var selector = "video source[type='application/vnd.apple.mpegurl']",
            videoSources = document.querySelectorAll(selector);
        videoSources.forEach(function(videoSource) {
            var m3u8 = videoSource.src,
                once = false;

            // Clone the video to remove any source
            var oldVideo = videoSource.parentNode,
                newVideo = oldVideo.cloneNode(false);

            // Replace video tag with our clone.
            oldVideo.parentNode.replaceChild(newVideo, oldVideo);

            // On play, initialize hls.js, once.
            newVideo.addEventListener('play',function() {
                if (once) return;
                once = true;

                var hls = new Hls({ capLevelToPlayerSize: true });
                hls.loadSource(m3u8);
                hls.attachMedia(newVideo);
                hls.on(Hls.Events.MANIFEST_PARSED, function() {
                    newVideo.play();
                });
            }, false);
        });
    }
</script>

Here is the result, featuring Caminandes 3: Llamigos, a video created by Pablo Vasquez, produced by the Blender Foundation and released under the Creative Commons Attribution 3.0 license:

Most JavaScript attributes, methods and events work just like with a plain <video> element. For example, you can seek to an arbitrary position, like 1:00 or 2:00—but you would need to enable JavaScript to test.

The player is different from one browser to another but provides the basic needs. You can upgrade to a more advanced player, like video.js or MediaElements.js. They also handle HLS videos through hls.js.

Hosting your videos on YouTube is not unavoidable: serving them yourself while offering quality delivery is technically affordable. If bandwidth requirements are modest and the network effect not important, self-hosting makes it possible to regain control of the published content and not to turn over readers to Google. In the same spirit, PeerTube offers a video sharing platform. Decentralized and federated, it relies on BitTorrent to reduce bandwidth requirements.

Addendum🔗

Preloading🔗

In the above example, preload="none" was used for two reasons:

  • Most readers won’t play the video as it is an addon to the main content. Therefore, bandwidth is not wasted by downloading a few segments of video, at the expense of slightly increased latency on play.
  • We do not want non-native HLS clients to start downloading the non-HLS version while hls.js is loading and taking over the video. This could also be done by declaring the progressive MP4 fallback from JavaScript, but this would make the video unplayable for users without JavaScript. If preloading is important, you can remove the preload attribute from JavaScript—and not wait for the play event to initialize hls.js.

CSP🔗

Setting up CSP correctly can be quite a pain. For browsers with native HLS support, you need the following policy, in addition to your existing policy:

  • image-src https://hls-videos.sos-ch-dk-2.exo.io for the posters,
  • media-src https://hls-videos.sos-ch-dk-2.exo.io for the playlists and media segments.

With hls.js, things are more complex. Ideally, the following policy should also be applied:

  • worker-src blob: for the transmuxing web worker,
  • media-src blob: for the transmuxed segments,
  • connect-src https://hls-videos.sos-ch-dk-2.exo.io to fetch playlists and media segments from JavaScript.

However, worker-src is quite recent. The expected fallbacks are child-src (deprecated), script-src (but not everywhere) and then default-src. Therefore, for broader compatibility, you also need to append blob: to default-src as well as to script-src and child-src if you already have them. Here is an example policy—assuming the original policy was just default-src 'self' and media, XHR and workers were not needed:

HTTP/1.0 200 OK
Content-Security-Policy: 
  default-src 'self' blob:;
  image-src 'self' https://hls-videos.sos-ch-dk-2.exo.io;
  media-src blob: https://hls-videos.sos-ch-dk-2.exo.io;
  connect-src https://hls-videos.sos-ch-dk-2.exo.io;
  worker-src blob:;

  1. YouTube gives you the choice to not display ads on your videos. In advanced settings, you can unselect “Allow advertisements to be displayed alongside my videos.” Alternatively, you can also monetize your videos. ↩︎

  2. Nowadays, everything supports MP4/H.264. It usually also brings hardware acceleration, which improves battery life on mobile devices. WebM/VP9 provides a better quality at the same bitrate. ↩︎

  3. You could generate both formats and use them as variants in the master playlist. However, a limitation in hls.js prevents this option. ↩︎

  4. Use https://example.org instead of the wildcard character to restrict access to your own domain. ↩︎

  5. There is no need to host those files behind a (costly) CDN. Latency doesn’t matter much as long as you can sustain the appropriate bandwidth. ↩︎

Planet DebianChris Lamb: Re-elected as Debian Project Leader

I have been extremely proud to have served as the Debian Project Leader since my election in early 2017. During this time I've learned a great deal about the inner workings of the Project as well as about myself. I have grown as a person thanks to all manner of new interactions and fresh experiences.

I believe it is a privilege simply to be a Debian Developer, let alone to be selected as their representative. It was therefore an even greater honour to learn that I have been re-elected by the community for another year. I profoundly and wholeheartedly thank everyone for placing their trust in me for another term.



Being the "DPL" is a hard job. It is difficult to even communicate exactly how, and any statistics somehow fail to capture it. However, I now understand the look in previous Leaders' eyes when they congratulated me on my appointment, and future candidates should not nominate themselves lightly.

Indeed, I was unsure whether I would stand for re-appointment and I might not have done had it not been for some touching and encouraging words from some close confidants. They underlined to me that a year is not a long time, further counselling that I should consider myself just getting started and only now prepared to start to take on the bigger picture.



Debian itself will always face challenges but I sincerely believe that the Project remains as healthy as ever. We are uniquely cherished and remain remarkably poised to improve the free software ecosystem as a whole. Moreover, our stellar reputation for technical excellence, stability and software freedom remains highly respected and without peer. It is truly an achievement to be proud of.



I thank everyone who had the original confidence, belief and faith in me, but I offer my further sincere and humble thanks to all those who have felt they could extend this to a second term, especially with such a high turnout. I am truly excited and looking forward to the year ahead.



Sociological ImagesThe Sociology Behind the X-Files

Originally Posted at TSP Clippings

Photo Credit: Val Astraverkhau, Flickr CC

Throughout history, human beings have been enthralled by the idea of the paranormal. While we might think that UFOs and ghosts belong to a distant and obscure dimension, social circumstances help to shape how we envision the supernatural. In a recent interview with New York Magazine, sociologist Joseph O. Baker describes the social aspects of Americans’ beliefs about UFOs.

Baker argues that pop culture shapes our understandings of aliens. In the 1950s and 1960s, pop culture imagined aliens in humanoid form, typically as very attractive Swedish blonde types with shining eyes. By the 1970s and 1980s, the abductor narrative took hold and extraterrestrials were represented as the now iconic image of the little gray abductor — small, grey-skinned life-forms with huge hairless heads and large black eyes. Baker posits that one of the main causes of UFOs’ heightened popularity during this time was the extreme distrust of the government following incidents such as Watergate. Baker elaborates,

“I think there is something to be said for a lack of faith in government and institutions in that era, and that coincided with UFOs’ rise in popularity. The lack of trust in the government, and the idea that the government knows something about this — those two things went together, and you can see it in the public reaction post-Vietnam, to Watergate, all that stuff.”

While the individual characteristics of “believers” are hard to determine, survey evidence suggests that men and people from low-income backgrounds are more likely to believe in the existence of alien life. Baker says that believing is also dependent upon religious participation rather than education or income. In his words,

“One of the other strongest predictors is not participating as strongly in forms of organized religion. In some sense, there’s a bit of a clue there about what’s going on with belief — it’s providing an alternative belief system. If you look at religious-service attendance, there will be a strong negative effect there for belief in UFOs.”

Baker’s research on the paranormal indicates that social circumstances influence belief in extraterrestrial beings. In short, these social factors help to shape whether you are a Mulder or a Scully. Believing in UFOs goes beyond abductions and encounters of the Third Kind. In the absence of trust in government and religious institutions, UFOs represent an appealing and mysterious alternative belief system.

Isabel Arriagada (@arriagadaisabe) is a Ph.D. student in the sociology department at the University of Minnesota. Her research focuses on the development of prison policies in South America and the U.S. and how technology shapes new experiences of imprisonment.

(View original at https://thesocietypages.org/socimages)

TEDIn Case You Missed It: The dawn of “The Age of Amazement” at TED2018

More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4.

Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED)

Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at AudaciousProject.org.

Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

“It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED)

Can we rediscover the humanity in our tech?  In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age; and “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, who they were transmitting to, what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. “Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?”

An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.”

Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta has led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, University of Hawaii’s Karen Meech saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things.  

Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED)

Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words. Architects Renzo Piano, Vishaan Chakbrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak  — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints.  

Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED)

Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits.

Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod, found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotcharkorn Voraakhom realized that a park could be designed to allow her flood-prone city of Bangkok mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

Planet DebianSven Hoexter: logstash 5.6.9 released with logstash-filter-grok 4.0.3

In case you're using logstash 5.6.x from elastic, version 5.6.9 is released with logstash-filter-grok 4.0.3. This one fixes a bad memory leak that was a cause for frequent logstash crashes since logstash 5.5.6. Reference: https://github.com/logstash-plugins/logstash-filter-grok/issues/135
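If you want to double-check which grok filter version your logstash install actually bundles, here is a small sketch of my own (it assumes the stock elastic package layout with a Gemfile.lock under /usr/share/logstash; adjust the path for other setups):

import re
from pathlib import Path

# Assumption: default install prefix of the elastic logstash packages.
LOCKFILE = Path("/usr/share/logstash/Gemfile.lock")

def grok_filter_version(lockfile=LOCKFILE):
    """Return the bundled logstash-filter-grok version, or None if absent."""
    for line in lockfile.read_text().splitlines():
        # Gemfile.lock entries look like: "    logstash-filter-grok (4.0.3)"
        match = re.match(r"\s*logstash-filter-grok \(([\d.]+)\)", line)
        if match:
            return match.group(1)
    return None

version = grok_filter_version()
print(version or "logstash-filter-grok not found")
if version and tuple(map(int, version.split("."))) < (4, 0, 3):
    print("older than 4.0.3 -- still affected by the grok memory leak")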

I hope this is now again a decent logstash 5.x release. I've heard some rumours that the 6.x versions are also a bit plagued by memory leaks. :-/

Krebs on SecurityA Sobering Look at Fake Online Reviews

In 2016, KrebsOnSecurity exposed a network of phony Web sites and fake online reviews that funneled those seeking help for drug and alcohol addiction toward rehab centers that were secretly affiliated with the Church of Scientology. Not long after the story ran, that network of bogus reviews disappeared from the Web. Over the past few months, however, the same prolific purveyor of these phantom sites and reviews appears to be back at it again, enlisting the help of Internet users and paying people $25-$35 for each fake listing.

Sometime in March 2018, ads began appearing on Craigslist promoting part-time “social media assistant” jobs, in which interested applicants are directed to sign up for positions at seorehabs[dot]com. This site promotes itself as “leaders in addiction recovery consulting,” explaining that assistants can earn a minimum of $25 just for creating individual Google for Business listings tied to a few dozen generic-sounding addiction recovery center names, such as “Integra Addiction Center,” and “First Exit Recovery.”

The listing on Craigslist.com advertising jobs for creating fake online businesses tied to addiction rehabilitation centers.

Applicants who sign up are given detailed instructions on how to step through Google’s anti-abuse process for creating listings, which include receiving a postcard via snail mail from Google that contains a PIN which needs to be entered at Google’s site before a listing can be created.

Assistants are cautioned not to create more than two listings per street address, but otherwise to use any U.S.-based street address and to leave blank the phone number and Web site for the new business listing.

A screen shot from Seorehabs’ instructions for those hired to create rehab center listings.

In my story Scientology Seeks Captive Converts Via Google Maps, Drug Rehab Centers, I showed how a labyrinthine network of fake online reviews that steered Internet searches toward rehab centers funded by Scientology adherents was set up by TopSeek Inc., which bills itself as a collection of “local marketing experts.” According to LinkedIn, TopSeek is owned by John Harvey, an individual (or alias) who lists his address variously as Sacramento, Calif. and Hawaii.

Although the current Web site registration records from registrar giant Godaddy obscure the information for the current owner of seorehabs[dot]com, a historic WHOIS search via DomainTools shows the site was also registered by John Harvey and TopSeek in 2015. Mr. Harvey did not respond to requests for comment. [Full disclosure: DomainTools previously was an advertiser on KrebsOnSecurity].

TopSeek’s Web site says it works with several clients, but most especially Narconon International — an organization that promotes the rather unorthodox theories of Scientology founder L. Ron Hubbard regarding substance abuse treatment and addiction.

As described in Narconon’s Wikipedia entry, Narconon facilities are known not only for attempting to win over new converts to Scientology, but also for treating all substance abuse addictions with a rather bizarre cocktail consisting mainly of vitamins and long hours in extremely hot saunas. Their Wiki entry documents multiple cases of accidental deaths at Narconon facilities, where some addicts reportedly died from overdoses of vitamins or neglect.

A LUCRATIVE RACKET

Bryan Seely, a security expert who has written extensively about the use of fake search listings to conduct online bait-and-switch scams, said the purpose of sites like those that Seorehabs pays people to create is to funnel calls to a handful of switchboards that then sell the leads to rehab centers that have agreed to pay for them. Many rehab facilities will pay hundreds of dollars for leads that may ultimately turn into a new patient. After all, Seely said, some facilities can then turn around and bill insurance providers for thousands of dollars per patient.

Perhaps best known for a stunt in which he used fake Google Maps listings to intercept calls destined for the FBI and U.S. Secret Service, Seely has learned a thing or two about this industry: Until 2011, he worked for an SEO firm that helped to develop and spread some of the same fake online reviews that he is now helping to clean up.

“Mr. Harvey and TopSeek are crowdsourcing the data input for these fake rehab centers,” Seely said. “The phone numbers all go to just a few dedicated call centers, and it’s not hard to see why. The money is good in this game. He sells a call for $50-$100 at a minimum, and the call center then tries to sell that lead to a treatment facility that has agreed to buy leads. Each lead can be worth $5,000 to $10,000 for a patient who has good health insurance and signs up.”

This graph illustrates what happens when someone calls one of these Seorehabs listings. Source: Bryan Seely.

Many of the listings created by Seorehab assistants are tied to fake Google Maps entries that include phony reviews for bogus treatment centers. In the event those listings get suspended by Google, Seorehab offers detailed instructions on how assistants can delete and re-submit listings.

Assistants also can earn extra money writing fake, glowing reviews of the treatment centers:

Below are some of the plainly bogus reviews and listings created in the last month that pimp the various treatment center names and Web sites provided by Seorehabs. It is not difficult to find dozens of other examples of people who claim to have been at multiple Seorehab-promoted centers scattered across the country. For example, “Gloria Gonzalez” supposedly has been treated at no fewer than seven Seorehab-marketed detox locations in five states, penning each review just in the last month:

A reviewer using the name “Tedi Spicer” also promoted at least seven separate rehab centers across the United States in the past month. Getting treated at so many far-flung facilities in just the few months that the domains for these supposed rehab centers have been online would be an impressive feat:

Bring up any of the Web sites for these supposed rehab listings and you’ll notice they all include the same boilerplate text and graphic design. Aside from combing listings created by the reviewers paid to promote the sites, we can find other Seorehab listings just by searching the Web for chunks of text on the sites. Doing so reveals a long list (this is likely far from comprehensive) of domain names registered in the past few months that were all created with hidden registration details and registered via Godaddy.

Seely said he spent a few hours this week calling dozens of phone numbers tied to these rehab centers promoted by TopSeek, and created a spreadsheet documenting his work and results here (Google Sheets).

Seely said while he would never advocate such activity, TopSeek’s fake listings could end up costing Mr. Harvey plenty of money if someone figured out a way to either mass-report the listings as fraudulent or automate calls to the handful of hotlines tied to the listings.

“It would kill his business until he changes all the phone numbers tied to these fake listings, but if he had to do that he’d have to pay people to rebuild all the directories that link to these sites,” he said.

WHAT YOU CAN DO ABOUT FAKE ONLINE REVIEWS

Before doing business with a company you found online, don't just pick the company that comes up at the top of search results on Google or any other search engine. Unfortunately, that generally guarantees little more than that the company is good at marketing.

Take the time to research the companies you wish to hire before booking them for jobs or services — especially when it comes to big, expensive, and potentially risky services like drug rehab or moving companies. By the way, if you’re looking for a legitimate rehab facility, you could do worse than to start at rehabs.com, a legitimate rehab search engine.

It’s a good idea to get in the habit of verifying that the organization’s physical address, phone number and Web address shown in the search result match that of the landing page. If the phone numbers are different, use the contact number listed on the linked site.

Take the time to learn about the organization’s reputation online and in social media; if it has none (other than a Google Maps listing with all glowing, 5-star reviews), it’s probably fake. Search the Web for any public records tied to the business’ listed physical address, including articles of incorporation from the local secretary of state office online.

A search of the company's domain name registration records can give you an idea of how long its Web site has been in business, as well as additional details about the organization (although the ability to do this may soon be a thing of the past).

Seely said one surefire way to avoid these marketing shell games is to ask a simple question of the person who answers the phone in the online listing.

“Ask anyone on the phone what company they’re with,” Seely said. “Have them tell you, take their information and then call them back. If they aren’t forthcoming about who they are, they’re most likely a scam.”

In 2016, Seely published a book on Amazon about the thriving and insanely lucrative underground business of fake online reviews. He’s agreed to let KrebsOnSecurity republish the entire e-book, which is available for free at this link (PDF).

“This is literally the worst book ever written about Google Maps fraud,” Seely said. “It’s also the best. Is it still a niche if I’m the only one here? The more people who read it, the better.”

Planet DebianJonathan Dowland: simple

Every now and then, for one reason or another, I am sat in front of a Linux-powered computer with the graphical user interface disabled, instead using an old-school text-only mode.

shell prompt

There's a strange, peaceful quality about these environments.

When I first started using computers in the 90s, the Text Mode was the inferior, non-multitasking system that you generally avoided unless you were trying to do something specific (like run Doom without any other programs eating up your RAM).

On a modern Linux (or BSD) machine, unless you are specifically trying to do something graphical, the power and utility of the machine is hardly diminished at all in this mode. The surface looks calm: there's nothing much visibly going on, just the steady blink of the command prompt, as if the whole machine is completely dedicated to you, and is waiting poised to do whatever you ask of it next. Yet most of the same background tasks are running as normal, doing whatever they do.

One difference, however, is the distractions. Rather like when you drive out of a city to the countryside and suddenly notice the absence of background noise, background light, etc., working at a text terminal — relative to a regular graphical desktop — can be a very calming experience.

So I decided to take a fresh look at my desktop and see whether there were unwelcome distractions. For some time now I've been using a flat background colour to avoid visual clutter. After some thought I realised that most of the time I didn't need to see what was in GNOME3's taskbar. I found and installed this hide-top-bar extension and now it's tucked away unless I mouse up to the top. Now that it's out of the way by default, I actually put more information into it: the full date in the time display; and (via another extension, TopIcons Plus) the various noisy icons that apps like Dropbox, OpenBox, VLC, etc. provide.

GNOME 3 Desktop

There's still some work to do, notably in my browser (Firefox), but I think this is a good start.

TEDWhat can your phone do in the next mobile economy? A workshop with Samsung

An attendee plays with an interface for exploring the possibilities of the mobile phone at the Samsung Social Space during TED2018: The Age of Amazement, in Vancouver. Photo: Lawrence Sumulong / TED

What do you imagine your phone doing for you in the future? Sure, you can take calls, send texts, use apps and surf the internet. But according to Samsung, the next corner for mobile engagement could turn your cell phone into a superhero (of sorts) in industries like public safety and healthcare. 5G technology will not only improve a company’s ability to deliver faster, higher quality services, but the “greater connectivity paves the way for data-intensive solutions like self-driving vehicles, Hi-Res streaming VR, and rich real-time communications.” Imagine a world where your Facetime or Skype call doesn’t drop mid-conversation, you never have to wait for a video to buffer, and connecting to Wi-Fi becomes the slower option compared to staying on data.

At their afternoon workshop during TED2018, Samsung provided a short list of real-world issues to guide thoughtful discussion among workshop groups on how the mobile economy can be a part of the big solutions. Scenarios included: data security in an evolving retail world; hurricane preparedness; urban traffic management; and overburdened emergency rooms.

These breakout sessions led to fascinating conversations between those with different perspectives, backgrounds and skill sets. Architects and scientists weighed in with writers and business development professionals to dream up a vision of the future where everything works seamlessly and interacts like a well-conducted symphony.

After intense discussion, swapping ideas and possibilities, groups were encouraged to synthesize the conversation and share it with the larger room. They didn't just offer solutions, but posed fascinating questions on how we may unlock answers to the endless possibilities the next mobile economy will bring in the age of amazement.

A view of Samsung’s social space at TED2018, which featured mobile phone activities for exploring the next mobile economy (as well as delicious coffee). Photo: Lawrence Sumulong / TED

CryptogramOblivious DNS

Interesting idea:

...we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest. In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server's public key ({k}PK) in the "Additional Information" record of the DNS query to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.

News article.
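To make the flow a bit more concrete, here is a rough Python sketch of the client-side step only: encrypting the queried name under a fresh session key and wrapping that key for the .odns authoritative server. The library choice (pyca/cryptography), the encodings and all helper names are my assumptions for illustration, not code from the ODNS paper.

import base64
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def build_odns_query(qname, odns_public_key):
    """Return ({qname}k.odns, {k}PK) for one query. odns_public_key is assumed
    to be an RSA public key object for the .odns authoritative server."""
    # 1. Fresh session key k for this query.
    session_key = Fernet.generate_key()

    # 2. {www.foo.com}k : encrypt the requested domain under k.
    encrypted_qname = Fernet(session_key).encrypt(qname.encode())

    # 3. Append .odns so the recursive resolver forwards the query to the
    #    .odns authoritative server without learning the real name.
    #    (A real client would split this into <=63-byte DNS labels.)
    label = base64.b32encode(encrypted_qname).decode().rstrip("=").lower()
    odns_qname = label + ".odns"

    # 4. {k}PK : wrap the session key under the .odns server's public key,
    #    to be carried in the Additional Information record of the query.
    wrapped_key = odns_public_key.encrypt(
        session_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return odns_qname, wrapped_key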

Worse Than FailureThe Proprietary Format

Have you ever secured something with a lock? The intent is that at some point in the future, you'll use the requisite key to regain access to it. Of course, the underlying assumption is that you actually have the key. How do you open a lock once you've lost the key? That's when you need to get creative. Lock picks. Bolt cutters. Blow torch. GAU-8...

In 2004, Ben S. went on a solo bicycle tour, and for reasons of weight, his only computer was a Handspring Visor Deluxe PDA running Palm OS. He had an external, folding keyboard that he would use to type his notes from each day of the trip. To keep these notes organized by day, he stored them in the Datebook (calendar) app as all-day events. The PDA would sync with a desktop computer using a Handspring-branded fork of the Palm Desktop software. The whole Datebook could then be exported as a text file from there. As such, Ben figured his notes were safe. After the trip ended, he bought a Windows PC that he had until 2010, but he never quite got around to exporting the text file. After he switched to using a Mac, he copied the files to the Mac and gave away the PC.

Handspring Treo 90

Ten years later, he decided to go through all of the old notes, but he couldn't open the files!

Uh oh.

The Handspring company had gone out of business, and the software wouldn't run on the Mac. His parents had the Palm-branded version of the software on one of their older Macs, but Handspring used a different data file format that the Palm software couldn't open. His in-laws had an old Windows PC, and he was able to install the Handspring software, but it wouldn't even launch without a physical device to sync with, so the file still couldn't be opened. Ben reluctantly gave up on ever accessing the notes again.

Have you ever looked at something and then turned your head sideways, only to see it in a whole new light?

One day, Ben was going through some old clutter and found a backup DVD-R he had made of the Windows PC before he had wiped its hard disk. He found the datebook.dat file and opened it in SublimeText. There he saw rows and rows of hexadecimal code arranged into tidy columns. However, in this case, the columns between the codes were not just on-screen formatting for readability, they were actual space characters! It was not a data file after all, it was a text file.

The Handspring data file format was a text file containing hexadecimal code with spaces in it! He copied and pasted the entire file into an online hex-to-text converter (which ignored the spaces and line breaks), and voilà, Ben had his notes back!
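If you ever face a similar file, the conversion is just as easy to do locally instead of pasting your notes into a random web form. A minimal sketch (the file name and the text encoding are my assumptions):

def hex_dump_to_text(path, encoding="latin-1"):
    """Decode a file of space-separated hex bytes back into text."""
    with open(path) as fh:
        # Drop the spaces and line breaks, leaving one long hex string.
        hex_string = "".join(fh.read().split())
    return bytes.fromhex(hex_string).decode(encoding, errors="replace")

print(hex_dump_to_text("datebook.dat"))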

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianLaura Arjona Reina: Kubb 2018 season has just begun

Since last year I have been playing kubb with my son. It's a sport/game of marksmanship and patience. It's quite an inclusive game and it's played outside, on a grass or sand field.

It happens that the Spanish association of Kubb is based in the town where I live (in my neighbourhood, even!), so several family gatherings with tournaments happen in the parks near my house. Last year we attended for the first time and learned how to play, and since then we have participated in 2 or 3 more events.

As kubb is played in the open air, the season starts in March/April, when the weather is good enough for a nice morning in the park. I was surprised that, for such a minority game, about 50-100 people gather at each local tournament, grouped in teams of any kind: individuals, couples or teams of up to 6 people, mothers and daughters, kids-only teams, teams formed by people of 3 different generations… as strength or speed (or even experience) are not decisive in this game, almost anybody can play with anybody.

image

Enjoying kubb also makes me think about how communities around a non-mainstream topic are formed and maintained, and how to foster diversity and good relationships among participants. I've noted down some ideas that I think the kubb association does well:

  •  No matter how big or small you are, always take into account the possible newcomers: setting a slot at the start of the event to welcome them and explain “how the day will work” makes those newcomers feel less stressed.
  • Designing events where the whole family can participate (or at least be together, not just "events with childcare"), without making participation mandatory for everyone, helps people get involved for the longer term.
  • The format of the event has to be kept simple to keep organisers from burning out. If the organisers are so overwhelmed taking care of things that they cannot enjoy the result of their work, the organisation team should grow and rebalance the load.
  • Having a “break” during the year so everybody can rest and do other things also helps people get more motivated when the next season/event starts.

Thinking about kubb, particularly in contrast to the other sport my kid plays (football), I find similarities and contrasts with another pair of activities that we also experience in our family: the "free software way of life" versus the "mainstream use" of computers and devices nowadays. It's good to know both (so we don't live apart from the world in our warm bubble), and it's good to have the humbler, but more creative, human-focused and values-driven one as the main reference for the kind of future we want to live in and that we build every day with our small actions.

Comments?

You can comment on this post using this pump.io thread.

Planet Linux AustraliaMichael Still: City2Surf 2018

Share

I registered for city2surf this morning, which will be the third time I’ve run in the event. In 2016 my employer sponsored a bunch of us to enter, and I ran the course in 86 minutes and 54 seconds. 2017 was a bit more exciting, because in hindsight I did the final part of my training and the race itself with a torn achilles tendon. Regardless, I finished the course in 79 minutes and 39 seconds — a 7 minute and 16 second improvement despite the injury.

This year I’ve done a few things differently — I’ve started training much earlier, mostly as a side effect to recovering from the achilles injury; and secondly I’ve decided to try and raise some money for charity during the run.

Specifically, I’m raising money for the Black Dog Institute. They were selected because I’ve struggled with depression on and off over my adult life, and that’s especially true for the last twelve months or so. I figure that raising money for a resource that I’ve found personally useful makes a lot of sense.

I’d love for you to donate to the Black Dog Institute, but I understand that’s not always possible. Either way, thanks for reading this far!

Share

Planet DebianNorbert Preining: TeX Live 2018 for Debian

TeX Live 2018 has hit Debian/unstable today. The packages are based on what will be (most likely, barring any late disasters) on the TeX Live DVD, which is going to press this week. This brings the newest and shiniest version of TeX Live to Debian.

The packages that have been uploaded are:

The changes listed in the TeX Live documentation and which are relevant for Debian are:

  • Kpathsea: Case-insensitive filename matching now done by default in non-system directories; set texmf.cnf or environment variable texmf_casefold_search to 0 to disable. Full details in the Kpathsea manual.
  • epTeX, eupTeX: New primitive \epTeXversion.
  • LuaTeX: Preparation for moving to Lua 5.3 in 2019: a binary luatex53 is available on most platforms, but must be renamed to luatex to be effective. Or use the ConTeXt Garden files; more information there.
  • MetaPost: Fixes for wrong path directions, TFM and PNG output.
  • pdfTeX: Allow encoding vectors for bitmap fonts; current directory not hashed into PDF ID; bug fixes for \pdfprimitive and related.
  • XeTeX: Support /Rotate in PDF image inclusion; exit nonzero if the output driver fails; various obscure UTF-8 and other primitive fixes.
  • tlmgr: new front-ends tlshell (Tcl/Tk) and tlcockpit (Java); JSON output; uninstall now a synonym for remove; new action/option print-platform-info.

And above all, the most important change: We switched to CMSS, a font designed by DEK, for our logo and banners 😉

Enjoy.

,

Planet Linux AustraliaDavid Rowe: Lithium Cell Amp Hour Tester and Electric Sailing

I recently electrocuted my little sail boat. I built the battery pack using some second hand Lithium cells donated by my EV. However, after 8 years of abuse from my kids and me, those cells are of varying quality. So I set about developing an Amp-Hour tester to determine the capacity of the cells.

The system has a relay that switches a low value power resistor (OK some coat hanger wire) across the 3.2V cell terminals, loading it up at about 27A, roughly the cruise current for my e-boat. It’s about 0.12 ohms once it heats up. This gets too hot to touch but not red hot, it’s only 86W being dissipated along about 1m of wire. When I built my EV I used the coat hanger wire load trick to test 3kW loads, that was a bit more exciting!

The empty beer can in the background makes a useful insulated stand off. Might need to make more of those.

When I first installed Lithium cells in my EV I developed a charge controller for my EV. I borrowed a small part of that circuit; a two transistor flip flop and a Battery Management System (BMS) module:

Across the cell under test is a CM090 BMS module from EV Power. That's the good looking red PCB in the photos, onto which I have tacked the circuit above. These modules have a switch that opens when the cell voltage drops beneath 2.5V.

Taking the base of either transistor to ground switches on the other transistor. In logic terms, it’s a “not set” and “not reset” operation. When power is applied, the BMS module switch is closed. The 10uF capacitor is discharged, so provides a momentary short to ground, turning Q1 off, and Q2 on. Current flows through the automotive relay, switching on the load to the battery.

After a few hours the cell discharges beneath 2.5V, the BMS switch opens and Q2 is switched off. The collector voltage on Q2 rises, switching on Q1. Due to the latching operation of the flip flop, it stays in this state. This is important, as when the relay opens, the cell will be unloaded and its voltage will rise again and the BMS module switch will close. In the initial design without a flip flop, this caused the relay to buzz as the cell voltage oscillated about 2.5V as the relay opened and closed! I need the test to stop and stay stopped – it will be operating unattended so I don't want to damage the cell by completely discharging it.

The LED was inserted to ensure the base voltage on Q1 was low enough to switch Q1 off when Q2 was on (Vce of Q2 is not zero), and has the neat side effect of lighting the LED when the test is complete!

In operation, I point a cell phone taking time lapse video of the LED and some multi-meters, and start the test:

I wander back after 3 hours and jog-shuttle the time lapse video to determine the time when the LED came on:

The time lapse feature on this phone runs in 1/10 of real time. For example Cell #9 discharged in 12:12 on the time lapse video. So we convert that time to seconds, multiply by 10 to get “seconds of real time”, then divide by 3600 to get the run time in hours. Multiplying by the discharge current of 27(ish) Amps we get the cell capacity:

  12:12 time lapse, 27*(12*60+12)*10/3600 = 55AH
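The same arithmetic as a small Python helper (a convenience sketch of my own; the 10x time-lapse factor and the roughly 27A load current are the values from above):

def cell_capacity_ah(lapse_min, lapse_sec, current_a=27, speedup=10):
    """Convert a time-lapse run time into cell capacity in amp-hours."""
    # The phone records at 1/10 real time, so scale up to real seconds.
    real_seconds = (lapse_min * 60 + lapse_sec) * speedup
    return current_a * real_seconds / 3600.0

print(round(cell_capacity_ah(12, 12)))  # Cell #9 -> 55 (AH)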

So this cell's a bit low, and won't be finding its way onto my boat!

Another alternative is a logging multimeter; one could even measure and integrate the discharge current over time. Or I could have just bought or borrowed a proper discharge tester, but where's the fun in that?

Results

It was fun to develop, a few Saturday afternoons of sitting in the driveway soldering, occasional burns from 86W of hot wire, and a little head scratching while I figured out how to take the design from an expensive buzzer to a working circuit. Nice to do some soldering after months of software based DSP. I’m also happy that I could develop a transistor circuit from first principles.

I’ve now tested 12 cells (I have 40 to work through), and measured capacities of 50 to 75AH (they are rated at 100AH new). Some cells have odd behavior under load; dipping beneath 3V right at the start of the test rather than holding 3.2V for a few hours – indicating high internal resistance.

My beloved sail e-boat is already doing better. Last weekend, using the best cells I had tested at that point, I e-motored all day on varying power levels.

One neat trick, explained to me by Matt, is motor-sailing. Using a little bit of outboard power, the boat overcomes hydrodynamic friction (it gets moving in the water) and the sail is moved out of stall (like an airplane wing moving to just above stall speed). This means the boat moves a lot faster than under motor or sail alone in light winds. For example the motor was registering just 80W, but we were doing 3 knots in light winds. This same trick can be done with a stink-motor and dinosaur juice, but the e-motor is completely silent; we forgot it was on for hours at a time!

Reading Further

Electric Car BMS Controller
New Lithium Battery Pack for my EV
Engage the Silent Drive
EV Bugs

Planet DebianAlexander Wirt: alioth deprecation - next steps

As you should be aware, alioth.debian.org will be decommissioned with the EOL of wheezy, which is at the end of May. The replacement for the main part of alioth, git, is alive and out of beta; you know it as salsa.debian.org. If you have not moved your git repository yet, hurry up: time is running out.

The other important service from the alioth set, lists, moved to a new host and is now live at https://alioth-lists.debian.net with the lists which opted into migration. All public list archives moved over too and will continue to exist under the old URL.

decommissioning timeline

2018-05-01 DISABLE registration of new users on alioth. Until an improved SSO (GSOC Project) is ready, new user registrations needed for SSO services will be handled manually. More details on this will follow in a separate announcement.
2018-05-10 - 2018-05-13 darcs, bzr and mercurial repositories will be exported as tarballs and made available read-only from a new archive host; details on that will follow.
2018-05-17 - 2018-05-18 During the Mini-DebConf Hamburg any existing cron jobs will be turned off, websites still on alioth will be disabled.
2018-05-31 All remaining repositories (cvs, svn and git) will be archived similar to the ones above. The host moszumanska, the home of alioth, will go offline!

CryptogramHijacking Emergency Sirens

Turns out it's easy to hijack emergency sirens with a radio transmitter.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV April 2018 Workshop: Linux and Drupal mentoring and troubleshooting

Apr 21 2018 12:00
Apr 21 2018 16:00
Location: 
Room B2:11, State Library of Victoria, 328 Swanston St, Melbourne

As our usual venue at Infoxchange is not available this month due to construction work, we'll be joining forces with DrupalMelbourne at the State Library of Victoria.

Linux Users of Victoria is a subcommittee of Linux Australia.


Worse Than FailureCodeSOD: Breaking Changes

We talk a lot about the sort of wheels one shouldn’t reinvent. Loads of bad code stumbles down that path. Today, Mary sends us some code from their home-grown unit testing framework.

Mary doesn’t have much to say about whatever case of Not Invented Here Syndrome brought things to this point. It’s especially notable that this is Python, which comes, out of the box, with a perfectly serviceable unittest module built in. Apparently not serviceable enough for their team, however, as Burt, the Lead Developer, wrote his own.

His was Object Oriented. Each test case received a TestStepOutcome object as a parameter, and was expected to return that object. This meant you didn’t have to use those pesky, readable, and easily comprehensible assert… methods. Instead, you just did your test steps and called something like:

  outcome.setPassed()

Or

  outcome.setPassed(False)

Now, no one particularly liked the calling convention of setPassed(False), so after some complaints, Burt added a setFailed method. Developers started using it, and everyone’s tests passed. Everyone was happy.

At least, everyone was happy up until Mary wrote a test she expected to see fail. There was a production bug, and she could replicate it, step by step, at the Python REPL. So that she could “catch” the bug and “prove” it was dead, she wanted a unit test.

The unit test passed.

The bug was still there, and she continued to be able to replicate it manually.

She tried outcome.setFailed() and outcome.setFailed(True) and outcome.setFailed("OH FFS THIS SHOULD FAIL"), but the test passed. outcome.setPassed(False), however… worked just like it was supposed to.

Mary checked the implementation of the TestStepOutcome class, and found this:

class TestStepOutcome(object):
  def setPassed(self, flag=True):
    self.result = flag

  def setFailed(self, flag=True):
    self.result = flag

Yes, in Burt's reluctance to have a setFailed method, he just copy/pasted setPassed, thinking, "They basically do the same thing." No one checked his work or reviewed the code. They all just started using setFailed, saw their tests pass, which is what they wanted to see, and moved on about their lives.

Fixing Burt’s bug was no small task- changing the setFailed behavior broke a few hundred unit tests, proving that every change breaks someone’s workflow.
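For the record, the fix itself is a one-liner; presumably something like this (my sketch, not Mary's actual patch):

class TestStepOutcome(object):
  def setPassed(self, flag=True):
    self.result = flag

  def setFailed(self, flag=True):
    # A failed step is simply the opposite of a passed one.
    self.result = not flag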

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #155

Here's what happened in the Reproducible Builds effort between Sunday April 8 and Saturday April 14 2018:

Patches filed

In addition, 38 build failure bugs were reported by Adrian Bunk.

strip-nondeterminism development

Version 0.041-1 was uploaded to unstable by Chris Lamb:

jenkins.debian.net development

Mattia Rizzolo made a large number of changes to our Jenkins-based testing framework, including:

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaGary Pendergast: Introducing: Click Sync

Chrome’s syncing is pretty magical: you can see your browsing history from your phone, tablet, and computers, all in one place. When you install Chrome on a new computer, it automatically downloads your extensions. You can see your bookmarks everywhere, it even lets you open a tab from another device.

There’s one thing that’s always bugged me, however. When you click a link, it turns purple, as all visited links should. But it doesn’t turn purple on your other devices. Google have had this bug on their radar for ages, but it hasn’t made much progress. There’s already an extension that kind of fixes this, but it works by hashing every URL you visit and sending them to a server run by the extension author: not something I’m particularly comfortable with.

And so, I wrote Click Sync!

When you click a link, it’ll use Chrome’s inbuilt sync service to tell all your other computers to mark it as visited. If you like watching videos of links turn purple without being clicked, I have just the thing for you:

While you're thinking about how Chrome syncs between all your devices, it's a good idea to set up a Chrome Passphrase, if you haven't already. This encrypts your personal data before it passes through Google's servers.

Unfortunately, Chrome mobile doesn't support extensions, so this is only good for syncing between computers. If you run into any bugs, head on over to the Click Sync repository, and let me know!

Don MartiGDPR and client-side tools

Lots of GDPR advice out there. As far as I can tell it pretty much falls into three categories.

But what if there is another way?

  1. Start with the clean version. (Here's that link again: How to: GDPR, consent and data processing).

  2. Add microformats to label consent forms as consent forms, and appropriate links to the data usage policy to which the user is being asked to agree.

  3. Release a browser extension that will do the right thing with the consent forms, and submit automatically if the user is fine with the data usage request and policy, and appears to trust the site. Lots of options here, since the extension can keep track of known data usage policies and which sites the user appears to trust, based on their activity.

  4. Publish user research results from the browser extension. At this point the browsers can compete to do their own versions of step 3, in order to give their users a more trustworthy and less annoying experience.

Browsers need to differentiate in order to attract new users and keep existing users. Right now a good way to do that is in creating a safer-feeling, more trustworthy environment. The big opportunity is in seeing the overlap between that goal for the browser and the needs of brands to build reputation and the needs of high-reputation publishers to shift web advertising from a hacking game that adtech/adfraud wins now, to a reputation game where trusted sites can win.

TEDMore TED2018 conference shorts to amuse and amaze

Even in the Age of Amazement, sometimes you need a break between talks packed with fascinating science, tech, art and so much more. That’s where interstitials come in: short videos that entertain and intrigue, while allowing the brain a moment to reset and ready itself to absorb more information.

For this year’s conference, TED commissioned and premiered four short films made just for the conference. Check out those films here!

Mixed in with our originals, curator Anyssa Samari assembled a week-long program of even more videos — animations, music, even cool ads — to play throughout the week. Here's the program of shorts she found, from creative people all around the world:

The short: Jane Zhang: "Dust My Shoulders Off." A woman having a bad day is transported to a world of famous paintings where she has a fantastic adventure.

The creator: Outerspace Leo

Shown during: Session 2, After the end of history …

The short: “zoom(art).” A kaleidoscopic, visually compelling journey of artificial intelligence creating beautiful works of art.

The creator: Directed and programmed by Alexander Mordvintsev, Google Research

Shown during: Session 2, After the end of history …

The short: “20syl – Kodama.” A music video of several hands playing multiple instruments (and drawing a picture) simultaneously to create a truly delicious electronic beat.

The creators: Mathieu Le Dude & 20syl

Shown during: Session 3, Nerdish Delight

The short: “If HAL-9000 was Alexa.” 2001: A Space Odyssey seems a lot less sinister (and lot more funny) when Alexa can’t quite figure out what Dave is saying.

The creator: ScreenJunkies

Shown during: Session 3, Nerdish Delight

The short: "Maxine the Fluffy Corgi." A narrated day in the life of an adorable pup named Maxine who knows what she wants.

The creator: Bryan Reisberg

Shown during: Session 3, Nerdish Delight

The short: “RGB FOREST.” An imaginative, colorful and geometric jaunt through the woods set to jazzy electronic music.

The creator: LOROCROM

Shown during: Session 6, What on earth do we do?

The short: “High Speed Hummingbirds.” Here’s your chance to watch the beauty and grace of hummingbirds in breathtaking slow motion.

The creator: Anand Varma

Shown during: Session 6, What on earth do we do?

The short: “Cassius ft. Cat Power & Pharrell Williams | Go Up.” A split screen music video that cleverly subverts and combines versions of reality.

The creator: Alex Courtès

Shown during: Session 7, Wow. Just wow.

The short: “Blobby.” A stop motion film about a man and a blob and the peculiar relationship they share.

The creator: Laura Stewart

Shown during: Session 7, Wow. Just wow.

The short: “WHO.” David Byrne and St. Vincent dance and sing in this black-and-white music video about accidents and consequences.

The creator: Martin de Thurah

Shown during: Session 8, Insanity. Humanity.

The short: “MAKIN’ MOVES.” When music makes the body move in unnatural, impossible ways.

The creator: Kouhei Nakama

Shown during: Session 9, Body electric

The short: “The Art of Flying.” The beautiful displays the Common Starling performs in nature.

The creator: Jan van IJken

Shown during: Session 9, Body electric

The short: “Kiss & Cry.” The heart-rending story of Giselle, a woman who lives and loves and wants to be loved. (You’ll never guess who plays the heroine.)

The creators: Jaco Van Dormael and choreographer Michèle Anne De Mey

Shown during: Session 10, Personally speaking

The short: “Becoming Violet.” The power of the human body, in colors and dance.

The creator: Steven Weinzierl

Shown during: Session 10, Personally speaking

The short: “Golden Castle Town.” A woman is transported to another world and learns to appreciate life anew.

The creator: Andrew Benincasa

Shown during: Session 10, Personally speaking

The short: "Tom Rosenthal | Cos Love." A love letter to love that is grand and a bit melancholic.

The creator: Kathrin Steinbacher

Shown during: Session 11, What matters

TEDTEDFilms: Four new short films premiered at TED2018

For the TED conference this year, we wanted to entertain attendees between talks — and support and encourage up-and-coming filmmakers. Meet TEDFilms, a new program for promoting the creation of original short films.

Executive-produced by Sinéad McDevitt and led up by TED’s director of Production and Video Operations, Mina Sabet, the short films acted as a creative palate-cleanser during the speaker program, a short blast of humor, beauty and awe.

Each film is less than two minutes, and genres range from experimental art and documentary to PSA and dark comedy. Enjoy!

 

 

 

Chromatic
As light passes through defective glass, beams split into color spectra, causing ‘diffraction grating’. For the first time ever in film, we get up close and personal with this visual phenomenon in a series of beautiful chromatic abstractions.

Director: Shane Griffin

Music: Gavin Little

With special thanks to:
Ed Bruce at Screenscene
Los York

 

 

 

Illusions for a Better Society
Could visual illusions be a cure for polarization?

Co-Directors:
Aaron Duffy
Lake Buckley
Jack Foster

Director of Photography: William Atherton

Production Design: Adam Pruitt

Creative Partner: SpecialGuest

Production Company: 1stAveMachine

Producers:
Dave Kornfield
Andrew Geller
Matt Snetzko

Music: Bryn Bliska

 

 

 

It’s Not Amazing Enough
The pressures of having to make an amazing film sent this deadpan deep-voiced award winning filmmaker into a crippling spiral of self-doubt and comic indecision.

Director, Writer & Producer: Duncan Cowles

Music: Stillhead

 

 

 

A.I. Therapy
After 100 years of progress, AI bots have finally become too human for their own good.

Mother London

Directors: Emerald Fennell & Chris Vernon

Director of Photography: Ben Kracun

Production Design: Jessica Sutton

VFX: Coffee & TV

Planet DebianNorbert Preining: Specification and Verification of Software with CafeOBJ – Part 1 – Introducing CafeOBJ


Software bugs are everywhere – the feared Blue Screen of Death, the mobile phone rebooting at the most inconvenient moment, games crashing. Most of these bugs are not serious problems, but there are other cases that are far more serious:

While bugs will always remain, the question is how to deal with these kinds of bugs. There is an insurmountable amount of literature on this topic, but it generally falls into one of the following categories:

  • program testing: subject the program to be checked to a large set of tests trying to exhaust all possible code paths
  • post coding formal verification – model checking: given program code, model the behavior of the program in temporal logic and prove necessary properties
  • pre coding specification and verification: start with a formal specification what the program should do, and verify that the specification is correct, that is, that it satisfies desirable properties

The first two items above are extremely successful and well developed. In this blog series we want to discuss the third item, specification and their verification.

Overview on the blog series

This blog will introduce some general concepts about software and specifications, as well as introduce CafeOBJ as an algebraic specification language that is executable and thus can be used to verify the specification at the same time.

Further blog entries will introduce the CafeOBJ language in bit more detail, go through a simple example of cloud synchronization specification, and discuss more involved techniques and automated theorem proving using CafeOBJ.

Why should we verify specifications?

The value of formal specifications of software has been recognized since the early 80s, and formal systems have been in development since then (Z, Larch and OBJ all originate from that time). On the other hand, actual use of these techniques remained mostly in academic surroundings – engineers and developers were mostly reluctant to learn highly mathematical languages to write specifications instead of writing code.

With the growth of interactivity and the explosion in the number of communication protocols (from low-level TCP to high-level SSL) with handshakes and data exchange sequences, the need for formal verification of these protocols, especially when they guard crucial data, has been increasing steadily.

The CafeOBJ approach

CafeOBJ is a member of the OBJ family and thus uses algebraic methods to describe specifications. This is in contrast to the Z system which uses set theory and lambda calculus.

Our aims in developing the language (as well as the system) CafeOBJ can be summarized as follows:

  • provide a reasonable blend of user and machine capabilities
  • allow intuitive modeling while preserving a rigorous formal background
  • allow for various levels of modelling – from high-level to hard-core
  • do not try to fully automate everything – understanding of design and problems is necessary

We believe that we achieve this through the combination of a rigorous formal background, the incorporation of order-sorted equational theory, an executable semantics via rewriting, high-level programming facilities (inheritance, templates and instantiations, …), and last but not least complete freedom to redefine the language of the specification (postfix, infix, mixfix, syntax overloading, …).

More specifically, the logical foundations are formed by the following four elements:

  • Order sorted algebras: partial order of sorts
  • Hidden algebras: co-algebraic methods, infinite objects
  • Rewriting logic: transitions as first class objects
  • Order sorted rewriting: executable semantics

Our vision

Our vision for safety aware software development can be summarized as follows:

  • Step 1: Model and describe a system in order-sorted algebraic specification
    The domain/design engineers construct proof scores hand-in-hand with formal specification;
  • Step 2: Construct proof score and verify the specification by rewriting
    The proof scores (CafeOBJ code) are executable instructions, which, when evaluated provide proofs of desirable properties of the specification.

This concludes the first part of this series on CafeOBJ. We will dive into the language of CafeOBJ in the next blog.

,

Krebs on SecurityDeleted Facebook Cybercrime Groups Had 300,000 Members

Hours after being alerted by KrebsOnSecurity, Facebook last week deleted almost 120 private discussion groups totaling more than 300,000 members who flagrantly promoted a host of illicit activities on the social media network’s platform. The scam groups facilitated a broad spectrum of shady activities, including spamming, wire fraud, account takeovers, phony tax refunds, 419 scams, denial-of-service attack-for-hire services and botnet creation tools. The average age of these groups on Facebook’s platform was two years.

On Thursday, April 12, KrebsOnSecurity spent roughly two hours combing Facebook for groups whose sole purpose appeared to be flouting the company’s terms of service agreement about what types of content it will or will not tolerate on its platform.

One of nearly 120 different closed cybercrime groups operating on Facebook that were deleted late last week. In total, there were more than 300,000 members of these groups. The average age of these groups was two years, but some had existed for up to nine years on Facebook.

My research centered on groups whose singular focus was promoting all manner of cyber fraud, but most especially those engaged in identity theft, spamming, account takeovers and credit card fraud. Virtually all of these groups advertised their intent by stating well-known terms of fraud in their group names, such as “botnet helpdesk,” “spamming,” “carding” (referring to credit card fraud), “DDoS” (distributed denial-of-service attacks), “tax refund fraud,” and account takeovers.

Each of these closed groups solicited new members to engage in a variety of shady activities. Some had existed on Facebook for up to nine years; approximately ten percent of them had plied their trade on the social network for more than four years.

Here is a spreadsheet (PDF) listing all of the offending groups reported, including: Their stated group names; the length of time they were present on Facebook; the number of members; whether the group was promoting a third-party site on the dark or clear Web; and a link to the offending group. A copy of the same spreadsheet in .csv format is available here.

The biggest collection of groups banned last week were those promoting the sale and use of stolen credit and debit card accounts. The next largest collection of groups included those facilitating account takeovers — methods for mass-hacking emails and passwords for countless online accounts such as Amazon, Google, Netflix, PayPal, as well as a host of online banking services.

This rather active Facebook group, which specialized in identity theft and selling stolen bank account logins, was active for roughly three years and had approximately 2,500 members.

In a statement to KrebsOnSecurity, Facebook pledged to be more proactive about policing its network for these types of groups.

“We thank Mr. Krebs for bringing these groups to our attention, we removed them as soon as we investigated,” said Pete Voss, Facebook’s communications director. “We investigated these groups as soon as we were aware of the report, and once we confirmed that they violated our Community Standards, we disabled them and removed the group admins. We encourage our community to report anything they see that they don’t think should be in Facebook, so we can take swift action.”

KrebsOnSecurity's research was far from exhaustive: For the most part, I only looked at groups that promoted fraudulent activities in the English language. Also, I ignored groups that had fewer than 25 members. As such, there may well be hundreds or thousands of other groups who openly promote fraud as their purpose of membership but which achieve greater stealth by masking their intent with variations on or misspellings of different cyber fraud slang terms.

Facebook said its community standards policy does not allow the promotion or sale of illegal goods or services including credit card numbers or CVV numbers (stolen card details marketed for use in online fraud), and that once a violation is reported, its teams review a report and remove the offending post or group if it violates those policies.

The company added that Facebook users can report suspected violations by loading a group’s page, clicking “…” in the top right and selecting “Report Group”. Users who wish to learn more about reporting abusive groups can visit facebook.com/report.

“As technology improves, we will continue to look carefully at other ways to use automation,” Facebook’s statement concludes, responding to questions from KrebsOnSecurity about what steps it might take to more proactively scour its networks for abusive groups. “Of course, a lot of the work we do is very contextual, such as determining whether a particular comment is hateful or bullying. That’s why we have real people looking at those reports and making the decisions.”

Facebook’s stated newfound interest in cleaning up its platform comes as the social networking giant finds itself reeling from a scandal in which Cambridge Analytica, a political data firm, was found to have acquired access to private data on more than 50 million Facebook profiles — most of them scraped without user permission.

Google AdsenseHelping publishers recover lost revenue from ad blocking

Today, the majority of the internet is supported by digital advertising. But bad ad experiences—the ones that blare music unexpectedly, or force you to wait 10 seconds before you get to the page—are hurting publishers who make the content, apps and services we use everyday. When people encounter annoying ads, and then decide to block all ads, it cuts off revenue for the sites you actually find useful. Many of these people don't intend to defund the sites they love when they install an ad blocker, but when they do, they block all ads on every site they visit. 

Last year we announced Funding Choices to help publishers with good ad experiences recover lost revenue due to ad blocking. While Funding Choices is still in beta, millions of ad blocking users every month are now choosing to see ads on publisher websites, or “whitelisting” that site, after seeing a Funding Choices message. In fact, in the last month over 4.5 million visitors who were asked to allow ads said yes, creating over 90 million additional paying page views for those sites.

Over the coming weeks, we’re expanding Funding Choices to 31 additional countries, giving publishers the ability to ask visitors from those countries to choose between allowing ads on a site, or purchasing an ad removal pass through Google Contributor. Also, we’ve started a test that allows publishers to use their own proprietary subscription services within Funding Choices.

How Funding Choices works


Funding Choices gives publishers a way to have a conversation with their site visitors through custom messages they can use to express how ad blocking impacts their business and content. When a visitor arrives at a site using an ad blocker, Funding Choices allows the site to display one of three message types to that user:

A dismissible message that doesn’t restrict access to content: 



A dismissible message that counts and limits the number of page views that person is allowed per month, as determined by the site owner, before the content is blocked.



Or, a message that blocks access to content until the visitor chooses to allow ads on the site, or to pay to access the content with either the site’s proprietary subscription service or a pass that removes all ads on that site through Google Contributor.




On average, publishers using Funding Choices are seeing 16 percent of visitors allow ads on their sites with some seeing rates as high as 37 percent.

Ad blockers designed to remove all ads from all sites are making it difficult for publishers with good ad experiences to maintain sustainable businesses. Our goal for Funding Choices is to help publishers get paid for their work by reducing the impact of ad blocking on them, and we look forward to continuing to expand the product availability.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, March 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 26. Thanks to a few extra hours dispatched this month (accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


TEDWow. Just wow. Notes from Session 7 of TED2018

Renzo Piano makes the case for beauty in architecture during TED2018: The Age of Amazement, April 12, 2018, in Vancouver. Photo: Bret Hartman / TED

What we need sometimes is a little awe, a little wonder. This session of TED Talks was designed to provoke an exquisite human emotion: the sense that the world is bigger and stranger than you’d known. Without further ado, wow.

A blueprint for how humans and machines can coexist. American researchers are leading AI discoveries, Chinese engineers are leading AI implementations like speech recognition and machine translation, and taken together, they will bring about a technological revolution that will pose major challenges to society. All types of jobs will be replaced by AI in the near future, from radiologists to truckers. “But what’s more serious than the loss of jobs is the loss of meaning,” says investor Kai-Fu Lee, “because the work ethic in the Industrial Age has brainwashed us into thinking work is the reason we exist, that work defines the meaning of our lives.” Lee confesses that for many years, he was guilty of being a workaholic — nearly missing the birth of his daughter to give a presentation to Apple’s CEO — until he was diagnosed with Stage IV lymphoma five years ago. The experience made him realize that his priorities were completely out of order, but it also gave him a new view about what AI should mean for humanity. “AI is taking away a lot of routine jobs, but routine jobs are not what we’re about. Why we exist is love,” he says. He explains how we might harness human compassion — along with human creativity — to work with AI in a way that may help solve both the loss of jobs and the loss of meaning.

Personalized music composition through AI. Wouldn’t it be nice to finally have the perfect soundtrack for your life? Perhaps that pensive, melancholic song when you’re feeling down, or an upbeat tune when you’ve just received great news. Well, engineer and musician Pierre Barreau is making your personalized playlist a reality through AI. Barreau created an AI technology called AIVA, or Artificial Intelligence Virtual Artist. He taught AIVA the art of music composition by inputting 30,000 scores from history’s greatest composers. “Using deep neural networks, AIVA looks for patterns in these scores,” he says. “From a couple of bars of existing music from an existing score, [AIVA] infers what should come next in those tracks. Once AIVA gets good at these predictions, it can actually build a set of mathematical rules for that style of music in order to create its own original compositions.” To make music unique to each person, he also taught AIVA to understand what makes a musical score emotionally unique by matching scores to different categories, including mood and note density. While personalized AI-generated music has clear applications for immersive media storytelling, like video games, it also has the power to better tell our life narratives. “This moment here together at TED is now a part of our life story. So it only felt fitting that AIVA would compose music for this moment.” Barreau concludes by playing a brief, mesmerizing song by AIVA, fittingly titled “The Age of Amazement.”

A retrospective in the pursuit of beauty. “Architecture is the art of making shelter for human beings,” says architect Renzo Piano. Throughout his prolific career, Piano has designed some of the most recognizable buildings across the world; notables include the Centre Georges Pompidou in Paris, The Shard in London and the Whitney Museum of American Art in New York City, part of an impressive body of work spanning decades. When he sets out to create these buildings, he looks for them to flirt with the surrounding world — with water, wind, and even light — and communicate with humanity’s most universal language: beauty. Real beauty, he believes, is when the invisible joins the visible. And this doesn’t just apply to art or nature; it can relate to science and human curiosity as well. “This universal beauty is one of the few things that can change the world,” he says. “Believe me, this beauty will change the world, one person at a time.”

Hope for the organ transplant shortage — inside a baby pig. Every day, thousands of people wait for a desperately needed organ replacement — and by the end of today, 20 people will have died waiting. For almost half a century it’s been thought to be theoretically possible to transplant organs from pigs into humans, because pigs are similar enough to humans biologically and about the same size. But as scientist Luhan Yang puts it, there was one major problem: “The pig genome has a dangerous virus that does not express in pigs, but can be transmitted to humans.” If the virus, PERV, migrated to humans through a transplanted organ, it could spark a deadly HIV-like epidemic. So research had stalled, but Yang decided to dig in. Using CRISPR, a technique for editing genes, she and her lab have been working to create a pig without PERV in its genome. Difficulty: PERV expresses in 62 places on the pig genome. She shows a picture of Laika, one of the more than 30 PERV-free pigs her lab has bred. They may represent an exciting first step toward solving the organ transplantation crisis. After the talk, co-host Chris Anderson asks Yang for a timeline; she demurs at first but then says “We hope that it happens within one decade.”

CSI: Fingerprints. TV crime dramas are credited with luring people, particularly young women, into the field of forensic science; they’re drawn by the combination of conducting serious lab work and helping catch the bad guys. Seeing a talk from Sheffield Hallam University analytical chemist Simona Francese about her fascinating research might have that same impact. An expert in fingerprint analysis, Francese says: “Molecules are the storytellers of who are we and what we’ve been up to. We just need to have the right technology to make them talk.” Through her work, she is revealing the tales to be found in the microscopic remnants that we all leave behind. A person’s prints can contain three types of molecules: ordinary sweat molecules; molecules of substances that we’ve introduced into our bodies and sweat out; and molecules of stuff that’s adhered to our hands. Francese and her team achieve their breathtakingly detailed analyses by using UV lasers — which release the molecules in fingerprints — and mass spectrometry imaging technology, which then measures the mass of those molecules, pinpointing what they are. They’ve been able to detect thousands of different molecules in a single print. They can also visualize the distribution of each molecule on the fingerprint — which allows them to separate prints when they’re overlapping, something that tends to stymie the police. Their work can also fill in faint prints by improving ridge pattern continuity and clarity. In 2017, law enforcement in the UK and in other parts of Europe began using Francese’s technology in their criminal investigations.

Sometimes it’s awesome to be sad. “I love depressing songs,” says Luke Sital-Singh, “songs of sorrow, of grief, of longing … because they speak to a very real part of being human.” Accompanied only by a piano on a dark stage, the singer-songwriter performed two gorgeous and melancholic ballads, “Afterneath” and “Killing Me,” leaving many in the audience in tears, including our co-host.

Conquering physical and mental mountains. Not many people would consider being stuck between a rock and … another rock … nearly 3,000 feet off the ground without a rope to be one of the best moments of their life. But for Alex Honnold, it was the culmination of a nearly two-decade-long dream. On June 3, 2017, at the age of 31, Honnold became the only person to summit El Capitan, a nearly 3,000-foot climb in Yosemite National Park, without a rope. This is a style of climbing known as free soloing, and one that Honnold is recognized for internationally. During his talk, Honnold shared how preparing for El Capitan required as much mental exercise as it did physical. He spent a year rehearsing not only every physical move, but every thought and doubt he could have on the wall. “Doubt is the precursor to fear, and I knew that I couldn’t experience my perfect moment if I was afraid.” In the end, he had his perfect moment, soaring up the wall that takes the average climber three days to summit in a mere three hours and fifty-six minutes. Honnold’s talk ended with a standing ovation and a final note from Chris Anderson: “Don’t share this talk with your children, please.” — Michaela Eames

TEDIn Case You Missed It: Finding space to dream at day 3 at TED2018

TED2018 hit its stride on day 3, with talks from explorers of space and oceans, builders of cities and bridges, engineers of the future and many more.

Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

Are we alone in the cosmos? The universe is 13.8 billion years old and contains billions of galaxies — in fact, there are probably a trillion planets in our galaxy alone. People have long thought a civilization like ours must exist or should have existed somewhere out there, but British astronomer Stephen Webb sees another possibility: we’re alone. Thinkers have speculated about all the barriers that a planet would need to house an alien civilization: it would need to be habitable; life would have to develop there; such life forms would need a certain technological intelligence to reach out; and they’d have to be able to communicate across space. Rather than viewing the situation with sorrow and the cosmos as a lonely place, “the silence of the universe is shouting: we’re the creatures who got lucky,” says Webb. One cosmic visitor we just recently met can confirm something else is definitely out there — ‘Oumuamua, the first known interstellar object to pass through the Solar System. University of Hawaii astrobiologist Karen Meech introduces us to the mysterious object, which she says is a package from the nearest star system 4.4 light years away, having traveled on a journey of more than 50,000 years. She believes it could be a chunk of rocky debris from a new star system; other researchers believe it may be something else altogether — evidence of extraterrestrial civilizations, or material cast off in the death throes of a star. “This unexpected gift has generated more questions than answers,” says Meech, “but we were the first to say hello to this visitor from our distant past.”

Penny Chisholm explains how an ancient, ocean-dwelling cyanobacterium — Prochlorococcus — could inspire us to break our dependency on fossil fuels. (Photo: Bret Hartman / TED)

Ocean explorers. Prochlorococcus is an ancient ocean-dwelling cyanobacterium that Penny Chisholm, a biological oceanographer at MIT, discovered in the mid-1980s. It’s the most abundant photosynthetic cell on the planet and Chisholm believes that it could hold clues for sustainable energy in its genetic architecture. With a gene pool four times the size of the human genome but 1/100th the width of a human hair, this engineering masterpiece might inspire solutions to break our dependency on fossil fuel. If we hope to unlock the wonders of Prochlorococcus in the Age of Amazement, we’re going to need to protect the world’s waters first. Enric Sala, a marine ecologist and National Geographic Explorer-in-Residence, proposes the creation of a giant high seas reserve. Falling outside of any single country’s jurisdiction, the high seas are the “Wild West” of the ocean and until recently, it was difficult to know who was fishing (and how much). Satellite technology and machine learning now enable the tracking of boats and revenue, revealing that practically the entire high seas fishing proposition is misguided. In response, Sala advocates for creating a reserve that would include two-thirds of the ocean, protecting the ecological, economic and social benefits of our waters.

Bridges reveal something about creativity, ingenuity; they even hint at our identity, says engineer Ian Firth. (Photo: Bret Hartman / TED)

How we’re shaping (and reshaping) the built environment. TED is known for its fair share of tech wizardry, where innovation happens at the scale of microns. But our built environment is in need of some love in the Age of Amazement as well. Architect and Columbia University professor Vishaan Chakrabarti highlights the creeping sameness in many urban buildings and streetscapes throughout the world. This physical homogeneity — stemming from regulations, automobiles, accessibility and safety issues, and cost considerations, among other factors — has resulted in a social and mental one as well. Let’s strive to create cities of difference, magnetic places that embody an area’s cultural and local proclivities, exhorts Chakrabarti. One great way to beautify a city: an elegant, distinctive bridge! Ian Firth, an engineer who has designed spans all over the world, including the 3.3 kilometer-long suspension bridge over the Messina Strait in Italy, talks about the connectivity that makes them special pieces of infrastructure. “They reveal something about creativity, ingenuity; they even hint at our identity,” he explains. Although they fall into only a few types, depending on the nature of their structural support, bridges hold great potential for innovation, and the variety is tremendous. “Bridges need to be elegant, they need to be beautiful,” Firth says.

Angel Hsu shows us that real change is afoot in China, as the country’s energy initiatives have unexpectedly placed it at the vanguard of the fight against pollution and climate change. (Photo: Bret Hartman / TED)

Pollution problems — and solutions. Iconic images of skylines buried in clouds of smog ensured China’s notoriety as one of the world’s biggest polluters. But Angel Hsu shows us that real change is afoot in China, as the country’s energy initiatives have unexpectedly placed it at the vanguard of the fight against pollution and climate change. In 2011, when Hsu began conducting her research, China’s own environmental data — specifically for fine particulate matter, or PM2.5 — was kept secret. But thanks to citizen activism, pollution’s hazardous impacts on human health skyrocketed into China’s consciousness. The emerging zeitgeist grabbed the government’s attention. Recognizing China’s toxic reliance on fossil fuels, the government pulled the plug on more than 300,000 coal plants, and began feverishly developing alternative energy. Although China must still address its coal problem abroad, its efforts at home (although uncertain) could impact global pollution — and China’s massive carbon footprint — in a major way. While cutting down on pollution is good, removing harmful greenhouse gases from the atmosphere would be even better. The concentration of CO2 in today’s atmosphere is a staggering 400 ppm, but we’re still not cutting emissions as fast as we need to, according to chemical engineer Jennifer Wilcox. So we need to pull CO2 back out of the atmosphere — a strategy known as negative emissions. The technology to do this already exists: a device known as an air contactor uses CO2-grabbing chemicals in solid materials or dissolved in water to pull the gas out of the air, sort of like a synthetic forest. What makes this process tricky, though, is that it’s energy-intensive, which drives costs up or, depending on the type of energy used, ends up emitting more CO2 than is captured. Several companies are working on making the process more cost-effective using a variety of techniques, as well as solving other problems of carbon capture like how and where we should build these “synthetic forests.” And in a truly mind-blowing talk, applied engineer Aaswath Raman explains how the next great renewable resource might be … the cold of space. “What keeps me up at night is that the energy use for cooling is expected to grow six-fold by the year 2050,” he says. “The warmer the world gets, the more we are going to need cooling systems.” He’s exploring a potential solution that leverages a cool fact about infrared light and deep space.

Untraditional storytellers. Three TED speakers evoked storytelling in their talks — perhaps that’s not so unexpected, but what was surprising was there wasn’t a writer, musician or filmmaker among them. Game designer David Cage entreated the audience to think of videogames as more than pixelated shooting ranges or mindless time-fillers. “I’ve always been fascinated with the idea of recreating the notion of choice in fiction,” he says. “My dream is to put the audience in the shoes of the protagonist, let them make their own decisions, and by doing so, let them tell their own stories.” While playing, gamers also get the chance to enjoy two tremendously liberating qualities not usually found when reading a book: personal autonomy and flexibility. Veteran architect and Pritzker winner Renzo Piano — who is responsible for such indelible buildings as Paris’s Centre Georges Pompidou, the New York Times building, and London’s The Shard — took audience members through his life’s work and his thinking. He too views himself as a storyteller. But while Cage concentrated on the narrative aspects of that role, Piano extolled the love, happiness and other emotional reactions that beautiful structures evoke in all of us. And the most surprising of today’s storytellers? Your fingerprints — or, more specifically, the molecules in your fingerprints, according to analytical chemist Simona Francese: “Molecules are the storytellers of who are we and what we’ve been up to. We just need to have the right technology to make them talk.” Francese and a team at her lab at the UK’s Sheffield Hallam University have spent nearly a decade perfecting the process to identify as many as 1,000 molecules in a single fingerprint — and this technology is now being used by the police in Europe to catch criminals.

Nora Atkinson invites us on a trip to Burning Man, to see the wonderful art that’s constructed and burned — and never sold — there each year. (Photo: Lawrence Sumulong / TED)

Love, actually. In 2017, Smithsonian American Art Museum craft curator Nora Atkinson went to Burning Man in Nevada for the first time, and what she found was an artistic experience like no other. Every August, more than 70,000 people trek to the desert and engage with 300+ installations, sculptures and structures. None of the pieces are for sale (all are burned or taken away at week’s end), and anyone can make art. As a result, creativity there is driven by passion, not profit. Burning Man art is “authentic and optimistic in a way we rarely see anywhere else,” and it encourages, even demands, interaction and investigation. “What is art for,” Atkinson asks, “if not this?” Love was also on the mind of speaker Kai-Fu Lee. The longtime technology investor and executive admits love was absent from his career-minded trajectory until he was diagnosed with cancer (but shares he is now in remission). He feels it’s been similarly overlooked in discussions about technology and the future. “Love is what differentiates us from AI,” he says. “Despite what science fiction movies may portray, I can tell you responsibly that AI programs cannot love.” Lee urges people to think of how human love, compassion and technological brilliance can co-exist and help us create better, more connected lives. Musician Luke Sital-Singh brought down the house — and made TED curator Chris Anderson wipe his eyes — by playing and singing a beautiful composition called “Killing Me.” He wrote the song from the point of view of his grandmother, who has had to figure out how to live without her soulmate, Sital-Singh’s late grandfather, even as she experiences the joys of her family and their new members and accomplishments. He sang, “Oh you won’t believe, the wonders I can see/This world is changing me, but I’ll love you faithfully.” And while soft-spoken climber Alex Honnold, the day’s final speaker, didn’t use the L-word, it came through loud and clear as he talked about his record-setting, free-solo climb of El Capitan in 2017. He spoke about his intense mental preparation for the feat — he took months to memorize every handhold and foot placement, so the climb would come naturally and automatically to him. Of that day, he recalled, “With six hundred feet to go, it felt like the mountain was offering me a victory lap. I climbed with a smooth precision and enjoyed the sounds of the birds swooping around the cliff. It all felt like a celebration.”

Workshops aplenty. TEDsters had 19 workshops to choose from on day 3. Adam Savage had attendees create armor helmets out of laser cut corrugated cardboard. Angelica Dass guided attendees through painting self portraits, asking people to revisit their childhood art class — specifically the moment they learned about the connections between the colors of their skin and their race. And OK Go led attendees to build an orchestra out of random objects around the room … like suitcases, wine bottles, cans and PVC pipes.

TEDAltair at TED2018: In the “Age of Amazement,” simulation drives innovation

Altair’s exhibit gallery at TED2018 features a vintage car with 3D-printed insides, a helmet designed to reduce football-related head injuries and a Wilson golf driver challenge, among much more. (Photo: Jason Redmond / TED)

In a corner of the Vancouver Convention Center — set against a beautiful backdrop of Vancouver Harbour and the mountains of the North Shore, and right between a comfy simulcast lounge and a pop-up coffee and espresso shop — it’s hard to miss an eye-catching vintage red car. It’s the anchor of Altair’s exhibit gallery, showing off the possibilities of simulation-driven innovation.

Altair is a leading provider of enterprise-class engineering software enabling innovation from concept design to operation. Their simulation-driven approach is powered by a suite of software that optimizes performance while providing data analytics and true-to-life visualization and rendering. Altair products range from biomimicry software that unlocks the potential of industrial 3D-printing to personalized healthcare with machine learning enabled by the Internet of Things. At TED2018, they invited TEDsters to explore the intersection of human creativity and technology — and the extraordinary impact it has on shaping the world around us.

On display at their gallery: an IoT-enabled bodysuit from BioSerenity that records seizures to help diagnose epilepsy; a helmet designed to reduce football-related head injuries created in partnership with VICIS, which is set to be used by Notre Dame in NCAA games this coming season; an advanced arm prosthetic … and a vintage car made up of a vintage frame with aluminum 3D-printed insides, created by Altair, APWORKS, csi entwicklungstechnik, EOS, GERG and Heraeus.

Altair is also hosting an interactive design experience where attendees can use their simulation software to design a custom Wilson golf driver. The person with the leading design at the end of TED2018 — the one that hits the ball furthest (and yes, thanks to machine learning and Altair HyperWorks’ Virtual Wind Tunnel, there is a right answer to this) — will receive a golf driver as a prize. In the “Age of Amazement” — TED’s theme in 2018 — simulation and machine learning will drive innovation.

TEDIn Case You Missed It: Bold visions for humanity at day 4 of TED2018

Three sessions of memorable TED Talks covering life, death and the future of humanity made the penultimate day of TED2018 a remarkable space for tech breakthroughs and dispatches from the edges of culture.

Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

The future built on genetic code. DNA is built on four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the four letters of the genetic alphabet are not all that unique. He and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. And maybe soon, we’ll be able to use that expanded DNA alphabet to teleport. That’s right, you read it here first: teleportation is real. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit the most fundamental parts of who we are: our DNA. It’s called biological teleportation, and the idea is that biological entities including viruses and living cells can be reconstructed in a distant location if we can read and write the sequence of that DNA code. The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines.

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. (Photo: Jason Redmond / TED)

Dispatches from the fight against hate online. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. In 2016, Green collaborated with Moonshot CVE to pilot a new approach, the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups, and used what they learned to create targeted advertising aimed at people susceptible to ISIS’s recruiting — and counter those messages. In English and Arabic, the eight-week pilot program reached more than 300,000 people. “If technology has any hope of overcoming today’s challenges,” Green says, “we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” Dylan Marron is taking a different approach to the problem of hate on the internet. His video series, such as “Sitting in Bathrooms With Trans People,” have racked up millions of views, and they’ve also sent a slew of internet poison in his direction. He developed a coping mechanism: he calls up the people who leave hateful remarks, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace, he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years (he’s now just 18) he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — a machine can try every possible solution, even ones too absurd for a human to imagine, until it finds the thing that works best to solve a single discrete problem. Which really isn’t general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives. Picking up on the thread of pitfalls of current AI, artist and technology critic James Bridle describes how automated copycats on YouTube mimic trusted videos by using algorithmic tricks to create “fake news” for kids. End result: children exploring YouTube videos from their favorite cartoon characters are sent down autoplaying rabbit holes, where they can find eerie, disturbing videos filled with very real violence and very real trauma. Algorithms are touted as the fix, but as Bridle says, machine learning is really just what we call software that does things we don’t understand … and we have enough of that already, no?

Chetna Gala Sinha tells us about a bank in India that meets the needs of rural poor women who want to save and borrow. (Photo: Jason Redmond / TED)

Listen and learn. Takemia MizLadi Smith spoke up for the front-desk staffer, the checkout clerk, and everyone who’s ever been told they need to start collecting information from customers, whether it be an email, zip code or data about their race and gender. Smith makes the case to empower every front desk employee who collects data — by telling them exactly how that data will be used. Chetna Gala Sinha, meanwhile, started a bank in India that meets the needs of rural poor women who want to save and borrow — and whom traditional banks would not touch. How does the bank improve their service? As Chetna says: simply by listening. Meanwhile, sex educator Emily Nagoski talked about a syndrome called emotional nonconcordance, where what your body seems to want runs counter to what you actually want. In an intimate situation, ahem, it can be hard to figure out which one to listen to, head or body. Nagoski gives us full permission and encouragement to listen to your head, and to the words coming out of the mouth of your partner. And Harvard Business School prof Frances Frei gave a crash course in trust — building it, keeping it, and the hardest, rebuilding it. She shares lessons from her stint as an embed at Uber, where, far from listening in meetings, staffers would actually text each other during meetings — about the meeting. True listening, the kind that builds trust, starts with putting away your phone.

Bionic man Hugh Herr envisions humanity soaring out of the 21st century. (Photo: Ryan Lash / TED)

A new way to heal our bodies … and build new ones. Optical engineer Mary Lou Jepsen shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it and doesn’t let it pass through. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. MIT professor Hugh Herr is working on a different way to heal — and augment — our bodies. He’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it neural embodied design, a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend who lost a foot in a climbing accident. Using the Agonist-antagonist Myoneural Interface, or AAMI, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. What might be next? Maybe, the ability to fly.

Announcements! Back in 2014, space scientist Will Marshall introduced us to his company, Planet, and their proposed fleet of tiny satellites. The goal: to image the planet every day, showing us how Earth changes in near-real time. In 2018, that vision has come good: every day, a fleet of about 200 small satellites pictures every inch of the planet, taking 1.5 million 29-megapixel images every day (about 6T of data daily), gathering data on changes both natural and human-made. This week at TED, Marshall announced a consumer version of Planet, called Planet Stories, to let ordinary people play with these images. Start playing now here. Another announcement comes from futurist Ray Kurzweil: a new way to query the text inside books using something called semantic search — which is a search on ideas and concepts, rather than specific words. Called TalkToBooks, the beta-stage product uses an experimental AI to query a database of 120,000 books in about a half a second. (As Kurzweil jokes: “It takes me hours to read a hundred thousand books.”) Jump in and play with TalkToBooks here. Also announced today: “TED Talks India: Nayi Soch” — the wildly popular Hindi-language TV series, created in partnership with StarTV and hosted by Shah Rukh Khan — will be back for three more seasons.

Planet DebianHolger Levsen: 20180416-LTS-march

My LTS work in March

So in March I resumed contributing to LTS again, after 2 years of taking a break, due to being overwhelmed with work on Reproducible Builds... Reproducible Builds is still eating a lot of my time, but as we currently are unfunded I had to pick up some other sources of funding.

And then, due to Reproducible Builds still requiring a lot of my attention (both actual work as well as work on getting funded again) and other stuff happening in my life, I was also mostly unable to find time to really dive into LTS again, so while I managed to renew my knowledge of the procedures etc, I only managed to find 1.5h work to be done :/ Which in turn made me feel quite bad, so that I also postponed writing about this until now.

So, in March I only managed to mark libcdio as no-DSA and upload samba to fix CVE-2018-1050.

On the plus side and despite the above, I'm very happy to be able to work on LTS again, because a.) I consider it interesting (to fix bugs in old packages, yes!) and b.) because I use LTS myself and c.) because the LTS crowd is actually a nice and helpful one.

And now let's see how much LTS work I'll manage in April...!

TEDWhat matters: Notes from Session 11 of TED2018

Reed Hastings, the head of Netflix, listens to a question from Chris Anderson during a sparky onstage Q&A on the final morning of TED2018, April 14, 2018. Photo: Ryan Lash / TED

What a week. We’ve heard so much, from dystopian warnings to bold visions for change. Our brains are full. Almost. In this session we pull back to the human stories that underpin everything we are, everything we want. From new ways to set goals and move business forward, to unabashed visions for joy and community, it’s time to explore what matters.

The original people of this land. One important thing to know: TED’s conference home of Vancouver is built on un-ceded land that once belonged to First Nations people. So this morning, two DJs from A Tribe Called Red start this session by remembering and honoring them, telling First Nations stories in beats and images in a set that expands on the concept of Halluci Nation, inspired by the poet, musician and activist John Trudell. In Trudell’s words: “We are the Halluci Nation / Our DNA is of earth and sky / Our DNA is of past and future.”

The power of why, what and how. Our leaders and our institutions are failing us, and it’s not always because they’re bad or unethical. Sometimes, it’s simply because they’re leading us toward the wrong objectives, says venture capitalist John Doerr. How can we get back on track? The trick may be a system called OKR, developed by legendary management thinker Andy Grove. Doerr explains that OKR stands for ‘objectives and key results’ – and setting the right ones can be the difference between success and failure. However, before you set your objective (your what) and your key results (your how), you need to understand your why. “A compelling sense of why can be the launch pad for our objectives,” he says. He illustrates the power of OKRs by sharing the stories of individuals and organizations who’ve put them into practice, including Google’s Larry Page and Sergey Brin. “OKRs are not a silver bullet. They’re not going to be a substitute for a strong culture or for stronger leadership, but when those fundamentals are in place, they can take you to the mountaintop,” he says. He encourages all of us to take the time to write down our values, our objectives, and our key results – and to do it today. “Let’s fight for what it is that really matters, because we can take OKRs beyond our businesses. We can take them to our families, to our schools, even to our government. We can hold those governments accountable,” he says. “We can get back on the right track if we can and do measure what really matters.”

What’s powering China’s tech innovation? The largest mass migration in the world occurs every year around the Chinese Spring Festival. Over 40 days, travelers — including 290 million migrant workers — take 3 billion trips all over China. Few can afford to fly, so railways strained to keep up, with crowding, fraud and drama. So the Chinese technology sector has been building everything from apps to AI to ease not only this process, but other pain points throughout society. But unlike the US, where innovation is often fueled by academia and enterprise, China’s tech innovation is powered by “an overwhelming need economy that is serving an underprivileged populace, which has been separated for 30 years from China’s economic boom.” As CEO of the South China Morning Post, Gary Liu has a front-row seat to this transformation. As China’s introduction of a “social credit rating” system suggests, a technology boom in an authoritarian society hides a significant dark side. But the Chinese internet hugely benefits its 772 million users. It has spread deeply into rural regions, revitalizing education and creating jobs. There’s a long way to go to bring the internet to everyone in China — more than 600 million people remain offline. But wherever the internet is fueling prosperity, “we should endeavor to follow it with capital and with effort, driving both economic and societal impact all over the world. Just imagine for a minute what more could be possible if the global needs of the underserved become the primary focus of our inventions.”

Netflix and chill, the interview. The humble beginnings of Netflix paved the way to transforming how we consume content today. Reed Hastings — who started out as a high school math teacher — admits that making the shift from DVDs to streaming was a big leap. “We weren’t confident,” he admits in his interview with TED Curator Chris Anderson. “It was scary.” Obviously, it paid off over time, with 117 million subscribers (and growing), more than $11 billion in revenue (so far) and a slew of popular original content (Black Mirror, anyone?) fueled by curated algorithmic recommendations. Netflix’s offerings, Hastings says, are a mixture of candy and broccoli — and they allow people to decide what a proper “diet” is for them. “We get a lot of joy from making people happy,” he says. The external culture of the streaming platform reflects its internal culture as well: they’re super focused on how to run with no process, but without chaos. There’s an emphasis on freedom, responsibility and honesty (as he puts it, “disagreeing silently is disloyal”). And though Hastings loves business — competing against the likes of HBO and Disney — he also enjoys his philanthropic pursuits supporting innovative education, such as the KIPP charter schools, and advocates for more variety in educational content. For now, he says, it’s the perfect job.

“E. Pluribus Unum” — ”Out of many, one.” It’s the motto of the United States, yet few citizens understand its meaning. Artist and designer Walter Hood calls for national landscapes that preserve the distinct identities of peoples and cultures, while still forging unity. Hood believes spaces should illuminate shared memories without glossing over past — and present — injustices. To guide his projects, Hood follows five simple guidelines. The first — “Great things happen when we exist in each other’s world” — helped fire up a Queens community garden initiative in collaboration with Bette Midler and hip-hop legend 50 Cent. “Two-ness” — or the sense of double identity faced by those who are “othered,” like women and African-Americans — lies behind a “shadow sculpture” at the University of Virginia that commemorates a forgotten, buried servant household uncovered during the school’s expansion. “Empathy” inspired the construction of a park in downtown Oakland that serves office workers and the homeless community, side-by-side. “The traditional belongs to all of us” — and to the San Francisco neighborhood of Bayview-Hunter’s Point, where Hood restored a Victorian opera house to serve the local community. And “Memory” lies at the core of a future shorefront park in Charleston, which will rest on top of Gadsden Wharf — an entry point for 40% of the United States’ slaves, where they were then “stored” in chains — that forces visitors to confront the still-resonating cruelty of our past.

The tension between acceptance and hope. When Simone George met Mark Pollock, it was eight years after he’d lost his sight. Pollock was rebuilding his identity — living a high-octane life of running marathons and racing across Antarctica to reach the South Pole. But a year after he returned from Antarctica, Pollock fell from a third-story window; he woke up paralyzed from the waist down. Pollock shares how being a realist — inspired by the writings of Admiral James Stockdale, a Vietnam POW — helped him through bleak days after this accident, when even hope seemed dangerous. George explains how she helped Pollock navigate months in the hospital; told that any sensation Pollock didn’t regain in the weeks immediately after the fall would likely never come back, the two looked to stories of others, like Christopher Reeve, who had pushed beyond what was understood as possible for those who are paralyzed. “History is filled with the kinds of impossible made possible through human endeavor,” Pollock says. So he started asking: Why can’t human endeavor cure paralysis in his lifetime? In collaboration with a team of engineers in San Francisco, who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who had developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test, proving that progress is definitely still possible. For now, “I accept the wheelchair, it’s almost impossible not to,” says Pollock. “We also hope for another life — a life where we have created a cure through collaboration, a cure that we’re actively working to release from university labs around the world and share with everyone who needs it.”

The pursuit of joy, not happiness. “How do tangible things make us feel intangible joy?” asks designer Ingrid Fetell Lee. She pursued this question for ten years to understand how the physical world relates to the mysterious, quixotic emotion of joy. It turns out, the physical can be a remarkable, renewable resource for fostering a happier, healthier life. There isn’t just one type of joy, and its definition morphs from person to person — but psychologists, broadly speaking, describe joy as an intense, momentary experience of positive emotion (or, simply, as something that makes you want to jump up and down). However, joy shouldn’t be conflated with happiness, which measures how good we feel over time. So, Lee asked around about what brings people joy and eventually had a notebook filled with things like beach balls, treehouses, fireworks, googly eyes and ice cream cones with rainbow sprinkles, and realized something significant: the patterns of joy have roots in evolutionary history. Things like symmetrical shapes, bright colors, an attraction to abundance and multiplicity, a feeling of lightness or elevation — this is what’s universally appealing. Joy lowers blood pressure, improves our immune system and even increases productivity. She began to wonder: should we use these aesthetics to help us find more opportunities for joy in the world around us? “Joy begins with the senses,” she says. “What we should be doing is embracing joy, and finding ways to put ourselves in the path of it more often.”

And that’s a wrap. Speaking of joy, Baratunde Thurston steps out to close this conference with a wrap that shouts out the diversity of this year’s audience but also nudges the un-diverse selection of topics: next year, he asks, instead of putting an African child on a slide, can we put her onstage to speak for herself? He winds together the themes of the week, from the terrifying — killer robots, octopus robots, genetically modified piglets — to the badass, the inspiring and the mind-opening. Are you not amazed?

CryptogramThe DMCA and its Chilling Effects on Research

The Center for Democracy and Technology has a good summary of the current state of the DMCA's chilling effects on security research.

To underline the nature of chilling effects on hacking and security research, CDT has worked to describe how tinkerers, hackers, and security researchers of all types both contribute to a baseline level of security in our digital environment and, in turn, are shaped themselves by this environment, most notably when things they do upset others and result in threats, potential lawsuits, and prosecution. We've published two reports (sponsored by the Hewlett Foundation and MacArthur Foundation) about needed reforms to the law and the myriad of ways that security research directly improves people's lives. To get a more complete picture, we wanted to talk to security researchers themselves and gauge the forces that shape their work; essentially, we wanted to "take the pulse" of the security research community.

Today, we are releasing a third report in service of this effort: "Taking the Pulse of Hacking: A Risk Basis for Security Research." We report findings after having interviewed a set of 20 security researchers and hackers -- half academic and half non-academic -- about what considerations they take into account when starting new projects or engaging in new work, as well as to what extent they or their colleagues have faced threats in the past that chilled their work. The results in our report show that a wide variety of constraints shape the work they do, from technical constraints to ethical boundaries to legal concerns, including the DMCA and especially the CFAA.

Note: I am a signatory on the letter supporting unrestricted security research.

Worse Than FailureCodeSOD: All the Things!

Yasmin needed to fetch some data from a database for a report. Specifically, she needed to get all the order data. All of it. No matter how much there was.

The required query might be long running, but it wouldn’t be complicated. By policy, every query needed to be implemented as a stored procedure. Yasmin, being a smart programmer, decided to check and see if anybody had already implemented a stored procedure which did what she needed. She found one called GetAllOrders. Perfect! She tested it in her report.

Yasmin expected 250,000 rows. She got 10.

She checked the implementation.

CREATE PROCEDURE initech.GetAllOrders
AS
BEGIN
    SELECT TOP 10
        orderId,
        orderNo,
        orderCode,
        …
    FROM initech.orders INNER JOIN…
END

In the original developer’s defense, at one point, when the company was very young, that might have returned all of the orders. And no, I didn’t elide the ORDER BY. There wasn’t one.
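
For contrast, a corrected procedure only needs to drop the TOP 10 (and, if a deterministic order matters for the report, add the missing ORDER BY). A minimal sketch, using just the columns visible in the snippet above and leaving out the elided join:

-- Hypothetical fixed version; the full column list and the join target were
-- truncated in the original snippet, so only the visible columns are shown.
CREATE PROCEDURE initech.GetAllOrders
AS
BEGIN
    -- No TOP clause: return every order, all 250,000+ of them.
    SELECT
        orderId,
        orderNo,
        orderCode
    FROM initech.orders
    ORDER BY orderId;  -- optional, but makes the result order predictable
END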

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

TEDPersonally speaking: Notes from Session 10 of TED2018

What does an illustrator’s life look like? Well, says Christoph Niemann, most of the time: this. He spoke at TED2018 on April 13, 2018, in Vancouver. Photo: Jason Redmond / TED

Sketches that speak volumes. When illustrator Christoph Niemann wakes up after falling asleep on an airplane, he says, “I have the most terrible taste in my mouth that cannot be described with words … But it can be drawn.” Then he shows a spot-on sketch of an outstretched tongue with a dead-fish-rat-hybrid creature on it. Trying to recap his intensely visual talk in words resembles his struggle, because this talk speaks largely through witty, whimsical drawings. Niemann believes all people are bilingual, “fluent in the language of reading images,” and most of our fluency comes organically. For example, while you might remember learning to read the words “men” and “women,” can you recall anyone explaining to you what the symbols on the doors of the bathroom meant? You just figured it out. People share a rich and common visual vocabulary, so Niemann likes to take “images from remote cultural areas and bring them together” — hence his putting the words “ceci n’est pas une pipe” in cursive above white iPhone earbuds. Using this collective lexicon, Niemann and other artists can communicate information, satirize people and ideas, express empathy, and make us laugh — all without words. In that way, he says, as deft as his drawings are, they’d be nothing without an audience. He says, “The real magic happens in the mind of the viewer.”

“Once your appliances can talk, who else will they talk to?” Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. To do this, Kashmir turned her San Francisco apartment into a full-fledged smart home, loading up on 18 different internet-connected appliances — including a “smart bed” that calculated her nightly “sleep score” to let her know if she was well-rested or not. Her colleague Surya built a special router to figure out how often the devices connected, who they were transmitting to, what they were transmitting — and what of that data could be sold. The results were surprising — and a little creepy. By poring over Kashmir’s family’s data, Surya could decipher their sleep schedules, TV binges and tooth-brushing habits. And while many appliances connected only for updates, the Amazon Echo connected shockingly often — every three minutes. All of this data can tell companies how rich or poor you are, whether or not you’re an insurance risk, and — perhaps worst of all — the state of your sex life. (A digital vibrator company was caught “data-mining their customers’ orgasms.”) All this may lead you to ask, as Surya does, “Who is the true beneficiary of your smart home? You, or the company mining you?”

Embrace the diversity within. Rebeca Hwang has spent a lifetime juggling identities (Korean heritage, Argentinian upbringing, educated in the United States), and for a long time she had difficulty finding a place in the world to call home. Then, one day, she had a pivotal realization: it was fruitless to search for total commonality with the people around her. Instead, she decided, she would embrace all the possible versions of herself — and the superpower it grants her to make connections with all kinds of people. Through thoughtful reinvention of her personhood, Hwang rid herself of constant anxiety by “cultivating diversity within me and not just around me.” In the wake of her personal revolution, she’s continued to live a multifaceted life and accept the endless advantages it brings. She hopes to raise her children, who are already growing up with a unique combination of backgrounds, to help create a world where identities are used not to alienate others but to bring people together.

Life after loss. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal (a best-selling children’s book author), wrote about their lives in a New York Times article read by millions of people. “You May Want to Marry My Husband” was a meditation on dying, disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public, and gave him an empty page on which he could inscribe the rest of his life. For Jason, “The key to my being able to persevere is Amy’s express and very public edict that I must go on.” But grief carries memories, especially of the process of dying itself. Amy chose home hospice, which gave her happiness — but Jason is honest about the complications it caused for the survivors, including the inevitable, indelible memory of when Jason carried a lifeless Amy “down our stairs, through our living and our dining room, to a waiting gurney to have her body cremated.” Jason’s salvation lay in Amy’s challenge to begin anew, which he shares with others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

An emotional reset. Many of us in the audience knew Amy Krouse Rosenthal, who had a key role in planning our TED Active conference; session co-host Juliet Blake asks for the house lights to dim for a moment to create some space for quiet reflection. Then the extraordinary violinist Lili Haydn steps onstage for a welcome musical interlude. Unaccompanied by a band, she performs an emotional and elegant piece of her original music, called “The Last Serenade.”

Can we help every employee be GRACED? Over the years, poet and trainer Tamekia MizLadi Smith has met her share of Miz Margarets — the longtime front-desk employee at a medical office who knows her job perfectly well and doesn’t take kindly to change. So when new rules for data collection come down from the top, and suddenly she needs to ask every patient to self-identify by gender (with 6 options!), by race (with even more options!) and nationality (with even more options still!!), it’s no wonder that Miz Margaret starts thinking about early retirement. But what if she knew that this data would be used to help her patients, not to stereotype them — to help the office speak more respectfully to people of all genders, to get research funding for under-served groups? Smith shares an acrostic poem on the letters G.R.A.C.E.D. that will inspire bosses, trainers and data collectors to think carefully about the front-line employees who’ll be asking for this data. Bottom line: Always let people know that (and how) their work matters.

A bank that helps women empower themselves. A few years after social entrepreneur Chetna Gala Sinha moved from Mumbai to a remote village in Maharashtra, India, she met a woman named Kantabai. She was a blacksmith who wanted to open a bank account to save her hard-earned money, but when Sinha accompanied her to the bank, she was turned down because they didn’t think her small savings rate was worth their effort or time. Sinha decided that if the bank wouldn’t open an account for poor women like Kantabai, she would start one that would – and the Mann Deshi Bank was born. Today, it has 100,000 accounts and has done over $20 million in business. Over the years, her women customers have consistently pushed Sinha to come up with better solutions to their needs, teaching her one of the biggest lessons she’s learned: “Never provide poor solutions to poor people.” She shares the stories of Kerabai, Sunita, and Sarita – other women like Kantabai who’ve inspired her over the years and profoundly influenced her work. “There are millions of women like Sarita, Kerabai, Sunita, who can be around you also, they can be all over the world, but at first glance, you may think that they do not have anything to say, they do not have anything to share. You would be so wrong,” she says. Encouraged by the women she works with, Sinha is now in the midst of creating the first fund for women micro entrepreneurs in India, and the first Small Finance Bank for women in the world.

Paging through the Chess Records catalog. “You can’t do Chuck Berry better than Chuck, or Fontella Bass better than Fontella,” says Elise LeGrow, but on her latest record, Playing Chess, the Canadian singer pays homage to these greats (and the American label Chess Records that produced them) with intimate, pared-down interpretations of their hits. On the TED stage, she and her band performed Chuck Berry’s “You Never Can Tell,” “Over the Mountain,” first popularized by Johnnie and Joe, and a slinky cover of Fontella Bass’s sensational “Rescue Me.”

Truth comes from the collision of ideas. Legendary artistic director Oskar Eustis closes session 10 with a beautiful message about the place of theater in modern (and ancient) life. Theater and democracy were born together in Athens in the late sixth century BCE, when the idea that power should stem from the consent of the governed — from below to above, not the other way around — was reshaping the world. At the same time, people were exploring how the truth can best emerge from the conflict between two points of view. Through dialogue, empathy with characters and the experience of watching a performance together with others in the audience, the theater and democracy become parts of a whole. Fast-forward 2,500 years to when Joseph Papp founded The Public Theater. Papp wanted everyone in America to be able to experience theater — so he created free Shakespeare in the Park, based on the idea that the best art that we can produce should be available for everybody. Over the next decades, The Public brought art to the people with plays like The Normal Heart, Chorus Line and Angels in America, among many others. In 2005, when Eustis took over artistic direction, he took Shakespeare in the Park on the road, bringing theater to the people and making it about them. With Hamilton, Lin-Manuel Miranda tapped into this idea of art for the people as well. “What Lin was doing is exactly what Shakespeare was doing — he was taking the language of the people, elevating it into verse and by doing so ennobling the language and ennobling the people who spoke the language.” But we need to go a step further on this form of inclusion, Eustis says, outlining his plan to reach (and listen to) people in places across the United States where the theater, like so many other institutions, has turned its back — like the de-industrialized Rust Belt. “Our job is to try to hold up a vision to America that shows not only who all of us are individually but that welds us back into the commonality that we need to be,” Eustis says.

Planet Linux AustraliaDavid Rowe: Testing HAB Telemetry Protocols

On Saturday Mark and I had a pleasant day bench testing High Altitude Balloon (HAB) Telemetry protocols and demodulators.

Project Horus HAB flights use a low power transmitter to send regular updates of the balloon’s position and status. To date, this has been sent using RTTY, and demodulated using Fldigi, or a special version modified for HAB work called dl-Fldigi.

LoRa is becoming common in HAB circles; however, I am confident we can do better using a custom protocol and well engineered – and, most importantly, open source – modems. While very well designed and conveniently packaged, LoRa is not magic – modem performance is defined by physics.

A few years ago, Mark and I developed and flight tested a binary protocol (Horus Binary) for HAB flights. We have dusted this off, and I’ve written a C callable API (horus_api.c) to make Horus RTTY and Binary easy to use. The plan is to release a cross-platform GUI application that supports Horus Binary, so anyone with an SSB receiver can join in the fun of tracking Horus flights using Horus Binary.

A good HAB telemetry protocol works at low SNRs, and has fast updates to allow accurate positioning of the payload during the final descent. A way of measuring the performance is Packet Error Rate (PER) – how many telemetry packets get through at a given Signal to Noise Ratio (SNR).

So we generated some synthetic Horus RTTY and Binary packets at calibrated SNRs using GNU Octave simulation code (fsk_horus.m), then played the wave files through several modems.

Here are the results (click for a larger version):

The X-axis is in Eb/No, which is proportional to SNR:

  SNR = Eb/No(dB) + 10*log10(Rb/BW)

where Rb is the bit rate and BW is the noise bandwidth you want to measure SNR in. Eb/No is handy as it normalises for the effect of bit rate and noise bandwidth, making modem comparison easier.
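
To make the conversion concrete, here is a quick sanity check with bc, using the Eb/No and Rb values from the table below and BW = 3000 Hz (the SSB bandwidth used for the SNR row); small differences from the table are just rounding:

  # Horus Binary: Eb/No = 4.5 dB, Rb = 200 bits/s
  echo "4.5 + 10 * l(200/3000) / l(10)" | bc -l    # approx -7.3 dB; the table lists -7.2
  # dl-Fldigi RTTY: Eb/No = 13.0 dB, Rb = 100 bits/s
  echo "13.0 + 10 * l(100/3000) / l(10)" | bc -l   # approx -1.8 dB; the table lists -1.7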

Protocol              dl-Fldigi RTTY   Fldigi RTTY   Horus RTTY   Horus Binary
Eb/No (50% PER, dB)   13.0             12.0          11.5         4.5
Rb (bits/s)           100              100           100          200
SNR in 3000 Hz (dB)   -1.7             -2.7          -3.2         -7.2
Packet Duration (s)   6                6             6            1.6
Wave File             Listen           Listen        Listen       Listen

Discussion

The older dl-Fldigi is a few dB behind the more modern Fldigi. Our Horus RTTY and especially Binary protocols are doing very well. At the same bit rate (Eb/No curve), Horus Binary is 9dB ahead of dl-Fldigi, which is a very useful gain; at least double the Line of Sight (LOS) range, and equivalent to having nearly 10x the transmit power. The Binary packets are fast as well, allowing for rapid position updates in the final descent.

Trade-offs are possible. For example, if we slowed Horus Binary to 50 bits/s, its packet duration would be 6.4s (about the same as RTTY); however, 50% PER would occur at an SNR of -13dB, a 15dB improvement over dl-Fldigi.
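
For reference, the same conversion reproduces these figures, assuming the Eb/No needed for 50% PER stays at roughly the same ~4.5 dB operating point when only the bit rate changes:

  # packet duration scales with 1/Rb: 1.6 s at 200 bits/s becomes 6.4 s at 50 bits/s
  echo "1.6 * 200 / 50" | bc -l
  # SNR in 3000 Hz at the same Eb/No operating point, but Rb = 50 bits/s
  echo "4.5 + 10 * l(50/3000) / l(10)" | bc -l    # approx -13.3 dB, consistent with the -13 dB quoted above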

Reading Further

Project Horus
Binary Telemetry Protocol
All Your Modem are Belong To Us
SNR and Eb/No Worked Example

Planet DebianHideki Yamane: Of course it runs NetBSD


“Of course it runs NetBSD”

Note: it runs a Debian squeeze chroot on NetBSD 7.1.2, with Linux emulation.

Planet DebianThorsten Glaser: mksh on Jehanne, a guest post by Shamar

Giacomo Tesio referenced mksh(1) in his annual Jehanne report and provided a guest post (dated 2018-01-09; sorry for only posting it this late) for us about his journey porting mksh to Jehanne, his Plan 9 derivative operating system. Read on for his story!

(read more…)

Planet DebianRuss Allbery: Free software log (March 2018)

I did get a few software releases out this month, although not as much as I'd planned and I still have a lot of new releases pending that are waiting for me to have a bit more free time.

control-archive got a 1.8.0 release, which catches up on accumulated changes over the past year plus and falls back to GnuPG v1 for signature processing. One of the projects that I'd like to find time for is redoing all of my scattered code for making and checking Usenet control messages.

DocKnot 1.03 adds more support for SPDX license identifiers, which I've started using in my projects, and then 1.04 was a quick bug fix release for something I broke in the test suite on Windows systems.

I also redid the Kerberos authentication plugin for INN to use modern Kerberos APIs, which cleared up some build issues when pointing at non-system Kerberos libraries.

Planet DebianDirk Eddelbuettel: #18: Adding Intel MKL easily via a simple script

Welcome to the eighteenth post in the rarely riveting R ramblings series of posts, or R4 for short.

The Intel Math Kernel Library (MKL) is a well-known high(er) performance math library tailored for Intel CPUs, offering best-in-class numerical performance on a number of low-level operations (BLAS, LAPACK, ...). It is not open source and used to be under commercial or research-only licenses --- but can now be had (still subject to license terms you should study) via apt-get (and even yum). This page describes the installation of the MKL (and other components) in detail (but stops short of the system integration aspect we show here).

Here we present one short script, discussed in detail below, to add the MKL to your Debian or Ubuntu system. Its main advantages are

  • clean standard code using package management tools;
  • additional steps to make it the system default; and
  • with an option for clean removal leaning again on the package management system.

We put the script and a README.md largely identical to this writeup into this GitHub repo where issues, comments, questions, ... should be filed.

MKL for .deb-based systems: An easy recipe

This post describes how to easily install the Intel Math Kernel Library (MKL) on a Debian or Ubuntu system. Very good basic documentation is provided by Intel at their site. The discussion here is more narrow as it focusses just on the Math Kernel Library (MKL).

The tl;dr version: Use this script which contains the commands described here.

First Step: Set up apt

We download the GnuPG key first and add it to the keyring:

cd /tmp
wget https://apt.repos.intel.com/intel-gpg-keys/GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB
apt-key add GPG-PUB-KEY-INTEL-SW-PRODUCTS-2019.PUB

To add all Intel products we would run the first (commented-out) command below, but here we focus just on the MKL. The website above lists other suboptions (TBB, DAAL, MPI, ...).

## all products:
#wget https://apt.repos.intel.com/setup/intelproducts.list -O /etc/apt/sources.list.d/intelproducts.list

## just MKL
sh -c 'echo deb https://apt.repos.intel.com/mkl all main > /etc/apt/sources.list.d/intel-mkl.list'

We then update our lists of what is available in the repositories.

apt-get update

As a personal aside, I still use the awesome wajig frontend to dpkg, apt and more by Graham Williams (of rattle fame). Among other tricks, wajig keeps state and therefore "knows" what packages are new. Here, we see a lot:

edd@rob:/tmp$ wajig update
Hit:1 http://us.archive.ubuntu.com/ubuntu artful InRelease
Ign:2 http://dl.google.com/linux/chrome/deb stable InRelease
Hit:3 http://us.archive.ubuntu.com/ubuntu artful-updates InRelease
Hit:4 https://download.docker.com/linux/ubuntu artful InRelease
Hit:5 http://us.archive.ubuntu.com/ubuntu artful-backports InRelease
Ign:6 https://cloud.r-project.org/bin/linux/ubuntu artful/ InRelease
Hit:7 https://cloud.r-project.org/bin/linux/ubuntu artful/ Release
Hit:8 http://security.ubuntu.com/ubuntu artful-security InRelease
Hit:9 https://apt.repos.intel.com/mkl all InRelease
Hit:10 http://dl.google.com/linux/chrome/deb stable Release
Hit:12 https://packagecloud.io/slacktechnologies/slack/debian jessie InRelease
Reading package lists... Done
This is 367 up on the previous count with 367 new packages.
edd@rob:/tmp$ wajig new
Package                  Description
========================-===================================================
intel-mkl-gnu-f-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-c-196      Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-cluster-c-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-cluster-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-239        Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-32bit-jp-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-doc-ps-2018    Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-pgi-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-ss-tbb-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-mic-cluster-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-jp-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-ps-mic-rt-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-ps-libs-jp-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-common-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-f95-mic-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-f95-common-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-psxe-common-2018.2-046 Intel(R) Parallel Studio XE 2018 Update 2 for Linux*
intel-mkl-ps-mic-cluster-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-cluster-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-f95-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-rt-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-2018.0-128 Intel(R) Threading Building Blocks 2018 for Linux*
intel-comp-l-all-vars-196 Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-common-ps-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-pgi-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-f95-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-openmp-18.0.0-128  OpenMP for Intel(R) Compilers 18.0 for Linux*
intel-mkl-common-c-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-ps-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-f95-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-f95-mic-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-common-c-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-cluster-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-doc-f-jp    Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-f-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-32bit-2018.1-038 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-common-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-openmp-l-all-196   OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-pgi-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-common-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-comp-nomcu-vars-18.0.0-128 Intel(R) Compilers 18.0 for Linux*
intel-mkl-common-c-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-f-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-c-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-32bit-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-common-ps-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-cluster-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-common-c-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-ss-tbb-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-32bit-2018.0-128 Intel(R) Threading Building Blocks 2018 for Linux*
intel-mkl-gnu-c-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-tbb-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-tbb-libs-2018.1-163 Intel(R) Threading Building Blocks 2018 Update 1 for Linux*
intel-mkl-ps-common-f-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-196            Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-pgi-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-psxe-2018.2-046 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-doc-c          Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-f95-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-cluster-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-174       Intel(R) Threading Building Blocks 2017 Update 4 for Linux*
intel-comp-l-all-vars-174 Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-gnu-f-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-rt-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-ps-libs-32bit-jp-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-gnu-f-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-18.0.1-163  OpenMP for Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-ps-cluster-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-pgi-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-2018.2-046     Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-rt-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-rt-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-comp-l-all-vars-18.0.0-128 Intel(R) Compilers 18.0 for Linux*
intel-mkl-ps-common-jp-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-32bit-18.0.0-128 OpenMP for Intel(R) Compilers 18.0 for Linux*
intel-mkl-f95-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-core-ps-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-gnu-f-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-tbb-mic-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-psxe-2018.1-038 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-64bit-2017.4-061 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-f95-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-mic-rt-jp-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-196        Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-psxe-common-doc-2018 Intel(R) Parallel Studio XE 2018 Update 2 for Linux*
intel-mkl-ps-tbb-mic-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-core-c-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-cluster-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-rt-jp-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-psxe-050       Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-64bit-2018.2-046 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-tbb-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-doc-f       Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-c-174      Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-f95-common-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-rt-jp-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-pgi-f-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-32bit-2018.1-163 Intel(R) Threading Building Blocks 2018 Update 1 for Linux*
intel-mkl-common-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-64bit-2018.0-033 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-mic-c-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-ss-tbb-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-32bit-18.0.2-199 OpenMP for Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-ps-rt-jp-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-f-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-common-jp-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-tbb-mic-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-psxe-common-061    Intel(R) Parallel Studio XE 2017 Update 5 for Linux*
intel-mkl-gnu-rt-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-f-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-mic-cluster-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-common-f-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-tbb-libs-196       Intel(R) Threading Building Blocks 2017 Update 6 for Linux*
intel-mkl-cluster-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-cluster-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-pgi-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-ss-tbb-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-all-174   OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-tbb-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-pgi-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-64bit-2018.1-038 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-f95-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-gnu-c-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-239            Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-rt-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-f95-common-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-f-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-cluster-common-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-cluster-f-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-common-jp-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-openmp-l-all-32bit-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-tbb-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-psxe-common-056    Intel(R) Parallel Studio XE 2017 Update 4 for Linux*
intel-mkl-32bit-2018.0-033 Intel(R) Math Kernel Library 2018 for Linux*
intel-comp-l-all-vars-18.0.2-199 Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-common-ps-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-core-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-common-c-ps-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-gnu-f-rt-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-f95-common-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-openmp-l-all-239   OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-174            Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-f-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-tbb-libs-239       Intel(R) Threading Building Blocks 2017 Update 8 for Linux*
intel-mkl-common-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-common-f-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-common-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-comp-nomcu-vars-18.0.2-199 Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-32bit-196      Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-ps-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-rt-174         Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-common-f-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-cluster-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-cluster-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-doc         Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-rt-32bit-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-ps-cluster-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-64bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-pgi-c-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-gnu-f-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-f-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-f-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-common-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-rt-jp-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-common-c-64bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-rt-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-c-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-f-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-32bit-239      Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-rt-jp-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-tbb-libs-2018.2-199 Intel(R) Threading Building Blocks 2018 Update 2 for Linux*
intel-mkl-f95-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-l-ps-libs-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-rt-239         Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-c-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-l-all-32bit-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-ps-pgi-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-gnu-f-rt-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-tbb-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-rt-32bit-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-mic-196     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-gnu-f-rt-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-32bit-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-c-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-f-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-f95-common-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-mic-f-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-ps-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-cluster-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-64bit-2017.3-056 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-ss-tbb-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-32bit-2017.4-061 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-tbb-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-64bit-2017.2-050 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-rt-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-cluster-c-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-c-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-f-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-c-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-common-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-2017.3-056     Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-tbb-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-pgi-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-tbb-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-openmp-l-all-32bit-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-rt-196         Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-cluster-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-2017.4-061     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-common-c-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-psxe-common-doc    Intel(R) Parallel Studio XE 2017 Update 5 for Linux*
intel-tbb-libs-32bit-2018.2-199 Intel(R) Threading Building Blocks 2018 Update 2 for Linux*
intel-mkl-2017.2-050     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-tbb-mic-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-2018.1-038     Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-c-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-ps-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-tbb-mic-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-tbb-rt-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-cluster-f-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-psxe-061       Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-ss-tbb-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-mic-rt-jp-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-common-c-ps-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-doc-jp      Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-common-f-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-c-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-cluster-rt-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-mic-rt-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-l-ps-libs-196 OpenMP for Intel(R) Compilers 17.0 Update 4 for Linux*
intel-mkl-common-c-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-32bit-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-openmp-18.0.2-199  OpenMP for Intel(R) Compilers 18.0 Update 2 for Linux*
intel-mkl-ps-common-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-rt-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-openmp-32bit-18.0.1-163 OpenMP for Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-ps-pgi-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-comp-l-all-vars-239 Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-ps-mic-c-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-f-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-ss-tbb-174  Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-mic-c-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-rt-jp-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-psxe-common-2018.0-033 Intel(R) Parallel Studio XE 2018 for Linux*
intel-mkl-ps-f95-mic-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-common-c-64bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-psxe-056       Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-core-c-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-c-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-comp-l-all-vars-18.0.1-163 Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-psxe-2018.0-033 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-f95-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-openmp-l-ps-libs-jp-174 OpenMP for Intel(R) Compilers 17.0 Update 2 for Linux*
intel-mkl-tbb-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-f-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-2018.0-033     Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-ps-f95-239     Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-doc            Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-common-c-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-common-f-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-gnu-f-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-cluster-c-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-common-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-tbb-mic-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-sta-common-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-32bit-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-32bit-2018.2-046 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-f-196   Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-c-239      Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-doc-c-jp    Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-rt-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-doc-2018       Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-pgi-c-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-core-rt-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-common-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-f95-common-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-cluster-c-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-cluster-rt-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-cluster-f-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-f-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-common-c-ps-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-cluster-common-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-32bit-2017.3-056 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-core-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-gnu-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-eula-174       Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-ss-tbb-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-pgi-rt-196  Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-gnu-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-psxe-common-2018.1-038 Intel(R) Parallel Studio XE 2018 Update 1 for Linux*
intel-mkl-pgi-f-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-core-ps-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-common-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-rt-jp-239   Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-ps-pgi-rt-239  Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-32bit-2017.2-050 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-f-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-pgi-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-tbb-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-comp-nomcu-vars-18.0.1-163 Intel(R) Compilers 18.0 Update 1 for Linux*
intel-mkl-common-f-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-tbb-rt-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-common-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-gnu-c-32bit-2018.2-199 Intel(R) Math Kernel Library 2018 Update 2 for Linux*
intel-mkl-ps-mic-cluster-rt-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-f95-174     Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-core-2018.1-163 Intel(R) Math Kernel Library 2018 Update 1 for Linux*
intel-mkl-ps-gnu-f-32bit-196 Intel(R) Math Kernel Library 2017 Update 3 for Linux*
intel-mkl-ps-f95-32bit-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-ps-mic-cluster-174 Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-psxe-common-050    Intel(R) Parallel Studio XE 2017 Update 2 for Linux*
intel-mkl-cluster-c-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-rt-32bit-174   Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-mkl-32bit-174      Intel(R) Math Kernel Library 2017 Update 2 for Linux*
intel-openmp-l-ps-libs-32bit-jp-239 OpenMP for Intel(R) Compilers 17.0 Update 5 for Linux*
intel-mkl-ps-ss-tbb-rt-32bit-239 Intel(R) Math Kernel Library 2017 Update 4 for Linux*
intel-mkl-gnu-f-rt-32bit-2018.0-128 Intel(R) Math Kernel Library 2018 for Linux*
intel-mkl-gnu-174        Intel(R) Math Kernel Library 2017 Update 2 for Linux*
edd@rob:/tmp$

Install MKL

Now that we have everything set up, installing the MKL is as simple as:

apt-get install intel-mkl-64bit-2018.2-046

This picks the 64-bit only variant of the (currently) most recent builds.

There is a slight cost: a 500 MB download of 39 packages which install to 1.9 GB! Other than that it is easy: one command! Compare that with the days of yore when we fetched shar archives of NETLIB...

Integrate MKL

One of the key advantages of a Debian or Ubuntu system is the overall integration providing a raft of useful features. One of these is the seamless and automatic selection of alternatives. By declaring a particular set of BLAS and LAPACK libraries the default, all applications linked against this interface will use the default. Better still, users can switch between these as well (as shown below).

So here we can make the MKL default for BLAS and LAPACK:

## update alternatives
update-alternatives --install /usr/lib/x86_64-linux-gnu/libblas.so     \
                    libblas.so-x86_64-linux-gnu      /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
update-alternatives --install /usr/lib/x86_64-linux-gnu/libblas.so.3   \
                    libblas.so.3-x86_64-linux-gnu    /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
update-alternatives --install /usr/lib/x86_64-linux-gnu/liblapack.so   \
                    liblapack.so-x86_64-linux-gnu    /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
update-alternatives --install /usr/lib/x86_64-linux-gnu/liblapack.so.3 \
                    liblapack.so.3-x86_64-linux-gnu  /opt/intel/mkl/lib/intel64/libmkl_rt.so 50
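
Because these are registered through the alternatives system, you can later inspect the selection or switch it back (for example to the reference BLAS) interactively. A minimal example, using the alternative names registered above:

## show and interactively switch the BLAS / LAPACK providers
update-alternatives --display libblas.so.3-x86_64-linux-gnu
update-alternatives --config  libblas.so.3-x86_64-linux-gnu
update-alternatives --config  liblapack.so.3-x86_64-linux-gnu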

Next, we have to tell the dynamic linker about two directories used by the MKL, and have it update its cache:

echo "/opt/intel/lib/intel64"     >  /etc/ld.so.conf.d/mkl.conf
echo "/opt/intel/mkl/lib/intel64" >> /etc/ld.so.conf.d/mkl.conf
ldconfig

Use the MKL

Now the MKL is 'known' and the default. If we start R, its sessionInfo() shows the MKL:

# Matrix products: default                            
# BLAS/LAPACK: /opt/intel/compilers_and_libraries_2018.2.199/linux/mkl/lib/intel64_lin/libmkl_rt.so        

Benchmarks

# Vanilla r-base Rocker with default reference BLAS 
> n <- 1e3 ; X <- matrix(rnorm(n*n),n,n);  system.time(svd(X)) 
   user  system elapsed 
  2.239   0.004   2.266 
> 

# OpenBlas added to r-base Rocker
>  n <- 1e3 ; X <- matrix(rnorm(n*n),n,n);  system.time(svd(X)) 
   user  system elapsed 
  1.367   2.297   0.353 
> 

# MKL added to r-base Rocker
> n <- 1e3 ; X <- matrix(rnorm(n*n),n,n)  
> system.time(svd(X))                               
   user  system elapsed                             
  1.772   0.056   0.350                             
>  

So just R (with reference BLAS) is slow. (Using Docker is done here to have clean comparisons while not altering the outer host system; the impact of running Docker on Linux should be minimal.) Adding OpenBLAS helps quite a bit already by offering multi-core processing, and MKL does not yet improve materially over OpenBLAS in this quick test. Now, this of course was not any serious benchmarking---we just ran one SVD. More to do as time permits...

Removal, if needed

Another rather nice benefit of the package management is that clean removal is also possible:

root@c9f8062fbd93:/tmp# apt-get autoremove intel-mkl-64bit-2018.2-046
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following packages will be REMOVED:
  intel-comp-l-all-vars-18.0.2-199 intel-comp-nomcu-vars-18.0.2-199 intel-mkl-64bit-2018.2-046 
  intel-mkl-cluster-2018.2-199 intel-mkl-cluster-c-2018.2-199 intel-mkl-cluster-common-2018.2-199 
  intel-mkl-cluster-f-2018.2-199 intel-mkl-cluster-rt-2018.2-199 intel-mkl-common-2018.2-199 
  intel-mkl-common-c-2018.2-199 intel-mkl-common-c-ps-2018.2-199 intel-mkl-common-f-2018.2-199 
  intel-mkl-common-ps-2018.2-199 intel-mkl-core-2018.2-199 intel-mkl-core-c-2018.2-199 
  intel-mkl-core-f-2018.2-199 intel-mkl-core-ps-2018.2-199 intel-mkl-core-rt-2018.2-199 
  intel-mkl-doc-2018 intel-mkl-doc-ps-2018 intel-mkl-f95-2018.2-199 intel-mkl-f95-common-2018.2-199 
  intel-mkl-gnu-2018.2-199 intel-mkl-gnu-c-2018.2-199 intel-mkl-gnu-f-2018.2-199 intel-mkl-gnu-f-rt-2018.2-199 
  intel-mkl-gnu-rt-2018.2-199 intel-mkl-pgi-2018.2-199 intel-mkl-pgi-c-2018.2-199 intel-mkl-pgi-f-2018.2-199 
  intel-mkl-pgi-rt-2018.2-199 intel-mkl-psxe-2018.2-046 intel-mkl-tbb-2018.2-199 intel-mkl-tbb-rt-2018.2-199 
  intel-openmp-18.0.2-199 intel-psxe-common-2018.2-046 intel-psxe-common-doc-2018 intel-tbb-libs-2018.2-199 
  intel-tbb-libs-32bit-2018.2-199 libisl15
0 upgraded, 0 newly installed, 40 to remove and 0 not upgraded.
After this operation, 1,904 kB disk space will be freed.
Do you want to continue? [Y/n] n                    
Abort.                                              
root@c9f8062fbd93:/tmp#  

where we said 'no' just to illustrate the option.

Summary

Package management systems are fabulous. Kudos to Intel for supporting apt (and also yum in case you are on an rpm-based system). We can install the MKL with just a few commands (which we regrouped in this script).

The MKL has a serious footprint with an installed size of just under 2 GB. But for those doing extended amounts of numerical analysis, installing this library may well be worth it.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianHideki Yamane: Update desktop components for released version

I found that the RHEL 7.5 desktop has been rebased to GNOME 3.26. I hope a Debian stable release could do such a thing, so what's the blocker for it?

Planet Linux AustraliaMichael Still: On Selecting a Well Engaged Open Source Vendor

Aptira is in an interesting position in the Open Source market, because we don’t usually sell software. Instead, our customers come to us seeking assistance with deciding which OpenStack to use, or how to embed ONAP into their nationwide networks, or how to move their legacy networks to the software defined future. Therefore, our most common role is as a trusted advisor to help our customers decide which Open Source products to buy.

(My boss would insist that I point out here that we do customisation of Open Source for our customers, and have assisted many in the past with deploying pure upstream solutions. Basically, we do what is the right fit for the customer, and aren’t obsessed with fitting customers into pre-defined moulds that suit our partners.)

That makes it important that we recommend products from companies that are well engaged with their upstream Open Source communities. That might be OpenStack, or ONAP, or even something like Open Daylight. This raises the obvious question – what makes a company well engaged with an upstream project?

Read more over at my employer’s blog

Planet Linux AustraliaMichael Still: Escaping from blosxom

I’ve been running my personal blog on a very hacked version of blosxom for a hilariously long time, and it’s time to escape. I’ve therefore started converting all of the content to WordPress here, and will eventually redirect the old domain here as well.

Why blogging when it’s so 2000? I’m increasingly disinterested in social media like Facebook and Twitter. I figure if I’m going to note something down that looks like it might be useful to others, I’ll put it on ye olde blog instead.

I’m sure the conversion isn’t perfect, and I’ve decided not to migrate very old content that is simply not interesting any more (Linux kernel patches from 2004, for example). If you find a post which has converted badly, just comment on it and I’ll do something about it. I am very sure that pretty much no one will do that thing, however.

Planet Linux AustraliaMichael Still: Hugo nominees for 2018

Lifehacker kindly pointed out that the Hugo nominees are out for 2018. They are:

  • The Collapsing Empire, by John Scalzi. I’ve read this one and liked it.
  • New York 2140, by Kim Stanley Robinson. I’ve had a difficult time with Kim’s work in the past, but perhaps I’ll one day read this.
  • Provenance, by Ann Leckie. I liked Ancillary Justice, but failed to fully read the sequel, so I guess we’ll wait and see on this one.
  • Raven Stratagem, by Yoon Ha Lee. I know nothing!
  • Six Wakes, by Mur Lafferty. Again, I know nothing about this book or this author.

So a few there to consider in the future.

Planet Linux AustraliaMichael Still: Giving serial devices meaningful names

This is a hack I’ve been using for ages, but I thought it deserved a write up.

I have USB serial devices. Lots of them. I use them for home automation things, as well as for talking to devices such as the console ports on switches and so forth. For the permanently installed serial devices one of the challenges is having them show up in predictable places so that the scripts which know how to drive each device are talking in the right place.

For the trivial case, this is pretty easy with udev:

$  cat /etc/udev/rules.d/60-local.rules
KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", \
    ATTRS{serial}=="A8003Ye7", \
    SYMLINK+="radish"

This says for any USB serial device that is discovered (either inserted post boot, or at boot), if the USB vendor and product ID match the relevant values, to symlink the device to “/dev/radish”.

You find out the vendor and product ID from lsusb like this:

$ lsusb
Bus 003 Device 003: ID 0624:0201 Avocent Corp.
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 007 Device 002: ID 0665:5161 Cypress Semiconductor USB to Serial
Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 002: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

You can play with inserting and removing the device to determine which of these entries is the device you care about.
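
If you’d rather not guess, udevadm (part of udev/systemd) can show the events and attributes directly; the device node below is just an example:

# watch udev events for serial adapters while plugging them in and out
udevadm monitor --udev --subsystem-match=tty
# walk the attributes udev sees for a given device node (vendor, product, serial, ...)
udevadm info -a -n /dev/ttyUSB0
# after editing rules, reload them and re-trigger
udevadm control --reload-rules
udevadm trigger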

So that’s great, until you have more than one device with the same USB serial vendor and product id. Then things are a bit more… difficult.

It turns out that you can have udev execute a command on device insert to help you determine what symlink to create. So for example, I have this entry in the rules on one of my machines:

KERNEL=="ttyUSB*", \
    ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", \
    PROGRAM="/usr/bin/usbtest /dev/%k", \
    SYMLINK+="%c"

This results in /usr/bin/usbtest being run with the path of the device file on its command line for every device detection (of a matching device). The stdout of that program is then used as the name of a symlink in /dev.

So, that script attempts to talk to the device and determine what it is — in my case either a currentcost or a solar panel inverter.
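
The actual usbtest script isn’t shown in this post; purely as an illustration of the mechanism, a hypothetical minimal version (the probe command, baud rate and reply strings are all made up, not taken from the real script) could look something like this:

#!/bin/sh
# Hypothetical probe helper: udev passes the device node as $1 and uses our
# stdout as the symlink name. The probe details below are illustrative only.
DEV="$1"
stty -F "$DEV" 9600 raw -echo                       # assumed line settings
printf 'ID?\r' > "$DEV"                             # assumed identification command
REPLY="$(timeout 2 head -c 64 "$DEV" 2>/dev/null)"  # read a short reply, give up after 2s
case "$REPLY" in
  *currentcost*|*CURRENTCOST*) echo currentcost ;;
  *inverter*|*INVERTER*)       echo inverter ;;
  *)                           echo "unknown-$(basename "$DEV")" ;;
esac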

Planet Linux AustraliaMichael Still: Configuring docker to use rexray and Ceph for persistent storage

For various reasons I wanted to play with docker containers backed by persistent Ceph storage. rexray seemed like the way to do that, so here are my notes on getting that working…

First off, I needed to install rexray:

    root@labosa:~/rexray# curl -sSL https://dl.bintray.com/emccode/rexray/install | sh
    Selecting previously unselected package rexray.
    (Reading database ... 177547 files and directories currently installed.)
    Preparing to unpack rexray_0.9.0-1_amd64.deb ...
    Unpacking rexray (0.9.0-1) ...
    Setting up rexray (0.9.0-1) ...
    
    rexray has been installed to /usr/bin/rexray
    
    REX-Ray
    -------
    Binary: /usr/bin/rexray
    Flavor: client+agent+controller
    SemVer: 0.9.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    Formed: Thu, 04 May 2017 07:38:11 AEST
    
    libStorage
    ----------
    SemVer: 0.6.0
    OsArch: Linux-x86_64
    Branch: v0.9.0
    Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    Formed: Thu, 04 May 2017 07:36:11 AEST
    

Which is of course horrid. What that script seems to have done is install a deb’d version of rexray based on an alien’d package:

    root@labosa:~/rexray# dpkg -s rexray
    Package: rexray
    Status: install ok installed
    Priority: extra
    Section: alien
    Installed-Size: 36140
    Maintainer: Travis CI User <travis@testing-gce-7fbf00fc-f7cd-4e37-a584-810c64fdeeb1>
    Architecture: amd64
    Version: 0.9.0-1
    Depends: libc6 (>= 2.3.2)
    Description: Tool for managing remote & local storage.
     A guest based storage introspection tool that
     allows local visibility and management from cloud
     and storage platforms.
     .
     (Converted from a rpm package by alien version 8.86.)
    

If I was building anything more than a test environment I think I’d want to do a better job of installing rexray than this, so you’ve been warned.

Next, we configure rexray to use Ceph. The configuration details are cunningly hidden in the libstorage docs, and aren’t mentioned at all in the rexray docs, so you probably want to take a look at the libstorage docs on ceph. First off, we need to install the ceph tools, and copy the ceph authentication information from the ceph we installed using openstack-ansible earlier.

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# scp -rp 172.29.239.114:/etc/ceph .
    The authenticity of host '172.29.239.114 (172.29.239.114)' can't be established.
    ECDSA key fingerprint is SHA256:SA6U2fuXyVbsVJIoCEHL+qlQ3xEIda/MDOnHOZbgtnE.
    Are you sure you want to continue connecting (yes/no)? yes
    Warning: Permanently added '172.29.239.114' (ECDSA) to the list of known hosts.
    rbdmap                       100%   92     0.1KB/s   00:00
    ceph.conf                    100%  681     0.7KB/s   00:00
    ceph.client.admin.keyring    100%   63     0.1KB/s   00:00
    ceph.client.glance.keyring   100%   64     0.1KB/s   00:00
    ceph.client.cinder.keyring   100%   64     0.1KB/s   00:00
    ceph.client.cinder-backup.keyring   71     0.1KB/s   00:00
    root@labosa:/etc# modprobe rbd
    

You also need to configure rexray. My first attempt looked like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: ceph
    

And the rexray output sure made it look like it worked…

    root@labosa:/etc# rexray service start
    ● rexray.service - rexray
       Loaded: loaded (/etc/systemd/system/rexray.service; enabled; vendor preset: enabled)
       Active: active (running) since Mon 2017-05-29 10:14:07 AEST; 33ms ago
     Main PID: 477423 (rexray)
        Tasks: 5
       Memory: 1.5M
          CPU: 9ms
       CGroup: /system.slice/rexray.service
               └─477423 /usr/bin/rexray start -f
    
    May 29 10:14:07 labosa systemd[1]: Started rexray.
    

Which looked good, but /var/log/syslog said:

    May 29 10:14:08 labosa rexray[477423]: REX-Ray
    May 29 10:14:08 labosa rexray[477423]: -------
    May 29 10:14:08 labosa rexray[477423]: Binary: /usr/bin/rexray
    May 29 10:14:08 labosa rexray[477423]: Flavor: client+agent+controller
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.9.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: 2a7458dd90a79c673463e14094377baf9fc8695e
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:38:11 AEST
    May 29 10:14:08 labosa rexray[477423]: libStorage
    May 29 10:14:08 labosa rexray[477423]: ----------
    May 29 10:14:08 labosa rexray[477423]: SemVer: 0.6.0
    May 29 10:14:08 labosa rexray[477423]: OsArch: Linux-x86_64
    May 29 10:14:08 labosa rexray[477423]: Branch: v0.9.0
    May 29 10:14:08 labosa rexray[477423]: Commit: fa055d6da595602715bdfd5541b4aa6d4dcbcbd9
    May 29 10:14:08 labosa rexray[477423]: Formed: Thu, 04 May 2017 07:36:11 AEST
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting libStorage server" error.driver=ceph time=1496016848215
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="default module(s) failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="daemon failed to initialize" error.driver=ceph time=1496016848216
    May 29 10:14:08 labosa rexray[477423]: time="2017-05-29T10:14:08+10:00" level=error
    msg="error starting rex-ray" error.driver=ceph time=1496016848216
    

That’s because the service is called rbd it seems. So, the config file ended up looking like this:

    root@labosa:/var/log# cat /etc/rexray/config.yml
    libstorage:
      service: rbd
    
    rbd:
      defaultPool: rbd
    

Now to install docker:

    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install linux-image-extra-$(uname -r) \
        linux-image-extra-virtual
    root@labosa:/var/log# sudo apt-get install apt-transport-https \
        ca-certificates curl software-properties-common
    root@labosa:/var/log# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    root@labosa:/var/log# sudo add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
        $(lsb_release -cs) \
        stable"
    root@labosa:/var/log# sudo apt-get update
    root@labosa:/var/log# sudo apt-get install docker-ce
    

Now let’s make a rexray volume.

    root@labosa:/var/log# rexray volume ls
    ID  Name  Status  Size
    root@labosa:/var/log# docker volume create --driver=rexray --name=mysql \
        --opt=size=1
    (A size of 1 here means 1GB.)
    mysql
    root@labosa:/var/log# rexray volume ls
    ID         Name   Status     Size
    rbd.mysql  mysql  available  1
    

Let’s start the container.

    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    Unable to find image 'mysql:latest' locally
    latest: Pulling from library/mysql
    10a267c67f42: Pull complete
    c2dcc7bb2a88: Pull complete
    17e7a0445698: Pull complete
    9a61839a176f: Pull complete
    a1033d2f1825: Pull complete
    0d6792140dcc: Pull complete
    cd3adf03d6e6: Pull complete
    d79d216fd92b: Pull complete
    b3c25bdeb4f4: Pull complete
    02556e8f331f: Pull complete
    4bed508a9e77: Pull complete
    Digest: sha256:2f4b1900c0ee53f344564db8d85733bd8d70b0a78cd00e6d92dc107224fc84a5
    Status: Downloaded newer image for mysql:latest
    ccc251e6322dac504e978f4b95b3787517500de61eb251017cc0b7fd878c190b
    

And now to prove that persistence works and that there’s nothing up my sleeve…

    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" \
        -P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    +--------------------+
    4 rows in set (0.00 sec)
    
    mysql> create database demo;
    Query OK, 1 row affected (0.03 sec)
    
    mysql> use demo;
    Database changed
    mysql> create table foo(val char(5));
    Query OK, 0 rows affected (0.14 sec)
    
    mysql> insert into foo(val) values ('a'), ('b'), ('c');
    Query OK, 3 rows affected (0.08 sec)
    Records: 3  Duplicates: 0  Warnings: 0
    
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
    

Now let’s re-create the container and prove the data remains.

    root@labosa:/var/log# docker stop some-mysql
    some-mysql
    root@labosa:/var/log# docker rm some-mysql
    some-mysql
    root@labosa:/var/log# docker run --name some-mysql --volume-driver=rexray \
        -v mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=my-secret-pw -d mysql
    99a7ccae1ad1865eb1bcc8c757251903dd2f1ac7d3ce4e365b5cdf94f539fe05
    
    root@labosa:/var/log# docker run -it --link some-mysql:mysql --rm mysql \
        sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -\
        P"$MYSQL_PORT_3306_TCP_PORT" -uroot -p"$MYSQL_ENV_MYSQL_ROOT_PASSWORD"'
    mysql: [Warning] Using a password on the command line interface can be insecure.
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 3
    Server version: 5.7.18 MySQL Community Server (GPL)
    
    Copyright (c) 2000, 2017, Oracle and/or its affiliates. All rights reserved.
    
    Oracle is a registered trademark of Oracle Corporation and/or its
    affiliates. Other names may be trademarks of their respective
    owners.
    
    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
    
    mysql> use demo;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A
    
    Database changed
    mysql> select * from foo;
    +------+
    | val  |
    +------+
    | a    |
    | b    |
    | c    |
    +------+
    3 rows in set (0.00 sec)
    

So there you go.


Planet Linux AustraliaMichael Still: I think I found a bug in python’s unittest.mock library


Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we’ve used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that Python mocks are magical. It’s an object where you can call any method name, and the mock will happily pretend it has that method, and return None. You can then later ask what “methods” were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein lies the problem: the mock object doesn’t know if you’re the code under test, or the code that’s making assertions. So, if you fat-finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your code will pass.

Here’s an example:

#!/usr/bin/python3

from unittest import mock

class foo(object):
    def dummy(self, a, b):
        return a + b

@mock.patch.object(foo, 'dummy')
def call_dummy(mock_dummy):
    f = foo()
    f.dummy(1, 2)

    print('Asserting a call should work if the call was made')
    mock_dummy.assert_has_calls([mock.call(1, 2)])
    print('Assertion for expected call passed')

    print()
    print('Asserting a call should raise an exception if the call wasn\'t made')
    mock_worked = False
    try:
        mock_dummy.assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)

    if not mock_worked:
        print('*** Assertion should have failed ***')

    print()
    print('Asserting a call where the assertion has a typo should fail, but '
          'doesn\'t')
    mock_worked = False
    try:
        mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
    except AssertionError as e:
        mock_worked = True
        print('Expected failure, %s' % e)
        print()

    if not mock_worked:
        print('*** Assertion should have failed ***')
        print(mock_dummy.mock_calls)
        print()

if __name__ == '__main__':
    call_dummy()

If I run that code, I get this:

$ python3 mock_assert_errors.py
Asserting a call should work if the call was made
Assertion for expected call passed

Asserting a call should raise an exception if the call wasn't made
Expected failure, Calls not found.
Expected: [call(3, 4)]
Actual: [call(1, 2)]

Asserting a call where the assertion has a typo should fail, but doesn't
*** Assertion should have failed ***
[call(1, 2), call.typo_assert_has_calls([call(3, 4)])]

So, we should have been told that typo_assert_has_calls isn’t a thing, but we didn’t notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don’t really have a solution to this right now (I’m home sick and not thinking straight), but it would be interesting to see what other people think.
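
One possible mitigation, assuming Python 3.7 or later is available, is unittest.mock.seal: sealing a mock stops it from silently growing new attributes, so a typoed assertion method raises AttributeError instead of quietly passing. A minimal sketch of that approach:

#!/usr/bin/python3

from unittest import mock

mock_dummy = mock.MagicMock()

# The code under test exercises the mock as usual.
mock_dummy(1, 2)

# Seal the mock before making assertions. From this point on, accessing an
# attribute that doesn't already exist raises AttributeError rather than
# returning a new child mock.
mock.seal(mock_dummy)

# Real assertion methods still work...
mock_dummy.assert_has_calls([mock.call(1, 2)])

# ...but a typo now blows up instead of silently "passing".
try:
    mock_dummy.typo_assert_has_calls([mock.call(1, 2)])
except AttributeError as e:
    print('Typo caught: %s' % e)

Sealing after the calls have been made but before the assertions leaves the behaviour of the code under test unchanged.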


Planet Linux AustraliaMichael Still: The Collapsing Empire


This is a fun fast read, as is everything by Mr Scalzi. The basic premise here is that of a set of interdependent colonies that are about to lose their ability to trade with each other, and are therefore doomed. Oh, except they don’t know that and are busy having petty trade wars instead. It isn’t a super intellectual read, but it is fun and does leave me wanting to know what happens to the empire…

Title: The Collapsing Empire
Author: John Scalzi
Genre: Fiction
Publisher: Tor Books
Published: March 21, 2017
Pages: 336

Our universe is ruled by physics and faster than light travel is not possible—until the discovery of The Flow, an extra-dimensional field we can access at certain points in space-time that transport us to other worlds, around other stars. Humanity flows away from Earth, into space, and in time forgets our home world and creates a new empire, the Interdependency, whose ethos requires that no one human outpost can survive without the others. It’s a hedge against interstellar war—and a system of control for the rulers of the empire. The Flow is eternal—but it is not static. Just as a river changes course, The Flow changes as well, cutting off worlds from the rest of humanity. When it’s discovered that The Flow is moving, possibly cutting off all human worlds from faster than light travel forever, three individuals -- a scientist, a starship captain and the Empress of the Interdependency—are in a race against time to discover what, if anything, can be salvaged from an interstellar empire on the brink of collapse. “John Scalzi is the most entertaining, accessible writer working in SF today.” —Joe Hill "If anyone stands at the core of the American science fiction tradition at the moment, it is Scalzi." —The Encyclopedia of Science Fiction, Third Edition


Planet Linux AustraliaMichael Still: Python3 venvs for people who are old and grumpy


I’ve been using virtualenvwrapper to make venvs for Python 2 for probably six or so years. I know it, and understand it. Now some bad man (hi Ramon!) is making me do Python 3, and virtualenvwrapper just isn’t a thing over there as best I can tell.

So how do I make a venv? It’s really not too bad…

First, install the dependencies:

    git clone git://github.com/yyuu/pyenv.git ~/.pyenv
    echo 'export PYENV_ROOT="$HOME/.pyenv"' >> ~/.bashrc
    echo 'export PATH="$PYENV_ROOT/bin:$PATH"' >> ~/.bashrc
    echo 'eval "$(pyenv init -)"' >> ~/.bashrc
    git clone https://github.com/yyuu/pyenv-virtualenv.git ~/.pyenv/plugins/pyenv-virtualenv
    source ~/.bashrc
    

Now to make a venv, do something like this (in this case, infrasot is the name of the venv):

    mkdir -p ~/.virtualenvs/pyenv-infrasot
    cd ~/.virtualenvs/pyenv-infrasot
    pyenv virtualenv system infrasot
    

You can see your installed venvs like this:

    $ pyenv versions
    * system (set by /home/user/.pyenv/version)
      infrasot
    

Where system is the system-installed Python, and not a venv. To activate and deactivate the venv, do this:

    $ pyenv activate infrasot
    $ ... stuff you're doing ...
    $ pyenv deactivate
    

I’ll probably write wrappers at some point so that this looks like virtualenvwrapper, but it’s good enough for now.
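
For what it’s worth, a rough sketch of what such wrappers might look like, as plain bash functions in ~/.bashrc (the function names just mimic virtualenvwrapper and are entirely made up):

    # make a new venv based on the system python and switch to it
    mkvenv() { pyenv virtualenv system "$1" && pyenv activate "$1"; }

    # switch to an existing venv
    workon() { pyenv activate "$1"; }

    # list the available venvs (and the system python)
    lsvenv() { pyenv versions; }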


Planet Linux AustraliaMichael Still: So you want to setup a Ceph dev environment using OSA


Support for installing and configuring Ceph was added to openstack-ansible in Ocata, so now that I need a Ceph development environment it seems logical to build it as an openstack-ansible Ocata AIO (all-in-one). There were a few gotchas, so I want to explain the process I used.

First off, Ceph is enabled in an openstack-ansible AIO using a thing I’ve never seen before called a “Scenario”. Basically this means that you need to export an environment variable called “SCENARIO” before running the AIO install. Something like this will do the trick:

    export SCENARIO=ceph
    

Next you need to set the global pg_num in the ceph role or the install will fail. I did that with this patch:

    --- /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:55:07.803635173 +1000
    +++ /etc/ansible/roles/ceph.ceph-common/defaults/main.yml       2017-05-26 08:58:30.417019878 +1000
    @@ -338,7 +338,9 @@
     #     foo: 1234
     #     bar: 5678
     #
    -ceph_conf_overrides: {}
    +ceph_conf_overrides:
    +  global:
    +    osd_pool_default_pg_num: 8
    
     #############
    @@ -373,4 +375,4 @@
     # Set this to true to enable File access via NFS.  Requires an MDS role.
     nfs_file_gw: true
     # Set this to true to enable Object access via NFS. Requires an RGW role.
    -nfs_obj_gw: false
    \ No newline at end of file
    +nfs_obj_gw: false
    

That of course needs to be done after the Ceph role has been fetched, but before it is executed, so in other words after the AIO bootstrap, but before the install.

And that was about it (although of course that took a fair while to work out). I have this automated in my little install helper thing, so I’ll never need to think about it again, which is nice.

Once Ceph is installed, you interact with it via the monitor container, not the utility container, which is a bit odd. That said, all you really need is the Ceph config file and the Ceph utilities, so you could move those elsewhere.

    root@labosa:/etc/openstack_deploy# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph -s
        cluster 24424319-b5e9-49d2-a57a-6087ab7f45bd
         health HEALTH_OK
         monmap e1: 1 mons at {aio1-ceph-mon-container-a3d8b8b1=172.29.239.114:6789/0}
                election epoch 3, quorum 0 aio1-ceph-mon-container-a3d8b8b1
         osdmap e20: 3 osds: 3 up, 3 in
                flags sortbitwise,require_jewel_osds
          pgmap v36: 40 pgs, 5 pools, 0 bytes data, 0 objects
                102156 kB used, 3070 GB / 3070 GB avail
                      40 active+clean
    root@aio1-ceph-mon-container-a3d8b8b1:/# ceph osd tree
    ID WEIGHT  TYPE NAME       UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 2.99817 root default
    -2 2.99817     host labosa
     0 0.99939         osd.0        up  1.00000          1.00000
     1 0.99939         osd.1        up  1.00000          1.00000
     2 0.99939         osd.2        up  1.00000          1.00000
    

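If you do want to take the “move those elsewhere” suggestion above, a rough sketch would be to install the client tools on the host and copy the config and admin keyring out of the monitor container (the container name is taken from the output above, and the paths assume the Ceph defaults):

    root@labosa:/etc# apt-get install ceph-common
    root@labosa:/etc# mkdir -p /etc/ceph
    root@labosa:/etc# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1 -- \
        cat /etc/ceph/ceph.conf > /etc/ceph/ceph.conf
    root@labosa:/etc# lxc-attach -n aio1_ceph-mon_container-a3d8b8b1 -- \
        cat /etc/ceph/ceph.client.admin.keyring > /etc/ceph/ceph.client.admin.keyring
    root@labosa:/etc# ceph -s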

Planet Linux AustraliaMichael Still: Nova vendordata deployment, an excessively detailed guide


Nova presents configuration information to instances it starts via a mechanism called metadata. This metadata is made available via either a configdrive, or the metadata service. These mechanisms are widely used via helpers such as cloud-init to specify things like the root password the instance should use. There are three separate groups of people who need to be able to specify metadata for an instance.

User provided data

The user who booted the instance can pass metadata to the instance in several ways. For authentication keypairs, the keypairs functionality of the Nova APIs can be used to upload a key and then specify that key during the Nova boot API request. For less structured data, a small opaque blob of data may be passed via the user-data feature of the Nova API. Examples of such unstructured data would be the puppet role that the instance should use, or the HTTP address of a server to fetch post-boot configuration information from.

Nova provided data

Nova itself needs to pass information to the instance via its internal implementation of the metadata system. Such information includes the network configuration for the instance, as well as the requested hostname for the instance. This happens by default and requires no configuration by the user or deployer.

Deployer provided data

There is however a third type of data. It is possible that the deployer of OpenStack needs to pass data to an instance. It is also possible that this data is not known to the user starting the instance. An example might be a cryptographic token to be used to register the instance with Active Directory post boot — the user starting the instance should not have access to Active Directory to create this token, but the Nova deployment might have permissions to generate the token on the user’s behalf.

Nova supports a mechanism to add “vendordata” to the metadata handed to instances. This is done by loading named modules, which must appear in the nova source code. We provide two such modules:

  • StaticJSON: a module which can include the contents of a static JSON file loaded from disk. This can be used for things which don’t change between instances, such as the location of the corporate puppet server.
  • DynamicJSON: a module which will make a request to an external REST service to determine what metadata to add to an instance. This is how we recommend you generate things like Active Directory tokens which change per instance.

Tell me more about DynamicJSON

Having said all that, this post is about how to configure the DynamicJSON plugin, as I think it’s the most interesting bit here.

To use DynamicJSON, you configure it like this:

  • Add “DynamicJSON” to the vendordata_providers configuration option. This can also include “StaticJSON” if you’d like.
  • Specify the REST services to be contacted to generate metadata in the vendordata_dynamic_targets configuration option. There can be more than one of these, but note that they will be queried once per metadata request from the instance, which can mean a fair bit of traffic depending on your configuration and the configuration of the instance.

The format for an entry in vendordata_dynamic_targets is like this:

<name>@<url>

Where name is a short string not including the ‘@’ character, and where the URL can include a port number if so required. An example would be:

testing@http://127.0.0.1:125

Metadata fetched from this target will appear in the metadata service at a new file called vendor_data2.json, with a path (either in the metadata service URL or in the configdrive) like this:

openstack/2016-10-06/vendor_data2.json

For each dynamic target, there will be an entry in the JSON file named after that target. For example:

        {
            "testing": {
                "value1": 1,
                "value2": 2,
                "value3": "three"
            }
        }

Do not specify the same name more than once. If you do, we will ignore subsequent uses of a previously used name.

The following data is passed to your REST service as a JSON encoded POST:

  • project-id: the UUID of the project that owns the instance
  • instance-id: the UUID of the instance
  • image-id: the UUID of the image used to boot this instance
  • user-data: as specified by the user at boot time
  • hostname: the hostname of the instance
  • metadata: as specified by the user at boot time
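
Putting the fields above together, the POST body your REST service receives would look something like this (all of the values below are purely illustrative):

        {
            "project-id": "947efe1d-1c98-4e71-9f0e-1f95c3a4b0e3",
            "instance-id": "a8b2cde3-56f1-4e7a-9c44-7d2b9f8e1a10",
            "image-id": "2f6e96ca-9f58-4832-9136-21ed6c1e3b1f",
            "user-data": "...",
            "hostname": "foo",
            "metadata": {}
        }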

Deployment considerations

Nova provides authentication to external metadata services in order to provide some level of certainty that the request came from nova. This is done by providing a service token with the request — you can then just deploy your metadata service with the keystone authentication WSGI middleware. This is configured using the keystone authentication parameters in the vendordata_dynamic_auth configuration group.

This behaviour is optional, however: if you do not configure a service user, nova will not authenticate with the external metadata service.
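
As a sketch only (the group name comes from the paragraph above, but the individual option names here are the standard keystoneauth ones and the values are placeholders, so check both against your own deployment), the corresponding nova.conf section might look like:

[vendordata_dynamic_auth]
auth_type = password
auth_url = http://keystone.example.com:5000
username = nova
password = another-long-random-password
project_name = service
user_domain_id = default
project_domain_id = default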

Deploying the sample vendordata service

There is a sample vendordata service that is meant to model what a deployer would use for their custom metadata at http://github.com/mikalstill/vendordata. Deploying that service is relatively simple:

$ git clone http://github.com/mikalstill/vendordata
$ cd vendordata
$ apt-get install virtualenvwrapper
$ . /etc/bash_completion.d/virtualenvwrapper (only needed if virtualenvwrapper wasn't already installed)
$ mkvirtualenv vendordata
$ pip install -r requirements.txt

We need to configure the keystone WSGI middleware to authenticate against the right keystone service. There is a sample configuration file in git, but it’s configured to work with an openstack-ansible all-in-one install that I set up for my private testing, which probably isn’t what you’re using:

[keystone_authtoken]
insecure = False
auth_plugin = password
auth_url = http://172.29.236.100:35357
auth_uri = http://172.29.236.100:5000
project_domain_id = default
user_domain_id = default
project_name = service
username = nova
password = 5dff06ac0c43685de108cc799300ba36dfaf29e4
region_name = RegionOne

Per the README file in the vendordata sample repository, you can test the vendordata server in a standalone manner by generating a token manually from keystone:

$ curl -d @credentials.json -H "Content-Type: application/json" http://172.29.236.100:5000/v2.0/tokens > token.json
$ token=`cat token.json | python -c "import sys, json; print json.loads(sys.stdin.read())['access']['token']['id'];"`

We then include that token in a test request to the vendordata service:

curl -H "X-Auth-Token: $token" http://127.0.0.1:8888/

Configuring nova to use the external metadata service

Now we’re ready to wire up the sample metadata service with nova. You do that by adding something like this to the nova.conf configuration file:

[api]
vendordata_providers=DynamicJSON
vendordata_dynamic_targets=testing@http://metadatathingie.example.com:8888

Where metadatathingie.example.com is the IP address or hostname of the server running the external metadata service. Now if we boot an instance like this:

nova boot --image 2f6e96ca-9f58-4832-9136-21ed6c1e3b1f --flavor tempest1 --nic net-name=public --config-drive true foo

We end up with a config drive which contains the information our external metadata service returned (in the example case, handy Carrie Fisher quotes):

# cat openstack/latest/vendor_data2.json | python -m json.tool
{
    "testing": {
        "carrie_says": "I really love the internet. They say chat-rooms are the trailer park of the internet but I find it amazing."
    }
}


Planet Linux AustraliaMichael Still: Things I read today: the best description I’ve seen of metadata routing in neutron


I happened upon a thread about OVN’s proposal for how to handle nova metadata traffic, which linked to this very good Suse blog post about how metadata traffic is routed in neutron. I’m just adding the link here because I think it will be useful to others. The OVN proposal is also an interesting read.


Planet Linux AustraliaOpenSTEM: Australia and the Commonwealth Games

Australia has been doing exceptionally well at the 2018 Commonwealth Games, held at the Gold Coast, Queensland. We can be very proud of our athletes, not only for their sporting prowess, but also because of their friendly demeanour and wonderful examples of the spirit of sportsmanship. I’m sure we all felt proud when the Australian […]

TEDSteelcase at TED2018: Here’s what a desk chair inspired by a TED Talk looks like

But would it be a dream to sit on? Steelcase showed off its SILQ Chair at TED2018: The Age of Amazement, April 10 – 14, 2018, Vancouver. Photo: Jason Redmond / TED

It’s only fitting that the chair which serves as the focal point of the Steelcase exhibit space at TED2018 was inspired, in part, by a TED Talk. Back at TED2009, Steelcase VP of Global Design and Product Engineering James Ludwig saw athlete and Paralympian Aimee Mullins speak about her different prostheses (TED Talk: My 12 pairs of legs). He was especially intrigued by her carbon-fiber “cheetah feet” and how they could store and release energy. He wondered: Could this wondrously light yet stiff yet flexible material — revolutionizing airplane, car and bicycle manufacturing — be used in a desk chair?

Carbon fiber hadn’t been used in mainstream furniture, but Ludwig was also bent on following a vision he’d had the previous year. The standard high-tech office chair had become what he calls “an exquisite machine” — consisting of up to 250 parts — but he’d tired of these contraptions. “I didn’t want to feel like I was sitting on a mechanical bull,” says Ludwig (can we get an amen?). He wanted to make a chair that was far simpler, so he sketched one that sat on “four leaf-like tendrils.” He had no idea how he’d build it, but that’s the task of industrial designers: envisioning what doesn’t exist — and then making it happen.

Mullins’s feet lingered in his mind, and he thought back to his dream seating and realized he might have found a solution. His next step was to see if a chair could even been made from carbon fiber. The result, which took him and his team several years to design and execute, was the LessThanFive, named because it weighs less than five pounds.

The game was on. In the Steelcase Innovation Center in Grand Rapids, Michigan, Ludwig found a corner space, papered the windows, gathered a handful of colleagues, and set to realizing the chair that he’d seen only in his mind. Six months later, he had a prototype. It was everything he wanted — responsive and streamlined — except for its price: it would be prohibitively expensive to produce. “Carbon fiber is a handicraft; it needs to be finished by hand,” explains Ludwig.

He told his engineers that they needed to figure out how to do it more cheaply. After some grumbling and sighing, they returned to work and created what Ludwig evasively refers to as a “proprietary fiber and material composition” — in other words, a patent-pending substance that was made from a high-performance polymer. Upholstery, molding and operations teams were called in, and 18 months later, Ludwig’s creation was released in January 2018: the SILQ chair.

He had a lofty mission: “I wanted it to look like Isamu Noguchi had an aerospace degree.” And with its soft curves and sculptural lines, it kind of does. It also drastically reduced the number of parts: just 30. The special polymer replaced many of the springs and hinges that provide flex and support in the typical desk chair. And in a welcome development to anyone who’s hit the wrong switch and made their chair plummet ridiculously during a meeting, it has only one lever that raises and lowers the seat.

But how does it feel? Well, when I sat in the SILQ, it felt like it was made for someone my size. In some sort of industrial-design wizardry, the back cradled my lower back and my head in the right spots — it was the opposite experience of sitting in an airplane seat where I always feel like I’m wearing someone else’s shoes. “This is a once-in-a-decade chair,” pronounces Ludwig. “It’s intuitive and material-based.” Maybe so, but to me, it was something much more special: comfortable.


Planet DebianRuss Allbery: INN 2.6.2

(As usual, Julien finished this release a bit back, and then I got busy with life stuff and hadn't gotten the announcement out. And yes, I copied and pasted this parenthetical from the last announcement. Tradition!)

In the feature department, this release adds a new syntaxchecks parameter to inn.conf that can be used to disable message ID syntax checking, better header sanitization support in mailpost, support for TLS 1.3, and support for using GnuPG v1 (which is unfortunately important for control messages and NoCeM on Usenet still).

In the bug-fix department, this release always uses the OVDB helper server with OVDB to avoid various stability problems, fixes a header checking bug in inews that was incorrectly rejecting some long headers, fixes some control command reporting in the daily status report, and hopefully fixes buffindexed on systems with a native page size larger than 16KB.

As always, thanks to Julien ÉLIE for preparing this release and doing most of the maintenance work on INN!

You can get the latest version from the official ISC download page (although that download link still points to INN 2.6.1 as of this writing) or from my personal INN pages. The latter also has links to the full changelog and the other INN documentation.

TEDEnlightenment? Or entitlement? A response to Steven Pinker from a panel of TED Fellows

Linguist and psychologist Steven Pinker’s talk at the end of TED2018 Session 1 argued enthusiastically that humanity has, as a whole, made remarkable progress toward more prosperity, peace and happiness — backing up his assertions with charts that appeared to agree with his optimistic trajectory. But is that the whole story? We convened a roundtable of TED Fellows — including scientists and journalists — to articulate why they are troubled by some of Pinker’s ideas, as presented in his talk on Tuesday night, his larger body of work and, in particular, his attitude toward identity politics in the scientific field. Below, an edited transcript of the conversation

I watched Steven Pinker’s talk, and I listened to it again afterwards. If I hadn’t had personal run-ins with him, I would’ve thought most of what he said was okay. He presented a nice, sanitized version of his thoughts. So I can see how so many people aren’t fully engaged and critical of his larger body of work. Pinker presents a Western perspective of science in which brown people don’t exist — as if indigenous ways of knowing and non-Western canonized ways of science don’t exist. In his talk, he doesn’t even give it a nod, so it’s a less-than-complete story. –– Danielle Lee, evolutionary biologist

I just find that in his talk and in his work, he never unpacks his assumptions underlying the ways we interact with data. By not unpacking those, most people are not understanding how, then, his lens frames the data that he chooses — or the stories or the interpretations that he gets out of that data. That’s a larger issue in the sciences. There’s this idea that the scientific method is purely objective, and replicable. But it always starts with a subjective lens. — Michele Koppes, glaciologist

I do a lot of work in East Africa. The majority of my research on whitefly-borne cassava disease is there. Regarding cherry-picking data, what I experience on the ground around famine and electricity is very different from the picture Pinker presents. His electricity and famine slides didn’t seem to include African countries, but Africa was included in some others. The data didn’t add up. — Laura Boykin, computational biologist

The first graph he showed, to prove that news is so pessimistic now, was a graph starting in 1945. Yet almost all his other graphs started much earlier — 1300, 1751, and so on. Yes, of course — I think we can agree that life is a little better now than it was in the Middle Ages. That’s not the point that people have been making about how society has progressed since 1945. He ignores the stagnation that has happened — not just in the US, but around the world — when it comes to inequality. The number of people in jail, the fact that schools are still segregated as much as they were 40 years ago, don’t really fit into his narrative.

His critics all accept that we probably live better lives than we did in 1820, but that is entirely not the point. He gives very wealthy people an excuse to be like, “Well, things are fine, so I don’t have to do anything.” And as for his hypothesis that intellectuals don’t like progress, it’s entirely untrue: intellectuals just like to think about things with a little more nuance. — Trevor Timm, investigative journalist + free speech advocate

As humans — even intellectual ones — whenever we see a graph with some lines, everybody’s like, “Ah, science!” And it’s just not science. But it looks like it, and so you believe it. This is the problem with scientific integrity: you posit something, and you make it seem “objective” — which really just means positive toward your frame of reference — and everything else be damned. I thought it was interesting how he constructs this notion of, if you disagree, it’s because you don’t understand. So if someone has a counterpoint, it implies they have less understanding, less knowledge, and less critical thought — ”enlightenment,” as he says — than everybody else. So he’s now attacked everyone, and made it harder for folks to disagree, and also painted a skewed image.  — Jedidah Isler, astrophysicist

If you are going to present graphs and say, “Ooh, science,” I think maybe we should dig a little deeper into that, and say, “Let’s really look at the data.” Anybody who’s quantitative — and we have experts in this room—will tell us that you can’t just look at an average and say, “Yeah, that’s the whole story!” You have to look at things like the spread, and what is their distribution? How does it change over time? — Shohini Ghose, physicist

I just want to take it a little bit away from the book and the lecture, and talk about the science march. For me, in college as a zoology major, Pinker was one of my heroes with his book How the Mind Works. But there were some events around the March for Science and inclusivity that should have kept him from being here in the first place.  — Prosanta Chakrabarty, evolutionary biologist

I did not participate in the March for Science and I never officially supported it, precisely because I was waiting to see how its leaders were going to handle inclusiveness. There were people — just regular scientists — who came out early, saying, “Don’t you dare make this about inclusiveness and identity politics.” They were emboldened by Pinker, who came on hard supporting that stance.

The debate evolved a new hashtag called #MarginSci, which specifically focused on people from marginalized groups talking about why we needed inclusiveness in the March for Science. The march flip-flopped on the issue: basic inclusion — not even necessarily ethnic inclusion, but making sure the march would be accessible to people who had mobility issues, and so on.

Pinker came out saying that he thought that diversity and inclusion were simply identity politics, and that they were a waste of time. He attacked individuals as not authentically scientists. He made it clear that he was anti-identity, anti-inclusiveness. A few months later, he doubled down and said he would spend his career ruining us professionally for the work the we did with diversity and inclusion, because he’d thought we tried to break March for Science up. We didn’t — loads of us just backed away and didn’t participate.

He’s technically a scientific hero saying publicly we were all ridiculous for wanting to make science inclusive, to fight for science and where it stands in policy, and fighting for individual scientists and funding. His stance is that “identity politics” (he uses this phrase as a pejorative) — in other words, those of us who acknowledge and are proud of our identities and want our whole selves included — distracts from science, that it’s not a part of the scientific context, and it’s the antithesis of objectivity and merit in the science process.  — DL

Pinker frames science as this beautiful, pure thing: if you allow any form of identity discussion into it, it’s not science. To him, science is pure: the person, the practitioner, should not be in that argument. Those who do are at war on science.

I’m a geographer-glaciologist, and my work says we should have an inclusive, diverse range of voices in glaciology, to address the fact that this field really so far has had white male Western voices only. I use the phrase “feminist glaciology,” and that’s been the thing that Pinker’s had a hard time with, and pushes back on it in his book and in articles he writes. What’s sexier than to point out a war on science, and then point to feminist glaciology? It sure sounds ridiculous when you frame it that way.

If you read his book, there is no actual argument with the work I do. But he grabbed an example of an academic article that I coauthored with Glacier Lab members, slapped it in his book, and said, “Here is a war on science.” When I spoke with him last night, he didn’t know who I was — it’s not my work, it’s not me. He just used it as another example of this war on science. Glaciology has a very long history of marginalizing or erasing any other voices: not just women’s voices — indigenous voices, all voices. But that doesn’t mean that those voices didn’t participate, they just tended to get dismissed. If we don’t look at that, not only in glaciology, but across science, what is it that we’re actually making today? — M Jackson, geographer + glaciologist

May I just add to that point? Regarding the idea of the impunity of science: I haven’t read the whole book, but of particular concern to me was acknowledgement of the history of statistics. Pinker made the point that eugenics was almost a side tangent — and that it wasn’t the aim of science, and science is good. Having worked quite a lot on this history, the origins of statistics are in eugenics. Pinker briefly mentioned Galton, but when Galton and Pearson developed statistical regression, it was to study “regression to mediocrity.” Pearson held a chair in eugenics. He said the scientific approach was to improve nations by making sure people came from the “better stocks” and to wage war against “inferior races.” It’s important to have that historical context when we’re talking about scientific ideas. Science is not always a pure, objective thing. All those arguments for eugenics were data-driven and statistical, but we’ve got to acknowledge where modern statistical ideas came from, and their uncomfortable background. — Adam Kucharski, infectious disease scientist

Context matters, and I don’t think that’s part of his worldview at all. I think that even for Pinker to acknowledge that there’s more than one way to think about science is to acknowledge that identity matters. There isn’t a positioning of himself within that greater discourse because he already views himself as an objective scientist. Therefore he doesn’t need to try and unpack these other ideas. If you even try to engage with him and have a conversation that involves context mattering, or identity mattering, you get pushed aside because none of that is “pure science.” — MJ

He gives us two options: either you’re a pessimist, or you’re part of the new enlightenment. He never gives us the scientific third option — which is that he could be wrong. As a scientist, I am always thinking, “I could be wrong.” You’re not a good scientist if you can’t admit that. — LB

That’s the ultimate form of privilege, when you don’t have to acknowledge what your positionality is, what your worldview is, and just assume and move forward. We all get to push back on that, but we’ve already been dealt with, really. We’re over here, we’re “pessimists.” — MJ

He takes that privilege even further. On one hand, he says it shouldn’t be contextual, but of course, he is absolutely framing his stance in the context of his take on a European Enlightenment that’s driven all science. He then cherry-picks and claims all that science as what’s good, objective science, but leaves out all the “science” that led to things like eugenics, the justification of slavery and so on. He uses his privilege, and leaves out parts of his own context. — SG

A lot of this stuff has already been named and studied. People have written about white normativity, and how whiteness is rendered invisible in a way that it is the norm: you never even have to talk about it. So when you’re talking about people, you’re talking about white people, right? And any other people get an adjective.

We are at a significant disadvantage in the sciences because we don’t have the language to call this out for what it is. The field has to know more about how people work, how politics work, how sociology works, in order to recognize when the data — or lack of data — itself imprints a powerful bias on the conclusions drawn. — JI

We’re also representing the sciences that he deems nonscience. There is a whole field in the social sciences of critical theory and critical studies that does exactly this, that looks at these power and privilege dynamics within the sciences. We’re doing this in the field of geography right now. But he pushes back on that, as though it’s irrelevant — he sees a hierarchy of purity in the sciences. The highest of the sciences are the physical and the chemical — physics, and then it trickles down from there. — MK

He doesn’t realize that he has this huge platform — a lot of it which he did earn — but that he was three steps up the ladder from everyone else when he started. He became a great scientist, but he had benefits that he believes everybody has access to: that science is a democracy available to everyone equally.

That’s why it’s so offensive when he picks on women, and people of color, or anyone else who he thinks has a level playing field. His not recognizing that science is not a level playing field is the biggest problem. That, and his stance that life is better for the Steven Pinkers of the world, so he thinks life is better for everyone. — TT

That part of his talk drove me bananas, the “happiness curve.” How do you define happiness? He never said explicitly, but you could tell he had a definition of happiness in his head that he thought was shared with everybody in his audience. But there was never any question of, what does that mean? How are you defining that? — MK

I think that’s dead-on. His big assumption is that human beings are data points, not complex interacting communities. — MJ

There’s one talk I heard in Vancouver years ago by the Dalai Lama. He said that we need to consider the morality of making people a statistic. And that’s what Pinker’s presentation was like — lumping the whole of humanity together, and lumping their suffering together. How ethical is it for us to do such a thing? That’s a question that will always resonate with me, whenever I see someone using statistics to average out human experience. Is that the moral system that we want? — MK

All knowledge that we’re creating is mediated through our human experience, and that’s an individual, or a community, or an identity—all human experience. And if we acknowledge that within science, I think that changes the fundamentals of science. I’m not arguing that gravity doesn’t exist, or the theory of evolution doesn’t exist. What I am saying is that the human participant within science matters, and acknowledging this is a great place to begin. — MJ

And the scientific method doesn’t happen in a vacuum. People are the ones who actually apply this made-up framework that we’ve decided to call the scientific method. What happens through that methodology is a product where the input is humans, and the output is interpreted by humans. — SG

A better approach than Pinker’s to understanding human progress and history would be to let people at the table talk who haven’t talked before. As Jedidah said, he’s not saying anything new. He represents the one demographic that has had the mic for thousands of days. It’s time to let other groups talk — particularly those who have been the most underrepresented, the most erased.

What would the soundscape be like, just hearing from people who have not been allowed to talk? We will discover folks who we didn’t even know existed. That’s also part of the lie — that certain groups don’t even exist, so they don’t have a voice, or they don’t participate in things. So the progress is: Let’s let other folks to the table, and stand back and listen. — DL

TEDScreen gems: The art onscreen at TED2018

A monumental part of what brings the TED conference to life is the speakers and the amazing ideas they share on the TED stage. But here’s a riddle: What also shares the spotlight with each person who spends their 3 to 18 minutes speaking on the red dot? The magnificent session art, of course!

TED has collaborated with design firm Colours & Shapes since 2014. They are the minds behind the mesmerizing animated art seen at the start and throughout each session, which is tailored specifically to that session’s theme.

We caught up with Colours & Shapes in their hometown of Vancouver, BC, to learn more about the process behind an integral part of what’s brought TED2018: The Age of Amazement to life.


A globally changing landscape forms the backdrop for Session 4, the Audacious Project. Over the course of the evening’s session, the light slowly fades onscreen.

Q: Tell us about your team and company:

Colours & Shapes was founded in 2012 by Gordie Cochran and Anthony Diehl. We actually sort of stumbled into it. We saw an opportunity to leverage our diverse backgrounds in film, events and tech to craft amazing, meaningful experiences. Our passion has really been to architect “moments” that stick with you; moments that resonate with that deep “why” behind any event or experience.

Q: Take us through the creative process: from receiving the prompts to fruition … were there technical considerations or concerns you had to troubleshoot?

The creative process has been really wonderful. We love how open the TED curation team is to some pretty “out there” visual ideas. Our process was really all about understanding the session themes and curation and finding ways to unpack “amazement” in each. We started with really rough sketches and motifs. We gave particular consideration to how we could use projection on the stage and the beautiful wood cases. We knew from the start that we wanted to treat the entire stage and screens as one unified canvas for content. We worked really closely with Mina, Mike and Martha to find just the right tone for each session. Our looks moved pretty quickly from sketches and moodboards to illustration and animation.

The creative process really followed the development of the sessions. As we learned more about the speakers and topics, there was so much great inspiration to draw on visually. From the unique red laser light in Mary Lou Jepsen’s talk to ocean exploration and intimate storytelling, we wanted each session to feel like the perfect space to hear each TED Talk. Our team worked incredibly hard in the weeks leading up to TED to produce all these diverse session environments. And we worked in a lot of different mediums! Traditional animation, illustration, film, compositing, VFX … At one point we found ourselves smearing around a lot of tea, cream and sugar in macro videography for one session look (Session 5: Space to Dream).

Q: What were you most excited about when you heard this year’s theme was Age of Amazement?

Love the theme! We were immediately intrigued and drawn in when we started talking about this year’s theme. Each session really has its own way that it interacts with the theme in a way that is really fun and interesting. The early creative motifs we developed were all about exploring “amazement” through a variety of lenses: emotions, optical illusions, perspective shifts, shadow play, etc.

Q: The art for each session is based on the session title — any secret inspirations? (A little birdy told me about song lyrics inspiring Session 5 … are there others like that?)

  • There were a few sessions that we really wanted to tie into. The red laser light for Session 9: Body Electric is a nod to Mary Lou Jepsen’s talk.
  • Nerdish Delight is a playful nod to the ubiquitous “sexy tech product reveal” video. It’s all cool sculpted lines, slick materials and studio lighting … except we never get to see just what the product is!
  • “Wow. Just wow.” is an M.C. Escher-esque optical illusion. It’s all about the thrill of a perspective shift, that “wow” moment when you realize you are seeing something completely new and exciting.
  • “Space to Dream” really started as we asked ourselves, “What do astrophysicists daydream about?” We imagined ourselves staring into a cup of tea and losing ourselves in a waking dream about beautiful unseen corners of the universe. In one of our creative meetings with the TED team the lyrics to the Blondie song “Dreaming” came up: “I’ll have a cup of tea and tell you of my dreaming …” It’s a beautiful deep space daydream that is built entirely from filmed elements like tea, sugar, cream and food coloring. No actual nebulae were harmed in the filming of that session.

Q: Any “easter eggs” we should look for?

The Blondie song connection above is a fun one.

Session 10, Personally Speaking, is a session all about little scenes and objects that suggest a story, but don’t quite give you all the info. Sort of like the opening line of a good short story. The wood cases on stage and “rooms” in the session environment shift and turn.

Q: What do you want the audience to experience while watching your art?

In a word? Amazement! Our hope is that each visual environment serves to support the deep, intentional and thoughtful curation that has gone into each session for TED 2018. In working closely with the team at TED, we have worked to extract as many insights, themes and inspirations as possible for each session, and then have endeavoured to create visual environments that effectively capture the DNA of each session in thoughtful, creative and whimsical ways.


The shifting panels and details of Session 2’s screen reflects the session title: “After the end of history …”

Q: What are you most proud of from this project?

Definitely our talented team members! Producing great experiences and beautiful creative takes a team that can bend and bow with evolving ideas and creative discovery. Getting to partner with the brilliant team at TED and come alongside and be able to visually bring big ideas to life in the theatre has been a really fantastic and creatively rich experience for C&S. Hard to pick favourites from the content but we really love how the conference opener came out. Jorge Canedo Estrada’s sumptuous animation is second to none. Also, seeing Mike Ellis’ gorgeous illustration come to life in a crazy shifting 3D world in “Wow. Just Wow.” is something we could watch all day!

Q: How many people work on making this happen?

We pulled together a team of multidisciplinary creatives to build out the visual worlds for TED2018. We have collaborated with a team of illustrators, designers, animators and composers, 13 people total.

Q: Any interesting or fun stories you’d like to share that happened during the process?

The intensity of taking everything on with a short timeline, and then throwing the opener into the mix weeks before the event. This led to some long nights in animation! But seeing it all come to life in the theatre was incredibly rewarding.

Q: Anything I’m missing? Anything you’d like to add?

Thank you to Chris, Mina, Mike, Martha and the TED team for having us along for the ride this year!

Bright colors and natural motifs tell the story of Session 11, “What matters.”

Cory DoctorowInterview with Monocle’s Meet the Writers


Last month, while at Adelaide Writers Week, I sat down with the excellent Georgina Godwin to record an interview (MP3) for Monocle’s “Meet the Writers” podcast. They’ve only just published it and I’m very pleased with how it turned out: we got into some territory that I don’t usually cover. Also: they had the interview in a bar and bought me whatever whisky I asked for, which was quite a bonus.

Planet DebianNorbert Preining: Analysing Debian packages with Neo4j – Part 1 – Debian

Overview of the blog series

The Ultimate Debian Database UDD collects a variety of data around Debian and Ubuntu: Packages and sources, bugs, history of uploads, just to name a few.

The database schema reveals a highly de-normalized RDB. In this ongoing work we extract (some) data from UDD and represent it as a graph database.

In the following series of blog entries we will report on this work. Part 1 (this one) will give a short introduction to Debian and the life cycle and structure of Debian packages. Part 2 will develop the graph database schema (nodes and relations) from the inherent properties of Debian packages. The final part 3 will describe how to get the data from the UDD into Neo4j, give some sample queries, and discuss further work.

This work has been presented at the Neo4j Online Meetup and a video recording of the presentation is available on YouTube.

Part 1 – Debian

Debian is an open source Linux distribution, developed mostly by volunteers. With a history of already more than 20 years, Debian is one of the oldest Linux distributions. It sets itself apart from many other Linux distributions by a strict set of license rules that guarantees that everything within Debian is free according to the Debian Free Software Guidelines.

Debian also gave rise to a large set of offsprings, most widely known one is Ubuntu.

Debian contains not only the underlying operating system (Linux) and the necessary tools, but also a huge set of programs and applications, currently about 50000 software packages. All of these packages come with full source code but are already pre-compiled for easy consumption.

To understand what information we have transferred into Neo4j we need to take a look at how Debian is structured, and how a package lives within this environment.

Debian releases

Debian employs release-based software management, that is, a new Debian version is released at more or less regular intervals. The current stable release is Debian stretch (Debian 9.2); it was first released in June 2017, with the latest point release on October 7th, 2017.

To prepare packages for the next stable release, they have to go through a set of suites to make sure they conform to quality assurance criteria. These suites are:

  • Development (sid): the entrance point for all packages, where the main development takes place;
  • Testing: packages that are ready to be released as the next stable release;
  • Stable: the status of the current stable release.

There are a few other suites, like experimental or those targeting security updates, but we leave their discussion out here.

Package and suite transitions

Packages have a certain life cycle within Debian. Consider the following image (by Youhei Sasaki, CC-NC-SA):

Packages and Suites (Youhei Sasaki, CC-NC-SA)

Packages are normally uploaded into the unstable suite and remain there for at least 5 days. If no release-critical bug has been reported, after these 5 days the package transitions automatically from unstable into the testing suite, which will be released as stable by the release managers at some point in the future.

Structure of Debian packages

Debian packages come as source packages and binary packages. Binary packages are available in a variety of architectures: amd64, i386, powerpc, just to name a few.

Debian developers upload source packages (and often their own architecture’s binary packages), and for other architectures auto-builders compile and package binary packages.

Debian auto-builders (from Debian Administrator’s Handbook, GPL)

Components of a package

Debian packages are not only a set of files, but contain a lot more information; let us list a few important ones:

  • Maintainer: the entity (person, mailing list) responsible for the package
  • Uploaders: other developers who can upload a new version of the package
  • Version: a Debian version number (see below)
  • Dependency declarations (see below)

There are many further fields, but we want to concentrate here on the fields that we are representing the in the Graph database.

The Maintainer and Uploaders are standard emails, most commonly including a name. In the case of the packages I maintain, the maintainer is set to a mailing list (debian-tex-maint AT ...) and I am listed in the Uploaders field. This way bug reports will go not only to me but to the whole list – a very common pattern in Debian.

Next let us look at the version numbers: since for a specific upstream release we sometimes do several packages in Debian (to fix packaging bugs, for different suites), the Debian version string is a bit more complicated than just the simple upstream version:

[epoch:]upstream_version[-debian_revision]

Here the upstream_version is the usual version under which a program is released. Taking for example one of the packages I maintain, asymptote, it currently has version number 2.41-4, indicating that the upstream version is 2.41 and that there have been four Debian revisions of it. A bit more complicated example would be musixtex, which currently has the version 1:1.20.ctan20151216-4.
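
These version strings are ordered by Debian’s own comparison algorithm rather than by plain string comparison, and dpkg can compare two versions directly, which is a handy sanity check (the versions below are just examples):

    $ dpkg --compare-versions 2.41-3 lt 2.41-4 && echo "2.41-3 is older"
    2.41-3 is older
    $ dpkg --compare-versions 1:1.20.ctan20151216-4 gt 2.0-1 && echo "the epoch wins"
    the epoch wins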

Some caveats concerning source and binary packages, and versions:

  • one source package can build many different binary packages
  • the names of source package and binary package are not necessarily the same (and necessarily different when building multiple binary packages)
  • binary packages of the same name (but different version) can be built from different source packages

Let us finally look at the most complicated part of the package meta-fields, the dependencies: There are two different sets of dependencies, one for source packages and one for binary packages:

  • source package relations: Build-Depends, Build-Depends-Indep, Build-Depends-Arch, Build-Conflicts, Build-Conflicts-Indep, Build-Conflicts-Arch
  • binary package relations: Depends, Pre-Depends, Recommends, Suggests, Enhances, Breaks, Conflicts

The former specify package relations during the package build, while the latter specify package dependencies on the installed system.

A single package relation can take a variety of different forms providing various constraints on the relation:

  • Relation: pkg: no constraints at all
  • Relation: pkg (<< version): constraints on the version, can be strictly less, less or equal, etc
  • Relation: pkg | pkg: alternative relations
  • Relation: pkg [arch1 arch2]: constraints on the architectures

When properly registered for a package, these relations allow Debian to provide smooth upgrades between releases and guarantee functionality if a package is installed.
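
These relation fields can be inspected directly from a Debian system. A minimal sketch (asymptote is again just an example; the apt-cache showsrc call assumes a deb-src line is configured in the APT sources):

$ dpkg-query -W -f='${Depends}\n' asymptote             # binary relations of an installed package
$ apt-cache showsrc asymptote | grep '^Build-Depends'   # build relations of the source package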


This concludes the short introduction to Debian and its packages. In the next blog entry we will describe the Ultimate Debian Database UDD and how to map the information presented here from the UDD into a Graph Database.

Planet DebianVasudev Kamath: Docker Private Registry and Self Signed Certificates

I was recently experimenting with hosting a private registry on an internal LAN for publishing private Docker images. I found out that docker pull only works with a TLS-secured registry. It is possible to run an insecure registry by editing the daemon.json file, but it is better to use self-signed certificates instead.

Once I followed the steps and started the registry, I tried docker pull and it started complaining about the certificate not having any valid names. The same certificate worked fine with browsers, though: of course you need to add an exception, but no other errors were encountered.

The Docker documentation does not mention any specific settings that need to be made before generating a self-signed certificate, so I was a bit confused at the beginning. A bit of searching turned up an issue filed against Docker and later re-assigned to Golang for its method of handling x509 certificates. It appears that when a valid Subject Alternative Name is present, the Go crypto library ignores the Common Name.

From a thread on Security Stack Exchange I found the command to create a self-signed certificate containing a Subject Alternative Name. The command in the accepted answer does not work until you add the -extensions option to it, as mentioned in one of the comments. The full command is shown below.

openssl req -new -sha256 -key domain.key \
    -subj "/C=US/ST=CA/O=Acme, Inc./CN=example.com" \
    -reqexts SAN -extensions SAN \
    -config <(cat /etc/ssl/openssl.cnf \
              <(printf "[SAN]\nsubjectAltName=DNS:example.com,DNS:www.example.com")) \
    -out domain.crt

You would need to replace the values in -subj and under the [SAN] extension. The benefit of this command is that you need not modify the /etc/ssl/openssl.cnf file.

If you do not have a domain name for the registry and are using an IP address instead, replace the entries in the [SAN] section of the above command with IP:<ip-address> instead of DNS:<name>.
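
Putting the pieces together, a minimal end-to-end sketch might look like the following. The registry.local name, port 5000 and the paths are assumptions for illustration; the REGISTRY_HTTP_TLS_* variables are read by the registry image, and /etc/docker/certs.d/ is where the Docker daemon looks for a registry's CA certificate.

# generate the private key used by the openssl req command above
$ openssl genrsa -out domain.key 4096

# make the Docker daemon on each client trust the self-signed certificate
$ sudo mkdir -p /etc/docker/certs.d/registry.local:5000
$ sudo cp domain.crt /etc/docker/certs.d/registry.local:5000/ca.crt

# start the registry with TLS enabled
$ docker run -d -p 5000:5000 --name registry \
    -v "$PWD":/certs \
    -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
    -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
    registry:2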

Happy hacking!

Planet DebianSteinar H. Gunderson: Match day

We're live!

Edit: Day 1 is over, and the videos are up, although not quite cut yet. We had some issues here and there, but overall, things seem to have come out well. More fun with the playoffs tomorrow :-)

Planet DebianVasudev Kamath: Docker container as Development Environment

When you have a distributed team working on a project, you need to make sure that everyone uses a similar development environment. This is critical if you are working on an embedded systems project. There are multiple possibilities for this scenario.

  1. Use a common development server and provide every developer on your team with an account on it.
  2. Give everyone a description of how to set up their development environment and trust that they will do so.
  3. Use Docker to provide developers with a ready-made environment and use a build server (CI) for creating the final deployment binaries.

The first approach used to be the most common one, but it has some drawbacks. Since it is a shared system, you should make sure not everyone is able to install or modify the system, and hence have a single administrator, so that no one accidentally breaks the development environment. Other problems include being forced to use an older OS due to specific requirements of compilers, etc.

The second approach makes you put your trust in your developers to set up the correct development environment. Of course you need to trust your team, but everyone is a human being and humans make mistakes.

Enter Docker

A Dockerfile is the best way to document and set up a development environment. A developer can read the Dockerfile to understand what is being done to set up the environment and simply execute the docker build command to generate his/her development environment. Better still, you build the image yourself and publish it in a public registry (or, if that is not possible, in a private registry) and ask developers to pull an image of the development environment.

Don't be scared by a private registry; setting one up is not a humongous task! It's just a couple of docker commands, and there is pretty good documentation available for setting one up.

While setting up a development environment, you need to make sure the last instruction in your Dockerfile executes the shell of your choice. This is because when you start a container, this last instruction is what is actually run by Docker, and in our case we need to provide the developer with a shell along with the build toolchain and libraries.

...
CMD ["/bin/bash"]

Now the developer just needs to get or build the image, start the container and use it for their work! To summarize, the commands below are sufficient for a developer to run a fresh environment.

$ docker build -t devenv . # If they are building it

# If they are pulling it from say private registry
$ docker pull private-registry-ip/path/to/devenv

$ docker run -itd --name development devenv
$ docker attach development

When the container is started it will execute the shell, and the subsequent attach command will attach your shell's input/output to the container. Now it can be used just like a normal shell to build the application.

Another good thing which can be done is sharing the workspace with the container, so the container contains only the toolchain and libraries that are needed, while all version control, code editing and the like are done on the host machine. One thing you need to make sure of is that your UID on the host and the UID of the user inside the container are the same. This can easily be done by creating a separate user in the container with the same UID as your UID on the host system, as sketched below.
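
A minimal sketch of both ideas, sharing the workspace and matching the UID; the user name dev and the /workspace path are assumptions.

# Dockerfile additions: create a user with a UID passed in at build time
ARG HOST_UID=1000
RUN useradd -m -u ${HOST_UID} dev
USER dev

# build and run, sharing the current directory as the workspace
$ docker build --build-arg HOST_UID=$(id -u) -t devenv .
$ docker run -itd --name development -v "$PWD":/workspace -w /workspace devenv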

Advantages

Some advantages of using docker container include

  1. They are easy to set up and save the team as a whole a lot of time compared to traditional approaches.
  2. Easy to throw away and start fresh: if developers think they did something wrong with the container, they can prune it and create a fresh one based on the development environment image. This gives developers a lot of freedom to experiment.
  3. Uniformity: you will be sure that your whole team is using the same environment.

So you might ask: what about other container technologies like systemd-nspawn or LXC? Of course they can also be used in a similar fashion; in fact, before experimenting with Docker I was an avid user of systemd-nspawn containers. You might have seen my previous blog posts on systemd-nspawn too. The only reason I switched to Docker is that it is easy to set up, unlike systemd-nspawn which needs a lot of tweaking and tuning, and besides, systemd-nspawn does not have a Dockerfile-like approach, which makes things more time-consuming. So for me Docker won the war and I shifted to using Docker more.

This entire post was based on my experiments and experience with Docker. If you feel something can be done in a better way, please feel free to write to me.

Don MartiWhen can deceptive sellers outbid honest sellers for ad impressions?

Why does the Peak Advertising effect occur most in the most accurately targeted ad media? Why do people tend to filter out targeted ads, using habit power, technology, and regulation, while paying more attention to less finely targeted ad media?

One explanation is that buying ad space is an example of costly signaling. On this view, advertising is basically an exchange of signal for attention, and ads that don't pay their way with some kind of proof of spend are not worth paying attention to because they don't convey useful information about the seller's beliefs on how valuable the audience would find the product.

Another possible explanation is that targetable ad media are more suitable for deception, and that where advertisers bid for space in a medium, deceptive advertisers will tend to outbid the honest ones.

This seems counterintuitive, since we might suppose that the customer lifetime value of an honest seller's newly acquired customer could in many cases be greater than the profit from a quick score by a deceptive seller. But targeting doesn't just match ad impressions with prospective buyers. When used by a deceptive seller, it can also conceal an ad impression from potentially costly attention.

For honest direct marketers, the expected profit from reaching a buyer is positive, and the expected profit from reaching a non-buyer is zero. But the audience does not just contain buyers and non-buyers. People can also be divided into enforcers and non-enforcers. Enforcers can be anything from professional law enforcement people, to someone who takes apart a bogus product and makes a video about it, to just the writer of a bad online review. What enforcers have in common is that for a dishonest seller, the expected profit from reaching an enforcer is negative.

Some kinds of enforcer can impose costs even without buying. For example, a reader might send the publisher a screenshot containing a scam ad and get the advertiser added to an advertiser exclusion list. Other kinds of enforcer might only take action if they buy the product and find it to be a scam. A deceptive advertiser might incur costs when their ad is shown to either kind of enforcer.

For the honest advertiser, the expected profit from a single impression is:

probability of reaching a buyer × expected profit per sale

For the dishonest advertiser, the expected profit is:

probability of reaching a buyer × expected profit per sale − probability of reaching an enforcer × expected loss per enforcer

The expected loss per enforcer is typically high compared to the profit per sale. For example, a small number of contacts with review writers might require a seller to re-launch under a new name. In an ad impression market with both honest and deceptive sellers, where sellers can choose which impressions to bid on, an ad impression that a deceptive seller believes is unlikely to reach an enforcer has extra value to that deceptive advertiser but not to an honest advertiser. Deceptive sellers will tend to outbid honest ones for certain impressions.
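
To make this concrete with purely hypothetical numbers: suppose a sale is worth 10 to either seller and reaching an enforcer costs the deceptive seller 500. For an ordinary impression with a 2% chance of reaching a buyer and a 1% chance of reaching an enforcer, the honest seller's expected profit is 0.02 × 10 = 0.20, while the deceptive seller's is 0.02 × 10 − 0.01 × 500 = −4.80. For an impression the deceptive seller has targeted to be enforcer-free, the enforcer term drops out and both sellers value it at 0.20. But since that impression is one of the few worth anything at all to the deceptive seller, it can concentrate its entire budget there, which is one way it can end up outbidding honest sellers for exactly those impressions.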

A member of the audience might be able to see targeting criteria, but not the advertiser's internal weighting of targeting criteria. (For example, a targeted ad platform might reveal to you that you are being targeted for an ad because your computer is running the latest release of the OS. What they won't tell you is that the seller is bidding on impressions to your OS version because they're selling a tainted nutritional supplement, and the lead testing department at the Ministry of Health is still on the old OS version.)

So, some ad impressions will tend to be purchased by deceptive sellers, but a low-information member of the audience can't tell which impressions those are. Is this an ad from an honest seller that might be reaching both me and enforcers, or is this an ad from a dishonest seller targeted to reach me but not enforcers? When you read a magazine that reaches a community of practice of which you're a member, you can be confident that product reviewers and editors are seeing the same ads you are. A web ad could be targeted to avoid experienced and better-connected members of the community of practice.

One possible explanation for the Peak Advertising effect is the interaction between deceptive sellers discovering how to use a new ad medium's targeting capabilities to avoid enforcers, and the audience discovering the fraction of deceptive sellers.

Related: Ban Targeted Advertising by David Dayen in The New Republic. (I'm not so much interested in whether or not targeted advertising should be banned as I am in the reasoning behind why people choose to protect themselves from it. The story of matching the exact right buyer to the exact right product is much less compelling for most purchase decisions than the buyer's story of finding an adequate product and avoiding deceptive sellers.)

Planet DebianShirish Agarwal: cleartext passwords and transparency

I had originally thought of talking about the recent autonomous car crash in Tempe which killed a homeless lady, but I guess that will have to wait for another day. I saw Lars Wirzenius's blog post, which led me to change direction a bit.

So let me just jump in with Lars' blog post, where he talks about cleartext passwords. While he has surmised and shared what a security problem they are, the pity is that we come to know of this only because the people in question tacitly admitted to bad practices. How many more such bad actors there are, how many developers are putting user credentials in cleartext, god only knows. There was even an April Fool's joke in 2014 which explained why putting passwords in cleartext is bad.

This is one lesson which web developers are neither taught nor learn on their own. Most web development courses in India may talk about web frameworks, CSS, front-end and back-end development, and may even talk about UX, but security is something which is supposed to be magically acquired while you do the above things. Please note I said most, not all; a whole lot of awakening is needed in terms of safe web development practices, but that is a tale for another day. Casual interactions with course publishers suggest that most students are looking for buzzwords, and employers do not look for 'security' as a strong point either.

There have even been casual studies which suggest that only 0.01 of financial crimes in India are reported. I myself am guilty of this: when a bank misappropriates something or does something stupid, my only concern is to get the transaction rectified rather than to worry about whether some small, medium or large-scale conspiracy is happening in the bank. But that malaise has too many factors to cover in this small blog post.

A few years back the EFF did a tremendous job of pursuing and getting everyday users and vendors like Mozilla and Chromium to adopt HTTPS globally, but to my knowledge many Indian websites, including some of the biggest behemoths in India with whom we have day-to-day dealings, keep all their user passwords in cleartext. What may or may not be a shocker to many people is that many ATMs, at least in India, do not work over HTTPS even today. Is it any wonder that skimmers are still able to cheat honest people and taxpayers?

The reasons for all of the above could range from sheer incompetence, to laziness, to not being regulated at all. I will refrain from sharing anecdotes, also because I do not have INR 100 crores, or INR 1 billion rupees, to spare (that statement will become clear in a while); but developers have, in casual conversations, shared that they do neither one-way encryption nor salting nor any of the other methods of securing passwords, either because financial companies don't demand it or because they don't know about it, even though they should know better.
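
For readers unfamiliar with those terms, the difference is easy to demonstrate on the command line. A minimal sketch using openssl (the password and salt are made up; this only illustrates salted one-way hashing, and real systems should prefer a dedicated password-hashing scheme such as bcrypt, scrypt or Argon2):

# cleartext: anyone who can read the file or the database column has the password
$ echo 's3cretpassword' > stored-password.txt

# salted one-way hash (SHA-512 crypt; the -6 option needs a reasonably recent OpenSSL)
$ openssl passwd -6 -salt 'xyz123' 's3cretpassword'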

I can, however, share an anecdote which resulted in a lawsuit that a media house won some time back. It isn't so much about unsafe web practices as about companies' lack of morals in the pursuit of financial gain, and our (the commons') own lack of understanding of such matters.

I had to search my blog before sharing this, and it turns out I have not shared this anecdote before. Surprise, surprise.

Since 2008 I have known of a media house called Moneylife, which is run by a beautiful, very intelligent woman called Sucheta Dalal and her husband Debasis. I believe Debasis is more into the admin side of things, while Sucheta bears both the investigative and editorial responsibilities on her shoulders. While I have never met her whole team, to have the kind of breadth and depth of news you often find on moneylife.in you do need a strong and competent team, which I guess she has.

Sucheta Dalal with the compensation cheque

Copyright – Moneylife.in

I have met her twice, and have been a fan of her work since she started reporting in the Indian Express on the frauds which were happening in SEBI, from where she was consequently fired as she had too many ethics. I have been blessed to meet her those couple of times, but each time I was dumbfounded, as you are when you meet someone you have admired so much. I might have, flustered, said thank you for the work you do, but I could never muster the courage to say anything more than that to her face-to-face.

Anyway, fast forward to a couple of years back: Sucheta wrote a column in Moneylife reporting that unauthorized algorithmic trading was happening at the National Stock Exchange and that some traders were profiting from it. The tip apparently came from a whistle-blower (a Singapore-based trader and hedge fund owner), and Sucheta and her team confirmed it and then printed the same. Interestingly, SEBI, which regulates how financial intermediaries (brokers, stock analysts, stock exchanges and companies) share their expansion plans or any other news, didn't say anything and chose to keep mum, although this was happening right under its nose. Please keep in mind that this happened under the present government dispensation, which had the mottos of 'being the most transparent' and 'we will not eat and will not let corrupt people eat', to paraphrase their election sloganeering.

Before getting on with the story, it would also be interesting to say a bit about NSE. IIRC, BSE for a long time had a monopoly on share trading; there was the Kolkata Stock Exchange too, but due to political winds in Kolkata and many other factors it could not keep up with changes in technology and kind of faded from the national scene over the years.

Due to BSE’s bullish ways or being the only action in town, quite a few private and public institutions came together and formed NSE. The Harshad Mehta stock manipulation scandal probably also accelerated the formation of the institute. The goals at formation were laudable but as it happens in institutes which work and value money over everything else, it’s possible to be corroded as will be seen shortly.

NSE is in many ways a strange beast, having investors from public and private companies, and it supposedly answers to the finance ministry and SEBI (as most of its investors are government institutions, including the finance ministry). There was also talk of making NSE a publicly listed company, but I don't know what happened to that.

It has never been made public whether NSE filed the suit on its own behalf or was persuaded to do so by the finance ministry, SEBI or the traders who were doing the illegal trading; I guess this is something we will never know. The significance of this 'why' will become clear towards the end of the blog post. AFAIK these algo traders control 40-50% of daily trading and so have a huge grip on the market.

I believe NSE filed the first case in the Bombay small causes court, which Moneylife won, and subsequently NSE even tried the Bombay High Court.

Unfortunately for them, Sucheta and her team were no cub reporters: she had years of experience working with the Times of India and then the Indian Express, and hence she had hard documentary proof which she was able to present in court, to which, as far as I know, the prosecution had no answers.

To cut a long story short, NSE had to withdraw its suit and even pay damages of INR 50 lakh, i.e. INR 5 million.

There are many things about the case which I have not covered, some of which can be understood from Shri Lokeshwarri SK's excellent article, which was published in the Hindu Business Line years ago. The article frames many questions which are still open even today.

The reason I shared this story is pretty simple: only a very tiny number of people invest in the share market, I would say 1-2% of the population. Almost all of these people are highly literate and somewhat financially literate as well. If they didn't know such things were happening, then how can the common person on the street question or know whether their data is being kept safe or not? All the contracts and terms and conditions, especially those which touch on population data or finance, or actually anything, can come under 'National Security'.

The best part, the irony, is that algorithmic trading in India is now a legal activity, and apparently was also legal in 2015 when the suit was filed. AFAIK, that change could only have been made by SEBI. The whole affair has also been examined in an article on Indian Legal Live which raises a whole host of disquieting questions. There seems to be a lot of back-dating happening, but as mere spectators we can't even talk about that.

Even the judgement focussed only narrowly on some of the questions raised, as can be inferred from the article, but under the present dispensation judicial activism is on the wane.

While I can’t help in the above, I can share about a tor meetup which probably may help in some direct or indirect way,

I do hope to go there and gain as much as I can, as well as share whatever little I can.

TEDInsanity. Humanity. Notes from Session 8 at TED2018

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. Photo: Ryan Lash / TED

The seven speakers lived up to the two words in the title of the session. Their talks showcased both our collective insanity — the algorithmically-assembled extremes of the Internet — and our humanity — the values and desires that extremists astutely tap into — along with some speakers combining the two into a glorious salad. Let’s dig in.

Artificial Intelligence = artificial stupidity. How does a sweetly-narrated video of hands unwrapping Kinder eggs garner 30 million views and spawn more than 10 million imitators? Welcome to the weird world of YouTube children’s videos, where an army of content creators use YouTube “to hack the brains of very small children, in return for advertising revenue,” as artist and technology critic James Bridle describes. Marketing ethics aside, this world seems innocuous on the surface but go a few clicks deeper and you’ll find a surreal and sinister landscape of algorithmically-assembled cartoons, nursery rhymes built from keyword combos, and animated characters and human actors being tortured, assaulted and killed. Automated copycats mimic trusted content providers “using the same mechanisms that power Facebook and Google to create ‘fake news’ for kids,” says Bridle. He adds that feeding the situation is the fact “we’re training them from birth to click on the very first link that comes along, regardless of where the source is.” As technology companies ignore these problems in their quest for ad dollars, the rest of us are stuck in a system in which children are sent down auto-playing rabbit holes where they see disturbing videos filled with very real violence and very real trauma — and get traumatized as a result. Algorithms are touted as the fix, but Bridle declares, “Machine learning, as any expert on it will tell you, is what we call software that does stuff we don’t really understand, and I think we have enough of that already,” he says. Instead, “we need to think of technology not as a solution to all our problems but as a guide to what they are.” After his talk, TED Head of Curation Helen Walters has a blunt question for Bridle: “So are we doomed?” His realistic but ungrim answer: “We’ve got a hell of a long way to go, but talking is the beginning of that process.”

Technology that fights extremism and online abuse. Over the last few years, we’ve seen geopolitical forces wreak havoc with their use of the Internet. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. “Radicalization isn’t a yes or no choice,” she says. “It’s a process, during which people have questions about ideology, religion — and they’re searching online for answers which is an opportunity to reach them.” In 2016, Green collaborated with Moonshot CVE to pilot a new approach called the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups and used that information to create a campaign that deployed targeted advertising to reach people susceptible to ISIS’s recruiting and show them videos to counter those messages. Available in English and Arabic, the eight-week pilot program reached more than 300,000 people. In another project, she and her team looked for a way to combat online abuse. Partnering across Google with Wikipedia and the New York Times, the team trained machine-learning models to understand the emotional impact of language — specifically, to predict comments that were likely to make someone leave a conversation and to give commenters real-time feedback about how their words might land. Due to the onslaught of online vitriol, the Times had previously enabled commenting on only 10 percent of homepage stories, but this strategy led it to open up all homepage stories to comments. “If we ever thought we could build technology insulated from the dark side of humanity, we were wrong,” Green says. “If technology has any hope of overcoming today’s challenges, we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” In a post-talk Q & A, Green adds that banning certain keywords isn’t enough of a solution: “We need to combine human insight with innovation.”

Living life means acknowledging death. Philosopher-comedian Emily Levine starts her talk with some bad news — she’s got stage 4 lung cancer — but says there’s no need to “oy” or “ohhh” over her: she’s okay with it. After all, explains Levine, life and death go hand in hand; you can’t have one without the other. In fact, therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Levine muses about the scientists who are attempting to thwart death — she dubs them the Anti-Life Brigade — and calls them ungrateful and disrespectful in their efforts to wrest control from nature. “We don’t live in the clockwork universe,” she says wryly. “We live in a banana peel universe,” where our attempts at mastery will always come up short against mystery. She has come to view life as a “gift that you enrich as best you can and then give back.” And just as we should appreciate that life’s boundary line stops abruptly at death, we should accept our own intellectual and physical limits. “We won’t ever be able to know everything or control everything or predict everything,” says Levine. “Nature is like a self-driving car.” We may have some control, but we’re not at the wheel.

A high-schooler working on the future of AI. Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years — he’s now just 18 — he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — machines try every possible solution, even ones too absurd for a human to imagine, until they find the thing that works best to solve a single discrete problem. That can create computers that are champions at Go or Q-Bert, but it really doesn’t create general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives and think with these machines. What can he and these new brains accomplish together?

Come fly with her. From a young age, action and hardware engineer Elizabeth Streb wanted to fly like, well, a fly or a bird. It took her years of painful experimentation to realize that humans can’t swoop and veer like them, but perhaps she could discover how humans could fly. Naturally, it involves more falling than staying airborne. She has jumped through broken glass and toppled from great heights in order to push the bounds of her vertical comfort zone. With her Streb Extreme Action company, she’s toured the world, bringing the delight and wonder of human flight to audiences. Along the way, she realized, “If we wanted to go higher, faster, sooner, harder and make new discoveries, it was necessary to create our very own space-ships,” so she’s also built hardware to provide a boost. More recently, she opened Brooklyn’s Streb Lab for Action Mechanics (SLAM) to instruct others. “As it turns out, people don’t just want to dream about flying, nor do they want to watch people like us fly; they want to do it, too, and they can,” she says. In teaching, she sees “smiles become more common, self-esteem blossom, and people get just a little bit braver. People do learn to fly, as only humans can.”

Calling all haters. “You’re everything I hate in a human being” — that’s just one of the scores of nasty messages that digital creator Dylan Marron receives every day. While his various video series such as “Every Single Word” and “Sitting in Bathrooms With Trans People” have racked up millions of views, they’ve also sent a slew of Internet poison in his direction. “At first, I would screenshot their comments and make fun of their typos but this felt elitist and unhelpful,” recalls Marron. Over time, he developed an unexpected coping mechanism: he calls the people responsible for leaving hateful remarks on his social media, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace — you would have noticed — he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” And he stresses that his solution is not right for everyone. In a Q & A afterward, he says that some people have told him that his podcast just gives a platform to those espousing harmful ideologies. Marron emphasizes, “Empathy is not endorsement.” His conversations represent his own way of responding to online hate, and he says, “I see myself as a little tile in the mosaic of activism.”

Rebuilding trust at work. Trust is the foundation for everything we humans do, but what do we do when it is broken? It’s a problem that fascinates Frances Frei, a professor at Harvard Business School who recently spent six months trying to restore trust at Uber. According to Frei, trust is a three-legged stool that rests on authenticity, logic, and empathy. “If any one of these three gets shaky, if any one of these three wobbles, trust is threatened,” she explains. So which wobbles did Uber have? All of them, according to Frei. Authenticity was the hardest to fix – but that’s not uncommon. “It is still much easier to coach people to fit in; it is still much easier to reward people when they say something that you were going to say,” Frei says, “but when we figure out how to celebrate difference and how to let people bring the best version of themselves forward, well, holy cow, is that the world I want my sons to grow up in.” You can read more about her talk here.

TEDBMW at TED2018: Putting its self-driving car to the reading, mascara and ramen test

“Driving” this autonomous vehicle is as easy as using a microwave, discovered TED Ideas Editor Daryl Chen at the BMW Personal CoPilot Experience at TED2018: The Age of Amazement, April 10. Photo: Lawrence Sumulong / TED

“The ultimate sitting machine.”

Please be kind — this is only my first attempt at a tagline for the BMW autonomous vehicle which I just went for a test drive, er, test ride in.

Yes, I can proudly say that I’ve gone for a ride in the future — and it’s smooth enough to eat ramen in.

Wait, let me back up (just like a car, get it?). At TED2018, BMW has been treating attendees to rides in its i3 cars that have been kitted out with level 5 autonomous vehicle capability. It’s not exaggerating for me to call this vehicle the future. “You will not be able to buy this in the next three years, but today BMW is working on this technology,” says BMW’s US Technology Office Vice President Simon Euringer. “Normally, we don’t do that but because of all the noise about the autonomous vehicles, we think it makes sense to give people a preview.” Even though BMW has been relatively quiet about its self-driving plans compared to some of its rivals, there’s plenty of action happening behind the scenes. In fact, the company just opened an Autonomous Driving Campus near Munich that brings together 80 teams that are working on this effort. The number of people at BMW focused on self-driving cars is estimated by Euringer to be “way north of 1,000.” He adds, “This is one of the biggest investments in car industry; this is probably a bigger investment than electro-mobility.”

Level 5 means there is no driver in the vehicle, no person behind the wheel. In fact, speculates Euringer, “the car would probably not have even a steering wheel.” BMW, like the other auto companies, foresees driverless cars being used by children and other people who don’t have driver’s licenses.

Before my ride, I decided to put the car to a series of tests; I wanted to see if I was able to accomplish three common activities that are challenging in a moving car. My first activity: reading a book. I am an avid reader, but I’m unable to do so in a traditional car because I get carsick. Novel in hand, I got into the backseat of the BMW in the basement of the Vancouver Convention Center. Then, I started the car. “Driving” this autonomous vehicle was like watching a video or using a microwave — I used a touchscreen to enter a destination, hit the “start here” button, and the car began moving. That’s it. Whenever I wanted to stop, I hit the “pause” button and it slowed to a halt. The vehicle glided right through my reading test — the ride was smooth enough that it felt like I was enjoying my book in a comfy leather armchair. I dove into my novel and didn’t emerge until it came time for my next challenge.

TED Ideas Editor Daryl Chen puts the vehicle to the all-important mascara test at the BMW Personal CoPilot Experience at TED2018: The Age of Amazement, April 10, Vancouver. Photo: Lawrence Sumulong / TED

My second activity was applying mascara. As anyone who has ever put on makeup in a traveling car can tell you, the results are often not so pretty, and mascara is among the most difficult cosmetics to handle. Unfortunately, I must report that self-driving capabilities did not make my task that much easier. While the ride was mercifully bump-free, the turns were still enough to make my wand hand shake and smudge; the process was also complicated by the lack of mirror in the backseat for me to use (note to BMW execs: can you fix this?).

Finally, I was ready for my third and final activity: eating noodles. Noodles are an important part of my diet — and in a conventional car, consuming them means spilling, splattering, and needing a change of clothing. Riding in the car, I took out my piping-hot cup of instant ramen and a pair of chopsticks. And I’m glad to say — for me, my dress, the vehicle, but most especially, for the worried BMW representative — that I did not drip a drop. I ate happily and neatly.

With a final tap of the touchscreen, I ended my ride.

Okay, I think I’ve figured out the perfect tagline for this autonomous vehicle: the ultimate noodle machine. What do you think, BMW?

TEDAnnounced at TED2018: TED’s Hindi-language Star Plus TV series “TED Talks India: Nayi Soch” renewed for three seasons

Shah Rukh Khan hosts Season 1 of TED Talks India: Nayi Soch, which was just renewed for three more seasons on Star Plus. Photo: Amit Madheshiya / TED

New York, NY (April 13, 2018)—TED announced today that its highly acclaimed Star Plus TV prime-time series TED Talks India: Nayi Soch, a Hindi-language TV and digital series hosted by Shah Rukh Khan which premiered last fall, has been renewed for three more seasons.

The first season, which featured speakers delivering inspiring and informative talks in TED’s signature style of 18 minutes or less, drew an astounding 96 million viewers in fall 2017.

TED Talks India: Nayi Soch speakers deliver TED Talks in Hindi on topics as varied as science and social justice before a live studio audience, with professional subtitles in Hindi and in English provided for viewers at home. Almost every talk features a short Q&A between the speaker and Khan that dives deeper into the ideas shared onstage.

TED Talks India: Nayi Soch host and Bollywood star Shah Rukh Khan said: “I believe passionately that India is brimming with brave and brilliant ideas—and that those ideas have never mattered more. This program features India’s finest storytellers in a surprising blend of entertainment, inspiration and intellectualism, and I‘m more committed than ever to spreading their ideas to my country and the rest of the world.”

TED’s Head of Television Juliet Blake, who executive-produced the series, said: “We’re incredibly proud of this show’s accomplishments breaking barriers to reach new audiences, and look forward to spending the next several seasons inspiring a nation to embrace ideas and curiosity.”

Star TV CEO and Chairman Uday Shankar said: “Star TV is committed to developing programming that goes beyond pure entertainment to inspire and educate our massive audience. Both the critical response and the tremendous viewer love for this series were key factors in our decision to bring Ted Talks India: Nayi Soch back for at least three more seasons.”

Head of TED Chris Anderson said: “Ultimately TED’s goal is to develop compelling new content formats that can make ideas available and relevant to billions of people we haven’t reached yet. This journey with Star TV and Shah Rukh Khan has allowed us to make significant progress spreading ideas.”

Here’s what audiences have had to say:

  • “Amidst all the ruckus of daily soaps fighting for TRPs we’ve finally got a TV show that’s absolutely worth watching.” – Neeraj Chavan (via Quora)
  • “#TEDTalksIndiaNayiSoch is one of the best initiatives ever, I loved watching @TEDTalks and now that idea and platform have been brought to India I am amazed to see that how much potential India has!” –@Umangkelani (via Twitter)
  • “This action-packed one hour show airing on Star Plus is definitely a great way to explore revolutionary ideas from Indians.” – Anuj Shikarkhane (via Quora)
  • “It is clear from today’s scenario of world that number of problems are way more than the problem solvers and we need crowd support to solve them. This is where TED Talks India helps. It makes people aware about the issues and ways to address it. This is the need of the hour.” – Aayush Wadhwa (via Quora)
  • “This show shows us that there are great undiscovered minds in our country and a [number] of unsung heroes. TED is like a light bulb in the dark field of the daily soaps which we experiencing for years.” – Rishabh Nayan (via Quora)

Here’s what reviewers have had to say:

  • “[P]robably the best show to have premiered on Indian television, in recent times.” – First Post
  • “The content is great with a host of speakers discussing ideas that leave people inspired and positive.” – India Today
  • “The best part about the show is that you get to learn so much. It is enlightening to say the least.” – Bollywood Life
  • “[S]omething absolutely must for Indian audiences.” – Chandigarh Metro

The TED Talks India: Nayi Soch audience stretches beyond television on TED.com/india and for TED mobile app users in India. Each episode has been conveniently broken out into individual TED Talks, one talk for each speaker on the program. Viewers can watch and share them on their own, or download them as playlists to watch one after another.

Host Shah Rukh Khan and TED’s head of television, Juliet Blake, wrap production of TED Talks India: Nayi Soch season 1. The show has been renewed after garnering 96 million viewers and sparkling reviews in its first season. Photo: Amit Madheshiya / TED

TEDBody electric: Notes from Session 9 of TED2018

Mary Lou Jepsen demonstrates the ability of red light to scatter when it hits our bodies. Can we leverage this property to see inside ourselves? She speaks at TED2018 on April 13, 2018. Photo: Ryan Lash / TED

During the week of TED, it’s tempting to feel like a brain in a jar — to think on a highly abstracted, intellectual, hypertechnical level about every single human issue. But the speakers in this session remind us that we’re still just made of meat. And that our carbon-based life forms aren’t problems to be transcended but, if you will, platforms. Let’s build on them, explore them, and above all feel at home in them.

When red light means go. The last time Mary Lou Jepsen took the TED stage, she shared the science of knowing what’s inside another person’s mind. This time, the celebrated optical engineer shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. Her demo doubles as a crash course in optics, with red and green lasers and all kinds of cool gear (some of which juuuuust squeaked through customs in time). And it’s a wildly inspiring look at a bold effort to solve an old problem in a new way.

Floyd E. Romesberg imagines a couple new letters in DNA that might allow us to create … who knows what. Photo: Jason Redmond / TED

What if DNA had more letters to work with? DNA is built on only four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the letters of the genetic alphabet are not all that unique. For the problem of life, perhaps, “maybe we’re not the only solution, maybe not even the best solution — just a solution.” And maybe new parts can be built to work alongside the natural parts. Inspired by these insights, Romesberg and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. Worried about unintended consequences? Romesberg says that his augmented 6-letter DNA cannot be replenished within the body. As the unnatural genetic materials are depleted, the semi-synthetic cells die off, protecting us against nightmarish sci-fi scenarios of rogue microorganisms.

On the slide behind Dan Gibson: a teleportation machine, more or less. It’s a “printer” that can convert digital information into biological material, and it holds the promise of sending things like vaccines and medicines over the internet. Photo: Ryan Lash / TED

Beam our DNA up, Scotty. Teleportation is real. That’s right, you read it here first. This method isn’t quite like what the minds behind Star Trek brought to life, but the massive implications attached are just as futuristic. Biologist and engineer Dan Gibson reports from the front lines of science fact, that we are now able to transmit not our entire selves, but the most fundamental parts of who we are: our DNA. Or, simply put, biological teleportation. “The characteristics and functions of all biological entities including viruses and living cells are written into the code of DNA,” says Gibson. “They can be reconstructed in a distant location if we can read and write the sequence of that DNA code.” The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one literally worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines. The process takes weeks now, but could someday come down to 1–2 days. (And don’t worry: Gibson, his team and the government screen every synthesis order against a database to make sure viruses and pathogens aren’t being made.) He says: “For now, I will be satisfied beaming new medicines across the globe, fully automated and on-demand to save lives from emerging deadly infectious diseases and to create personalized cancer medicines for those who don’t have time to wait.”

In a powerful talk, sex educator Emily Nagoski educates us about emotional nonconcordance — when our body and our mind “say” different things in an intimate situation. Which to listen to? Photo: Ryan Lash / TED

Busting one of our most dangerous myths about sex. When it comes to pleasure, humans have something that’s often called “the reward center” — but, explains sex educator Emily Nagoski, that “reward center” is actually three intertwined, separate systems: liking, or whether it feels good or bad; wanting, which motivates us to move toward or away from a stimulus; and learning. Learning is best explained by Pavlov’s dogs, whom he trained to salivate when he rang a bell. Were the dogs hungry for the bell (wanting)? Did they find the bell delicious (liking)? Of course not: “What Pavlov did was make the bell food-related.” The separateness of these three things, wanting, liking and learning, helps explain a phenomenon called emotional nonconcordance, when our physiological response doesn’t match our subjective experience. This happens with all sorts of emotional and motivational systems, including sex. “Research over the last thirty years has found that genital blood flow can increase in response to sex-related stimuli, even if those sex-related stimuli are not also associated with a subjective experience of wanting and liking,” she says. The problem is that we don’t recognize nonconcordance when it comes to sex: in fact, there is a dangerous myth that even if someone says they don’t want it or don’t like it, their body can say differently, and the body is the one telling the “truth.” This myth has serious consequences for victims of unwanted and nonconsensual sexual contact, who are sometimes told that their nonconcordant genital response invalidates their experience … and who can even have that response held up as evidence in sexual assault cases. Nagoski urges all of us to share this crucial information with someone — judges, lawyers, your partners, your kids. “The roots of this myth are deep and they are entangled with some very dark forces in our culture, but with every brave conversation we have, we make the world that little bit better,” she says to one of the biggest standing Os in a standing-O-heavy session.

The musicians and songwriters of LADAMA perform and speak at TED2018. Photo: Ryan Lash / TED

Bringing Latin alternative music to Vancouver. Singing in Spanish, Portuguese and English, LADAMA enliven the TED stage with a vibrant, energizing and utterly danceable musical set. The multinational ensemble of women — Maria Fernanda Gonzalez from Venezuela, Lara Klaus from Brazil, Daniela Serna of Colombia, and Sara Lucas from the US — and their bass player collaborator combine traditional South American and Caribbean styles like cumbia, maracatu and joropo with pop, soul and R&B to deliver a pulsing musical experience. The group took attendees on a musical journey with their modern and soulful compositions, playing original songs “Night Traveler” and “Porro Maracatu.”

Hugh Herr lost both legs below the knee, but the new legs he built allow him once again to run, climb and even dance. Photo: Ryan Lash / TED

“The robot became part of me.” MIT professor Hugh Herr takes the TED stage, his sleek bionic legs conspicuous under his sharp grey suit. “I’m basically nuts and bolts from the knee down,” Herr says, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward realizing a goal that has long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it neural embodied design, a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend of Herr’s who was in a climbing accident that resulted in the amputation of his foot. Using the Agonist-antagonist Myoneural Interface, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. “Jim moves and behaves as if the synthetic limb is part of him,” Herr says. And he’s even back climbing again. Taking a few moments to dream, Herr describes a future where humans have augmented their bodies in a way that fundamentally redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. “I believe humans will become superheroes,” Herr says. “During the twilight years of this century, I believe humans will be unrecognizable in morphology and dynamics from what we are today. Humanity will take flight and soar.”

Jim Ewing, left, lost a limb in a climbing accident; he partnered with MIT professor Hugh Herr, right, to build a limb that got him back up and climbing again. Photo: Ryan Lash / TED

,

TEDHow to rebuild trust … Frances Frei speaks at TED2018

Authenticity is critical to trust, but “if those of us who are different give in to the temptation to hold back our authentic selves, then the most interesting thing about us — our difference — is muted,” says Harvard Business School professor Frances Frei at TED2018: The Age of Amazement on April 13, Vancouver. Photo: Ryan Lash / TED

“It’s my belief that trust is the foundation for everything we do,” says Harvard Business School professor Frances Frei, “and that if we can learn to trust one another more, we can have unprecedented human progress.”

What to do, then, when trust is broken? In companies, there are many reasons that a rupture can happen. Things like data breaches, a culture of bias and discrimination, a CEO caught disparaging an employee, even a technological error that costs human life. And all those things were happening at Uber. Which is why, in 2017, Frei embedded as a full-time employee at Uber to help them figure out how to rebuild trust after the company had so completely lost it. The sheer scale of the hole that Uber had fallen into is what attracted her. “My favorite trait is redemption,” she says, “I believe that there is a better version of us around every corner, and I have seen firsthand how organizations and communities and individuals change at breathtaking speed.”

A loss of trust is no different than most problems — before you can solve it, first you have to understand how it works. Trust has three components, says Frei: authenticity, logic and empathy. “When all three of these things are working, we have great trust,” she says, “but if any one of these three gets shaky, if any one of these three wobbles, trust is threatened.”

Frei believes the way to rebuild trust is to understand where you wobble — whether it’s authenticity, empathy or logic that generally gets in the way of someone trusting you — and learn strategies to correct that wobble.

“The most common wobble is empathy,” she says, because “people just don’t believe that we’re mostly in it for them, and they believe that we’re too self-distracted.” Due to the constant demands and distractions of daily life, it can be hard to create the time and space that empathy needs. The fix is pretty easy: “Identify where, when and to whom you are likely to offer your distraction,” says Frei. “That should trace pretty perfectly to when, where and to whom you are likely to withhold your empathy.” If you can truly listen to the people you’re with, you have a chance to fix the wobble.

Logic is a wobble either because your reasoning itself is shaky (in which case, Frei says, “I can’t really help you”) or your ability to communicate it is weak. If your tendency is the latter, consider changing the way you structure your communication. Instead of starting with a story and ending with your point, begin with your point in a crisp half sentence and then support your conclusion — rather than the other way around. Bonus to this approach: If you’re interrupted, you’ve already made your point!

The toughest of the three wobbles to correct is authenticity. While Frei’s prescription is simple — just be you — being your true self isn’t always easy, especially if you are different from the majority in any way. But remember this: Not being authentic damages trust, which can lead to all sorts of unintended consequences. “So, here’s my advice,” she says: “Wear whatever makes you feel fabulous. Pay less attention to what you think people want to hear from you and far more attention to what your authentic awesome self needs to say.” She urges leaders to make sure that their organizations are safe, welcome spaces for people to show up as themselves.

What was Uber’s wobble? “Empathy, logic, authenticity were all wobbling like crazy,” she says, “but we were able to find super-effective, super-quick fixes” for the first two. For example, it had become common practice at Uber meetings for people to text each other – about the meeting. “It did not create a safe, empathetic environment,” Frei smiles. The simple solution: Ask people to turn off their tech, and put it away while talking. When it came to logic, Uber had grown so rapidly that managers had been continuously promoted until they reached positions that outstripped their capabilities. “It was not their fault,” she says. The remedy: They brought in executive education that focused on logic, strategy and leadership.

The last wobble — authenticity — proved harder to change, which is common. “It is still much easier to coach people to fit in; it is still much easier to reward people when they say something that you were going to say.” But, she says, “when we figure out how to celebrate difference and how to let people bring the best version of themselves forward, well, holy cow, is that the world I want my sons to grow up in.”

In a post-talk Q&A with TED curator Helen Walters, Frei says she remains hopeful that trust can be rebuilt within our businesses and institutions, but it won’t happen right away: “I am super-optimistic that everyone can set the conditions to be more empathic, to use more rigorous logic, to be their authentic selves.”

TEDAnnounced at TED2018: Google’s new TalkToBooks search

Here onstage at TED2018, futurist Ray Kurzweil has just formally announced a new way to query the text inside books using something called semantic search — which is a search on ideas and concepts, rather than specific words. Called TalkToBooks, the beta-stage product uses an experimental AI to query a database of 120,000 books in about a half a second. (As Kurzweil jokes: “It takes me hours to read a hundred thousand books.”)

Jump in and play with TalkToBooks »

Kurzweil suggests some questions to ask it:

How can I stop thinking and fall asleep?
What is the meaning of life?
How does eating fiber help you lose weight?
Why is the Turing test important?

Some answers are relevant, and others, while maybe not quite correct, intriguingly reveal the way the machine “thinks,” the kinds of connections it wants to make. If you want to dig further, read this blog post from Google’s Semantic Experiences group, and this detailed coverage from The Verge.
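For the curious, here is a toy sketch of what “searching on ideas rather than words” generally looks like in code: represent the query and each candidate passage as a vector, then rank passages by how close their vectors are. This is only an illustration of the general embedding-and-similarity technique, not a description of how Google’s system works, and the embed() helper below is a deliberately crude stand-in (character trigram counts) for a real sentence-embedding model.

# Toy sketch of embedding-based semantic search. Illustrative only;
# not how TalkToBooks is actually implemented.
import math

def embed(text):
    # Stand-in for a real sentence-embedding model: character trigram
    # counts, so the example runs with no external dependencies.
    vec = {}
    t = text.lower()
    for i in range(len(t) - 2):
        gram = t[i:i + 3]
        vec[gram] = vec.get(gram, 0) + 1
    return vec

def cosine(a, b):
    # Cosine similarity between two sparse vectors stored as dicts.
    dot = sum(v * b[k] for k, v in a.items() if k in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Made-up example passages standing in for sentences drawn from books.
passages = [
    "Counting slow breaths helps the mind settle before sleep.",
    "Dietary fibre slows digestion and keeps you feeling full for longer.",
    "The Turing test asks whether a machine can pass for a person in conversation.",
]

def search(query, corpus):
    q = embed(query)
    # Rank every passage by similarity to the query, best match first.
    return sorted(corpus, key=lambda p: cosine(q, embed(p)), reverse=True)

if __name__ == "__main__":
    for hit in search("How does eating fiber help you lose weight?", passages):
        print(hit)

A real system would presumably swap embed() for a trained sentence-embedding model and an approximate nearest-neighbour index, so that ranking sentences drawn from 120,000 books stays fast.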

CryptogramFriday Squid Blogging: Eating Firefly Squid

In Toyama, Japan, you can watch the firefly squid catch and eat them in various ways:

"It's great to eat hotaruika around when the seasons change, which is when people tend to get sick," said Ryoji Tanaka, an executive at the Toyama prefectural federation of fishing cooperatives. "In addition to popular cooking methods, such as boiling them in salted water, you can also add them to pasta or pizza."

Now there is a new addition: eating hotaruika raw as sashimi. However, due to reports that parasites have been found in their internal organs, the Health, Labor and Welfare Ministry recommends eating the squid after its internal organs have been removed, or after it has been frozen for at least four days at minus 30 C or lower.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianSilva Arapi: Digital Born Media Carnival July 2017

As described on their website, Digital Born Media Carnival was a gathering of hundreds of online media representatives, information explorers and digital rights enthusiasts. The event took place on 14–18 July in Kotor, Montenegro. I found out about it when one of the members of Open Labs Hackerspace shared the news on our forum. While I was torn about whether to attend because of a very busy period at work and at the university, the whole thing sounded very interesting and intriguing at the same time, so I decided to join the group of people who were already planning to go and to apply with a workshop session too. No regrets at all! This turned out to be one of the greatest events I have attended so far and had a great impact on what I decided to do next regarding my work as a hacktivist and as a digital rights enthusiast.


The organizers of the Carnival had announced on the website that they were looking for online media representatives, journalists, bloggers, content creators, human rights defenders, hacktivists, new media startups and so on, and as a hacktivist I found myself eager to join and learn more about some topics which had been intriguing me for a while, while also seeing this as an opportunity to meet other people with interests similar to mine.

I applied with a workshop where I was going to introduce some simple tools people can use to better preserve their privacy online. The session was accepted and I was invited to lead, together with Andrej Petrovski, the sessions of the Digital Security track, located in the Sailing Club "Lahor". I held my workshop there late on Saturday morning and I really enjoyed it. Most of the attendees were journalists or people without a technical background, and they showed a lot of interest, asked me many questions and shared some stories. I also received very good feedback on the workshop, which gave me really good vibes since this was the first time I had spoken on cyber security at an important event of this kind, as DBMC'17 was.

I spent the other days of the Carnival attending different workshops and talks, meeting new people, discussing with friends and enjoying the sun. We would go to the beach in the afternoon, and we had a very cool drone photo shoot 😉

DBMC drone photo shoot – Kotor, Montenegro

This was great work from the SHARE Foundation, and hopefully there will be other events like this in the near future; I would totally recommend attending! If you are new to the topics discussed there, this is a great way to start. If you have been in the field for a while, this is the place to meet other professionals like you. And if you are looking for an event which you can combine with a few days of vacation while staying in touch with causes you care about, this would once again be the place to go.

TEDAnnounced at TED2018: Explore satellite images with Planet Stories

Will Marshall and his company, Planet, have launched a fleet of small satellites to image the Earth every day, watching over changes both natural and human-made. At TED this week, he announced a way for all of us to play with this rich data set — 500 or so images of every location on Earth over time. Photo: Ryan Lash / TED

Back in 2014, Will Marshall took the TED stage to introduce us to his company, Planet, and their proposed fleet of tiny satellites. The goal: to image the planet every day, showing us how Earth changes in near-real time. In 2018, that vision has come good: every day, a fleet of about 200 small satellites pictures every inch of the planet, taking 1.5 million 29-megapixel images (about 6TB of data daily) and gathering data on changes both natural and human-made. The images are used by businesses, academics and other professionals to monitor our lovely planet.

This week at TED, Marshall announced a consumer version of Planet, called Planet Stories, to let ordinary people play with these images too. You can compare satellite images over time, at any location you choose, and produce time-lapse images that show change and movement. Watch a new neighborhood rise, see slow but dramatic changes in the environment, or watch the tides in a before-and-after comparison of seasonal change. It’s just a little bit addictive.

Two simple tutorials will get you started for two weeks of exploring this data set. Create an account on Planet Stories and start playing >>

 

TEDScenes from the Tech Playground at TED2018

Assembled by our tech curator Alex Moura, six exhibits around the theater explore the hands-on, playful and human side of tech. Every exhibit is in some way touchable, relatable — not a piece of shiny gear in a plexiglas box but instead something to step into and be part of and play with. Meet our Tech Playground:

What are Victoria and M doing here? Well, if you could see what Victoria sees, she is interacting with a piece of sculpture with height, depth and a space to crawl into. As sculptor M Eifler describes it: “Their act of looking will reveal the size and position of the sculpture to the rest of us.” Learn more about Invisible Sculpture. Photo: Jason Redmond / TED

Have a minute for magic? How about 3 minutes, or 5? If you press one of these buttons, the Short Edition machine at TED2018 will print you a short story or a poem. It’s a small reminder to make and take art every day. Bonus: You can enter a short story contest this week, and maybe see your own short story in these printers as they roll out across North America this year. Photo: Jason Redmond / TED

Each of these scrolls contains a short story to read and share. Photo: Jason Redmond / TED

Spatial AR is a computing platform based on AR; imagine a 3D interface that turns your computer into a collaborative creative canvas. It’s being developed by interface gurus Jinha Lee and Anand Agarawala, whom you may know from their previous TED Talks about making smarter, better user interfaces. Photo: Lawrence Sumulong / TED

Root Robotics is on a mission to help people explore the amazing things you can do with your imagination and a little bit of code. Root’s app is designed for all ages, and uses music, art and adventure to teach coding in simple, colorful ways. Photo: Jason Redmond / TED

In the Mira Prism experience, attendees can collaborate to solve a series of challenges assisted by holographic work instructions, all powered by a smartphone and seen through the transparent lenses of the Mira Prism headset. Photo: Lawrence Sumulong / TED

This is Kuri, the autonomous robot designed with personality, awareness, and mobility. Kuri’s job is to capture life’s little moments while learning the rhythm of your household. She can wake you up in time for work and greet you when you come home at night. Her expressive eyes and robot language add to her uniquely adorable personality. Photo: Jason Redmond / TED

 

 

CryptogramCOPPA Compliance

Interesting research: "'Won't Somebody Think of the Children?' Examining COPPA Compliance at Scale":

Abstract: We present a scalable dynamic analysis framework that allows for the automatic evaluation of the privacy behaviors of Android apps. We use our system to analyze mobile apps' compliance with the Children's Online Privacy Protection Act (COPPA), one of the few stringent privacy laws in the U.S. Based on our automated analysis of 5,855 of the most popular free children's apps, we found that a majority are potentially in violation of COPPA, mainly due to their use of third-party SDKs. While many of these SDKs offer configuration options to respect COPPA by disabling tracking and behavioral advertising, our data suggest that a majority of apps either do not make use of these options or incorrectly propagate them across mediation SDKs. Worse, we observed that 19% of children's apps collect identifiers or other personally identifiable information (PII) via SDKs whose terms of service outright prohibit their use in child-directed apps. Finally, we show that efforts by Google to limit tracking through the use of a resettable advertising ID have had little success: of the 3,454 apps that share the resettable ID with advertisers, 66% transmit other, non-resettable, persistent identifiers as well, negating any intended privacy-preserving properties of the advertising ID.

Worse Than FailureError'd: Surgeons, Put Down Your Scalpels

"I wonder what events, or lawsuits, lead TP-Link to add this warning presumably targeted individuals who updated firmware just ahead of performing medical procedures," writes Andrew.

 

"WalMart was very concerned I didn't bag my bags," wrote Rob C.

 

"I sure hope the pilots' map is more accurate than the one they show the passengers," wrote Maddie J.

 

"To me, messages like this are almost like saying to the user 'See? You are the reason why this is broken. Now go and code a fix for it.'," writes Philip B.

 

Brian J. wrote, "To add insult to injury here, neither the 'Yes' nor the 'No' button worked. Especially the 'No' button."

 

Aankhen wrote, "If nothing else, CrashPlan is very confident, I’ll give it that."

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Don Martiworking post-creepy ads, and stuff

Post-creepy web ad sightings: What's next for web advertising after browser privacy improvements and regulatory changes make conventional adtech harder and harder?

The answer is probably something similar to what's already starting to pop up on niche sites. Here's a list of ad platforms that work more like print, less like spam: list of post-creepy web ad systems. Comments and suggestions welcome (mail me, or do a GitHub pull request from the link at the bottom.)

Fun with bug futures: we're in Mozilla's Internet Health Report. Previous items in that series:

ICYMI: Mozilla experiment aims to reduce bias in code reviews

Lots of GDPR and next-generation web ads stories in the past few weeks. A few must-read ones.

Publishers Haven't Realized Just How Big a Deal GDPR Is: "My advice to you is rethink your approach to GDPR. This is your chance to be a part of the solution, rather than being part of the problem."

Brand Safety Is Not Driving Media Allocation Decisions in 2018/19

Mark Ritson: This is a critical point in marketers’ relationship with data privacy

What GDPR really means

Planet Linux AustraliaBen Martin: My little robotic pals

Years ago I decided to build an indoor robot with multiple Kinects for navigation and a robotic arm for manipulation. It was an interesting time working out how to do this and what is needed to get a mobile base to map and navigate a static and dynamic indoor space. Any young players reading this might think that ROS can just magically make this all happen. There are some interesting issues to discover when building your own base, and some, um, "issues", shall we say, that you will need to address that are not in the books or docs. I won't spoil it here for the new players other than to say: be prepared to be persistent.


There are two active wheels at the front and a single drag wheel at the back, about 12 inches behind the front wheels. I wrote the code to control the arm myself as custom ROS nodes. A great trick here is that you can smooth the arm's motion by injecting a shim ROS node that takes a single target and moves towards it along a sinusoidal profile.
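Roughly, such a shim can look like the sketch below. This is a simplified illustration rather than the exact node running on my robot: the topic names (/arm/target, /arm/command), the std_msgs/Float64 message type and the half-cosine easing profile are placeholders for whatever your arm actually uses.

#!/usr/bin/env python
# Sketch of a smoothing shim node: subscribe to a raw joint target and
# republish a setpoint that eases towards it along a half-cosine profile,
# so velocity is zero at both ends of the move.
import math
import rospy
from std_msgs.msg import Float64

class SmoothingShim(object):
    def __init__(self):
        self.current = 0.0   # last setpoint published
        self.start = 0.0     # position when the latest target arrived
        self.target = 0.0    # latest raw target
        self.progress = 1.0  # 0..1 progress through the current move
        self.duration = rospy.get_param("~duration", 2.0)  # seconds per move
        self.pub = rospy.Publisher("/arm/command", Float64, queue_size=1)
        rospy.Subscriber("/arm/target", Float64, self.on_target)

    def on_target(self, msg):
        # Restart the easing from wherever the arm currently is.
        self.start = self.current
        self.target = msg.data
        self.progress = 0.0

    def step(self, dt):
        if self.progress < 1.0:
            self.progress = min(1.0, self.progress + dt / self.duration)
            # Half-cosine ease: smooth start, smooth stop.
            alpha = 0.5 - 0.5 * math.cos(math.pi * self.progress)
            self.current = self.start + alpha * (self.target - self.start)
            self.pub.publish(Float64(self.current))

if __name__ == "__main__":
    rospy.init_node("smoothing_shim")
    shim = SmoothingShim()
    rate = rospy.Rate(50)  # 50 Hz control loop
    while not rospy.is_shutdown():
        shim.step(1.0 / 50.0)
        rate.sleep()

The nice part is that nothing else in the system needs to know the shim exists: whatever publishes targets keeps doing so, and the consumer just sees smoother setpoints.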

Now I have a new friend for outdoor activity, the "hound bot". The little furry friend is still sans hair, but has GPS, an IMU, RC control override, and a PS4 Eye camera mounted for depth perception and mapping. I am taking a leaf out of one of the big car makers' books and only using cameras for navigation. But for me it is about cost, since a good lidar is still much too expensive for the hound.


The hound is a sort of monocoque, where the copper-looking square part at the front is part of a 1/4 inch aircraft-grade alloy solid welded chassis that extends the length of the robot. The hound can do about 20km/h and is around 20kg in heft. The electronics bay in the middle is protected by a reinforced carbon fibre layup that I did. Mixing materials for fun and a slight weight saving.

One great part about doing this "because I want to" is that I am unbounded. Academic institutions might say that building robust alloy shells is not a worthwhile task and only the abstract algorithms matter. I get to pick and choose what matters based purely on what is interesting, what is hard to do (yay!), and what will help me get the robot to perform a task that I want.

The hound will get gripper(s) so it can autonomously "fetch" things for me such as the mail or go find and pick up objects on the lawn.

TEDWhat on earth do we do? Notes from Session 6 of TED2018

When Chinese citizens got angry about air quality, they also got creative; here’s an air quality rating system with personality. Angel Hsu tracks environmental data related to China, and studies how the world’s most populous nation is setting the pace for global responses to pollution. She speaks at TED2018 on April 12, 2018, in Vancouver. Photo: Bret Hartman / TED

This beautiful blue marble, our shared earth: On a sunny morning in Vancouver, we pack into the darkened TED theater to learn more about its mysteries, its challenges, and how we might help it thrive. There’s no shiny “nature porn” here (well, maybe a little bit) but a clear-eyed look at what’s going right and wrong.

The next renewable resource: the cold of space. Today, 17 percent of electricity used worldwide is for cooling systems — which also includes refrigeration — and the systems produce 8 percent of global greenhouse gas emissions. “What keeps me up at night is that the energy use for cooling is expected to grow six-fold by the year 2050,” says University of Pennsylvania physicist and applied engineer Aaswath Raman. And he adds, “The warmer the world gets, the more we are going to need cooling systems. This then has the potential to cause a feedback loop, where cooling stands to become one of the biggest single sources of greenhouse gas emissions this century.” Raman is exploring a potential solution that leverages a cool fact about infrared light and deep space. Read more about this big-picture idea »

The intricacies of removing CO2 from the atmosphere. The concentration of CO2 in today’s atmosphere is a staggering 400ppm, but we’re still not cutting emissions as fast as we need to. According to chemical engineer Jennifer Wilcox, this means we’ll need to actually pull CO2 back out of the atmosphere – a strategy known as negative emissions. The technology to do this already exists. Basically, a device known as an air contactor uses CO2-grabbing chemicals in solid materials or dissolved in water to pull the gas out of the air, sort of like a synthetic forest. What makes this process tricky, though, is that it’s energy-intensive, which drives up costs or, depending on the type of energy used, ends up emitting more CO2 than is captured. Several companies are working on making the process more cost-effective using a variety of techniques, as well as solving other problems of carbon capture like how and where we should build these “synthetic forests.” However, no matter how many technological advances we make, negative emissions are not enough on their own to solve our climate crisis, Wilcox cautions. “We should not see negative emissions as a replacement for stopping pollution but rather as an addition to an existing portfolio that includes everything,” she says. In a short Q&A with Chris Anderson, Wilcox predicts that we may see this technology roll out around the world in the next five to ten years.

Make pollution into a cute mascot? Iconic images of skylines buried in clouds of smog ensured China’s notoriety as one of the world’s biggest polluters. Yet Angel Hsu shows us that real change is afoot in the world’s biggest nation. China’s energy initiatives have unexpectedly placed it at the vanguard of the fight against pollution and climate change. As Hsu notes, “China is very much in the driver’s seat determining our global environmental future.” China truly has “a chance at global leadership on issues where nations need to rapidly shift course.” What caused this change? In 2011, when Hsu began conducting her research, China’s own environmental data — specifically for fine particulate matter, or PM2.5 — was kept secret. But thanks to citizen activism, pollution’s hazardous impacts on human health skyrocketed into China’s consciousness, as evidenced by everything from a music festival named “PM2.5” to a golf course that proclaimed “Golf sub-par, but don’t breathe sub-par air.” The emerging zeitgeist grabbed the government’s attention. Recognizing China’s toxic reliance on fossil fuels, they pulled the plug on over 300,000 coal plants, and began feverishly developing alternative energy (their wind and solar production alone could soon rival Germany’s total electricity consumption). Although China must still address its coal problem abroad, its efforts at home (although uncertain) could impact global pollution — and China’s massive carbon footprint — in a major way.

Rodin Lyasoff imagines a new era of flying things — small, safe aircraft that will help us transcend brutal traffic jams on our streets. Will there be traffic jams in airspace? Well, “The sky is underutilized,” he says. Photo: Bret Hartman / TED

The next golden age of aviation. The agony of bumper-to-bumper traffic is something that we have all experienced. Roads are congested, with few ways to expand them, and few solutions available to get around that congestion. Rodin Lyasoff, CEO of A³ in Silicon Valley, wants flight to change this. “The sky is underutilized,” he says, and it will arguably never be congested like the ground is, thanks to air traffic management and safety requirements. Flight is a fast and convenient alternative to ground transportation, one that Lyasoff wants to make accessible and affordable to all. Imagine, instead of hopping in your car, you take a taxi or a ride-sharing service to a take-off spot — known as a vertiport — where an aircraft is waiting to fly you over the traffic to the vertiport on the other side. This future is not as far off as it may seem. At A³ they’ve built and flown a prototype called Vahana, which is fully electric, self-piloted and takes off and lands vertically. For a single passenger traveling 20 miles, it would take 15 minutes and cost only about $40. Where helicopters are loud, expensive and hard to pilot, Vahana is quiet, efficient and intelligent. Companies around the world are building similar aircraft, and Lyasoff predicts that within the next decade vertiports will be a familiar sight around cities and towns. “In the past century, flight connected our planet,” he says, and hopes that with aircraft like Vahana, in the next century flight “will connect our local communities and reconnect us to each other.” In a short post-talk Q&A, Lyasoff emphasizes the importance of making personal flight accessible through affordability. “One of the things we’re really focusing on is the cost. Some of the features are driven by price … which should make it accessible to a larger crowd,” he says.

Penny Chisholm studies Prochlorococcus, an elegant little microbe that might hold the key to sustainable energy. Photo: Bret Hartman / TED

Meet one of the most important microbes on Earth. Prochlorococcus is an ancient ocean-dwelling cyanobacterium that Penny Chisholm, a biological oceanographer at MIT, believes holds clues for sustainable energy in its genetic architecture. Prochlorococcus, which Chisholm discovered in the mid-1980s, is the most abundant photosynthetic cell on the planet, with a gene pool 4 times the size of the human genome, yet it’s only 1/100th the width of a human hair. Chisholm views it as an engineering masterpiece and believes: “If we could mimic its elegantly simple design, if we could truly understand how it works, it might inspire sustainable energy solutions to break our dependency on fossil fuel.”

Factory fishing is strip-mining our oceans, and as Enric Sala shares, it’s not even that good a business. He proposes a smart way to save both fishing industry jobs and our oceans. Photo: Bret Hartman / TED

To save our oceans, protect the high seas. How do we save marine life and improve the economics of industrial fishing? Enric Sala, marine ecologist and National Geographic Explorer-in-Residence, proposes the creation of a giant high seas reserve. For the past ten years, his team at the National Geographic Pristine Seas project has surveyed oceans and worked with governments to protect them. To accelerate sea protection, Sala believes we need to focus on the high seas. Falling outside of any single country’s jurisdiction, the high seas are the “wild west” of the ocean. Until recently, it was difficult to know who was fishing and how much on the high seas, but satellite technology and machine learning now enable the tracking of boats and revenue. As Sala says, he and his team found “the economics are dependent on huge government subsidies and, for some countries, on human rights violations. What this economic analysis reveals is that practically the entire high seas fishing proposition is misguided,” he continues. “What sane government would subsidize an industry anchored in exploitation and fundamentally destructive, and not so profitable anyway?” In response, Sala advocates for creating a high seas reserve that would include two-thirds of the ocean, protecting the ecological, economic and social benefits of our ocean. Sala concludes by asking: “One hundred years from now, if our grandchildren were to jump into a random spot in the ocean, what would they see? A barren landscape or an abundance of life? Our legacy to the future.”

Science journalist Charles C. Mann looks at the proposed solutions for feeding the Earth’s forthcoming 10 billion people — and finds that the answers fall into two camps. Photo: Ryan Lash / TED

Are you a wizard or a prophet? In 2050, there will be almost ten billion people in the world. How are we going to feed everybody, provide water for everybody, get power to everybody — all while avoiding the worst impacts of climate change? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology, properly applied, will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. They pray for a world of smaller, interconnected communities, populated by people who capture and endlessly recycle rainwater — and who need less of it because they don’t have lawns or water-guzzling washers. “If the history of the last two centuries was one of uncontrolled growth, the history of the coming century will be the choice we make as a species between these two paths,” Mann says. But if our species is going to survive in the long term, the first step is obvious, Mann says: wizards and prophets need to join together. “Working together, wizards and prophets have many paths to success,” Mann says. “And success would mean more than survival … it would say that we actually are special.”

TEDThe next renewable resource? The cold of space … Aaswath Raman speaks at TED2018

Warm objects (like the guy in the background of this slide) radiate heat, which takes the form of infrared light. Physicist and engineer Aaswath Raman is exploring how to exploit a quirk of infrared light: that some of it escapes our planet and heads out to deep space. Could we use this principle to make better cooling systems, better solar-energy systems, a better planet? He  speaks at TED2018: The Age of Amazement, on April 12, 2018, in Vancouver, BC. Photo: Bret Hartman / TED

There’s a particular occupational hazard associated with covering TED conferences: having your mind blown. University of Pennsylvania physicist and applied engineer Aaswath Raman does just that with his rethinking of air-conditioning. Now, as anyone who’s ever escaped to the movies on a scorching day can attest, sitting in a room that has been artificially chilled to Arctic temps can be a wonderful thing. But like so many other wonderful things (think cat videos and deep-frying), humanity has gone a bit overboard on it. Today 17 percent of electricity used worldwide is for cooling systems — which also includes refrigeration — and the systems produce 8 percent of global greenhouse gas emissions. “What keeps me up at night is that the energy use for cooling is expected to grow six-fold by the year 2050,” says Raman. (Note to self: save that fact to chew on during next long solo car drive.) And he adds, “The warmer the world gets, the more we are going to need cooling systems. This then has the potential to cause a feedback loop, where cooling stands to become one of the biggest single sources of greenhouse gas emissions this century.”

But before you consign yourself to an eternity of muggy sleepless nights, Raman is exploring a potential solution. Its roots lie in a physical phenomenon that many of us have experienced: walking outside on a cool morning when the temperature is several degrees above freezing — and seeing frost on the ground. To explain why this happens, says Raman: “Think about a pie cooling on the windowsill — for it to cool down, its heat needs to flow to something cooler, namely the air that surrounds it. As implausible as it may sound, some of its heat is actually flowing to the cold of space.”

Whoa — we’ll give you a moment to push your brains back into your head.

Objects emit their heat as infrared light, something you can see with a thermographic, or heat vision, camera. The heat goes up to the atmosphere, which absorbs some but not all of it. At a certain window in the infrared spectrum, the heat escapes to a place that’s much colder than the Earth and our atmosphere: deep space, which can be -454 degrees Fahrenheit.

Night-sky, or radiative, cooling has long been familiar to meteorologists and scientists, but its power has never been fully exploited, in part because it’s counteracted much of the day by the sun. So Raman has designed an artificial multilayer optical material that effectively harnesses this process (fun fact: it’s 40 times thinner than the typical human hair). The material emits heat at the precise wavelengths at which our atmosphere lets heat out best, and it also thoroughly reflects sunlight, so it can be used during the day. It is, as Raman puts it, “a material that’s colder than its surroundings, even though the sun is shining on it.” In field trials, attaching panels made from the material to an air-conditioning unit boosted its operating efficiency by as much as 12 percent. By integrating the panels with energy-efficient cooling systems, Raman speculates we could reduce energy expenditures by up to two-thirds. He hopes to test it in commercial settings over the next few years.

But hold onto your head — that’s not all. “We can use the cold darkness of the universe to improve the efficiency of every energy-related process here on Earth,” declares Raman. For example, he says these principles and materials could be used to cool and improve solar cells, which become less efficient as they heat up during use.

And Raman is thinking of still greater possibilities. Perhaps we could turn the vast temperature differential between the earth and space into a sort of “heat engine,” a nighttime power-generating device that could work when solar power doesn’t (and requires no electricity at all). He ends his talk by throwing down this intellectual challenge: “We’re constantly bathed in infrared light; if we could bend it to our will, we could profoundly re-imagine the flows of heat and energy that surround us. This ability, coupled with the cold resource of space points us to a future where we, as a civilization, might be able to more intelligently harness our thermal energy footprint at the very largest scales.”

[Explodes.]


TEDIn Case You Missed It: An audacious day 2 at TED2018


Three stellar main stage sessions of talks — including the launch of the Audacious Project — plus workshops, exhibits and TED Unplugged, a session of talks given by audience members, made for a jam-packed day 2 at TED2018.

Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

Audacious explorers. Gwynne Shotwell was employee #7 at SpaceX and is now the company’s president. In conversation with TED curator Chris Anderson, she discusses what inspired her to pursue a career in engineering, how she drove SpaceX’s partnership with NASA, the company’s race to be the next company to put people into orbit … and what it’s like to work with Elon Musk — including the concept of “Elon time.” She also detailed SpaceX’s next big project: the BFR, or Big Falcon Rocket — which will be about two-and-a-half times the size of Falcon Heavy, the giant rocket they flew earlier this year (the one that delivered a Tesla roadster to space). BFR is what you need to take humanity to Mars, for sure — but it has a “residual capability,” as Gwynne puts it: rocket travel here on Earth. The plan is to fly BFR like an aircraft, doing point-to-point travel, taking off from New York City or Vancouver and flying halfway across the globe in roughly 40 minutes. Anderson says incredulously: “This is never going to happen!” and Gwynne shoots back: “Oh no, it’s definitely going to happen” — and within a decade. While SpaceX probes the cosmos, Heidi M. Sosik is exploring another equally alien place: the twilight zone in the deep ocean. In the Audacious Project session this evening, Sosik whisks us to this “otherworldly realm,” which sits between 200 and 1,000 meters below the surface. It’s believed to be home to a million new species and may have more biomass of fish than the rest of the ocean combined. But scientists don’t know, because it just hasn’t been explored. She shares a bold vision: an unprecedented exploration of the twilight zone, leveraging Woods Hole Oceanographic Institution’s incredible ability to develop technology for this purpose. The findings will be stunning, says Sosik. “This is not just a journey for scientists,” she says. “It is for all of us.”

TED curator Chris Anderson speaks with historian and author Yuval Noah Harari, who visited TED2018 as a hologram live from Tel Aviv. (Photo: Bret Hartman / TED)

Can we save democracy? Historian and author Yuval Noah Harari joins TED2018 as a hologram from Tel Aviv, telling us about the rise of fascism and nationalism — and the difference between the two. “The greatest danger that now faces liberal democracy is that the revolution in information technology will make dictatorships more efficient than democracies,” Harari says. With the rise of AI, centralized data processing could give dictatorships a critical advantage over relatively decentralized democracies. What can we do to prevent this possibility? If you’re an engineer, find ways to prevent too much data from being concentrated in too few hands, Harari suggests. And non-engineers should think about how to avoid being manipulated by those who control their data. “It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. For a different take on how to save democracy, MIT physics professor César Hidalgo highlights a fundamental weakness with representative democracy: that it’s representative, dependent on the participation of people who can be manipulated, act inefficiently or simply not show up at the polls. His radical solution: what if scientists could create an AI that votes for you? The idea is, once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

What if a million Black women launched a health revolution? T. Morgan Dixon and Vanessa Garrison are on a mission to find out. (Photo: Bret Hartman / TED)

Addressing big problems — one individual at a time. With creativity and determination, three speakers from the Audacious Project session are making huge changes in people’s lives: restoring freedom, preventing blindness and guiding a public health and social movement. At any time, nearly half a million Americans are incarcerated even though they haven’t been convicted of a crime. Their offense: “They cannot afford to pay the price of their freedom, and that price is called bail,” says Bronx public defender Robin Steinberg. Ten years ago, Steinberg launched The Bronx Freedom Fund, which provides bail so that defendants are able to fight their cases from their home, not jail. So far, 96 percent of those they’ve assisted have returned the bail money, and only 2 percent were convicted of crimes. Now, with The Bail Project, her plan is to expand the model that worked in the Bronx to 40 sites across the United States and bail out 160,000 people over the next five years, amounting to what Steinberg calls “one of the largest decarceration programs in the US.” Sightsavers’ Caroline Harper is concentrating on another vital human need: vision. Two million people around the world are blind or visually impaired due to trachoma, a painful and pernicious bacterial eye disease; 200 million more are at risk. There is good news: physicians and public-health officials have long known how to treat it, using a simple surgical procedure, antibiotics, face-washing and better sanitary and living conditions. Her organization’s goal is to eliminate the disease from 12 nations in Africa and completely wipe it out of the Americas and the Pacific. T. Morgan Dixon and Vanessa Garrison first spoke about their organization, GirlTrek, at TED2017, and they’ve returned to report on their progress and their ambitions. More than half of all Black women in the US are estimated to be obese, which puts them at increased risk of heart disease and other health problems. With the help of GirlTrek organizers, Black women form neighborhood groups and come together to walk, learn about health and nutrition, and support each other. GirlTrek has already mobilized 100 US organizers, and they’ve gotten 100,000 women to lace up their sneakers and get moving. Now Dixon and Garrison want to catalyze an additional 10,000 organizers to rally 1 million women, culminating in a 2019 event called “Summer of Selma” that will combine activism, exercise and storytelling.

What machines are learning about us — and what we might not want them to know. Poppy Crum, chief scientist at Dolby Laboratories, poses a provocative question: Do we actually possess control over what other people see, know and understand about us? “Today’s technology is starting to make it really easy to see the signals and tells that give us away,” she says. In her work, she’s found that we can learn a great deal about a person’s internal state by pairing sensors with machine learning. For example, infrared thermal imaging can reveal changes in our stress level, how hard our brain is working, and whether we’re fully engaged in what we’re doing. To make her point, she gives attendees a quick fright by showing a startling clip from the horror film The Woman in Black. Using tubes embedded in the theater, she captures the CO2 in the room — and shows a real-time data visualization of the CO2 levels that pinpoint the moment the audience collectively jumped. What’s even scarier than people and machines knowing how you’re feeling without you wanting them to? Watching an interview of someone you trust only to discover later that everything — their expression, their words, the wrinkles on their face — was fake, a product of sophisticated algorithms. Researching this kind of work is the specialty of computer scientist Supasorn Suwajanakorn, who developed software that can create ultra-convincing digital fakes while at the University of Washington. The process uses an algorithm that crunches huge sets of visual data (both photos and video) in order to generate a 3D moving model; then small, believable details like subtle mannerisms and expressions are blended in to reconstruct a talking head in unsettlingly accurate detail. And technologist Dina Katabi shares her work developing a flat, iPad-sized device that can use wireless signals to detect all sorts of things about a person, such as our movements, breathing, heartbeat, and even our sleep — all without wearables. This type of device could transform healthcare, she suggests, but it doesn’t take a big leap of imagination to see how it could be used for more insidious applications. Technology already exists that could be integrated to prevent people from being monitored without their consent, explains Katabi, and policy will also play an important role. “Hopefully, with the two of them, we can control any issues,” she says.

What does progress look like? According to Oxford University economist Kate Raworth: a doughnut. Her idea: we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits. Ultimately, our economies need to become “regenerative and distributive by design,” says Raworth. Meanwhile: “I want to avoid this silly carbon chauvinism idea that you can only be smart if you’re made of meat,” says MIT physicist Max Tegmark. Humanity has two options as we move closer to a world where artificially intelligent machines can do everything better and cheaper than we can. Either we can be complacent and not worry about the consequences as we build our technology, or we can be ambitious and envision a truly inspiring future, then figure out how to steer towards it. The smarter choice is clear, but to do that, we need to overhaul our attitudes. We currently function with a reactive “learning from mistakes” mindset; instead, we must be proactive as we develop powerful technology like nuclear weapons — because our first mistake may be our last. “I think that the essence of the age of amazement should lie in becoming not overpowered, but empowered by our technology,” Tegmark says.

“The true beauty of making useless things,” according to Simone Giertz, is “acknowledging you don’t always know what the best answer is, and it turns off that voice in your head that tells you that you know exactly how the world works. And maybe a toothbrush helmet isn’t the answer, but at least you’re asking the question.” (Photo: Bret Hartman / TED)

Gadgets that’ll make your jaw drop. Queen of Shitty Robots Simone Giertz anchored Session 4 of TED2018 with a fun, heartfelt talk that featured her fleet of useless (but awesome) creations. Her YouTube channel dedicated to her wonderfully wacky creations features robots that chop vegetables, wake her up, cut hair and even apply lipstick, but these inventions rarely — if ever — succeed. What drives her to build such hapless robots? “It turns off that voice in your head that tells you that you know exactly how the world works. And maybe a toothbrush helmet isn’t the answer, but at least you’re asking the question.” For a different take on our robotic future, Stanford University biomedical engineer Giada Gerboni shows us how we should look to nature to build better machines. Consider the octopus: it has no bones or stiff structures, allowing it to adapt to a huge variety of situations, and its tentacles to move in countless ways. Gerboni is now working on “robotic needle steering” — a flexible needle that could be used in minimally invasive procedures. For some gadgets a normal person might actually be able to get their hands on soon, engineer Rajiv Laroia (who invented much of the technology behind the LTE 4G wireless standard) presents the L16 — a camera made with 16 individual lenses that simultaneously capture a scene. The real magic, says Laroia, is its sophisticated computational and machine learning algorithms inside, which fuse all of the images together into a single 52-megapixel photo. And if you’re sick of passwords (and identity theft), technology entrepreneur Melanie Shapiro introduces Token, a sleek wearable ring that holds all your identity data in one place. She shares a compelling demo video of users tapping their rings against laptops to log in, against cars to open the door and start the engine, and against subway turnstiles and grocery store scanners to pay.

A TED conference is way more than just talks. With 22 options for workshops, TEDsters stayed busy during breaks between sessions. Some highlights: Ken Lacovara showed off some Tyrannosaurus Rex blood; Sarah Kay taught attendees about ideation, creation, and presentation through poetry; Tara Lemmey taught attendees how to be emergency responders and save someone trapped under a fallen building/beam; Jean Oelwang, Cindy Mercer and Ellie Kanner presented on the art of meaningful partnerships, examining more than 50 historical partnerships to create a blueprint for successful and impactful partnerships — from business partners to romantic partners to family members; and David Kwong hosted another one of his puzzle hunts, with clues hidden all over the sprawling conference venue.

Marjory Stoneman Douglas teacher (and TED speaker) Diane Wolk-Rogers and host Manoush Zomorodi speak with activist Emma González via Facebook Live at TED2018. (Photo: Jason Redmond / TED)

News from the loop. For the first time in North America, BMW i showcased an autonomous experience, offering TEDsters a chance to ride in a driverless car and imagine what a future without drivers might look like. Elsewhere, GenderFair quizzed attendees to find out how gender-equal their companies are; Intel showed off a quantum computer chip and an immersive experience featuring the movie Ready Player One; and The Void made its third TED appearance with an incredible Star Wars-themed, fully immersive VR experience. Marriott Hotels hosted a Facebook Live session with activist Emma González and Marjory Stoneman Douglas teacher (and TED speaker) Diane Wolk-Rogers. In conversation with podcaster Manoush Zomorodi, González and Wolk-Rogers discussed how students and teachers can learn from each other in the wake of the Parkland shooting.

TEDSpace to dream: Notes from Session 5 at TED2018

With two pieces created for the annual Burning Man festival as a backdrop, curator Nora Atkinson explains the unique artistic experience — and it’s not just the heat and grit — that can be found in the desert there, at TED2018: The Age of Amazement, April 12, 2018, Vancouver. Photo: Bret Hartman / TED

Session 5, hosted by TED Curator Chris Anderson and astrophysicist and TED Fellow Jedidah Isler, is called “space to dream.” As Isler points out, we’re not just talking about outer space — it’s also about “the right to take up space, to dream, to do.” Seven dreamers, doers and designers offer a variety of ways to look at this theme.

Learning from the art of Burning Man. To the uninitiated and skeptical, Burning Man may seem like an indulgent and dusty bacchanalian bash in the Nevada desert meant for those who can afford the time and expense to go. But to Nora Atkinson, craft curator at the Smithsonian American Art Museum, it’s a joyous, priceless exercise in ephemeral art for all participants, regardless of ability or experience. More than 70,000 people trek there each August to engage with 300+ installations, sculptures and structures. They must be constructed on the spot, and they’re destroyed or carted away and stored at week’s end. Atkinson was a first-time Burner in 2017 and discovered there what’s so often missing from museums, where visitors dutifully pass from label to label: curiosity and engagement. At Burning Man, there are no placards explaining the pieces, no guards telling you not to touch, and nothing is sold there. Instead, “art becomes a place for extended interaction,” says Atkinson. It’s “authentic and optimistic in a way we rarely see anywhere else.” Freed from worrying about critics and collectors, artists make work for themselves and their fellow Burners — art that can be as big and dangerous as Rebekah Waites’ “Church Trap,” a rustic chapel set precariously on a wooden beam that was built and burned in 2015. “The art is not always refined. It isn’t always viable. It’s not always even good,” but it is art that encourages, even demands, interaction and investigation. “What is art for,” Atkinson asks, “if not this?”

Why we need to design cities of difference. Shanghai, Paris, Tokyo, Venice, Morocco: these places are iconic destinations, thanks to their distinctive architecture and charms. But sadly, most cities are not like them. There’s a creeping sameness in many urban buildings and streetscapes, no matter where in the world they are, says architect and Columbia University professor Vishaan Chakrabarti. The physical homogeneity — stemming from regulations, automobiles, accessibility and safety issues, and cost considerations, among other factors — has resulted in a social and mental one as well. Let’s strive to create cities of difference, magnetic places that embody an area’s cultural and local proclivities, exhorts Chakrabarti. Doing so will be more than a design challenge, of course. It will involve creating new products — a hovercraft wheelchair? cobblestones that generate energy when stepped on? — and processes to ensure that they’re sustainable and accessible to all. “When we invented the automobile, what happened was the world all bent toward the invention and we recreated our landscape around it,” he says. “In the 21st century, technology can be part of the solution if it bends toward the needs of the world.”

Bridges are useful — let’s make them beautiful, too. “The world needs bridges,” proclaims Ian Firth, an engineer who has designed spans all over the world, including the 3.3-kilometer-long suspension bridge over the Messina Strait in Italy. While these structures are essential for human growth and development, they symbolize so much more than that, too. “They shout about connectivity, community; they reveal something about creativity, ingenuity; they even hint at our identity,” he explains. Although bridges fall into only a few types, depending on the nature of their structural support, the potential for innovation and variety is huge. Yet change happens slowly in bridge design, according to Firth, primarily because of the high levels of risk should a span fail. He points to the Tacoma Narrows Bridge collapse in 1940, which put designers off constructing suspension bridges for decades. Creativity can be found in new spans being erected, and Firth takes the audience on a mesmerizing tour of the cutting edge in bridge technology, including a bridge that floats across a fjord in Norway. “I passionately believe bridges need to be elegant, they need to be beautiful,” Firth says, “Beauty enriches life, doesn’t it? It enhances our wellbeing. Ugliness and mediocrity does the exact opposite.”

Telling stories through video games. Avid readers tend to regard video games with disdain (guilty!), but we should instead think of video games as fiction in 3D. So believes game designer David Cage, who thinks of himself as a storyteller working with a different set of tools. “I’ve always been fascinated with the idea of recreating the notion of choice in fiction,” he says. “My dream is to put the audience in the shoes of the protagonist, let them make their own decisions, and by doing so, let them tell their own stories.” A novel is typically a linear experience devoid of choices — the reader follows the path set by the writer. Video games are interactive stories that give the reader autonomy and a plot that changes from moment to moment. Cage then presents a live video game demo, letting audience members control a character: an android trying to save a little girl taken hostage. (Thankfully for the audience’s blood pressure, the girl is freed.) “Interactive storytelling is a revolution in the way we tell stories. With the emergence of new platforms like interactive television, virtual reality and video games, it will become a new form of entertainment, and maybe even a new form of art.” Cage concludes. “This is a medium waiting for its Orson Welles or Stanley Kubrick.” Or, its Toni Morrison or Kazuo Ishiguro. In a short Q & A, TED curator Chris Anderson asks Cage about the hidden risks associated with video game violence. The game designer says that when violence is used appropriately in the context of a narrative or as storytelling device, it can enhance the story. That said, he believes we — designers and the game-playing public — need to be careful and take responsibility.

An intriguing visitor from another solar system. In October 2017, University of Hawaii astrobiologist Karen Meech got the call that she says every astronomer waits for: she learned an interstellar comet had been detected by the PanSTARRS telescope in Haleakalā, Maui. Now, thousands of asteroids cruise through our solar system every year, but the celestial body that was eventually named `Oumuamua — the Hawaiian name for a scout or messenger from the distant past reaching out to us — was, as Meech puts it, “a package from the nearest star system 4.4 light years away,” having traveled on a journey of more than 50,000 years. What exactly is the oval, half-mile-long `Oumuamua? Meech believes it could be a chunk of rocky debris from a new star system; other researchers believe it may be something else altogether — evidence of extraterrestrial civilizations, or material cast off in the death throes of a star. She and her colleagues were able to study ‘Oumuamua for only a few days before it traveled out of range. “This unexpected gift has generated more questions than answers,” says Meech, “but we were the first to say hello to this visitor from our distant past.”

Alone in the cosmos. The universe is 13.8 billion years old and contains billions of galaxies — in fact, there are probably a trillion planets in our galaxy alone. People have long thought a civilization like ours must exist or should have existed somewhere out there, but British astronomer Stephen Webb brings up another possibility: we’re alone. Thinkers have speculated about all the barriers that a planet would need to surmount to house an alien civilization: it would need to be habitable; life would have to develop there; such life forms would need a certain technological intelligence to reach out; and they’d have to be able to communicate across space. Rather than viewing the situation with sorrow and the cosmos as a lonely place, “the silence of the universe is shouting: we’re the creatures who got lucky,” says Webb. “If we learn to appreciate how special our planet is; how important it is to look after our home and find others; how incredibly fortunate we all are simply to be aware of the universe — well, humanity might survive for a while. And all those amazing things we dream aliens might have done in the past – that could be our future.”

Our world is filled with problems, so which ones should we tackle? For the last ten years, moral philosopher Will MacAskill and others have developed a philosophy and research project called “effective altruism,” seeking to use evidence and reasoning to try to answer the question previously posed. They’ve developed a simple framework: the problems we tackle should be big, solvable and neglected. Using those criteria, MacAskill thinks our time and efforts might best be spent now in focusing on addressing the situations or events that could derail civilization or lead to the extinction of the human race, such as nuclear war or a global pandemic. We can all work on this problem, says MacAskill. “You can contribute with money, political engagement, or through your career.” With your money, you can support organizations that directly work on these risks, like the Nuclear Threat Initiative. You can vote in favor of candidates who take these risks seriously and who support greater international cooperation. And you can work at an organization that is working on these issues. “If we think carefully, and focus on problems that are big, solvable and neglected, we can make a truly tremendous difference to the world for thousands of years to come,” concludes MacAskill.

TEDThe Audacious Project: Notes from Session 4 of TED2018, with 7 bold ideas for global change — and $406 million to support them

Chris Anderson and Anna Verghese introduce the Audacious Project — a consortium of funders who are getting together with TED to make change at scale. The project was announced at TED2018 on April 11, 2018. Photo: Bret Hartman / TED

The lights go down. A voice emerges — strong and hopeful. It’s Laurene Powell Jobs, the founder of philanthropic powerhouse Emerson Collective. “Among the many things I love about TED is the simplicity and clarity of its mission — to spread ideas,” she says. “But in order to create lasting change at scale, we need to turn the boldest of these ideas into action.”

Powell Jobs is here to introduce TED’s newest initiative: The Audacious Project, a model to do that.

Chris Anderson, Head of TED, and Anna Verghese, Executive Director of The Audacious Project, step forward to explain more. Change, says Verghese, is driven by individuals who are passionate about a cause and have a vision for how things could be different. But making their vision a reality takes resources. “The nonprofit world lacks many of the tools open to business entrepreneurs,” says Anderson. There aren’t venture capitalists or IPOs on the stock market. “They must try to raise money one donor at a time.”

The Audacious Project aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy. Every April, the project will open applications, asking individuals and organizations to present their most audacious dreams. Project partners will do due diligence and narrow the pool to a group of semifinalists with thrilling ideas that are also actionable. These ideas will be shaped with public feedback, and presented to groups of donors. Then at the annual TED Conference, the five most promising ideas will be presented, with an open invitation for the audience and world to get involved.

How $1 million becomes $50 million. The first speaker of the session is a familiar one — physician Raj Panjabi, winner of the 2017 TED Prize. He was a part of a beta test of The Audacious Project. “Last year, I shared with you a wish: the Community Health Academy, a global platform to connect, train and empower community health workers,” he says. But he has a confession: “I was pretty scared.” The problem — a billion people around the world lacking access to health care because they live too far from a clinic — seemed too big for him and his organization, Last Mile Health, to tackle alone. But through The Audacious Project, Panjabi got a new partner — Living Goods, a like-minded organization that supports community health workers in East Africa. Together, they created a plan to put mobile technology in the hands of community health workers and designed an app that will allow health workers to better diagnose patients and learn new skills, as well as earn a living. The Audacious Project partners have committed $50 million toward this $100 million project, meaning that in the next four years, they’ll be able to digitally empower 50,000 community health workers and bring quality health care to 34 million people.

A special guest appears on video to tee up the next speaker: Shonda Rhimes. There’s a lot of conversation about the criminal justice system, she says. “The scale of our brokenness is so vast. How do we fix this? Where do we start? What might actually make a difference? This next speaker has an intriguing idea.” She introduces public defender Robin Steinberg.

Robin Steinberg is a public defender who saw firsthand the cost of the US’ twisted bail system — which keeps half a million people in jail without being convicted of a crime, simply because they can’t come up with cash bail. At TED2018, she pitched an audacious idea: What if we start a fund to just pay that bail and let the justice system work as intended? Photo: Bret Hartman / TED

The peculiar injustice of bail. Robin Steinberg still remembers the first time she visited a client in jail. “The heavy metal door slammed behind me,” she says. “That was the moment that I understood viscerally — for a fleeting moment — what incarceration might feel like.” This moment propelled her through 35 years of work as a public defender, where she became familiar with a pernicious injustice of the American legal system: that every night, more than 450,000 people spend the night in jail without being convicted of a crime, simply because they can’t afford to pay bail. The sums in question are often about $500 — easy for some to pay, impossible for others — and it means that 75% of people in local jails are there for this reason. “Bail was never intended to create a two-tier system of justice — one for the rich and one for everybody else. But that is precisely what it has done,” says Steinberg. Being in jail for even a few days has repercussions — it can mean losing your job, jeopardizing your immigration status, losing custody of children. Many people plead guilty just to go home. It’s for all these reasons that Steinberg and her husband founded The Bronx Freedom Fund, which paid people’s bail. What happened when they started paying bail? They found that 50% of their clients had their cases dismissed — while the other half received non-criminal charges like parking tickets. Now, with The Audacious Project, Steinberg plans to take the idea national. With The Bail Project, she says, “We are going to bail out as many people as we can, as quickly as we can.” In the next five years, they’ll open in 40 high-need jurisdictions with an end result of bailing 160,000 people out of jail, so they can have their day in court from a place of freedom.

Another video starts. This time, it’s James Cameron, director of Titanic and Avatar. In 2014, he released the film James Cameron’s Deepsea Challenge 3D, about his quest to explore the Mariana Trench. The submersible built for that mission is now housed at Woods Hole Oceanographic Institution. “WHOI scientists do amazing things every day in probing the regions of the ocean that are still relatively untouched and unexplored,” he says, introducing ocean scientist Heidi M. Sosik.

Hidden wonders in the ocean’s twilight zone. Heidi M. Sosik whisks us to an “otherworldly realm” known as the twilight zone, the midwater region of the ocean, between 200 and 1000 meters below the surface. Sosik describes this place in the most beautiful way. “Tiny particles swirl down through the darkness,” she says, “while flashes of bioluminescence give us a clue that these waters teem with life.” The twilight zone is believed to be home to a million new species. It may have more biomass of fish than the rest of the ocean combined. But scientists don’t know, because it just hasn’t been explored. And as Sosik warns, if the fishing industry gets there first, with massive factory-fishing ships that strip-mine the oceans, it could have long-term effects for centuries to come — on the marine food web, and on the global climate system. “We need to get out ahead of fishing impacts and work to understand this critical part of the ocean,” says Sosik. Her vision: an unprecedented exploration of the twilight zone, leveraging WHOI’s incredible ability to develop technology for this purpose. The findings will be stunning, says Sosik. “This is not just a journey for scientists,” she says. “It is for all of us.” Recent studies have uncovered some unexpected facts — like the facts that sharks dive into the twilight zone to feed, and that tiny gelatinous creatures in the zone, called salps, might absorb carbon. “There’s an almost unlimited opportunity for new discovery,” she says.

Up next, Richard Branson appears onscreen. Branson is passionate about many causes, and he hears a lot of ideas from social entrepreneurs. “It’s incredibly rare to hear about a project where you can not just chip away at a problem but make history,” he says. “I’m honored to be a part of this mission and hope many of you will get involved too.” He introduces Caroline Harper.

The end of an ancient disease? Thousands of years ago, the ancient Nubians drew pictures on the wall of a terrible disease — one that turns the eyelids inside out, so the eyelashes scratch the cornea over and over, eventually causing blindness. This disease, trachoma, is still a scourge in many parts of the world today. “Trachoma is a strange disease,” says Caroline Harper, of the nonprofit Sightsavers. It’s a bacterial infection spread by flies and human contact, found in areas with poor sanitation and access to water. About 200 million people are at risk. “The crazy thing is, we know how to stop it,” she says. Her audacious idea: to eliminate this disease, one of the leading causes of preventable blindness. Sightsavers has already led a global mapping project in 29 countries that shows them exactly where trachoma is still a problem. With investment from The Audacious Project, they’ll focus on 12 key countries in Africa, as well as the Americas and the Pacific — where funding gaps are standing in the way of eliminating the disease. They’ll also scale up efforts in countries where need is most severe — like Nigeria and Ethiopia, which has almost half the world’s trachoma burden. In all of these countries, they’ll implement “SAFE,” the World Health Organization’s preferred method for stopping trachoma: surgery and antibiotics (when necessary), and face washing and environmental improvement (for all). “We are on the home straight of eliminating this disease from the whole world,” says Harper. She believes strongly we can consign this disease to the history books, where it belongs.

Actor Don Cheadle is the familiar face on video to introduce the next Audacious idea. “As we all know, climate change is one of if not the biggest threat to humanity at this time,” he says. “There are very few organizations with enough people power and reach to envision and execute on the scale that is required.” One of those is the Environmental Defense Fund. And he introduces its leader, Fred Krupp.

Our best chance to halt climate change. Fred Krupp is here to talk about global warming. He knows that, what with the floods and fires and earthquakes, people aren’t feeling a lot of hope. “When I leave the stage today, I don’t want you to have hope,” he says. “I want you to have certainty.” His big idea begins with methane. Methane causes 25 percent of the global warming we’re experiencing today — pound for pound, it’s 84 times more potent than carbon dioxide as measured over a 20-year period. The oil and gas industry is one of the biggest sources of methane emissions, yet when it leaks from wells, pipes and equipment, companies are losing a valuable product — natural gas. EDF launched a nationwide effort in the US to track methane emissions. “We learned that when you get information like that in people’s hands, they act,” says Krupp. Companies fixed faulty equipment; Colorado and California passed new laws. Krupp’s vision is to take this global by launching a satellite: MethaneSAT, which will track methane emissions with incredible precision on the global level. By taking data to corporations and governments, EDF will inspire quick action. Their goal: to cut methane pollution by 45 percent by 2025, which would have the same effect as shutting down one-third of the world’s coal-fired power plants. “This is our chance to see change in our lifetimes,” he says.

Darkness once more. And when the video screen comes on this time, it’s Oprah. “When we look at history, we know that some of the most potent change-makers are — let’s be real, people — Black women,” she says. “I’d like to introduce you to the seed planters, the co-founders of GirlTrek, Morgan Dixon and Vanessa Garrison.”

The most powerful woman you’ve never heard of. T. Morgan Dixon and Vanessa Garrison want to introduce you to someone who changed our world. Her name is Septima Clark, and Martin Luther King Jr. called her the “architect of the civil rights movement.” With her Citizenship Schools, she taught people to read and to organize — so they could vote and become activists. She taught Diane Nash, co-founder of the Student Nonviolent Coordinating Committee. She taught Fannie Lou Hamer, who registered 16,000 people to vote in Mississippi. “Her most famous student: Rosa Parks,” says Dixon. “When she sat down, she inspired a nation to stand.” Dixon and Garrison are the founders of GirlTrek, an organization that inspires Black women to walk regularly — to fight the health crisis that faces them, but also to lead cultural change forward. Their audacious idea: to create critical mass by establishing the Citizenship School for our own era. They call it Summer of Selma, and it will be an annual event that begins in 2019. “Imagine a revival-like tent festival, not unlike the teach-ins of the civil rights movement,” says Dixon. “This is gonna be the Woodstock of Black Girl healing.” At this event and ones like it, GirlTrek plans to train 10,000 women to lead a movement and organize walks back at home. Together, they will inspire one million to start moving — to heal themselves and their communities. Who among these 10,000 will be like Septima Clark?

With the first five ideas of the project revealed, Anderson and Verghese return to the stage. They reveal that The Audacious Project’s coalition of foundations and individual donors has committed more than $250 million for these ideas. But funding on that scale has a way of creating momentum. Thanks to this catalytic effect, other parties have come in to support these ideas. The total committed so far is a hefty $406 million.

The combined goal for these projects is $634 million — so there’s still quite a ways to go. No idea is fully funded, but all have what they need to move forward and launch. That’s pretty incredible.

A trip to New Orleans, by way of Vancouver. Dancer and choreographer Camille A. Brown is a TED Fellow who’s just come back from choreographing NBC’s Jesus Christ Superstar Live. She’s brought a piano player and a troupe of dancers for a powerful dance that riffs on the New Orleans tradition of the second line parade. As the live piano plays against and with the song “New Second Line” by Los Hombres Calientes, the seven dancers, dressed in black, umbrellas twirling overhead, move and interact in stunning shapes on the red carpet. It’s a blast of excitement and movement fitting for the potential of these projects. And a reminder, too, of art’s power to inspire change, to invigorate and challenge and teach.

Before the session ends, Verghese wants to address an elephant in the room: Are these five organizations able to absorb this kind of investment? Aren’t there dangers to scaling so fast? “This is the biggest question we’ve been asking in our due diligence and it’s a critical one,” says Verghese. To answer, she invites out one final speaker, Andrew Youn.

The power of major philanthropy. Andrew Youn runs the One Acre Fund, a nonprofit that helps small farmers in Africa boost their productivity. Their work has real impact — by bundling seeds, offering training, and encouraging the planting of long-term crops, like trees, they help farmers increase income by 50 percent. One Acre Fund’s model is based on a shocking fact: 70 percent of the world’s poorest and hungriest people are small farmers. Boost their ability to feed themselves and their countries — and we put a big dent in world hunger. One Acre Fund participated in a beta test of The Audacious Project, and with the money committed to their cause, they have tripled in size in three years. “As an organization gets bigger, it gets harder to grow. Audacious changed everything,” says Youn. “By 2020, we … will directly serve over 1,250,000 families per year, with more than 5 million children in those families. This is the power of major philanthropy. This really works.” Since he was a kid, Youn says, he’s been confused by how little our society spends on social change. Among the wealthiest nations of the world, he says, less than 1 percent of income goes to nonprofits. What would happen if we increased that to just 2 percent? “We are what we spend,” he says. “So we can choose who we want to be.” He shows a photo of a mother — one of the many female farmers One Acre Fund serves —  and her child. “Join me in a world where this family’s future is more important than the latest consumer technology,” he says. “Let us choose a world where we fund this family as much as any company.”

Inspired by one of the ideas from this session? Head to AudaciousProject.org and join support communities for these bold visions, and find out how you can be a part of making each a reality.

TEDShort talks, big energy: Notes from TED Unplugged at TED2018

Host Clint Smith welcomes us all to the Unplugged session at TED2018, April 11, 2018. Photo: Ryan Lash / TED

“This is a little different than the mainstage at TED, in a sense that this is a little more relaxed,” says our host, the poet and TED speaker Clint Smith. “These are speakers who have not been selected specifically for the mainstage, but they’re just as talented, just as brilliant, and just as important.”

A spectrum of ideas, stories, perspectives and insights hit the TED Unplugged stage on Wednesday afternoon. Thirteen speakers put their best talks forward in a playful, abbreviated version of TED Talks, in six minutes with timed slides, delivered to a packed audience full of energy and anticipation (and fueled by doughnuts).

Hacking romance. Let’s face it, online dating can suck. So many potential people, so much time wasted … is it even really worth it? Christina Wallace thinks so, if you do it right. Treat it sort of like a résumé review, she advises, and use sites or apps that emphasize the criteria you’re looking for. For example, she used OKCupid because she wanted to avoid the gamification of swipe-based apps (and also, she wanted a writing sample). Most important, try out her zero-date approach: go out with a person for one hour, have one drink, while asking yourself one question — would I like to have dinner with this person? Simple, super efficient (you could potentially fit three in one evening), and if the zero-date’s awesome, then you schedule an actual date. It worked for her, and if you’re looking for someone, it can serve you too — just don’t treat it like a game. “The point of this isn’t swiping,” she says. “It’s finding your person. Good luck.”

Steve Wilson is a champion in Combined Driving, a three-day equestrian sport that requires some new kinds of problem-solving. Photo: Ryan Lash / TED

Dances with horses. Steve Wilson horses around quite a bit — so much so that at 70 years old, he’s become the American National Champion in an equestrian field at the Olympic level for three years running. He competes in Combined Driving, a three-day sport that tests athletes on their ability to drive a carriage pulled by one, two or four horses. Each day comes with a new set of challenges: first dressage, then a marathon race, and finally a test of swift accuracy. Wilson, whose eyesight has deteriorated to the point of legal blindness, faces the extra challenge of needing to learn the course in order to compete. He solves the problem by walking the paths for 11 miles each day until he has them committed to memory. “With steadfast determination,” he says, “the impossible is possible.”

Rodrigo Martinez explores epigenetics — the new (old) science of figuring out exactly what we inherit from our parents and ancestors. Photo: Ryan Lash / TED

Past lives of the deep. Rodrigo Martinez often dives deep into the underworld, a quiet place where the Mayans believed gods and goddesses lived, and life and death blurred. He does this all in a single breath, held for just over five minutes as he descends about 100 feet to the bottom of a cenote (a natural pit that gives way to groundwater). To him, it feels as if he’s entered a different dimension, that he’s connected with something old and natural, humbling and beautiful, while he hovers in calm blue waters. But what if this isn’t the first time this feeling has been experienced? As Martinez shares, different experiments in epigenetics and neurobiology show that this may be the case — that like stressors, positive experiences and feelings could be inherited from our ancestors through specific molecular switches, like an echo of the past. We may be scratching the surface of something fascinating in science that could connect us all in new ways.

Rediagnosing failure. When almost everything in Lisa Genova’s life seemed to be falling apart, a few other things began shifting into place. She was a neuroscientist who wanted to write a novel (eventually), but there seemed no other time than now to start. And yet, she felt stuck and uncertain of the future. So she asked herself three questions: If I could do anything I wanted, what would I do? If I didn’t have to care about what anyone thought of me, what would I do? And if I didn’t have to worry about money, what would I do? Each question eventually led to the same answer — write the novel. Inspired to craft a story from a humanizing perspective of people living with neurological diseases and disorders, like her grandmother, she began writing a story about a woman with Alzheimer’s. After many months of selling her self-published book out of the trunk of her car, Genova sold her book to Simon & Schuster, and Still Alice continues to make waves and invite global conversation to this day. She asks: “What if you could let go of all limitations and allow yourself to do anything you want to do — what would you do?” (You can watch Lisa’s TED Talk here.)

A new lens on cities. Have you ever taken a closer look at the east sides of cities? (Think about places like East Orange, New Jersey; East Oakland, California; East London, United Kingdom; and even East Jerusalem.) More often than not, these sections house more marginalized communities. They’re not the “nice” side of town. But why? Stephen DeBerry asked the same question and surfaced fascinating answers (the wind) alongside unsurprising culprits (humans). The Earth rotates counterclockwise, creating winds that generally blow toward the east, which was more or less a fun fact until the industrial era, when locomotives would cut through towns and cities spreading soot and smog — which would be carried on those winds into the eastern neighborhoods. Guess who got redlined into those neighborhoods: disenfranchised people. “The wrong side of the tracks” is a disparaging phrase as much as it is a slogan for disparity by design, the kind of sinister decision-making that carves up a city based on bias and privilege, with devastating effects. Elegant solutions to bad design can drive policy change and philanthropy, but we must be cognizant of what we’re designing for. “We got ourselves into this east side dilemma through bad design, so we can get out of it with good design,” DeBerry says. “I believe the first principle of good design is actually really simple: We have to start with the commitment to design for the benefit of everyone.”

How does futurist Sheryl Connelly think about her job? As a “polite contrarian,” someone who asks others to re-examine their assumptions. Photo: Ryan Lash / TED

Confessions of a futurist. Sheryl Connelly has a confession: As a professional futurist, she can’t actually predict the future. Perhaps for accuracy, she pondered on stage, she should rebrand her title as a “polite contrarian” — because what she really does is challenge deeply rooted assumptions and constantly push against the status quo. No one can predict the future, she says — no, not even tarot card readers — but we do it every day by making decisions based on how we think the future will unfold. “I’m a firm believer that the only way to truly predict the future is to create it, but that will never happen unless you become mindful of things you can’t control or influence, become aware of the way the world is changing around you, and be more aware of the consequences these changes can bring,” she says. “If you learn to live in the realm of uncertainty, you will awaken to a spectrum of possibilities that are new and extraordinary.”

A search engine for the Earth. Twenty-one rocket launches later, with more than 200 satellites in orbit and 31 ground stations across the world, TED speaker Will Marshall can definitely say mission accomplished. Marshall, CEO and co-founder of Planet, runs a company with the unique distinction of keeping a constant bird’s-eye view on Earth’s changing landscapes. But he’s moved on to his next big mission: using artificial intelligence to index everything within each image (trees, planes, ships, etc.) and making it searchable — or, as he calls it, “Queryable Earth.” He shared a vision of this database becoming a living almanac of physical change on Earth, documenting the immense change of our planet for the Information Age. He says: “You can’t fix what you can’t see, and we wanted to give people the tools to take action.” As a bonus, he launches from stage a brand-new product aimed at everyday people, letting us play with satellite images. Create an account on Planet Stories to get a new view on our planet.

The upside of calamity. TED speaker Catherine Mohr is not entirely human and has a perilous tale as to why. So, she was scuba diving through a school of sharks, when a clutch of sea urchin spines pierced her hand right through her glove. Mohr lived to tell her tale after boiling said hand to cleanse it of toxins, only to later learn that a piece of sea urchin spine broke off inside her finger joint and needed to be removed. A week before that surgery, she broke her pelvis falling off a horse. During the healing process from that surgery, as she sat immobile on her couch for weeks, one particular friend kept showing up, dependably, day after day to keep her fed and entertained. Reader, she married him. And meanwhile, she discovered that the sea urchin spine had been metabolized by her body, as it scavenged for calcium to repair her shattered pelvis. She jokes that not being fully human is one of the reasons her family loves her — but she wouldn’t be with them if it wasn’t for a dash of disaster and distress. (And shout out to Catherine’s teenage daughter, who designed her mother’s wonderful and charming slides!)

Artist Dread Scott tells the story that leads up to this archival photo of himself, sitting on a rooftop across from the Chicago Art Institute as a crowd below protests his challenging work, a protest that went all the way up to the Supreme Court. Photo: Ryan Lash / TED

The art of protest. Visual artist Dread Scott makes revolutionary art to propel history forward. His work subverts the socioeconomic and governmental foundations of the United States and encourages audiences to address big questions from that perspective. Scott shared the engrossing story behind one of his most transgressive art installations, “What Is the Proper Way to Hang an American Flag?” In 1989, the piece drew national attention for its controversial use of the American flag, which gave people the option to stand on the flag. Scott received numerous death threats — many evoking images of lynching — for his work, and gained notoriety among politicians of the time (e.g. President George Bush Sr. and Senator Bob Dole). The installation was outlawed by Congress, which then involved Scott in a landmark First Amendment case that prevented the government from making patriotism mandatory.

Hunches from a poker pro. “Like poker, life is also a game of skill and luck,” says professional poker player Liv Boeree. She has learned a lot from poker and on the TED Unplugged stage she shares what poker has taught her about decision-making in everyday life: Check your ego, train yourself to think in numbers and probability, and don’t rely too heavily on your intuition. “Why should we assume our intuitions are better calibrated than slow, proper analysis? They don’t have any data to be based off!”

Never stop writing your “how I spent my summer” essays, suggests Juan Enriquez, whose own life demonstrates the benefits of being always on the lookout for the most interesting possible thing to do. Photo: Ryan Lash / TED

Cruising for science. One random New Year’s Eve, prolific TED speaker and futurist Juan Enriquez sat down next to an obscure scientist and chatted for three hours. (Just a note, this “obscure scientist” was Craig Venter, who ended up sequencing the human genome.) Three weeks after meeting, they sailed across the Atlantic together on a trip that would change the course of their lives and eventually the direction of science itself. Years later, the two decided to take another sailing trip — this time, across the Pacific (then the rest of the world) to study the genome across the planet. That expedition and the many that followed led to a series of new ideas, such as understanding the microbiome and developing programmable cells to create synthetic life. However, Enriquez’s takeaway is much simpler than the science he studies: he urges people to reflect on their accomplishments and think about how they reinvent themselves going forward. In other words, never stop writing that “how I spent my summer” essay … and live to write better and better ones.

The language of data. It may go without saying, but the founder of infographics.com, Tommy McCall, loves infographics. On the TED Unplugged stage, he waxed poetic about the many ways quantitative information can be communicated and transformed into transfixing chart forms. In a combination of love letter and history lesson, McCall shared fascinating data sculpted into beautiful shapes, from historic charts like Florence Nightingale’s coxcomb chart and Charles Minard’s epic combination of chart and map displaying Napoleon’s march into Russia, to way-new graphs he’s made for business and science magazines (and some just for fun). Tracing humanity’s vehicles for language from orality and literacy, to numeracy and now graphicacy, McCall makes the point that thousands of data points can also be works of art.

After a freak accident put her in the headlines, Dr. Kate Stone found a way to reclaim her story — and help prevent other people from losing their privacy. Photo: Ryan Lash / TED

What I learned when I lost my privacy. TED speaker Kate Stone closed out the session with a tough personal tale infused with humor and integrity. As she left a pub one evening, Kate felt a hard thump, was knocked to the ground, and woke up unable to breathe. Shockingly, she had been gored by a stag that had escaped from a nearby pen; the animal’s antler tore through Stone’s throat and fractured her spine. She was airlifted to the nearest hospital, where doctors induced a coma to save her life. While she was under, news outlets covered her story — but some outlets focused not just on the horrific accident, but on her gender as well. “I’m transgender, it’s not that big of a deal,” Stone says. “My hair color or my shoe size is way more interesting.” When she awoke, she was greeted with a slew of derogatory headlines that drove her to reclaim her own narrative, and to ensure that others would not have to endure this same indignity. She wrote to the news outlets and commenced her own media tour. The result: not only did she get the apology she rightly deserved, but in the process she was invited to sit on the journalistic ethics board of the same newspapers that once sensationalized her story.

Planet DebianKees Cook: security things in Linux v4.16

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one remaining area of work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small, simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:      48074720
Usercopyable Memory:     6367532   13.2%

    task_struct             0.2%        4480/1630720
    RAW                     0.3%         300/96000
    RAWv6                   2.1%        1408/64768
    ext4_inode_cache        3.0%      269760/8740224
    dentry                 11.1%      585984/5273856
    mm_struct              29.1%       54912/188448
    kmalloc-8             100.0%       24576/24576
    kmalloc-16            100.0%       28672/28672
    kmalloc-32            100.0%       81920/81920
    kmalloc-192           100.0%       96768/96768
    kmalloc-128           100.0%      143360/143360
    names_cache           100.0%      163840/163840
    kmalloc-64            100.0%      167936/167936
    kmalloc-256           100.0%      339968/339968
    kmalloc-512           100.0%      350720/350720
    kmalloc-96            100.0%      455616/455616
    kmalloc-8192          100.0%      655360/655360
    kmalloc-1024          100.0%      812032/812032
    kmalloc-4096          100.0%      819200/819200
    kmalloc-2048          100.0%     1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)
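To make the whitelisting idea concrete, here is a minimal, hypothetical sketch (mine, not from the patch series) of how a subsystem can expose only one field of its slab objects to usercopy. The struct, field and cache names are invented, and the exact kmem_cache_create_usercopy() argument types should be checked against the kernel tree you build for:

/*
 * Hypothetical example of usercopy whitelisting: only the 'comm' buffer
 * inside struct demo_obj may be copied to/from userspace; the hardened
 * usercopy checks will reject copies that touch the other fields.
 */
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/stddef.h>

struct demo_obj {
	unsigned long internal_state;	/* never exposed to userspace */
	char comm[16];			/* the only usercopy-able region */
	void *private_ptr;		/* never exposed to userspace */
};

static struct kmem_cache *demo_cache;

static int __init demo_init(void)
{
	/* Whitelist just the 'comm' member: its offset and its size. */
	demo_cache = kmem_cache_create_usercopy("demo_obj",
			sizeof(struct demo_obj), 0, 0,
			offsetof(struct demo_obj, comm),
			sizeof(((struct demo_obj *)0)->comm),
			NULL);
	return demo_cache ? 0 : -ENOMEM;
}

static void __exit demo_exit(void)
{
	kmem_cache_destroy(demo_cache);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");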

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is on by default and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the knowledge to make better decisions, which will let the kernel expose the availability of (the now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.
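As a quick userspace illustration of what the stack protector buys you (my toy example, unrelated to the kernel patches above): compiled with gcc -O0 -fstack-protector-strong and run with a long argument, the program below aborts with a “stack smashing detected” message instead of silently corrupting the saved return address.

/* smash.c -- deliberately unsafe, for demonstration only.
 * Build:  gcc -O0 -fstack-protector-strong smash.c -o smash
 * Run:    ./smash AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
 */
#include <stdio.h>
#include <string.h>

static void copy_arg(const char *arg)
{
	char buf[16];

	/* Intentionally unbounded copy: overflows 'buf' for long input,
	 * clobbering the canary the compiler placed on the stack. */
	strcpy(buf, arg);
	printf("copied: %s\n", buf);
}

int main(int argc, char **argv)
{
	if (argc > 1)
		copy_arg(argv[1]);
	return 0;
}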

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

,

Planet DebianEnrico Zini: ansible nspawn connection plugin

I have been playing with system images using ansible and chroots, and I figured that using systemd-nspawn to handle the chroots would make things nice, giving ansible commands the benefit of a running system.

There has been an attempt which was rejected.

Here is my attempt. It boots the machine, then runs commands inside it, and it works nicely. The only thing I am missing is a way of shutting down the machine at the end, since ansible seems to call close() at the end of each command, and I do not know enough ansible internals to do this right.

I hope this can serve as inspiration for something that works well.

# Based on chroot.py (c) 2013, Maykel Moya <mmoya@speedyrails.com>
# Based on chroot.py (c) 2015, Toshio Kuratomi <tkuratomi@ansible.com>
# (c) 2018, Enrico Zini <enrico@debian.org>
#
# This is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible.  If not, see <http://www.gnu.org/licenses/>.
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
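# Usage note (an assumption, not from the original post): saved as e.g.
# "nspawn.py" in a "connection_plugins/" directory next to the playbook
# (or a directory listed in ANSIBLE_CONNECTION_PLUGINS), the plugin can be
# selected per host with "ansible_connection: nspawn", using the chroot
# directory path as the host's address.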

import distutils.spawn
import os
import os.path
import pipes
import subprocess
import time
import hashlib

from ansible import constants as C
from ansible.errors import AnsibleError
from ansible.plugins.connection import ConnectionBase, BUFSIZE
from ansible.module_utils.basic import is_executable

try:
    from __main__ import display
except ImportError:
    from ansible.utils.display import Display
    display = Display()


class Connection(ConnectionBase):
    ''' Local systemd-nspawn based connections '''

    transport = 'nspawn'
    has_pipelining = True
    # su currently has an undiagnosed issue with calculating the file
    # checksums (so copy, for instance, doesn't work right)
    # Have to look into that before re-enabling this
    become_methods = frozenset(C.BECOME_METHODS).difference(('su',))

    def __init__(self, play_context, new_stdin, *args, **kwargs):
        super(Connection, self).__init__(play_context, new_stdin, *args, **kwargs)

        self.chroot = self._play_context.remote_addr
        # We need short and fast rather than secure
        m = hashlib.sha1()
        m.update(os.path.abspath(self.chroot))
        self.machine_name = "ansible-" + m.hexdigest()

        if os.geteuid() != 0:
            raise AnsibleError("nspawn connection requires running as root")

        # we're running as root on the local system so do some
        # trivial checks for ensuring 'host' is actually a chroot'able dir
        if not os.path.isdir(self.chroot):
            raise AnsibleError("%s is not a directory" % self.chroot)

        chrootsh = os.path.join(self.chroot, 'bin/sh')
        # Want to check for a usable bourne shell inside the chroot.
        # is_executable() == True is sufficient.  For symlinks it
        # gets really complicated really fast.  So we punt on finding that
        # out.  As long as it's a symlink we assume that it will work
        if not (is_executable(chrootsh) or (os.path.lexists(chrootsh) and os.path.islink(chrootsh))):
            raise AnsibleError("%s does not look like a chrootable dir (/bin/sh missing)" % self.chroot)

        self.nspawn_cmd = distutils.spawn.find_executable('systemd-nspawn')
        if not self.nspawn_cmd:
            raise AnsibleError("systemd-nspawn command not found in PATH")
        self.machinectl_cmd = distutils.spawn.find_executable('machinectl')
        if not self.machinectl_cmd:
            raise AnsibleError("machinectl command not found in PATH")
        self.run_cmd = distutils.spawn.find_executable('systemd-run')
        if not self.run_cmd:
            raise AnsibleError("systemd-run command not found in PATH")

        existing = subprocess.call([self.machinectl_cmd, "show", self.machine_name], stdout=open("/dev/null", "wb"))
        self.machine_exists = existing == 0

    def set_host_overrides(self, host, hostvars=None):
        super(Connection, self).set_host_overrides(host, hostvars)

    def _connect(self):
        ''' connect to the chroot; nothing to do here '''
        super(Connection, self)._connect()
        if not self._connected:
            if not self.machine_exists:
                display.vvv("Starting nspawn machine", host=self.chroot)
                self.chroot_proc = subprocess.Popen([self.nspawn_cmd, "-D", self.chroot, "-M", self.machine_name, "--register=yes", "--boot"], stdout=open("/dev/null", "w"))
                time.sleep(0.5)
            else:
                self.chroot_proc = None
                display.vvv("Reusing nspawn machine", host=self.chroot)
            self._connected = True

    def _local_run_cmd(self, cmd, stdin=None):
        display.vvv(" -exec %s" % repr(cmd), host=self.chroot)
        display.vvv(" -  or %s" % " ".join(pipes.quote(x) for x in cmd), host=self.chroot)
        p = subprocess.Popen(cmd, shell=False, stdin=subprocess.PIPE,
                             stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate(stdin)
        display.vvv(" - got %d" % p.returncode, host=self.chroot)
        display.vvv(" - out %s" % repr(stdout), host=self.chroot)
        display.vvv(" - err %s" % repr(stderr), host=self.chroot)
        return p.returncode, stdout, stderr

    def _systemd_run_cmd(self, cmd, stdin=None):
        local_cmd = [self.run_cmd, "-M", self.machine_name, "-q", "--pipe", "--wait", "-E", "HOME=/root", "-E", "USER=root", "-E", "LOGNAME=root"] + cmd
        local_cmd = [x.encode("utf8") if isinstance(x, unicode) else x for x in local_cmd]
        return self._local_run_cmd(local_cmd, stdin=stdin)

    def exec_command(self, cmd, in_data=None, sudoable=False):
        ''' run a command on the chroot '''
        super(Connection, self).exec_command(cmd, in_data=in_data, sudoable=sudoable)

        display.vvv("cmd: %s" % repr(cmd), host=self.chroot)
        return self._systemd_run_cmd(["/bin/sh", "-c", cmd], stdin=in_data)

    def _prefix_login_path(self, remote_path):
        ''' Make sure that we put files into a standard path

            If a path is relative, then we need to choose where to put it.
            ssh chooses $HOME but we aren't guaranteed that a home dir will
            exist in any given chroot.  So for now we're choosing "/" instead.
            This also happens to be the former default.

            Can revisit using $HOME instead if it's a problem
        '''
        if not remote_path.startswith(os.path.sep):
            remote_path = os.path.join(os.path.sep, remote_path)
        return os.path.normpath(remote_path)

    def put_file(self, in_path, out_path):
        ''' transfer a file from local to chroot '''
        super(Connection, self).put_file(in_path, out_path)
        display.vvv("PUT %s TO %s" % (in_path, out_path), host=self.chroot)

        out_path = pipes.quote(self._prefix_login_path(out_path))
        p = subprocess.Popen([self.machinectl_cmd, "-q", "copy-to", self.machine_name, in_path, out_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            raise AnsibleError("failed to transfer file %s to %s:\n%s\n%s" % (in_path, out_path, stdout, stderr))

    def fetch_file(self, in_path, out_path):
        ''' fetch a file from chroot to local '''
        super(Connection, self).fetch_file(in_path, out_path)
        display.vvv("FETCH %s TO %s" % (in_path, out_path), host=self.chroot)

        in_path = pipes.quote(self._prefix_login_path(in_path))
        p = subprocess.Popen([self.machinectl_cmd, "-q", "copy-from", self.machine_name, in_path, out_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        stdout, stderr = p.communicate()
        if p.returncode != 0:
            raise AnsibleError("failed to transfer file %s from %s:\n%s\n%s" % (out_path, in_path, stdout, stderr))

    def close(self):
        super(Connection, self).close()

# FIXME: how can we power off the machine? close and __del__ seem to be called after each command
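#    One possible workaround (an assumption, not something tested in the
#    original post): since the machine name is derived deterministically
#    above as "ansible-" + sha1(abspath(chroot)), a wrapper script or a
#    final local task could run "machinectl poweroff <that-name>" once the
#    whole play has finished, rather than doing it from close()/__del__.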
#    def __del__(self):
#        ''' terminate the connection; nothing to do here '''
#        # super(Connection, self).close()
#        display.vvv("CLOSE", host=self.chroot)
#        if self._connected:
#            p, stdout, stderr = self._local_run_cmd([self.machinectl_cmd, "poweroff", self.machine_name])
#            if p == 0 and self.chroot_proc:
#                self.chroot_proc.wait()
#            self._connected = False

TEDMore than 50 new Nasca Lines, located with the help of GlobalXplorer’s citizen archaeologists

This famous Nasca Line, believed to be an owl, stands about 100 feet tall. This week, National Geographic announced the discovery of more than 50 new lines — older, fainter and located with the help of citizen scientists. Photo by Aleksandr P. Thibaudeau (CC BY-NC-ND).

When it comes to archaeological features, the Nasca Lines are celebrities. A giant monkey with a curled tail, an owl waving hello, a wavy geometric spider — these geoglyphs, carved into the soil of the high desert, were made between 200 and 700 AD and are still visible from overhead. First studied in the 1920s, they’ve been the subject of fascination ever since.

This week, National Geographic reports in an exclusive that some 50 new lines have been discovered in the Nasca-Palpa region. These geoglyphs were spotted in part thanks to the 70,000 people who helped search satellite images through the citizen science platform GlobalXplorer.

Archaeologist Sarah Parcak created GlobalXplorer with the 2016 TED Prize, to help speed up the process of archaeological discovery. A “space archaeologist,” Parcak takes images of the earth’s surface captured by satellites, and processes them in order to make ancient features pop out. Her idea was to harness the power of the crowd and have them help with the time-consuming work of reading these images. With partners DigitalGlobe, National Geographic and the Sustainable Preservation Initiative, she launched GlobalXplorer and its first digital expedition: to scour the highlands and lowlands of Peru for signs of ancient sites and potential looting.

Together, the crowd searched more than 14 million images, or tiles, in Expedition Peru from February to April 2017. They covered an area of more than 150,000 square kilometers, stretching from Peru’s Pacific Coast to its Sacred Valley. The data collected was shared with Peru’s Ministry of Culture and with trusted archaeologists working in the region, to begin the process of exploring areas of interest on the ground, a process called “ground-truthing.”

In December 2017, archaeologist Luis Jaime Castillo of Pontificia Universidad Católica del Perú led a team to the Palpa-Nasca Valley to survey 27 sites noted by the crowd. Most of these sites were flagged because they displayed signs of looting. Castillo’s team found that most of this damage appeared to be old — from illegal gold mines that are no longer active. But when the team used low-flying drones to map the flagged sites, they found something else, something unexpected: dozens of geoglyphs unknown to archaeologists, like the one below.

A newly discovered geoglyph, as captured by a drone. Its abstract edges are so faint that it escaped detection by archaeologists until the drone camera, following up on clues from citizen scientists, imaged it from above. Photo courtesy of Luis Jaime Castillo, Palpa-Nasca Project

While the more famous Nasca Lines are deep depressions, created by building stone walls and then scraping away the earth in between them, many of these new lines are fainter — thinner than their more famous peers, and more worn down over time. These lines were also tucked into hillsides, rather than created on flat ground. Archaeologists suspect that while some of these lines might have been made by the Nasca people, many may predate them. They suspect that these lines were made by the Paracas and Topará cultures, as early as 500 BC. Interesting to note: in addition to geometric forms, many of these new lines also depict human forms. “Most of these figures are warriors,” Castillo told National Geographic.

Johny Isla, an archaeologist with the Peruvian Ministry of Culture who was part of the team, says that this is significant. “This is a tradition of over a thousand years that precedes the famous geoglyphs of the Nasca culture,” he said. “[It] opens the door to new hypotheses about the function and meaning.”

This, to Parcak, is the larger message. GlobalXplorer just released the results of its Peru expedition on Medium. Together, the crowd identified 19,084 features of archaeological interest that could represent a wide array of cultures — from the Norte Chico, c. 3200 BCE, to the Inca, c. 1572 CE. GlobalXplorer’s team looked at all of these and found that the crowd was accurate in its identifications 85 percent of the time — pretty impressive. Of these sites, 342 were classified as high interest. They are being explored now. All thanks to the work of the site’s dedicated armchair archaeologists.

Planet Linux AustraliaDonna Benjamin: Leadership, and teamwork.

Photo by Mohamed Abd El Ghany - Women protestors in Tahrir Square, Egypt 2013.

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us that Hitler was elected. I, too, was elected to the board of the Drupal Association; I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.

CryptogramObscure E-Mail Vulnerability

This vulnerability is a result of an interaction between two different ways of handling e-mail addresses. Gmail ignores dots in addresses, so bruce.schneier@gmail.com is the same as bruceschneier@gmail.com is the same as b.r.u.c.e.schneier@gmail.com. (Note: I do not own any of those email addresses -- if they're even valid.) Netflix doesn't ignore dots, so those are all unique e-mail addresses and can each be used to register an account. This difference can be exploited.
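To make the difference concrete, here is a small illustrative sketch (mine, not from the original write-up) of the two comparison policies: a literal string match, and a match that ignores dots in the local part the way Gmail does.

/* Illustrative only: shows why two services can disagree about whether
 * two addresses refer to the same mailbox. The canonicalization below is
 * a simplification (it also lowercases, and does not deal with "+" tags). */
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Lowercase the address and drop '.' characters before the '@'. */
static void canonicalize(const char *in, char *out, size_t outlen)
{
	size_t o = 0;
	int in_local_part = 1;

	for (; *in != '\0' && o + 1 < outlen; in++) {
		if (*in == '@')
			in_local_part = 0;
		if (in_local_part && *in == '.')
			continue;
		out[o++] = (char)tolower((unsigned char)*in);
	}
	out[o] = '\0';
}

int main(void)
{
	const char *a = "james.hfisher@gmail.com";
	const char *b = "jameshfisher@gmail.com";
	char ca[128], cb[128];

	canonicalize(a, ca, sizeof(ca));
	canonicalize(b, cb, sizeof(cb));

	printf("literal comparison (Netflix-style): %s\n",
	       strcmp(a, b) == 0 ? "same account" : "different accounts");
	printf("dot-insensitive (Gmail-style):      %s\n",
	       strcmp(ca, cb) == 0 ? "same mailbox" : "different mailboxes");
	return 0;
}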

I was almost fooled into perpetually paying for Eve's Netflix access, and only paused because I didn't recognize the declined card. More generally, the phishing scam here is:

  1. Hammer the Netflix signup form until you find a gmail.com address which is "already registered". Let's say you find the victim jameshfisher.
  2. Create a Netflix account with address james.hfisher.
  3. Sign up for free trial with a throwaway card number.
  4. After Netflix applies the "active card check", cancel the card.
  5. Wait for Netflix to bill the cancelled card. Then Netflix emails james.hfisher asking for a valid card.
  6. Hope Jim reads the email to james.hfisher, assumes it's for his Netflix account backed by jameshfisher, then enters his card **** 1234.
  7. Change the email for the Netflix account to eve@gmail.com, kicking Jim's access to this account.
  8. Use Netflix free forever with Jim's card **** 1234!

Obscure, yes? A problem, yes?

James Fisher, who wrote the post, argues that it's Google's fault. Ignoring dots might give people an enormous number of different email addresses, but it's not a feature that people actually want. And as long as other sites don't follow Google's lead, these sorts of problems are possible.

I think the problem is more subtle. It's an example of two systems without a security vulnerability coming together to create a security vulnerability. As we connect more systems directly to each other, we're going to see a lot more of these. And like this Google/Netflix interaction, it's going to be hard to figure out who to blame and who -- if anyone -- has the responsibility of fixing it.

Sociological ImagesRedefining what it means to be #YourAverageMuslim

On November 1st, 2017, Muslim YouTube phenomenon Dina Tokio premiered her documentary project “#YourAverageMuslim,” a four-part Creators for Change series produced by YouTube. This documentary is a prime example of the meaningful feminist digital activism being undertaken by contemporary Muslim women. Such activism seeks to reframe the discourse around Muslim women by showing that successful, independent and bold Muslim women are not the exception, but the norm.

For centuries, Muslim women have been subject to the Orientalist gaze, which paints Muslim female bodies as exotic, veiled, and oppressed victims in various visual and written depictions. These depictions have largely shaped the experiences of average Muslim women, who must deal with constantly being stereotyped by the public as victims of their culture and religion. These Muslim women have now taken to the online world to fight against these stereotypes. By using online platforms to make documentaries such as #YourAverageMuslim and music videos like “Somewhere in America #Mipsterz” (both of which received millions of views online) these women have been quite successful in extending their perspectives to wider audiences.

“Somewhere In America” – dir. Habib Yazdi from XY CONTENT on Vimeo.

#YourAverageMuslim highlights the lives of three Muslim women in Europe – Dalya Mlouk, Emine, and Sofia Buncy. Dalya Mlouk is the world’s first female hijabi power-lifter, who has broken the world record for deadlifting in her age and weight category. German hip-hop dancer Emine dominates Berlin’s underground hip-hop dance world, and is the first hijabi dance teacher in Europe who also owns her own dance school. Sofia Buncy stands out from the other women, in that she doesn’t wear the hijab, but works primarily in an overlooked area of social work, catering to the needs of Muslim women in prisons. 

Dina Tokio with Dalya Mlouk, Emine, and Sofia Buncy

While all these women are doing exceptional work, whether it be individual or community based, the aim of this documentary is not to showcase how exceptional these women are. Rather, its priority is to normalize the idea that your average Muslim woman may come from diverse backgrounds and is successful, multi-talented, and determined to live her life the way she chooses. Western media representations of minority groups play a large role in shaping how the public conceptualizes its notions of such groups. When these conceptualizations are depicted repeatedly, they become normalized and shape the experiences of minority group members. #YourAverageMuslim seeks to disrupt those representations by normalizing an alternate conceptualization that refrains from reducing the complex nature of the Muslim female experience in the West. This project is unique as it is dedicated specifically to showing amazing women who are not breaking any stereotype, but are instead leading #YourAverageMuslim life.

Inaash Islam is a PhD student in Sociology at Virginia Tech. She specializes in the areas of race, culture and identity, and focuses specifically on the Muslim experience in the West. 

(View original at https://thesocietypages.org/socimages)

CryptogramCybersecurity Insurance

Good article about how difficult it is to insure an organization against Internet attacks, and how expensive the insurance is.

Companies like retailers, banks, and healthcare providers began seeking out cyberinsurance in the early 2000s, when states first passed data breach notification laws. But even with 20 years' worth of experience and claims data in cyberinsurance, underwriters still struggle with how to model and quantify a unique type of risk.

"Typically in insurance we use the past as prediction for the future, and in cyber that's very difficult to do because no two incidents are alike," said Lori Bailey, global head of cyberrisk for the Zurich Insurance Group. Twenty years ago, policies dealt primarily with data breaches and third-party liability coverage, like the costs associated with breach class-action lawsuits or settlements. But more recent policies tend to accommodate first-party liability coverage, including costs like online extortion payments, renting temporary facilities during an attack, and lost business due to systems failures, cloud or web hosting provider outages, or even IT configuration errors.

In my new book -- out in September -- I write:

There are challenges to creating these new insurance products. There are two basic models for insurance. There's the fire model, where individual houses catch on fire at a fairly steady rate, and the insurance industry can calculate premiums based on that rate. And there's the flood model, where an infrequent large-scale event affects large numbers of people -- but again at a fairly steady rate. Internet+ insurance is complicated because it follows neither of those models but instead has aspects of both: individuals are hacked at a steady (albeit increasing) rate, while class breaks and massive data breaches affect lots of people at once. Also, the constantly changing technology landscape makes it difficult to gather and analyze the historical data necessary to calculate premiums.

BoingBoing article.

Worse Than FailureTo Suffer The Slings and Arrows of Vendor Products…

Being a software architect is a difficult task. Part of the skill is rote software design based upon the technology of choice. Part of it is the very soft "science" of knowing how much to design to make the software somewhat extensible without going so far as to design/build something that is overkill. An extreme version of this would be the inner platform effect.

A bike with square wheels

Way back when I was a somewhat new developer, I was tasked with adding a fairly large feature that required the addition of a database to our otherwise database-less application. I went to our in-team architect, described the problem, and asked him to request a modest database for us. At the time, Sybase was the in-house tool. He decreed that "Sybase sucks", and that he could build a better database solution himself. He would even make it more functional than Sybase.

At the time, I didn’t have a lot of experience, especially with databases, but intuition told me that Sybase had employed countless people for more than a decade to build and tweak Sybase. When I pointed this out, and the fact that it was unlikely that he was going to build a better database than all that effort - by himself - in only a few days, I received a full-on dressing down because I didn’t know what was possible, and that a good architect could design and build anything. While I agreed that given enough time it might be possible, it was highly unlikely that it would happen in the next three days (because I needed time to do my coding against the database to meet the delivery schedule). I was instructed to wait and he would get it to me in time.

My Spidey-Sense™ told me not to trust him, so I went to the DBAs that day and told them what I needed. Since I had little relevant experience with setting up a database, I asked them to optimize it with indices, etc. They created it for me that day. Since it was their implementation of my (DB) requirements, I knew that it would at least pass their review. I then coded the required feature and delivered on time. Was it perfect? No. Could it have been designed better? In retrospect, sure. But I was new to databases and it was fast enough for the need at the time.

At every meeting for the next three months, our manager asked the architect how his Sybase-replacement was coming along. He sheepishly admitted that while it was coming along well, coming up with a design that would support all of the features provided by Sybase was proving to be a bit more involved than he had imagined, and that it would take a while longer.

Several months after that, he was still making schematics and flow diagrams to try and build a new and improved Sybase.

Our manager never did do anything to stop him from wasting time.

As for me, I learned an important lesson about knowing when to write code, and when not to write code.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

TEDNerdish delight: Notes from Session 3 of TED2018

The Queen of Shitty Robots, Simone Giertz, demonstrates one of her deliciously daffy creations — a toothbrushing helmet — in Session 3 of TED2018: The Age of Amazement, April 11, 2018, in Vancouver. And yes, her shirt is made of googly eyes. Photo: Bret Hartman / TED

For Session 3 of the conference, TED Head of Curation Helen Walters says, “We’re throwing off all pretense of cool.” Seven speakers are queued up to discuss the latest advances in their fields of technology. And while the gadgets do all different things, they share one crucial function: the power to make jaws drop.

Dina Katabi is building tech that could use wireless signals to monitor our health. She speaks at TED2018 – The Age of Amazement, April 10 – 14, 2018, Vancouver, BC, Canada. Photo: Bret Hartman / TED

How wireless signals could revolutionize healthcare. Growing up, technologist Dina Katabi was fascinated by Star Wars — and specifically, the Force. Later, the MIT professor realized that wireless signals are kind of like the Force — they can travel through space, they bounce off walls, and they reflect off human bodies. Which got her thinking: “If I had a device that could sense these minute reflections, I would be able to feel people I could not see.” She turned her musings into a flat, iPad-sized device that can use wireless signals to detect all sorts of things about a person, such as our movements, breathing, heartbeat, and even our sleep — all without wearables. To demonstrate, Katabi called an assistant onto the stage and asked him to breathe. On screen, the audience could see a line perfectly charting his inhales and exhales, as well as his heartbeat. This type of device could transform healthcare, she suggests: “If we can monitor chronic disease patients in their home, we can detect changes in their breathing, heartbeat, mobility, sleep, and we can detect emergencies before they occur and have a doctor intervene earlier so that we can avoid hospitalization.” Katabi’s team is now partnering with multiple hospitals to use the device with patients who have various chronic diseases. In a brief Q&A, Walters asks the researcher how we could prevent her work from being used for more insidious applications. Technology already exists that could be integrated to prevent people from being monitored without their consent, explains Katabi, and policy will also play an important role. “Hopefully, with the two of them, we can control any issues,” she says.

Supasorn Suwajanakorn explores technology that makes “fake video” — compelling simulations of people saying words they didn’t say in real life. Beyond the technical challenges of doing this work, the ethical challenges must be faced. Photo: Bret Hartman / TED

Tech that brings us face-to-face with our deepest fears. Imagine watching an interview with your favorite actor, only to discover later that everything — their expression, their words, the wrinkles on their face — was fake, a product of sophisticated algorithms. Researching this kind of work is the specialty of computer scientist Supasorn Suwajanakorn, who developed software that can create ultra-convincing digital fakes while at the University of Washington. The process he uses to construct these simulations is, unsurprisingly, extremely complex. An algorithm crunches huge sets of visual data (both photos and video) in order to generate a 3D moving model; then small, believable details like subtle mannerisms and expressions are blended in to reconstruct a talking head in unsettlingly accurate detail. Suwajanakorn is all too aware of how easily this technology could be used to manipulate the unaware or gullible on the Internet, so he is joining forces with others in his field to safeguard us against the same work he created.

Giada Gerboni shares a collection of soft robots that can move, grip, wiggle and squeeze into tight spaces. Photo: Bret Hartman / TED

Cutting-edge robots … with no edges. For years, robots have been designed for speed and precision, but their rigidity has limited the situations they can be used in. How could we create robots that can better fit into the real world? “We can look to nature,” says Stanford University biomedical engineer Giada Gerboni. Consider the octopus. The cephalopod has no bones and no stiff structures, allowing it to adapt to a huge variety of situations and its tentacles to move in countless ways. This is “embodied intelligence, a kind of intelligence that all living organisms have,” as Gerboni puts it. The emerging field of soft robotics aims to use embodied intelligence to make robots more flexible, nimble and safe to use. Researchers are using pliant materials to build the robots’ bodies and distribute their actuators — the mechanisms responsible for the movement of the machines — across the body, instead of only at joints, allowing for a much greater range of motion. Soft robots hold great promise for surgery and medicine, since most traditional robots are too stiff to be trusted in as sensitive an environment as the human body. Gerboni herself is now working on “robotic needle steering” — a flexible needle that could be used in minimally invasive procedures. And while researchers are still many years away from replicating an octopus’s graceful moves in machine form, they’re rapidly approaching the day that soft robots can play a role in helping us in our daily lives.

The beauty of building useless things. As all seven-year-olds know, brushing your teeth is boring. (That’s why it’s called a chore.) So, in 2015, Simone Giertz decided to make a robot that could brush her teeth. Then she filmed a seven-second clip showing her tooth-brushing helmet in action and posted it online. Since then, Giertz has become the unofficial Queen of Shitty Robots, with a YouTube channel dedicated to her wonderfully wacky creations. She has made robots that chop vegetables, robots that wake her up, robots that cut hair, and even robots that apply your lipstick. Her inventions rarely, if ever, succeed at their tasks. What drives her to build such hapless robots? “The true beauty of making useless things,” according to Giertz, is “acknowledging you don’t always know what the best answer is, and it turns off that voice in your head that tells you that you know exactly how the world works. And maybe a toothbrush helmet isn’t the answer, but at least you’re asking the question.”

The hyper-detailed photo behind Rajiv Laroia was not taken with a high-end professional camera — but with a smartphone-sized camera that combines images from 16 individual cameras into a single 52-megapixel photo. Photo: Bret Hartman / TED

A pocket-sized camera that captures the world. Engineer Rajiv Laroia, who invented much of the technology behind the LTE 4G wireless standard, shows the audience a magnificently detailed photo of Times Square in New York City, zooming in on different features — a streetside Minnie Mouse, his wife, even the time on someone’s wristwatch. Then he pulls a small device from his pocket — it’s the camera he used to take the image. His mighty mini-machine, which he calls the L16, has 16 individual cameras that simultaneously capture the scene, and Laroia says its “real magic” lies in its sophisticated computational and machine learning algorithms, which fuse all of the images together into a single 52-megapixel photo. Thanks to its software and multiple lenses, the camera can deliver photos that are three-dimensional in their depth perception. Cameras like the L16 offer exciting opportunities in fields like self-driving vehicles, healthcare, security, VR and retail. “But photography isn’t interesting just because of the technology — it’s about what you can uncover about the world,” says Laroia.

The hands on the slide behind Melanie Shapiro are wearing the Token ring, which stores your personal data (including ID and credit card info) and lets you use it safely in many ways. Photo: Bret Hartman / TED

Solving the hard problem of human ID. Have you ever been hacked? If so, you’ve heard people make the same old suggestions to you. “Change your password again! Turn on two-factor authentication! But these are band-aids, and they distract us from the real problems we’re facing,” says technology entrepreneur Melanie Shapiro. “The issue is, in our digital systems, individual computers are very easy to identify — but individual humans aren’t.” That’s why many of us carry around multiple identity tokens — credit cards, ID cards, house keys, office swipe cards, and passwords (multiple passwords, please). The end result: “We have weak standards for our own identity, and it’s eroding the ability for us to trust each other, the systems that we’re using and the companies we interact with.” Shapiro’s solution is a product called Token, a sleek wearable ring that holds all of a person’s identity data in one place. She shows a compelling demo video of users tapping their rings against laptops to log in, against cars to open the door and start the engine, and against subway turnstiles and grocery store scanners to pay. Could the answer to our ID problem lie in the same technology ancient Romans used — a signet ring?

Host Chris Anderson, left, speaks with Gwynne Shotwell, the president of SpaceX. On the slide behind her, a proposal for a rocket-powered mode of travel. Imagine blasting off from Vancouver and being in Bahrain in 40 minutes. And as Shotwell says: “It’s definitely going to happen.” Photo: Bret Hartman / TED

Fundamental risk reduction for humanity. Gwynne Shotwell has one of the most amazing jobs on the planet — she was employee #7 at SpaceX and is now the company’s president. In conversation with TED curator Chris Anderson, Shotwell discusses what inspired her to pursue a career in engineering, how she drove SpaceX’s partnership with NASA, its race to be the next to put people into orbit and what it’s like to work with Elon Musk — including the concept of “Elon time.” After discussing SpaceX’s design process, including their reusable rockets (something no national space program has been able to achieve) as well as a semi-secretive project to put a constellation of satellites into low-Earth orbit to cover the planet in internet (at an estimated cost of $10 billion), Shotwell took some time to dream a little. She detailed SpaceX’s next big project: the BFR, or Big Falcon Rocket — which will be about two-and-a-half times the size of Falcon Heavy, the giant rocket they flew earlier this year (the one that delivered a Tesla roadster to space). BFR is what you need to take humanity to Mars, for sure — but it has a “residual capability,” as Gwynne puts it: rocket travel here on Earth. The plan is to fly BFR like an aircraft, doing point-to-point travel, taking off from New York City or Vancouver and flying halfway across the globe in roughly 40 minutes. Anderson says incredulously: “This is never going to happen!” and Gwynne shoots back: “Oh no, it’s definitely going to happen” — and within a decade. The timeframe for landing humans on Mars looks about the same, she says, since both projects are built on the same technology. And to the question of why, with all the problems here on Earth, SpaceX has their eyes on the stars, Shotwell has a vision: “This is the first step to us moving to other solar systems and potentially other galaxies,” she says. “This is the only time I out-vision Elon: I want to meet people, or whatever they call themselves, in another solar system.”

TEDAfter the end of history … : Notes from Session 2 of TED2018

Historian Yuval Noah Harari steps into the future — by speaking as a hologram at TED2018: The Age of Amazement, April 11, 2018, in Vancouver. Photo: Bret Hartman / TED

After the collapse of the Soviet Union, says TED Curator Chris Anderson as he opens Session 2, “we used to think that democratically-powered capitalism was supposed to take over the world.” Political scientist Francis Fukuyama famously called it the “end of history.” So now what happens? Five TED speakers come to the stage to talk about the possibilities and perils that lie ahead.

The temptations of fascism. In his 2017 book Homo Deus, historian Yuval Noah Harari predicted that humanity would become digital in the future — but he didn’t think it would happen this quickly, or to him. To deliver his talk, Harari appears as a digital avatar projection live from Tel Aviv, Israel. These days, he says, the term “fascism” is frequently tossed around. But Harari explains we’ve forgotten what fascism actually is; many of us use it as a synonym for nationalism. Yet nationalism and fascism are quite different: the former can make allies out of strangers, telling us that our nation is unique and we have a special obligation towards it and each other; the latter tells us that our nation is supreme and we don’t have to care about anyone or anything else. But while fascists formerly took power by controlling land and machines, today’s threat comes from governments or corporations controlling another commodity: data. “The greatest danger that now faces liberal democracy is that the revolution in information technology will make dictatorships more efficient than democracies,” Harari says. With the rise of AI, centralized data processing could give dictatorships a critical advantage over relatively decentralized democracies. So what can we do to prevent this possibility? If you’re an engineer, find ways to prevent too much data from being concentrated in too few hands, Harari suggests. And non-engineers should think about how to avoid being manipulated by those who control the data. “It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. In a brief post-talk Q&A, Harari stresses the need to remain vigilant against tech companies that own our data and not assume that market incentives will prevent abuses. “In theory, you can rise against corporations, but in practice it is extremely difficult,” he warns.

Being part of a democracy involves a lot of decisions — one reason we as citizens assign some of those decisions to our representatives. César Hidalgo has a bold — and in fact a pretty out-there — idea to make it work better. Photo: Bret Hartman / TED

Let’s automate democracy. There’s a fundamental weakness with representative democracy, says MIT physics professor César Hidalgo. It’s, well, that it’s representative, depending on the participation of people who can be manipulated, act inefficiently, or simply not show up at the polls. He, like other concerned citizens, wants to make sure we have elected governments that truly represent our values and wishes. His radical solution: what if scientists could create an AI that votes for you? Hidalgo envisions a “digital Jiminy Cricket that is able to answer questions on your behalf and make better decisions at larger scale.” Each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. He is careful to say that this process would be aboveboard: in the kind of system he posits, data would not be used against you but as a way to best learn about your opinions. The idea is, once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

Poppy Crum studies how we express emotions — and suggests that the era of the poker face may be coming to an end, as new technologies allow us to know other people’s physiological responses. Photo: Bret Hartman / TED

Tech that can tell what you’re feeling. We like to think we have complete control over what other people see, know and understand about our internal world. But Poppy Crum, chief scientist at Dolby Laboratories, poses a provocative question: Do we actually possess that control, and even if we do, how long will it last? “Today’s technology is starting to make it really easy to see the signals and tells that give us away,” she says. She explains that we can learn a great deal about a person’s internal state by pairing sensors with machine learning. For example, infrared thermal imaging can reveal changes in our stress level, how hard our brain is working, and whether we’re fully engaged in what we’re doing. To make her point, she gives attendees a quick fright by showing a startling clip from horror film The Woman in Black. Using tubes embedded in the theater, she shows a real-time data visualization of CO2 in the room, pinpointing the moment the audience collectively jumped — and vividly displaying how a chemical analysis of our breath can betray our feelings. While this kind of technology may sound terrifying and Big Brother-ish, Crum believes it can help us, by making us more connected. “I’m not looking to create a world where our inner lives are ripped open and our personal data and privacy given away to people and entities where we don’t want it to go,” she explains, “but I am looking to create a world where we can care about each other more effectively, we can know more about when someone is feeling something that we ought to pay attention to, and we can have richer experiences of our technology.” In a short Q&A, Crum discusses the absence of rules governing this kind of technology. “I’m a believer that we have to step full force in and think about all the ways that this can be used to enable us to develop the right control over it,” she says.

The economic theories of Kate Raworth can be summed up by this doughnut: a space of stability, prosperity and balance that does not demand unchecked growth. Photo: Bret Hartman / TED

Reimagining the shape of progress. “From your children’s feet to the Amazon forest, nothing in nature grows forever,” says Oxford economist Kate Raworth. “So why would we imagine our economies as the one system that can buck this trend?” Since the mid-20th century, a single number has held nations — from policymakers to individuals — in its thrall: gross domestic product, or GDP. Like children blowing a soap bubble, we pray that it keeps growing and growing. Our economies have become “financially, politically and socially addicted” to relentless GDP growth, despite the fact that it’s impossible to achieve and too many people and the planet are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth (this diagram clarifies her analogy). She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits. Ultimately, our economies need to become “regenerative and distributive by design,” says Raworth. The former means working with and within the natural cycles of the earth; the latter, spreading the sources of wealth and production into the hands of many. “We have inherited economies that need to grow, whether or not they make us thrive,” she explains. “Today — in the world’s richest countries — we need economies that make us thrive, whether or not they grow.” (Read an excerpt from her book Doughnut Economics at TED Ideas.)

If we’re building ever more powerful AIs, says Max Tegmark, we should start thinking now about how to prevent the worst outcomes we can imagine. Because, as he suggests, AI is one of those technologies — like nuclear power and synthetic biology — where trial-and-error won’t really work. It’s best to get this one right the first time. Photo: Bret Hartman / TED

Getting ambitious about AI. “I want to avoid this silly carbon chauvinism idea that you can only be smart if you’re made of meat,” says MIT physicist Max Tegmark. As he sees it, humanity has two options as we move closer to a world where artificially intelligent machines can do everything better and cheaper than we can. Option #1: We could be complacent and not worry about the consequences as we build our technology. Or, Option #2: We could be ambitious and envision a truly inspiring future, then figure out how to steer towards it. The smarter choice is clear, but to do that, we  need to overhaul our attitudes. We currently function with a reactive “learning from mistakes” mindset; instead, we must be proactive as we develop powerful technology like nuclear weapons — because our first mistake may be our last. Yes, it’s a powerful responsibility, but our destiny lies in our hands. “We’re all here to celebrate the age of amazement,” says Tegmark. “I think that the essence of the age of amazement should lie in becoming not overpowered, but empowered by our technology.”

Planet Linux AustraliaJames Morris: Linux Security Summit North America 2018 CFP Announced


The CFP for the 2018 Linux Security Summit North America (LSS-NA) is announced.

LSS will be held this year as two separate events, one in North America (LSS-NA), and one in Europe (LSS-EU), to facilitate broader participation in Linux Security development. Note that this CFP is for LSS-NA; a separate CFP will be announced for LSS-EU in May. We encourage everyone to attend both events.

LSS-NA 2018 will be held in Vancouver, Canada, co-located with the Open Source Summit.

The CFP closes on June 3rd, and the event runs from August 27th to 28th.

To make a CFP submission, click here.

,

TEDInto the fray, undaunted: Notes from TED Fellows Session 2 at TED2018

Visual artist, musician, collector and thinker Paul Rucker shows off one of his talents as he opens TED Fellows Session 2 at TED2018: The Age of Amazement, April 10, 2018, in Vancouver. Photo: Ryan Lash/TED

To commence TED Fellows Session 2, multi-hyphenate Paul Rucker takes the stage with his cello. (Spoiler alert: you will see him later in this writeup showcasing another artform.) Inspired by his mother, who learned to play the organ through a mail-order course, Rucker taught himself how to play this instrument. But right here on the TED Fellows stage, he’s not playing his mama’s cello (we would guess). Starting with one of Bach’s stately cello suites, he turns it on its head. He records it, loops it, and then improvises the heck out of it — at times, yelling into the cello’s body, placing a pencil between the strings, and thumping the wood. The overall effect: unfamiliar, intriguing and fun.

Kotchakorn Voraakhom wants to help her city of Bangkok cope with climate change. Her contribution: an urban park that collects and cleans water, and adds much-needed green space too. She speaks during the TED Fellows Session 2. Photo: Ryan Lash / TED

Designing climate-resilient cities. Before Bangkok grew into a glass-and-steel agglomeration, seasonal flooding was a welcome event that people associated with fertile lands. Now, with natural waterways blocked, flooding is dreaded — and this attitudinal turnabout has happened in many cities in southeast Asia. Following a devastating 2011 flood that affected her family and millions of others in Thailand, Bangkok landscape architect Kotchakorn Voraakhom resolved to help her sinking city become climate-resilient. Her first step: designing Chulalongkorn Centennial Park, which opened in 2017 and provides much more than recreational space. “Like a monkey holds food in its cheek and gradually eats it,” Voraakhom says, “the park is a place to hold overflow water when the ground is saturated.” Built on an incline to collect flood runoff, the urban refuge includes the biggest green roof in the country, a constructed wetland that cleans water, and a retention pond that stores it. It’s a powerful object lesson in both the potential of the landscape as a key part of a city’s climate infrastructure and in community-based design — because her park incorporated the feedback of those affected by global warming. “Climate is changing,” declares Voraakhom. “The real question is whether we are ready to change, too.”

Forecasting a future flu. A century ago, the Spanish Flu killed between 20 million and 40 million people worldwide, and the flu continues to be one of the planet’s most threatening diseases. Researchers and public-health officials have long wondered: how can we predict — and prevent — the next pandemic? The flu is difficult to study, it turns out, because so many different strains exist, and because outbreaks hit some places and people harder than others. London School of Hygiene & Tropical Medicine mathematician and researcher Adam Kucharski is attempting to separate the factors specific to a particular outbreak from the underlying principles that drive all outbreaks. He and his team retroactively studied the 2009 Hong Kong flu epidemic, coming up with 100 different forecasting mathematical models based on information about social behaviors and immunity. Out of the 100, “the most accurate one showed that if we want to predict infection patterns, we need data on physical contacts: things like handshakes and hugs,” he says. He and his team think their model might be applied to other countries if researchers have enough data about such physical contacts, so he and collaborators built an app to track these kinds of behaviors and launched a public-science project to recruit people in the UK. With the support of a BBC media campaign, more than 30,000 Brits have consented to participate. The subjects will be anonymous, and Kucharski plans to make the data publicly available to researchers. “With such data and our growing insights into how behavior shapes outbreaks,” he says, “we’ll be able to study flu pandemics in a whole new level of detail.”

Faith Osier fights the deadly scourge of malaria by studying people who’ve acquired immunity to the disease. What can we learn from their immune response? Photo: Lawrence Sumulong / TED

Creating a better malaria vaccine. Kenyan immunologist Faith Osier is another TED Fellow engaged in the fight against lethal diseases. Her foe: malaria. Every year, there are 200 million cases of malaria in Africa alone, and 500,000 of them result in death. But despite dramatic advances in technology that have revealed how complex the parasite behind the disease is, “the vaccines we have made to date are simply not good enough,” says Osier, a researcher at the Heidelberg University Hospital in Germany and founder of the South-South Malaria Antigen Research ParTnership network. She is trying to make a more effective vaccine by studying the antibody response of people who acquire immunity to the disease; in particular, she wants to understand how the proteins in a successful immune response interact with and kill the parasite. “Just like we can now see the parasite in greater definition, my team and I are focused on understanding how our bodies overcome this complexity,” says Osier.

Confronting a painful history through art. Paul Rucker (remember him?) is a multidisciplinary artist — the first artist-in-residence at the National Museum of African-American Culture, in fact — and a collector. But unlike most collectors, he accumulates artifacts without any positive connotations for him; his objects of choice are associated with America’s history of slavery. Having amassed everything from branding irons to postcards depicting lynchings, he decided — in the midst of researching the Ku Klux Klan — that he really needed to acquire a Klan robe. But “I couldn’t find the quality I was looking for,” he wryly says. “So, as a Black man in America, I decided I had no choice but to make the best-quality Klan robes.” Since 2015, the Baltimore-based artist has made 75 of these garments in non-traditional fabrics like denim, satin and kente cloth and in sizes that range from toddler to adult. Each one represents a reflection on the insidious nature of systemic racism. “I made this one in camouflage as a way to talk about the stealth aspect of racism,” Rucker says of one robe that he displays to the audience. “It blends in with its surroundings and is kept safe because it can hide.” The act of sewing these robes has proved to be cathartic for their maker. “I realized after making so many robes that they had lost their power over me,” he says. “If we can confront these objects of our history, we can diminish the power they hold over all of us.”

What can we learn when we listen to sea animals? Claire Simeone, of the Marine Mammal Center in Hawaii, introduces us to the concept of “zoognosis,” or learning between animals and humans. Photo: Lawrence Sumulong / TED

Shared knowledge can benefit humans and animals. A veterinarian, conservationist and director of the Marine Mammal Center in Hawaii, Claire Simeone calls for opening our minds to a new kind of learning: between humans and animals. Simeone has coined the term “zoognosis” to define this spread of knowledge. In one example of human-to-animal zoognosis, a combination of antibiotics and a human-intended medical gel were used by the vet to treat Carmella, a sea lion with a nasty eye ulcer. Similarly, in animal-to-animal zoognosis, humans can take the information gained from studying certain animals and apply it to other species. And, yes, animal knowledge can also be used to help humans. Sea lions, for instance, can warn us of ocean climate change if we’re willing to listen, according to Simeone. These mammals suffer seizures from algae bloom toxins, typically before the poison can be detected in water samples. As our oceans warm, the blooms are becoming more frequent. “Our health and the health of our oceans rests on us understanding the importance of sharing this zoognosis,” she concludes. “We must know our fauna to know ourselves.”

When the past texts us. History is written by the victors, as the saying goes, but what would it look like if it reflected the thoughts and experiences of everyone affected? Journalist and historian Mikhail Zygar has begun turning this “what if” scenario into reality in Russia — where many citizens believe their country has never or will never be truly democratic — by reframing its history through Project1917. For this effort, Zygar and his collaborators digitized the real diaries and letters of over 3,000 people who lived more than a century ago, created a user account for each person, and then updated their newsfeeds for every day of 1917, the year of the Russian Revolution. By allowing anyone on the Internet to read the daily thoughts and feelings of Igor Stravinsky, Leon Trotsky, Tsar Nicholas II and many less celebrated figures, the project contextualizes and rehumanizes history as it once was and as it could have been. Currently, Zygar and his team are working on a project centered on the year 1968, imagining what the year marked by monumental social change would have looked like if the main political actors had used smartphones (cue visual of Bobby Kennedy as an iPhone owner). “Knowing history and understanding how common people influenced history will help us create a better future,” says Zygar. “Ordinary people matter; ideas matter; journalists, media, philosophers, artists matter; we shape the society, we all make history.”

Did we humans evolve from monkeys, or from fish? Prosanta Chakrabarty asks a bigger question: Why do we humans think of ourselves as the end of a line, instead of part of a complex cycle? Photo: Ryan Lash / TED

A tiny speck in the ancient process of evolution. An expert on the fascinating lives of cavefish, ichthyologist Prosanta Chakrabarty teaches one of the largest evolutionary biology classes at Louisiana State University. With each fresh group of students, the TED Senior Fellow likes to start by dispelling common myths, such as the idea that we are monkeys (yes, according to the prof, we’re actually fish). “We learn plants and bacteria are primitive things, and fish give rise to amphibians, followed by reptiles and mammals, and then you get you — this perfectly evolved creature at the end of the line,” he explains. “But life doesn’t evolve in a line, and it doesn’t end with us.” He encourages us to remember we’re just a small part of a complex evolutionary process that has been happening for 4 billion years. “Perhaps it’s better still to think of us as a little fish out of water,” Chakrabarty says. “Yes, one that learned how to walk and talk, but one that still has a lot of learning to do about who we are and where we came from.”

Rethinking the hospital waiting room. In America, more than 90 percent of children, or 74 million kids, go to see a doctor at least once a year, which means countless hours spent in a waiting room for parents. Those hours are an unexploited opportunity, realized Boston pediatrician Lucy Marcil, and she’s turning the wasted time into monetary savings. In 2015, she cofounded StreetCred, which brings free financial services into clinics and hospitals. How it works: a hospital registers as a tax-preparation site, and prospective volunteers must study and pass an IRS exam in order to help patients there. Besides tax assistance, volunteers inform families about tax credits. For example, a family’s average return from the Earned Income Tax Credit (EITC) is $2000 to $3000 a year, but most of those who qualify are unaware that it’s available. In its first two years, StreetCred has returned $1.6 million to families in Boston, and it has expanded to nine sites in four states this year. These services are returning more than money to families, says Marcil. They’re giving back something just as valuable: hope.

Protecting all victims of gender violence. While federal civil-rights legislation is able to protect some victims of gender violence in the US, it doesn’t protect all of them. Many activists have focused their efforts on drafting and passing state or federal laws, but it’s not the most effective solution, contends Washington, DC, attorney Laura Dunn. She says, “it’s time to go to the Constitution — rather than institution to institution — for reform.” She explains that passage of the Equal Rights Amendment is the step we need to ensure full gender equality, and SurvJustice, her national nonprofit, is devoted to trying to make that happen. By ushering in sweeping change, says Dunn, “our legal systems can become a system of justice for survivors and #MeToo can finally become #NoMore.”

Essam Daod cofounded Humanity Crew, a consortium of psychiatrists who help refugees understand and reframe their experiences at every stage of their journey. Photo: Ryan Lash / TED

The refugee crisis as mental-health crisis. In the last three years, more than 12,000 refugees in the world have lost their lives, and more than 350,000 displaced children don’t have the psychological support to weather the traumas of dislocation and conflict. Essam Daod, a child psychiatrist based in Haifa, Israel, has devised short and powerful psychological interventions to help refugees reframe their experiences and establish a more positive narrative. He and his wife, Maria Jammal, cofounded Humanity Crew, a nonprofit organization which provides mental health support to refugees at every stage of their journeys. So far, the group’s therapists and trained volunteers have provided more than 26,000 hours of counseling to over 10,000 displaced people. “We need to acknowledge that first aid is not just needed for the body but also has to include the mind, the soul,” says Daod. “The impact on the soul is hardly visible, but the damage can be there for life.”

Sarah Sandman shares Brick x Brick, a powerful art project that reclaims slurs and insults by means of 200 jumpsuits. Photo: Ryan Lash / TED

Put your body where your heart is. “Taking to the streets is an invaluable and necessary human act,” says Brooklyn-based artist and designer Sarah Sandman. “There is no greater affirmation of our communal values than physically showing up together.” The TED Senior Fellow designs her projects to harness these magical moments of public togetherness and emotional solidarity. Brick x Brick, one of her most recent works, was inspired by the need to fight against the rampant sexism present in American politics and to transform collective anger into action and healing. Created by Sandman and fellow artists and launched in 2016, it has spread across the country, with protesters wearing jumpsuits covered with colorful patches (each containing derogatory slang about women) and standing in a strong, silent human wall against misogyny. It has elicited reactions of emotional recognition and transformation, especially from women who’ve experienced sexual assault and harassment. “Perhaps collective pain can only be healed through collective public expression,” says Sandman. “When we get together and creatively organize, we feel ourselves acting as an organism larger than ourselves. We start to see the power that lies dormant when we are isolated.”

Sandman inspires one last magical moment at the session when she leads the audience into making noise — clapping and pounding to sound out the Morse code for “water,” the liquid that joins us and sustains us.

Sustained by that rousing activity and by the buoyant tunes of “Blinky Bill” Sellanga, the attendees shimmy out of the room. Hey, Fellows: nicely played.

Blinky Bill closes down TED Fellows Session 2 with a performance that got the audience up and dancing. Photo: Ryan Lash / TED

TEDIn Case You Missed It: Exploring the amazing (in every sense of the word) at day 1 of TED2018

TED2018, themed “The Age of Amazement,” kicked off Tuesday with two eye-opening sessions of talks from this year’s TED Fellows — with tech and science demos, music, dance and comedy — as well as the opening of exhibits and, of course, a memorable Session 1 full of bold ideas, tough truths and jaw-dropping creative visions.

Here are some of the themes we heard echoing through the opening day, as well as some highlights from around the conference venue in Vancouver.

Is the world getting better or worse? Was 2017 really the “worst year ever,” as some would have us believe? Comparing the most recent data on homicide, poverty and pollution with the same measures from 30 years ago, psychologist Steven Pinker shows that we’re doing better now in every one of them. The same goes for autocracies, nuclear weapons and deaths from terrorism, all of which have declined since 1988. But progress isn’t inevitable; it doesn’t mean everything gets better for everyone, everywhere, all the time; and it’s not a miracle. From the TED Fellows stage, international security researcher Benedetta Berti makes the case that policies written to stem terrorism after 9/11 haven’t actually made us more secure. Too often, officials have viewed global security as a zero-sum game, believing that the only way to become safer is to compromise human values and rights. “This narrative is flawed and, worse, counter-productive,” says Berti; it fuels a never-ending cycle of conflict, trauma and radicalization.

Diane Wolk-Rogers, a teacher at Marjory Stoneman Douglas High School in Parkland, Florida, offers three ways we could move forward to create more safety and responsibility around guns, speaking at TED2018 on April 10, 2018, in Vancouver, BC. (Photo: Ryan Lash / TED)

Activists tackling our biggest problems. Diane Wolk-Rogers teaches history at Marjory Stoneman Douglas High School in Parkland, Florida, site of a horrific massacre two months ago. In a stirring talk that covered the history of the Second Amendment and the NRA, she asks us to engage with the issue of gun violence — to keep such a mass school killing from ever happening again. And, she says, “If you’re not sure where to start, look to my students as role models.” From the TED Fellows stage, editor Olga Yurkova tells the story of StopFake.org, which investigates and exposes fake news. She shares two easy ways we can all make sure we’re not reading (or sharing) untruth: be skeptical when you come across a story that is exceptionally dramatic, captivating or clickbait-y; and double-check the facts in what you read. “It’s on us to find a way to rebuild trust, because fake news destroys it,” Yurkova says. And conservation biologist Steve Boyes, a TED Senior Fellow, shared how he’s fighting to save the Okavango Delta, the largest undeveloped river basin in the world. “Preserving wilderness is far more than simply protecting ecosystems that clean the water we drink and create the air we breathe,” he says. “Preserving wilderness protects our basic human right to be wild, our basic human right to explore.”

Actor and activist Tracee Ellis Ross shares a powerful message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. (Photo: Bret Hartman / TED)

How #MeToo can become #NoMore. Opening the conference from the main stage, actress Tracee Ellis Ross says we are in the midst of a cultural shift that is being led by women. Their experiences will not be ignored, and women will no longer be held responsible for the deplorable behaviors of men. She invites men in as allies, to be accountable and self-reflective; and asks women to acknowledge their fury instead of being afraid of it. Picking up this idea on the TED Fellows stage, artist Sarah Sandman shares her work, Brick x Brick, in which protesters wear jumpsuits covered with bright patches, each containing derogatory slang about women, and stand in a strong, silent human wall against misogyny. And attorney and TED Fellow Laura Dunn is taking this issue all the way to Washington, DC, with passage of the Equal Rights Amendment, legislation that would ensure full gender equality and protection of gender violence survivors. SurvJustice, her national nonprofit, is devoted to trying to make that happen.

In a bold talk, Jaron Lanier challenges tech companies to look beyond revenue models favoring virality over credibility, steering us toward a future where unbounded creativity, equality and love defines the human race. (Photo: Ryan Lash / TED)

How can we rediscover humanity in our modern systems? Scientist, musician and writer Jaron Lanier was there when the internet was being built — as a free, open, egalitarian space for communication. But haunting his pioneering vision has always been a lurking dark side of control and stratification — because if everything on the net is free, then the money has to come from online advertising. And “what started out as advertising really can’t be called advertising any more,” Lanier says. “It can only be called behavioral modification.” He challenges tech companies to look beyond revenue models favoring virality over credibility, and steers us toward a future where unbounded creativity, equality and love define the human race. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” Lanier says to applause. On the TED Fellows stage, journalist and historian Mikhail Zygar grapples with our online identity crisis in a different way, telling the story of Project1917, which reframes history by asking what the internet would’ve looked like in 1917 Russia. By allowing anyone on the internet to read the daily thoughts and feelings of Igor Stravinsky, Leon Trotsky, Tsar Nicholas II and many less celebrated figures, the project contextualizes and rehumanizes history as it once was and as it could have been.

Asking difficult questions of ourselves and each other. In 2016, Williams College student Zachary R. Wood wrote to two conservative thinkers with whom he deeply disagreed, Bell Curve co-author Charles Murray and commentator John Derbyshire — and invited them to speak on his campus. Wood is the president of Uncomfortable Learning, a college group that specializes in the particular education that comes when we try to understand the other side. “Tuning out opposing viewpoints doesn’t make them go away,” he contends. “In order to understand the potential of society to progress forward, we need to understand the counter forces.” TED Fellow Paul Rucker is also confronting uncomfortable subjects, by curating an unusual collection of objects connected to America’s history of slavery. He’s made 75 Ku Klux Klan robes, in a range of colors, patterns and fabrics, to unravel the power they hold over him. “If we can confront these objects of our history,” Rucker says, “we can diminish the power they hold over all of us.”

If public-health superheroes wore capes … Malaria and influenza remain two of the world’s most threatening diseases, but TED speakers are laughing in their face. (Kind of.) TED Fellow Faith Osier is creating a vaccine for malaria; specifically, she’s looking at people who develop immune responses and trying to learn from how their proteins interact with and kill the plasmodium parasite. Meanwhile, fellow TED Fellow Adam Kucharski is on the trail of the flu. He sees prevention as key to countering future outbreaks, so he and a team have devised a predictive mathematical model that links patterns of human contact (things like hugs and handshakes) to the virus’s spread.

“Climate is changing,” says architect and TED Fellow Kotchakorn Voraakhom. “The real question is whether we are ready to change, too.”

What are we going to do about climate change? “Where there are glaciers, there are people, and the two have been influencing each other for the entirety of human history,” says glaciologist and TED Fellow M Jackson. If we want to understand what’s happening to our world today as our ice is melting, we need to start looking at how the changing landscape of glaciers is already impacting communities around the world from Iceland to Pakistan. Canada’s Minister of Science Kirsty Duncan picked up this thread, explaining how climate change information is suppressed or obscured across the world — and how Canada is fighting to keep science open. “We want to send a message that you don’t mess with something so fundamental, so precious, as science,” Duncan says. In a heartfelt talk about science’s mission to push boundaries, she makes the case that we must hold our leaders accountable — and speak up when we see science being suppressed. And back on the Fellows stage, Bangkok landscape architect Kotchakorn Voraakhom told the story of how she helped her sinking city become climate-resilient, designing Chulalongkorn Centennial Park, which opened in 2017 and provides more than recreational space. Like a monkey holds food in its cheek and gradually eats it, the park is a place to hold overflow water when the ground is saturated. “Climate is changing,” declares Voraakhom. “The real question is whether we are ready to change, too.”

Visitors to the Food Trend Lab at TED2018 tasted treats like puffed lily pad seeds and plant-based cheese.

News from the loop. TED’s partners are out in full force. Lounges designed by Steelcase and restoration (read: massages!) by Vitruvi offered TEDsters quiet places for impromptu meetings and recharging. At the Altair exhibit, attendees explored the intersection of human creativity, machine learning and simulation-driven innovation — and put their design skills to the test using Altair’s software to blueprint an ideal golf driver. At Target’s installation, attendees immersed themselves in a soundscape of diverse stories about the changing meaning of 7pm for a city’s inhabitants. At the Tech Playground, visitors got a hands-on look at some amazing new technology, from augmented workplaces and companion droids to (micro) literature and (invisible) art. And at the Food Trends Lab, attendees sampled health elixirs, fresh juices, plant-based foods and innovative sweets and savories; today’s menu included puffed lily pad seeds from Lily Puffs, banana milk smoothies from Moola, kombucha floats from Betterwith and plant-based cheese by Blue Heron. Six different local coffee roasters (and one tea partner) kept everyone awake long enough to take it all in.

A Welcome party to remember, to close the day. Complete with rappelling dancers and fireworks on the bay, this year’s opening party was a bash to be remembered.

Krebs on SecurityWhen Identity Thieves Hack Your Accountant

The Internal Revenue Service has been urging tax preparation firms to step up their cybersecurity efforts this year, warning that identity thieves and hackers increasingly are targeting certified public accountants (CPAs) in a bid to siphon oodles of sensitive personal and financial data on taxpayers. This is the story of a CPA in New Jersey whose compromise by malware led to identity theft and phony tax refund requests filed on behalf of his clients.

Last month, KrebsOnSecurity was alerted by security expert Alex Holden of Hold Security about a malware gang that appears to have focused on CPAs. The crooks in this case were using a Web-based keylogger that recorded every keystroke typed on the target’s machine, and periodically uploaded screenshots of whatever was being displayed on the victim’s computer screen at the time.

If you’ve never seen one of these keyloggers in action, viewing their output can be a bit unnerving. This particular malware is not terribly sophisticated, but nevertheless is quite effective. It not only grabs any data the victim submits into Web-based forms, but also captures any typing — including backspaces and typos as we can see in the screenshot below.

The malware records everything its victims type (including backspaces and typos), and frequently takes snapshots of the victim’s computer screen.

Whoever was running this scheme had all victim information uploaded to a site that was protected from data scraping by search engines, but the site itself did not require any form of authentication to view data harvested from victim PCs. Rather, the stolen information was indexed by victim and ordered by day, meaning anyone who knew the right URL could view each day’s keylogging record as one long image file.

Those records suggest that this particular CPA — “John,” a New Jersey professional whose real name will be left out of this story — likely had his computer compromised sometime in mid-March 2018 (at least, this is as far back as the keylogging records go for John).

It’s also not clear exactly which method the thieves used to get malware on John’s machine. Screenshots for John’s account suggest he routinely ignored messages from Microsoft and other third-party Windows programs about the need to apply critical security updates.

Messages like this one — about critical security updates available for QuickBooks — went largely ignored, according to multiple screenshots from John’s computer.

More likely, however, John’s computer was compromised by someone who sent him a booby-trapped email attachment or link. When one considers just how frequently CPAs need to open Microsoft Office and other files submitted by clients and potential clients via email, it’s not hard to imagine how simple it might be for hackers to target and successfully compromise your average CPA.

The keylogging malware itself appears to have been sold (or perhaps directly deployed) by a cybercriminal who uses the nickname ja_far. This individual markets a $50 keylogger product alongside a malware “crypting” service that guarantees his malware will be undetected by most antivirus products for a given number of days after it is used against a victim.

Ja_far’s sales threads for the keylogger used to steal tax and financial data from hundreds of John’s clients.

It seems likely that ja_far’s keylogger was the source of this data because at one point — early in the morning John’s time — the attacker appears to have accidentally pasted ja_far’s jabber instant messenger address into the victim’s screen instead of his own. In all likelihood, John’s assailant was seeking additional crypting services to ensure the keylogger remained undetected on John’s PC. A couple of minutes later, the intruder downloaded a file to John’s PC from file-sharing site sendspace.com.

The attacker apparently messing around on John’s computer while John was not sitting in front of the keyboard.

What I found remarkable about John’s situation was that, despite receiving notice after notice that the IRS had rejected many of his clients’ tax returns because those returns had already been filed by fraudsters, for at least two weeks John does not appear to have suspected that his compromised computer was likely the source of said fraud inflicted on his clients (or if he did, he didn’t share this notion with any of his friends or family via email).

Instead, John composed and distributed to his clients a form letter about their rejected returns, and another letter that clients could use to alert the IRS and New Jersey tax authorities of suspected identity fraud.

Then again, perhaps John ultimately did suspect that someone had commandeered his machine, because on March 30 he downloaded and installed Spyhunter 4, a security product by Enigma Software designed to detect spyware, keyloggers and rootkits, among other malicious software.

Evidently suspecting someone or something was messing with his computer, John downloaded the trial version of Spyhunter 4 to scan his PC for malware.

Spyhunter appears to have found ja_far’s keylogger, because shortly after the malware alert pictured above popped up on John’s screen, the Web-based keylogging service stopped recording logs from his machine. John did not respond to requests for comment (via phone).

It’s unlikely John’s various clients who experience(d) identity fraud, tax refund fraud or account takeovers as a result of his PC infection will ever learn the real reason for the fraud. I opted to keep his name out of this story because I thought the experience documented and explained here would be eye-opening enough, and I have no particular interest in ruining his business.

But a new type of identity theft that the IRS first warned about this year involving CPAs would be very difficult for a victim CPA to conceal. Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms and using them to file phony refund requests. Once the IRS processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Tax professionals might consider using something other than Microsoft Windows to manage their clients’ data. I’ve long dispensed this advice to people in charge of handling payroll accounts for small- to mid-sized businesses. I continue to stand by this advice not because there isn’t malware that can infect Mac or Linux-based systems, but because the vast majority of malicious software out there today still targets Windows computers, and you don’t have to outrun the bear — only the next guy.

Many readers involved in handling corporate payroll accounts have countered that this advice is impractical for people who rely on multiple Windows-based programs to do their jobs. These days, however, most systems and services needed to perform accounting (and CPA) tasks can be used across multiple operating systems — mainly because they are now Web-based and rely instead on credentials entered at some cloud service (e.g., UltraTax, QuickBooks, or even Microsoft’s Office 365).

Naturally, users still must be on guard against phishing scams that try to trick people into divulging credentials to these services, but when your entire business of managing other people’s money and identities can be undone by a simple keylogger, it’s a good idea to do whatever you can to keep from becoming the next malware victim.

According to the IRS, fraudsters are using spear phishing attacks to compromise computers of tax pros. In this scheme, the “criminal singles out one or more tax preparers in a firm and sends an email posing as a trusted source such as the IRS, a tax software provider or a cloud storage provider. Thieves also may pose as clients or new prospects. The objective is to trick the tax professional into disclosing sensitive usernames and passwords or to open a link or attachment that secretly downloads malware enabling the thieves to track every keystroke.”

The IRS warns that some tax professionals may be unaware they are victims of data theft, even long after all of their clients’ data has been stolen by digital intruders. Here are some signs there might be a problem:

  • Client e-filed returns begin to be rejected because returns with their Social Security numbers were already filed;
  • The number of returns filed with the tax practitioner’s Electronic Filing Identification Number (EFIN) exceeds the number of clients;
  • Clients who haven’t filed tax returns begin to receive authentication letters from the IRS;
  • Network computers running slower than normal;
  • Computer cursors moving or changing numbers without touching the keyboard;
  • Network computers locking out tax practitioners.

TEDDoom. Gloom. Outrage. Uproar. Notes and feelings from Session 1 of TED2018

Diane Wolk-Rogers is a teacher at Parkland, site of a horrific massacre two months ago. She asks us to engage with the issue of gun violence — to keep such a mass school killing from ever happening again. And, she says, “If you’re not sure where to start, look to my students as role models.” Wolk-Rogers speaks at TED2018 on April 10, 2018, in Vancouver. Photo: Ryan Lash / TED

Let it be noted, says co-host Chris Anderson, that this year’s TED started with a roaring cheer and with a scream of anguish. Our job this week is to explore what’s amazing, in every sense of the word, from the jaw-droppingly wonderful to the shocking and urgent. So Chris and co-host Helen Walters have decided to kick off Session 1 with a quick voice vote: How do you feel about things these days? They prompt the audience to respond with a happy “yay!” or a scream of “argh!” And it’s about 50/50 between them. Clearly, it’s time for TED.

Embracing lifetimes of fury. We are in the midst of a cultural shift — and that shift is being led by women. Generations of women have dealt with harassment, and generations have been silenced. Women have been told they are overreacting, they are being unreasonable, they are being too sensitive. But actor and activist Tracee Ellis Ross has a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. There is a “culture of men helping themselves to women,” from seemingly innocuous slights to the “most egregious, violent and horrific situations.” This culture exists on a spectrum where “the innocuous makes space for the horrific” — and women have to live with the effects of both. Ross believes it is past time that men take responsibility to change men’s bad behavior. She offers an invitation to men, calling them in as allies, with the hope they will “be accountable and self reflective. Compassionate and open,” that they will ask how they can be of service to change and supportive of women. As for women, she offers a different invitation: “acknowledge your fury.” Women have tamped down and rationalized away that fury for centuries, because women aren’t supposed to get angry. That fury is a result of never being able to directly address or express indignation, frustration and rage. It is the result of being ignored and quieted. And it is time to harness it, she says. “Your fury is not something to be afraid of. It holds lifetimes of wisdom. Let it breathe, and listen.”

Tracee Ellis Ross opens the TED2018 conference with a manifesto for women and men. Photo: Bret Hartman / TED

Use supporting detail. What’s it like to be a teacher in the midst of a school shooting? Humanities teacher Diane Wolk-Rogers begins her talk with a calm, detailed recitation of what happened in her classroom during the massacre at Marjory Stoneman Douglas High School in Parkland, Florida, two months ago. She lined up her students. She held up a sign so they could follow her. She led them down the hall, away from the pop pop pop of bullets. Outside, she sat on the curb, calling her family to say she was okay. And she knew that her world would never be the same. How to move forward? She calls on the same techniques she uses in her classroom: to admit when she doesn’t know the answer, do some research, and start coming up with options. We start with the history of the Second Amendment. Ratified in order to build a militia to protect a fledgling nation, it’s been re-interpreted in significant ways throughout the following years. And that gives Wolk-Rogers hope: “The interpretation of the Second Amendment and cultural attitudes about guns have changed over time — which gives me hope they could change again.” She offers three ways we could move forward to create more safety and responsibility around guns. And there’s a fourth option: the solution we can create ourselves.

Jaron Lanier helped to conceptualize our digital culture in its early years; he spoke at some very early TEDs to share this vision (here he is at TED2). He returns to the stage this year to review our progress and make a bold pitch to reclaim our humanity online. Photo: Bret Hartman / TED

A vision, a warning — and a way forward. Jaron Lanier steps onstage clearly affected by the talk that came before him. He takes a moment, wipes his eyes, and says to Diane, “Thank you.” A pause, and he begins. We too start in the past, in the early digital culture that Lanier was instrumental in thinking through. But haunting his pioneering vision of an internet built on “post-symbolic communication” — a place where humans share experiences through shared, beautiful interactive worlds — has always been a lurking dark side of control and stratification created by omnipotent personal devices that control our lives, monitoring us and feeding us stimuli. Of course, this dark side is, arguably, exactly the world we’ve built for ourselves today: bots influence elections, truth bows before “truthiness,” and the lowest common denominator sinks lower and lower. How did we get here? Lanier has a theory: it lies in Silicon Valley’s two conflicting desires, to make everything free while simultaneously venerating entrepreneurial money-making. If everything is free, where does the money come from? Online advertising. And, as he says, “What started out as advertising really can’t be called advertising any more. It can only be called behavioral modification,” targeted and data-driven and manipulative. These free platforms now use behavior modification to turn people into eternal consumers — a “globally tragic, ridiculous mistake.” As he says: “We cannot have a society in which if two people wish to communicate the only way that can happen is if it’s financed by a third person who wishes to manipulate them.” In this bold talk, Lanier challenges tech companies to look beyond revenue models favoring virality over credibility, and to instead strive for “peak social media” — a place where paid, high-quality content mimics the paid models of “Peak TV” (think HBO, Netflix and Amazon) and steers us toward a future where unbounded creativity, equality and love define the human race. (Read an excerpt from his newest book here.)

Bringing some New Orleans flair to Vancouver. Live and direct from New Orleans, The Soul Rebels rocked the TED stage with a tight, rhythmic and energetic musical interlude. The eight-piece band, with two trumpet players, two trombones and two percussionists, rounded out with a tuba and a sousaphone, played three songs — “Rebelosis,” “Rebel Rock” and “Rebel on That Level” — turning the red circle into a jazz club, if only for a few minutes.

When Zachary Wood invited a controversial speaker to his campus, he got a fascinating lesson in how to listen to voices he disagreed with. Photo: Bret Hartman / TED

Why we need to be uncomfortable. In 2016, Williams College student Zachary R. Wood wrote to two conservative thinkers whose ideas he despised, Bell Curve coauthor Charles Murray and commentator John Derbyshire — and invited them to come speak on his campus. Wood is the president of Uncomfortable Learning, a college group that specializes in the particular education that comes when we try to understand the “other side.” His upbringing was defined by strong conflicting forces — the volatile mood swings of his mother, who was diagnosed with schizophrenia when he was 10, and also her fierce intellect and sense of empathy. “She encouraged me to see the world and the issues our world faces as complex, controversial and ever-changing,” says Wood. At Williams, however, he found his openness to “difficult ideas” was not shared by the entire student body — the outcry eventually led the college president to rescind Derbyshire’s invitation. “Yet tuning out opposing viewpoints doesn’t make them go away,” contends Wood, now a senior. “In order to understand the potential of society to progress forward, we need to understand the counter forces.” His parting wish: to live in a world with leaders who are “familiar with the depths of the views of those they deeply disagree with so they can understand the nuances of everyone they’re representing.” Washington, are you listening?

Why science must be free. Fake news, alternative facts, and other forms of suppression are affecting all types of scientific issues, including climate change – and that’s a big problem. “In our modern, technological age, when our very survival depends on discovery, innovation and science, it is absolutely critical that our scientists are free to undertake their work, free to collaborate with other scientists, free to speak to the media, and free to speak to the public,” says Kirsty Duncan, Canada’s first Minister of Science. What’s more, they must be able to explore controversial and unconventional topics, present uncomfortable truths, challenge the thinking of the day, and to fail. “That’s how scientists push boundaries and pushing boundaries is, after all, what science is all about,” she says. As Science Minister and former parliamentary member, Duncan has worked hard to make this a reality in Canada, where scientists like Max Bothwell were previously prevented from speaking freely to the media. “If you see that science is being stifled, suppressed or attacked, speak up. If you see that scientists are being silenced, speak up. We must hold our leaders to account,” she says. “After all, science is for everyone, and it will lead to a better, brighter, bolder future for us all.”

“Is progress inevitable? Of course not. Progress does not mean that everything becomes better for everyone everywhere all the time. That would be a miracle, and progress is not a miracle, but problem solving.” Steven Pinker muses on the idea of progress onstage at TED2018. Photo: Bret Hartman / TED

Has the world (for all its troubles) gotten better over time? Every day, we read about shootings, inequality, pollution, dictatorships, war and the spread of nuclear weapons. These are some of the reasons that 2016 was called the “worst year ever” — until 2017 claimed that record and left many longing for earlier decades when the world seemed safer, cleaner and more equal. “Is this a sensible way to understand the human condition in the 21st century?” asks psychologist Steven Pinker. Comparing the most recent data on homicide, poverty and pollution with the same measures from 30 years ago, Pinker shows that we’re doing better now in every one of them. The same goes for autocracies, nuclear weapons and deaths from terrorism, all of which are on the decline when compared with 1988. So why do people still think the world is going to hell? Pinker thinks it has something to do with progress — specifically, an aversion to the idea of it, even by progressives. Progress isn’t a matter of faith or of glass-half-full optimism, Pinker says; it’s a testable hypothesis. But while humanity has become more democratic, safer, happier and more peaceful (not to mention more likely to know how to read and write and have access to running water and technology), you wouldn’t know it by looking at the news. The New York Times has gotten increasingly morose, and a sample of the world’s broadcasts has gotten steadily more glum, Pinker says, because of our cognitive bias to focus on the bad and not the good — and because news is about what happens, not what doesn’t happen. (You never hear a journalist say they’re reporting live from a country that’s been at peace for 40 years or a city that hasn’t been attacked by terrorists today.) Progress isn’t inevitable; it doesn’t mean everything gets better for everyone, everywhere, all the time; and it’s not a miracle, he says. Instead, it’s problem-solving, and we should look at climate change and nuclear war as problems to be solved, not apocalypses in waiting. “We’ll never have a perfect world, and it’d be dangerous to seek one,” Pinker says. “But there’s no limit to the betterments we can attain if we continue to apply knowledge to enhance human flourishing.” (Read an excerpt from his newest book here.)

Worse Than FailureSponsor Post: Make Your Apps Faster With Raygun APM

Your software is terrible, but that doesn’t make it special. All software is terrible, and yes, you know this is true. No matter how good you think it is, bugs and performance problems are inevitable.

But it’s not just the ugly internals, the mysterious hacks and the code equivalent of duct tape and chewing gum that make your software terrible. Your software exists to fill some need for your users, but how do you know that’s happening? And worse, when your application fails, how do you understand what happened?

In the past, we’ve brought your attention to Raygun, which allows you to add a real-time feedback loop that gives you a picture of exactly what’s happening on your users’ devices and browsers. And now, Raygun is making it even better with Raygun APM.

The Raygun Logo

Raygun Application Performance Monitoring (APM) tackles the absolute worst part of releasing/supporting applications: dealing with performance issues. With Raygun APM, you can get real-time execution stats on your server-side code, and find out quickly which specific function, line, or database call is slowing down your application.

Raygun highlighting execution timelines and aligning them with code in GitHub

You won’t have to wait for someone to notice the issue, either- Raygun APM proactively identifies performance issues and builds a workflow for solving them. Raygun APM sorts through the mountains of data for you, surfacing the most important issues so they can be prioritized, triaged and acted on, cutting your Mean Time to Resolution (MTTR) and keeping your users happy.

Raygun's issue management interface

In addition to all this, Raygun is adding tight integration with source control, starting with GitHub.

Request access to the beta here. Or if you’re already tired of searching logs for clues in an effort to replicate an issue, try out Raygun’s current offerings and resolve errors, crashes and performance issues with greater speed and accuracy.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

CryptogramThe Digital Security Exchange Is Live

Last year I wrote about the Digital Security Exchange. The project is live:

The DSX works to strengthen the digital resilience of U.S. civil society groups by improving their understanding and mitigation of online threats.

We do this by pairing civil society and social sector organizations with credible and trustworthy digital security experts and trainers who can help them keep their data and networks safe from exposure, exploitation, and attack. We are committed to working with community-based organizations, legal and journalistic organizations, civil rights advocates, local and national organizers, and public and high-profile figures who are working to advance social, racial, political, and economic justice in our communities and our world.

If you are either an organization that needs help or an expert who can provide it, visit their website.

Note: I am on their advisory committee.

Worse Than FailureA Comment on the Military Industrial Complex

Simon T tugged at his collar when the video played. It wasn’t much, just a video of their software being tested. It wasn’t the first time they’d tested Simon’s most recent patch, but it was going to be the last time. There were a lot of eyes in the conference room, and they were all turned on him.

Simon worked for the kind of company which made missiles. The test in the video was one of the highly expensive tests of a real missile under real-world conditions. Several of these had already been done with this software package, so Simon hadn’t expected any problems to crop up. In this case, though, the missile left its launcher and sailed in a perfect parabolic arc into the ground 5 meters away from the launch site.

Missiles diving headfirst into the ground mere meters from their launch site was officially considered a bad thing. There were all sorts of checkpoints and automated tests and simulations that were supposed to keep this sort of thing from happening. It didn’t take long to find the problem.

if roll < 0 then
{
  {we're adjusting the roll here cos it's too high so we are going to take just half
  roll = roll / 2;
  zcdem = zdem; { add gravity }
}
else
{
  {roll is clockwise}
  …
}

This code happens to be Turbo Pascal 4, a version of the language released in 1987. Simon’s job had been to create this Turbo Pascal code by porting the logic from Fortran 68, running on a mainframe. Due to hardware constraints, the Fortran version took 8 hours to simulate and calculate a missile’s trajectory. Simon’s Turbo Pascal version could do the same job in near real time.

There’s just one problem. Curly brackets in Turbo Pascal can be used to mean radically different things. On a line by themselves, they can substitute for begin or end statements, but they can also serve as comment indicators.

And you can see where this is going. Simon left off his closing } on the {we're adjusting… line. You might expect an error like that to be caught by the compiler, and it might have… had he not also had the { add gravity } comment, which handily provided a closing curly brace, essentially commenting out the entire body of the if statement.
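
For comparison, here is a minimal sketch (not Simon’s actual fix, just an illustration reusing the variable names from the snippet above) of the same branch written as conventional Turbo Pascal, with explicit begin/end blocks, Pascal’s := assignment operator, and every comment closed on the line where it opens:

if roll < 0 then
begin
  { roll is negative and too large, so take just half of it }
  roll := roll / 2;
  zcdem := zdem; { add gravity }
end
else
begin
  { roll is clockwise; the clockwise handling is elided here, as in the original }
end;

The substance of the fix is nothing more than the closing brace on the first comment; the begin/end keywords simply make it impossible for a single curly brace to do double duty as both a comment delimiter and a block delimiter.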

In their testing, they’d somehow never hit this condition. Even in the real-world tests, the wind had previously been blowing from the west, which meant the missile had a positive value for roll. Only on a day with an easterly wind did they catch this bug.

For want of a } the missile was lost…

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

TEDReimagining the future: Notes from TED Fellows Session 1 at TED2018

So, what’s it like to discover a rare galaxy? Burçin Mutlu-Pakdil speaks about her singular experiences during Fellows Session 1 at TED2018: The Age of Amazement, April 10, 2018, in Vancouver. Photo: Ryan Lash /TED

Forget those miserly folk who hoard their best for the last — TED believes in starting strong. Kicking off TED2018 is Session 1 of the TED Fellows, who count among their ranks artists, activists, scientists, researchers, conservationists, thinkers and changemakers of all kinds. The Fellows program now totals 453 individuals from 96 countries. In this session, the 10 newest Fellows and 4 Senior Fellows took the stage in the Community Theater.

TED Senior Fellow Joshua Roman begins the session with his original composition “Riding Light.” It’s a tossup as to what is more compelling: His music, which ranges from challenging to melodic to rhythmic? Or his bright argyle socks? Answer: All of the above! (Session attendees check off “listen to shoeless cellist” from their bucket list.)

Our planet’s last wild places: breathtaking and at risk. We humans are fundamentally connected to and dependent on the natural world, but our stewardship of it has been, frankly speaking, lousy. Four years ago, it was declared that half of the planet’s wildlife had disappeared in just 40 years. “We urgently need to create safe spaces for these wild animals,” says explorer and conservation biologist Steve Boyes, a TED Senior Fellow. In 2014, Boyes and his colleagues launched an effort to explore and protect southern Africa’s Okavango Delta, the largest undeveloped river basin in the world. Navigating territorial hippos and incendiary landmines in dugout canoes, they explored and conducted detailed scientific surveys of all of the major rivers in the basin. A group of 57 scientists is also exploring an area called the Okavango-Zambezi Water Tower, and they’re currently working to establish a government system to preserve it. Boyes urges us all to act now — not a few months from now or a few years from now, but now now — to preserve the earth’s remaining wild spaces. “Preserving wilderness is far more than simply protecting ecosystems that clean the water we drink and create the air we breathe,” he says. “Preserving wilderness protects our basic human right to be wild, our basic human right to explore.”

In a passionate talk, conservation activist Steve Boyes makes the case to preserve wild spaces — not just for the wild animals and plants that live there, but for humanity’s sake too. Photo: Ryan Lash / TED

In a puzzling galaxy far, far away …  As far as we know, the universe houses more than one trillion galaxies, and most of them take the form of spirals like our own Milky Way. However, there are other kinds and shapes of galaxies, and scientists are still trying to grasp how they form and evolve. Hoag-type galaxies — symmetrical circular rings with nothing visibly connecting them — were believed to be the most uncommon type … until University of Arizona astrophysicist Burçin Mutlu-Pakdil and her team discovered the rarest celestial event they’d ever seen. Now named Burçin’s Galaxy, it was first thought to be a Hoag-type galaxy until close study revealed it to have an outer ring and a reddish second inner ring. “There is no known mechanism that can explain the existence of an inner ring in such a peculiar system,” Mutlu-Pakdil says. “Discovery of such rare galaxies tells us that we still have a lot to learn, and we should keep looking deeper and deeper in space and search for the unknown.” (Okay, a new item to add to bucket list: “discover galaxy and have it named after you.”)

Promoting civilian-centered security. Nearly two decades have passed since 9/11, and in its wake, countless policies have been written and implemented, ostensibly designed to bolster security. But have the policies actually made us more secure? No, according to international security researcher Benedetta Berti, a TED Senior Fellow. Too often, officials see global security as a zero-sum game and believe that the only way to become safer is to compromise on human values and rights. “This narrative is flawed and, worse, counterproductive,” says Berti; it fuels a never-ending cycle of conflict, trauma and radicalization. The alternative she proposes: shift away from a military-first approach and toward a sustainable focus on protecting civilians and building lasting peace. By shielding everyone from violence and ensuring that their lives can be lived in dignity, Berti says, we can succeed in creating a stable future for all.

Empowering and educating people to stop fake news. “Fake news is not only bad for journalism,” says Ukrainian editor Olga Yurkova, “it’s a threat for democracy and society.” In 2014, she and a group of journalists launched a website called StopFake.org, which investigates and exposes biased, inaccurate reporting about Ukraine. Since its launch, StopFake has taught more than 10,000 people how to spot fake news and trained fact-checkers around the world. There are two easy ways we can all make sure we’re not reading (or sharing) untruths, according to Yurkova. First, be skeptical when you come across a story that is exceptionally dramatic, captivating or clickbait-y; the truth is often not that exciting. Second, don’t simply accept what you read; double-check the facts by consulting other sites and Googling names, addresses and authors. After all, “our society depends on trust — trust in our institutions, in science; trust in our leaders; trust in our news outlets,” Yurkova says. “And it’s on us to find a way to rebuild trust, because fake news destroys it.”

With her project StopFake.org, Olga Yurkova helps train people to avoid sharing viral untruths. She reminds us: Is that viral story in your social media feeds dramatic and emotional? It might just be clickbait, because “the truth is often not that exciting.” Photo: Ryan Lash / TED

Designing for equity and justice. Discrimination, racism, sexism and poverty don’t just happen by chance, says Antionette Carroll, a St. Louis–based social entrepreneur. They tend to be supported by systems that were created by people in order to exclude other people. “Designers such as myself have started to realize that if these different forms of oppression are by design, then they can be redesigned,” Carroll says. After the fatal shooting of Michael Brown by the police in Ferguson, Missouri, in 2014 and the demonstrations and unrest that followed, Carroll established the Creative Reaction Lab to attempt to dismantle systems of inequality. Her nonprofit trains black and Latino youths to be what she calls “equity designers”: individuals who are embedded in the community that they want to change and who use existing resources and good design practices to transform it. For example, they recently explored how the absence of public transportation has affected the lives of lower-income Black community members in St. Louis. Carroll asks, “How do we design a world that provides people with the resources and opportunities needed for them to be their best — and authentic — selves without judgement and hate?” We can’t wait to see what she comes up with.

The human price of cheap migrant labor. Think about any news footage you’ve ever seen of immigration discussions, and then ask yourself: Which of the parties involved is not in the room? “What often is missing in the global debate over refugees, migrants and immigrants — voices of the disenfranchised,” says investigative journalist Yasin Kakande. He chronicled the injustices and inequalities inflicted on the African migrant labor force in the Middle East — until his muckraking writing caused him to be forced out of Dubai and deported. Unable to take the TED2018 stage in Vancouver due to his asylum application status with the US, Kakande delivered an impassioned message via video, urging open discussion between migrants and political leaders. He calls for the drum beat for justice and opportunity to never cease. As he declares, “a hashtag, an op-ed, or an o