Planet Russell


Cryptogram — Detecting Laptop Tampering

Micah Lee ran a two-year experiment designed to detect whether or not his laptop was ever tampered with. The results are inconclusive, but demonstrate how difficult it can be to detect laptop tampering.

Worse Than Failure — Error'd: Version-itis

"No thanks, I'm holding out for version greater than or equal to 3.6 before upgrading," writes Geoff G.

"Looks like Twilio sent me John Doe's receipt by mistake," wrote Charles L.

"Little do they know that I went back in time and submitted my resume via punch card!" Jim M. writes.

Richard S. wrote, "I went to request a password reset from an old site that is sending me birthday emails, but it looks like the reCAPTCHA is no longer available and the site maintainers have yet to notice."

"It's nice to see that this new Ultra Speed Plus™ CD burner lives up to its name, but honestly, I'm a bit scared to try some of these," April K. writes.

"Sometimes, like Samsung's website, you have to accept that it's just ok to fail sometimes," writes Alessandro L.

Planet Debian — Daniel Pocock: GoFundMe: errors and bait-and-switch

Yesterday I set up a crowdfunding campaign to purchase some equipment for the ham radio demo at OSCAL.

It was the first time I tried crowdfunding and the financial goal didn't seem very big (a good quality AGM battery might only need EUR 250) so I only spent a little time looking at some of the common crowdfunding sites and decided to try GoFundMe.

While the campaign setup process initially appeared quite easy, it quickly ran into trouble after the first donation came in. As I started setting up bank account details to receive the money, errors started appearing:

I tried to contact support and filled in the form, typing a message about the problem. Instead of sending my message to support, it started trying to show me long lists of useless documents. Finally, after clicking through several screens of unrelated nonsense, another contact form appeared and the message I had originally typed had been lost in their broken help system and I had to type another one. It makes you wonder, if you can't even rely on a message you type in the contact form being transmitted accurately, how can you rely on them to forward the money accurately?

When I finally got a reply from their support department, it smelled more like a phishing attack, asking me to give them more personal information and email them a high resolution image of my passport.

If that was really necessary, why didn't they ask for it before the campaign went live? I felt like they were sucking people in to get money from their friends and then, after the campaign gains momentum, holding those beneficiaries to ransom and expecting them to grovel for the money.

When a business plays bait-and-switch like this and when their web site appears to be broken in more ways than one (both the errors and the broken contact form), I want nothing to do with them. I removed the GoFundMe links from my blog post and replaced them with direct links to PayPal. Not only does this mean I avoid the absurdity of emailing copies of my passport, but it also cuts out the five percent fee charged by GoFundMe, so more money reaches the intended purpose.

Another observation about this experience is the way GoFundMe encourages people to share the link to their own page about the campaign and not the link to the blog post. Fortunately, in most communication I had with people about the campaign I gave them a direct link to my blog post, and this makes it easier for me to change the provider handling the money by simply removing the links from my blog to GoFundMe.

While the funding goal hasn't been reached yet, my other goal, learning a little bit about the workings of crowdfunding sites, has been helped along by this experience. Before trying to run something like this again I'll look a little harder for a self-hosted solution that I can fully run through my blog.

I've told GoFundMe to immediately refund all money collected through their site so donors can send money directly through the Paypal donate link on my blog. If you would like to see the ham radio station go ahead at OSCAL, please donate, I can't take my own batteries with me by air.

Planet Linux Australia — Simon Lyall: Audiobooks – April 2018

Viking Britain: An Exploration by Thomas Williams

Pretty straightforward. Covers the up-to-date research (no winged helmets) and is easy to follow (easier if you have a map of the UK). 7/10

Contact by Carl Sagan

I’d forgotten how different it was from the movie in places. A few extra characters and plot twists, many more details, and explanations of the science. 8/10

The Path Between the Seas: The Creation of the Panama Canal, 1870-1914 by David McCullough

My monthly McCullough book. Great as usual. Good picture of the project and people. 8/10

Winter World: The Ingenuity of Animal Survival by Bernd Heinrich

As per the title, this spends much of its time on varied strategies for winter adaptation, versus Summer World’s more general coverage. A great listen. 8/10

A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin

Great overview of the Apollo missions. The author interviewed almost all the astronauts. Lots of details about the missions. Excellent. 9/10

Walkaway by Cory Doctorow

Near-future sci-fi. Similar feel to some of his other books like Makers. Switches between characters, and the audiobook switches narrators to match. Fast-forward the sex scenes. Mostly works. 7/10

The Neanderthals Rediscovered: How Modern Science Is Rewriting Their Story by Michael A. Morse

Pretty much what the subtitle advertises. Covers discoveries from the last 20 years which make other books out of date. Tries to be Neanderthals-only. 7/10

Straightforward story of the 1964 Alaska earthquake. Follows half a dozen characters and concentrates on the worst-damaged areas. 7/10

Rondam Ramblings — I don't know where I'm a gonna go when the volcano blows

Hawaii's Kilauea volcano is erupting.  So is one in Vanuatu.  And there is increased activity in Yellowstone.  Hang on to your hats, folks, Jesus's return must be imminent. (In case you didn't know, the title is a line from a Jimmy Buffett song.)

Planet Debian — Norbert Preining: Onyx Boox Note 10.3 – first impressions

I recently got my hands on a new gadget, the Onyx Boox Note. I have now owned a Kindle Paperwhite (2nd gen), a Kobo Glo, a Kobo GloHD, and now the Onyx Boox Note. There were two prime reasons for getting this device: (i) the ability to mark up PDFs and EPUBs with comments (something I need for research, review, checking, …), and (ii) the great PDF readability (automatic crop support), which is of course also related to the bigger screen.

The Note's main screen shows the last read book and some others from the library, plus direct access to some apps. One can scroll through the most recently read books at the top by swiping right. I would have preferred somewhat smaller icons on the big screen to see more of the books; maybe in a future firmware version.

Not too many applications are available, but the Play Store is there and one can get most programs. Unfortunately it seems that K-9 Mail – my main mail program on Android – does not support the Note.

Reading EPUBs is quite a normal experience. Nothing to complain about here. The usual settings, etc.

Where the Note is great is PDFs, which are a huge pain on my smaller devices. Neither the Kindle nor the Kobo has decent PDF support in my opinion, while the Note allows for auto-cropping (as seen in the image below), as well as manual cropping and several other features. Simply great.

Another wonderful feature is that one can scribble directly in the PDF or EPUB, and the notes will be saved. In addition, there is a commenting mode in landscape format, with the document on the left and the notes on the right; see below. Both modes are very useful.

Besides adding notes to PDFs and EPUBs, one can also keep a notebook. Here is the Notes main interface screen, which allows selecting previous notes, adding new ones, and some more operations (I still don’t know the function of most icons).

Here is a simple example of scribbling around. Surprisingly good. I will see how much my normal paper note taking will be replaced by this.

Note taking and markup can of course be done with the finger, but the pen that comes with the device is much better. The sleeve that comes with the device has a holder for the pen, so it can be with you wherever you go.

Finally some hardware specs from one of the hardware info programs.

I have used the Note for two weeks now for reading, PDF markup, and a bit of note taking. So far I have a very good impression: good battery run time and a durable feel. What I am missing is a backlight for reading at night. I guess with more usage time I will find more points to criticize, but for now I think it was an excellent purchase.

Planet Debian — Junichi Uekawa: Seems like my raspberry pi root filesystems break after about 2 years.

Seems like my Raspberry Pi root filesystems break after about two years, presumably because I have everything, including /var/log, on them. Writes fail. Is there a good way to monitor wear, like SMART for HDDs? A quick search didn't give me much.
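There is no SMART equivalent for most SD cards, but one crude workaround (a sketch of my own, not a known tool) is to periodically check that the filesystem still accepts writes, since worn-out cards often end up silently remounted read-only:

```python
import os
import tempfile


def filesystem_writable(path):
    """Return True if we can create, write, and delete a file under path.

    A failing write is an early warning worth alerting on, e.g. from a
    cron job that emails when this returns False.
    """
    try:
        fd, name = tempfile.mkstemp(dir=path)
        try:
            os.write(fd, b"healthcheck")
        finally:
            os.close(fd)
        os.unlink(name)
        return True
    except OSError:
        return False
```

Checking `/var/log` itself (`filesystem_writable("/var/log")`) would need to run as root; the path is a parameter so the check can target whichever mount point sits on the SD card.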


Planet Linux Australia — Michael Still: How to make a privileged call with oslo privsep

Once you’ve added oslo privsep to your project, how do you make a privileged call? It’s actually really easy to do. In this post I will assume you already have privsep running for your project, which at the time of writing limits you to OpenStack Nova in the OpenStack universe.

The first step is to write the code that will run with escalated permissions. In Nova, we have chosen to have only one set of escalated permissions, so it’s easy to decide which set to use. I’ll document how we reached that decision and alternative approaches in another post.

In Nova, all code that runs with escalated permissions is in the nova/privsep directory, which is a pattern I’d like to see repeated in other projects. This is partially because privsep maintains a whitelist of methods that are allowed to be run this way, but it’s also because it makes it very obvious to callers that the code being called is special in some way.

Let’s assume that we’re going to add a simple method which manipulates the filesystem of a hypervisor node as root. We’d write a method like this in a file inside nova/privsep:

import nova.privsep

...

@nova.privsep.sys_admin_pctxt.entrypoint
def update_motd(message):
    with open('/etc/motd', 'w') as f:
        f.write(message)

This method updates /etc/motd, which is the text which is displayed when a user interactively logs into the hypervisor node. “motd” stands for “message of the day” by the way. Here we just pass a new message of the day which clobbers the old value in the file.

The important thing is the entrypoint decorator at the start of the method. That’s how privsep decides to run this method with escalated permissions, and what permissions to use. In Nova at the moment we only have one set of escalated permissions, which we called sys_admin_pctxt because we’re artists. I’ll discuss in a later post how we came to that decision and what the other options were.

We can then call this method from anywhere else in Nova like this:

import nova.privsep.motd

...

nova.privsep.motd.update_motd('This node is currently idle')

Note that we do imports for privsep code slightly differently. We always import the entire path, instead of creating a shortcut to just the module we’re using. In other words, we don’t do:

from nova.privsep import motd

...

motd.update_motd('This node is a banana')

The above code would work, but is frowned on because it is less obvious here that the update_motd() method runs with escalated permissions; you’d have to go and read the imports to tell that.

That’s really all there is to it. The only other thing to mention is that there is a bit of a wart: code with escalated permissions can only use Nova code that is within the privsep directory. That’s been a problem when we’ve wanted to use a utility method from outside that path inside escalated code. The restriction happens for good reasons, so instead what we do in this case is move the utility into the privsep directory and fix up all the other callers to call the new location. It’s not perfect, but it’s what we have for now.

There are some simple review criteria that should be used to assess a patch which implements new code that uses privsep in OpenStack Nova. They are:

• Don’t use imports which create aliases. Use the “import nova.privsep.motd” form instead.
• Keep methods with escalated permissions as simple as possible. Remember that these things are dangerous and should be as easy to understand as possible.
• Calculate paths to manipulate inside the escalated method — so, don’t let someone pass in a full path and the contents to write to that file as root, instead let them pass in the name of the network interface or whatever that you are manipulating and then calculate the path from there. That will make it harder for callers to use your code to clobber random files on the system.

Adding new code with escalated permissions is really easy in Nova now, and much more secure and faster than it was when we only had sudo and root command lines to do these sorts of things. Let me know if you have any questions.

The post How to make a privileged call with oslo privsep appeared first on Made by Mikal.

Or if you don’t trust links in blogs like this (I get it), go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.

Agrawal explains that Twitter normally masks user passwords with an industry-standard hashing function called “bcrypt,” which replaces the user’s password with a random-looking set of numbers and letters that are stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
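Twitter’s scheme is bcrypt; as a rough illustration of the same general idea (a salted, deliberately slow hash so the stored value does not reveal the password), here is a standard-library sketch using PBKDF2 — this is not Twitter’s implementation, just the principle Agrawal describes:

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None, iterations=100_000):
    """Store a salted, slow hash instead of the password itself."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    return salt, digest


def check_password(password, salt, digest, iterations=100_000):
    """Validate credentials without ever storing the plain password."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking timing information.
    return hmac.compare_digest(candidate, digest)
```

The bug Twitter describes amounts to writing `password` to a log before a function like `hash_password` ever ran.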

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal

- Use a strong password that you don’t reuse on other websites.
- Enable login verification, also known as two-factor authentication. This is the single best action you can take to increase your account security.
- Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.

Twitter.com seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, and even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):

What KrebsOnSecurity and other Twitter users got when we tried to visit twitter.com and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.

If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips is here.

Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again at twitter.com the new password worked (and the old one didn’t anymore). Then it prompted me to enter a one-time code from the app (you do have 2-factor set up on Twitter, right?). Password successfully changed!

Planet Debian — Benjamin Mako Hill: Climbing Mount Rainier

Mount Rainier is an enormous glaciated volcano in Washington state. It’s 4,392 meters (14,410 ft) tall and extraordinarily prominent. The mountain is 87 km (54 mi) away from Seattle. On clear days, it dominates the skyline.

Rainier’s presence has shaped the layout and structure of Seattle. Important roads are built to line up with it. The buildings on the University of Washington’s campus, where I work, are laid out to frame it along the central promenade. People in Seattle typically refer to Rainier simply as “the mountain.”  It is common to hear Seattleites ask “is the mountain out?”

Having grown up in Seattle, I have a deep emotional connection to the mountain that’s difficult to explain to people who aren’t from here. I’ve seen Rainier thousands of times and every single time it takes my breath away. Every single day when I bike to work, I stop along UW’s “Rainier Vista” and look back to see if the mountain is out. If it is, I always—even if I’m running late for a meeting—stop for a moment to look at it. When I lived elsewhere and would fly to visit Seattle, seeing Rainier above the clouds from the plane was the moment that I felt like I was home.

Given this connection, I’ve always been interested in climbing Mt. Rainier.  Doing so typically takes at least a couple of days and is difficult. About half of the people who attempt it fail to reach the top. For me, climbing Rainier required an enormous amount of training and gear because, until recently, I had no experience with mountaineering. I’m not particularly interested in climbing mountains in general. I am interested in Rainier.

On Tuesday, Mika and I made our first climbing attempt, and we both successfully made it to the summit. Due to the -15°C (5°F) temperatures and 88 km/h (55 mph) winds at the top, I couldn’t get a picture there. But I feel like I’ve built a deeper connection with an old friend.

Other than the picture from UW campus, photos were all from my climb and taken by (in order): Jennifer Marie, Jonathan Neubauer, Mika Matsuzaki, Jonathan Neubauer, Jonathan Neubauer, Mika Matsuzaki, and Jake Holthaus.

Rondam Ramblings — A quantum mechanics puzzle

Time to take a break from politics and sociology and geek out about quantum mechanics for a while. Consider a riff on a Michelson-style interferometer that looks like this: A source of laser light shines on a half-silvered mirror angled at 45 degrees (the grey rectangle).  This splits the beam in two.  The two beams are in actuality the same color as the original, but I've drawn them in

Planet Debian — Silva Arapi: Digital Born Media Carnival July 2017

As described on its website, Digital Born Media Carnival was a gathering of hundreds of online media representatives, information explorers and digital rights enthusiasts. The event took place on 14–18 July in Kotor, Montenegro. I found out about it when one of the members of Open Labs Hackerspace shared the news on our forum. While I struggled over whether to attend because of a very busy period at work and at university, the whole thing sounded very interesting and intriguing, so I decided to join the group of people who were planning to go and to apply with a workshop session too. No regrets at all! This turned out to be one of the greatest events I’ve attended so far, and it had a great impact on what I decided to do next in my work as a hacktivist and digital rights enthusiast.

The organizers of the Carnival had announced on the website that they were looking for online media representatives, journalists, bloggers, content creators, human rights defenders, hacktivists, new media startups, etc., and as a hacktivist I found myself wanting to join and learn more about some topics that had been intriguing me for a while. I also saw this as an opportunity to meet other people with interests in common with mine.

I applied with a workshop where I would introduce some simple tools that help people better preserve their privacy online. The session was accepted, and I was invited to lead, together with Andrej Petrovski, the sessions of the Digital Security track, located in the sailing club “Lahor”. I held my workshop there on Saturday in the late morning and really enjoyed it. Most of the attendees were journalists or people without a technical background, and they showed a lot of interest, asked me many questions, and shared some stories. I also received very good feedback on the workshop, which gave me good vibes, since this was the first time I had spoken on cyber security at an important event of this kind, as DBMC’17 was.

I spent the other days of the Carnival attending different workshops and talks, meeting new people, discussing with friends and enjoying the sun. We would go to the beach in the afternoon and had some very cool drone photo shoots.

This was great work from the SHARE Foundation, and hopefully there will be other events like it in the near future; I would totally recommend attending! If you are new to the topics discussed there, this is a great way to start. If you have been in the field for a while, this is the place to meet other professionals like you. If you are looking for an event that you can combine with some days of vacation while staying in touch with causes you care about, this would once again be the place to go.

Planet Debian — Daniel Pocock: Turning a dictator's pyramid into a ham radio station

(Update: due to concerns about GoFundMe, I changed the links in this blog post so people can donate directly through PayPal. Anybody who tried to donate through GoFundMe should be receiving a refund.)

I've launched a crowdfunding campaign to help get more equipment for a bigger and better ham radio demo at OSCAL (19-20 May, Tirana). Please donate if you would like to see this go ahead. Just EUR 250 would help buy a nice AGM battery - if 10 people donate EUR 25 each, we can buy one of those.

You can help turn the pyramid of Albania's former communist dictator into a ham radio station for OSCAL 2018 on 19-20 May 2018. This will be a prominent demonstration of ham radio in the city center of Tirana, Albania.

Under the rule of Enver Hoxha, Albanians were isolated from the outside world and used secret antennas to receive banned television transmissions from Italy. Now we have the opportunity to run a ham station and communicate with the whole world from the very pyramid where Hoxha intended to be buried after his death.

Donations will help buy ham and SDR equipment for communities in Albania and Kosovo and assist hams from neighbouring countries to visit the conference. We would like to purchase deep-cycle batteries, 3-stage chargers, 50 ohm coaxial cable, QSL cards, PowerPole connectors, RTL-SDR dongles, up-convertors (Ham-it-up), baluns, egg insulators and portable masts for mounting antennas at OSCAL and future events.

The station is co-ordinated by Daniel Pocock VK3TQR from the Debian Project's ham radio team.

Donations of equipment and volunteers are also very welcome. Please contact Daniel directly if you would like to participate.

Any donations in excess of requirements will be transferred to one or more of the non-profit organizations supporting education and leadership opportunities for young people in the Balkans. Any equipment purchased will also remain in the region for community use.

Please click here to donate if you would like to help this project go ahead. Without your contribution we are not sure that we will have essential items like the deep-cycle batteries we need to run ham radio transmitters.

Reporting plays a key role in the AdSense experience. In fact, two-thirds of our partners consider it the most important feature within AdSense. Helping partners understand and improve their performance metrics is a top priority. Today, we’re announcing updates to how we report AdSense impressions.

The Interactive Advertising Bureau (IAB) and the Media Rating Council (MRC) in partnership with other industry bodies such as the Mobile Marketing Association (MMA), periodically review and update industry standards for impression measurement. They recommend guidelines to standardize how impressions are counted across formats and platforms.

What’s changing?
How will it impact partners?
Switching to counting impressions on download helps improve viewability rates and consistency across measurement providers, and better aligns the impression metric with advertiser value. For example, if an ad failed to download, or if the user closed the tab before it arrived, it will no longer be counted as an impression. As a result, some AdSense users might see decreases in their impression counts, and corresponding improvements in their impression RPM, CTR, and other impression-based metrics. Your earnings, however, should not be impacted.

We will continue to review and update our impression counting technology as new industry standards are defined and implemented.

Posted by:
Andrew Gildfind, Product Manager

Cory Doctorow — Announcing “Petard,” a new science fiction story reading on my podcast

Here’s the first part of my reading (MP3) of Petard, a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Planet Debian — Julien Danjou: A simple filtering syntax tree in Python

Working on various pieces of software over the last few years, I noticed that there's always a feature that requires implementing some kind of DSL.

The problem with DSLs is that they are never the road you want to take. I remember how fascinating creating my first DSL was: after using programming languages for years, I was finally designing my own tiny language!

A new language that my users would have to learn and master. Oh, it had nothing new; it was a subset of something, inspired by my years of C, Perl or Python, who knows. And that's the terrible part about DSLs: they are a marvelous tradeoff between the power they give to users, allowing them to define their needs precisely, and the cumbersomeness of learning a language that is useful in only one specific situation.

In this blog post, I would like to introduce a very unsophisticated way of implementing the syntax tree that could be used as a basis for a DSL. The goal of that syntax tree will be filtering. The problem it will solve is the following: having a piece of data, we want the user to tell us if the data matches their conditions or not.

To give a concrete example: a machine wants to grant the user the ability to filter the beans that it should keep. What the machine passes to the filter is the size of the current bean, and the filter should return either true or false, based on the condition defined by the user: for example, only keep beans that are between 1 and 2 centimeters or between 4 and 6 centimeters.

The number of conditions that users can define could be quite considerable, and we want to provide at least a basic set of predicate operators: equal, greater than, and less than. We also want users to be able to combine those, so we'll add the logical operators or and and.

A set of conditions can be seen as a tree, where the nodes are either predicates, which are leaves and have no children, or logical operators, which do have children. For example, the propositional logic formula φ1 ∨ (φ2 ∨ φ3) can be represented as a tree like this:

Starting with this in mind, it appears that the natural solution is going to be recursive: handle the predicate as terminal, and if the node is a logical operator, recurse over its children.
Since we will be doing Python, we're going to use Python to evaluate our syntax tree.

The simplest way to write a tree in Python is going to be using dictionaries. A dictionary will represent one node and will have only one key and one value: the key will be the name of the operator (equal, greater than, or, and…) and the value will be the argument of this operator if it is a predicate, or a list of children (as dictionaries) if it is a logical operator.

For example, to filter our bean, we would create a tree such as:

{"or": [
    {"and": [
        {"ge": 1},
        {"le": 2},
    ]},
    {"and": [
        {"ge": 4},
        {"le": 6},
    ]},
]}


The goal here is to walk through the tree, evaluating each of its leaves and returning the final result: if we passed 5 to this filter, it would return True, and if we passed 10, it would return False.

Here's how we could implement a very shallow filter that only handles predicates (for now):

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        "eq": operator.eq,
        "gt": operator.gt,
        "ge": operator.ge,
        "lt": operator.lt,
        "le": operator.le,
    }

    def __init__(self, tree):
        # Parse the tree and store the evaluator
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        # Call the evaluator with the value
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            # Pick the first item of the dictionary.
            # If the dictionary has multiple keys/values
            # the first one (= random) will be picked.
            # The key is the operator name (e.g. "eq")
            # and the value is the argument for it
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            # Lookup the operator name
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        # Return a function (lambda) that takes
        # the filtered value as argument and returns
        # the result of the predicate evaluation
        return lambda value: op(value, nodes)


You can use this Filter class by passing a predicate such as {"eq": 4}:

>>> f = Filter({"eq": 4})
>>> f(2)
False
>>> f(4)
True


This Filter class works but is quite limited, as we did not provide the logical operators. Here's a complete implementation that also supports and and or:


import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,

        u"<": operator.lt,
        u"lt": operator.lt,

        u">": operator.gt,
        u"gt": operator.gt,

        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,

        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,

        u"!=": operator.ne,
        u"≠": operator.ne,
        u"ne": operator.ne,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            return lambda value: op(value, nodes)
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda value: op((e(value) for e in elements))


To support the and and or operators, we leverage the all and any built-in Python functions. They are called with a single generator argument that lazily evaluates each of the sub-evaluators, which does the trick.
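That trick can be seen in isolation in a small sketch (the hand-written lambdas below stand in for the compiled sub-evaluators; they are not from the original post):

```python
# Hand-written stand-ins for what build_evaluator would compile
# from the predicates {"≥": 1} and {"≤": 2}.
elements = [lambda value: value >= 1, lambda value: value <= 2]

# all() and any() each take a single iterable argument, which is why
# the generator expression gets its own set of parentheses in
# build_evaluator. Both functions short-circuit: evaluation stops as
# soon as the result is known.
conjunction = all(e(3) for e in elements)   # 3 >= 1 holds, but 3 <= 2 does not
disjunction = any(e(3) for e in elements)
print(conjunction, disjunction)
```

Because the generator is lazy, an and chain stops evaluating sub-predicates at the first False result, and an or chain at the first True.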

Unicode is the new sexy, so I've also added support for Unicode symbols.

And it is now possible to implement our full example:

>>> f = Filter(
...     {"∨": [
...         {"∧": [
...             {"≥": 1},
...             {"≤": 2},
...         ]},
...         {"∧": [
...             {"≥": 4},
...             {"≤": 6},
...         ]},
...     ]})
>>> f(5)
True
>>> f(8)
False
>>> f(1)
True


As an exercise, you could try to add the not operator, which deserves its own category as it is a unary operator!
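One possible solution, sketched here as a condensed, self-contained variant of the class (the name NotFilter and the unary_operators table are inventions for this sketch, not part of the original post): the key difference is that a unary operator takes a single sub-tree as its argument, rather than a list of sub-trees like and/or.

```python
import operator


class InvalidQuery(Exception):
    pass


class NotFilter:
    """Condensed Filter sketch extended with a unary "not" operator."""

    # Trimmed-down operator tables; the full class has many more entries.
    binary_operators = {"eq": operator.eq, "lt": operator.lt, "gt": operator.gt}
    multiple_operators = {"or": any, "and": all}
    unary_operators = {"not": operator.not_, "¬": operator.not_}

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            op_name, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        if op_name in self.unary_operators:
            op = self.unary_operators[op_name]
            # A unary operator wraps exactly one sub-tree, not a list.
            element = self.build_evaluator(nodes)
            return lambda value: op(element(value))
        if op_name in self.multiple_operators:
            op = self.multiple_operators[op_name]
            elements = [self.build_evaluator(node) for node in nodes]
            return lambda value: op(e(value) for e in elements)
        try:
            op = self.binary_operators[op_name]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % op_name)
        return lambda value: op(value, nodes)
```

With that in place, NotFilter({"not": {"eq": 4}}) accepts every value except 4, and "not" can be nested inside "and"/"or" trees like any other predicate.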

In the next blog post, we will see how to improve that filter with more features, and how to implement a domain-specific language on top of it, to make humans happy when writing the filter!

Am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

Cryptogram — LC4: Another Pen-and-Paper Cipher

Interesting symmetric cipher: LC4:

Abstract: ElsieFour (LC4) is a low-tech cipher that can be computed by hand; but unlike many historical ciphers, LC4 is designed to be hard to break. LC4 is intended for encrypted communication between humans only, and therefore it encrypts and decrypts plaintexts and ciphertexts consisting only of the English letters A through Z plus a few other characters. LC4 uses a nonce in addition to the secret key, and requires that different messages use unique nonces. LC4 performs authenticated encryption, and optional header data can be included in the authentication. This paper defines the LC4 encryption and decryption algorithms, analyzes LC4's security, and describes a simple appliance for computing LC4 by hand.

Almost two decades ago I designed Solitaire, a pen-and-paper cipher that uses a deck of playing cards to store the cipher's state. LC4, by contrast, uses specialized tiles. This gives the cipher designer more options, but it can be incriminating in a way that regular playing cards are not.

Still, I like seeing more designs like this.

Worse Than Failure — CodeSOD: The Same Date

Oh, dates. Oh, dates in Java. They’re a bit of a dangerous mess, at least prior to Java 8. That’s why Java 8 introduced a brand new date-time API, and why JodaTime was the gold standard in Java date handling for many years before that.

But it doesn’t really matter what date handling you do if you’re TRWTF. An Anonymous submitter passed along this method, which is meant to set the start and end date of a search range, based on a number of days:

private void setRange(int days){
    DateFormat df = new SimpleDateFormat("yyyy-MM-dd");
    Date d = new Date();
    Calendar c = Calendar.getInstance();
    c.setTime(d);

    Date start =  c.getTime();

    if(days==-1){
        c.add(Calendar.DATE, -1);
        assertThat(c.getTime()).isNotEqualTo(start);
    }
    else if(days==-7){
        c.add(Calendar.DATE, -7);
        assertThat(c.getTime()).isNotEqualTo(start);
    }
    else if (days==-30){
        c.add(Calendar.DATE, -30);
        assertThat(c.getTime()).isNotEqualTo(start);
    }
    else if (days==-365){
        c.add(Calendar.DATE, -365);
        assertThat(c.getTime()).isNotEqualTo(start);
    }

    from = df.format(start).toString()+"T07:00:00.000Z";
    to = df.format(d).toString()+"T07:00:00.000Z";
}

Right off the bat, days only has a handful of valid values: a day, a week, a month(ish) or a year(ish). I’m sure passing it as an int would never cause any confusion. The fact that they don’t quite grasp what variables are for is a nice touch. I’m also quite fond of how they declare a date format at the top, but then also want to append a hard-coded timezone to the format, which again, I’m sure will never cause any confusion or issues. The assertThat calls check that the Calendar.add method does what it’s documented to do, making them pointless and stupid.

But that’s all small stuff. The real magic is that they never actually use the calendar after adding/subtracting dates. They obviously meant to include d = c.getTime() someplace, but forgot. Then, without actually testing the code (they have so many asserts, why would they need to test?) they checked it in. It wasn’t until QA was checking the prerelease build that anyone noticed, “Hey, filtering by dates doesn’t work,” and an investigation revealed that from and to always had the same value.

Planet Debian — Neil Williams: Upgrading the home server rack

My original home server rack is being upgraded to use more ARM machines as the infrastructure of the lab itself. I've also moved house, so there is more room for stuff and kit. This has allowed space for a genuine machine room. I will be using that to host test devices which do not need manual intervention despite repeated testing. (I'll also have the more noisy / brightly illuminated devices in the machine room.) The more complex devices will sit on shelves in the office upstairs. (The work to put the office upstairs was a major undertaking involving my friends Steve and Andy - embedding ethernet cables into the walls of four rooms in the new house. Once that was done, the existing ethernet cable into the kitchen could be fixed (Steve) and then connected to my new Ubiquity AP (a present from Steve and Andy).)

Before I moved house, I found that the wall mounted 9U communications rack was too confined once there were a few devices in use. A lot of test devices now need many cables to each device. (Power, ethernet, serial, second serial and USB OTG, and then add a relay board with its own power and cables onto the DUT....)

Devices like beaglebone-black, cubietruck and other U-Boot devices will go downstairs, albeit in a larger Dell 24U rack purchased from Vince who has moved to a larger rack in his garage. Vince also had a gigabit 16 port switch available which will replace the Netgear GS108 8-port Gigabit Ethernet Unmanaged Switch downstairs.

I am currently still using the same microserver to run various other services around the house (firewall, file server etc.): HP 704941-421 ProLiant Micro Server

I've now repurposed a reconditioned Dell Compact Form Factor desktop box to be my main desktop machine in my office. This was formerly my main development dispatcher and the desktop box was chosen explicitly to get more USB host controllers on the motherboard than is typically available with an x86 server. There have been concerns that this could be causing bottlenecks when running multiple test jobs which all try to transfer several hundred megabytes of files over USB-OTG at the same time.

I've now added a SynQuacer Edge ARM64 Server to run a LAVA dispatcher in the office, controlling several of the more complex devices to test in LAVA - Hikey 620, HiKey 960 and Dragonboard 410c via a Cambrionix PP15s to provide switchable USB support to enable USB network dongles attached to the USB OTG port which is also used for file deployment during test jobs. There have been no signs of USB bottlenecks at this stage.

This arm64 machine then supports running test jobs on the development server used by the LAVA software team as azrael.codehelp. It runs headless from the supplied desktop tower case. I needed to use a PCIe network card from TPlink to get the device operating but this limitation should be fixed with new firmware. (I haven't had time to upgrade the firmware on that machine yet, still got the rest of the office to kit out and the rack to build.) The development server itself is an ARM64 virtual machine, provided by the Linaro developer cloud and is used with a range of other machines to test the LAVA codebase, doing functional testing.

The new dispatcher is working fine, I've not had any issues with running test jobs on some of the most complex devices used in LAVA. I haven't needed to extend the RAM from the initial 4G and the 24 cores are sufficient for the work I've done using the machine so far.

The rack was moved into place yesterday (thanks to Vince & Steve) but the patch panel which Andy carefully wired up is not yet installed and there are cables everywhere, so a photo will have to wait. The plan now is to purchase new UPS batteries and put each of the rack, the office and the ISP modem onto dedicated UPS. The objective is not to keep the lab running in the event of a complete power cut lasting hours, just to survive brown outs and power cuts lasting a minute or two, e.g. when I finally get around to labelling up the RCD downstairs. (The new house was extended a few years before I bought it and the organisation of the circuits is a little unexpected in some parts of the house.)

Once the UPS batteries are in, the microserver, a PDU, the network switch and patch panel, as well as the test devices, will go into the rack in the machine room. I've recently arranged to add a second SynQuacer server into the rack - this time fitted into a 1U server case. (Definite advantage of the new full depth rack over the previous half-depth comms box.) I expect this second SynQuacer to have a range of test devices to complement our existing development staging instance which runs the nightly builds which are available for both amd64 and arm64.

I'll post again once I've got the rest of the rack built and the second SynQuacer installed. The hardest work, by far, has been fitting out the house for the cabling. Setting up the machines, installing and running LAVA has been trivial in comparison. Thanks to Martin Stadler for the two SynQuacer machines and the rest of the team in Linaro Enterprise Group (LEG) for getting this ARM64 hardware into useful roles to support wider development. With the support from Debian for building the arm64 packages, the new machine simply sits on the network and does "TheRightThing" without fuss or intervention. I can concentrate on the test devices and get on with things. The fact that the majority of my infrastructure now runs on ARM64 servers is completely invisible to my development work.

Introduction

I wanted to set up my own apt repository to distribute packages to my own computers. This repository must be PGP-signed, but I want to use my regular PGP key rather than a PGP key stored on the server, because I don’t want to trust my server with root access to my laptop.

Further, I want to be able to add to my repo while offline, rather than dputting .changes files to my server.

The standard tools, mini-dinstall and reprepro, are designed to be executed on the same machine that will serve the apt repository. To satisfy the above, though, I need to be able to execute the repository generator offline, on my laptop.

Two new features of git-annex, git-annex-export and v6 repositories, can allow us to execute the repository generator offline and then copy the contents of the repository to the server in an efficient way.

(v6 repositories are not production-ready but the data in this repo is replaceable: I back up the reprepro config files, and the packages can be regenerated from the (d)git repositories containing the source packages.)

Schematic instructions

This should be enough to get you going if you have some experience with git-annex and reprepro.

In the following, athena is a host I can ssh to. On that host, I assume that Apache is set up to serve /srv/debian as the apt repository, with .htaccess rules to deny access to the conf/ and db/ subdirectories and to enable the following of symlinks.

1. apt-get install git-annex reprepro
2. git init a new git repository on laptop.
3. Create conf/distributions, conf/options, conf/do-sync.sh and .gitattributes per below.
4. Create other files such as README, sample foo.list, etc. if desired.
5. git add the various plain text files we just created and commit.
6. git annex init --version=6.
7. Add an origin remote, git config remote.origin.annex-ignore true and git push -u origin master git-annex. I.e. store repository metadata somewhere.
8. git config --local annex.thin true to save disc space.
9. git config --local annex.addunlocked true so that reprepro can modify files.
10. Tell git-annex about the /srv/debian directory on athena: git annex initremote athena type=rsync rsyncurl=athena:/srv/debian autoenable=true exporttree=yes encryption=none
11. Tell git-annex that the /srv/debian directory on athena should track the contents of the master branch: git annex export --fast --to=athena --tracking master
12. Now you can reprepro include foo.changes, reprepro export and git-annex should do the rest: the do-sync.sh script calls git annex sync, and git-annex knows that it should export the repo to /srv/debian on athena when told to sync.

Files

conf/distributions is an exercise for the reader – this is standard reprepro stuff.

conf/options:

endhook do-sync.sh


conf/do-sync.sh:

#!/bin/sh

git annex sync --content


.gitattributes:

* annex.largefiles=anything
conf/* annex.largefiles=nothing
\.gitattributes annex.largefiles=nothing


These git attributes tell git-annex to annex all files except the plain text config files, which are just added to git.

Bugs

I’m not sure whether these are fixable in git-annex-export, or not. Both can be worked around with hacks/scripts on the server.

• reprepro exportsymlinks won’t work to create suite symlinks: git-annex-export will create plain files instead of symlinks.

• git-annex-export exports non-annexed files in git, such as README, as readable only by their owner.

Planet Debian — Thorsten Glaser: Happy Birthday, GPS Stash Hunt!

GPS Stash Hunt, also commercially known as “Geocaching”, “Terracaching”, or non-commercially (but also nōn-free) as “Opencaching”, is 18 years old today! Time for celebration or something!

,

Planet Debian — Norbert Preining: Docker, cron, mail and logs

If one searches for “docker cron“, there are a lot of hits but not really good solutions or explanations how to get a running system. In particular, what I needed/wanted was (i) stdout/stderr of the cron script is sent to some user, and (ii) that in case of errors the error output also appears in the docker logs. Well, that was not that easy …

There are three components here that need to be tuned:

• getting cron running
• getting mail delivery running
• redirecting some error message to docker

Let us go through the list. The full Dockerfile and support scripts can be found below.

Getting cron running

Many of the lean images do not contain cron, even less run it (this is due to the philosophy of one process per container). So the usual incantations to install cron are necessary:

RUN apt-get install -y cron


After that, one can use as entry point the cron daemon running in foreground:

CMD cron -f


Of course you have to install a crontab file somehow, in my case I did:

ADD crontab /etc/cron.d/mypackage
RUN chmod 0644 /etc/cron.d/mypackage


This works all nice and well, but if there are errors, or problems with the crontab file, you will not see any error message, because (at least on Debian and Ubuntu) cron logs to syslog, but there is no syslog daemon available to show you the output, and cron has no option to log to a file (how stupid!). Furthermore, other cron daemon options (bcron, cronie) are not available in Debian/stable, for example.

In my case the crontab file had a syntax error, and thus no actual program was ever run.

Getting mail delivery running

Assuming you have these hurdles settled, one would like to get the output of the cron scripts mailed. For this, a sendmail compliant program needs to be available. One could set up a full blown system (exim, postfix), but this is overkill as one only wants to send out messages. I opted for ssmtp, which is a single program with straightforward configuration and operation, and it provides a sendmail program.

RUN apt-get install -y ssmtp
RUN chown root.mail /etc/ssmtp/ssmtp.conf
RUN chmod 0640 /etc/ssmtp/ssmtp.conf


The configuration file can be rather minimal, here is an example:

Root=destuser@destination
mailhub=mail-server-name-ip


Of course, the mail-server-name-ip must accept emails for destuser@destination. There are several more options for SSL support, rewriting of domains, etc.; see for example here.

Having this in place, cron will now duly send emails with the output of the cron jobs.

Redirecting some error message to docker

This leaves us with the last task, namely getting error messages into the docker logs. Since cron captures the stdout and stderr of the cron jobs for mail sending, one can either redirect these outputs to docker (but then one will not get emails) or wrap the cron jobs up. I used the following wrapper to output a warning to the docker logs:

#!/bin/bash
#
if [ -z "$1" ] ; then
  echo "need name of cron job as first argument" >&2
  exit 1
fi
if [ ! -x "$1" ] ; then
  echo "cron job file $1 not executable, exiting" >&2
  exit 1
fi
if "$1"
then
  exit 0
else
  echo "cron job $1 failed!" 2>/proc/1/fd/2 >&2
  exit 1
fi


together with entries in the crontab like:

m h d m w root /app/run-cronjob /app/your-script


The magic trick here is the 2>/proc/1/fd/2 >&2, which first redirects the stderr to the stderr of the process with the id 1, which is the entry point of the container and watched by docker, and then echoes the message to stderr. One could also redirect stdout in the same way to /proc/1/fd/1 if necessary or preferred.

The above, combined, gives a nice combination of emails with the output of the cron jobs, as well as entries in the docker logs if something broke and, for example, no output was created.

Let us finish with a minimal Dockerfile doing these kinds of things:

from debian:stretch-slim

RUN apt-get -y update
RUN apt-get install -y cron ssmtp

ADD . /app

ADD crontab /etc/cron.d/mypackage
RUN chmod 0644 /etc/cron.d/mypackage

ADD ssmtp.conf /etc/ssmtp/ssmtp.conf
RUN chown root.mail /etc/ssmtp/ssmtp.conf
RUN chmod 0640 /etc/ssmtp/ssmtp.conf

CMD cron -f


Planet Debian — Bits from Debian: New Debian Developers and Maintainers (March and April 2018)

The following contributors got their Debian Developer accounts in the last two months:

• Andreas Boll (aboll)
• Dominik George (natureshadow)
• Julien Puydt (jpuydt)
• Sergio Durigan Junior (sergiodj)
• Robie Basak (rbasak)
• Elena Grandi (valhalla)
• Peter Pentchev (roam)
• Samuel Henrique (samueloph)

The following contributors were added as Debian Maintainers in the last two months:

• Andy Li
• Alexandre Rossi
• David Mohammed
• Tim Lunn
• Rebecca Natalie Palmer
• Andrea Bolognani
• Toke Høiland-Jørgensen
• Gabriel F. T. Gomes
• Bjorn Anders Dolk
• Geoffroy Youri Berret
• Dmitry Eremin-Solenikov

Congratulations!

Krebs on Security — When Your Employees Post Passwords Online

Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello.com.
Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity were an insurance firm, a state government agency and ride-hailing service Uber.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view). But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.

A personal Trello board created by an Uber employee included passwords that might have exposed sensitive internal company operations.

KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.

Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others. Ensign said Uber found the unauthorized Trello board exposed information related to two users in South America who have since been notified.

“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge. We may have dodged a bullet here, and it definitely could have been worse.”

Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).

The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”

Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.

Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.

Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.

“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”

If a board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Interestingly, updates made to Trello’s privacy policy over the past weekend may make it easier for companies to locate personal boards created by employees and pull them behind company resources.

A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.

Uber spokesperson Ensign called the changes welcome.

“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.

Rondam Ramblings — This is inspiring

The Washington Post reports that:

Two African American men arrested at a Philadelphia Starbucks last month have reached a settlement with the city and secured its commitment to a pilot program for young entrepreneurs.

Rashon Nelson and Donte Robinson chose not to pursue a lawsuit against the city, Mike Dunn, a spokesman from the Mayor’s Office, told The Washington Post. Instead, they agreed to

TED — Calling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. As a bonus during the Audacious Project session, we watched an astonishing performance of “New Second Line” from Camille A. Brown and Dancers. From left: The Bail Project’s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institute; Caroline Harper of Sight Savers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from the Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers; pianist Scott Patterson; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage). Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution. That can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above that screened the next day at Skoll. And who knows? Perhaps you’ll be a part of the program in 2019.

Cryptogram — NIST Issues Call for "Lightweight Cryptography" Algorithms

This is interesting:

Creating these defenses is the goal of NIST's lightweight cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device. Many of the sensors, actuators and other micromachines that will function as eyes, ears and hands in IoT networks will work on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Similar small electronics exist in the keyless entry fobs to newer-model cars and the Radio Frequency Identification (RFID) tags used to locate boxes in vast warehouses. All of these gadgets are inexpensive to make and will fit nearly anywhere, but common encryption methods may demand more electronic resources than they possess.

The NSA's SIMON and SPECK would certainly qualify.

Worse Than Failure — CodeSOD: A Password Generator

Every programming language has a *bias* which informs its solutions. Object-oriented languages are biased towards objects, and all the things which follow on. Clojure is all about function application. Haskell is all about type algebra. Ruby is all about monkey-patching existing objects. In any language, these things can be taken too far. Java's infamous Spring framework leaps to mind. Perl, being biased towards regular expressions, has earned its reputation as being "write only" thanks to regex abuse.

Gert sent us along some Perl code, and I was expecting to see regexes taken too far. To my shock, there weren't any regexes.

Gert's co-worker needed to generate a random 6-digit PIN for a voicemail system. It didn't need to be cryptographically secure; repeats and zeros are allowed (they exist on a keypad, after all!). The Perl approach for doing this would normally be something like:

sub randomPIN {
    return sprintf("%06u", int(rand(1000000)));
}

Gert's co-worker had a different plan in mind, though.

sub randomPIN {
    my $password;
    my @num = (1..9);
    my @char = ('@','#','$','%','^','&','*','(',')');
    my @alph = ('a'..'z');
    my @alph_up = ('A'..'Z');

    my $rand_num1 = $num[int rand @num];
    my $rand_num2 = $num[int rand @num];
    my $rand_num3 = $num[int rand @num];
    my $rand_num4 = $num[int rand @num];
    my $rand_num5 = $num[int rand @num];
    my $rand_num6 = $num[int rand @num];

    $password = "$rand_num1"."$rand_num2"."$rand_num3"."$rand_num4"."$rand_num5"."$rand_num6";

    return password;
}

This code starts by creating a set of arrays, @num, @char, etc. The only one that matters is @num, though, since this generates a PIN to be entered on a phone keypad: touchtone signals are numeric, and there is no "(" key on a telephone keypad. Obviously, the developer copied this code from a random password function somewhere, which is its own special kind of awful.

Now, what's fascinating is that they initialize @num with the numbers 1 through 9, and then use the rand function to generate a random number from 0 through 8, so that they can select an item from the array. So they understood how the rand function worked, but couldn't make the leap to eliminate the array with something like rand(9).

For now, replacing this function is simply on Gert's todo list.

Planet Debian — Norbert Preining: TeX Live 2018 released

I guess everyone knows it already: Karl has released TeX Live 2018 officially, just when I was off for some mountaineering, but perfectly in time for the current BachoTeX meeting. The DVDs are already being burnt and will soon be sent to the various TeX User groups who ordered. The .iso image is available on CTAN, and the net installer will pull all the newest stuff. Currently Karl is working on getting those packages updated during the freeze to the newest level in TeX Live.

In addition to the changes I mentioned in the post about TL2018 in Debian, the news taken from the TeX Live documentation:

• MacTEX: See version support changes below.
In addition, the files installed in /Applications/TeX/ by MacTEX have been reorganized for greater clarity; now this location contains four GUI programs (BibDesk, LaTeXiT, TeX Live Utility, and TeXShop) at the top level and folders with additional utilities and documentation.

• tlmgr: new front-ends tlshell (Tcl/Tk) and tlcockpit (Java); JSON output; uninstall now a synonym for remove; new action/option print-platform-info.

• Platforms:

  • New: x86_64-linuxmusl and aarch64-linux. Removed: armel-linux, powerpc-linux.

  • x86_64-darwin supports 10.10–10.13 (Yosemite, El Capitan, Sierra, and High Sierra).

  • x86_64-darwinlegacy supports 10.6–10.10 (though x86_64-darwin is preferred for 10.10). All support for 10.5 (Leopard) is gone, that is, both the powerpc-darwin and i386-darwin platforms have been removed.

• Windows: XP is no longer supported.

That’s all, let the fun begin!

Planet Debian — Sean Whitton: Debian Policy call for participation -- May 2018

We had a release of Debian Policy near the beginning of last month but there hasn’t been much activity since then. Please consider writing or reviewing patches for some of these bugs.

Consensus has been reached and help is needed to write a patch

#273093 document interactions of multiple clashing package diversions
#314808 Web applications should use /usr/share/package, not /usr/share/doc/…
#425523 Describe error unwind when unpacking a package fails
#452393 Clarify difference between required and important priorities
#556015 Clarify requirements for linked doc directories
#578597 Recommend usage of dpkg-buildflags to initialize CFLAGS and al.
• #582109 document triggers where appropriate
• #685506 copyright-format: new Files-Excluded field
• #757760 please document build profiles
• #759316 Document the use of /etc/default for cron jobs
• #761219 document versioned Provides

Wording proposed, awaiting review from anyone and/or seconds by DDs:
• #582109 document triggers where appropriate
• #737796 copyright-format: support Files: paragraph with both abbreviated na…
• #756835 Extension of the syntax of the Packages-List field.
• #786470 [copyright-format] Add an optional "License-Grant" field
• #835451 Building as root should be discouraged
• #846970 Proposal for a Build-Indep-Architecture: control file field
• #864615 please update version of posix standard for scripts (section 10.4)
• #897217 Vcs-Hg should support -b too

Merged for the next release:
• #896749 footnote of 3.3 lists deprecated alioth mailinglist URL

Planet Debian — Ben Hutchings: Debian LTS work, April 2018

I was assigned 15 hours of work by Freexian's Debian LTS initiative and carried over 2 hours from March. I worked all 17 hours. In support of the "retpoline" mitigation for Spectre variant 2, I added a backport of gcc-4.9 to wheezy (as gcc-4.9-backport), based on work by Roberto Sánchez and on the existing gcc-4.8 backport (gcc-mozilla). I also updated the linux-tools package to support building external modules with retpolines enabled. Finally, I completed an update to the linux package, but delayed uploading it until 1st May due to an embargoed issue.

Planet Debian — Reproducible builds folks: Reproducible Builds: Weekly report #157

This week's report represents the three-year anniversary of the Reproducible Builds project reporting on its activities. We would like to thank all those who have contributed over the years, in particular thanking Jérémy Bobbio for starting this.
Here's what happened in the Reproducible Builds effort between Sunday April 22 and Saturday April 28 2018:

Packages reviewed and fixed, and bugs filed: 139 package categorisations were added, 71 were updated and 38 were removed this week. In addition, build failure bugs were reported by Adrian Bunk (111), Hilmar Preuße (1), Niko Tyni (1) & Sebastian Ramacher (4).

Misc. This week's edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian — Ingo Juergensmann: #DeleteFacebook and alternative Social Networks

Some weeks ago a more or less large scandal popped up in the media: Cambridge Analytica misused data from Facebook. Many users of Facebook were unhappy about the abuse of their personal data and deleted their Facebook accounts, or at least tried some alternatives like Friendica, Hubzilla, Diaspora, Mastodon and others. There has been a significant increase in user count since then, and this gave a general boost for the networks. Apropos networks... basically there are two large networks: The Federation (Diaspora, Socialhome) and The Fediverse (GNU Social, Mastodon, postActiv, Pleroma). Within each of the two networks all solutions can exchange information like posts, comments and user information; in other words, they federate with each other. So when you use Mastodon your posts won't be available to Diaspora users and vice versa, as the two networks use different protocols for federation. And here Friendica and Hubzilla have their advantage: both are able to federate with both networks. Sean Tilley has some more information in his article "A quick guide to The Free Network" on medium.com. Another great resource you can use to find out more about alternatives to Facebook is the great new Fediverse Wiki. From my point of view I would recommend either Friendica or Hubzilla, depending on what you want:

• Friendica is in my opinion the best solution to have the best of both worlds, i.e.
the Fediverse and the Federation. It has active developers and a good and helpful community. It concentrates on the social network topic.
• Hubzilla is a more complete approach: you can add addons to have a cloud space or a wiki, or to create web pages.

Both offer the ability to create multiple profiles with one account (Friendica: profiles, Hubzilla: channels), and of course you have fine-grained control over your content. There is also a fresh Youtube channel describing some Friendica features. Although it is in German, others might get some helpful hints as well from those videos. Which alternative will be the best for you is up to you to decide. All alternatives have their pros and cons. If you don't already have a website, cloud space or such, Hubzilla might be the best choice for you. If you don't need such additional functions, you might be best suited with Friendica. If you like to install docker images, Mastodon will make you happy. In the end, it's all about choice! You will have better control over your own data in all cases. You can run your own instance, make it private only, or you can join one of the available servers and try out what suits you best. For Friendica you can find some public servers on https://dir.friendica.social. I'm running a Friendica node on https://nerdica.net/ as well as a Hubzilla hub on https://silverhaze.eu/ - feel invited to try both and register on either one to have a look at some alternatives to Facebook. After all: decentralize and spread the word! Use the alternatives you have and don't sell your privacy to the big players like Facebook and Google if you don't need to. :-)

PS: If you are already on one of those alternative networks, please feel free to connect with me on my Friendica node or Hubzilla hub!
Planet Debian — Bits from Debian: New Debian Developers and Maintainers (April and May 2018)

The following contributors got their Debian Developer accounts in the last two months:
• Andreas Boll (aboll)
• Dominik George (natureshadow)
• Julien Puydt (jpuydt)
• Sergio Durigan Junior (sergiodj)
• Robie Basak (rbasak)
• Elena Grandi (valhalla)
• Peter Pentchev (roam)
• Samuel Henrique (samueloph)

The following contributors were added as Debian Maintainers in the last two months:
• Andy Li
• Alexandre Rossi
• David Mohammed
• Tim Lunn
• Rebecca Natalie Palmer
• Andrea Bolognani
• Toke Høiland-Jørgensen
• Gabriel F. T. Gomes
• Bjorn Anders Dolk
• Geoffroy Youri Berret
• Dmitry Eremin-Solenikov

Congratulations!

Planet Debian — Jeremy Bicha: Congratulations Ubuntu and Fedora

Congratulations to Ubuntu and Fedora on their latest releases. This Fedora 28 release is special because it is believed to be the first release in Fedora's long history to ship exactly when it was originally scheduled. The Ubuntu 18.04 LTS release is the biggest release for the Ubuntu Desktop in 5 years as it returns to a lightly customized GNOME desktop. For reference, the biggest changes from vanilla GNOME are the custom Ambiance theme and the inclusion of the popular AppIndicator and Dock extensions (the Dock extension being a simplified version of the famous Dash to Dock). Maybe someday I could do a post about the smaller changes. I think one of the more interesting occurrences for fans of Linux desktops is that these releases of two of the biggest Linux distributions occurred within days of each other. I expect this alignment to continue (although maybe not quite as dramatically as this time) since the Fedora and Ubuntu beta releases will happen at similar times and I expect Fedora won't slip far from its intended release dates again.
Cryptogram — IoT Inspector Tool from Princeton

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They've already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties. In many cases, consumers expect that their devices contact manufacturers' servers, but communication with other third-party destinations may not be a behavior that consumers expect. We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:
• Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.
• Amcrest WiFi Security Camera. The camera actively communicates with cellphonepush.quickddns.com using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest's website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.
• Halo Smoke Detector. The smart smoke detector communicates with broker.xively.com. Xively offers an MQTT service, which allows manufacturers to communicate with their devices.
• Geeni Light Bulb. The Geeni smart bulb communicates with gw.tuyaus.com, which is operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.
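To illustrate the kind of first-party/third-party classification such traffic analysis involves, here is a minimal Python sketch. The hostnames and the naive suffix matching are invented for the example; a real tool like IoT Inspector would work from captured DNS/SNI data and consult the Public Suffix List rather than match bare suffixes.

```python
# Hypothetical sketch: classify observed destination hosts as first- or
# third-party relative to a device vendor's known domains. Hostnames and
# the naive suffix matching are invented for illustration only.

def is_first_party(host, vendor_domains):
    """Return True if host belongs to one of the vendor's domains."""
    return any(host == d or host.endswith("." + d) for d in vendor_domains)

def third_party_hosts(observed_hosts, vendor_domains):
    """Filter observed destinations down to the third-party ones."""
    return sorted(h for h in set(observed_hosts)
                  if not is_first_party(h, vendor_domains))

observed = [
    "api.samsung.com",        # first-party for a (hypothetical) Samsung device
    "play.googleapis.com",    # third parties observed during power-on
    "ads.doubleclick.net",
    "api.netflix.com",
]
print(third_party_hosts(observed, {"samsung.com"}))
```

Applied to real captures, the same filtering step is what surfaces the unexpected destinations the researchers list above.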
Their first two findings are that "Many IoT devices lack basic encryption and authentication" and that "User behavior can be inferred from encrypted IoT device traffic." No surprises there. Boingboing post. Related: IoT Hall of Shame.

Worse Than Failure — An Obvious Requirement

Requirements. That magical set of instructions that tells you specifically what you need to build and test. Users can't be bothered to write them, and even if they could, they have no idea how to tell you what they want. It doesn't help that many developers are incapable of following instructions, since requirements rarely exist, and when they do, they usually aren't worth the coffee-stained napkin upon which they're scribbled. That said, we try our best to build what we think our users need. We attempt to make it fairly straightforward to use what we build. The button marked Reports most likely leads to something to do with generating/reading/whatever-ing reports. Of course, sometimes a particular feature is buried several layers deep and requires multiple levels of ribbons, menus, sub-menus, dialogs, sub-dialogs and tabs before you find the checkbox you want. Since we developers as a group are, by nature, somewhat anal retentive, we try to keep related features grouped so that you can generally guess what path to take to find something. And we often supply a Help feature to tell you how to find it when you can't. Of course, some people simply cannot figure out how to use the software we build, no matter how sensibly it's laid out and organized, or how many hints and help features we provide. And there is nothing in the history of computer science that addresses how to change this. Nothing! Dimitri C. had a user who wanted a screen that performed several actions. The user provided requirements in the form of a printout of a similar dialog he had used in a previous application, along with a list of changes/colors/etc.
They also provided some "helpful" suggestions, along the lines of, "It should be totally different, but exactly the same as the current application." Dimitri took pains to organize the actions and information in appropriate hierarchical groups. He laid out appropriate controls in a sensible way on the screen. He provided a tooltip for each control and a Help button. Shortly after delivery, a user called to complain that he couldn't find a particular feature. Dimitri asked "Have you tried using the Help button?" The user said that "I can't be bothered to read the instructions in the help tool because accessing this function should be obvious". Dimitri asked him "Have you looked on the screen for a control with the relevant name?" The user complained that "There are too many controls, and this function should be obvious". Dimitri asked "Did you try to hover your mouse over the controls to read the tooltips?" The user complained that "I don't have the time to do that because it would take too long!" (yet he had the time to complain). Frustrated, Dimitri replied "To make that more obvious, should I make these things less obvious?". The user complained that "Everything should be obvious". Dimitri asked how that could possibly be done, to which the user replied "I don't know, that's your job". When he realized that this user had no clue how to ask for what he wanted, he asked how this feature worked in previous programs, to which the user replied "I clicked this, then this, then this, then this, then this, then restarted the program". Dimitri responded that "That's six steps instead of the two in my program, and that would require you to reenter some of the data". The user responded "Yes, but it's obvious". So is the need to introduce that type of user to the business end of a clue-bat. [Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today! 
Planet Debian — Daniel Leidert: Re-enabling right click functionality for my Thinkpad touchpad

I have a Lenovo Thinkpad Yoga 11e running Debian Sid. The touchpad has a left and a right click area at the bottom. For some reason, the right click ability recently stopped working. I have not yet found the reason, but I was able to fix it by adding the emphasized lines in /usr/share/X11/xorg.conf.d/70-synaptics.conf:

Section "InputClass"
    Identifier "Default clickpad buttons"
    MatchDriver "synaptics"
    Option "ClickPad" "true"
    Option "EmulateMidButtonTime" "0"
    Option "SoftButtonAreas" "50% 0 82% 0 0 0 0 0"
    Option "SecondarySoftButtonAreas" "58% 0 0 15% 42% 58% 0 15%"
EndSection

Edit: Nope. Stopped working again and both bottom areas act as left click. As it is working as a left click, I guess the touchpad is physically ok. So I have no idea what's going on :(

Edit 2: Thanks to Cole Robinson and the link he provided, I found the reason and a fix. GNOME 3.28 uses the clickfinger behaviour by default now. By setting the click method from the 'fingers' (clickfinger) method back to 'areas', either using gsettings or gnome-tweaks, the right click ability is back after rebooting the system.

PS: With the new default clickfinger method, the right and middle click are emulated using a two- and three-finger tap.

Planet Debian — Russ Allbery: Review: Vallista

Review: Vallista, by Steven Brust
Series: Vlad Taltos #15
Publisher: Tor
Copyright: October 2017
ISBN: 1-4299-4699-7
Format: Kindle
Pages: 334

This is the fifteenth book in the Vlad Taltos series and, following the pattern, goes back to fill in a story from earlier in the series. This time, though, it doesn't go back far: Vallista takes place immediately before Hawk (at least according to the always-helpful Lyorn Records; it was not immediately obvious to me since it had been several years since I read Hawk).
That means we have to wait at least one more book before Vlad is (hopefully) more free to act, but we get a bit more world-building and a few more clues about the broader arc of this series. As is hopefully obvious, this is not at all the place to start with this series. Vallista opens with Devera finding Vlad and asking him for help. Readers of the series will recognize Devera as a regular and mysterious feature, but this is one of the most active roles she's played in a story. Following her, Vlad finds himself at a mysterious seaside house that he's sure wasn't there the last time he went by that area. When he steps inside, Devera vanishes and the door locks behind him. The rest of the book is Vlad working out the mystery of what this house is, why it was constructed, and the nature of the people who occupy it. This is explicitly an homage to Gothic romances. The dead daughter Vlad encounters isn't exactly a ghost, but she's close, and there's a locked-up monster, family secrets, star-crossed lovers, and ulterior motives everywhere. There's also a great deal of bizarre geometry, since this book is as detailed an exploration of necromancy as we've gotten in the series to date. Like many words in Dragaera, necromancy doesn't mean what one expects from the normal English definition, although there's a tricky similarity. In this world it's more about planes of existence than death in particular, and since one of those planes of existence for Dragaerans is the Paths of the Dead and their strange connections across time and space, necromancy is also the magic of spatial and temporal connections. The mansion Vlad has to find his way out of is a creation of necromancy, as becomes clear early in the book, and there is death involved, but there are also a lot of mirrors, discussion of dimensional linkage and shifts, and as detailed an explanation as we've gotten yet of Devera's unique abilities.
Vlad seems less devious in his attempts to solve mysteries than he is in his heist setups. A lot of Vallista involves Vlad wandering around, asking questions, complaining about his head hurting, and threatening people until he has enough information to understand the nature of the house. Perhaps a careful reader armed with a good memory of series details would be able to guess the mystery before Vlad lays it out for the reader. I'm not that reader and spent most of the book worrying that I was missing things I was supposed to be following. Thankfully, Brust isn't too coy about the ending. Vlad lays out most of the details in the final chapter for those who weren't following the specifics, although I admit I visited the Lyorn Records wiki afterwards to pick up more of the details. The story apart from the mystery is a very typical iteration of Vlad being snarky, kind-hearted, slightly impatient, and intent on minding his own business except when other people force him to get involved in theirs. As is typical for the series entries that go back and fill in side stories, we don't get a lot of advancement of the main storyline. There is an intriguing scene in the Paths of the Dead with Vlad's memories, and a conversation between Vlad and Verra that provides one of the clearest indications of the overall arc of the series yet, but most of the story is concerned only with the puzzle of this mansion and its builder. I found that enjoyable but not exceptional. If you like Vlad (and if you're still reading this series, I assume you do), this is more of Vlad doing Vlad things, but I doubt it will stand out as anyone's favorite book in the series. But the series remains satisfying and worth reading even fifteen books in, which is a significant accomplishment. I eagerly await the next book, which will hopefully be the direct sequel to Hawk and an advancement of the main plot. Followed by (rumored, not yet confirmed) Tsalmoth. 
Rating: 7 out of 10

Planet Debian — Joachim Breitner: Avoid the dilemma of the trailing comma

The Haskell syntax uses comma-separated lists in various places and does, in contrast to other programming languages, not allow a trailing comma. If everything goes on one line you write

(foo, bar, baz)

and everything is nice.

Lining up

But if you want to have one entry on each line, then the obvious plan

(foo,
 bar,
 baz
)

is aesthetically unpleasing and moreover, extending the list by one to

(foo,
 bar,
 baz,
 quux
)

modifies two lines, which produces less pretty diffs. Because it is much more common to append to lists than to prepend, Haskellers have developed the idiom of the leading comma:

( foo
, bar
, baz
, quux
)

which looks strange until you are used to it, but solves the problem of appending to a list. And we see this idiom in many places:

• In Cabal files:

build-depends: base >= 4.3 && < 5
             , array
             , deepseq >= 1.2 && < 1.5

• In module headers:

{-# LANGUAGE DefaultSignatures
           , EmptyCase
           , ExistentialQuantification
           , FlexibleContexts
           , FlexibleInstances
           , GADTs
           , InstanceSigs
           , KindSignatures
           , RankNTypes
           , ScopedTypeVariables
           , TemplateHaskell
           , TypeFamilies
           , TypeInType
           , TypeOperators
           , UndecidableInstances
           #-}

Think outside the list!

I started to avoid this pattern where possible. And it is possible everywhere: instead of having one declaration with a list, you can just have multiple declarations.
I.e.:

• In Cabal files:

build-depends: base >= 4.3 && < 5
build-depends: array
build-depends: deepseq >= 1.2 && < 1.5

• In module headers:

{-# LANGUAGE DefaultSignatures #-}
{-# LANGUAGE EmptyCase #-}
{-# LANGUAGE ExistentialQuantification #-}
{-# LANGUAGE FlexibleContexts #-}
{-# LANGUAGE FlexibleInstances #-}
{-# LANGUAGE GADTs #-}
{-# LANGUAGE InstanceSigs #-}
{-# LANGUAGE KindSignatures #-}
{-# LANGUAGE RankNTypes #-}
{-# LANGUAGE ScopedTypeVariables #-}
{-# LANGUAGE TemplateHaskell #-}
{-# LANGUAGE TypeFamilies #-}
{-# LANGUAGE TypeInType #-}
{-# LANGUAGE TypeOperators #-}
{-# LANGUAGE UndecidableInstances #-}

It is a bit heavier, but it has a number of advantages:
1. Both appending and prepending work without touching other lines.
2. It is visually more homogeneous, making it – despite the extra words – easier to spot mistakes visually.
3. You can easily sort the declarations alphabetically with your editor.
4. Especially in Cabal files: If you have a syntax error in your dependency specification (which I always have, writing << instead of < due to my Debian background), cabal will actually give you a helpful error location – cabal only ever tells you which build-depends stanza was wrong, so if you have only one, that's not helpful.

What when it does not work?

Unfortunately, not every list in Haskell can have that treatment, and that's why the recent GHC proposal on ExtraCommas wants to lift the restriction. In particular, it wants to allow trailing commas in subexport lists:

module Foo
  ( Foo(
      A,
      B,
    ),
    fromFoo,
  )

(Weirdly, export lists already allow trailing commas). An alternative here might be to write

module Foo
  ( Foo(A),
    Foo(B),
    fromFoo,
  )

and teach the compiler to not warn about the duplicate export of the Foo type.
For plain lists, this idiom can be useful:

list :: [Int]
list = let (>>) = (++) in do
  [ 1 ]
  [ 2 ]
  [ 3 ]
  [ 4 ]

It requires RebindableSyntax, so I do not recommend it for regular code, but it can be useful in a module that is dedicated to hold some generated data or configuration. And of course it works with any binary operator, not just (++).

Planet Debian — Paul Wise: FLOSS Activities April 2018

Changes, Issues, Review, Administration:
• whowatch: release, contact downstreams
• Debian: redirect support request, investigate GDPR issues, investigate buildd kernel issue
• Debian wiki: investigate signup errors, whitelist email addresses, whitelist email domain

Sponsors: The purple-discord work, the sysstat backport and the libipc-run-perl patch backports were sponsored by my employer. All other work was done on a volunteer basis.

Cory Doctorow — Boston, Chicago and Waterloo, I'm heading your way!

This Wednesday at 11:45am, I'll be giving the IDE Lunch Seminar at MIT's Sloan School of Management, 100 Main Street. From there, I head to Chicago to keynote Thotcon on Friday at 11am. My final stop on this trip is Waterloo's Perimeter Institute for Theoretical Physics, May 9 at 2PM. I hope to see you! I've got plenty more appearances planned this season, including Santa Fe, Phoenix, San Jose, Boston, San Diego and Pasadena!

Planet Debian — Norbert Preining: Analysing Debian packages with Neo4j – Part 3

Getting data from UDD into Neo4j. In the first part and the second part of this series of articles on analyzing Debian packages with Neo4j we gave a short introduction to Debian and the lifetime and structure of Debian packages, and developed the database schema, that is, the set of nodes and relations and their attributes used in the representation of Debian packages. The current third (and for now last) part deals with the technical process of actually pulling the information from the Ultimate Debian Database UDD and getting it into Neo4j.
We will close with a few sample queries and discuss possible future work. The process of pulling the data and entering it into Neo4j consisted of three steps:
• Dumping the relevant information from the UDD,
• Parsing the output of the previous step and building up a list of nodes and relations,
• Creating a Neo4j database from the set of nodes and relations.

Pulling data from the Ultimate Debian Database UDD

The UDD has a public mirror at https://udd-mirror.debian.net/ which is accessible via a PostgreSQL client. For the current status we only dumped information from two tables, namely the tables sources and packages, from which we dumped the relevant information discussed in the previous blog post. The complete SQL query for the sources table was

SELECT source, version, maintainer_email, maintainer_name, release, uploaders, bin, architecture, build_depends, build_depends_indep, build_conflicts, build_conflicts_indep from sources;

while the one for the packages table was

SELECT package, version, maintainer_email, maintainer_name, release, description, depends, recommends, suggests, conflicts, breaks, provides, replaces, pre_depends, enhances from packages;

We first tried to use the command line client psql, but due to the limited output format options of the client we decided to switch to a Perl script that uses the database access modules, and then dumped the output immediately into a Perl-readable format for the next step. The complete script can be accessed at the Github page of the project: pull-udd.pl.

Generating the list of nodes and relations

The complicated part of the process lies in the conversion from database tables to a graph with nodes and relations. We developed a Perl script that reads in the two tables dumped in the first step, and generates for each node and relation type a CSV file consisting of a unique ID and the attributes listed at the end of the last blog post.
The Perl script generate-graph operates in two steps: first it reads in the dump files and creates a unique structure (a hash of hashes) that collects all the information necessary. This first step is necessary due to the heavy duplication of information in the dumps (maintainers, packages, etc. all appear several times but need to be merged into a single entity). We also generate for each node entity (each source package, binary package, etc.) a unique id (UUID) so that we can later reference these nodes when building up the relations. The final step consists of computing the relations from the parsed data, creating additional nodes on the way (in particular for alternative dependencies), and writing out all the CSV files.

Complications encountered

As expected, there were a lot of steps from the first version to the current working version, in particular due to the following reasons (to name a few):
• Maintainers are identified by email, but they sometimes use different names
• Ordering of version numbers is non-trivial due to binNMU uploads and other version string tricks
• Different versions of packages in different architectures
• udeb (installer) packages

Creating a Neo4j Database

The last step was creating a Neo4j database from the CSV files of nodes and relations. Our first approach was not via CSV files but using Cypher statements to generate nodes and relationships. This turned out to be completely infeasible, since each Cypher statement requires updates in the database. I estimated the time for complete (automated) data entry at several weeks. Neo4j recommends using the neo4j-import tool, which creates a new Neo4j database from data in CSV files. The required format is rather simple: one CSV file for each node and relation type, containing a unique id plus other attributes in columns. The CSV files for relations then link the unique ids, and can also add further attributes.
To give an example, let us look at the head of the CSV for source packages (sp), which besides the name has no further attributes:

uuid:ID,name
id0005a566e2cc46f295636dee7d504e82,libtext-ngrams-perl
id00067d4a790c429b9428b565b6bddae2,yocto-reader
id000876d57c85440e899cb93db27c835e,python-mhash

We see the unique id, which is tagged with :ID (see the neo4j-import tool manual for details), and the name of the source package. The CSV files defining relations all look similar:

:START_ID,:END_ID
id000072be5fd749328d0ec4c0ecc875f9,ide234044ae378493ab0af0151f775b8fe
id000082711b4b4076922b5982d09b611b,ida404df8388a149479b130d6692c60f5e
id000082711b4b4076922b5982d09b611b,idb5c2195d5b8f42bfbb9baa5fad6a066e

That is a list of start and end ids. In the case of additional attributes, like for the depends relation, we have

:START_ID,reltype,relversion,:END_ID
id00001319368f4e32993d49a8b1e61673,none,1,idcd55489608944012a02eadde55cbfa9e
id0000143632404ad386e4564b3917a27c,>=,0.8-0,id156130a5c32a47d3918d3a4f4faff16f
id0000143632404ad386e4564b3917a27c,>=,2.14.2-3,id1acca938752543efa4de87c60ff7b279

After having prepared these CSV files, a simple call to neo4j-import generates the Neo4j database in a new directory. Since we changed the set of nodes and relations several times, we named the generated files node-XXX.csv and edge-XXX.csv and generated the neo4j-import call automatically; see the build-db script. This call takes, in contrast to the execution of the Cypher statements, a few seconds (!) to create the whole database:

neo4j-import ...
...
IMPORT DONE in 10s 608ms.
Imported:
528073 nodes
4539206 relationships
7540954 properties
Peak memory usage: 521.28 MB


Looking at the script build-all that glues everything together we see another step (sort-uniq) which makes sure that the same UUID is not listed multiple times.
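As a rough illustration of this CSV-writing step, here is a minimal Python sketch (not the project's actual Perl code; the package names and columns are invented for the demo). It keeps one UUID per unique entity name, which is the same deduplication the sort-uniq step guards:

```python
# Illustrative sketch only: emit neo4j-import style node and relation CSV
# files for a tiny, invented set of source packages and build-dependencies.
# Column headers follow the examples shown above.
import csv
import io
import uuid

def build_csvs(build_depends):
    """build_depends: list of (source, depended_on_package) pairs."""
    ids = {}                        # one UUID per unique package name

    def uid(name):
        if name not in ids:
            ids[name] = "id" + uuid.uuid4().hex
        return ids[name]

    nodes, edges = io.StringIO(), io.StringIO()
    nw, ew = csv.writer(nodes), csv.writer(edges)
    nw.writerow(["uuid:ID", "name"])
    ew.writerow([":START_ID", ":END_ID"])
    for src, dep in build_depends:
        ew.writerow([uid(src), uid(dep)])   # same name -> same UUID
    for name, i in ids.items():
        nw.writerow([i, name])
    return nodes.getvalue(), edges.getvalue()

node_csv, edge_csv = build_csvs([
    ("texlive-base", "tex-common"),
    ("context", "tex-common"),
])
print(node_csv)
print(edge_csv)
```

The real pipeline does the same thing at scale, with separate node and edge files per type and extra attribute columns (reltype, relversion) on the dependency edges.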

Sample queries

Let us conclude with a few sample queries: First we want to find all packages in Jessie that build depends on tex-common. For this we use the Neo4j visualization kit and write Cypher statements to query and return the relevant nodes and relations:

match (BP:bp)<-[:build_depends]-(VSP:vsp)-[:builds]->(VBP:vbp)<-[:contains]-(S:suite)
where BP.name="tex-common" and S.name="jessie"
return BP, VSP, VBP, S


This query would give us more or less the following graph:

Another query is to find out which packages are most often build-depended upon, a typical query that is expensive for relational databases. In Cypher we can express this in very concise notation:

match (S:suite)-[:contains]->(VBP:vbp)-[:builds]-(VSP:vsp)-[:build_depends]-(X:bp)
where S.name = "sid"
with X.name as pkg,count(VSP) as cntr
return pkg,cntr order by -cntr


which returns the top package debhelper with 55438 packages build-depending on it, followed by dh-python (9289) and pkg-config (9102).
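The aggregation itself is ordinary counting over the edge list; as a toy illustration (with an invented edge list, not the real UDD data), the same computation in Python looks like this:

```python
# Toy analogue of the Cypher aggregation above: count how often each
# package is build-depended upon. The edge list is invented for the demo.
from collections import Counter

# (source_package, build_depends_on) pairs
edges = [
    ("pkg-a", "debhelper"),
    ("pkg-b", "debhelper"),
    ("pkg-c", "debhelper"),
    ("pkg-b", "pkg-config"),
]

counts = Counter(target for _, target in edges)
for pkg, cnt in counts.most_common():
    print(pkg, cnt)
```

What the graph database adds is not the counting but doing it over millions of edges while following the suite/builds/build_depends path pattern in one declarative query.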

Conclusions and future work

We have shown that a considerable part of the UDD can be transformed into and represented in a graph database, which allows complex queries about relations between the packages to be expressed efficiently and easily.

We have seen how conversion of an old and naturally grown RDB is a laborious job that requires lots of work - in particular domain knowledge is necessary to deal with subtle inconsistencies and corner cases.

Lessons we have learned are

• Finding a good representation is not a one-shot thing, but needs several iterations and lots of domain knowledge;
• using Cypher is only reasonable for queries, but not for importing huge amounts of data;
• visualization in Chrome or Firefox is very dependent on the version of the browser, the OS, and probably the current moon phase.

There are also many things one could (I might?) do in the future:

Bug database: Including all the bugs reported including the versions in which they have been fixed would greatly improve the usefulness of the database.

Intermediate releases: Including intermediate releases of packages that never made it into a release of Debian, by parsing the UDD table for uploads, would give a better view of the history of package evolution.

Dependency management: Currently we carry the version and relation type as attributes of the dependency edge, which points to the unversioned package. Since we already have a tree of versioned packages, we could point into this tree and carry only the relation type.

UDD dashboard: As a real-world challenge one could rewrite the UDD dashboard web interface using the graph database and compare the queries necessary to gather the data from the UDD and the graph database.

Graph theoretic issues: Find dependency cycles, connected components, etc.

This concludes the series of blog entries on representing Debian packages in a graph database.

Planet Debian — Dirk Eddelbuettel: RcppArmadillo 0.8.500.0

RcppArmadillo release 0.8.500.0, originally prepared and uploaded on April 21, has hit CRAN today (after having already been available via the RcppCore drat repo). A corresponding Debian release will be prepared as well. This RcppArmadillo release contains Armadillo release 8.500.0 with a number of rather nice changes (see below for details), and continues our normal bi-monthly CRAN release cycle.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 472 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.500.0 (2018-04-21)

• faster handling of sparse matrices by kron() and repmat()

• faster transpose of sparse matrices

• faster element access in sparse matrices

• faster row iterators for sparse matrices

• faster handling of compound expressions by trace()

• more efficient handling of aliasing in submatrix views

• expanded normalise() to handle sparse matrices

• expanded .transform() and .for_each() to handle sparse matrices

• added reverse() for reversing order of elements

• added repelem() for replicating elements

• added roots() for finding the roots of a polynomial

• Fewer LAPACK compile-time guards are used, new unit tests for underlying features have been added (Keith O'Hara in #211 addressing #207).

• The configure check for LAPACK features has been updated accordingly (Keith O'Hara in #214 addressing #213).

• The compile-time check for g++ is now more robust to minimal shell versions (#217 addressing #216).

• Compiler tests were added for macOS (Keith O'Hara in #219).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux Australia — David Rowe: FreeDV 700D Part 3

After a 1 year hiatus, I am back into FreeDV 700D development, working to get the OFDM modem, LDPC FEC, and interleaver algorithms developed last year into real time operation. The aim is to get improved performance on HF channels over FreeDV 700C.

I’ve been doing lots of refactoring, algorithm development, fixing bugs, tuning, and building up layers of C code so we can get 700D on the air.

Steve ported the OFDM modem to C – thanks Steve!

I’m building up the software in the form of command line utilities, some notes, examples and specifications in Codec 2 README_ofdm.txt.

Last week I stayed at the shack of Chris, VK5CP, in a quiet rural location at Younghusband on the river Murray. As well as testing my Solar Boat, Mark (VK5QI) helped me test FreeDV 700D. This was the first time the C code software has been tested over a real HF radio channel.

After some tweaking, it worked! The frequency offset was a bit off, so I used the cohpsk_ch utility to shift it within the +/- 25Hz acquisition range of the FreeDV 700D demodulator. I also found some level sensitivity issues with the LDPC decoder. After implementing a form of AGC, the number of bit errors dropped by a factor of 10.
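The AGC itself need not be complicated. As a rough illustration (not the actual FreeDV 700D code), a block-based AGC can simply normalise each block of samples to a fixed RMS level before it reaches the LDPC decoder:

```python
import math

def agc(samples, target_rms=1.0, eps=1e-12):
    """Scale a block of samples to a fixed target RMS level.

    A minimal block-based AGC sketch; the real FreeDV 700D code is
    different, but the idea is the same: make the level seen by the
    LDPC decoder independent of the input amplitude.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    gain = target_rms / max(rms, eps)
    return [s * gain for s in samples]

loud = [10.0, -10.0, 10.0, -10.0]
quiet = [0.01, -0.01, 0.01, -0.01]
# After AGC, both blocks come out at roughly the same RMS level.
print(agc(loud), agc(quiet))
```

Normalising the level matters because soft-decision LDPC decoders expect the log-likelihood inputs to be on a consistent scale.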

The channel had nasty fading of around 1Hz, here is a video of the “sample #32” spectrum bouncing around. This rapid fading is a huge challenge for modems. Note also the spurious birdie off to the left, and the effect of receiver AGC – the noise level rises during fades.

Here is a spectrogram of the same sample. The x axis is time in seconds. It’s like a “waterfall” SDR plot on its side. Note the heavy “barber pole” fading, which corresponds to the fades sweeping across the spectrum in the video above.

Here is the smoothed SNR estimate. The SNR is a moving target on real world HF channels; here it moves between 2 and 6dB.

FreeDV 700D was designed to work down to 2dB on HF fading channels so pat on the back for me! Hundreds of hours of careful development and testing meant this thing actually worked when it went on air….

Sample 32 is a longer file that contains test frames instead of coded voice. The QPSK scatter diagram is a messy cross, typical of fading channels, as the amplitude of the signal moves in and out:

The LDPC FEC does a good job. Here are plots of the uncoded (raw) bit errors, and the bit errors after LDPC decoding, with the SNR estimates below:

Here are some wave and raw (headerless) audio files. The off air audio is error free, albeit at the low quality of Codec 2 at 700 bits/s. The goal of this work is to get intelligible speech through HF channels at low SNRs. We’ll look at improving the speech quality as a future step.

Still, error free digital voice on a heavily faded HF channel at 2dB SNR is pretty cool.

See below for how to use the last two raw file samples.

SNR estimation

After I sampled the files I had a problem – I needed to know the SNR. You see in my development I use simulated channels where I know exactly what the SNR is. I need to compare the performance of the real world, off-air signals to my expected results at a given SNR.

Unfortunately SNR on a fading channel is a moving target. In simulation I measure the total power and noise over the entire run, and the simulated fading channel is consistent. Real world channels jump all over the place as the ionosphere bounces around. Oh well, knowing we are in the ball park is probably good enough. We just need to know if FreeDV 700D is hanging onto real world HF channels at roughly the SNRs it was designed for.

I came up with a way of measuring SNR, and tested it with a range of simulated AWGN (just noise) and fading channels. The fading bandwidth is the speed at which the fading channel evolves. Slow fading channels might change at 0.2Hz, faster channels, like samples #32 and #33, at about 1Hz.

The blue line is the ideal, and on AWGN and slowly fading channels my SNR estimator does OK. It reads a dB low as the fading bandwidth increases to 1Hz. We are interested in the -2 to 4dB SNR range.
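For reference, the underlying relationship is just the ratio of signal power to noise power in dB. This Python sketch shows the textbook form (David's estimator is more involved, since it has to separate signal from noise on a live, fading channel):

```python
import math

def snr_db(total_power, noise_power):
    """SNR in dB from measured total (signal + noise) power and an
    estimate of the noise power alone.

    This is only the textbook relationship, not the estimator used in
    the post; that one has to separate signal from noise on a live,
    fading channel, which is exactly why the estimate wanders.
    """
    signal_power = total_power - noise_power
    return 10.0 * math.log10(signal_power / noise_power)

# Signal at the same power as the noise is 0 dB; at twice the
# noise power it is about +3 dB.
print(snr_db(2.0, 1.0), snr_db(3.0, 1.0))
```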

Command Lines

With the samples in the table above and codec2-dev SVN rev 3465, you can repeat some of my decodes using Octave and C:

octave:42> ofdm_ldpc_rx("32.raw")
EsNo fixed at 3.000000 - need to est from channel
Coded BER: 0.0010 Tbits: 54992 Terrs:    55
Codec PER: 0.0097 Tpkts:  1964 Terrs:    19
Raw BER..: 0.0275 Tbits: 109984 Terrs:  3021

david@penetrator:~/codec2-dev/build_linux/src$ ./ofdm_demod ../../octave/32.raw /dev/null -t --ldpc
Warning EsNo: 3.000000 hard coded
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/32.raw /dev/null --testframes
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/33.raw - | aplay -f S16

Next Steps

I’m working steadily towards integrating FreeDV 700D into the FreeDV GUI program so anyone can try it. This will be released in May 2018.

Reading Further

Planet Debian — Eugene V. Lyubimkin: Copying pictures from Jolla phone to Debian desktop in 30 seconds via jmtpfs/libmtp

Verified on Debian 9, amd64.

Unlike what some scary instructions may suggest (try searching for 'jolla phone linux connect' or 'jolla phone linux file transfer'), just transferring the pictures taken with the Jolla phone camera to your Debian machine can be done simpler and faster than enabling developer mode on the phone and doing SSH-over-USB. In this case, there happens to be a nice Debian package for the task which requires zero configuration:

1. connect the Jolla phone via USB, click on 'MTP transfer' in the phone pop-up;

2. install jmtpfs via your favourite package manager, e.g.:

$ sudo cupt install jmtpfs

The following packages will be installed:

jmtpfs [0.5-2+b2(stretch)]
libmtp-common [1.1.13-1(stretch)]
libmtp-runtime [1.1.13-1(stretch)]
libmtp9 [1.1.13-1(stretch)]

Need to get 40,1KiB/355KiB of archives. After unpacking 2699KiB will be used.
Do you want to continue? [y/N/q/a/rc/?] y
...


3. mount the data to some user directory, e.g.:
$ mkdir jolla
$ jmtpfs jolla
Device 0 (...) is a Jolla Sailfish (...).


4. the pictures are ready to be processed by any file manager or picture organiser:
$ cd jolla/
$ ls
Mass storage


5. after we're done, the directory can be unmounted:
$ fusermount -u jolla

Planet Debian — Chris Lamb: Free software activities in April 2018

Here is my monthly update covering what I have been doing in the free software world during April 2018 (previous month).

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users. The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

• Gave the keynote presentation at FLOSSUK's Spring Conference in Edinburgh, Scotland on reproducible builds and how it can prevent individual developers & centralised infrastructure from becoming targets for malicious actors.
• Presented at foss-north 2018 in Gothenburg, Sweden, speaking about diffoscope, our in-depth tool to analyse reproducibility issues in packages, and how it can be used in quality-assurance efforts more generally.
• Filed 10 upstream patches to make their build or output reproducible for the Sphinx documentation generator [...], the Lexicon DNS manager [...], the Dashell C++ stream library [...], the Pylint static analysis tool [...], the vcr.py HTTP-interaction testing tool [...], the Click Python command-line parser [...], the ASDF interchange format [...], the strace system call tracer [...], the libdazzle GLib component library [...] and the Corosync Cluster Engine [...].
• Updated the diffoscope tests to prevent a build failure under file 5.33. (#897099)
• Dropped support for stripping text from the PHP Pear registry file in our strip-nondeterminism tool, which removes specific non-deterministic results from a completed build, as we can fix this in the toolchain instead. [...]
• Added an example of how to unmount to the manpage of disorderfs, our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues. [...]
• Migrated our weekly blog reports and related machinery from the deprecated Alioth and Debian-branded service to the more-generic reproducible-builds.org domain, as well as made some cosmetic changes to the site itself. [...]
• In Debian, I:
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Worked on publishing our weekly reports. (#154, #155 & #156)

Debian

My activities as the Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

I contributed the following patches for Debian:

• debhelper: Does not accept zero as a valid version number in debian/changelog. (#894895)
• python-colormap: Please drop override of new-package-should-not-package-python2-module. (#896662)
• whatthepatch: Please drop override of new-package-should-not-package-python2-module. (#896664)
• libdazzle: Incorrect Homepage: field. (#896065)
• python-click: Please correct Homepage: field. (#895277)
• libmypaint: Incorrect Homepage: field. (#895402)
• figlet: Add missing space in figlet(6) manpage. (#894541)

Debian LTS

This month I have been paid to work 16¼ hours on the Debian Long Term Support (LTS) project.

Uploads

• sphinx (1.7.2-1) — New upstream release, apply my upstream patch to make the set output reproducible (#895553), don't use Google Fonts to avoid local privacy breaches, fix testsuite to not rely on specific return types, etc.
• python-django (1:1.11.12-1 & 2.0.4-1) — New upstream bugfix releases.
• installation-birthday (9) — Also use /var/lib/vim to determine installation date. (#895686)
• ruby-rjb (1.5.5-2) — Fix FTBFS under Java 9. (#874146)
• redisearch:
  • 1.0.10-1 — New upstream release.
  • 1.0.10-2 — Drop -mpopcnt from CFLAGS. (#896593)
  • 1.0.10-3 — Use upstream's patch for removing -mpopcnt.
  • 1.1.0-1 — New upstream release.
• libfiu (0.96-2) — Apply patch from upstream to make the build reproducible. (#894776)
• redis (5:4.0.9-1) — New upstream release.
• python-redis (2.10.6-3) — Fix tests when performed against an i386 Redis server. (#896864)

I also performed six sponsored uploads: connman-ui (0~20150623-0.1), playerctl (0.5.0-1), yasnippet-snippets (0.2-1), nose2 (0.7.4-2), pytest-tornado (0.5.0-1) & django-ipware (2.0.2-1).

Planet Debian — Jamie McClelland: The pain of installing custom ROMs on Android phones

A while back I bought a Nexus 5x. During a three-day ordeal I finally got Omnirom installed - with full disk encryption, root access and some stitched-together fake Google Play code that allowed me to run Signal without actually letting Google into my computer. A short while later, Open Whisper Systems released a version of Signal that uses WebSockets when Google Play services is not installed (and allows for installation via a web page without the need for the Google Play store). Dang. Should have waited.

Now, post Meltdown/Spectre, I worked up the courage to go through this process again. In the comments of my Omnirom post, I received a few suggestions about not really needing root. Hm - why didn't I think of that? Who needs root anyway? Combining root with full disk encryption was the real pain point in my previous install, so perhaps I can make things much easier. Also, not needing any of the fake Google software would be a definite plus.

This time around I decided to go with LineageOS since it seems to be the most mainstream of the custom ROMs. I found perfectly reasonable sounding instructions. My first mistake was skipping the initial steps (since I already had TWRP recovery installed I didn't think I needed to follow them).
I went straight to the step of installing LineageOS (including wiping the Cache, System and Data partitions). Unfortunately, when it came time to flash the ROM, I got the error that the ROM is for bullhead, but the hardware I was using is "" (yes, empty string there). After some Internet searching I learned that the problem is an out-dated version of TWRP.

Great, let's upgrade TWRP. I went back and started from the beginning of the LineageOS install instructions. But when it came to the fastboot flashing unlock step, I got the message explaining that my phone did not have OEM unlocking enabled. There are plenty of posts on the Internet demonstrating how to boot into your phone and flip the switch to allow OEM unlocking from the Developer section of your System tools. Great, except that I could no longer boot into my phone thanks to the various deletion steps I had already taken. Dang. Did I just brick my phone? I started thinking through how to buy a new phone.

Then I did more searching on the Internet and learned that I can flash a new version of TWRP the same way you flash anything else. Phew! New TWRP flashed and new LineageOS ROM installed! And, my first question: what is the point of locking your phone if you can still flash recovery images and even new ROMs?

However, when I booted, I got an "OS vendor mismatch error". WTF. Ok, now my phone is really bricked. Fortunately, someone not only identified this problem but contributed an exceptionally well-written step-by-step set of directions to fix it. The post, in combination with some comments on it, explains that you have to download the Google firmware that corresponds to the error code in your message (in case that post ever goes away: unzip the file you download, then cd into the directory created and unzip the file that starts with image-bullhead. Then, minimally, flash vendor.img to the vendor partition in TWRP).
In other words, the LineageOS ROM depends on having the right Google firmware installed. All of these steps were possible without unlocking the phone. However, when I tried to update the radio and bootloader using:

fastboot flash bootloader bootloader-bullhead-bhz11l.img
fastboot flash radio radio-bullhead-m8994f-2.6.37.2.21.img

it failed, so I booted into my now-working install, enabled OEM unlock, unlocked the phone (which wiped everything, so I had to start over) and then it all worked. And, kudos to LineageOS for the simple setup process and ease of getting full disk encryption.

Now that I'm done, I am asking myself a few questions: I have my own custom ROM and I am not trusting Google with everything anymore. Hooray! So ... who am I trusting? This question I know the answer to (I think...):

• Team Win, which provides the TWRP recovery software, has total control of everything. Geez, I hope these people aren't assholes.
• Google, since I blindly install their firmware vendor image, bootloader image and radio image. I guess they still can control my phone.
• F-Droid: I hope they vet their packages, because I blindly install them from their default archives.
• Guardian Project, since I enable their F-Droid repo too - but hey, at least I have met a few of these people and they are May First/People Link members.
• Firefox: I download Firefox directly from Mozilla since F-Droid doesn't seem to really support them.
• Signal, since I download that APK directly as well.
• And the https certificate system (which pretty much means Let's Encrypt nowadays), since nearly everything depends on the integrity of the packages I am downloading over https.

But I'm still not sure about one more question: should I lock my phone? Given what I just accomplished without locking it, it seems that locking the phone could make my life harder the next time I upgrade and doesn't really stop someone else from replacing key components of my operating system without me knowing it.
Cryptogram — Security Vulnerabilities in VingCard Electronic Locks

Researchers have disclosed a massive vulnerability in the VingCard electronic lock system, used in hotel rooms around the world:

With a $300 Proxmark RFID card reading and writing tool, any expired keycard pulled from the trash of a target hotel, and a set of cryptographic tricks developed over close to 15 years of on-and-off analysis of the codes Vingcard electronically writes to its keycards, they found a method to vastly narrow down a hotel's possible master key code. They can use that handheld Proxmark device to cycle through all the remaining possible codes on any lock at the hotel, identify the correct one in about 20 tries, and then write that master code to a card that gives the hacker free reign to roam any room in the building. The whole process takes about a minute.

[...]

The two researchers say that their attack works only on Vingcard's previous-generation Vision locks, not the company's newer Visionline product. But they estimate that it nonetheless affects 140,000 hotels in more than 160 countries around the world; the researchers say that Vingcard's Swedish parent company, Assa Abloy, admitted to them that the problem affects millions of locks in total. When WIRED reached out to Assa Abloy, however, the company put the total number of vulnerable locks somewhat lower, between 500,000 and a million.

Patching is a nightmare. It requires updating the firmware on every lock individually.

And the researchers speculate whether or not others knew of this hack:

The F-Secure researchers admit they don't know if their Vingcard attack has occurred in the real world. But the American firm LSI, which trains law enforcement agencies in bypassing locks, advertises Vingcard's products among those it promises to teach students to unlock. And the F-Secure researchers point to a 2010 assassination of a Palestinian Hamas official in a Dubai hotel, widely believed to have been carried out by the Israeli intelligence agency Mossad. The assassins in that case seemingly used a vulnerability in Vingcard locks to enter their target's room, albeit one that required re-programming the lock. "Most probably Mossad has a capability to do something like this," Tuominen says.

Slashdot post.

Worse Than Failure — CodeSOD: Philegex

Last week, I was doing some graphics programming without a graphics card. It was low resolution, so I went ahead and re-implemented a few key methods from the OpenGL Shading Language in a fashion which was compatible with NumPy arrays. Lucky for me, I was able to draw on many years of experience, I understood both technologies, and they both have excellent documentation, which made it easy. After dozens of lines of code, I was able to whip up some pretty flexible image generator functions. I knew the tools I needed, I understood how they worked, and while I was reinventing a wheel, I had a very specific reason.

Philemon Eichin sends us some code from a point in his career where none of these things were true.

Philemon was building a changelog editor. As such, he wanted an easy, flexible way to identify patterns in the text. Philemon knew that there was something that could do that job, but he didn’t know what it was called or how it was supposed to work. So, like all good programmers, Philemon went ahead and coded up what he needed- he invented his own regular expression language, and built his own parser for it.

Thus was born Philegex. Philemon knew that regexes involved slashes, so in his language you needed to put a slash in front of every character you wanted to match exactly. He knew that it involved question marks, so he used the question mark as a wildcard which could match any character. That left the '|' character, which marked a token as optional.

So, for example: /P/H/I/L/E/G/E/X|??? would match “PHILEGEX!!!” or “PHILEGEWTF”. A date could be described as: nnnn/.nn/.nn. (YYYY.MM.DD or YYYY.DD.MM)
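For readers wondering what these would look like in a conventional regex engine, here is one possible translation of the two examples (my reading of Philemon's rules, not code from the article):

```python
import re

# One possible translation of the two Philegex examples into standard
# regular expressions, under my reading of the rules: '/' escapes a
# literal, '?' matches any character, '|' marks the preceding token
# as optional, and 'n' matches a digit.
philegex_word = re.compile(r"PHILEGEX?...")         # /P/H/I/L/E/G/E/X|???
philegex_date = re.compile(r"\d{4}\.\d{2}\.\d{2}")  # nnnn/.nn/.nn

print(bool(philegex_word.fullmatch("PHILEGEX!!!")))  # True
print(bool(philegex_word.fullmatch("PHILEGEWTF")))   # True
print(bool(philegex_date.fullmatch("2018.04.27")))   # True
```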

Living on his own isolated island without access to the Internet to attempt to google up “How to match patterns in text”, Philemon invented his own language for describing parts of a regular expression. This will be useful to interpret the code below.

Philegex      Regex
p1            Pattern / Regex
Block(s)      Token(s)
CT            CharType
SplitLine     ParseRegex
CC            currentChar
auf_zu        openParenthesis
Chars         CharClassification

With the preamble out of the way, enjoy Philemon’s approach to regular expressions, implemented elegantly in VB.Net.

Public Class Textmarker
Const Datum As String = "nn/.nn/.nnnn"

Private Structure Blocks
Dim Type As Chars
Dim Multi As Boolean
Dim Mode As Char_Mode
Dim Subblocks() As Blocks
Dim passed As Boolean
Dim _Optional As Boolean
End Structure

Public Shared Function IsMaskable(p1 As String, Content As String) As Boolean
Dim ID As Integer = 0
Dim p2 As Chars
Dim _Blocks() As Blocks = SplitLine(p1)
For i As Integer = 0 To Content.Length - 1
p2 = GetCT(Content(i))
START_CASE:
'#If CONFIG = "Debug" Then
'            If ID = 2 Then
'                Stop
'            End If

'#End If
If ID > _Blocks.Length - 1 Then
Return False
End If
Select Case _Blocks(ID).Mode
Case Char_Mode._Char
If p2.Char_V = _Blocks(ID).Type.Char_V Then
_Blocks(ID).passed = True
If Not _Blocks(ID).Multi = True Then ID += 1
Exit Select
Else
If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
ID += 1
GoTo START_CASE
Else
If Not _Blocks(ID)._Optional Then Return False

End If
End If
Case Char_Mode.Type
If _Blocks(ID).Type.Type = Chartypes.any Then
_Blocks(ID).passed = True
If Not _Blocks(ID).Multi = True Then ID += 1
Exit Select
Else

If p2.Type = _Blocks(ID).Type.Type Then
_Blocks(ID).passed = True
If Not _Blocks(ID).Multi = True Then ID += 1
Exit Select
Else
If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
ID += 1
GoTo START_CASE
Else
If _Blocks(ID)._Optional Then
ID += 1
_Blocks(ID - 1).passed = True
Else
Return False

End If

End If
End If

End If

End Select

Next
For i = ID To _Blocks.Length - 1
If _Blocks(ID)._Optional = True Then
_Blocks(ID).passed = True
Else
Exit For
End If
Next
If _Blocks(_Blocks.Length - 1).passed Then
Return True
Else
Return False
End If

End Function

Private Shared Function GetCT(Char_ As String) As Chars

If "0123456789".Contains(Char_) Then Return New Chars(Char_, 2)
If "qwertzuiopüasdfghjklöäyxcvbnmß".Contains((Char.ToLower(Char_))) Then Return New Chars(Char_, 1)
Return New Chars(Char_, 4)
End Function

Private Shared Function SplitLine(ByVal Line As String) As Blocks()
Dim ret(0) As Blocks
Dim retID As Integer = -1
Dim CC As Char
For i = 0 To Line.Length - 1
CC = Line(i)
Select Case CC
Case "("
ReDim Preserve ret(retID + 1)
retID += 1
Dim ii As Integer = i + 1
Dim auf_zu As Integer = 1
Do
Select Case Line(ii)
Case "("
auf_zu += 1
Case ")"
auf_zu -= 1
Case "/"
ii += 1
End Select
ii += 1
Loop Until auf_zu = 0
ret(retID).Subblocks = SplitLine(Line.Substring(i + 1, ii - 1))
ret(retID).Mode = Char_Mode.subitems
ret(retID).passed = False

Case "*"
ret(retID).Multi = True
ret(retID).passed = False
Case "|"
ret(retID)._Optional = True

Case "/"
ReDim Preserve ret(retID + 1)
retID += 1
ret(retID).Mode = Char_Mode._Char
ret(retID).Type = New Chars(Line(i + 1), Chartypes.other)
i += 1
ret(retID).passed = False

Case Else

ReDim Preserve ret(retID + 1)
retID += 1
ret(retID).Mode = Char_Mode.Type
ret(retID).Type = New Chars(Line(i), TocType(CC))
ret(retID).passed = False
End Select
Next
Return ret
End Function
Private Shared Function TocType(p1 As Char) As Chartypes
Select Case p1
Case "c"
Return Chartypes._Char
Case "n"
Return Chartypes.Number
Case "?"
Return Chartypes.any
Case Else
Return Chartypes.other
End Select
End Function

Public Enum Char_Mode As Integer
Type = 1
_Char = 2
subitems = 3
End Enum
Public Enum Chartypes As Integer
_Char = 1
Number = 2
other = 4
any
End Enum
Structure Chars
Dim Char_V As Char
Dim Type As Chartypes
Sub New(Char_ As Char, typ As Chartypes)
Char_V = Char_
Type = typ
End Sub
End Structure
End Class

I’ll say this: building a finite state machine, which is what the core of a regex engine is, is perhaps the only case where using a GoTo could be considered acceptable. So this code has that going for it. Philemon was kind enough to share this code with us, so we knew he knows it’s bad.
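For the curious, the state-machine core being gestured at here is tiny when hand-written. A minimal, hypothetical example in Python (not Philemon's VB.Net) matching the fixed pattern `ab*c` with explicit states:

```python
# A hand-written finite state machine for the fixed pattern "ab*c":
# the same kind of state/transition logic a regex engine compiles
# patterns into. Purely illustrative; this is not Philemon's code.
def match_ab_star_c(text):
    state = 0  # 0: expect 'a'; 1: expect 'b' or 'c'; 2: accepted
    for ch in text:
        if state == 0 and ch == "a":
            state = 1
        elif state == 1 and ch == "b":
            state = 1          # 'b' may repeat, stay in state 1
        elif state == 1 and ch == "c":
            state = 2          # accept only if the input ends here
        else:
            return False       # any other character rejects
    return state == 2

print(match_ab_star_c("abbbc"), match_ab_star_c("ac"), match_ab_star_c("abx"))
# True True False
```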

Planet Linux Australia — OpenSTEM: Be Gonski Ready!

Gonski is in the news again with the release of the Gonski 2.0 report. This is most likely to impact on schools and teachers in a range of ways from funding to curriculum. Here at OpenSTEM we can help you to be ahead of the game by using our materials, which are already Gonski-ready! The […]

Planet Debian — Russ Allbery: Review: Full of Briars

Review: Full of Briars, by Seanan McGuire

Series: October Daye #7.1
Publisher: DAW
Copyright: August 2016
ISBN: 0-7564-1222-6
Format: Kindle
Pages: 44

"Full of Briars" is a novella set in the October Daye series, between Chimes at Midnight and The Winter Long, although published four years later. It was published independently, so it gets a full review here, but it's $2 on Amazon and primarily fills in some background for series readers. It's also extremely hard to review without spoilers, since it is the direct consequences of a major plot revelation at the end of Chimes of Midnight that would spoil a chunk of that story and some of the series leading up to it. So I'm going to have to be horribly vague and fairly brief. "Full of Briars" is, unlike most of the series and all of the novels, told from Quentin's perspective rather than Toby's. The vague thing that I can say about the plot is that this is the story of Toby finally meeting Quentin's parents. Since Quentin is supposed to be in a blind fosterage and his parentage kept secret, this is a bit of a problem. It might be enough of a problem to end the fosterage and call him home. That is very much not something Quentin wants. Or Toby, or any of the rest of the crew Toby has gathered around her in the course of the series. The rest of the story is mostly talking, about that decision and its aftermath and then some other developments in Quentin's life. It lacks a bit of the drama of the novels of the series, but one of the reasons why I'm still reading this series is that I like these characters and their dialogue. They're all very much themselves here: Toby being blunt, May being random, and Quentin being honorable and determined and young. Tybalt is particularly good here, doing his own version of Toby's tendency to speak truth to power and strongly asserting the independence of the Court of Cats. The ending didn't have much impact for me, and I don't think the scene worked quite as well as McGuire intended, but it's another bit of background that's useful for series readers to be aware of. 
This is missable, but it's cheap enough and fast enough to read that I wouldn't miss it if you're otherwise reading the series. The core plot outcome is predictable, as is much of what happens in the process. But I liked Quentin's parents, I liked how they interact with McGuire's regular cast, and it's nice to know exactly what happened in this interlude.

Followed by The Winter Long (and also see the Toby Short Stories page for a list of all the short fiction in this universe and where it falls in series order).

Rating: 6 out of 10

Harald Welte — OsmoDevCon 2018 retrospective

One week ago, the annual Osmocom developer meeting (OsmoDevCon 2018) concluded after four long and intense days with old and new friends (the schedule can be seen here). It was already the 7th incarnation of OsmoDevCon, and I have to say that it's really great to see the core Osmocom community come together every year, to share their work and experience with their fellow hackers.

Ever since the beginning we've had the tradition that we look beyond our own projects. In 2012, David Burgess was presenting on OpenBTS. In 2016, Ismael Gomez presented about srsUE + srsLTE, and this year we've had the pleasure of having Sukchan Kim coming all the way from Korea to talk to us about his nextepc project (a FOSS implementation of the Evolved Packet Core, the 4G core network).

What has also been a regular "entertainment" part in recent years are the field trip reports to various [former] satellite/SIGINT/... sites by Dimitri Stolnikov.

All in all, the event has become at least as much about the people as about technology. It's a community of like-minded people that to some part are still working on joint projects, but often work independently and scratch their own itch - whether open source mobile comms related or not.
After some criticism last year, the so-called "unstructured" part of OsmoDevCon has received more time again this year, allowing for exchange among the participants irrespective of any formal / scheduled talk or discussion topic.

In 2018, with the help of c3voc, for the first time ever, we've recorded most of the presentations on video. The results are still in the process of being cut, but are starting to appear at https://media.ccc.de/c/osmodevcon2018

If you want to join a future OsmoDevCon in person: Make sure you start contributing to any of the many Osmocom member projects now to become eligible. We need you!

Now the sad part is that it will take one entire year until we'll reconvene. May the Osmocom Developer community live long and prosper. I want to meet you guys for many more years at OsmoDevCon!

There is of course the user-oriented OsmoCon 2018 in October, but that's of course a much larger event with a different audience. Nevertheless, I'm very much looking forward to that, too. The OsmoCon 2018 Call for Participation is still running. Please consider submitting talks if you have anything open source mobile communications related to share!

Sociological Images — Bouncers and Bias

Originally posted at TSP Discoveries.

Whether we wear stilettos or flats, jeans or dress clothes, our clothing can allow or deny us access to certain social spaces, like a nightclub. Yet institutional dress codes that dictate who can and cannot wear certain items of clothing target some marginalized communities more than others. For example, recent reports of bouncers denying Black patrons entry to nightclubs prompted Reuben A. Buford May and Pat Rubio Goldsmith to test whether urban nightclubs in Texas deny entrance to Black and Latino men through discriminatory dress code policies.

For the study, recently published in Sociology of Race and Ethnicity, the authors recruited six men between the ages of 21 and 23.
They selected three pairs of men by race — White, Black, and Latino — to attend 53 urban nightclubs in Dallas, Houston, and Austin. Each pair shared similar racial, socioeconomic, and physical characteristics. One individual from each pair dressed as a “conformist,” wearing Ralph Lauren polos, casual shoes, and nice jeans that adhered to the club’s dress code. The other individual dressed in stereotypically urban dress, wearing “sneakers, blue jean pants, colored T-shirt, hoodie, and a long necklace with a medallion.” The authors categorized an interaction as discrimination if a bouncer denied a patron entrance based on his dress or if the bouncer enforced particular dress code rules, such as telling a patron to tuck in their necklace. Each pair attended the same nightclub at peak hours three to ten minutes apart. The researchers exchanged text messages with each pair to document any denials or accommodations. Black men were denied entrance into nightclubs 11.3 percent of the time (six times), while White and Latino men were both denied entry 5.7 percent of the time (three times). Bouncers claimed the Black patrons were denied entry because of their clothing, despite allowing similarly dressed White and Latino men to enter. Even when bouncers did not deny entrance, they demanded that patrons tuck in their necklaces to accommodate nightclub policy. This occurred two times for Black men, three times for Latino men, and one time for White men. Overall, Black men encountered more discriminatory experiences from nightclub bouncers, highlighting how institutions continue to police Black bodies through seemingly race-neutral rules and regulations. Amber Joy is a PhD student in sociology at the University of Minnesota. Her current research interests include punishment, sexual violence and the intersections of race, gender, age, and sexuality. Her work examines how state institutions construct youth victimization. 
Planet Linux Australia — David Rowe: Solar Boat Two years ago when I bought my Hartley TS16 sail boat I dreamed of converting it to solar power. In January I installed a Torqueedo electric outboard and a 24V, 100AH Lithium battery pack. That’s working really well. Next step was to work out a way to mount some surplus 200W solar panels on the boat. The idea is to (temporarily) detach the mast, and use the boat on the river Murray, a major river that passes within 100km of where I live in Adelaide, South Australia. Over the last few weeks I worked with my friend Gary (VK5FGRY) to mount solar panels on the TS16. Gary designed and fabricated some legs from 40mm square aluminium: With a matching rubber foot on each leg, the panels sit firmly on the gel coat of the boat, and are held down by ropes or octopus straps. The panels’ maximum power point is at 28.5V (and 7.5A) which is close to the battery pack under charge (3.3*8 = 26.4V) so I decided to try a direct DC connection – no inverter or charger. I ran some tests in the back yard: each panel was delivering about 4A into the battery pack, and two in parallel delivered about 8A. I didn’t know solar panels could be connected in parallel, but happily this means I can keep my direct DC connection. Horizontal panels cost a few amps – a good example of why solar panels are usually angled at the sun. However the azimuth of the boat will always be changing so horizontal is the only choice. The panels are very sensitive to shadowing; a hand placed on a panel, or a small shadow is enough to drop the current to 0A. OK, so now I had a figure for panel output – about 4A from each panel. This didn’t look promising. Based on my sea voyages with the Torqueedo, I estimated I would need 800W (about 30A) to maintain my target houseboat speed of 4 knots (7 km/hr); that’s 8 panels which won’t fit on my boat! 
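That last back-of-envelope estimate is easy to sanity-check. A minimal sketch in Python, using only the figures from the post (8 cells at roughly 3.3V under charge, about 4A from each horizontal panel, 800W target):

```python
import math

PACK_VOLTS = 3.3 * 8      # pack voltage under charge, ~26.4 V
PANEL_AMPS = 4.0          # measured current from one horizontal 200 W panel
TARGET_WATTS = 800        # estimated draw at the 4 knot target speed

target_amps = TARGET_WATTS / PACK_VOLTS
panels_needed = math.ceil(target_amps / PANEL_AMPS)

print(round(target_amps, 1))  # ~30 A, matching the estimate above
print(panels_needed)          # 8 panels
```

A rough model, of course: it ignores losses and assumes the panels stay at their measured output all day.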
However the current draw on the river might be different without tides and waves, and I wasn’t sure exactly how many AH I would get over a day from the sun. Would trees on the river bank shadow the panels? So it was off to Younghusband on the Murray, where our friend Chris (VK5CP) was hosting a bunch of Ham Radio guys for an extended Anzac day/holiday weekend. It’s Autumn here, with generally sunny days of about 23C. The sun is up from 6:30am to 6pm. Turns out that even with two panels – the solar boat was really practical! Over three days we made three trips of 2 hours each, at speeds of 3 to 4 knots, using only the panels for charging. Each day I took friends out, and they really loved it – so quiet and peaceful, and the river scenery is really nice. After an afternoon cruise I would park the boat on the South side of the river to catch the morning sun, which in Autumn appears to the North here in Australia. I measured the panel current as 2A at 7am, 6A at 9am, 9A at 10am, and much to my surprise the pack was charged by 11am! In fact I had to disconnect the panels as the cell voltage was pushing over 4V. On a typical run upriver we measured 700W = 4kt, 300W = 3.1kt, 150W = 2.5kt, and 8A into the panels in full sun. Panel current dropped to 2A with cloud which was a nasty surprise. We experienced no shadowing issues from trees. The best current we saw at about noon was 10A. We could boost the current by 2A by putting three guys on one side of the boat and tipping the entire boat (and solar panels) towards the sun! Even partial input from solar can have a big impact. Let’s say at 4 knots (30A) I can drive for 2 hours using 60% of my 100AH pack. If I back off the speed a little, so I’m drawing 20A, then 10A from the panels will extend my driving time to 6 hours. 
I slept on the boat, and one night I found a paddle steamer (the Murray Princess) parked across the river from me, all lit up with fairy lights: On our final adventure, my friend Darin (VK5IX) and I were entering Lake Carlet, when suddenly the prop hit something very hard, “crack crack crack”. My poor prop shaft was bent and my propeller is wobbling from side to side: We gently e-motored back and actually recorded our best results – 3 knots on 300W, 10A from the panels, 10A to the motor. With 4 panels I would have a very practical solar boat, capable of 4-6 hours cruising a day just on solar power. The 2 extra panels could be mounted as a canopy over the rear of the boat. I have an idea about an extended solar adventure of several days, for example 150km from Younghusband to Goolwa. Reading Further , Planet Debian — Steinar H. Gunderson: Nageru 1.7.2 released I've released version 1.7.2 of Nageru, my free video mixer. The main new feature this time round is the ability to use sound from video inputs. This was originally intended for IP camera inputs from Android, but I suppose you could also use it for playout if you're brave. :-) A/V sync keeps being a hard problem (it's easy to make something that feels reasonable and works 99% of the time, but fails miserably the last percent), so I don't recommend running your cameras over IP if you can avoid it, but sometimes lugging SDI around really is too inconvenient. Apart from that, the git log this time is dominated by a lot of small tweaks and bugfixes; things are getting increasingly refined as we get more experience with the larger setups. I wondered for a bit whether I should give it a version bump to 1.8.0, but in the end, I didn't consider IP inputs (nor the support for assisting Cubemap with HLS output) important enough. So 1.7.2 it is. The full changelog (with lots of things hidden under the last point :-) ) follows. 
As usual, new packages are also on their way up to Debian, although unfortunately, my CEF-enabling patch to Chromium still hasn't received any love, so if you want CEF support, you'll have to compile it yourself. Nageru 1.7.2, April 28th, 2018 - Several improvements to video (FFmpeg) inputs: You can now use them as audio sources, you can right-click on video channels to change URL/filename on-the-fly, themes can ask for forced disconnection (useful for network sources that are hanging), and various other improvements. Be aware that the audio support may still be somewhat rough, as A/V sync of arbitrary video playout is a hard problem. - The included themes have been fixed to properly make the returned chain preparation functions independent of global state (e.g. if the white balance for a channel was changed before the frame was actually rendered). If you are using a custom theme, you may want to apply similar fixes to it. - In Metacube stream output, mark each keyframe with a pts metadata block. This allows Cubemap 1.4.0 or newer to serve fMP4 fragments for HLS from Nageru's output, without any further remuxing or transcoding. - If needed, Nageru will now automatically try to autodetect a usable --va-display parameter by probing all DRM nodes for H.264 encoders. This removes the need to set --va-display in almost all cases, and also removes the dependency on libpci. - For GPUs that support querying available memory (in practice only NVIDIA GPUs at the current time), expose the amount of used/total GPU memory both on standard output and in the Prometheus metrics (as well as included Grafana dashboard). - The Grafana dashboard now supports heatmaps for the chosen x264 speedcontrol preset (requires Grafana 5.1 or newer). (There used to be a heatmap earlier, but it was all broken.) - Various bugfixes. Planet Linux Australia — Julien Goodwin: PoE termination board For my next big project I'm planning on making it run using power over Ethernet. 
Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip, and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second stage low-noise linear regulator (ST LDL1117S33R) to further drop it down to 3.3v, but as it turns out the main chip I'm using does its own 5->3.3v conversion already. Because I was lazy, and the pricing was reasonable, I got these boards manufactured by pcb.ng who I'd used for the USB-C termination boards I did a while back. Here's the board running a Raspberry Pi 3B+; as it turns out I got lucky and my board is set up for the same input as the 3B+ supplies. One really big warning: this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using some of the isolated converters that are available with integrated PoE termination. For this series I'm going to try and also make some notes on the mistakes I've made with these boards to help others, for this board: • I failed to add any test pins; given this was the first try I really should have, as being able to inject power just before the switching converter was helpful while debugging, but I had to solder wires to the input cap to do that. • Similarly, I should have had a 5v output pin; for now I've just been shorting the two diodes I had near the output which were intended to let me switch input power between two feeds. • The last, and the only actual problem with the circuit, was that when selecting which exact parts to use I optimised by choosing the same diode for both input protection & switching; however this was a mistake, as the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. 
With the diode swapped to a correctly rated one it now supplies 10W just fine. • While debugging the previous I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with Kicad's normal reduction in paste in those large pads, plus my manufacturer's use of a fairly thin application of paste all contributed to this. Next time I'll probably avoid via-in-pad. Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow not only did I fail to order 10k resistors, I didn't already have some in stock). Planet Linux Australia — Chris Smart: Fedora on ODROID-HC1 mini NAS (ARMv7) Hardkernel is a Korean company that makes various embedded ARM based systems, which it calls ODROID. One of their products is the ODROID-HC1, a mini NAS designed to take a single 2.5″ SATA drive (HC stands for “Home Cloud”) which comes with 2GB RAM and a Gigabit Ethernet port. There is also a 3.5″ model called the HC2. Both of these are based on the ODROID-XU4, which itself is based on the previous iteration ODROID-XU3. All of these are based on the Samsung Exynos5422 SOC and should work with the following steps. The Exynos SOC needs proprietary first stage bootloaders which are embedded in the first 1.4MB or so at the beginning of the SD card in order to load U-Boot. As these binary blobs are not re-distributable, Fedora cannot support these devices out of the box, however all the other bits are available including the kernel, device tree and U-Boot. So, we just need to piece it all together and the result is a stock Fedora system! To do this you’ll need the ODROID device, a power supply (5V/4A for HC1, 12V/2A for HC2), one of their UART adapters, an SD card (UHS-I) and probably a hard drive if you want to use it as a NAS (you may also want a battery for the RTC and a case). 
ODROID-HC1 with UART, RTC battery, SD card and 2.5″ drive. Note that the default Fedora 27 ARM image does not support the Realtek RTL8153 Ethernet adapter out of the box (it does after a kernel upgrade) so if you don’t have a USB Ethernet dongle handy we’ll download the kernel packages on our host, save them to the SD card and install them on first boot. The Fedora 28 image works out of the box, so if you’re installing 28 you can skip that step. Download the Fedora Minimal ARM server image and save it in your home dir. Install the Fedora ARM installer and U-Boot bootloader files for the device on your host PC. sudo dnf install fedora-arm-installer uboot-images-armv7 Insert your SD card into your computer and note the device (mine is /dev/mmcblk0) using dmesg or df commands. Once you know that, open a terminal and let’s write the Fedora image to the SD card! Note that we are using none as the target because it’s not a supported board and we will configure the bootloader manually. sudo fedora-arm-image-installer \ --target=none \ --image=Fedora-Minimal-armhfp-27-1.6-sda.raw.xz \ --resizefs \ --norootpass \ --media=/dev/mmcblk0 First things first, we need to enable the serial console and turn off cpuidle else it won’t boot. We do this by mounting the boot partition on the SD card and modifying the extlinux bootloader configuration. sudo mount /dev/mmcblk0p2 /mnt sudo sed -i "s|append|& cpuidle.off=1 \ console=tty1 console=ttySAC2,115200n8|" \ /mnt/extlinux/extlinux.conf As mentioned, the kernel that comes with Fedora 27 image doesn’t support the Ethernet adapter, so if you don’t have a spare USB Ethernet dongle, let’s download the updates now. If you’re using Fedora 28 this is not necessary. 
cd /mnt sudo wget http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-4.16.3-200.fc27.armv7hl.rpm \ http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-core-4.16.3-200.fc27.armv7hl.rpm \ http://dl.fedoraproject.org/pub/fedora/linux/updates/27/armhfp/Packages/k/kernel-modules-4.16.3-200.fc27.armv7hl.rpm cd ~/ Unmount the boot partition. sudo umount /mnt Now, we can embed U-Boot and the required bootloaders into the SD card. To do this we need to download the files from Hardkernel along with their script which writes the blobs (note that we are downloading the files for the XU4, not HC1, as they are compatible). We will tell the script to use the U-Boot image we installed earlier, this way we are using Fedora’s U-Boot not the one from Hardkernel. Download the required files from Hardkernel. mkdir hardkernel ; cd hardkernel wget https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/sd_fusing.sh \ https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl1.bin.hardkernel \ https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/bl2.bin.hardkernel.720k_uboot \ https://raw.githubusercontent.com/hardkernel/u-boot/odroidxu4-v2017.05/sd_fuse/tzsw.bin.hardkernel chmod a+x sd_fusing.sh Copy the Fedora U-Boot files into the local dir. cp /usr/share/uboot/odroid-xu3/u-boot.bin . Finally, run the fusing script to embed the files onto the SD card, passing in the device for your SD card. sudo ./sd_fusing.sh /dev/mmcblk0 That’s it! Remove your SD card and insert it into your ODROID, then plug the UART adapter into a USB port on your computer and connect to it with screen (check dmesg for the port number, generally ttyUSB0). sudo screen /dev/ttyUSB0 Now power on your ODROID. If all goes well you should see the SOC initialise, load Fedora’s U-Boot and boot Fedora to the welcome setup screen. 
Complete this and then log in as root or the user you have just set up. Welcome configuration screen for Fedora ARM. If you’re running the Fedora 27 image, install the kernel updates, remove the RPMs and reboot the device (skip this if you’re running Fedora 28). sudo dnf install --disablerepo=* /boot/*rpm sudo rm /boot/*rpm sudo reboot Fedora login over serial connection. Once you have rebooted, the Ethernet adapter should work and you can do your regular updates: sudo dnf update You can find your SATA drive at /dev/sda where you should be able to partition, format, mount it, share it and, well, do whatever you want with the box. You may wish to take note of the IP address and/or configure static networking so that you can SSH in once you unplug the UART. Enjoy your native Fedora embedded ARM Mini NAS! Planet Debian — Michal Čihař: What's being cooked for Weblate 3.0 The next release on the Weblate roadmap is called 3.0 and will bring some important changes. Some of these are already present in the Git repository and deployed on Hosted Weblate, but more will follow. Component discovery Component discovery is a useful feature if you have several translation components in one repository. Previously the import_project management command was the only way to handle this; however, it had to be executed manually on any change. Now you can use the Component discovery addon, which does quite a similar thing but is triggered on VCS update, so it can be used to follow the structure of your project without any manual interaction. This feature is already available in Git and on Hosted Weblate, though you have to ask for initial setup there. Code cleanups Over the years (the first Weblate release was more than six years ago) the code structure has become far from optimal. There are several code cleanups scheduled and some of them are already present in the Git repository. This will make Weblate easier to maintain and extend (e.g. with third-party file format drivers). 
User management Since the beginning Weblate has relied on the user management and permissions provided by Django. This is really not a good fit for the language / project matrix of permissions which most people need, so we came up with Group ACLs to extend it. This worked quite well for some use cases, but it turned out to be problematic for others. It is also quite hard to set up properly. For Weblate 3.0 this will be dropped and replaced by access control which fits more use cases. This is still being finalized in our issue tracker, so if you have any comments on this, please share them. Migration path Due to the massive changes mentioned above, migrations across 3.0 will not be supported. You will always have to upgrade to 3.0 first and then upgrade to further versions. The code cleanups will also lead to some changes in the configuration, so take care when upgrading and follow the upgrading instructions. Filed under: Debian English SUSE Weblate , Cryptogram — Friday Squid Blogging: Bizarre Contorted Squid This bizarre contorted squid might be a new species, or a previously known species exhibiting a new behavior. No one knows. As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered. Read my blog posting guidelines here. Krebs on Security — Security Trade-Offs in the New EU Privacy Law On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it. Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. 
The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues. In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses). Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free. But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups. The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally. ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse. 
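The redaction plan described above amounts to masking a handful of registrant fields while leaving the rest of the record public. A minimal sketch in Python (the field labels here are illustrative, not any registrar's actual WHOIS schema):

```python
# Personal data points slated for redaction under the plan described above;
# exact labels vary by registrar, so these names are illustrative only.
REDACTED_FIELDS = {
    "Registrant Name",
    "Registrant Street",
    "Registrant Phone",
    "Registrant Email",
}

def public_view(record):
    """Mask the personal fields of a WHOIS record, keep the rest visible."""
    return {field: ("REDACTED FOR PRIVACY" if field in REDACTED_FIELDS else value)
            for field, value in record.items()}

record = {
    "Domain Name": "example.com",
    "Registrant Name": "John Doe",
    "Registrant Email": "jdoe@example.com",
    "Creation Date": "2010-01-01",
}
masked = public_view(record)
print(masked["Registrant Name"])  # REDACTED FOR PRIVACY
print(masked["Domain Name"])      # example.com
```

Note that the registrar still collects the full record; only the public view changes.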
But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures. In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.” The following Q&A is intended to address many of the more misleading claims and assertions made in that article. Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25? I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks do provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable to both grouping cybercriminal operations and in ultimately identifying who’s responsible for these activities. To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? 
The answer is that those who take this extraordinary step are by far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so. This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs. All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much. It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, when they do use WHOIS privacy options there are still gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers) which are indexed by historic WHOIS records and that offer a brief window of visibility into the details behind the registration. This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there. 
It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do. The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right? It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free from any WHOIS requirements or accountability whatsoever — won’t exactly be tripping over themselves to add more complexity to their WHOIS efforts just to make a distinction between businesses and individuals. As a result, registrars simply won’t make that distinction because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual. But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe? Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. 
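The registrant-data reuse described a few paragraphs back is exactly what makes WHOIS pivoting work: invert the domain-to-email mapping and reused contact points fall out as clusters. A minimal sketch (every domain and address below is made up for illustration):

```python
from collections import defaultdict

# Hypothetical historic-WHOIS dump: domain -> registrant email address.
records = {
    "spam-a.example": "crook@mail.example",
    "spam-b.example": "crook@mail.example",
    "shop.example":   "owner@shop.example",
}

def pivot_by_email(records):
    """Group domains by shared registrant email - the reuse pivot."""
    groups = defaultdict(set)
    for domain, email in records.items():
        groups[email].add(domain)
    return groups

groups = pivot_by_email(records)
print(sorted(groups["crook@mail.example"]))  # ['spam-a.example', 'spam-b.example']
```

In practice the same pivot is run over names, phone numbers and postal addresses too; redact those fields and every one of these joins goes dark.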
Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily. What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly only allowed public access to WHOIS data because ICANN’s contracts state that they should. So, from the registrar community’s point of view, the less information they must make available to the public, the better. Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records. As it relates to inter-state crimes (i.e., the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: the investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice). Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes. Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. 
But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees. Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.” This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history. Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted. Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data? This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats and I’ll wager you will find the answer is that these analyses are still mostly based on manual lookups and often thankless legwork. 
Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.

Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it?

That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.

According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information.
Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.” I’m sure this will go over tremendously with both the hacked sites used to host phishing and/or malware download pages, as well as those phished by or served with malware in the added time it will take to relay and approve said requests. According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting 24 hours before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.

I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?

It might be if ICANN hadn’t dragged its heels in taking GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect for such a system being ironed out or gaining the necessary traction among the registrar community anytime soon. And most experts I’ve interviewed predict it is likely that the Internet community will still be debating how to create such an accreditation system a year from now. Hence, it’s not likely that WHOIS records will continue to be anywhere near as useful to researchers in a month or so as they were previously. And this reality will continue for many months to come — if indeed some kind of vetted WHOIS access system is ever envisioned and put into place.
After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?

That depends on whether you believe any of the responses to the bolded questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records. The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in. It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”

Do you have something against people not getting spammed, or against better privacy in general?

To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.

Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps.
But scrapping the current system without establishing anything in between, while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia, seems myopic. If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with. But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is that if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you, and in any case here are a bunch of thinly-sourced reasons why the coming changes might not be that bad.

Rondam Ramblings — Credit where it's due

Richard Nixon is rightfully remembered as one of the great villains of American democracy. But he wasn't all bad. He opened relations with China, appointed four mostly sane Supreme Court justices, and oversaw the establishment of the EPA among many other accomplishments. Likewise, I believe that Donald Trump will eventually go down in history as one of the worst (if not the worst) president

Rondam Ramblings — An open letter to Jack Phillips

[Jack Phillips is the owner of the Masterpiece Cake Shop in Lakewood, Colorado. Mr. Phillips is being sued by the Colorado Civil Rights Commission for refusing to make a wedding cake for a gay couple. His case is currently before the U.S. Supreme Court. Yesterday Mr. Phillips published an op-ed in The Washington Post to which this letter is a response.] Dear Mr.
Phillips: Imagine your child

Rondam Ramblings — Paul Ryan forces out House chaplain

Just in case there was the slightest ember of hope in your mind that Republicans actually care about religious freedom and are not just odious hypocritical power-grubbing opportunists, this should extinguish it once and for all: House Chaplain Patrick Conroy’s sudden resignation has sparked a furor on Capitol Hill, with sources in both parties saying he was pushed out by Speaker Paul Ryan (R-Wis

Cryptogram — TSB Bank Disaster

This seems like an absolute disaster: The very short version is that a UK bank, TSB, which had been merged into and then many years later was spun out of Lloyds Bank, was bought by the Spanish bank Banco Sabadell in 2015. Lloyds had continued to run the TSB systems and was to transfer them over to Sabadell over the weekend. It's turned out to be an epic failure, and it's not clear if and when this can be straightened out. It is bad enough that the bank's IT problems have been so severe and protracted that a major newspaper, The Guardian, created a live blog for them that has now been running for two days. The more serious issue is that customers still can't access their online accounts and, even more disconcerting, are sometimes being allowed into other people's accounts; this points to massive problems with data integrity. That's a nightmare to sort out. Even worse, the fact that this situation has persisted strongly suggests that Lloyds went ahead with the migration without allowing for a rollback. This seems to be a mistake, and not enemy action.

Worse Than Failure — Error'd: Billboards Show Obvious Disasters

"Actually, this board is outside a casino in Sheffield which is next to the church, but we won't go there," writes Simon. Wouter wrote, "The local gas station is now running ads for Windows 10 updates." "If I were to legally change my name to a GUID, this is exactly what I'd pick," Lincoln K. wrote. Robert F. writes, "Copy/Paste? Bah!
If you really want to know how many log files are being generated per server, every minute, you're going to have to earn it by typing out this 'easy' command." "I imagine someone pushed the '10 items or fewer' rule on the self-checkout kiosk just a little too far," Michael writes. Wojciech wrote, "To think - someone actually doodled this JavaScript code!"

,

Planet Debian — Steinar H. Gunderson: Anandtech and HPET issues

Anandtech spots differences in their Intel-vs-Ryzen benchmarks compared to other media, pinpoints it to differences in whether HPET or TSC is used as the primary system timer on Windows, and goes on to immediately retract their Ryzen 2000-series benchmarks for correction. That's… impressive integrity and competence. I already trusted their benchmarks a fair bit, and this doesn't exactly hurt.

Cory Doctorow — Raleigh-Durham, I’m headed your way!

CORRECTION! The Flyleaf event is at 6PM, not 7! I’m delivering the annual Kilgour lecture tomorrow morning at 10AM at UNC, and I’ll be speaking at Flyleaf Books at 6PM — be there or be oblong! Also, if you’re in Boston, Waterloo or Chicago, you can catch me in the coming weeks! Abstract: For decades, regulators and corporations have viewed the internet and the computer as versatile material from which special-purpose tools can be fashioned: pornography distribution systems, jihadi recruiting networks, video-on-demand services, and so on. But the computer is an unprecedented general purpose device capable of running every program we can express in symbolic language, and the internet is the nervous system of the 21st century, webbing these pluripotent computers together.
For decades, activists have been warning regulators and corporations about the peril in getting it wrong when we make policies for these devices, and now the chickens have come home to roost. Frivolous, dangerous and poorly thought-through choices have brought us to the brink of electronic ruin. We are balanced on the knife-edge of peak indifference — the moment at which people start to care and clamor for action — and the point of no return, the moment at which it’s too late for action to make a difference. There was never a more urgent moment to fight for a free, fair and open internet — and there was never an internet more capable of coordinating that fight. Cory Doctorow — Little Brother is 10 years old today: I reveal the secret of writing future-proof science fiction It’s been ten years since the publication of my bestselling novel Little Brother; though the novel was written more than a decade ago, and though it deals with networked computers and mobile devices, it remains relevant, widely read, and widely cited even today. In an essay for Tor.com, I write about my formula for creating fiction about technology that stays relevant — the secret is basically to assume that people will be really stupid about technology for the foreseeable future. And now we come to how to write fiction about networked computers that stays relevant for 12 years and 22 years and 50 years: just write stories in which computers can run all the programs, and almost no one understands that fact. Just write stories in which authority figures, and mass movements, and well-meaning people, and unethical businesses, all insist that because they have a *really good reason* to want to stop some program from running or some message from being received, it *must* be possible. 
Write those stories, and just remember that because computers can run every program and the internet can carry any message, every device will someday be a general-purpose computer in a fancy box (office towers, cars, pacemakers, voting machines, toasters, mixer-taps on faucets) and every message will someday be carried on the public internet. Just remember that the internet makes it easier for people of like mind to find each other and organize to work together for whatever purpose stirs them to action, including terrible ones and noble ones. Just remember that cryptography works, that your pocket distraction rectangle can scramble messages so thoroughly that they can never, ever be descrambled, not in a trillion years, without your revealing the passphrase used to protect them. Just remember that swords have two edges, that the universe doesn’t care how badly you want something, and that every time we make a computer a little better for one purpose, we improve it for every purpose a computer can be put to, and that is all purposes. Just remember that declaring war on general purpose computing is a fool’s errand, and that that never stopped anyone. Ten Years of Cory Doctorow’s Little Brother [Cory Doctorow/Tor.com] (Image: Missy Ward, CC-BY) TED — TED hosts first-ever TED en Español Spanish-language speaker event at NYC headquarters Thursday, April 26, 2018 – Today marks the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event, held in TED’s theater in Manhattan, will feature eight speakers, a musical performance, five short films and fifteen 1-minute talks given by members of the audience. 150 people are expected to attend from around the world, and about 20 TEDx events in 10 countries plan to tune in to watch remotely. The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. 
Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event – TEDxRiodelaPlata in Argentina – TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and – as of earlier this month – an original podcast created in partnership with Univision Communications. “As part of our nonprofit mission at TED, we work to find and spread the best ideas no matter where the speakers live or what language they speak,” said Gerry. “We want everyone to have access to ideas in their own language. Given the massive global Hispanic population, we’ve begun the work to bring Spanish-language ideas directly to Spanish-speaking audiences, and today’s event is a major step in solidifying our commitment to that effort.” Today’s speakers include chef Gastón Acurio, futurist Juan Enriquez, entrepreneur Leticia Gasca, data scientist César A. Hidalgo, founder and funder Rebeca Hwang, ocean expert Enric Sala, assistant conductor of the LA Philharmonic Paolo Bortolameolli, and psychologist and dancer César Silveyra. Musical group LADAMA will perform.

Cryptogram — New NSA/Cyber Command Head Confirmed by Senate

Planet Linux Australia — Michael Still: A first program in golang, with a short aside about Google

I have reached the point in my life where I needed to write my first program in golang. I pondered for a disturbingly long time what exactly to write, but then it came to me… Back in the day Google had an internal short URL service (think bit.ly, but for internal things). It was called “go” and lived at http://go. So what should I write as my first golang program? go of course. The implementation is on github, and I am sure it isn’t perfect. Remember, it was a learning exercise. I mostly learned that golang syntax is a bit bonkers, and that etcd hates me. This code stores short URLs in etcd, and redirects you to the right place if it knows about the short code you used.
If you just ask for the root URL, you get a list of the currently defined short codes, as well as a form to create new ones. Not bad for a few hours hacking I think. The post A first program in golang, with a short aside about Google appeared first on Made by Mikal.

Worse Than Failure — CodeSOD: If Not Null…

Robert needed to fetch some details about pump configurations from the backend. The API was poorly documented, but there were other places in the code which did that, so a quick search found this block:

var getConfiguration = function(){
    ....
    var result = null;
    result = getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,service,format,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,null,format,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,service,null,result);
    result = getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,null,null,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,null,null,result);
    return result;
}

This collection of lines lurked at the end of a 100+ line function, which did a dozen other things. At a glance, it’s mildly perplexing. I can see that result gets passed into the function multiple times, so perhaps this is an attempt at a fluent API? So this series of calls awkwardly fetches the data that’s required? The parameters vary a little with every call, so that must be it, right? Let’s check the implementation of getPumpConfiguration:

function getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,service,format,result) {
    if (result==null) {
        ...
        result = queryResult;
        ...
    }
    return result;
}

Oh, no. If the result parameter has a value… we just return it. Otherwise, we attempt to fetch data. This isn’t a fluent API which loads multiple pieces of data with separate requests, it’s an attempt at implementing retries. Hopefully one of those calls works.
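What the original code is groping toward is an explicit fallback loop: try each parameter combination in order and return the first non-null result. A minimal sketch of that intent, using hypothetical names (getConfigurationWithRetry, and a stubbed fetchPumpConfiguration standing in for the real backend call):

```javascript
// Sketch only: the stub below pretends that only the least-specific
// query (no management unit) returns data.
function fetchPumpConfiguration(params) {
  return params.mngmtUnitID === null ? { pump: "ok" } : null;
}

// Try each parameter set in order; the first non-null result wins.
function getConfigurationWithRetry(paramSets) {
  for (const params of paramSets) {
    const result = fetchPumpConfiguration(params);
    if (result !== null) return result;
  }
  return null; // every fallback failed
}

console.log(getConfigurationWithRetry([
  { mngmtUnitID: 7 },    // specific query: fails in this stub
  { mngmtUnitID: null }, // fallback query: succeeds
]));
// prints { pump: 'ok' }
```

Unlike the original, the loop stops issuing lookups once one succeeds, and the fallback order is data rather than five near-identical lines.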
Planet Linux Australia — Linux Users of Victoria (LUV) Announce: LUV May 2018 Workshop

May 19 2018, 12:30–16:30. Location: Infoxchange, 33 Elizabeth St. Richmond. Topic to be announced. There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby. The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121. Late arrivals please call (0421) 775 358 for access to the venue. LUV would like to acknowledge Infoxchange for the venue. Linux Users of Victoria is a subcommittee of Linux Australia.

Planet Linux Australia — Linux Users of Victoria (LUV) Announce: LUV May 2018 Main Meeting: "Share" with FOSS Software

May 1 2018, 18:30–20:30. PLEASE NOTE NEW LOCATION: Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053. Speakers:
Linux Users of Victoria is a subcommittee of Linux Australia.

Planet Linux Australia — Michael Still: etcd v2 and v3 data stores are separate

Just noting this because it wasted way more of my time than it should have… So you write an etcd app in a different language from your previous apps and it can’t see the data that the other apps wrote? Check the versions of your client libraries. The v2 and v3 data stores in etcd are different, and cannot be seen by each other. You need to convert your v2 data to the v3 data store before it will be visible there. You’re welcome.
The post etcd v2 and v3 data stores are separate appeared first on Made by Mikal.

,

Rondam Ramblings — Support Josh Harder for Congress

I've been quiet lately in part because I'm sinking back into the pit of despair when I think about politics. The spinelessness and hypocrisy of the Republican party, the insidious and corrosive effects of corporate "free speech" embodied in soulless monsters like Sinclair and Fox News, and the fact that ultimately all this insanity has its foundation in the will of the people (or at least a

TED — In Case You Missed It: The dawn of “The Age of Amazement” at TED2018

More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4. Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions.
“I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED) Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at AudaciousProject.org. Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. 
By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?” “It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED) Can we rediscover the humanity in our tech? In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age; and “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, who they were transmitting to, what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. 
“Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?” An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.” Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta have led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, University of Hawaii’s Karen Meech saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things. Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED) Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words.
Architects Renzo Piano, Vishaan Chakrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints. Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED) Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets.
Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits. Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. 
In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotcharkorn Voraakhom realized that a park could be designed to allow her flood-prone city of Bangkok to mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

Planet Debian — Jonathan McDowell: Using collectd for Exim stats

I like graphing things; I find it’s a good way to look for abnormal patterns or try to track down the source of problems. For monitoring systems I started out with MRTG. It’s great for monitoring things via SNMP, but everything else needs some custom scripts. So at one point I moved my home network over to Munin, which is much better at graphing random bits and pieces, and coping with collecting data from remote hosts.
Unfortunately it was quite heavyweight on the Thecus N2100 I was running as the central collection point at the time; data collection resulted in a lot of forking and general sluggishness. So I moved to collectd, which is written in C, relies much more on compiled plugins and doesn’t do a load of forks. It also supports a UDP based network protocol with authentication + encryption, which makes it great for running on hosts that aren’t always up - the collection point doesn’t hang around waiting for them when they’re not around.

The problem is that when it comes to things collectd doesn’t support out of the box it’s not quite so easy to get the stats - things a simple script would sort in MRTG need a bit more thought. You can go the full blown Python module route as I did for my Virgin Super Hub scripts, but that requires a bit of work. One of the things in particular I wanted to graph were stats for my mail servers and having to write a chunk of Python to do that seemed like overkill. Searching around found the Tail plugin, which follows a log file and applies regexes to look for stats. There are some examples for Exim on that page, but none were quite what I wanted. In case it’s of interest/use to anyone else, here’s what I ended up with (on Debian, of course, but I can’t see why it wouldn’t work elsewhere with minimal changes).

First I needed a new data set specification for email counts. I added this to /usr/share/collectd/types.db:

mail_count value:COUNTER:0:65535

Note if you’re logging to a remote collectd host this needs to be on both the host where the stats are collected and the one receiving the stats.

I then dropped a file in /etc/collectd/collectd.conf.d/ called exim.conf containing the following. It’ll need tweaked depending on exactly what you log, but the first 4 <Match> stanzas should be generally useful. I have some additional logging (via log_message entries in the exim.conf deny statements) that helps me track mails that get greylisted, rejected due to ClamAV or rejected due to being listed in a DNSRBL. Tailor as appropriate for your setup:

LoadPlugin tail
<Plugin tail>
<File "/var/log/exim4/mainlog">
Instance "exim"
Interval 60
<Match>
Regex "S=([1-9][0-9]*)"
DSType "CounterAdd"
Type "ipt_bytes"
Instance "total"
</Match>
<Match>
Regex "<="
DSType "CounterInc"
Type "mail_count"
Instance "incoming"
</Match>
<Match>
Regex "=>"
DSType "CounterInc"
Type "mail_count"
Instance "outgoing"
</Match>
<Match>
Regex "=="
DSType "CounterInc"
Type "mail_count"
Instance "defer"
</Match>
<Match>
Regex ": greylisted.$"
DSType "CounterInc"
Type "mail_count"
Instance "greylisted"
</Match>
<Match>
Regex "rejected after DATA: Malware:"
DSType "CounterInc"
Type "mail_count"
Instance "malware"
</Match>
<Match>
Regex "> rejected RCPT <.* is listed at"
DSType "CounterInc"
Type "mail_count"
Instance "dnsrbl"
</Match>
</File>
</Plugin>
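Since the tail plugin is essentially just applying regexes to each new log line and bumping counters, you can sanity-check the stanzas above outside collectd. Here's a small Python sketch of the same logic; the sample log lines are invented, abbreviated approximations of Exim's mainlog format, not real output:

```python
import re
from collections import Counter

# The same patterns as the <Match> stanzas above. The "CounterInc" stanzas
# increment a per-instance counter; the "CounterAdd" stanza adds the
# captured message size (S=...) to a running byte total.
COUNTERS = {
    "incoming":   re.compile(r"<="),
    "outgoing":   re.compile(r"=>"),
    "defer":      re.compile(r"=="),
    "greylisted": re.compile(r": greylisted\.$"),
}
BYTES = re.compile(r"S=([1-9][0-9]*)")

def tally(lines):
    """Return (per-instance mail counts, total bytes) for some log lines."""
    counts, total_bytes = Counter(), 0
    for line in lines:
        for instance, regex in COUNTERS.items():
            if regex.search(line):
                counts[instance] += 1
        m = BYTES.search(line)
        if m:
            total_bytes += int(m.group(1))
    return counts, total_bytes

# Invented sample lines, roughly in Exim mainlog shape:
sample = [
    "2018-04-22 10:00:01 1f9zAB-0001xY-2q <= alice@example.com S=2048",
    "2018-04-22 10:00:02 1f9zAB-0001xY-2q => bob@example.com",
    "2018-04-22 10:00:03 1f9zAC-0001xZ-3r == carol@example.com defer (-53)",
]
counts, total = tally(sample)
# counts: incoming=1, outgoing=1, defer=1; total: 2048 bytes
```

This is only a model for checking the regexes; collectd itself handles the counter persistence and rate conversion.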


Finally, because my mail servers are low volume these days, I added a scaling filter to give me emails/minute rather than emails/second. This went in /etc/collectd/collectd.conf.d/filters.conf:

PreCacheChain "PreCache"

<Chain "PreCache">
<Rule>
<Match "regex">
Plugin "^tail$"
PluginInstance "^exim$"
Type "^mail_count$"
Invert false
</Match>
<Target "scale">
Factor 60
</Target>
</Rule>
</Chain>

Update: Some examples…

Krebs on Security — DDoS-for-Hire Service Webstresser Dismantled

Authorities in the U.S., U.K. and the Netherlands on Tuesday took down popular online attack-for-hire service WebStresser.org and arrested its alleged administrators. Investigators say that prior to the takedown, the service had more than 136,000 registered users and was responsible for launching somewhere between four and six million attacks over the past three years.

The action, dubbed “Operation Power Off,” targeted WebStresser.org (previously Webstresser.co), one of the most active services for launching point-and-click distributed denial-of-service (DDoS) attacks. WebStresser was one of many so-called “booter” or “stresser” services — virtual hired muscle that anyone can rent to knock nearly any website or Internet user offline.

Webstresser.org (formerly Webstresser.co), as it appeared in 2017.

“The damage of these attacks is substantial,” reads a statement from the Dutch National Police in a Reddit thread about the takedown. “Victims are out of business for a period of time, and spend money on mitigation and on (other) security measures.”

In a separate statement released this morning, Europol — the law enforcement agency of the European Union — said “further measures were taken against the top users of this marketplace in the Netherlands, Italy, Spain, Croatia, the United Kingdom, Australia, Canada and Hong Kong.” The servers powering WebStresser were located in Germany, the Netherlands and the United States, according to Europol. The U.K.’s National Crime Agency said WebStresser could be rented for as little as $14.99, and that the service allowed people with little or no technical knowledge to launch crippling DDoS attacks around the world.

Neither the Dutch nor U.K. authorities would say who was arrested in connection with this takedown. But according to information obtained by KrebsOnSecurity, the administrator of WebStresser allegedly was a 19-year-old from Prokuplje, Serbia named Jovan Mirkovic.

Mirkovic, who went by the hacker nickname “m1rk,” also used the alias “Mirkovik Babs” on Facebook where for years he openly discussed his role in programming and ultimately running WebStresser. The last post on Mirkovic’s Facebook page, dated April 3 (the day before the takedown), shows the young hacker sipping what appears to be liquor while bathing. Below that image are dozens of comments left in the past few hours, most of them simply, “RIP.”

A story in the Serbian daily news site Blic.rs notes that two men from Serbia were arrested in conjunction with the WebStresser takedown; they are named only as “MJ” (Jovan Mirkovik) and D.V., aged 19 from Ruma.

Mirkovik’s fake Facebook page (Mirkovik Babs) includes countless mentions of another Webstresser administrator named “Kris” and includes a photograph of a tattoo that Kris got in 2015. That same tattoo is shown on the Facebook profile of a Kristian Razum from Zapresic, Croatia. According to the press releases published today, one of the administrators arrested was from Croatia.

Multiple sources are now pointing to other booter businesses that were reselling WebStresser’s service but which are no longer functional as a result of the takedown, including powerboot[dot]net, defcon[dot]pro, ampnode[dot]com, ripstresser[dot]com, fruitstresser[dot]com, topbooter[dot]com, freebooter[dot]co and rackstress[dot]pw.

Tuesday’s action against WebStresser is the latest such takedown to target both owners and customers of booter services. Many booter service operators apparently believe (or at least hide behind) a wordy “terms of service” agreement that all customers must acknowledge, under the assumption that somehow this absolves them of any sort of liability for how their customers use the service — regardless of how much hand-holding and technical support booter service administrators offer customers.

In October the FBI released an advisory warning that the use of booter services is punishable under the Computer Fraud and Abuse Act, and may result in arrest and criminal prosecution.

In 2016, authorities in Israel arrested two 18-year-old men accused of running vDOS, until then the most popular and powerful booter service on the market. Their arrests came within hours of a story at KrebsOnSecurity that named the men and detailed how their service had been hacked.

Many in the hacker community have criticized authorities for targeting booter service administrators and users and for not pursuing what they perceive as more serious cybercriminals, noting that the vast majority of both groups are young men under the age of 21. In its Reddit thread, the Dutch Police addressed this criticism head-on, saying Dutch authorities are working on a new legal intervention called “Hack_Right,” a diversion program intended for first-time cyber offenders.

“Prevention of re-offending by offering a combination of restorative justice, training, coaching and positive alternatives is the main aim of this project,” the Dutch Police wrote. “See page 24 of the 5th European Cyber Security Perspectives and stay tuned on our THTC twitter account #HackRight! AND we are working on a media campaign to prevent youngsters from starting to commit cyber crimes in the first place. Expect a launch soon.”

In the meantime, it’s likely we’ll sooner see the launch of yet more booter services. According to reviews and sales threads at stresserforums[dot]net — a marketplace for booter buyers and sellers — there are dozens of other booter services in operation, with new ones coming online almost every month.

Sociological Images — Boozy Milkshakes and Sordid Spirits

The first nice weekend after a long, cold winter in the Twin Cities is serious business. A few years ago some local diners joined the celebration with a serious indulgence: the boozy milkshake.

When talking with a friend of mine from the Deep South about these milkshakes, she replied, “oh, a bushwhacker! We had those all the time in college.” This wasn’t the first time she had dropped southern slang that was new to me, so off to Google I went.

According to Merriam-Webster, “to bushwhack” means to attack suddenly and unexpectedly, as one would expect the alcohol in a milkshake to sneak up on you. The cocktail is a Nashville staple, but the origins trace back to the Virgin Islands in the 1970s.

Here’s the part where the history takes a sordid turn: “Bushwhacker” was apparently also the nickname for guerrilla fighters in the Confederacy during the Civil War who would carry out attacks in rural areas (see, for example, the Lawrence Massacre). To be clear, I don’t know and don’t mean to suggest this had a direct influence in the naming of the cocktail. Still, the coincidence reminded me of the famous, and famously offensive, drinking reference to conflict in Northern Ireland.

When sociologists talk about concepts like “cultural appropriation,” we often jump to clear examples with a direct connection to inequality and oppression like racist halloween costumes or ripoff products—cases where it is pretty easy to look at the object in question and ask, “didn’t they think about this for more than thirty seconds?”

Cases like the bushwhacker raise different, more complicated questions about how societies remember history. Even if the cocktail today had nothing to do with the Confederacy, the weight of that history starts to haunt the name once you know it. I think many people would be put off by such playful references to modern insurgent groups like ISIS. Then again, as Joseph Gusfield shows, drinking is a morally charged activity in American society. It is interesting to see how the deviance of drinking dovetails with bawdy, irreverent, or offensive references to other historical and social events. Can you think of other drinks with similar sordid references? It’s not all sex on the beach!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

Cryptogram — Two NSA Algorithms Rejected by the ISO

The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for small and low-cost processors like IoT devices.

The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors. Personally, I doubt that they're backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It's like examining alien technology.

Worse Than Failure — The Search for Truth

Every time you change existing code, you break some other part of the system. You may not realize it, but you do. It may show up in the form of a broken unit test, but that presumes that a) said unit test exists, and b) it properly tests the aspect of the code you are changing. Sadly, more often than not, there is either no test to cover your change, or any test that does exist doesn't handle the case you are changing.

This is especially true if the thing you are changing looks simple. It is even more true when the change involves something as deceptively simple as a Boolean.

Mr A. was working at a large logistics firm that had an unusual error where a large online retailer was accidentally overcharged by millions of dollars. When large companies send packages to logistics hubs for shipment, they often send hundreds or thousands of them at a time on the same pallet, van or container (think about companies like Amazon). The more packages you send in these batches the less you pay (a single lorry is cheaper than a fleet of vans). These packages are lumped together and billed at a much lower rate than you or I would get.

One day, a particular developer saw something untidy in the code - an uninitialized Boolean variable in one of the APIs. The entire code change was from this:

    parcel.consolidated;


to this:

    parcel.consolidated = false;


There are some important things to note: the code was written in NodeJS, which doesn't enforce that a variable actually holds a Boolean; the developers did not believe in Unit Testing; and in a forgotten corner of the codebase was a little routine that examined each parcel to see if the discount applied.

The routine to see if the discount should be applied ran every few minutes. It looked at each package: if the package was already marked as consolidated or not (True or False), it moved on to the next parcel. If the flag was NULL, it applied the rules to see if the package was part of a shipment and set the flag to either True or False.

That variable was not Boolean but rather tri-state (though thankfully didn't involve FILE_NOT_FOUND). Because the change assumed it was Boolean and initialized it to false, the routine never evaluated any parcels, and NO packages had a discount applied. Oopsie!
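The interaction is easier to see in a sketch. This is illustrative Python rather than the original NodeJS, with invented field names (the real codebase's identifiers aren't known beyond parcel.consolidated):

```python
def is_part_of_shipment(parcel):
    # Stand-in for the real consolidation rules.
    return parcel.get("shipment_id") is not None

def apply_discount_flags(parcels):
    """The periodic routine: only parcels whose flag is still unset
    (None/NULL) get evaluated; True or False means "already decided"."""
    for parcel in parcels:
        if parcel["consolidated"] is not None:
            continue  # third state already resolved: skip this parcel
        parcel["consolidated"] = is_part_of_shipment(parcel)

# Before the tidy-up: the flag starts out NULL, so the routine decides it.
p = {"shipment_id": "S123", "consolidated": None}
apply_discount_flags([p])  # p["consolidated"] becomes True: discount applies

# After the tidy-up (parcel.consolidated = false): the flag arrives
# pre-initialized to False, the routine skips it, and no parcel is ever
# marked for the consolidated discount.
q = {"shipment_id": "S123", "consolidated": False}
apply_discount_flags([q])  # q["consolidated"] stays False
```

The "initialization" silently collapsed three states into two, which is exactly the kind of change a unit test over the NULL case would have caught.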

It took more than a month before anyone noticed and complained. And since it was a multi-million dollar mistake, they complained loudly!

Even after this event, Unit Testing was still not accepted as a useful practice. To this day Release Management, Unit Testing, Automated Testing and Source Code Management remain stubbornly absent...

Not long after this, Mr. A. continued his search for truth elsewhere.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Debian — Julien Danjou: Correct HTTP scheme in WSGI with Cloudflare

I've recently been using Cloudflare as an HTTP frontend for some applications, and getting things working correctly with WSGI was not obvious.

In Python, WSGI is the standard protocol to write a Web application. All Web frameworks that I know follow it. And many of those Web frameworks leverage some request environment variables to learn how the request has been made.

One of those environment variables is wsgi.url_scheme, and it contains either http or https, depending on the protocol that has been used to connect to your WSGI server.

And that's where things can get messy. If you enable SSL at Cloudflare in "Flexible" mode, your visitor will connect to your Web site using HTTPS, but Cloudflare will connect to your backend using HTTP. That means that for your application, the traffic will appear to be over HTTP, and not HTTPS: wsgi.url_scheme will be set to http.

That can lead to several problems with some frameworks. For example, Flask's url_for function will rely on this variable to generate the scheme part of any URL. In this case, it would therefore generate URLs starting with http:// whereas your visitors are using https.

The usual workaround is to leverage the X-Forwarded-Proto header that is actually set by Cloudflare. In the case where Cloudflare proxies the request to your HTTP host, this will be set to https. By using the werkzeug.contrib.fixers.ProxyFix module, the variable wsgi.url_scheme will be set to the value of X-Forwarded-Proto.
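For a single trusted proxy, the scheme part of that fix is tiny. Here is a minimal sketch of what such a fixer does (illustrative only; in production use werkzeug's ProxyFix, which also handles the other X-Forwarded-* headers and proxy counting):

```python
def trust_forwarded_proto(app):
    """WSGI middleware: overwrite wsgi.url_scheme with X-Forwarded-Proto,
    assuming exactly one trusted reverse proxy (e.g. Cloudflare) sets it."""
    def middleware(environ, start_response):
        proto = environ.get("HTTP_X_FORWARDED_PROTO")
        if proto in ("http", "https"):
            environ["wsgi.url_scheme"] = proto
        return app(environ, start_response)
    return middleware
```

You would wrap the application the same way as with ProxyFix, e.g. app.wsgi_app = trust_forwarded_proto(app.wsgi_app) in Flask.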

That would work fine for any application that is directly behind Cloudflare, or any single HTTP reverse proxy.

But that does not work as soon as you have multiple reverse proxies. If your application runs on top of Heroku, for example, they already provide a reverse proxy and overwrite those headers. That gives the following: Visitor -HTTPS-> Cloudflare -HTTP-> Heroku proxy -HTTP-> Heroku dyno. Once your dyno is reached, X-Forwarded-Proto will be set to http.

Damn it!

The proper solution is, therefore, to have all your proxies implement RFC7239. This RFC defines a new Forwarded header that can contain all the hops that have forwarded this request, including all the schemes and IP addresses. Unfortunately, this is not implemented by either Cloudflare or Heroku. Bummer!

Finally, Cloudflare provides yet another custom header named Cf-Visitor. It contains a JSON payload with the original HTTP scheme used by the visitor: we can use that to solve our issue. Here's a WSGI middleware to do that:

import json


class CloudflareProxy(object):
    """This middleware sets the proto scheme based on the Cf-Visitor header."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        cf_visitor = environ.get("HTTP_CF_VISITOR")
        if cf_visitor:
            try:
                cf_visitor = json.loads(cf_visitor)
            except ValueError:
                pass
            else:
                proto = cf_visitor.get("scheme")
                if proto is not None:
                    environ['wsgi.url_scheme'] = proto
        return self.app(environ, start_response)


You can then use it to encapsulate your WSGI application with app = CloudflareProxy(app).

If you're using JavaScript, I noticed that the forwarded library provides that same support for Cloudflare along with all the other headers – even RFC7239!

Cryptogram — Computer Alarm that Triggers When Lid Is Opened

"Do Not Disturb" is a Macintosh app that sends an alert when the lid is opened. The idea is to detect computer tampering.

Wired article:

Do Not Disturb goes a step further than just the push notification. Using the Do Not Disturb iOS app, a notified user can send themselves a picture snapped with the laptop's webcam to catch the perpetrator in the act, or they can shut down the computer remotely. The app can also be configured to take more custom actions like sending an email, recording screen activity, and keeping logs of commands executed on the machine.

Can someone please make one of these for Windows?

,

Planet Debian — Carl Chenet: Use Nginx Unit 1.0 with your Django project on Debian Stretch

Nginx Unit 1.0 was released on April 12th, 2018. It is a new application server written by the Nginx team.

Some features are really interesting, such as:

• Fully dynamic reconfiguration using RESTful JSON API
• Multiple application languages and versions can run simultaneously

I was setting up a new Django project at this time and it was a great opportunity to start using Unit. There are a couple of unexpected pitfalls in installing and configuring it.

1. Installing Nginx Unit for Django

Installing Unit is quite straightforward. I use Debian Stretch; if you have another system, have a look at the official documentation.

If you install Unit on a dedicated server using a grsecurity kernel, it won’t work. Using the kernel of your GNU/Linux distributions solves this issue.

First we need to get the key of the remote Debian Nginx repository:

# wget -q -O - https://nginx.org/keys/nginx_signing.key | apt-key add -

Next, create the /etc/apt/sources.list.d/unit.list file with the following lines:

deb https://packages.nginx.org/unit/debian/ stretch unit
deb-src https://packages.nginx.org/unit/debian/ stretch unit

Now update your list of repositories, install Nginx Unit and the module for Python 3:

# apt-get update
# apt install unit unit-python3.5

Now activate the Systemd unit service (yep, confusing, poor name choice IMO) and start Nginx Unit:

# systemctl enable unit
# systemctl start unit
# systemctl status unit
● unit.service - NGINX Unit
Active: active (running) since Sat 2018-04-21 16:51:31 CEST; 18h ago

2. Configure Nginx Unit

In order to configure Unit, you need to write a JSON configuration and send it to the Unit control socket on your server.

Here is my JSON configuration:

{
"listeners": {
"127.0.0.1:8300": {
"application": "myapp"
}
},

"applications": {
"myapp": {
"type": "python",
"processes": 5,
"module": "myapp.wsgi",
"user": "myapp",
"group": "myapp",
"path": "/home/myapp/prod/myapp"
}
}
}

Ok, here is a pitfall. You need to understand that Unit will use the path parameter as your application root, then try to load the wsgi.py from the module parameter. So here it means that my wsgi.py is located in /home/myapp/prod/myapp/myapp/wsgi.py
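If the path/module split is confusing, this is roughly the model (my own illustration, not Unit's actual code): path goes onto the Python import path, module is a dotted import, and the module's application callable is what gets served. The self-contained sketch below builds a throwaway layout mirroring /home/myapp/prod/myapp/myapp/wsgi.py to show how the two settings combine:

```python
import importlib
import os
import sys
import tempfile

def load_wsgi_application(path, module):
    """Roughly what Unit's "path" and "module" settings mean: put `path`
    on sys.path, import `module`, and use its `application` callable."""
    sys.path.insert(0, path)
    try:
        mod = importlib.import_module(module)
    finally:
        sys.path.remove(path)
    return mod.application

# Build a throwaway layout: <root>/myapp/wsgi.py, like the real project.
root = tempfile.mkdtemp()          # stands in for /home/myapp/prod/myapp
pkg = os.path.join(root, "myapp")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "wsgi.py"), "w") as f:
    f.write('application = "dummy-wsgi-callable"\n')

app = load_wsgi_application(root, "myapp.wsgi")  # the JSON's path + module
# app is now whatever myapp/wsgi.py exposes as `application`
```

The dummy string stands in for the real WSGI callable that django-admin startproject generates in wsgi.py.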

Now we’re ready to inject our Unit configuration with curl:

# curl -X PUT -d @myapp.unit.json --unix-socket /var/run/control.unit.sock http://localhost/
{
"success": "Reconfiguration done."
}
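The control API is plain HTTP over a Unix socket, so curl isn't mandatory. Here's a small Python sketch (my own helper, not part of Unit) that performs the same PUT using only the standard library:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client.HTTPConnection that connects to a Unix-domain socket,
    such as Unit's /var/run/control.unit.sock."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

def put_unit_config(config, socket_path="/var/run/control.unit.sock"):
    """PUT a configuration dict to Unit's control API (like the curl call).
    Error handling omitted for brevity."""
    conn = UnixHTTPConnection(socket_path)
    try:
        conn.request("PUT", "/", body=json.dumps(config),
                     headers={"Content-Type": "application/json"})
        return json.loads(conn.getresponse().read())
    finally:
        conn.close()
```

Calling put_unit_config(config) with the JSON above (run as root, or whichever user owns the control socket) should return the same {"success": "Reconfiguration done."} payload.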

Great, now we need our good ol’ Nginx web server as a web proxy in front of Nginx Unit.

3. Install and configure Nginx with Let’s Encrypt

Let’s start by installing the Nginx webserver:

# apt install nginx

To configure Nginx, we will define an upstream receiving the requests from the Nginx web server. We also define a /static/ location for the Django static directory.

Here is the Nginx configuration you can put in /etc/nginx/conf.d:

upstream unit_backend {
server 127.0.0.1:8300;
}

server {
listen 80;
server_name myapp.com;
return 301 https://myapp.com$request_uri;
}

server {
listen 443 ssl;
ssl_certificate /etc/letsencrypt/live/myapp.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myapp.com/privkey.pem;
server_name myapp.com;
access_log /var/log/nginx/myapp.com.access.log;
error_log /var/log/nginx/myapp.error.log;
root /home/myapp/prod/myapp;
location = /favicon.ico { access_log off; log_not_found off; }
location /static {
root /home/myapp/prod/myapp;
}
location / {
proxy_pass http://unit_backend;
proxy_set_header Host $host;
}
location /.well-known {
allow all;
}
}

Before starting Nginx (stop it if it is running), we’ll get our SSL certificate from Let’s Encrypt.

# certbot certonly -d myapp.com

Spin a temporary web server and get your certificate.

Now we’re almost ready. Start the Nginx web server:

# systemctl start nginx

4. Configure Django for production

Your Django settings file, here /home/myapp/prod/myapp/myapp/settings.py, should use paths that exist on your server, e.g. you should have the following STATIC_ROOT in the settings.py of your app:

STATIC_ROOT = '/home/myapp/prod/myapp/static/'

Pitfall here: the root in the Nginx configuration for the static files we wrote above is one level up: /home/myapp/prod/myapp. Use the correct path or your static files won’t appear.

Just a last step for Django: at the root of your Django app, you need to collect the static files in the dedicated directory with the following command:

$ python3 manage.py collectstatic

Conclusion

This setup runs in production. Apart from the two pitfalls described above, it’s quite straightforward to set up. If you encounter any error, please write a comment below and I’ll fix the article.

About Me

Carl Chenet, Free Software Indie Hacker, Founder of LinuxJobs.fr, a job board for Free and Open Source Jobs in France.

Follow Me On Social Networks

Cryptogram — Baseball Code

Info on the coded signals used by the Colorado Rockies.

TED — More TED2018 conference shorts to amuse and amaze

Even in the Age of Amazement, sometimes you need a break between talks packed with fascinating science, tech, art and so much more. That’s where interstitials come in: short videos that entertain and intrigue, while allowing the brain a moment to reset and ready itself to absorb more information. For this year’s conference, TED commissioned and premiered four short films made just for the conference. Check out those films here! Mixed in with our originals, curators Anyssa Samari and Jonathan Wells hand-picked even more videos — animations, music, even cool ads — to play throughout the week. Here’s the program of shorts they found, from creative people all around the world:

The short: Jane Zhang: “Dust My Shoulders Off.” A woman having a bad day is transported to a world of famous paintings where she has a fantastic adventure.
The creator: Outerspace Leo
Shown during: Session 2, After the end of history …

The short: “zoom(art).” A kaleidoscopic, visually compelling journey of artificial intelligence creating beautiful works of art.
The creator: Directed and programmed by Alexander Mordvintsev, Google Research
Shown during: Session 2, After the end of history …

The short: “20syl – Kodama.” A music video of several hands playing multiple instruments (and drawing a picture) simultaneously to create a truly delicious electronic beat.
The creators: Mathieu Le Dude & 20syl
Shown during: Session 3, Nerdish Delight

The short: “If HAL-9000 was Alexa.” 2001: A Space Odyssey seems a lot less sinister (and a lot more funny) when Alexa can’t quite figure out what Dave is saying.
The creator: ScreenJunkies
Shown during: Session 3, Nerdish Delight

The short: “Maxine the Fluffy Corgi.” A narrated day in the life of an adorable pup named Maxine who knows what she wants.
The creator: Bryan Reisberg
Shown during: Session 3, Nerdish Delight

The short: “RGB FOREST.” An imaginative, colorful and geometric jaunt through the woods set to jazzy electronic music.
The creator: LOROCROM
Shown during: Session 6, What on earth do we do?

The short: “High Speed Hummingbirds.” Here’s your chance to watch the beauty and grace of hummingbirds in breathtaking slow motion.
The creator: Anand Varma
Shown during: Session 6, What on earth do we do?

The short: “Cassius ft. Cat Power & Pharrell Williams | Go Up.” A split screen music video that cleverly subverts and combines versions of reality.
The creator: Alex Courtès
Shown during: Session 7, Wow. Just wow.

The short: “Blobby.” A stop motion film about a man and a blob and the peculiar relationship they share.
The creator: Laura Stewart
Shown during: Session 7, Wow. Just wow.

The short: “WHO.” David Byrne and St. Vincent dance and sing in this black-and-white music video about accidents and consequences.
The creator: Martin de Thurah
Shown during: Session 8, Insanity. Humanity.

The short: “MAKIN’ MOVES.” When music makes the body move in unnatural, impossible ways.
The creator: Kouhei Nakama
Shown during: Session 9, Body electric

The short: “The Art of Flying.” The beautiful displays the Common Starling performs in nature.
The creator: Jan van IJken
Shown during: Session 9, Body electric

The short: “Kiss & Cry.” The heart-rending story of Giselle, a woman who lives and loves and wants to be loved. (You’ll never guess who plays the heroine.)
The creators: Jaco Van Dormael and choreographer Michèle Anne De Mey
Shown during: Session 10, Personally speaking

The short: “Becoming Violet.” The power of the human body, in colors and dance.
The creator: Steven Weinzierl
Shown during: Session 10, Personally speaking

The short: “Golden Castle Town.” A woman is transported to another world and learns to appreciate life anew.
The creator: Andrew Benincasa
Shown during: Session 10, Personally speaking

The short: “Tom Rosenthal | Cos Love.” A love letter to love that is grand and a bit melancholic.
The creator: Kathrin Steinbacher
Shown during: Session 11, What matters

TED — Insanity. Humanity. Notes from Session 8 at TED2018

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. Photo: Ryan Lash / TED

The seven speakers lived up to the two words in the title of the session. Their talks showcased both our collective insanity — the algorithmically-assembled extremes of the Internet — and our humanity — the values and desires that extremists astutely tap into — along with some speakers combining the two into a glorious salad. Let’s dig in.

Artificial Intelligence = artificial stupidity. How does a sweetly-narrated video of hands unwrapping Kinder eggs garner 30 million views and spawn more than 10 million imitators? Welcome to the weird world of YouTube children’s videos, where an army of content creators use YouTube “to hack the brains of very small children, in return for advertising revenue,” as artist and technology critic James Bridle describes. Marketing ethics aside, this world seems innocuous on the surface but go a few clicks deeper and you’ll find a surreal and sinister landscape of algorithmically-assembled cartoons, nursery rhymes built from keyword combos, and animated characters and human actors being tortured, assaulted and killed.
Automated copycats mimic trusted content providers “using the same mechanisms that power Facebook and Google to create ‘fake news’ for kids,” says Bridle. He adds that feeding the situation is the fact “we’re training them from birth to click on the very first link that comes along, regardless of where the source is.” As technology companies ignore these problems in their quest for ad dollars, the rest of us are stuck in a system in which children are sent down auto-playing rabbit holes where they see disturbing videos filled with very real violence and very real trauma — and get traumatized as a result. Algorithms are touted as the fix, but Bridle declares, “Machine learning, as any expert on it will tell you, is what we call software that does stuff we don’t really understand, and I think we have enough of that already.” Instead, “we need to think of technology not as a solution to all our problems but as a guide to what they are.”

After his talk, TED Head of Curation Helen Walters has a blunt question for Bridle: “So are we doomed?” His realistic but ungrim answer: “We’ve got a hell of a long way to go, but talking is the beginning of that process.”

Technology that fights extremism and online abuse. Over the last few years, we’ve seen geopolitical forces wreak havoc with their use of the Internet. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. “Radicalization isn’t a yes or no choice,” she says.
“It’s a process, during which people have questions about ideology, religion — and they’re searching online for answers, which is an opportunity to reach them.” In 2016, Green collaborated with Moonshot CVE to pilot a new approach called the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups and used that information to create a campaign that deployed targeted advertising to reach people susceptible to ISIS’s recruiting and show them videos to counter those messages. Available in English and Arabic, the eight-week pilot program reached more than 300,000 people.

In another project, she and her team looked for a way to combat online abuse. Partnering across Google with Wikipedia and the New York Times, the team trained machine-learning models to understand the emotional impact of language — specifically, to predict comments that were likely to make someone leave a conversation — and to give commenters real-time feedback about how their words might land. Due to the onslaught of online vitriol, the Times had previously enabled commenting on only 10 percent of homepage stories, but this strategy led it to open up all homepage stories to comments. “If we ever thought we could build technology insulated from the dark side of humanity, we were wrong,” Green says. “If technology has any hope of overcoming today’s challenges, we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” In a post-talk Q & A, Green adds that banning certain keywords isn’t enough of a solution: “We need to combine human insight with innovation.”

Living life means acknowledging death. Philosopher-comedian Emily Levine starts her talk with some bad news — she’s got stage 4 lung cancer — but says there’s no need to “oy” or “ohhh” over her: she’s okay with it. After all, explains Levine, life and death go hand in hand; you can’t have one without the other.
In fact, therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Levine muses about the scientists who are attempting to thwart death — she dubs them the Anti-Life Brigade — and calls them ungrateful and disrespectful in their efforts to wrest control from nature. “We don’t live in the clockwork universe,” she says wryly. “We live in a banana peel universe,” where our attempts at mastery will always come up short against mystery. She has come to view life as a “gift that you enrich as best you can and then give back.” And just as we should appreciate that life’s boundary line stops abruptly at death, we should accept our own intellectual and physical limits. “We won’t ever be able to know everything or control everything or predict everything,” says Levine. “Nature is like a self-driving car.” We may have some control, but we’re not at the wheel.

A high-schooler working on the future of AI. Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years — he’s now just 18 — he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — machines try every possible solution, even ones too absurd for a human to imagine, until they find the thing that works best to solve a single discrete problem. That can create computers that are champions at Go or Q-Bert, but it really doesn’t create general intelligence. So Frans is instead conceptualizing a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives and think with these machines.
What can he and these new brains accomplish together?

Come fly with her. From a young age, action and hardware engineer Elizabeth Streb wanted to fly like, well, a fly or a bird. It took her years of painful experimentation to realize that humans can’t swoop and veer like them, but perhaps she could discover how humans could fly. Naturally, it involves more falling than staying airborne. She has jumped through broken glass and toppled from great heights in order to push the bounds of her vertical comfort zone. With her Streb Extreme Action company, she’s toured the world, bringing the delight and wonder of human flight to audiences. Along the way, she realized, “If we wanted to go higher, faster, sooner, harder and make new discoveries, it was necessary to create our very own space-ships,” so she’s also built hardware to provide a boost. More recently, she opened Brooklyn’s Streb Lab for Action Mechanics (SLAM) to instruct others. “As it turns out, people don’t just want to dream about flying, nor do they want to watch people like us fly; they want to do it, too, and they can,” she says. In teaching, she sees “smiles become more common, self-esteem blossom, and people get just a little bit braver. People do learn to fly, as only humans can.”

Calling all haters. “You’re everything I hate in a human being” — that’s just one of the scores of nasty messages that digital creator Dylan Marron receives every day. While his various video series such as “Every Single Word” and “Sitting in Bathrooms With Trans People” have racked up millions of views, they’ve also sent a slew of Internet poison in his direction. “At first, I would screenshot their comments and make fun of their typos, but this felt elitist and unhelpful,” recalls Marron.
Over time, he developed an unexpected coping mechanism: he calls the people responsible for leaving hateful remarks on his social media, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace — you would have noticed — he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” And he stresses that his solution is not right for everyone. In a Q & A afterward, he says that some people have told him that his podcast just gives a platform to those espousing harmful ideologies. Marron emphasizes, “Empathy is not endorsement.” His conversations represent his own way of responding to online hate, and he says, “I see myself as a little tile in the mosaic of activism.”

Rebuilding trust at work. Trust is the foundation for everything we humans do, but what do we do when it is broken? It’s a problem that fascinates Frances Frei, a professor at Harvard Business School who recently spent six months trying to restore trust at Uber. According to Frei, trust is a three-legged stool that rests on authenticity, logic, and empathy. “If any one of these three gets shaky, if any one of these three wobbles, trust is threatened,” she explains. So which wobbles did Uber have? All of them, according to Frei. Authenticity was the hardest to fix – but that’s not uncommon.
“It is still much easier to coach people to fit in; it is still much easier to reward people when they say something that you were going to say,” Frei says, “but when we figure out how to celebrate difference and how to let people bring the best version of themselves forward, well, holy cow, is that the world I want my sons to grow up in.” You can read more about her talk here.

TED — What matters: Notes from Session 11 of TED2018

Reed Hastings, the head of Netflix, listens to a question from Chris Anderson during a sparky onstage Q&A on the final morning of TED2018, April 14, 2018. Photo: Ryan Lash / TED

What a week. We’ve heard so much, from dystopian warnings to bold visions for change. Our brains are full. Almost. In this session we pull back to the human stories that underpin everything we are, everything we want. From new ways to set goals and move business forward, to unabashed visions for joy and community, it’s time to explore what matters.

The original people of this land. One important thing to know: TED’s conference home of Vancouver is built on un-ceded land that once belonged to First Nations people. So this morning, two DJs from A Tribe Called Red start this session by remembering and honoring them, telling First Nations stories in beats and images in a set that expands on the concept of Halluci Nation, inspired by the poet, musician and activist John Trudell. In Trudell’s words: “We are the Halluci Nation / Our DNA is of earth and sky / Our DNA is of past and future.”

The power of why, what and how. Our leaders and our institutions are failing us, and it’s not always because they’re bad or unethical. Sometimes, it’s simply because they’re leading us toward the wrong objectives, says venture capitalist John Doerr. How can we get back on track? The trick may be a system called OKR, developed by legendary management thinker Andy Grove.
Doerr explains that OKR stands for ‘objectives and key results’ – and setting the right ones can be the difference between success and failure. However, before you set your objective (your what) and your key results (your how), you need to understand your why. “A compelling sense of why can be the launch pad for our objectives,” he says. He illustrates the power of OKRs by sharing the stories of individuals and organizations who’ve put them into practice, including Google’s Larry Page and Sergey Brin. “OKRs are not a silver bullet. They’re not going to be a substitute for a strong culture or for stronger leadership, but when those fundamentals are in place, they can take you to the mountaintop,” he says. He encourages all of us to take the time to write down our values, our objectives, and our key results – and to do it today. “Let’s fight for what it is that really matters, because we can take OKRs beyond our businesses. We can take them to our families, to our schools, even to our government. We can hold those governments accountable,” he says. “We can get back on the right track if we can and do measure what really matters.”

What’s powering China’s tech innovation? The largest mass migration in the world occurs every year around the Chinese Spring Festival. Over 40 days, travelers — including 290 million migrant workers — take 3 billion trips all over China. Few can afford to fly, so railways strained to keep up, with crowding, fraud and drama. So the Chinese technology sector has been building everything from apps to AI to ease not only this process, but other pain points throughout society. But unlike the US, where innovation is often fueled by academia and enterprise, China’s tech innovation is powered by “an overwhelming need economy that is serving an underprivileged populace, which has been separated for 30 years from China’s economic boom.” As CEO of the South China Morning Post, Gary Liu has a front-row seat to this transformation.
As China’s introduction of a “social credit rating” system suggests, a technology boom in an authoritarian society hides a significant dark side. But the Chinese internet hugely benefits its 772 million users. It has spread deeply into rural regions, revitalizing education and creating jobs. There’s a long way to go to bring the internet to everyone in China — more than 600 million people remain offline. But wherever the internet is fueling prosperity, “we should endeavor to follow it with capital and with effort, driving both economic and societal impact all over the world. Just imagine for a minute what more could be possible if the global needs of the underserved become the primary focus of our inventions.”

Netflix and chill, the interview. The humble beginnings of Netflix paved the way to transforming how we consume content today. Reed Hastings — who started out as a high school math teacher — admits that making the shift from DVDs to streaming was a big leap. “We weren’t confident,” he admits in his interview with TED Curator Chris Anderson. “It was scary.” Obviously, it paid off over time, with 117 million subscribers (and growing), more than $11 billion in revenue (so far) and a slew of popular original content (Black Mirror, anyone?) fueled by curated algorithmic recommendations. The offerings of Netflix, Hastings says, are a mixture of candy and broccoli — and the platform allows people to decide what a proper “diet” is for them. “We get a lot of joy from making people happy,” he says. The external culture of the streaming platform reflects its internal culture as well: they’re super focused on how to run with no process, but without chaos. There’s an emphasis on freedom, responsibility and honesty (as he puts it, “disagreeing silently is disloyal”).
And though Hastings loves business — competing against the likes of HBO and Disney — he also enjoys his philanthropic pursuits supporting innovative education, such as the KIPP charter schools, and advocates for more variety in educational content. For now, he says, it’s the perfect job.

“E. Pluribus Unum” — ”Out of many, one.” It’s the motto of the United States, yet few citizens understand its meaning. Artist and designer Walter Hood calls for national landscapes that preserve the distinct identities of peoples and cultures, while still forging unity. Hood believes spaces should illuminate shared memories without glossing over past — and present — injustices. To guide his projects, Hood follows five simple guidelines. The first — “Great things happen when we exist in each other’s world” — helped fire up a Queens community garden initiative in collaboration with Bette Midler and hip-hop legend 50 Cent. “Two-ness” — or the sense of double identity faced by those who are “othered,” like women and African-Americans — lies behind a “shadow sculpture” at the University of Virginia that commemorates a forgotten, buried servant household uncovered during the school’s expansion. “Empathy” inspired the construction of a park in downtown Oakland that serves office workers and the homeless community, side-by-side. “The traditional belongs to all of us” — and to the San Francisco neighborhood of Bayview-Hunter’s Point, where Hood restored a Victorian opera house to serve the local community. And “Memory” lies at the core of a future shorefront park in Charleston, which will rest on top of Gadsden Wharf — an entry point for 40% of the United States’ slaves, where they were then “stored” in chains — that forces visitors to confront the still-resonating cruelty of our past.

The tension between acceptance and hope. When Simone George met Mark Pollock, it was eight years after he’d lost his sight. Pollock was rebuilding his identity — living a high-octane life of running marathons and racing across Antarctica to reach the South Pole. But a year after he returned from Antarctica, Pollock fell from a third-story window; he woke up paralyzed from the waist down. Pollock shares how being a realist — inspired by the writings of Admiral James Stockdale, a Vietnam POW — helped him through bleak days after this accident, when even hope seemed dangerous. George explains how she helped Pollock navigate months in the hospital; told that any sensation Pollock didn’t regain in the weeks immediately after the fall would likely never come back, the two looked to stories of others, like Christopher Reeve, who had pushed beyond what was understood as possible for those who are paralyzed. “History is filled with the kinds of impossible made possible through human endeavor,” Pollock says. So he started asking: Why can’t human endeavor cure paralysis in his lifetime? In collaboration with a team of engineers in San Francisco, who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who had developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test, proving that progress is definitely still possible. For now, “I accept the wheelchair, it’s almost impossible not to,” says Pollock. “We also hope for another life — a life where we have created a cure through collaboration, a cure that we’re actively working to release from university labs around the world and share with everyone who needs it.”

The pursuit of joy, not happiness. “How do tangible things make us feel intangible joy?” asks designer Ingrid Fetell Lee. She pursued this question for ten years to understand how the physical world relates to the mysterious, quixotic emotion of joy. It turns out, the physical can be a remarkable, renewable resource for fostering a happier, healthier life. There isn’t just one type of joy, and its definition morphs from person to person — but psychologists, broadly speaking, describe joy as an intense, momentary experience of positive emotion (or, simply, as something that makes you want to jump up and down). However, joy shouldn’t be conflated with happiness, which measures how good we feel over time. So, Lee asked around about what brings people joy and eventually had a notebook filled with things like beach balls, treehouses, fireworks, googly eyes and ice cream cones with rainbow sprinkles, and realized something significant: the patterns of joy have roots in evolutionary history. Things like symmetrical shapes, bright colors, an attraction to abundance and multiplicity, and a feeling of lightness or elevation — this is what’s universally appealing. Joy lowers blood pressure, improves our immune system and even increases productivity. She began to wonder: should we use these aesthetics to help us find more opportunities for joy in the world around us? “Joy begins with the senses,” she says. “What we should be doing is embracing joy, and finding ways to put ourselves in the path of it more often.”

And that’s a wrap. Speaking of joy, Baratunde Thurston steps out to close this conference with a wrap that shouts out the diversity of this year’s audience but also nudges the un-diverse selection of topics: next year, he asks, instead of putting an African child on a slide, can we put her onstage to speak for herself? He winds together the themes of the week, from the terrifying — killer robots, octopus robots, genetically modified piglets — to the badass, the inspiring and the mind-opening. Are you not amazed?

Planet Debian — Olivier Berger: Added docker container to my org-teaching framework to ease org-mode exports

I’ve improved the org-teaching framework a bit in order to prepare for the next edition of the CSC4101 classes.

I’ve now added a docker container which is in charge of performing the HTML or PDF exports of the slides (using org-reveal) or handbooks (using LaTeX).

Emacs and org-mode are still advised for editing contents, but having this container in the loop ensures that colleagues are able to preview changes to the teaching material, and I’m no longer a bottleneck for generating the handouts. This also allows exporting in a reproducible way, which doesn’t depend on my Emacs config tweaks.
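For readers curious what such a container looks like: the sketch below is an illustrative guess, not the actual org-teaching image — the Debian package list, init file and export function named here are assumptions. The idea is simply that batch-mode Emacs inside the container carries its own minimal config, so exports don’t depend on anyone’s personal setup.

```dockerfile
# Hypothetical sketch of an org-mode export image (NOT the real
# org-teaching container; package names and file paths are assumptions).
FROM debian:stable-slim

# Batch Emacs for the org exports, plus a LaTeX toolchain for the
# PDF handbooks.
RUN apt-get update && apt-get install -y --no-install-recommends \
        emacs-nox latexmk texlive-latex-extra \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /work

# export-init.el (assumed name) would load ox-reveal and any export
# settings, replacing personal ~/.emacs tweaks with versioned config.
CMD ["emacs", "--batch", "-l", "export-init.el", "slides.org", \
     "-f", "org-reveal-export-to-html"]
```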

I’ve also added Gitlab pages to the project’s CI so that the docs are updated live at https://olberger.gitlab.io/org-teaching/.
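A GitLab Pages job for this kind of setup might look something like the following `.gitlab-ci.yml` fragment — the image name and make targets are illustrative guesses, not the project’s real CI configuration; the one hard requirement from GitLab is that the job be named `pages` and publish a `public/` artifact.

```yaml
# Hypothetical sketch of a GitLab Pages job (not the project's real CI;
# image name and make target are assumptions).
pages:
  image: registry.gitlab.com/olberger/org-teaching/exporter
  script:
    - make html          # run the org-mode exports inside the container
    - mv out/ public/    # GitLab Pages only publishes the public/ dir
  artifacts:
    paths:
      - public
  only:
    - master
```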

It’s probably not yet ready for use by anyone else, but I’d be glad to get feedback.

Worse Than Failure — The Big Balls of…

The dependency graph of your application can provide a lot of insight into how objects call each other. In a well-designed application, this graph is mostly acyclic, and no single node has more than a handful of edges coming off of it. For the kinds of applications we talk about here, on the other hand, we have names for their graphs: the Enterprise Dependency and the Big Ball of Yarn.

Thomas K introduces us to an entirely new iteration: the Big Ball of Mandelbrot.

This gives new meaning to points “on a complex plane”.

What you’re seeing here is the relationship between stored procedures and tables. Circa 1995, when this application shambled into something resembling life, the thinking was, “If we put all the business logic in stored procedures, it’ll be easy to slap new GUIs on there as technology changes!”

Of course, the relationship between what the user sees on the screen and the underlying logic which drives that display means that as they changed the GUI, they also needed to change the database. Over the course of 15 years, the single cohesive data model ubercomplexificaticfied itself as each customer needed a unique GUI with a unique feature set which mandated unique tables and stored procedures.

By the time Thomas came along to start a pseudo-greenfield GUI in ASP.Net, the first and simplest feature he needed to implement involved calling a 3,000 line stored procedure which required over 100 parameters.


Planet Debian — Reproducible builds folks: Reproducible Builds: Weekly report #156

Here’s what happened in the Reproducible Builds effort between Sunday April 15 and Saturday April 21 2018:

• Holger Levsen announced the preliminary result of our poll for our logo, which was subsequently verified by Chris Lamb. The winner was “#6”, shown above.

• Chris Lamb will present at foss-north 2018 on Monday April 23rd in Gothenburg, Sweden to speak about diffoscope, our in-depth “diff-on-steroids” to analyse reproducible issues in packages. He will then be keynoting at FLOSSUK 2018 in Edinburgh, Scotland on April 26th to speak about reproducible builds more generally.

• Jan Bundesmann, Reiner Herrmann and Holger Levsen wrote an article about Reproducible Builds titled Aus der Schablone (“From the template”) for the May issue of the German “iX” magazine.

• Holger Levsen began a discussion with the Debian System Administrators regarding redirecting this blog in the future away from the (deprecated) Alioth service. Chris Lamb subsequently started on the migration work.

Packages reviewed and fixed, and bugs filed

In addition, Chris Lamb’s patch to the Freeland VPN client was merged upstream and build failure bugs were reported by Adrian Bunk (48), Paul Gevers (5) and Rafael Laboissière (1).

jenkins.debian.net development

A large number of changes were made to our Jenkins-based testing framework, including:

• Chris Lamb:
• Mattia Rizzolo:

Reviews of unreproducible packages

43 package reviews have been added, 49 have been updated and 97 have been removed this week, adding to our knowledge about identified issues.

One new issue was added by Chris Lamb: build_path_in_index_files_generated_by_qdoc. In addition, three issue types were removed (random_ispell_hash_files, randomness_in_python_setuptools_pkg_info & timestamps_in_documentation_generated_by_asciidoctora) and one was updated (timestamp_in_pear_registry_files).

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian — Holger Levsen: 20180423-technohippieparadise

So I'm in some 'jungle' in Brazil, enjoying a good time with some friends whom another friend jokingly labeled cryptohippies, enjoying the silence, nature, good food, some cats & dogs and 3G internet. Life is good here.

And then we decided to watch "Stare into the lights my pretties" and while it is a very good and insightful movie, it's also disturbing to see just how much we, as human societies, have changed ourselves mindlessly (or rather, out of our own minds) in very recent history.

Even though I'm not a smartphone user myself, and while seemingly aware and critical of many of the changes of the last two decades, I still found the movie eye-opening. Now if only there weren't 100 distractions per day, I would maybe be able to build on this. Or maybe I need to watch it every week, though that wouldn't work either, as the movie explains so well...

The movie also reminded me why I dislike being cc:ed on email so much (unless it's urgent, or I'm subscribed to the list being posted to). Usually during the day I (try to) ignore list mails, but I do check my personal inboxes. And if someone cc:s me, this breaks my line of thought. So it seems I still need to get better at ignoring stuff, even when it's pushed to me. Maybe especially then. (And hints for good .procmail rules for this are much appreciated.)
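For what it's worth, one common .procmailrc approach is sketched below — it assumes list mail carries a List-Id header, and the folder layout is purely illustrative. The duplicate recipe is the classic one from the procmailex man page.

```procmail
# Classic duplicate filter (see procmailex(5)): silently drop a cc'ed
# copy whose Message-ID was already delivered via the list.
:0 Wh: msgid.lock
| formail -D 8192 .msgid.cache

# File anything carrying a List-Id header into a per-list folder, even
# when I'm also in Cc:, so it stays out of the personal inbox.
# (The \/ token captures the list id into $MATCH; folder layout is an
# illustrative assumption.)
:0
* ^List-Id:.*<\/[^>]+
lists/$MATCH
```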

Another interesting point: while the number of people addicted to nicotine has been going down globally lately, the number of network addicts now far outnumbers them. And yet the long-term effects of being online almost 24/365 have not been researched at all. The cigarette companies claimed that most doctors smoke. The IT industry claims it's normal to be online. What's your wakeup2smartphone time? Do you check email every day?

This movie also made me wonder what Debian's role will, can and should be in this future. (And of course I don't only mean Debian, but free software and free societies in general.)

So, this movie brings up many questions. (And it nicely explains why people would rather not have them brought up.) So go watch this movie! You will be touched, you will think, and you will check your email/smartphone afterwards.

(Lastly, of course it's ironic that the movie is on YouTube. I learned that to download subtitles you need to tell youtube-dl to do so, which is easiest using --all-subs. And btw, youtube-dl-gui needs help running with Python 3 and thus with getting into Debian.)

Update: it's on archive.org as well.

,

Planet Debian — Benjamin Mako Hill: Is English Wikipedia’s ‘rise and decline’ typical?

This graph shows the number of people contributing to Wikipedia over time:

The number of active Wikipedia contributors exploded, suddenly stalled, and then began gradually declining. (Figure taken from Halfaker et al. 2013)

The figure comes from “The Rise and Decline of an Open Collaboration System,” a well-known 2013 paper that argued that Wikipedia’s transition from rapid growth to slow decline in 2007 was driven by an increase in quality control systems. Although many people have treated the paper’s finding as representative of broader patterns in online communities, Wikipedia is a very unusual community in many respects. Do other online communities follow Wikipedia’s pattern of rise and decline? Does increased use of quality control systems coincide with community decline elsewhere?

In a paper that my student Nathan TeBlunthuis is presenting Thursday morning at the Association for Computing Machinery (ACM) Conference on Human Factors in Computing Systems (CHI),  a group of us have replicated and extended the 2013 paper’s analysis in 769 other large wikis. We find that the dynamics observed in Wikipedia are a strikingly good description of the average Wikia wiki. They appear to reoccur again and again in many communities.

The original “Rise and Decline” paper (we’ll abbreviate it “RAD”) was written by Aaron Halfaker, R. Stuart Geiger, Jonathan T. Morgan, and John Riedl. They analyzed data from English Wikipedia and found that Wikipedia’s transition from rise to decline was accompanied by increasing rates of newcomer rejection as well as the growth of bots and algorithmic quality control tools. They also showed that newcomers whose contributions were rejected were less likely to continue editing and that community policies and norms became more difficult to change over time, especially for newer editors.

Our paper, just published in the CHI 2018 proceedings, replicates most of RAD’s analysis on a dataset of 769 of the largest wikis from Wikia that were active between 2002 and 2010. We find that RAD’s findings generalize to this large and diverse sample of communities.

We can walk you through some of the key findings. First, the growth trajectory of the average wiki in our sample is similar to that of English Wikipedia. As shown in the figure below, an initial period of growth stabilizes and leads to decline several years later.

The average Wikia wiki also experiences a period of growth followed by stabilization and decline (from TeBlunthuis, Shaw, and Hill 2018).

We also found that newcomers on Wikia wikis were reverted more and continued editing less. As on Wikipedia, the two processes were related. Similar to RAD, we also found that newer editors were more likely to have their contributions to the “project namespace” (where policy pages are located) undone as wikis got older. Indeed, the specific estimates from our statistical models are very similar to RAD’s for most of these findings!

There were some parts of the RAD analysis that we couldn’t reproduce in our context. For example, there are not enough bots or algorithmic editing tools in Wikia to support statistical claims about their effects on newcomers.

At the same time, we were able to do some things that the RAD authors could not.  Most importantly, our findings discount some Wikipedia-specific explanations for a rise and decline. For example, English Wikipedia’s decline coincided with the rise of Facebook, smartphones, and other social media platforms. In theory, any of these factors could have caused the decline. Because the wikis in our sample experienced rises and declines at similar points in their life-cycle but at different points in time, the rise and decline findings we report seem unlikely to be caused by underlying temporal trends.

The big communities we study seem to have consistent “life cycles” where stabilization and/or decay follows an initial period of growth. The fact that the same kinds of patterns happen on English Wikipedia and other online groups implies a more general set of social dynamics at work that we do not think existing research (including ours) explains in a satisfying way. What drives the rise and decline of communities more generally? Our findings make it clear that this is a big, important question that deserves more attention.

We hope you’ll read the paper and get in touch by commenting on this post or emailing Nate if you’d like to learn or talk more. The paper is available online and has been published under an open access license. If you really want to get into the weeds of the analysis, we will soon publish all the data and code necessary to reproduce our work in a repository on the Harvard Dataverse.

Nate TeBlunthuis will be presenting the project this week at CHI in Montréal on Thursday April 26 at 9am in room 517D.  For those of you not familiar with CHI, it is the top venue for Human-Computer Interaction. All CHI submissions go through double-blind peer review and the papers that make it into the proceedings are considered published (same as journal articles in most other scientific fields). Please feel free to cite our paper and send it around to your friends!

This blog post, and the open access paper that it describes, is a collaborative project with Aaron Shaw, that was led by Nate TeBlunthuis. A version of this blog post was originally posted on the Community Data Science Collective blog. Financial support came from the US National Science Foundation (grants IIS-1617129,  IIS-1617468, and GRFP-2016220885 ), Northwestern University, the Center for Advanced Study in the Behavioral Sciences at Stanford University, and the University of Washington. This project was completed using the Hyak high performance computing cluster at the University of Washington.

Krebs on Security — Transcription Service Leaked Medical Records

MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.

On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.

What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.

This exposed administrative page from MEDantex’s site granted anyone complete access to physician files, as well as the ability to add and delete authorized users.

Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.

Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas-based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow incorporated into that rebuild.

“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”

It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.

Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.

The clients listed on MEDantex’s site include New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery, Ala.; Allen County Hospital in Iola, Kan.; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.

Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.

Verizon says ransomware attacks accounted for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.

“Human error is a major contributor to those stats,” the report concluded.

Source: Verizon Business 2018 Data Breach Investigations Report.

According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.

Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.

“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”

Ransomware victims may also be able to find assistance in unlocking data without paying from nomoreransom.org.

Thanks to your feedback asking for more discussion of AdSense policies, we have created some new content on our AdSense YouTube channel in a series called “Let’s talk about Policy!”.

These standards were created by the Coalition for Better Ads to improve ad experiences for users across the web. Google is a member of the Coalition alongside other stakeholders within the industry. The Better Ads Standards were informed by extensive consumer research aimed at minimizing annoying ad experiences across devices. The Standards currently apply to users within North America and Europe, but we are advising all our publishers to abide by them, both to provide a quality user experience for their visitors and to minimize the need to make changes in the future.

Be sure to subscribe to the channel to ensure you don’t miss an episode.

Cryptogram — Russia is Banning Telegram

Russia has banned the secure messaging app Telegram. It's making an absolute mess of the ban -- blocking 16 million IP addresses, many belonging to the Amazon and Google clouds -- and it's not even clear that it's working. But, more importantly, I'm not convinced Telegram is secure in the first place.

Such a weird story. If you want secure messaging, use Signal. If you're concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

Cryptogram — Yet Another Biometric: Ear Shape

This acoustic technology identifies individuals by their ear shapes. No information about either false positives or false negatives.

Worse Than Failure — Representative Line: The Truth About Comparisons

We often point to dates as one of the example data types which is so complicated that most developers can’t understand them. This is unfair, as pretty much every data type has weird quirks and edge cases which make for unexpected behaviors. Floating point rounding, integer overflows and underflows, various types of string representation…

But file-not-founds excepted, people have to understand Booleans, right?

Of course not. We’ve all seen code like:

if (booleanFunction() == true) …

Or:

if (booleanValue == true) {
return true;
} else {
return false;
}

Someone doesn’t understand what booleans are for, or perhaps what the return statement is for. But Paul T sends us a representative line which constitutes a new twist on an old chestnut.

if ( Boolean.TRUE.equals(functionReturningBooleat(pa, isUpdateNetRevenue)) ) …

This is the second most Peak Java Way to test if a value is true. The Peak Java version, of course, would be to use an AbstractComparatorFactoryFactory&lt;Boolean&gt; to construct a Comparator instance with an injected EqualityComparison object. But this is pretty close: take the Boolean.TRUE constant, use the equals method inherited by all objects, which means transparently boxing the boolean returned from the function into an object type, and then executing the comparison.

The if (boolean == true) return true; pattern is my personal nails-on-the-chalkboard code block. It’s not awful, it just makes me cringe. Paul’s submission is like an angle-grinder on a chalkboard.
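For contrast, a minimal sketch of the idiomatic forms (the class and method names here are illustrative, not from Paul’s codebase):

```java
// Illustrative only: names are hypothetical, not from Paul T's submission.
public class BooleanIdioms {

    static boolean booleanFunction() {
        return true;
    }

    // Instead of: if (booleanFunction() == true) ...
    static String check() {
        if (booleanFunction()) {      // a boolean already works as a condition
            return "yes";
        }
        return "no";
    }

    // Instead of: if (booleanValue == true) { return true; } else { return false; }
    static boolean passThrough(boolean booleanValue) {
        return booleanValue;          // the value already is the answer
    }
}
```

A boolean expression is its own truth value; comparing it against true (boxed or not) adds nothing but noise.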

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Debian — Vincent Bernat: A more privacy-friendly blog

When I started this blog, I embraced some free services, like Disqus or Google Analytics. These services are quite invasive for users’ privacy. Over the years, I have tried to correct this to reach a point where I do not rely on any “privacy-hostile” services.

Analytics🔗

• Before: Google Analytics
• After: none

I opted for a simpler solution: no analytics. It also enables me to think that my blog attracts thousands of visitors every day.

Fonts🔗

Google Fonts is a very popular font library and hosting service, which relies on the generic Google Privacy Policy. The google-webfonts-helper service makes it easy to self-host any font from Google Fonts. Moreover, with help from pyftsubset, I include only the characters used in this blog. The font files are lighter and more complete: no problem spelling “Antonín Dvořák”.

Videos🔗

• Before: YouTube
• After: self-hosted

Some articles are supported by a video (like “OPL2LPT: an AdLib sound card for the parallel port“). In the past, I used YouTube, mostly because it was the only free platform with an option to disable ads. Streaming on-demand videos is usually deemed quite difficult. For example, if you just use the <video> tag, you may push too large a video to people with a slow connection. However, it is not that hard, thanks to hls.js, which makes it possible to deliver the video sliced into segments available at different bitrates. Users with JavaScript disabled are still served a progressive version of medium quality.

In “Self-hosted videos with HLS”, I explain this approach in more detail.
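A minimal sketch of that fallback chain (my own illustration: the file paths and element id are placeholders, not the ones used on this blog):

```html
<video id="video" controls preload="none"></video>
<script src="/js/hls.js"></script>
<script>
  var video = document.getElementById("video");
  var url = "/videos/example/index.m3u8";
  if (video.canPlayType("application/vnd.apple.mpegurl")) {
    // Safari and iOS play HLS natively, no JavaScript library needed.
    video.src = url;
  } else if (Hls.isSupported()) {
    // hls.js uses Media Source Extensions to play the segmented stream.
    var hls = new Hls();
    hls.loadSource(url);
    hls.attachMedia(video);
  } else {
    // No MSE and no native HLS: fall back to a progressive MP4.
    video.src = "/videos/example/progressive.mp4";
  }
</script>
```

With JavaScript disabled, only the plain <video> element remains, which is why a progressive version must also be kept around.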

Comments🔗

For some time, I thought about implementing my own comment system around Atom feeds. Each page would get its own feed of comments. A piece of JavaScript would turn these feeds into HTML, and comments could still be read without JavaScript, thanks to the default rendering provided by browsers. People could also subscribe to these feeds: no need for mail notifications! The feeds would be served as static files and updated on new comments by a small piece of server-side code. Again, this could work without JavaScript.

I still think this is a great idea. But I didn’t feel like developing and maintaining a new comment system. There are several self-hosted alternatives, notably Isso and Commento. Isso is a bit more featureful, notably with an (imperfect) importer for Disqus comments. Both are struggling with maintenance and are trying to become sustainable with a paid hosted version.1 Commento is more privacy-friendly as it doesn’t use cookies at all. However, cookies from Isso are not essential and can be filtered with nginx:

proxy_hide_header Set-Cookie;


Isso currently has no mail notifications, but I have added an Atom feed for each comment thread.

Another option would have been to not provide comments anymore. However, I had some great contributions as comments in the past and I also think they can work as some kind of peer review for blog articles: they are a weak guarantee that the content is not totally wrong.

Search engine🔗

A way to provide a search engine for a personal blog is to embed a form for a public search engine, like Google. That’s what I did. I also slapped some JavaScript on top of that to make it look less like Google.

The solution here is easy: switch to DuckDuckGo, which lets you customize the search experience a bit:

<form id="lf-search" action="https://duckduckgo.com/">
<input type="hidden" name="kf" value="-1">
<input type="hidden" name="kaf" value="1">
<input type="hidden" name="k1" value="-1">
<input type="hidden" name="sites" value="vincent.bernat.im/en">
<input type="submit" value="">
<input type="text" name="q" value="" autocomplete="off" aria-label="Search">
</form>


The JavaScript part is also removed, as DuckDuckGo doesn’t provide an API. Since it is unlikely that more than three people will use the search engine in a year, it seems wise not to spend too much time on this non-essential feature.

Newsletter🔗

MailChimp is a common solution for sending newsletters. It provides a simple integration with RSS feeds to trigger a mail each time new items are added to the feed. From a privacy point of view, MailChimp seems a good citizen: data collection is mainly limited to what is needed to operate the service. Privacy-conscious users can still avoid this service and use the RSS feed.

Less Java­Script🔗

• Before: third-party Java­Script code
• After: self-hosted Java­Script code

Many privacy-conscious people are disabling Java­Script or using extensions like uMatrix or NoScript. Except for comments, I was using Java­Script only for non-essential stuff:

For mathematical formulae, I have switched from MathJax to KaTeX. The latter is faster but also enables server-side rendering: it produces the same output regardless of the browser. Therefore, client-side JavaScript is not needed anymore.

For sidenotes, I have turned the Java­Script code doing the transformation into Python code, with pyquery. No more client-side Java­Script for this aspect either.

The remaining code is still here but is self-hosted.

Memento: CSP🔗

The HTTP Content-Security-Policy header controls the resources that a user agent is allowed to load for a given page. It is a safeguard and a memento for the external resources a site will use. Mine is moderately complex and shows what to expect from a privacy point of view:3

Content-Security-Policy:
default-src 'self' blob:;
script-src  'self' blob: https://d1g3mdmxf8zbo9.cloudfront.net/js/;
object-src  'self' https://d1g3mdmxf8zbo9.cloudfront.net/images/;
img-src     'self' data: https://d1g3mdmxf8zbo9.cloudfront.net/images/;
frame-src   https://d1g3mdmxf8zbo9.cloudfront.net/images/;
style-src   'self' 'unsafe-inline' https://d1g3mdmxf8zbo9.cloudfront.net/css/;
worker-src  blob:;
media-src   'self' blob: https://luffy-video.sos-ch-dk-2.exo.io;
frame-ancestors 'none';
block-all-mixed-content;


I am quite happy having been able to reach this result. 😊

1. For Isso, look at comment.sh. For Commento, look at commento.io↩︎

2. You may have noticed I am a footnote sicko and use them all the time for pointless stuff. ↩︎

3. I don’t have an issue with using a CDN like CloudFront: it is a paid service and Amazon AWS is not in the business of tracking users. ↩︎

Planet Linux Australia — Michael Still: pyconau 2018 call for proposals now open

The pyconau call for proposals is now open, and runs until 28 May. I took my teenagers to pyconau last year and they greatly enjoyed it. I hadn’t been to a pyconau in ages, and ended up really enjoying thinking about things from topic areas I don’t normally need to think about. I think expanding one’s horizons is generally a good idea.

Should I propose something for this year? I am unsure. Some random ideas that immediately spring to mind:

• something about privsep: I think a generalised way to make privileged calls in unprivileged code is quite interesting, especially in a language which is often used for systems management and integration tasks. That said, perhaps it’s too OpenStacky, given how uninterested most Python people seem to be in OpenStack talks.
• nova-warts: for a long time my hobby has been cleaning up historical mistakes made in OpenStack Nova that won’t ever rate as a major feature change. What lessons can other projects learn from a well funded and heavily staffed project that still thought that exec() was a great way to do important work? There’s definitely an overlap with the privsep talk above, but this would be more general.
• a talk about how I had to manage some code which only worked in python2, and some other code that only worked in python3 and in the end gave up on venvs and decided that Docker containers are like the ultimate venvs. That said, I suspect this is old hat and was obvious to everyone except me.
• something else I haven’t thought of.

Also, here’s an image for this post. It’s the stone henge we found at Guerilla Bay last weekend. I assume it’s in frequent use by tiny, tiny druids.

The post pyconau 2018 call for proposals now open appeared first on Made by Mikal.

,

Harald Welte — osmo-fl2k - Using USB-VGA dongles as SDR transmitter

Yesterday, during OsmoDevCon 2018, Steve Markgraf released osmo-fl2k, a new Osmocom member project which enables the use of FL2000 USB-VGA adapters as ultra-low-cost SDR transmitters.

How does it work?

A major part of any VGA card has always been a rather fast DAC which generates the 8-bit analog values for (each) red, green and blue at the pixel clock. Given that fast DACs were very rare/expensive (and still are to some extent), the idea of (ab)using the VGA DAC to transmit radio has been followed by many earlier, mostly proof-of-concept projects, such as Tempest for Eliza in 2001.

However, with osmo-fl2k, for the first time it was possible to completely disable the horizontal and vertical blanking, resulting in a continuous stream of pixels (samples). Furthermore, as the supported devices have no frame buffer memory, the samples are streamed directly from host RAM.

As most USB-VGA adapters appear to have no low-pass filter on their DAC outputs, it is possible to use any of the harmonics to transmit signals at much higher frequencies than would normally be possible within the baseband of the (at most) 157 megasamples per second that can be achieved.
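As a back-of-the-envelope sketch (my own illustration, not code from osmo-fl2k): a DAC clocked at a sample rate fs without an output low-pass filter emits, besides the baseband signal at frequency f, images at n·fs ± f, and any of those images can serve as the transmit frequency.

```python
def image_frequencies(f_baseband_mhz, fs_mhz, n_max=3):
    """Image frequencies (in MHz) emitted by an unfiltered DAC for a
    baseband signal at f_baseband_mhz when clocked at fs_mhz MS/s:
    the mirrors n*fs - f and images n*fs + f around each sampling harmonic."""
    images = []
    for n in range(1, n_max + 1):
        images.append(n * fs_mhz - f_baseband_mhz)
        images.append(n * fs_mhz + f_baseband_mhz)
    return sorted(images)

# A 62 MHz baseband signal synthesized at 157 MS/s also appears at
# 95 MHz (157 - 62), 219 MHz (157 + 62), and so on:
print(image_frequencies(62, 157))  # -> [95, 219, 252, 376, 409, 533]
```

This is why a dongle whose DAC tops out well below 100 MHz can still radiate a usable signal in, say, the FM broadcast band.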

osmo-fl2k and rtl-sdr

Steve is the creator of the earlier, complementary rtl-sdr software, which since 2012 transforms USB DVB adapters into general-purpose SDR receivers.

Today, six years later, it is hard to think of where SDR would be without rtl-sdr. Reducing the entry cost of SDR receivers nearly down to zero has done a lot for democratization of SDR technology.

There is hence a good chance that his osmo-fl2k project will attain a similar popularity. Having an SDR transmitter for as little as USD 5 is an amazing proposition.

free riders

Please keep in mind that Steve has done rtl-sdr just for fun, to scratch his own itch and for the "hack value". He chose to share his work with the wider public, in source code, under a free software license. He's a very humble person; he doesn't need to stand in the limelight.

Many other people since have built a business around rtl-sdr. They have grabbed domains with his project name, etc. They are now earning money based on what he has done and shared selflessly, without ever contributing back to the pioneering developers who brought this to all of us in the first place.

So, do we want to bet on whether history repeats itself? How long will it take for vendors to show up online advertising USB VGA dongles as "SDR transmitters", possibly even at a surcharge? How long will it take for them to include Steve's software without giving proper attribution? How long until they violate the GNU GPL by not providing the complete corresponding source code for the derivative versions they create?

If you want to thank Steve for his amazing work

• reach out to him personally
• contribute to his work, e.g.
• help to maintain it
• package it for distributions
• send patches (via osmocom-sdr mailing list)
• register an osmocom.org account and update the wiki with more information

And last, but not least, carry on the spirit of "hack value" and democratization of software defined radio.

Thank you, Steve! After rtl-sdr and osmo-fl2k, it's hard to guess what will come next :)

Planet Debian — Joachim Breitner: Verifying local definitions in Coq

TL;DR: We can give top-level names to local definitions, so that we can state and prove stuff about them without having to rewrite the programs.

Imagine you teach Coq to a Haskell programmer, and give them the task of pairing each element in a list with its index. The Haskell programmer might have

addIndex :: [a] -> [(Integer, a)]
addIndex xs = go 0 xs
where go n [] = []
go n (x:xs) = (n,x) : go (n+1) xs

in mind and write this Gallina function (Gallina is the programming language of Coq):

Require Import Coq.Lists.List.
Import ListNotations.

Definition addIndex {a} (xs : list a) : list (nat * a) :=
let fix go n xs := match xs with
| []    => []
| x::xs => (n, x) :: go (S n) xs
end
in go 0 xs.

Alternatively, imagine you are using hs-to-coq to mechanically convert the Haskell definition into Coq.

Now imagine a Coq user wants to verify the following:

Theorem addIndex_spec:
forall {a} n (xs : list a),
nth n (map fst (addIndex xs)) n = n.

If you just have learned Coq, you will think “I can do this, this surely holds by induction on xs.” But if you have a bit more experience, you will already see a problem with this (if you do not see the problem yet, I encourage you to stop reading, copy the few lines above, and try to prove it).

The problem is that – as so often – you have to generalize the statement for the induction to go through. The theorem as stated says something about addIndex or, in other words, about go 0. But in the inductive case, you will need some information about go 1. In fact, you need a lemma like this:

Lemma go_spec:
forall {a} n m k (xs : list a), k = n + m ->
nth n (map fst (go m xs)) k = k.

But go is not a (top-level) function! How can we fix that?

• We can try to awkwardly work around not having a name for go in our proofs, and essentially prove go_spec inside the proof of addIndex_spec. This might work in this small case, but does not scale up to larger proofs.
• We can ask the programmer to avoid using local functions, and first define go as a top-level fixed point. But maybe we don’t want to bother them because of that. (Or, more likely, we are using hs-to-coq and that tool stubbornly tries to make the output as similar to the given Haskell code as possible.)
• We can copy’n’paste the definition of go and make a separate, after-the-fact top-level definition. But this is not nice from a maintenance point of view: If the code changes, we have to update this copy.
• Or we apply this one weird trick...

The weird trick

We can define go after-the-fact, but instead of copy’n’pasting the definition, we can use Coq’s tactics to define it. Here it goes:

Definition go {a} := ltac:(
let e := eval cbv beta delta [addIndex] in (@addIndex a []) in
(* idtac e; *)
lazymatch e with | let x := ?def in _ =>
exact def
end).

Let us take it apart:

1. We define go, and give the parameters that go depends upon. Note that of the two parameters of addIndex, the definition of go only depends on (“captures”) a, but not xs.
2. We do not give a type to go. We could, but that would again just be copying information that is already there.
3. We define go via an ltac expression: Instead of a term we give a tactic that calculates the term.
4. This tactic first binds e to the body of addIndex. To do so, it needs to pass enough arguments to addIndex. The concrete value of the list argument does not matter, so we pass []. The term @addIndex a [] is now evaluated with the evaluation flags eval cbv beta delta [addIndex], which says “unfold addIndex and do beta reduction, but nothing else”. In particular, we do not do zeta reduction, which would reduce the let go := … definition. (The user manual very briefly describes these flags.)
5. The idtac e line can be used to peek at e, for example when the next tactic fails. We can use this to check that e really is of the form let fix go := … in ….
6. The lazymatch line matches e against the pattern let x := ?def in _, and binds the definition of go to the name def.
7. And the exact def tactic tells Coq to use def as the definition of go.

We now have defined go, of type go : forall {a}, nat -> list a -> list (nat * a), and can state and prove the auxiliary lemma:

Lemma go_spec:
forall {a} n m k (xs : list a), k = n + m ->
nth n (map fst (go m xs)) k = k.
Proof.
intros ?????.
revert n m k.
induction xs; intros; destruct n; subst; simpl.
1-3:reflexivity.
apply IHxs; lia.
Qed.

When we come to the theorem about addIndex, we can play a little trick with fold to make the proof goal pretty:

Theorem addIndex_spec:
forall {a} n (xs : list a),
nth n (map fst (addIndex xs)) n = n.
Proof.
intros.
fold (@go a).
(* goal here: nth n (map fst (go 0 xs)) n = n *)
apply go_spec; lia.
Qed.

Multiple local definitions

The trick extends to multiple local definitions, but needs some extra considerations to ensure that terms are closed. A bit contrived, but let us assume that we have this function definition:

Definition addIndex' {a} (xs : list a) : list (nat * a) :=
let inc := length xs in
let fix go n xs := match xs with
| []    => []
| x::xs => (n, x) :: go (inc + n) xs
end in
go 0 xs.

We now want to give names to inc and to go. I like to use a section to collect the common parameters, but that is not essential here. The trick above works flawlessly for inc:

Section addIndex'.
Context {a} (xs : list a).

Definition inc := ltac:(
let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
lazymatch e with | let x := ?def in _ =>
exact def
end).

But if we try it for go', like such:

Definition go' := ltac:(
let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
lazymatch e with | let x := _ in let y := ?def in _ =>
exact def
end).

we get “Ltac variable def depends on pattern variable name x which is not bound in current context”. To fix this, we use higher-order pattern matching (@?def) to substitute “our” inc for the local inc:

Definition go' := ltac:(
let e := eval cbv beta delta [addIndex'] in (@addIndex' a xs) in
lazymatch e with | let x := _ in let y := @?def x in _ =>
let def' := eval cbv beta in (def inc) in
exact def'
end).

instead. We have now defined both inc and go' and can use them in proofs about addIndex':

Theorem addIndex_spec':
forall n, nth n (map fst (addIndex' xs)) n = n * length xs.
Proof.
intros.
fold inc go'. (* order matters! *)
(* goal here: nth n (map fst (go' 0 xs)) n = n * inc *)

Reaching into a match

This trick also works when the local definition we care about is inside a match statement. Consider:

Definition addIndex_weird {a} (oxs : option (list a))
:= match oxs with
| None => []
| Some xs =>
let fix go n xs := match xs with
| []    => []
| x::xs => (n, x) :: go (S n) xs
end in
go 0 xs
end.

Definition go_weird {a} := ltac:(
let e := eval cbv beta match delta [addIndex_weird]
in (@addIndex_weird a (Some [])) in
idtac e;
lazymatch e with | let x := ?def in _ =>
exact def
end).

Note the addition of match to the list of evaluation flags passed to cbv.

Conclusion

While local definitions are idiomatic in Haskell (in particular thanks to the where syntax), they are usually avoided in Coq, because they get in the way of verification. If, for some reason, one is stuck with such definitions, then this trick presents a reasonable way out.

Planet Debian — Sam Hartman: Shaving the DJ Software Yak

I'm getting married this June. (For the Debian folks, the Ghillie shirt and vest just arrived to go with the kilt. My thanks go out to the lunch table at Debconf that made that suggestion. Formal Scottish dress would not have fit, but I wanted something to go with the kilt.)
Music and dance have been an important part of my spiritual journey. Dance has also been an important part of the best weddings I have attended. So I wanted dance to be a special part of our celebration. I put together a playlist for my 40th birthday; it was special and helped set the mood for the event. Unfortunately, as I started looking at what I wanted to play for the wedding, I realized I needed to do better. Some of the songs were too long. Some of them really felt like they needed a transition. I wanted a continuous mix, not a playlist.
I'm blind. I certainly could use two turntables and a mixer--or at least I could learn how to do so. However, I'm a kid of the electronic generation, and that's not my style. So, I started looking at DJ software. With one exception, everything I found was too visual for me to use.
I've used Nama before to put together a mashup. It seemed like Nama offered almost everything I needed. Unfortunately, there were a couple of problems. Nama would not be a great fit for a live mix: you cannot add tracks or effects into the chain without restarting the engine. I didn't strictly need live production for this project, but I wanted to look at that long-term. At the time of my analysis, I thought that Nama didn't support tempo-scaling tracks. For that reason, I decided I was going to have to write my own software. Later I learned that you can adjust the sample rate on a track import, which is more or less good enough for tempo scaling. By that point I already had working code.
I wanted a command line interface. I wanted BPM and key detection; it looked like Mixxx was open-source software with good support for that. Based on my previous work, I chose Csound as the realtime sound backend.

Where I got

I'm proud of what I came up with. I managed to stay focused on my art rather than falling into the trap of focusing too much on the software. I got something that allows me to quickly explore the music I want to mix, but also managed to accomplish my goal and come up with a mix that I'm really happy with. As a result, at the current time, my software is probably only useful to me. However, it is available under the GPL V3. If anyone else would be interested in hacking on it, I'd be happy to spend some time documenting and working with them.
Here's a basic description of the approach.

• You are editing a timeline that stores the transformations necessary to turn the input tracks into the output mix.
• There are 10 stereo mixer channels that will be mixed down into a master output.
• There is an unlimited number of input tracks. Each track is associated with a given mixer channel. Tracks are placed on the timeline at a given start point (starting from a given cue point in the track) and run for a given length. During this time, the track is mixed into the mixer channel. Associated with each track is a gain (volume) that controls how the track is mixed into the mixer channel. Volumes are constant per track.
• Between the mixer channel and the master is a volume fader and an effect chain.
• Effects are written in Csound code. Being able to easily write Csound effects is one of the things that made me more interested in writing my own than in looking at adding better tempo scaling/BPM detection to Nama.
• Associated with each effect are three sliders that give inputs to the effect. Changing the number of mixer channels and effect sliders is an easy code change. However it'd be somewhat tricky to be able to change that dynamically during a performance. Effects also get an arbitrary number of constant parameters.
• Sliders and volume faders can be manipulated on the timeline. You can ask for a linear change from the current value to a target over a given duration starting at some point. So I can ask for the amplitude to move from 0 to 127 at the point where I want to mix in a track say across 2 seconds. You express slider manipulation in terms of the global timeline. However it is stored relative to the start of the track. That is, if you have a track fade out at a particular musical phrase, the fade out will stay with that phrase even if you move the cue point of the track or move where the track starts on the timeline. This is not what you want all the time, but my experience with Nama (which does it using global time) suggests that I at least save a lot of time with this approach.
• There is a global effect chain between the output of the final mixer and the master output. This allows you to apply distortion, equalization or compression to the mix as a whole. The sliders for effects on this effect chain are against global time not a specific track.
• There's a hard-coded compressor on the final output. I'm lazy and I needed it before I had effect chains.
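The track-relative storage of slider automation described above can be sketched roughly like this (my own Python illustration with hypothetical names; the post does not show the software's actual internals):

```python
# Hypothetical sketch: store linear slider ramps relative to a track's
# start, so the automation follows the track when it moves on the timeline.
class TrackAutomation:

    def __init__(self, track_start):
        self.track_start = track_start  # where the track sits on the global timeline
        self.ramps = []                 # (rel_start, duration, v_from, v_to)

    def ramp(self, global_start, duration, v_from, v_to):
        # The user expresses the ramp in global time ...
        rel_start = global_start - self.track_start
        # ... but it is stored relative to the track's start.
        self.ramps.append((rel_start, duration, v_from, v_to))

    def value_at(self, global_time, default=0.0):
        rel = global_time - self.track_start
        value = default
        for start, duration, v_from, v_to in self.ramps:
            if rel >= start + duration:
                value = v_to                     # ramp already finished
            elif rel >= start:
                frac = (rel - start) / duration  # inside the ramp
                value = v_from + frac * (v_to - v_from)
        return value

# A fade from 0 to 127 over 2 seconds, starting 10 s into the mix,
# on a track that starts at 8 s:
fader = TrackAutomation(track_start=8.0)
fader.ramp(global_start=10.0, duration=2.0, v_from=0.0, v_to=127.0)
print(fader.value_at(11.0))  # halfway through the fade -> 63.5

# Moving the track keeps the fade on the same musical phrase:
fader.track_start = 20.0
print(fader.value_at(25.0))  # 1 s after the (moved) fade ends -> 127.0
```

The key design point is the coordinate conversion in ramp(): fades are entered in global time but anchored to the track, so re-cueing or moving the track does not require re-entering the automation.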

There's some preliminary support for a MIDI controller I was given, but I realized that coding that wasn't actually going to save me time, so I left it. This was a really fun project. I managed to tell a story for my wedding that is really important for me to tell. I learned a lot about what goes into putting together a mix. It's amazing how many decisions go into even simple things like a pan slider. It was also great that there is free software for me to build on top of. I got to focus on the part of the problem I wanted to solve. I was able to reuse components for the realtime sound work and for analysis like BPM detection.

Planet Debian — Wouter Verhelst: host detection in bash

There are many tools to implement this, and yeah, this is not the fastest. But the advantage is that you don't need extra tools beyond "bash" and "ping"...

  for i in $(seq 1 254); do
    if ping -W 1 -c 1 192.168.0.$i; then
      HOST[$i]=1
    fi
  done
  echo ${!HOST[@]}


will give you the host addresses for the machines that are live on a given network...

Planet Linux Australia — Michael Still: Caliban’s War

This is the second book in the Leviathan Wakes series by James SA Corey. Just as good as the first, this is a story about how much a father loves his daughter, moral choices, and politics — just as much as it is the continuation of the story arc around the alien visitor. I haven’t seen this far in the Netflix series, but I sure hope they get this right, because it's a very good story so far.

Caliban's War
James S. A. Corey
Fiction
Orbit Books
April 30, 2013
624 pages

For someone who didn't intend to wreck the solar system's fragile balance of power, Jim Holden did a pretty good job of it. While Earth and Mars have stopped shooting each other, the core alliance is shattered. The outer planets and the Belt are uncertain in their new - possibly temporary - autonomy. Then, on one of Jupiter's moons, a single super-soldier attacks, slaughtering soldiers of Earth and Mars indiscriminately and reigniting the war. The race is on to discover whether this is the vanguard of an alien army, or if the danger lies closer to home.

The post Caliban’s War appeared first on Made by Mikal.

Planet Debian — Norbert Preining: Specification and Verification of Software with CafeOBJ – Part 2 – Basics of CafeOBJ

This blog continues Part 1 of our series on software specification and verification with CafeOBJ.

Availability of CafeOBJ

CafeOBJ can be obtained from the website cafeobj.org. The site provides binary packages for Linux, MacOS, and Windows, as well as the source code for those who want to build the interpreter themselves. It also provides tutorial pages and all kinds of documentation (reference manual, wiki, user manual).

What is CafeOBJ

Let us recall some of the items mentioned in the previous blog. CafeOBJ is an algebraic specification language, as well as a verification and programming language. This means that specifications written in CafeOBJ can be verified right within the system, without the need to resort to external utilities.

As an algebraic specification language, it is built upon the logical foundation formed by the following items: (i) order sorted algebras, (ii) co-algebras (or hidden algebras), and (iii) rewriting logic. As a verification and programming language, it provides the user with an executable semantics of the equational theory, a rewrite engine that supports conditional, order-sorted, AC (associative and commutative) rewriting, a sophisticated module system including parametrization and inheritance, and, last but not least, a completely free syntax.

The algebraic semantics can be represented by the CafeOBJ cube, which exhibits the various extensions starting from many sorted algebras:

For the algebraically inclined audience we just mention that all the systems and morphisms are formalized as institutions and institution morphisms.

Let us now go through some of the logical foundations of CafeOBJ:

Term rewriting

Term rewriting is concerned with systems of rules to replace certain parts of an expression with another expression. A very simple example of a rewrite system is:

  append(nil, ys)    → ys
append(x : xs, ys) → x : append(xs, ys)


Here the first rule says that you can rewrite an expression append(nil, ys), where ys can be any list, to ys itself. And the second rule states how to rewrite an expression when the first list is non-empty.

A typical reduction sequence – that is application of these rules – would be:

append(1 ∶ 2 ∶ 3 ∶ nil, 4 ∶ 5 ∶ nil) → 1 ∶ append(2 ∶ 3 ∶ nil, 4 ∶ 5 ∶ nil)
→ 1 ∶ 2 ∶ append(3 ∶ nil, 4 ∶ 5 ∶ nil)
→ 1 ∶ 2 ∶ 3 ∶ append(nil, 4 ∶ 5 ∶ nil)
→ 1 ∶ 2 ∶ 3 ∶ 4 ∶ 5 ∶ nil
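The two rules translate directly into a small functional program; the following is a minimal Python sketch (the function name append and the list encoding are my own choices, not CafeOBJ syntax), where each recursive call corresponds to one rewrite step in the sequence above:

```python
# The two rewrite rules as a recursive function.
# Rule 1: append(nil, ys)    -> ys
# Rule 2: append(x : xs, ys) -> x : append(xs, ys)
def append(xs, ys):
    if not xs:                      # first argument is nil
        return ys                   # rule 1
    x, *rest = xs
    return [x] + append(rest, ys)   # rule 2

# Reproduces the reduction of append(1:2:3:nil, 4:5:nil):
print(append([1, 2, 3], [4, 5]))    # prints [1, 2, 3, 4, 5]
```

Pattern matching on the first argument plays the role of rule selection: exactly one rule applies to each term, which is why the reduction sequence is deterministic here.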


Term rewriting is used in two different ways in CafeOBJ: first, as the execution engine, which treats equations as directed rules and uses them to reduce expressions; and second, rewriting logic is included in the language itself, allowing for reasoning about transitions.

Order sorted algebras

Most algebras we learn in school, or even at university, are single sorted, that is, all objects in the algebra are of the same type (e.g., integers, reals, a function space). In this case an operation is determined by its arity, that is, the number of arguments.

In the many sorted and order sorted case, the number of arguments alone is not enough: we need to know the type of each argument as well as the type of the value the function returns. Thus, we assume a signature (S, F) is given, where S is a set of sorts, or simply sort names, and F is a set of operations f: s1, s2, ..., sk → s, where all the si and s are sorts.

As an example assume we have two sorts, String and Int, one possible function would be

  substr: String, Int, Int → String


which would tell us that the function substr takes three arguments, the first of sort String, the others of sort Int, and it returns again a value of sort String.
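Such a signature can also be read as a typed function in an ordinary programming language; here is a hypothetical Python rendering (the zero-based start index and length semantics are my assumptions, since the signature itself fixes only the sorts, not the behaviour):

```python
def substr(s: str, start: int, length: int) -> str:
    """substr: String, Int, Int -> String

    Takes a String and two Ints and returns a String, exactly as the
    signature prescribes; the body is one possible interpretation."""
    return s[start:start + length]

print(substr("CafeOBJ", 0, 4))  # prints Cafe
```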

If the set of sorts is partially ordered, we call the corresponding algebra an order sorted algebra.

Using order sorted algebras has several advantages compared to other algebraic systems:

• polymorphism (parametric, subsort) and overloading are natural consequences of ordered sorts;
• error definition and handling via subsorts;
• multiple inheritance;
• rigorous model-theoretic semantics based on institutions;
• operational semantics that executes equations as rewrite rules (executable specifications).

We want to close this blog post with a short history of CafeOBJ and a short sample list of specifications that have been carried out with CafeOBJ.

History, background, relatives, and examples

CafeOBJ, as an algebraic specification language based on equational theory, has its roots in CLEAR (Burstall and Goguen, early 70s) and the OBJ language (Goguen et al., 70s–80s, SRI and UC San Diego). The successor OBJ2 was developed by Futatsugi, Goguen, Jouannaud, and Meseguer at UC San Diego in 1984, based on Horn logic, sub-sorts, and parametrized modules.

The developers then moved on to different languages or extensions: Meseguer started to develop Maude, Jouannaud moved on to Coq, an unrelated language, and Futatsugi built upon the OBJ3 language by Kirchner et al. to create CafeOBJ.

Example specifications carried out in CafeOBJ include authentication protocols (NSLPK, STS, Otway-Rees), the key secrecy PACE protocol (German passport), e-commerce protocols (SET), real-time algorithms (Fischer’s mutual exclusion protocol), UML semantics, and formal fault tree analysis.

In the next blog post we will make first steps with the CafeOBJ interpreter and see how to define modules, the basic building blocks, and carry out simple computations.

,

Planet Debian — Benjamin Mako Hill: Mako Hate

I recently discovered a prolific and sustained community of meme-makers on Tumblr dedicated to expressing their strong dislike for “Mako.”

Two tags with examples are #mako hate and #anti mako but there are many others.

I’ve also discovered Tumblrs entirely dedicated to the topic!

For example, Let’s Roast Mako describes itself as “A place to beat up Mako. In peace. It’s an inspiration to everyone!”

The second is the Fuck Mako Blog, which describes itself with a series of tag-lines including “Mako can fuck right off and we’re really not sorry about that,” “Welcome aboard the SS Fuck-Mako,” and “Because Mako is unnecessary.” Sub-pages of the site include:

I’ll admit I’m a little disquieted.

Planet Linux Australia — Pia Waugh: Exploring change and how to scale it

Over the past decade I have been involved in several efforts trying to make governments better. A key challenge I repeatedly see is people trying to change things without an idea of what they are trying to change to, trying to fix individual problems (a deficit view) rather than recognising and fixing the systems that created the problems in the first place. So you end up getting a lot of symptomatic relief and iterative improvements of antiquated paradigms without necessarily getting transformation of the systems that generated the problems. A lot of the effort is put into applying traditional models of working which often result in the same old results, so we also need to consider new ways to work, not just what needs to be done.

With life getting faster and (arguably) exponentially more complicated, we need to take a whole of system view if we are to improve ‘the system’ for people. People sometimes balk when I say this thinking it too hard, too big or too embedded. But we made this, we can remake it, and if it isn’t working for us, we need to adapt like we always have.

I also see a lot of slogans used without the nuanced discussion they invite. Such (often ideological) assumptions can subtly play out without evidence, discussion or agreement on common purpose. For instance, whenever people say smaller or bigger government I try to ask what they think the role of government is, to have a discussion. Size is assumed to correlate to services, productivity, or waste depending on your view, but shouldn’t we talk about what the public service should do, and then the size is whatever is appropriate to do what is needed? People don’t talk about a bigger or smaller jacket or shoes, they get the right one for their needs and the size can change over time as the need changes. Indeed, perhaps the public service of the future could be a dramatically different workforce comprised of a smaller group of professional public servants complemented by a large demographically representative group of part time citizens doing their self nominated and paid “civic duty year of service” as a form of participatory democracy, which would bring new skills and perspectives into governance, policy and programs.

We need urgently to think about the big picture, to collectively talk about the 50 or 100 year view for society, and only then can we confidently plan and transform the structures, roles, programs and approaches around us. This doesn’t mean we have to all agree to all things, but we do need to identify the common scaffolding upon which we can all build.

This blog posts challenges you to think systemically, critically and practically about five things:

• What future do you want? Not what could be a bit better, or what the next few years might hold, or how that shiny new toy you have could solve the world’s problems (policy innovation, data, blockchain, genomics or any tool or method). What is the future you want to work towards, and what does good look like? Forget about your particular passion or area of interest for a moment. What does your better life look like for all people, not just people like you?
• What do we need to get there? What concepts, cultural values, paradigm, assumptions should we take with us and what should we leave behind? What new tools do we need and how do we collectively design where we are going?
• What is the role of gov, academia, other sectors and people in that future? If we could create a better collective understanding of our roles in society and some of the future ideals we are heading towards, then we would see a natural convergence of effort, goals and strategy across the community.
• What will you do today? Seriously. Are you differentiating between symptomatic relief and causal factors? Are you perpetuating the status quo or challenging it? Are you being critically aware of your bias, of the system around you, of the people affected by your work? Are you reaching out to collaborate with others outside your team, outside your organisation and outside your comfort zone? Are you finding natural partners in what you are doing, and are you differentiating between activities worthy of collaboration versus activities only of value to you (the former being ripe for collaboration and the latter less so)?
• How do we scale change? I believe we need to consider how to best scale “innovation” and “transformation”. Scaling innovation is about scaling how we do things differently, such as the ability to take a more agile, experimental, evidence based, creative and collaborative approach to the design, delivery and continuous improvement of stuff, be it policy, legislation or services. Scaling transformation is about how we create systemic and structural change that naturally drives and motivates better societal outcomes. Each without the other is not sustainable or practical.

How to scale innovation and transformation?

I’ll focus the rest of this post on the question of scaling. I wrote this in the context of scaling innovation and transformation in government, but it applies to any large system. I also believe that empowering people is the greatest way to scale anything.

• I’ll firstly say that openness is key to scaling everything. It is how we influence the system, how we inspire and enable people to individually engage with and take responsibility for better outcomes and innovate at a grassroots level. It is how we ensure our work is evidence based, better informed and better tested, through public peer review. Being open not only influences the entire public service, but the rest of the economy and society. It is how we build trust, improve collaboration, send indicators to vendors and influence academics. Working openly, open sourcing our research and code, being public about projects that would benefit from collaboration, and sharing most of what we do (because most of the work of the public service is not secretive by any stretch) is one of the greatest tools in trying to scale our work, our influence and our impact. Openness is also the best way to ensure both a better supply chain and a better demand for things that are demonstrably better.

A quick side note to those who argue that transparency isn’t an answer because not all people have the tools to understand data/information/etc. and hold others accountable: that doesn’t mean you don’t do transparency at all. There will always be groups or people naturally motivated to hold you to account, whether it is your competitors, clients, the media, citizens or even your own staff. Transparency is partly about accountability and partly about reinforcing a natural motivation to do the right thing.

Scaling innovation – some ideas:

• The necessity of neutral, safe, well resourced and collaborative sandpits is critical for agencies to quickly test and experiment outside the limitations of their agencies (technical, structural, political, functional and procurement). Such places should be engaged with the sectors around them. Neutral spaces that take a systems view also start to normalise a systems view across agencies in their other work, which has huge ramifications for transformation as well as innovation.
• Seeking and sharing – sharing knowledge, reusable systems/code, research, infrastructure and basically making it easier for people to build on the shoulders of each other rather than every single team starting from scratch every single time. We already have some communities of practice but we need to prioritise sharing things people can actually use and apply in their work. We also need to extend this approach across sectors to raise all boats. Imagine if there was a broad commons across all society to share and benefit from each others efforts. We’ve seen the success and benefits of Open Source Software, of Wikipedia, of the Data Commons project in New Zealand, and yet we keep building sector or organisational silos for things that could be public assets for public good.
• Require user research in budget bids – this would require agencies to do user research before bidding for money, which would create an incentive to build things people actually need. It would drive both a user centred approach to programs and the innovation necessary to shift from current practices. Treasury would require user research experts and a user research hub to contrast and compare over time.
• Staff mobility – people should be supported to move around departments and business units to get different experiences and to share and learn. Not everyone will want to, but when people stay in the same job for 20 years, it can be harder to engage in new thinking. Exchange programs are good but again, if the outcomes and lessons are not broadly shared, then they are linear in impact (individuals) rather than scalable (beyond the individuals).
• Support operational leadership – not everyone wants to be a leader, disruptor, maker, innovator or intrapreneur. We need to have a program to support such people in the context of operational leadership that isn’t reliant upon their managers putting them forward or approving. Even just recognising leadership as something that doesn’t happen exclusively in senior management would be a huge cultural shift. Many managers will naturally want to keep great people to themselves which can become stifling and eventually we lose them. When people can work on meaningful great stuff, they stay in the public service.
• A public ‘Innovation Hub’ – if we had a simple public platform for people to register projects that they want to collaborate on, from any sector, we could stimulate and support innovation across the public sector (things for which collaboration could help would be surfaced, publicly visible, and inviting of others to engage in) so it would support and encourage innovation across government, but also provides a good pipeline for investment as well as a way to stimulate and support real collaboration across sectors, which is substantially lacking at the moment.
• Emerging tech and big vision guidance – we need a team, I suggest cross agency and cross sector, of operational people who keep their fingers on the pulse of technology to create ongoing guidance for New Zealand on emerging technologies, trends and ideas that anyone can draw from. For government, this would help agencies engage constructively with new opportunities rather than no one ever having time or motivation until emerging technologies come crashing down as urgent change programs. This could be captured on a constantly updating toolkit with distributed authorship to keep it real.

Scaling transformation – some ideas:

• Convergence of effort across sectors – right now in many countries every organisation and to a lesser degree, many sectors, are diverging on their purpose and efforts because there is no shared vision to converge on. We have myriad strategies, papers, guidance, but no overarching vision. If there were an overarching vision for New Zealand Aotearoa for instance, co-developed with all sectors and the community, one that looks at what sort of society we want into the future and what role different entities have in achieving that ends, then we would have the possibility of natural convergence on effort and strategy.
• Obviously when you have a cohesive vision, then you can align all your organisational and other strategies to that vision, so our (government) guidance and practices would need to align over time. For the public sector the Digital Service Standard would be a critical thing to get right, as is how we implement the Higher Living Standards Framework, both of which would drive some significant transformation in culture, behaviours, incentives and approaches across government.
• Funding “Digital Public Infrastructure” – technology is currently funded as projects with start and end dates, and almost all tech projects across government are bespoke to particular agency requirements or motivations, so we build loads of technologies but very little infrastructure that others can rely upon. If we took all the models we have for funding other forms of public infrastructure (roads, health, education) and saw some types of digital infrastructure as public infrastructure, perhaps they could be built and funded in ways that are more beneficial to the entire economy (and society).
• Agile budgeting – we need to fund small experiments that inform business cases, rather than starting with big business cases. Ideally we need to not have multi 100 million dollar projects at all because technology projects simply don’t cost that anymore, and anyone saying otherwise is trying to sell you something. If we collectively took an agile budgeting process, it would create a systemic impact on motivations, on design and development, on implementation, on procurement, on myriad things. It would also put more responsibility on agencies for the outcomes of their work in short, sharp cycles, and would create the possibility of pivoting early to avoid throwing good money after bad (as it were). This is key, as no transformative project truly survives the current budgeting model.
• Gov as a platform/API/enabler (closely related to DPI above) – obviously making all government data, content, business rules (inc but not just legislation) and transactional systems available as APIs for building upon across the economy is key. This is how we scale transformation across the public sector because agencies are naturally motivated to deliver what they need to cheaper, faster and better, so when there are genuinely useful reusable components, agencies will reuse them. Agencies are now more naturally motivated to take an API driven modular architecture which creates the bedrock for government as an API. Digital legislation (which is necessary for service delivery to be integrated across agency boundaries) would also create huge transformation in regulatory and compliance transformation, as well as for government automation and AI.
• Exchange programs across sectors – to share knowledge but all done openly so as to not create perverse incentives or commercial capture. We need to also consider the fact that large companies can often afford to jump through hoops and provide spare capacity, but small to medium sized companies cannot, so we’d need a pool for funding exchange programs with experts in the large proportion of industry.
• All of system service delivery evidence base – what you measure drives how you behave. Agencies are motivated to do only what they need to within their mandates and have very few all of system motivations. If we have an all of government anonymised evidence base of user research, service analytics and other service delivery indicators, it would create an accountability to all of system which would drive all of system behaviours. In New Zealand we already have the IDI (an awesome statistical evidence base) but what other evidence do we need? Shared user research, deidentified service analytics, reporting from major projects, etc. And how do we make that evidence more publicly transparent (where possible) and available beyond the walls of government to be used by other sectors?  More broadly, having an all of government evidence base beyond services would help ensure a greater evidence based approach to investment, strategic planning and behaviours.

Planet Debian — Joey Hess: my haskell controlled offgrid fridge

I'm preparing for a fridge upgrade, away from the tiny propane fridge to a chest freezer conversion. My home computer will be monitoring the fridge temperature and the state of my offgrid energy system, and turning the fridge on and off using a relay and the inverter control board I built earlier.

This kind of automation is a perfect fit for Functional Reactive Programming (FRP) since it's all about time-varying behaviors and events being combined together.

Of course, I want the control code to be as robust as possible, well tested, and easy to modify without making mistakes. Pure functional Haskell code.

There are many Haskell libraries for FRP, and I have not looked at most of them in any detail. I settled on reactive-banana because it has a good reputation and amazing testimonials.

"In the programming-language world, one rule of survival is simple: dance or die. This library makes dancing easy." – Simon Banana Jones

But, it's mostly used for GUI programming, or maybe some musical live-coding. There were no libraries for using reactive-banana for the more staid task of home automation, or anything like that. Also, using it involves a whole lot of IO code, so not great for testing.

So I built reactive-banana-automation on top of it to address my needs. I think it's a pretty good library, although I don't have a deep enough grokking of FRP to say that for sure.

Anyway, it's plenty flexible for my fridge automation needs, and I also wrote a motion-controlled light automation with it to make sure it could be used for something else (and to partly tackle the problem of using real-world time events when the underlying FRP library uses its own notion of time).

The code for my fridge is a work in progress since the fridge has not arrived yet, and because the question of in which situations an offgrid fridge should optimally run and not run is really rather complicated.

Here's a simpler example, for a non-offgrid fridge.

  fridge :: Automation Sensors Actuators
  fridge sensors actuators = do
      -- Create a Behavior that reflects the most recently reported
      -- temperature of the fridge.
      btemperature <- sensedBehavior (fridgeTemperature sensors)
      -- Calculate when the fridge should turn on and off.
      let bpowerchange = calcpowerchange <$> btemperature
      onBehaviorChangeMaybe bpowerchange (actuators . FridgePower)
    where
      calcpowerchange (Sensed temp)
          | temp `belowRange` allowedtemp = Just PowerOff
          | temp `aboveRange` allowedtemp = Just PowerOn
          | otherwise = Nothing
      calcpowerchange SensorUnavailable = Nothing
      allowedtemp = Range 1 4

And here the code is being tested in a reproducible fashion:

  > runner <- observeAutomation fridge mkSensors
  > runner $ \sensors -> fridgeTemperature sensors =: 6
  [FridgePower PowerOn]
  > runner $ \sensors -> fridgeTemperature sensors =: 3
  []
  > runner $ \sensors -> fridgeTemperature sensors =: 0.5
  [FridgePower PowerOff]


BTW, building a 400 line library and writing reams of control code for a fridge that has not been installed yet is what we Haskell programmers call "laziness".

TED — The world takes us exactly where we should be: 4 questions with Meagan Fallone

Cartier believes in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with entrepreneur, designer and CEO of Barefoot College International, Meagan Fallone.

TED: Tell us who you are.
Meagan Fallone: I am an entrepreneur, a designer, a passionate mountaineer and a champion of women in the developing world and all women whose voices and potential remain unheard and unrealized. I am a mother and am grounded in the understanding that of all the things I may ever do in my life, it is the only one that truly will define me or endure. I am immovable in my intolerance to injustice in all its forms.

MF: I decided to leave the two for-profit companies I started and grow a nonprofit social enterprise.

TED: Tell us about a woman who inspires you.
MF: The women in my family who were risk-takers in their own individual ways: they are always with me and inspire me. My female friends who push me always to dig deeper within myself, to use my power and skills for ever bigger and better impact in the world. I am inspired always by every woman who has ever accepted to come to train with us at Barefoot College. They place their trust in us, leave their community and everyone they love to make an unimaginable journey on every level. It is the bravest thing I have ever seen.

TED: If you could go back in time, what would you tell your 18-year-old self?
MF: I would tell myself not to take myself so seriously. I would tell myself to trust that the world takes us exactly where we should be. It took me far too long to learn to laugh at how ridiculous I am sometimes. It took me even longer to accept that the path that was written for me was not exactly the one I envisaged for myself. Within the things I never imagined lay all the beauty and wonder of my journey so far — and the promise of what I have yet to impact.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

TED — Playlist: 10 TEDWomen talks for Earth Day

Earlier this week, I had the privilege and honor to plant trees with the daughter and granddaughter of environmentalist Wangari Maathai. In recognition of her life’s work promoting “sustainable development, democracy and peace,” Maathai received the 2004 Nobel Peace Prize. She was a lifelong activist who founded the Green Belt Movement in 1977.

At that time, rural women in Kenya petitioned the government for help. They explained that their streams were drying up, causing their food supplies to be less secure and longer walks to fetch firewood. Maathai established the Green Belt Movement and encouraged the women of Kenya to work together to grow seedlings and plant trees to bind the soil, store rainwater, provide food and firewood, and receive a small monetary token for their work. Through her efforts, over 51 million trees have been planted in Kenya. Although Maathai died in 2011, her daughter Wanjira continues her work improving the livelihoods of the women of Kenya and striving for a “cleaner, greener world.”

This Earth Day, the work of Professor Maathai and the Green Belt Movement is an inspiration and a “testament to the power of grassroots organizing, proof that one person’s simple idea — that a community should come together to plant trees — can make a difference.”

With that in mind, here are 10 TEDWomen talks from over the years that highlight innovative ideas, cutting-edge science, and the power that each of us has to safeguard our planet and make our world better for everyone.

1. Climate change is unfair. While rich countries can fight against rising oceans and dying farm fields, poor people around the world are already having their lives upended — and their human rights threatened — by killer storms, starvation and the loss of their own lands. Mary Robinson asks us to join the movement for worldwide climate justice.

2. Ocean expert Nancy Rabalais tracks the ominously named “dead zone” in the Gulf of Mexico — where there isn’t enough oxygen in the water to support life. The Gulf has the second largest dead zone in the world; on top of killing fish and crustaceans, it’s also killing fisheries in these waters. Rabalais tells us about what’s causing it — and how we can reverse its harmful effects and restore one of America’s natural treasures.

3. Filmmaker Penelope Jagessar Chaffer was curious about the chemicals she was exposed to while pregnant: Could they affect her unborn child? So she asked scientist Tyrone Hayes to brief her on one he studied closely: atrazine, a herbicide used on corn. (Hayes, an expert on amphibians, is a critic of atrazine, which displays a disturbing effect on frog development.) Onstage together at TEDWomen, Hayes and Chaffer tell their story.

4. Deepika Kurup has been determined to solve the global water crisis since she was 14 years old, after she saw kids outside her grandparents’ house in India drinking water that looked too dirty even to touch. Her research began in her family kitchen — and eventually led to a major science prize. Hear how this teenage scientist developed a cost-effective, eco-friendly way to purify water.

5. Days before this talk, journalist Naomi Klein was on a boat in the Gulf of Mexico, looking at the catastrophic results of BP’s risky pursuit of oil. Our societies have become addicted to extreme risk in finding new energy, new financial instruments and more … and too often, we’re left to clean up a mess afterward. Klein’s question: What’s the backup plan?

6. The water hyacinth may look like a harmless, even beautiful flowering plant — but it’s actually an invasive aquatic weed that clogs waterways, stopping trade, interrupting schooling and disrupting everyday life. In this scourge, green entrepreneur Achenyo Idachaba saw opportunity. Follow her journey as she turns weeds into woven wonders.

7. A skyscraper that channels the breeze … a building that creates community around a hearth … Jeanne Gang uses architecture to build relationships. In this engaging tour of her work, Gang invites us into buildings large and small, from a surprising local community center to a landmark Chicago skyscraper. “Through architecture, we can do much more than create buildings,” she says. “We can help steady this planet we all share.”

8. Architect Kate Orff sees the oyster as an agent of urban change. Bundled into beds and sunk into city rivers, oysters slurp up pollution and make legendarily dirty waters clean — thus driving even more innovation in “oyster-tecture.” Orff shares her vision for an urban landscape that links nature and humanity for mutual benefit.

9. Beverly + Dereck Joubert live in the bush, filming and photographing lions and leopards in their natural habitat. With stunning footage (some never before seen), they discuss their personal relationships with these majestic animals — and their quest to save the big cats from human threats.

10. Artist and poet Cleo Wade shares some truths about growing up (and speaking up) and reflects on the wisdom of a life well-lived, leaving us with a simple yet enduring takeaway: be good to yourself, be good to others, be good to the earth. “The world will say to you, ‘Be a better person,’” Wade says. “Do not be afraid to say, ‘Yes.’”

If you’re interested in attending TEDWomen later this year in Palm Springs, California, on November 28–30, we encourage you to sign up for our email newsletter now to stay up to date. We will be adding details on venue, sessions themes, guest curators and speakers soon. Don’t miss the news!

Planet Debian — Lisandro Damián Nicanor Pérez Meyer: moving Qt 4 from Debian testing (aka Buster): some statistics, update II

As in my previous blog post, I'm taking a look at our Qt4 removal wiki page.

Of a total of 438 filed bugs:

• 181 bugs (41.32%) have already been fixed, either by porting the app/library to Qt 5 or by removal from the archive. In most cases the code has been ported; most of the removals are due to Qt 5 replacements already being available in the archive, and some to dead upstreams (i.e., no Qt 5 port available).
• 257 bugs (58.68%) still need a fix or are only fixed in experimental.
• 35 of the remaining bugs (8% of the total, 13% of the remaining) are maintained inside the Qt/KDE team.

We started filing bugs around September 9. That means roughly 32 weeks, which gives us around 5.65 packages fixed per week, or roughly 0.81 packages per day. Obviously not as fast as when we started (the remaining bugs tend to be more complicated), but still quite good.
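As a quick arithmetic check on the figures above (a throwaway sketch of my own, not part of the wiki page):

```python
# Recompute the bug statistics quoted above.
total, fixed, weeks = 438, 181, 32
remaining = total - fixed  # 257

print(f"fixed:     {fixed / total:.2%}")       # ~41.32%
print(f"remaining: {remaining / total:.2%}")   # ~58.68%
print(f"per week:  {fixed / weeks:.2f}")       # ~5.66
print(f"per day:   {fixed / (weeks * 7):.2f}") # ~0.81
```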

So, how can you help?

If you are a maintainer of any of the packages still affected, try to get upstream to make a port and package it.

If you are not a maintainer you might want to take a look at the list of packages in our wiki page and try to create a patch for them. If you can submit it directly upstream, even better. Or maybe it's time for you to become the package's upstream or maintainer!

Planet Debian — Vincent Bernat: OPL2 Audio Board: an AdLib sound card for Arduino

In a previous article, I presented the OPL2LPT, a sound card for the parallel port featuring a Yamaha YM3812 chip, also known as OPL2—the chip of the AdLib sound card. The OPL2 Audio Board for Arduino is another indie sound card using this chip. However, instead of relying on a parallel port, it uses a serial interface, which can be driven from an Arduino board or a Raspberry Pi. While the OPL2LPT targets retrogamers with real hardware, the OPL2 Audio Board cannot be used in the same way. Nonetheless, it can also be operated from ScummVM and DOSBox!

Unboxing🔗

The OPL2 Audio Board can be purchased on Tindie, either as a kit or fully assembled. I have paired it with a cheap clone of the Arduino Nano. A library to drive the board is available on GitHub, along with some examples.

One of them is DemoTune.ino. It plays a short tune on three channels. It can be compiled and uploaded to the Arduino with PlatformIO—installable with pip install platformio—using the following command:1

$ platformio ci \
    --board nanoatmega328 \
    --lib ../../src \
    --project-option="targets=upload" \
    --project-option="upload_port=/dev/ttyUSB0" \
    DemoTune.ino
[...]
PLATFORM: Atmel AVR > Arduino Nano ATmega328
SYSTEM: ATMEGA328P 16MHz 2KB RAM (30KB Flash)
Converting DemoTune.ino
[...]
Configuring upload protocol...
AVAILABLE: arduino
CURRENT: upload_protocol = arduino
Looking for upload port...
Use manually specified: /dev/ttyUSB0
Uploading .pioenvs/nanoatmega328/firmware.hex
[...]
avrdude: 6618 bytes of flash written
[...]
===== [SUCCESS] Took 5.94 seconds =====

Immediately after the upload, the Arduino plays the tune. 🎶

The next interesting example is SerialIface.ino. It turns the audio board into a sound card over the serial port. Once the code has been pushed to the Arduino, you can use the play.py program in the same directory to play VGM files. VGM is a sample-accurate sound format for many sound chips: it logs the exact commands sent. There are many such files on VGMRips. Be sure to choose the ones for the YM3812/OPL2! Here is a small selection:

Usage with DOSBox & ScummVM🔗

Notice

The support for the serial protocol used in this section has not been merged yet. In the meantime, grab SerialIface.ino from the pull request: git checkout 50e1717.

When the Arduino is flashed with SerialIface.ino, the board can be driven through a simple protocol over the serial port. By patching DOSBox and ScummVM, we can make them use this unusual sound card. Here are some examples of games:

• 0:00, with DOSBox, the first level of Doom 🎮
• 1:06, with DOSBox, the introduction of Loom 🎼
• 2:38, with DOSBox, the first level of Lemmings 🐹
• 3:32, with DOSBox, the introduction of Legend of Kyrandia 🃏
• 6:47, with ScummVM, the introduction of Day of the Tentacle ☢️
• 11:10, with DOSBox, the introduction of Another World2 🐅

DOSBox🔗

The serial protocol is described in the SerialIface.ino file:

/*
 * A very simple serial protocol is used.
 *
 * - Initial 3-way handshake to overcome reset delay / serial noise issues.
 * - 5-byte binary commands to write registers.
 *   - (uint8)  OPL2 register address
 *   - (uint8)  OPL2 register data
 *   - (int16)  delay (milliseconds); negative -> pre-delay; positive -> post-delay
 *   - (uint8)  delay (microseconds / 4)
 *
 * Example session:
 *
 * Arduino: HLO!
 * PC:      BUF?
 * Arduino: 256 (switches to binary mode)
 * PC:      0xb80a014f02 (write OPL register and delay)
 * Arduino: k
 *
 * A variant of this protocol is available without the delays. In this
 * case, the BUF? command should be sent as B0F? The binary protocol
 * is now using 2-byte binary commands:
 *   - (uint8)  OPL2 register address
 *   - (uint8)  OPL2 register data
 */

Adding support for this protocol in DOSBox is relatively simple (patch). For best performance, we use the 2-byte variant (5000 ops/s). The binary commands are pipelined and a dedicated thread collects the acknowledgments. A semaphore captures the number of free slots in the receive buffer. As it is not possible to read registers, we rely on DOSBox to emulate the timers, which are mostly used to let the various games detect the OPL2. The patch is tested only on Linux but should work on any POSIX system—not Windows.

To test it, you need to build DOSBox from source:

$ sudo apt build-dep dosbox
$ git clone https://github.com/vincentbernat/dosbox.git -b feature/opl2audioboard
$ cd dosbox
$ ./autogen.sh
$ ./configure && make
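For illustration, here is a hypothetical Python sketch that packs the write commands as the SerialIface.ino comment describes; the helper names are mine, and the big-endian layout for the 16-bit delay is an assumption that matches the 0xb80a014f02 example session:

```python
import struct

def pack_write_5(reg, data, delay_ms=0, delay_us4=0):
    """5-byte command: register address, register data, signed 16-bit
    delay in milliseconds (negative = pre-delay), and microseconds / 4.
    Assumes big-endian for the 16-bit delay."""
    return struct.pack(">BBhB", reg, data, delay_ms, delay_us4)

def pack_write_2(reg, data):
    """2-byte command for the delay-less variant (handshake with B0F?)."""
    return struct.pack(">BB", reg, data)

# The example session's command: write 0x0a to register 0xb8, delay 0x014f ms.
print(pack_write_5(0xb8, 0x0a, 0x014f, 0x02).hex())  # b80a014f02
```

Sent over the serial port, each such command would be acknowledged by a k, per the example session in the comment.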


Replace the sblaster section of ~/.dosbox/dosbox-SVN.conf:

[sblaster]
sbtype=none
oplmode=opl2
oplrate=49716
oplemu=opl2arduino
opl2arduino=/dev/ttyUSB0


Then, run DOSBox with ./src/dosbox. That’s it!

You will likely get the “OPL2Arduino: too slow, consider increasing buffer” message a lot. To fix this, you need to recompile SerialIface.ino with a bigger receive buffer:

$ platformio ci \
    --board nanoatmega328 \
    --lib ../../src \
    --project-option="targets=upload" \
    --project-option="upload_port=/dev/ttyUSB0" \
    --project-option="build_flags=-DSERIAL_RX_BUFFER_SIZE=512" \
    SerialIface.ino

ScummVM🔗

The same code can be adapted for ScummVM (patch). To test, build it from source:

$ sudo apt build-dep scummvm
$ git clone https://github.com/vincentbernat/scummvm.git -b feature/opl2audioboard
$ cd scummvm
$ ./configure --disable-all-engines --enable-engine=scumm && make

Then, you can start ScummVM with ./scummvm. Select “AdLib Emulator” as the music device and “OPL2 Arduino” as the AdLib emulator.3 Like for DOSBox, watch the console to check if you need a larger receive buffer. Enjoy! 😍

1. This command is valid for an Arduino Nano. For another board, take a look at the output of platformio boards arduino. ↩︎

2. Another World (also known as Out of This World), released in 1991, designed by Éric Chahi, uses sampled sounds at 5 kHz or 10 kHz. With a serial port operating at 115,200 bits/s, the 5 kHz option is just within our reach. However, I have no idea if the rendering is faithful. It doesn’t sound like a SoundBlaster, but it is analogous to the rendering of the OPL2LPT, which sounds similar to the SoundBlaster when using the 10 kHz option. DOSBox’ AdLib emulation using Nuked OPL3—which is considered to be the best—sounds worse. ↩︎

3. If you need to specify a serial port other than /dev/ttyUSB0, add a line opl2arduino_device= in the ~/.scummvmrc configuration file. ↩︎

Planet Linux Australia — David Rowe: WaveNet and Codec 2

Yesterday my friend and fellow open source speech coder Jean-Marc Valin (of Speex and Opus fame) emailed me with some exciting news. W. Bastiaan Kleijn and friends have published a paper called “Wavenet based low rate speech coding“. Basically they take the bit stream of Codec 2 running at 2400 bit/s, and replace the Codec 2 decoder with the WaveNet deep learning generative model.

What is amazing is the quality – it sounds as good as an 8000 bit/s wideband speech codec! They have generated wideband audio from the narrowband Codec 2 model parameters. Here are the samples – compare “Parametrics WaveNet” to Codec 2!

This is a game changer for low bit rate speech coding. I’m also happy that Codec 2 has been useful for academic research (Yay open source), and that the MOS scores in the paper show it’s close to MELP at 2400 bit/s.
Last year we discovered Codec 2 is better than MELP at 600 bit/s. Not bad for an open source codec written (more or less) by one person. Now I need to do some reading on Deep Learning!

Reading Further

,

Planet Debian — Benjamin Mako Hill: Hyak on Hyak

I recently fulfilled a yearslong dream of launching a job on Hyak* on Hyak.

* Hyak is the University of Washington’s supercomputer which my research group uses for most of our computation-intensive research. M/V Hyak is a Super-class ferry operated by the Washington State Ferry System.

Cryptogram — Friday Squid Blogging: Squid Prices Rise as Catch Decreases

In Japan:

Last year's haul sank 15% to 53,000 tons, according to the JF Zengyoren national federation of fishing cooperatives. The squid catch has fallen by half in just two years. The previous low was plumbed in 2016.

Lighter catches have been blamed on changing sea temperatures, which impedes the spawning and growth of the squid. Critics have also pointed to overfishing by North Korean and Chinese fishing boats.

Wholesale prices of flying squid have climbed as a result. Last year's average price per kilogram came to 564 yen, a roughly 80% increase from two years earlier, according to JF Zengyoren.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered. Read my blog posting guidelines here.

Krebs on Security — Is Facebook’s Anti-Abuse System Broken?

Facebook has built some of the most advanced algorithms for tracking users, but when it comes to acting on user abuse reports about Facebook groups and content that clearly violate the company’s “community standards,” the social media giant’s technology appears to be woefully inadequate.

Last week, Facebook deleted almost 120 groups totaling more than 300,000 members. The groups were mostly closed — requiring approval from group administrators before outsiders could view the day-to-day postings of group members.
However, the titles, images and postings available on each group’s front page left little doubt about their true purpose: selling everything from stolen credit cards, identities and hacked accounts to services that help automate things like spamming, phishing and denial-of-service attacks for hire.

To its credit, Facebook deleted the groups within just a few hours of KrebsOnSecurity sharing via email a spreadsheet detailing each group, which concluded that the average length of time the groups had been active on Facebook was two years. But I suspect that the company took this extraordinary step mainly because I informed them that I intended to write about the proliferation of cybercrime-based groups on Facebook.

That story, Deleted Facebook Cybercrime Groups had 300,000 Members, ended with a statement from Facebook promising to crack down on such activity and instructing users on how to report groups that violate its community standards.

In short order, some of the groups I reported that were removed re-established themselves within hours of Facebook’s action. Instead of contacting Facebook’s public relations arm directly, I decided to report those resurrected groups and others using Facebook’s stated process. Roughly two days later I received a series of replies saying that Facebook had reviewed my reports but that none of the groups were found to have violated its standards. Here’s a snippet from those replies:

Perhaps I should give Facebook the benefit of the doubt: Maybe my multiple reports one after the other triggered some kind of anti-abuse feature that is designed to throttle those who would seek to abuse it to get otherwise legitimate groups taken offline — much in the way that pools of automated bot accounts have been known to abuse Twitter’s reporting system to successfully sideline accounts of specific targets. Or it could be that I simply didn’t click the proper sequence of buttons when reporting these groups.
The closest matches I could find in Facebook’s abuse reporting system were “Doesn’t belong on Facebook” and “Purchase or sale of drugs, guns or regulated products.” There was/is no option for “selling hacked accounts, credit cards and identities,” or anything of that sort.

In any case, one thing seems clear: naming and shaming these shady Facebook groups via Twitter seems to work better right now for getting them removed from Facebook than using Facebook’s own formal abuse reporting process. So that’s what I did on Thursday. Here’s an example:

Within minutes of my tweeting about this, the group was gone. I also tweeted about “Best of the Best,” which was selling accounts from many different e-commerce vendors, including Amazon and eBay:

That group, too, was nixed shortly after my tweet. And so it went for other groups I mentioned in my tweetstorm today.

But in response to that flurry of tweets about abusive groups on Facebook, I heard from dozens of other Twitter users who said they’d received the same “does not violate our community standards” reply from Facebook after reporting other groups that clearly flouted the company’s standards.

Pete Voss, Facebook’s communications manager, apologized for the oversight. “We’re sorry about this mistake,” Voss said. “Not removing this material was an error and we removed it as soon as we investigated. Our team processes millions of reports each week, and sometimes we get things wrong. We are reviewing this case specifically, including the user’s reporting options, and we are taking steps to improve the experience, which could include broadening the scope of categories to choose from.”

Facebook CEO and founder Mark Zuckerberg testified before Congress last week in response to allegations that the company wasn’t doing enough to halt the abuse of its platform for things like fake news, hate speech and terrorist content.
It emerged that Facebook already employs 15,000 human moderators to screen and remove offensive content, and that it plans to hire another 5,000 by the end of this year. “But right now, those moderators can only react to posts Facebook users have flagged,” writes Will Knight for Technologyreview.com.

Zuckerberg told lawmakers that Facebook hopes expected advances in artificial intelligence or “AI” technology will soon help the social network do a better job self-policing against abusive content. But for the time being, as long as Facebook mainly acts on abuse reports only when it is publicly pressured to do so by lawmakers or people with hundreds of thousands of followers, the company will continue to be dogged by a perception that doing otherwise is simply bad for its business model.

Update, 1:32 p.m. ET: Several readers pointed my attention to a Huffington Post story just three days ago, “Facebook Didn’t Seem To Care I Was Being Sexually Harassed Until I Decided To Write About It,” about a journalist whose reports of extreme personal harassment on Facebook were met with a similar response about not violating the company’s Community Standards. That is, until she told Facebook that she planned to write about it.

Cryptogram — Securing Elections

Elections serve two purposes. The first, and obvious, purpose is to accurately choose the winner. But the second is equally important: to convince the loser. To the extent that an election system is not transparently and auditably accurate, it fails in that second purpose. Our election systems are failing, and we need to fix them.

Today, we conduct our elections on computers. Our registration lists are in computer databases. We vote on computerized voting machines. And our tabulation and reporting is done on computers. We do this for a lot of good reasons, but a side effect is that elections now have all the insecurities inherent in computers.
The only way to reliably protect elections from both malice and accident is to use something that is not hackable or unreliable at scale; the best way to do that is to back up as much of the system as possible with paper.

Recently, there have been two graphic demonstrations of how bad our computerized voting system is. In 2007, the states of California and Ohio conducted audits of their electronic voting machines. Expert review teams found exploitable vulnerabilities in almost every component they examined. The researchers were able to undetectably alter vote tallies, erase audit logs, and load malware on to the systems. Some of their attacks could be implemented by a single individual with no greater access than a normal poll worker; others could be done remotely.

Last year, the Defcon hackers' conference sponsored a Voting Village. Organizers collected 25 pieces of voting equipment, including voting machines and electronic poll books. By the end of the weekend, conference attendees had found ways to compromise every piece of test equipment: to load malicious software, compromise vote tallies and audit logs, or cause equipment to fail.

It's important to understand that these were not well-funded nation-state attackers. These were not even academics who had been studying the problem for weeks. These were bored hackers, with no experience with voting machines, playing around between parties one weekend.

It shouldn't be any surprise that voting equipment, including voting machines, voter registration databases, and vote tabulation systems, are that hackable. They're computers -- often ancient computers running operating systems no longer supported by the manufacturers -- and they don't have any magical security technology that the rest of the industry isn't privy to. If anything, they're less secure than the computers we generally use, because their manufacturers hide any flaws behind the proprietary nature of their equipment.
We're not just worried about altering the vote. Sometimes causing widespread failures, or even just sowing mistrust in the system, is enough. And an election whose results are not trusted or believed is a failed election.

Voting systems have another requirement that makes security even harder to achieve: the requirement for a secret ballot. Because we have to securely separate the election-roll system that determines who can vote from the system that collects and tabulates the votes, we can't use the security systems available to banking and other high-value applications. We can securely bank online, but can't securely vote online. If we could do away with anonymity -- if everyone could check that their vote was counted correctly -- then it would be easy to secure the vote. But that would lead to other problems. Before the US had the secret ballot, voter coercion and vote-buying were widespread.

We can't, so we need to accept that our voting systems are insecure. We need an election system that is resilient to the threats. And for many parts of the system, that means paper.

Let's start with the voter rolls. We know they've already been targeted. In 2016, someone changed the party affiliation of hundreds of voters before the Republican primary. That's just one possibility. A well-executed attack that deletes, for example, one in five voters at random -- or changes their addresses -- would cause chaos on election day.

Yes, we need to shore up the security of these systems. We need better computer, network, and database security for the various state voter organizations. We also need to better secure the voter registration websites, with better design and better internet security. We need better security for the companies that build and sell all this equipment. Multiple, unchangeable backups are essential. A record of every addition, deletion, and change needs to be stored on a separate system, on write-only media like a DVD.
Copies of that DVD, or -- even better -- a paper printout of the voter rolls, should be available at every polling place on election day. We need to be ready for anything.

Next, the voting machines themselves. Security researchers agree that the gold standard is a voter-verified paper ballot. The easiest (and cheapest) way to achieve this is through optical-scan voting. Voters mark paper ballots by hand; they are fed into a machine and counted automatically. That paper ballot is saved, and serves as a final true record in a recount in case of problems. Touch-screen machines that print a paper ballot to drop in a ballot box can also work for voters with disabilities, as long as the ballot can be easily read and verified by the voter.

Finally, the tabulation and reporting systems. Here again we need more security in the process, but we must always use those paper ballots as checks on the computers. A manual, post-election, risk-limiting audit varies the number of ballots examined according to the margin of victory. Conducting this audit after every election, before the results are certified, gives us confidence that the election outcome is correct, even if the voting machines and tabulation computers have been tampered with.

Additionally, we need better coordination and communications when incidents occur. It's vital to agree on these procedures and policies before an election. Before the fact, when anyone can win and no one knows whose votes might be changed, it's easy to agree on strong security. But after the vote, someone is the presumptive winner -- and then everything changes. Half of the country wants the result to stand, and half wants it reversed. At that point, it's too late to agree on anything.

The politicians running in the election shouldn't have to argue their challenges in court. Getting elections right is in the interest of all citizens.
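The paper-as-check idea lends itself to a toy illustration (my own sketch, not a real risk-limiting audit, whose statistics are considerably more subtle): draw a random sample of paper ballots and compare the winner's share in the sample against the reported machine tally.

```python
import random

def sample_matches_tally(paper_ballots, reported_share, sample_size, tolerance=0.05):
    """Compare the winner's share in a random sample of paper ballots
    against the reported (machine-tallied) share, within a crude tolerance."""
    sample = random.sample(paper_ballots, sample_size)
    sample_share = sum(sample) / sample_size  # ballots: 1 = winner, 0 = loser
    return abs(sample_share - reported_share) <= tolerance

# 10,000 paper ballots, 54% actually cast for the winner.
random.seed(1)
ballots = [1] * 5400 + [0] * 4600
print(sample_matches_tally(ballots, 0.54, 2000))  # honest report: expect True
print(sample_matches_tally(ballots, 0.70, 2000))  # tampered report: expect False
```

A real risk-limiting audit chooses the sample size from the margin and a risk limit rather than a fixed tolerance, but the principle is the same: the paper, not the machine, is the ground truth.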
Many countries have independent election commissions that are charged with conducting elections and ensuring their security. We don't do that in the US. Instead, we have representatives from each of our two parties in the room, keeping an eye on each other. That provided acceptable security against 20th-century threats, but is totally inadequate to secure our elections in the 21st century. And the belief that the diversity of voting systems in the US provides a measure of security is a dangerous myth, because few districts can be decisive and there are so few voting-machine vendors.

We can do better. In 2017, the Department of Homeland Security declared elections to be critical infrastructure, allowing the department to focus on securing them. On 23 March, Congress allocated $380m to states to upgrade election security.

These are good starts, but don't go nearly far enough. The constitution delegates elections to the states but allows Congress to "make or alter such Regulations". In 1845, Congress set a nationwide election day. Today, we need it to set uniform and strict election standards.

This essay originally appeared in the Guardian.

Worse Than Failure — Error'd: Placeholders-a-Plenty

"On my admittedly old and cheap phone, Google Maps seems to have confused the definition of the word 'trip'," writes Ivan.

"When you're Gas Networks Ireland, and don't have anything nice to say, I guess you just say lorem ipsum," wrote Gabe.

Daniel D. writes, "Google may not know how I got 100 GB, but they seem pretty sure that it's expiring soon."

Peter S. wrote, "F1 finally did it. The quantum driver Lastname is driving a Ferrari and chasing him- or herself in Red Bull."

Hrish B. writes, "I hope my last name is not an example as well."

Peter S. wrote, "Not sure what IEEE wants me to vote for. But I would vote for hiring better coders."

"Well, at least they got my name right, half of the time," Peter S. writes.

Planet Linux Australia — OpenSTEM: NAPLAN and vocabulary

It is the time of year when the thoughts of teachers of students in years 3, 5, 7 and 9 turn (not so) lightly to NAPLAN. I’m sure many of you are aware of the controversial review of NAPLAN by Les Perelman, a retired professor from MIT in the United States. Perelman conducted a similar […]

Planet Linux Australia — Francois Marier: Using a Kenwood TH-D72A with Pat on Linux and ax25

Here is how I managed to get my Kenwood TH-D72A radio working with Pat on Linux, using the built-in TNC and the AX.25 mode.

Installing Pat

First of all, download and install the latest Pat package from the GitHub project page.

dpkg -i pat_x.y.z_amd64.deb


Then, follow the installation instructions for the AX.25 mode and install the necessary packages:

apt install ax25-tools ax25-apps


along with the systemd script that comes with Pat:

/usr/share/pat/ax25/install-systemd-ax25-unit.bash


Configuration

Once the packages are installed, it's time to configure everything correctly:

2. Enable TNC in packet12 mode (band A*).
3. Tune band A to VECTOR channel 420 (or 421 if you can't reach VA7EOC on simplex).
4. Put the following in /etc/ax25/axports (replacing CALLSIGN with your own callsign):

 wl2k    CALLSIGN    9600    128    4    Winlink

5. Set HBAUD to 1200 in /etc/default/ax25.

6. Download and compile the tmd710_tncsetup script mentioned in a comment in /etc/default/ax25:

 gcc -o tmd710_tncsetup tmd710_tncsetup.c

7. Add the tmd710_tncsetup script in /etc/default/ax25 and use these command line parameters (-B 0 specifies band A, use -B 1 for band B):

 tmd710_tncsetup -B 0 -S $DEV -b$HBAUD -s

8. Start ax25 driver:

 systemctl start ax25.service


To monitor what is being received and transmitted:

axlisten -cart


Then create aliases like these in ~/.wl2k/config.json:

{
  "connect_aliases": {
    "ax25-VA7EOC": "ax25://wl2k/VA7EOC-10",
    "ax25-VE7LAN": "ax25://wl2k/VE7LAN-10"
  },
}


Troubleshooting

If it doesn't look like ax25 can talk to the radio (i.e. the TX light doesn't turn ON), then it's possible that the tmd710_tncsetup script isn't being run at all, in which case the TNC isn't initialized correctly.

On the other hand, if you can see the radio transmitting but are not seeing any incoming packets in axlisten then double check that the speed is set correctly:

• HBAUD in /etc/default/ax25 should be set to 1200
• line speed in /etc/ax25/axports should be set to 9600
• SERIAL_SPEED in tmd710_tncsetup should be set to 9600
• radio displays packet12 in the top-left corner, not packet96

If you can establish a connection, but it's very unreliable, make sure that you have enabled software flow control (the -s option in tmd710_tncsetup).

If you can't connect to VA7EOC-10 on UHF, you could also try the VHF BCFM repeater on Mt Seymour, VE7LAN (VECTOR channel 65).
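Since mixing up the two speeds is the easiest mistake to make, here is a small hypothetical helper (mine, not part of Pat or the ax25 tools) that encodes the checklist above, using the axports field layout from this post:

```python
def check_speeds(axports_line, hbaud, serial_speed):
    """Encode the checklist above: the PC<->radio serial link runs at 9600,
    while the over-the-air packet rate (HBAUD) must be 1200 for packet12."""
    problems = []
    # axports format: portname callsign speed paclen window description
    line_speed = int(axports_line.split()[2])
    if line_speed != 9600:
        problems.append("line speed in /etc/ax25/axports should be 9600")
    if hbaud != 1200:
        problems.append("HBAUD in /etc/default/ax25 should be 1200")
    if serial_speed != 9600:
        problems.append("SERIAL_SPEED in tmd710_tncsetup should be 9600")
    return problems

print(check_speeds("wl2k CALLSIGN 9600 128 4 Winlink", 1200, 9600))  # []
print(check_speeds("wl2k CALLSIGN 1200 128 4 Winlink", 9600, 9600))  # two problems
```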

,

Cryptogram — Lifting a Fingerprint from a Photo

Police in the UK were able to read a fingerprint from a photo of a hand:

Staff from the unit's specialist imaging team were able to enhance a picture of a hand holding a number of tablets, which was taken from a mobile phone, before fingerprint experts were able to positively identify that the hand was that of Elliott Morris.

[...]

Speaking about the pioneering techniques used in the case, Dave Thomas, forensic operations manager at the Scientific Support Unit, added: "Specialist staff within the JSIU fully utilised their expert image-enhancing skills which enabled them to provide something that the unit's fingerprint identification experts could work. Despite being provided with only a very small section of the fingerprint which was visible in the photograph, the team were able to successfully identify the individual."

Worse Than Failure — CodeSOD: A Problematic Place

In programming, sometimes the ordering of your data matters. And sometimes the ordering doesn’t matter and it can be completely random. And sometimes… well, El Dorko found a case where it apparently matters that it doesn’t matter:

DirectoryInfo di = new DirectoryInfo(directory);
FileInfo[] files = di.GetFiles();
DirectoryInfo[] subdirs = di.GetDirectories();

// shuffle subdirs to avoid problematic places
Random rnd = new Random();
for( int i = subdirs.Length - 1; i > 0; i-- )
{
int n = rnd.Next( i + 1 );
DirectoryInfo tmp = subdirs[i];
subdirs[i] = subdirs[n];
subdirs[n] = tmp;
}

foreach (DirectoryInfo dir in subdirs)
{
// process files in directory
}

This code does some processing on a list of directories. Apparently while coding this, the author found themself in a “problematic place”. We all have our ways of avoiding problematic places, but this programmer decided the best way was to introduce some randomness into the equation. By randomizing the order of the list, they seem to have completely mitigated… well, it’s not entirely clear what they’ve mitigated. And while their choice of shuffling algorithm is commendable, maybe next time they could leave us a comment elaborating on the problematic place they found themself in.
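For what it's worth, the loop is a textbook Fisher–Yates shuffle; in a language like Python the same (still unexplained) mitigation would be a single library call. A sketch:

```python
import random

def process_dirs(subdirs):
    """Shuffle the subdirectories in place, then process them in the new
    order -- the one-line equivalent of the hand-rolled Fisher-Yates loop."""
    random.shuffle(subdirs)  # uniform random permutation
    for d in subdirs:
        pass  # process files in directory

dirs = ["a", "b", "c", "d"]
process_dirs(dirs)
print(sorted(dirs))  # shuffling permutes but never loses entries
```

Which, of course, still leaves the original mystery intact: why the order needed to be random at all.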

Planet Linux Australia — Michael Still: Art with condiments

Mr 15 just made me watch this video, it's pretty awesome…

You’re welcome.

The post Art with condiments appeared first on Made by Mikal.

,

Sociological Images — The Sociology Behind the X-Files

Originally Posted at TSP Clippings

Throughout history, human beings have been enthralled by the idea of the paranormal. While we might think that UFOs and ghosts belong to a distant and obscure dimension, social circumstances help to shape how we envision the supernatural. In a recent interview with New York Magazine, sociologist Joseph O. Baker describes the social aspects of Americans’ beliefs about UFOs.

Baker argues that pop culture shapes our understandings of aliens. In the 1950s and 1960s, pop culture imagined aliens in humanoid form, typically as very attractive Swedish blonde types with shining eyes. By the 1970s and 1980s, the abductor narrative took hold and extraterrestrials were represented as the now iconic image of the little gray abductor — small, grey-skinned life-forms with huge hairless heads and large black eyes. Baker posits that one of the main causes of UFOs’ heightened popularity during this time was the extreme distrust of the government following incidents such as Watergate. Baker elaborates,

“I think there is something to be said for a lack of faith in government and institutions in that era, and that coincided with UFOs’ rise in popularity. The lack of trust in the government, and the idea that the government knows something about this — those two things went together, and you can see it in the public reaction post-Vietnam, to Watergate, all that stuff.”

While the individual characteristics of “believers” are hard to determine, survey evidence suggests that men and people from low-income backgrounds are more likely to believe in the existence of alien life. Baker says that believing is also dependent upon religious participation rather than education or income. In his words,

“One of the other strongest predictors is not participating as strongly in forms of organized religion. In some sense, there’s a bit of a clue there about what’s going on with belief — it’s providing an alternative belief system. If you look at religious-service attendance, there will be a strong negative effect there for belief in UFOs.”

Baker’s research on the paranormal indicates that social circumstances influence belief in extraterrestrial beings. In short, these social factors help to shape whether you are a Mulder or a Scully. Believing in UFOs goes beyond abductions and encounters of the Third Kind. In the absence of trust in government and religious institutions, UFOs represent an appealing and mysterious alternative belief system.

Isabel Arriagada is a Ph.D. student in the sociology department at the University of Minnesota. Her research focuses on the development of prison policies in South America and the U.S. and how technology shapes new experiences of imprisonment.

Krebs on Security — A Sobering Look at Fake Online Reviews

In 2016, KrebsOnSecurity exposed a network of phony Web sites and fake online reviews that funneled those seeking help for drug and alcohol addiction toward rehab centers that were secretly affiliated with the Church of Scientology. Not long after the story ran, that network of bogus reviews disappeared from the Web. Over the past few months, however, the same prolific purveyor of these phantom sites and reviews appears to be back at it again, enlisting the help of Internet users and paying people $25-$35 for each fake listing.

Sometime in March 2018, ads began appearing on Craigslist promoting part-time “social media assistant” jobs, in which interested applicants are directed to sign up for positions at seorehabs[dot]com. This site promotes itself as “leaders in addiction recovery consulting,” explaining that assistants can earn a minimum of $25 just for creating individual Google for Business listings tied to a few dozen generic-sounding addiction recovery center names, such as “Integra Addiction Center” and “First Exit Recovery.”

The listing on Craigslist.com advertising jobs for creating fake online businesses tied to addiction rehabilitation centers.

Applicants who sign up are given detailed instructions on how to step through Google’s anti-abuse process for creating listings, which include receiving a postcard via snail mail from Google that contains a PIN which needs to be entered at Google’s site before a listing can be created. Assistants are cautioned not to create more than two listings per street address, but otherwise to use any U.S.-based street address and to leave blank the phone number and Web site for the new business listing.

A screen shot from Seorehabs’ instructions for those hired to create rehab center listings.

In my story Scientology Seeks Captive Converts Via Google Maps, Drug Rehab Centers, I showed how a labyrinthine network of fake online reviews that steered Internet searches toward rehab centers funded by Scientology adherents was set up by TopSeek Inc., which bills itself as a collection of “local marketing experts.” According to LinkedIn, TopSeek is owned by John Harvey, an individual (or alias) who lists his address variously as Sacramento, Calif. and Hawaii. Although the current Web site registration records from registrar giant Godaddy obscure the information for the current owner of seorehabs[dot]com, a historic WHOIS search via DomainTools shows the site was also registered by John Harvey and TopSeek in 2015. Mr.
Harvey did not respond to requests for comment. [Full disclosure: DomainTools previously was an advertiser on KrebsOnSecurity].

TopSeek’s Web site says it works with several clients, but most especially Narconon International — an organization that promotes the rather unorthodox theories of Scientology founder L. Ron Hubbard regarding substance abuse treatment and addiction. As described in Narconon’s Wikipedia entry, Narconon facilities are known not only for attempting to win over new converts to Scientology, but also for treating all substance abuse addictions with a rather bizarre cocktail consisting mainly of vitamins and long hours in extremely hot saunas. Their Wiki entry documents multiple cases of accidental deaths at Narconon facilities, where some addicts reportedly died from overdoses of vitamins or neglect.

A LUCRATIVE RACKET

Bryan Seely, a security expert who has written extensively about the use of fake search listings to conduct online bait-and-switch scams, said the purpose of sites like those that Seorehabs pays people to create is to funnel calls to a handful of switchboards that then sell the leads to rehab centers that have agreed to pay for them. Many rehab facilities will pay hundreds of dollars for leads that may ultimately lead to a new patient. After all, Seely said, some facilities can then turn around and bill insurance providers for thousands of dollars per patient.

Perhaps best known for a stunt in which he used fake Google Maps listings to intercept calls destined for the FBI and U.S. Secret Service, Seely has learned a thing or two about this industry: until 2011, he worked for an SEO firm that helped to develop and spread some of the same fake online reviews that he is now helping to clean up.

“Mr. Harvey and TopSeek are crowdsourcing the data input for these fake rehab centers,” Seely said. “The phone numbers all go to just a few dedicated call centers, and it’s not hard to see why. The money is good in this game.
He sells a call for $50-$100 at a minimum, and the call center then tries to sell that lead to a treatment facility that has agreed to buy leads. Each lead can be worth $5,000 to $10,000 for a patient who has good health insurance and signs up.”

This graph illustrates what happens when someone calls one of these Seorehabs listings. Source: Bryan Seely.

Many of the listings created by Seorehab assistants are tied to fake Google Maps entries that include phony reviews for bogus treatment centers. In the event those listings get suspended by Google, Seorehab offers detailed instructions on how assistants can delete and re-submit listings. Assistants also can earn extra money writing fake, glowing reviews of the treatment centers.

Below are some of the plainly bogus reviews and listings created in the last month that pimp the various treatment center names and Web sites provided by Seorehabs. It is not difficult to find dozens of other examples of people who claim to have been at multiple Seorehab-promoted centers scattered across the country. For example, “Gloria Gonzalez” supposedly has been treated at no fewer than seven Seorehab-marketed detox locations in five states, penning each review just in the last month.

A reviewer using the name “Tedi Spicer” also promoted at least seven separate rehab centers across the United States in the past month. Getting treated at so many far-flung facilities in just the few months that the domains for these supposed rehab centers have been online would be an impressive feat.

Bring up any of the Web sites for these supposed rehab listings and you’ll notice they all include the same boilerplate text and graphic design. Aside from combing listings created by the reviewers paid to promote the sites, we can find other Seorehab listings just by searching the Web for chunks of text on the sites.
Doing so reveals a long list (this is likely far from comprehensive) of domain names registered in the past few months that were all created with hidden registration details and registered via Godaddy.

Seely said he spent a few hours this week calling dozens of phone numbers tied to these rehab centers promoted by TopSeek, and created a spreadsheet documenting his work and results here (Google Sheets).

Seely said while he would never advocate such activity, TopSeek’s fake listings could end up costing Mr. Harvey plenty of money if someone figured out a way to either mass-report the listings as fraudulent or automate calls to the handful of hotlines tied to the listings.

“It would kill his business until he changes all the phone numbers tied to these fake listings, but if he had to do that he’d have to pay people to rebuild all the directories that link to these sites,” he said.

WHAT YOU CAN DO ABOUT FAKE ONLINE REVIEWS

Before doing business with a company you found online, don’t just pick the company that comes up at the top of search results on Google or any other search engine. Unfortunately, that generally guarantees little more than that the company is good at marketing.

Take the time to research the companies you wish to hire before booking them for jobs or services — especially when it comes to big, expensive, and potentially risky services like drug rehab or moving companies. By the way, if you’re looking for a legitimate rehab facility, you could do worse than to start at rehabs.com, a legitimate rehab search engine.

It’s a good idea to get in the habit of verifying that the organization’s physical address, phone number and Web address shown in the search result match those of the landing page. If the phone numbers are different, use the contact number listed on the linked site.
Take the time to learn about the organization’s reputation online and in social media; if it has none (other than a Google Maps listing with all glowing, 5-star reviews), it’s probably fake.

Search the Web for any public records tied to the business’ listed physical address, including articles of incorporation from the local secretary of state office online. A search of the company’s domain name registration records can give you an idea of how long its Web site has been in business, as well as additional details about the organization (although the ability to do this may soon be a thing of the past).

Seely said one surefire way to avoid these marketing shell games is to ask a simple question of the person who answers the phone in the online listing.

“Ask anyone on the phone what company they’re with,” Seely said. “Have them tell you, take their information and then call them back. If they aren’t forthcoming about who they are, they’re most likely a scam.”

In 2016, Seely published a book on Amazon about the thriving and insanely lucrative underground business of fake online reviews. He’s agreed to let KrebsOnSecurity republish the entire e-book, which is available for free at this link (PDF).

“This is literally the worst book ever written about Google Maps fraud,” Seely said. “It’s also the best. Is it still a niche if I’m the only one here? The more people who read it, the better.”

TED — What can your phone do in the next mobile economy? A workshop with Samsung

An attendee plays with an interface for exploring the possibilities of the mobile phone at the Samsung Social Space during TED2018: The Age of Amazement, in Vancouver. Photo: Lawrence Sumulong / TED

What do you imagine your phone doing for you in the future? Sure, you can take calls, send texts, use apps and surf the internet. But according to Samsung, the next corner for mobile engagement could turn your cell phone into a superhero (of sorts) in industries like public safety and healthcare.
5G technology will not only improve a company’s ability to deliver faster, higher quality services, but the “greater connectivity paves the way for data-intensive solutions like self-driving vehicles, Hi-Res streaming VR, and rich real-time communications.” Imagine a world where your Facetime or Skype call doesn’t drop mid-conversation, you never have to wait for a video to buffer, and connecting to Wi-Fi becomes the slower option compared to staying on data.

At their afternoon workshop during TED2018, Samsung provided a short list of real-world issues to guide thoughtful discussion among workshop groups on how the mobile economy can be a part of the big solutions. Scenarios included: data security in an evolving retail world; hurricane preparedness; urban traffic management; and overburdened emergency rooms. These breakout sessions led to fascinating conversations between those with different perspectives, backgrounds and skill sets. Architects and scientists weighed in with writers and business development professionals to dream up a vision of the future where everything works seamlessly and interacts like a well-conducted symphony.

After intense discussion, swapping ideas and possibilities, groups were encouraged to synthesize the conversation and share with the larger room. They didn’t just offer solutions, but posed fascinating questions on how we may unlock answers to the endless possibilities the next mobile economy will bring in the age of amazement.

A view of Samsung’s social space at TED2018, which featured mobile phone activities for exploring the next mobile economy (as well as delicious coffee). Photo: Lawrence Sumulong / TED

Cryptogram — Oblivious DNS

Interesting idea:

...we present Oblivious DNS (ODNS), which is a new design of the DNS ecosystem that allows current DNS servers to remain unchanged and increases privacy for data in motion and at rest.
In the ODNS system, both the client is modified with a local resolver, and there is a new authoritative name server for .odns. To prevent an eavesdropper from learning information, the DNS query must be encrypted; the client generates a request for www.foo.com, generates a session key k, encrypts the requested domain, and appends the TLD domain .odns, resulting in {www.foo.com}k.odns. The client forwards this, with the session key encrypted under the .odns authoritative server's public key ({k}PK) in the "Additional Information" record of the DNS query, to the recursive resolver, which then forwards it to the authoritative name server for .odns. The authoritative server decrypts the session key with his private key, and then subsequently decrypts the requested domain with the session key. The authoritative server then forwards the DNS request to the appropriate name server, acting as a recursive resolver. While the name servers see incoming DNS requests, they do not know which clients they are coming from; additionally, an eavesdropper cannot connect a client with her corresponding DNS queries.

News article.

Worse Than Failure — The Proprietary Format

Have you ever secured something with a lock? The intent is that at some point in the future, you'll use the requisite key to regain access to it. Of course, the underlying assumption is that you actually have the key. How do you open a lock once you've lost the key? That's when you need to get creative. Lock picks. Bolt cutters. Blow torch. GAU-8...

In 2004, Ben S. went on a solo bicycle tour, and for reasons of weight, his only computer was a Handspring Visor Deluxe PDA running Palm OS. He had an external, folding keyboard that he would use to type his notes from each day of the trip. To keep these notes organized by day, he stored them in the Datebook (calendar) app as all-day events. The PDA would sync with a desktop computer using a Handspring-branded fork of the Palm Desktop software.
The whole Datebook could then be exported as a text file from there. As such, Ben figured his notes were safe.

After the trip ended, he bought a Windows PC that he had until 2010, but he never quite got around to exporting the text file. After he switched to using a Mac, he copied the files to the Mac and gave away the PC. Ten years later, he decided to go through all of the old notes, but he couldn't open the files! Uh oh.

The Handspring company had gone out of business, and the software wouldn't run on the Mac. His parents had the Palm-branded version of the software on one of their older Macs, but Handspring used a different data file format that the Palm software couldn't open. His in-laws had an old Windows PC, and he was able to install the Handspring software, but it wouldn't even open without a physical device to sync with, so the file just couldn't be opened. Ben reluctantly gave up on ever accessing the notes again.

Have you ever looked at something and then turned your head sideways, only to see it in a whole new light? One day, Ben was going through some old clutter and found a backup DVD-R he had made of the Windows PC before he had wiped its hard disk. He found the datebook.dat file and opened it in SublimeText. There he saw rows and rows of hexadecimal code arranged into tidy columns. However, in this case, the columns between the codes were not just on-screen formatting for readability, they were actual space characters! It was not a data file after all, it was a text file. The Handspring data file format was a text file containing hexadecimal code with spaces in it! He copied and pasted the entire file into an online hex-to-text converter (which ignored the spaces and line breaks), and voilà, Ben had his notes back!

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.
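No online converter needed, incidentally: the recovery trick Ben stumbled onto is a one-liner in most languages. Here is a small Python sketch of the same idea, assuming a file of space-separated hex digits like the one described (the sample string below is invented, not Ben's actual data):

```python
def decode_spaced_hex(blob: str) -> str:
    """Strip spaces and line breaks from a space-separated hex dump,
    then decode the remaining hex digits back into text."""
    hex_digits = "".join(blob.split())  # drop all whitespace
    return bytes.fromhex(hex_digits).decode("ascii", errors="replace")

sample = "48 65 6c 6c 6f 2c 20\n64 69 61 72 79 21"
print(decode_spaced_hex(sample))  # Hello, diary!
```

`bytes.fromhex` itself tolerates ASCII spaces between byte pairs, but stripping all whitespace first also handles the line breaks in a multi-row dump.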
Planet Linux Australia — Michael Still: City2Surf 2018

I registered for city2surf this morning, which will be the third time I’ve run in the event. In 2016 my employer sponsored a bunch of us to enter, and I ran the course in 86 minutes and 54 seconds. 2017 was a bit more exciting, because in hindsight I did the final part of my training and the race itself with a torn achilles tendon. Regardless, I finished the course in 79 minutes and 39 seconds — a 7 minute and 16 second improvement despite the injury.

This year I’ve done a few things differently — I’ve started training much earlier, mostly as a side effect of recovering from the achilles injury; and secondly I’ve decided to try and raise some money for charity during the run. Specifically, I’m raising money for the Black Dog Institute. They were selected because I’ve struggled with depression on and off over my adult life, and that’s especially true for the last twelve months or so. I figure that raising money for a resource that I’ve found personally useful makes a lot of sense.

I’d love for you to donate to the Black Dog Institute, but I understand that’s not always possible. Either way, thanks for reading this far!

The post City2Surf 2018 appeared first on Made by Mikal.

,

Planet Linux Australia — David Rowe: Lithium Cell Amp Hour Tester and Electric Sailing

I recently electrocuted my little sail boat. I built the battery pack using some second hand Lithium cells donated by my EV. However, after 8 years of abuse from my kids and me, those cells are of varying quality. So I set about developing an Amp-Hour tester to determine the capacity of the cells.

The system has a relay that switches a low value power resistor (OK, some coat hanger wire) across the 3.2V cell terminals, loading it up at about 27A, roughly the cruise current for my e-boat. It’s about 0.12 ohms once it heats up. This gets too hot to touch but not red hot, it’s only 86W being dissipated along about 1m of wire.
When I built my EV I used the coat hanger wire load trick to test 3kW loads, that was a bit more exciting! The empty beer can in the background makes a useful insulated stand off. Might need to make more of those.

When I first installed Lithium cells in my EV I developed a charge controller for my EV. I borrowed a small part of that circuit: a two transistor flip flop and a Battery Management System (BMS) module.

Across the cell under test is a CM090 BMS module from EV Power. That’s the good looking red PCB in the photos, onto which I have tacked the circuit above. These modules have a switch that opens when the cell voltage drops beneath 2.5V.

Taking the base of either transistor to ground switches on the other transistor. In logic terms, it’s a “not set” and “not reset” operation. When power is applied, the BMS module switch is closed. The 10uF capacitor is discharged, so provides a momentary short to ground, turning Q1 off, and Q2 on. Current flows through the automotive relay, switching on the load to the battery.

After a few hours the cell discharges beneath 2.5V, the BMS switch opens and Q2 is switched off. The collector voltage on Q2 rises, switching on Q1. Due to the latching operation of the flip flop, it stays in this state. This is important, as when the relay opens, the cell will be unloaded and its voltage will rise again and the BMS module switch will close. In the initial design without a flip flop, this caused the relay to buzz as the cell voltage oscillated about 2.5V as the relay opened and closed! I need the test to stop and stay stopped – it will be operating unattended so I don’t want to damage the cell by completely discharging it.

The LED was inserted to ensure the base voltage on Q1 was low enough to switch Q1 off when Q2 was on (Vce of Q2 is not zero), and has the neat side effect of lighting the LED when the test is complete!
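The latching behaviour described above (load on until the cell first dips below 2.5V, then off for good, even as the unloaded voltage recovers) can be sanity-checked with a toy state machine in Python. This is a sketch of the logic only, not the actual circuit; the names and voltage sequence are illustrative:

```python
def simulate_latch(cell_voltages, cutoff=2.5):
    """Toy model of the BMS + flip-flop latch: the load stays on until
    the cell first dips below the cutoff, then stays off permanently,
    even if the unloaded cell voltage recovers above the cutoff."""
    tripped = False  # flip-flop state: has the BMS switch ever opened?
    load_states = []
    for v in cell_voltages:
        if v < cutoff:
            tripped = True  # BMS switch opens; flip-flop latches
        load_states.append(not tripped)
    return load_states

# Cell sags to 2.4V under load, then recovers once unloaded.
# Without the latch the relay would chatter; with it, the load stays off.
print(simulate_latch([3.2, 3.1, 2.4, 2.8, 3.0]))
# [True, True, False, False, False]
```

The key point the simulation captures is that `tripped` never resets, which is exactly why the relay stops buzzing once the flip flop is added.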
In operation, I point a cell phone taking time lapse video of the LED and some multi-meters, and start the test. I wander back after 3 hours and jog-shuttle the time lapse video to determine the time when the LED came on.

The time lapse feature on this phone runs in 1/10 of real time. For example Cell #9 discharged in 12:12 on the time lapse video. So we convert that time to seconds, multiply by 10 to get “seconds of real time”, then divide by 3600 to get the run time in hours. Multiplying by the discharge current of 27(ish) Amps we get the cell capacity:

12:12 time lapse, 27*(12*60+12)*10/3600 = 55AH

So this cell's a bit low, and won’t be finding its way onto my boat!

Another alternative is a logging multimeter; one could even measure and integrate the discharge current over time. Or I could have just bought or borrowed a proper discharge tester, but where’s the fun in that?

Results

It was fun to develop: a few Saturday afternoons of sitting in the driveway soldering, occasional burns from 86W of hot wire, and a little head scratching while I figured out how to take the design from an expensive buzzer to a working circuit. Nice to do some soldering after months of software based DSP. I’m also happy that I could develop a transistor circuit from first principles.

I’ve now tested 12 cells (I have 40 to work through), and measured capacities of 50 to 75AH (they are rated at 100AH new). Some cells have odd behavior under load; dipping beneath 3V right at the start of the test rather than holding 3.2V for a few hours – indicating high internal resistance.

My beloved sail e-boat is already doing better. Last weekend, using the best cells I had tested at that point, I e-motored all day on varying power levels. One neat trick, explained to me by Matt, is motor-sailing. Using a little bit of outboard power, the boat overcomes hydrodynamic friction (it gets moving in the water) and the sail is moved out of stall (like an airplane wing moving to just above stall speed).
This means the boat moves a lot faster than under motor or sail alone in light winds. For example the motor was registering just 80W, but we were doing 3 knots in light winds. This same trick can be done with a stink-motor and dinosaur juice, but the e-motor is completely silent, we forgot it was on for hours at a time!

Reading Further

Cryptogram — Hijacking Emergency Sirens

Turns out it's easy to hijack emergency sirens with a radio transmitter.

Planet Linux Australia — Linux Users of Victoria (LUV) Announce: LUV April 2018 Workshop: Linux and Drupal mentoring and troubleshooting

Apr 21 2018, 12:00–16:00
Location: Room B2:11, State Library of Victoria, 328 Swanston St, Melbourne

As our usual venue at Infoxchange is not available this month due to construction work, we'll be joining forces with DrupalMelbourne at the State Library of Victoria. Linux Users of Victoria is a subcommittee of Linux Australia.

Worse Than Failure — CodeSOD: Breaking Changes

We talk a lot about the sort of wheels one shouldn’t reinvent. Loads of bad code stumbles down that path. Today, Mary sends us some code from their home-grown unit testing framework. Mary doesn’t have much to say about whatever case of Not Invented Here Syndrome brought things to this point.

It’s especially notable that this is Python, which comes, out of the box, with a perfectly serviceable unittest module built in. Apparently not serviceable enough for their team, however, as Burt, the Lead Developer, wrote his own. His was Object Oriented. Each test case received a TestStepOutcome object as a parameter, and was expected to return that object. This meant you didn’t have to use those pesky, readable, and easily comprehensible assert… methods.
Instead, you just did your test steps and called something like:

outcome.setPassed()

Or:

outcome.setPassed(False)

Now, no one particularly liked the calling convention of setPassed(False), so after some complaints, Burt added a setFailed method. Developers started using it, and everyone’s tests passed. Everyone was happy.

At least, everyone was happy up until Mary wrote a test she expected to see fail. There was a production bug, and she could replicate it, step by step, at the Python REPL. So that she could “catch” the bug and “prove” it was dead, she wanted a unit test. The unit test passed. The bug was still there, and she continued to be able to replicate it manually. She tried outcome.setFailed() and outcome.setFailed(True) and outcome.setFailed("OH FFS THIS SHOULD FAIL"), but the test passed. outcome.setPassed(False), however… worked just like it was supposed to.

Mary checked the implementation of the TestStepOutcome class, and found this:

class TestStepOutcome(object):
    def setPassed(self, flag=True):
        self.result = flag

    def setFailed(self, flag=True):
        self.result = flag

Yes, in Burt’s reluctance to have a setFailed method, he just copy/pasted the setPassed, thinking, “They basically do the same thing.” No one checked his work or reviewed the code. They all just started using setFailed, saw their tests pass, which is what they wanted to see, and moved on about their lives.

Fixing Burt’s bug was no small task: changing the setFailed behavior broke a few hundred unit tests, proving that every change breaks someone’s workflow.

Planet Linux Australia — Gary Pendergast: Introducing: Click Sync

Chrome’s syncing is pretty magical: you can see your browsing history from your phone, tablet, and computers, all in one place.
When you install Chrome on a new computer, it automatically downloads your extensions. You can see your bookmarks everywhere, and it even lets you open a tab from another device.

There’s one thing that’s always bugged me, however. When you click a link, it turns purple, as all visited links should. But it doesn’t turn purple on your other devices. Google have had this bug on their radar for ages, but it hasn’t made much progress. There’s already an extension that kind of fixes this, but it works by hashing every URL you visit and sending them to a server run by the extension author: not something I’m particularly comfortable with.

And so, I wrote Click Sync! When you click a link, it’ll use Chrome’s inbuilt sync service to tell all your other computers to mark it as visited. If you like watching videos of links turn purple without being clicked, I have just the thing for you:

While you’re thinking about how Chrome syncs between all your devices, it’s good to set up a Chrome Passphrase, if you haven’t already. This encrypts your personal data before it passes through Google’s servers. Unfortunately, Chrome mobile doesn’t support extensions, so this is only good for syncing between computers. If you run into any bugs, head on over to the Click Sync repository and let me know!

Don Marti — GDPR and client-side tools

Lots of GDPR advice out there. As far as I can tell it pretty much falls into three categories. But what if there is another way?

1. Start with the clean version. (Here's that link again: How to: GDPR, consent and data processing.)

2. Add microformats to label consent forms as consent forms, and appropriate links to the data usage policy to which the user is being asked to agree.

3. Release a browser extension that will do the right thing with the consent forms, and submit automatically if the user is fine with the data usage request and policy, and appears to trust the site.
Lots of options here, since the extension can keep track of known data usage policies and which sites the user appears to trust, based on their activity.

4. Publish user research results from the browser extension.

At this point the browsers can compete to do their own versions of step 3, in order to give their users a more trustworthy and less annoying experience. Browsers need to differentiate in order to attract new users and keep existing users. Right now a good way to do that is in creating a safer-feeling, more trustworthy environment. The big opportunity is in seeing the overlap between that goal for the browser and the needs of brands to build reputation and the needs of high-reputation publishers to shift web advertising from a hacking game that adtech/adfraud wins now, to a reputation game where trusted sites can win.

TED — TEDFilms: Four new short films premiered at TED2018

For the TED conference this year, we wanted to entertain attendees between talks — and support and encourage up-and-coming filmmakers. Meet TEDFilms, a new program for promoting the creation of original short films. Executive-produced by Sinéad McDevitt and led by TED’s director of Production and Video Operations, Mina Sabet, the short films acted as a creative palate-cleanser during the speaker program, a short blast of humor, beauty and awe. Each film is less than two minutes, and genres range from experimental art and documentary to PSA and dark comedy. Enjoy!

Chromatic

As light passes through defective glass, beams split into color spectra, causing ‘diffraction grating’. For the first time ever in film, we get up close and personal with this visual phenomenon in a series of beautiful chromatic abstractions.

Director: Shane Griffin
Music: Gavin Little
With special thanks to: Ed Bruce at Screenscene
Los York

Illusions for a Better Society

Could visual illusions be a cure for polarization?
Co-Directors: Aaron Duffy, Lake Buckley, Jack Foster
Director of Photography: William Atherton
Production Design: Adam Pruitt
Creative Partner: SpecialGuest
Production Company: 1stAveMachine
Producers: Dave Kornfield, Andrew Geller, Matt Snetzko
Music: Bryn Bliska

It’s Not Amazing Enough

The pressures of having to make an amazing film sent this deadpan, deep-voiced, award-winning filmmaker into a crippling spiral of self-doubt and comic indecision.

Director, Writer & Producer: Duncan Cowles
Music: Stillhead

A.I. Therapy

After 100 years of progress, AI bots have finally become too human for their own good.

Mother London
Directors: Emerald Fennell & Chris Vernon
Director of Photography: Ben Kracun
Production Design: Jessica Sutton
VFX: Coffee & TV

,

Krebs on Security — Deleted Facebook Cybercrime Groups Had 300,000 Members

Hours after being alerted by KrebsOnSecurity, Facebook last week deleted almost 120 private discussion groups totaling more than 300,000 members who flagrantly promoted a host of illicit activities on the social media network’s platform. The scam groups facilitated a broad spectrum of shady activities, including spamming, wire fraud, account takeovers, phony tax refunds, 419 scams, denial-of-service attack-for-hire services and botnet creation tools. The average age of these groups on Facebook’s platform was two years.

On Thursday, April 12, KrebsOnSecurity spent roughly two hours combing Facebook for groups whose sole purpose appeared to be flouting the company’s terms of service agreement about what types of content it will or will not tolerate on its platform.

One of nearly 120 different closed cybercrime groups operating on Facebook that were deleted late last week. In total, there were more than 300,000 members of these groups.
The average age of these groups was two years, but some had existed for up to nine years on Facebook.

My research centered on groups whose singular focus was promoting all manner of cyber fraud, but most especially those engaged in identity theft, spamming, account takeovers and credit card fraud. Virtually all of these groups advertised their intent by using well-known fraud terms in their group names, such as “botnet helpdesk,” “spamming,” “carding” (referring to credit card fraud), “DDoS” (distributed denial-of-service attacks), “tax refund fraud,” and account takeovers. Each of these closed groups solicited new members to engage in a variety of shady activities. Some had existed on Facebook for up to nine years; approximately ten percent of them had plied their trade on the social network for more than four years.

Here is a spreadsheet (PDF) listing all of the offending groups reported, including: their stated group names; the length of time they were present on Facebook; the number of members; whether the group was promoting a third-party site on the dark or clear Web; and a link to the offending group. A copy of the same spreadsheet in .csv format is available here.

The biggest collection of groups banned last week promoted the sale and use of stolen credit and debit card accounts. The next largest collection of groups included those facilitating account takeovers — methods for mass-hacking emails and passwords for countless online accounts such as Amazon, Google, Netflix, PayPal, as well as a host of online banking services.

This rather active Facebook group, which specialized in identity theft and selling stolen bank account logins, was active for roughly three years and had approximately 2,500 members.

In a statement to KrebsOnSecurity, Facebook pledged to be more proactive about policing its network for these types of groups. “We thank Mr.
Krebs for bringing these groups to our attention, we removed them as soon as we investigated,” said Pete Voss, Facebook’s communications director. “We investigated these groups as soon as we were aware of the report, and once we confirmed that they violated our Community Standards, we disabled them and removed the group admins. We encourage our community to report anything they see that they don’t think should be in Facebook, so we can take swift action.”

KrebsOnSecurity’s research was far from exhaustive: for the most part, I only looked at groups that promoted fraudulent activities in the English language. Also, I ignored groups that had fewer than 25 members. As such, there may well be hundreds or thousands of other groups who openly promote fraud as their purpose of membership but which achieve greater stealth by masking their intent with variations on or misspellings of different cyber fraud slang terms.

Facebook said its community standards policy does not allow the promotion or sale of illegal goods or services, including credit card numbers or CVV numbers (stolen card details marketed for use in online fraud), and that once a violation is reported, its teams review the report and remove the offending post or group if it violates those policies. The company added that Facebook users can report suspected violations by loading a group’s page, clicking “…” in the top right and selecting “Report Group”. Users who wish to learn more about reporting abusive groups can visit facebook.com/report.

“As technology improves, we will continue to look carefully at other ways to use automation,” Facebook’s statement concludes, responding to questions from KrebsOnSecurity about what steps it might take to more proactively scour its networks for abusive groups. “Of course, a lot of the work we do is very contextual, such as determining whether a particular comment is hateful or bullying.
That’s why we have real people looking at those reports and making the decisions.”

Facebook’s stated newfound interest in cleaning up its platform comes as the social networking giant finds itself reeling from a scandal in which Cambridge Analytica, a political data firm, was found to have acquired access to private data on more than 50 million Facebook profiles — most of them scraped without user permission.

Google Adsense — Helping publishers recover lost revenue from ad blocking

Today, the majority of the internet is supported by digital advertising. But bad ad experiences—the ones that blare music unexpectedly, or force you to wait 10 seconds before you get to the page—are hurting publishers who make the content, apps and services we use every day. When people encounter annoying ads, and then decide to block all ads, it cuts off revenue for the sites you actually find useful. Many of these people don't intend to defund the sites they love when they install an ad blocker, but when they do, they block all ads on every site they visit.

Last year we announced Funding Choices to help publishers with good ad experiences recover lost revenue due to ad blocking. While Funding Choices is still in beta, millions of ad blocking users every month are now choosing to see ads on publisher websites, or “whitelisting” that site, after seeing a Funding Choices message. In fact, in the last month over 4.5 million visitors who were asked to allow ads said yes, creating over 90 million additional paying page views for those sites.

Over the coming weeks, we’re expanding Funding Choices to 31 additional countries, giving publishers the ability to ask visitors from those countries to choose between allowing ads on a site, or purchasing an ad removal pass through Google Contributor. Also, we’ve started a test that allows publishers to use their own proprietary subscription services within Funding Choices.
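The choice this describes — allow ads, pay for a pass, or hit a limit — can be sketched as a toy decision function. This is an illustration only: Funding Choices is a closed Google product whose real implementation is not public, and the mode names, message labels and meter logic below are invented for this sketch.

```javascript
// Toy model of the three visitor-message policies a site might configure.
// All identifiers here are hypothetical, not Funding Choices APIs.
function chooseMessage(mode, viewsThisMonth, monthlyLimit) {
  if (mode === "dismissible") {
    // Ask the visitor to allow ads, but never restrict the content.
    return { blocked: false, message: "please-allow-ads" };
  }
  if (mode === "metered") {
    // Allow a site-chosen number of page views per month,
    // then block until the visitor allows ads or pays.
    const exhausted = viewsThisMonth >= monthlyLimit;
    return {
      blocked: exhausted,
      message: exhausted
        ? "meter-exhausted"
        : `views-left-${monthlyLimit - viewsThisMonth}`,
    };
  }
  // Hard wall: content stays blocked until the visitor whitelists
  // the site or buys an ad-removal pass.
  return { blocked: true, message: "allow-ads-or-pay" };
}
```

A real deployment would pair this with ad-blocker detection and persist the per-visitor view count; the sketch only captures the three-way policy decision itself.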
How Funding Choices works

Funding Choices gives publishers a way to have a conversation with their site visitors through custom messages they can use to express how ad blocking impacts their business and content. When a visitor arrives at a site using an ad blocker, Funding Choices allows the site to display one of three message types to that user:
- A dismissible message that doesn’t restrict access to content.
- A dismissible message that counts and limits the number of page views that person is allowed per month, as determined by the site owner, before the content is blocked.
- A message that blocks access to content until the visitor chooses to allow ads on the site, or to pay to access the content with either the site’s proprietary subscription service or a pass that removes all ads on that site through Google Contributor.

On average, publishers using Funding Choices are seeing 16 percent of visitors allow ads on their sites, with some seeing rates as high as 37 percent.

Ad blockers designed to remove all ads from all sites are making it difficult for publishers with good ad experiences to maintain sustainable businesses. Our goal for Funding Choices is to help publishers get paid for their work by reducing the impact of ad blocking on them, and we look forward to continuing to expand the product availability.

TED — Wow. Just wow. Notes from Session 7 of TED2018

Renzo Piano makes the case for beauty in architecture during TED2018: The Age of Amazement, April 12, 2018, in Vancouver. Photo: Bret Hartman / TED

What we need sometimes is a little awe, a little wonder. This session of TED Talks was designed to provoke an exquisite human emotion: the sense that the world is bigger and stranger than you’d known. Without further ado, wow.

A blueprint for how humans and machines can coexist.
American researchers are leading AI discoveries, Chinese engineers are leading AI implementations like speech recognition and machine translation, and taken together, they will bring about a technological revolution that will pose major challenges to society. All types of jobs will be replaced by AI in the near future, from radiologists to truckers. “But what’s more serious than the loss of jobs is the loss of meaning,” says investor Kai-Fu Lee, “because the work ethic in the Industrial Age has brainwashed us into thinking work is the reason we exist, that work defines the meaning of our lives.” Lee confesses that for many years, he was guilty of being a workaholic — nearly missing the birth of his daughter to give a presentation to Apple’s CEO — until he was diagnosed with Stage IV lymphoma five years ago. The experience made him realize that his priorities were completely out of order, but it also gave him a new view about what AI should mean for humanity. “AI is taking away a lot of routine jobs, but routine jobs are not what we’re about. Why we exist is love,” he says. He explains how we might harness human compassion — along with human creativity — to work with AI in a way that may help solve both the loss of jobs and the loss of meaning.

Personalized music composition through AI. Wouldn’t it be nice to finally have the perfect soundtrack for your life? Perhaps that pensive, melancholic song when you’re feeling down, or an upbeat tune when you’ve just received great news. Well, engineer and musician Pierre Barreau is making your personalized playlist a reality through AI. Barreau created an AI technology called AIVA, or Artificial Intelligence Virtual Artist. He taught AIVA the art of music composition by inputting 30,000 scores from history’s greatest composers. “Using deep neural networks, AIVA looks for patterns in these scores,” he says. “From a couple of bars of existing music from an existing score, [AIVA] infers what should come next in those tracks.
Once AIVA gets good at these predictions, it can actually build a set of mathematical rules for that style of music in order to create its own original compositions.” To make music unique to each person, he also taught AIVA to understand what makes a musical score emotionally unique by matching scores to different categories, including mood and note density. While personalized AI-generated music has clear applications for immersive media storytelling, like video games, it also has the power to better tell our life narratives. “This moment here together at TED is now a part of our life story. So it only felt fitting that AIVA would compose music for this moment.” Barreau concludes by playing a brief, mesmerizing song by AIVA, fittingly titled “The Age of Amazement.”

A retrospective in the pursuit of beauty. “Architecture is the art of making shelter for human beings,” says architect Renzo Piano. Throughout his prolific career, Piano has designed some of the most recognizable buildings across the world; notables include the Centre Georges Pompidou in Paris, The Shard in London and the Whitney Museum of American Art in New York City, part of an impressive body of work spanning decades. When he sets out to create these buildings, he looks for them to flirt with the surrounding world — with water, wind, and even light — and communicate with humanity’s most universal language: beauty. Real beauty, he believes, is when the invisible joins the visible. And this doesn’t just apply to art or nature; it can relate to science and human curiosity as well. “This universal beauty is one of the few things that can change the world,” he says. “Believe me, this beauty will change the world, one person at a time.”

Hope for the organ transplant shortage — inside a baby pig. Every day, thousands of people wait for a desperately needed organ replacement — and by the end of today, 20 people will have died waiting.
For almost half a century it’s been thought to be theoretically possible to transplant organs from pigs into humans, because pigs are similar enough to humans biologically and about the same size. But as scientist Luhan Yang puts it, there was one major problem: “The pig genome has a dangerous virus that does not express in pigs, but can be transmitted to humans.” If the virus, PERV, migrated to humans through a transplanted organ, it could spark a deadly HIV-like epidemic. So research had stalled, but Yang decided to dig in. Using CRISPR, a technique for editing genes, she and her lab have been working to create a pig without PERV in its genome. Difficulty: PERV expresses in 62 places on the pig genome. She shows a picture of Laika, one of the more than 30 PERV-free pigs her lab has bred. They may represent an exciting first step toward solving the organ transplantation crisis. After the talk, co-host Chris Anderson asks Yang for a timeline; she demurs at first but then says “We hope that it happens within one decade.”

CSI: Fingerprints. TV crime dramas are credited with luring people, particularly young women, into the field of forensic science; they’re drawn by the combination of conducting serious lab work and helping catch the bad guys. Seeing a talk from Sheffield Hallam University analytical chemist Simona Francese about her fascinating research might have that same impact. An expert in fingerprint analysis, Francese says: “Molecules are the storytellers of who we are and what we’ve been up to. We just need to have the right technology to make them talk.” Through her work, she is revealing the tales to be found in the microscopic remnants that we all leave behind. A person’s prints can contain three types of molecules: ordinary sweat molecules; molecules of substances that we’ve introduced into our bodies and sweat out; and molecules of stuff that’s adhered to our hands.
Francese and her team achieve their breathtakingly detailed analyses by using UV lasers — which release the molecules in fingerprints — and mass spectrometry imaging technology, which then measures the mass of those molecules, pinpointing what they are. They’ve been able to detect thousands of different molecules in a single print. They can also visualize the distribution of each molecule on the fingerprint — which allows them to separate prints when they’re overlapping, something that tends to stymie the police. Their work can also fill in faint prints by improving ridge pattern continuity and clarity. In 2017, law enforcement in the UK and in other parts of Europe began using Francese’s technology in their criminal investigations.

Sometimes it’s awesome to be sad. “I love depressing songs,” says Luke Sital-Singh, “songs of sorrow, of grief, of longing … because they speak to a very real part of being human.” Accompanied only by a piano on a dark stage, the singer-songwriter performed two gorgeous and melancholic ballads, “Afterneath” and “Killing Me,” leaving many in the audience in tears, including our co-host.

Conquering physical and mental mountains. Not many people would consider being stuck between a rock and … another rock … nearly 3,000 feet off the ground without a rope to be one of the best moments of their life. But for Alex Honnold, it was the culmination of a nearly two-decade-long dream. On June 3, 2017, at the age of 31, Honnold became the only person to summit El Capitan, a nearly 3,000-foot climb in Yosemite National Park, without a rope. This is a style of climbing known as free soloing, and one that Honnold is recognized for internationally. During his talk, Honnold shared how preparing for El Capitan required as much mental exercise as it did physical. He spent a year rehearsing not only every physical move, but every thought and doubt he could have on the wall.
“Doubt is the precursor to fear, and I knew that I couldn’t experience my perfect moment if I was afraid.” In the end, he had his perfect moment, soaring up the wall that takes the average climber three days to summit in a mere three hours and fifty-six minutes. Honnold’s talk ended with a standing ovation and a final note from Chris Anderson: “Don’t share this talk with your children, please.” — Michaela Eames

TED — In Case You Missed It: Finding space to dream at day 3 at TED2018

TED2018 hit its stride on day 3, with talks from explorers of space and oceans, builders of cities and bridges, engineers of the future and many more. Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

Are we alone in the cosmos? The universe is 13.8 billion years old and contains billions of galaxies — in fact, there are probably a trillion planets in our galaxy alone. People have long thought a civilization like ours must exist or should have existed somewhere out there, but British astronomer Stephen Webb sees another possibility: we’re alone. Thinkers have speculated about all the barriers that a planet would need to clear to house an alien civilization: it would need to be habitable; life would have to develop there; such life forms would need a certain technological intelligence to reach out; and they’d have to be able to communicate across space. Rather than viewing the situation with sorrow and the cosmos as a lonely place, “the silence of the universe is shouting: we’re the creatures who got lucky,” says Webb. One cosmic visitor we just recently met can confirm something else is definitely out there — ‘Oumuamua, the first known interstellar object to pass through the Solar System.
University of Hawaii astrobiologist Karen Meech introduces us to the mysterious object, which she says is a package from the nearest star system 4.4 light years away, having traveled on a journey of more than 50,000 years. She believes it could be a chunk of rocky debris from a new star system; other researchers believe it may be something else altogether — evidence of extraterrestrial civilizations, or material cast off in the death throes of a star. “This unexpected gift has generated more questions than answers,” says Meech, “but we were the first to say hello to this visitor from our distant past.”

Penny Chisholm explains how an ancient, ocean-dwelling cyanobacterium — Prochlorococcus — could inspire us to break our dependency on fossil fuels. (Photo: Bret Hartman / TED)

Ocean explorers. Prochlorococcus is an ancient ocean-dwelling cyanobacterium that Penny Chisholm, a biological oceanographer at MIT, discovered in the mid-1980s. It’s the most abundant photosynthetic cell on the planet, and Chisholm believes that it could hold clues for sustainable energy in its genetic architecture. With a gene pool four times the size of the human genome packed into a cell 1/100th the width of a human hair, this engineering masterpiece might inspire solutions to break our dependency on fossil fuels.

If we hope to unlock the wonders of Prochlorococcus in the Age of Amazement, we’re going to need to protect the world’s waters first. Enric Sala, a marine ecologist and National Geographic Explorer-in-Residence, proposes the creation of a giant high seas reserve. Falling outside of any single country’s jurisdiction, the high seas are the “Wild West” of the ocean, and until recently it was difficult to know who was fishing (and how much). Satellite technology and machine learning now enable the tracking of boats and revenue, revealing that practically the entire high seas fishing proposition is misguided.
In response, Sala advocates for creating a reserve that would include two-thirds of the ocean, protecting the ecological, economic and social benefits of our waters.

Bridges reveal something about creativity, ingenuity; they even hint at our identity, says engineer Ian Firth. (Photo: Bret Hartman / TED)

How we’re shaping (and reshaping) the built environment. TED is known for its fair share of tech wizardry, where innovation happens at the scale of microns. But our built environment is in need of some love in the Age of Amazement as well. Architect and Columbia University professor Vishaan Chakrabarti highlights the creeping sameness in many urban buildings and streetscapes throughout the world. This physical homogeneity — stemming from regulations, automobiles, accessibility and safety issues, and cost considerations, among other factors — has resulted in a social and mental one as well. Let’s strive to create cities of difference, magnetic places that embody an area’s cultural and local proclivities, exhorts Chakrabarti.

One great way to beautify a city: an elegant, distinctive bridge! Ian Firth, an engineer who has designed spans all over the world, including the 3.3 kilometer-long suspension bridge over the Messina Strait in Italy, talks about the connectivity that makes them special pieces of infrastructure. “They reveal something about creativity, ingenuity; they even hint at our identity,” he explains. Although they fall into only a few types, depending on the nature of their structural support, bridges hold great potential for innovation, and the variety is tremendous. “Bridges need to be elegant, they need to be beautiful,” Firth says.

Angel Hsu shows us that real change is afoot in China, as the country’s energy initiatives have unexpectedly placed it at the vanguard of the fight against pollution and climate change. (Photo: Bret Hartman / TED)

Pollution problems — and solutions.
Iconic images of skylines buried in clouds of smog ensured China’s notoriety as one of the world’s biggest polluters. But Angel Hsu shows us that real change is afoot in China, as the country’s energy initiatives have unexpectedly placed it at the vanguard of the fight against pollution and climate change. In 2011, when Hsu began conducting her research, China’s own environmental data — specifically for fine particulate matter, or PM2.5 — was kept secret. But thanks to citizen activism, pollution’s hazardous impacts on human health skyrocketed into China’s consciousness. The emerging zeitgeist grabbed the government’s attention. Recognizing the country’s toxic reliance on fossil fuels, the government pulled the plug on more than 300,000 coal plants, and began feverishly developing alternative energy. Although China must still address its coal problem abroad, its efforts at home (although uncertain) could impact global pollution — and China’s massive carbon footprint — in a major way.

While cutting down on pollution is good, removing harmful greenhouse gases from the atmosphere would be even better. The concentration of CO2 in today’s atmosphere is a staggering 400 ppm, but we’re still not cutting emissions as fast as we need to, according to chemical engineer Jennifer Wilcox. So we need to pull CO2 back out of the atmosphere — a strategy known as negative emissions. The technology to do this already exists: a device known as an air contactor uses CO2-grabbing chemicals in solid materials or dissolved in water to pull the gas out of the air, sort of like a synthetic forest. What makes this process tricky, though, is that it’s energy-intensive, which drives costs up or, depending on the type of energy used, ends up emitting more CO2 than is captured.
Several companies are working on making the process more cost-effective using a variety of techniques, as well as solving other problems of carbon capture, like how and where we should build these “synthetic forests.” And in a truly mind-blowing talk, applied engineer Aaswath Raman explains how the next great renewable resource might be … the cold of space. “What keeps me up at night is that the energy use for cooling is expected to grow six-fold by the year 2050,” he says. “The warmer the world gets, the more we are going to need cooling systems.” He’s exploring a potential solution that leverages a cool fact about infrared light and deep space.

Untraditional storytellers. Three TED speakers evoked storytelling in their talks — perhaps that’s not so unexpected, but what was surprising was there wasn’t a writer, musician or filmmaker among them. Game designer David Cage entreated the audience to think of videogames as more than pixelated shooting ranges or mindless time-fillers. “I’ve always been fascinated with the idea of recreating the notion of choice in fiction,” he says. “My dream is to put the audience in the shoes of the protagonist, let them make their own decisions, and by doing so, let them tell their own stories.” While playing, gamers also get the chance to enjoy two tremendously liberating qualities not usually found when reading a book: personal autonomy and flexibility.

Veteran architect and Pritzker winner Renzo Piano — who is responsible for such indelible buildings as Paris’s Centre Georges Pompidou, the New York Times building, and London’s The Shard — took audience members through his life’s work and his thinking. He too views himself as a storyteller. But while Cage concentrated on the narrative aspects of that role, Piano extolled the love, happiness and other emotional reactions that beautiful structures evoke in all of us. And the most surprising of today’s storytellers?
Your fingerprints — or, more specifically, the molecules in your fingerprints, according to analytical chemist Simona Francese: “Molecules are the storytellers of who we are and what we’ve been up to. We just need to have the right technology to make them talk.” Francese and a team at her lab at the UK’s Sheffield Hallam University have spent nearly a decade perfecting the process to identify as many as 1,000 molecules in a single fingerprint — and this technology is now being used by the police in Europe to catch criminals.

Nora Atkinson invites us on a trip to Burning Man, to see the wonderful art that’s constructed and burned — and never sold — there each year. (Photo: Lawrence Sumulong / TED)

Love, actually. In 2017, Smithsonian American Art Museum craft curator Nora Atkinson went to Burning Man in Nevada for the first time, and what she found was an artistic experience like no other. Every August, more than 70,000 people trek to the desert and engage with 300+ installations, sculptures and structures. None of the pieces are for sale (all are burned or taken away at week’s end), and anyone can make art. As a result, creativity there is driven by passion, not profit. Burning Man art is “authentic and optimistic in a way we rarely see anywhere else,” and it encourages, even demands, interaction and investigation. “What is art for,” Atkinson asks, “if not this?”

Love was also on the mind of speaker Kai-Fu Lee. The longtime technology investor and executive admits love was absent from his career-minded trajectory until he was diagnosed with cancer (but shares he is now in remission). He feels it’s been similarly overlooked in discussions about technology and the future. “Love is what differentiates us from AI,” he says.
“Despite what science fiction movies may portray, I can tell you responsibly that AI programs cannot love.” Lee urges people to think of how human love, compassion and technological brilliance can co-exist and help us create better, more connected lives.

Musician Luke Sital-Singh brought down the house — and made TED curator Chris Anderson wipe his eyes — by playing and singing a beautiful composition called “Killing Me.” He wrote the song from the point of view of his grandmother, who has had to figure out how to live without her soulmate, Sital-Singh’s late grandfather, even as she experiences the joys of her family and their new members and accomplishments. He sang, “Oh you won’t believe, the wonders I can see/This world is changing me, but I’ll love you faithfully.”

And while soft-spoken climber Alex Honnold, the day’s final speaker, didn’t use the L-word, it came through loud and clear as he talked about his record-setting, free-solo climb of El Capitan in 2017. He spoke about his intense mental preparation for the feat — he took months to memorize every handhold and foot placement, so the climb would come naturally and automatically to him. Of that day, he recalled, “With six hundred feet to go, it felt like the mountain was offering me a victory lap. I climbed with a smooth precision and enjoyed the sounds of the birds swooping around the cliff. It all felt like a celebration.”

Workshops aplenty. TEDsters had 19 workshops to choose from on day 3. Adam Savage had attendees create armor helmets out of laser-cut corrugated cardboard. Angelica Dass guided attendees through painting self portraits, asking people to revisit their childhood art class — specifically the moment they learned about the connections between the colors of their skin and their race. And OK Go led attendees to build an orchestra out of random objects around the room … like suitcases, wine bottles, cans and PVC pipes.
TED — Altair at TED2018: In the “Age of Amazement,” simulation drives innovation

Altair’s exhibit gallery at TED2018 features a vintage car with 3D-printed insides, a helmet designed to reduce football-related head injuries and a Wilson golf driver challenge, among much more. (Photo: Jason Redmond / TED)

In a corner of the Vancouver Convention Center — set against a beautiful backdrop of Vancouver Harbour and the mountains of the North Shore, and right between a comfy simulcast lounge and a pop-up coffee and espresso shop — it’s hard to miss an eye-catching vintage red car. It’s the anchor of Altair’s exhibit gallery, showing off the possibilities of simulation-driven innovation.

Altair is a leading provider of enterprise-class engineering software enabling innovation from concept design to operation. Their simulation-driven approach is powered by a suite of software that optimizes performance while providing data analytics and true-to-life visualization and rendering. Altair products range from biomimicry software that unlocks the potential of industrial 3D-printing to personalized healthcare with machine learning enabled by the Internet of Things. At TED2018, they invited TEDsters to explore the intersection of human creativity and technology — and the extraordinary impact it has on shaping the world around us.

On display at their gallery: an IoT-enabled bodysuit from BioSerenity that records seizures to help diagnose epilepsy; a helmet designed to reduce football-related head injuries, created in partnership with VICIS, which is set to be used by Notre Dame in NCAA games this coming season; an advanced arm prosthetic … and a vintage car made up of a vintage frame with aluminum 3D-printed insides, created by Altair, APWORKS, csi entwicklungstechnik, EOS, GERG and Heraeus. Altair is also hosting an interactive design experience where attendees can use their simulation software to design a custom Wilson golf driver.
The person with the leading design at the end of TED2018 — the one that hits the ball furthest (and yes, thanks to machine learning and Altair HyperWorks’ Virtual Wind Tunnel, there is a right answer to this) — will receive a golf driver as a prize. In the “Age of Amazement” — TED’s theme in 2018 — simulation and machine learning will drive innovation.

TED — In Case You Missed It: Bold visions for humanity at day 4 of TED2018

Three sessions of memorable TED Talks covering life, death and the future of humanity made the penultimate day of TED2018 a remarkable space for tech breakthroughs and dispatches from the edges of culture. Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

The future built on genetic code. DNA is built on four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the four letters of the genetic alphabet are not all that unique. He and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials.

And maybe soon, we’ll be able to use that expanded DNA alphabet to teleport. That’s right, you read it here first: teleportation is real. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit the most fundamental parts of who we are: our DNA.
It’s called biological teleportation, and the idea is that biological entities including viruses and living cells can be reconstructed in a distant location if we can read and write the sequence of that DNA code. The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines. “If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. (Photo: Jason Redmond / TED) Dispatches from the fight against hate online. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. In 2016, Green collaborated with Moonshot CVE to pilot a new approach, the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups, and used what they learned to create targeted advertising aimed at people susceptible to ISIS’s recruiting — and counter those messages. In English and Arabic, the eight-week pilot program reached more than 300,000 people. “If technology has any hope of overcoming today’s challenges,” Green says, “we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” Dylan Marron is taking a different approach to the problem of hate on the internet. His video series, such as “Sitting in Bathrooms With Trans People,” have racked up millions of views, and they’ve also sent a slew of internet poison in his direction. 
He developed a coping mechanism: he calls up the people who leave hateful remarks, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace, he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years (he’s now just 18) he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — a machine can try every possible solution, even ones too absurd for a human to imagine, until it finds the thing that works best to solve a single discrete problem. Which really isn’t general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives. Picking up on the thread of pitfalls of current AI, artist and technology critic James Bridle describes how automated copycats on YouTube mimic trusted videos by using algorithmic tricks to create “fake news” for kids. End result: children exploring YouTube videos from their favorite cartoon characters are sent down autoplaying rabbit holes, where they can find eerie, disturbing videos filled with very real violence and very real trauma. 
Algorithms are touted as the fix, but as Bridle says, machine learning is really just what we call software that does things we don’t understand … and we have enough of that already, no? Chetna Gala Sinha tells us about a bank in India that meets the needs of rural poor women who want to save and borrow. (Photo: Jason Redmond / TED) Listen and learn. Tamekia MizLadi Smith spoke up for the front-desk staffer, the checkout clerk, and everyone who’s ever been told they need to start collecting information from customers, whether it be an email, zip code or data about their race and gender. Smith makes the case to empower every front desk employee who collects data — by telling them exactly how that data will be used. Chetna Gala Sinha, meanwhile, started a bank in India that meets the needs of rural poor women who want to save and borrow — and whom traditional banks would not touch. How does the bank improve their service? As Chetna says: simply by listening. Meanwhile, sex educator Emily Nagoski talked about a syndrome called arousal nonconcordance, where what your body seems to want runs counter to what you actually want. In an intimate situation, ahem, it can be hard to figure out which one to listen to, head or body. Nagoski gives us full permission and encouragement to listen to your head, and to the words coming out of the mouth of your partner. And Harvard Business School prof Frances Frei gave a crash course in trust — building it, keeping it, and the hardest, rebuilding it. She shares lessons from her stint as an embed at Uber, where, far from listening in meetings, staffers would actually text each other during meetings — about the meeting. True listening, the kind that builds trust, starts with putting away your phone. Bionic man Hugh Herr envisions humanity soaring out of the 21st century. (Photo: Ryan Lash / TED) A new way to heal our bodies … and build new ones. 
Optical engineer Mary Lou Jepsen shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it and doesn’t let it pass through. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. MIT professor Hugh Herr is working on a different way to heal — and augment — our bodies. He’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it neural embodied design, a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend who lost a foot in a climbing accident. Using the agonist-antagonist myoneural interface, or AMI, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. What might be next? Maybe, the ability to fly. Announcements! Back in 2014, space scientist Will Marshall introduced us to his company, Planet, and their proposed fleet of tiny satellites. The goal: to image the planet every day, showing us how Earth changes in near-real time. 
In 2018, that vision has come good: every day, a fleet of about 200 small satellites pictures every inch of the planet, taking 1.5 million 29-megapixel images (about 6T of data) daily, gathering data on changes both natural and human-made. This week at TED, Marshall announced a consumer version of Planet, called Planet Stories, to let ordinary people play with these images. Start playing now here. Another announcement comes from futurist Ray Kurzweil: a new way to query the text inside books using something called semantic search — which is a search on ideas and concepts, rather than specific words. Called TalkToBooks, the beta-stage product uses an experimental AI to query a database of 120,000 books in about half a second. (As Kurzweil jokes: “It takes me hours to read a hundred thousand books.”) Jump in and play with TalkToBooks here. Also announced today: “TED Talks India: Nayi Soch” — the wildly popular Hindi-language TV series, created in partnership with StarTV and hosted by Shah Rukh Khan — will be back for three more seasons. Cryptogram — The DMCA and its Chilling Effects on Research The Center for Democracy and Technology has a good summary of the current state of the DMCA's chilling effects on security research. To underline the nature of chilling effects on hacking and security research, CDT has worked to describe how tinkerers, hackers, and security researchers of all types both contribute to a baseline level of security in our digital environment and, in turn, are shaped themselves by this environment, most notably when things they do upset others and result in threats, potential lawsuits, and prosecution. We've published two reports (sponsored by the Hewlett Foundation and MacArthur Foundation) about needed reforms to the law and the myriad of ways that security research directly improves people's lives. 
To get a more complete picture, we wanted to talk to security researchers themselves and gauge the forces that shape their work; essentially, we wanted to "take the pulse" of the security research community. Today, we are releasing a third report in service of this effort: "Taking the Pulse of Hacking: A Risk Basis for Security Research." We report findings after having interviewed a set of 20 security researchers and hackers -- half academic and half non-academic -- about what considerations they take into account when starting new projects or engaging in new work, as well as to what extent they or their colleagues have faced threats in the past that chilled their work. The results in our report show that a wide variety of constraints shape the work they do, from technical constraints to ethical boundaries to legal concerns, including the DMCA and especially the CFAA. Note: I am a signatory on the letter supporting unrestricted security research. Worse Than Failure — CodeSOD: All the Things! Yasmin needed to fetch some data from a database for a report. Specifically, she needed to get all the order data. All of it. No matter how much there was. The required query might be long running, but it wouldn’t be complicated. By policy, every query needed to be implemented as a stored procedure. Yasmin, being a smart programmer, decided to check and see if anybody had already implemented a stored procedure which did what she needed. She found one called GetAllOrders. Perfect! She tested it in her report. Yasmin expected 250,000 rows. She got 10. She checked the implementation:

CREATE PROCEDURE initech.GetAllOrders
AS
BEGIN
    SELECT TOP 10 orderId, orderNo, orderCode, …
    FROM initech.orders
    INNER JOIN …
END

In the original developer’s defense, at one point, when the company was very young, that might have returned all of the orders. And no, I didn’t elide the ORDER BY. There wasn’t one. 
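The bug pattern is easy to reproduce. Here is a minimal sketch in Python with sqlite3, using a hypothetical orders table standing in for initech.orders (SQLite's LIMIT plays the role of T-SQL's TOP):

```python
import sqlite3

# Hypothetical stand-in for the initech.orders table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (orderId INTEGER PRIMARY KEY, orderNo TEXT)")
conn.executemany(
    "INSERT INTO orders (orderNo) VALUES (?)",
    [(f"ORD-{i:04d}",) for i in range(250)],
)

# The misleadingly named query: a hard cap of 10 rows,
# no matter how many orders exist.
capped = conn.execute("SELECT * FROM orders LIMIT 10").fetchall()

# What a procedure called GetAllOrders should actually do: no cap.
all_rows = conn.execute("SELECT * FROM orders").fetchall()

print(len(capped), len(all_rows))  # prints: 10 250
```

The fix, of course, is simply to drop the row cap (and probably rename one of the two queries so the name stops lying).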
TED — Personally speaking: Notes from Session 10 of TED2018 What does an illustrator’s life look like? Well, says Christoph Niemann, most of the time: this. He spoke at TED2018 on April 13, 2018, in Vancouver. Photo: Jason Redmond / TED Sketches that speak volumes. When illustrator Christoph Niemann wakes up after falling asleep on an airplane, he says, “I have the most terrible taste in my mouth that cannot be described with words … But it can be drawn.” Then he shows a spot-on sketch of an outstretched tongue with a dead-fish-rat-hybrid creature on it. Trying to recap his intensely visual talk in words resembles his struggle, because this talk speaks largely through witty, whimsical drawings. Niemann believes all people are bilingual, “fluent in the language of reading images,” and most of our fluency comes organically. For example, while you might remember learning to read the words “men” and “women,” can you recall anyone explaining to you what the symbols on the doors of the bathroom meant? You just figured it out. People share a rich and common visual vocabulary, so Niemann likes to take “images from remote cultural areas and bring them together” — hence his putting the words “ceci n’est pas une pipe” in cursive above white iPhone earbuds. Using this collective lexicon, Niemann and other artists can communicate information, satirize people and ideas, express empathy, and make us laugh — all without words. In that way, he says, as deft as his drawings are, they’d be nothing without an audience. 
He says, “The real magic happens in the mind of the viewer.” “Once your appliances can talk, who else will they talk to?” Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. To do this, Kashmir turned her San Francisco apartment into a full-fledged smart home, loading up on 18 different internet-connected appliances — including a “smart bed” that calculated her nightly “sleep score” to let her know if she was well-rested or not. Her colleague Surya built a special router to figure out how often the devices connected, who they were transmitting to, what they were transmitting — and what of that data could be sold. The results were surprising — and a little creepy. By poring over Kashmir’s family’s data, Surya could decipher their sleep schedules, TV binges, and tooth-brushing habits. And while many appliances connected only for updates, the Amazon Echo connected shockingly often — every three minutes. All of this data can tell companies how rich or poor you are, whether or not you’re an insurance risk, and — perhaps worst of all — the state of your sex life. (A digital vibrator company was caught “data-mining their customers’ orgasms.”) All this may lead you to ask, as Surya does, “Who is the true beneficiary of your smart home? You, or the company mining you?” Embrace the diversity within. Rebeca Hwang has spent a lifetime juggling identities (Korean heritage, Argentinian upbringing, educated in the United States), and for a long time she had difficulty finding a place in the world to call home. Then, one day, she had a pivotal realization: it was fruitless to search for total commonality with the people around her. Instead, she decided, she would embrace all the possible versions of herself — and the superpower it grants her to make connections with all kinds of people. 
Through thoughtful reinvention of her personhood, Hwang rid herself of constant anxiety, by “cultivating diversity within me and not just around me.” In the wake of her personal revolution, she’s continued to live a multifaceted life and accept the endless advantages it brings. She hopes to raise her children, who are already growing up with a unique combination of backgrounds, to help create a world where identities are used not to alienate others but to bring people together. Life after loss. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal (a best-selling children’s book author), wrote about their lives in a New York Times article read by millions of people. “You May Want to Marry My Husband” was a meditation on dying, disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public, and gave him an empty page on which he could inscribe the rest of his life. For Jason, “The key to my being able to persevere is Amy’s express and very public edict that I must go on.” But grief carries memories, especially of the process of dying itself. Amy chose home hospice, which gave her happiness — but Jason is honest about the complications it caused for the survivors, including the inevitable, indelible memory of when Jason carried a lifeless Amy “down our stairs, through our living and our dining room, to a waiting gurney to have her body cremated.” Jason’s salvation lay in Amy’s challenge to begin anew, which he shares with others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?” An emotional reset. Many of us in the audience knew Amy Krouse Rosenthal, who had a key role in planning our TED Active conference; session co-host Juliet Blake asks for the house lights to dim for a moment to create some space for quiet reflection. 
Then the extraordinary violinist Lili Haydn steps onstage for a welcome musical interlude. Unaccompanied by a band, she performs an emotional and elegant piece of her original music, called “The Last Serenade.” Can we help every employee be GRACED? Over the years, poet and trainer Tamekia MizLadi Smith has met her share of Miz Margarets — the longtime front-desk employee at a medical office who knows her job perfectly well and doesn’t take kindly to change. So when new rules for data collection come down from the top, and suddenly she needs to ask every patient to self-identify by gender (with 6 options!), by race (with even more options!) and nationality (with even more options still!!), it’s no wonder that Miz Margaret starts thinking about early retirement. But what if she knew that this data would be used to help her patients, not to stereotype them — to help the office speak more respectfully to people of all genders, to get research funding for under-served groups? Smith shares an acrostic poem on the letters G.R.A.C.E.D. that will inspire bosses, trainers and data collectors to think carefully about the front-line employees who’ll be asking for this data. Bottom line: Always let people know that (and how) their work matters. A bank that helps women empower themselves. A few years after social entrepreneur Chetna Gala Sinha moved from Mumbai to a remote village in Maharashtra, India, she met a woman named Kantabai. She was a blacksmith who wanted to open a bank account to save her hard-earned money, but when Sinha accompanied her to the bank, she was turned down because they didn’t think her small savings rate was worth their effort or time. Sinha decided that if the bank wouldn’t open an account for poor women like Kantabai, she would start one that would – and the Mann Deshi Bank was born. Today, it has 100,000 accounts and has done over $20 million in business. 
Over the years, her women customers have consistently pushed Sinha to come up with better solutions to their needs, teaching her one of the biggest lessons she’s learned: “Never provide poor solutions to poor people.” She shares the stories of Kerabai, Sunita, and Sarita – other women like Kantabai who’ve inspired her over the years and profoundly influenced her work. “There are millions of women like Sarita, Kerabai, Sunita, who can be around you also, they can be all over the world, but at first glance, you may think that they do not have anything to say, they do not have anything to share. You would be so wrong,” she says. Encouraged by the women she works with, Sinha is now in the midst of creating the first fund for women micro entrepreneurs in India, and the first Small Finance Bank for women in the world.

Paging through the Chess Records catalog. “You can’t do Chuck Berry better than Chuck, or Fontella Bass better than Fontella,” says Elise LeGrow, but on her latest record, Playing Chess, the Canadian singer pays homage to these greats (and the American label Chess Records that produced them) with intimate, pared-down interpretations of their hits. On the TED stage, she and her band performed Chuck Berry’s “You Never Can Tell,” “Over the Mountain,” first popularized by Johnnie and Joe, and a slinky cover of Fontella Bass’s sensational “Rescue Me.”

Truth comes from the collision of ideas. Legendary artistic director Oskar Eustis closes session 10 with a beautiful message about the place of theater in modern (and ancient) life. Theater and democracy were born together in Athens in the late sixth century BCE, when the idea that power should stem from the consent of the governed — from below to above, not the other way around — was reshaping the world. At the same time, people were exploring how the truth can best emerge from the conflict between two points of view. Through dialogue, empathy with characters and the experience of watching a performance together with others in the audience, the theater and democracy become parts of a whole. Fast-forward 2,500 years to when Joseph Papp founded The Public Theater. Papp wanted everyone in America to be able to experience theater — so he created free Shakespeare in the Park, based on the idea that the best art that we can produce should be available for everybody. Over the next decades, The Public brought art to the people with plays like The Normal Heart, A Chorus Line and Angels in America, among many others. In 2005, when Eustis took over artistic direction, he took Shakespeare in the Park on the road, bringing theater to the people and making it about them. With Hamilton, Lin-Manuel Miranda tapped into this idea of art for the people as well. “What Lin was doing is exactly what Shakespeare was doing — he was taking the language of the people, elevating it into verse and by doing so ennobling the language and ennobling the people who spoke the language.” But we need to go a step further on this form of inclusion, Eustis says, outlining his plan to reach (and listen to) people in places across the United States where the theater, like so many other institutions, has turned its back — like the de-industrialized Rust Belt. 
“Our job is to try to hold up a vision to America that shows not only who all of us are individually but that welds us back into the commonality that we need to be,” Eustis says.