Planet Russell


Planet Debian - Vincent Sanders: You can't make a silk purse from a sow's ear

Pile of network switches
I needed a small Ethernet network switch in my office, so I went to my pile of devices and selected an old Dell PowerConnect 2724 from the stack. This seemed the best candidate, as the others were intended for data centre use and known to be very noisy.

I installed it into place and immediately ran into a problem: the switch was not quiet enough. In fact, I could not concentrate at all with it turned on.

Graph of quiet office sound pressure
Believing I could not fix what I could not measure, I downloaded a phone app that measures raw sound pressure. This would allow me to examine empirically what effect any changes to the switch made.

The app is not calibrated, so it can only be used to examine relative changes, which means a reference level is required. I took a reading in the office with the switch turned off but all other equipment operating to obtain a baseline measurement.

All measurements were made with the switch and phone in the same positions, about a metre apart. The resulting yellow curves are the average over a thirty-second sample period, with the peak values in red.

The peak between 50Hz and 500Hz initially surprised me, but after researching how humans perceive sound it appears we must apply the equal-loudness contour to correct the measurement.
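As a rough illustration of that correction (a sketch I am adding here, not part of the original analysis), the standard IEC 61672 A-weighting curve, which approximates the equal-loudness correction, can be computed directly:

```python
import math

def a_weighting_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC 61672 formula."""
    f2 = f * f
    ra = (12194.0 ** 2 * f2 * f2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    # The +2.0 offset normalises the curve so that A(1 kHz) is ~0 dB.
    return 20.0 * math.log10(ra) + 2.0
```

The correction is strongly negative at low frequencies (around -19 dB at 100Hz), which is why the 50Hz to 500Hz peak contributes less to perceived loudness than the raw measurement suggests.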

Graph of office sound pressure with switch turned on

With this in mind we can concentrate on the data between 200Hz and 6000Hz as the part of the frequency spectrum with the most impact. So in the reference sample we can see that the sound pressure is around the -105dB level.

I turned the switch on and performed a second measurement, which showed a level around -75dB with peaks at -50dB. This is a difference of some 30dB; if we assume our reference is a "calm room" at 25dB(SPL), then the switch is raising the ambient noise to a level similar to a "normal conversation" at 55dB(SPL).
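The arithmetic here is just an offset against a single assumed calibration point; a minimal sketch (the reference values are the ones assumed above):

```python
def estimate_spl(relative_db, reference_relative_db=-105.0, reference_spl=25.0):
    """Turn an uncalibrated relative reading into an absolute SPL estimate,
    given one known reference point (the quiet office at ~25 dB(SPL))."""
    return reference_spl + (relative_db - reference_relative_db)

# The switched-on average of -75dB then maps to roughly 55 dB(SPL).
```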

Something had to be done if I were to keep using this device so I opened the switch to examine the possible sources of noise.

Dell PowerConnect 2724 with replacement Noctua fan
There was a single 40x40x20mm 5V high-capacity Sunon-brand fan in the rear of the unit. I unplugged the fan and the noise level immediately returned to ambient, indicating that all the noise was being produced by this single device. Unfortunately, the switch soon overheated without the cooling fan operating.

I thought the fan might be defective, so I purchased a high-quality "quiet" NF-A4x20 replacement from Noctua. The fan has rubber mounting fixings to further reduce noise, and I was hopeful this would solve the issue.

Graph of office sound pressure with modified switch turned on
The initial results were promising, with noise above 2000Hz largely eliminated. However, the design of the switch enclosure caused the airflow itself to make sound, producing a level around 40dB(SPL) between 200Hz and 2000Hz.

I had the switch in service in this configuration for several weeks, but eventually the device proved impractical on several points:

  • The management interface was dreadful to use.
  • The network performance was not very good especially in trunk mode.
  • The lower frequency noise became a distraction for me in an otherwise quiet office.

In the end I purchased an 8-port Zyxel switch, which is passively cooled, silent in operation, and has none of the other drawbacks.

From this experience I have learned some things:

  • Higher frequency noise (2000Hz and above) is much more difficult to ignore than other types of noise.
  • As I have become older my tolerance for equipment noise has decreased and it actively affects my concentration levels.
  • Some equipment has a design which means its audio performance cannot be improved sufficiently.
  • Measuring and interpreting noise sources is quite difficult.

Planet Debian - Michal Čihař: Weblate 3.0

Weblate 3.0 has been released today. It contains a brand new access control module and 61 fixed issues.

Full list of changes:

  • Rewritten access control.
  • Several code cleanups that lead to moved and renamed modules.
  • New addon for automatic component discovery.
  • The import_project management command now has slightly different parameters.
  • Added basic support for Windows RC files.
  • New addon to store contributor names in PO file headers.
  • The per component hook scripts are removed, use addons instead.
  • Add support for collecting contributor agreements.
  • Access control changes are now tracked in history.
  • New addon to ensure all components in a project have the same translations.
  • Support for more variables in commit message templates.
  • Add support for providing additional textual context.

If you are upgrading from an older version, please follow our upgrading instructions; the upgrade is more complex this time.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English phpMyAdmin SUSE Weblate

Worse Than Failure - Error'd: I Beg Your Entschuldigung?

"Delta does not seem to be so sure of what language to address me in," writes Pat.

 

"I'm wondering if the person writing the release notes made that typo when their mind was...ahem...somewhere else?" writes Pieter V.

 

Brad W. wrote, "For having 'Caterpillar,' 'Revolver,' and 'Steel Toe' in the description the shoe seems a bit wimpy...maybe the wearer is expected to have an actual steel toe?"

 

"Tomato...tomahto...potato...potahto...GDPR...GPRD...all the same thing. Right?" writes Paul K.

 

"Apparently installing Ubuntu 18.04 on your laptop comes with free increase of battery capacity by almost 40x! Now that's what I call FREE software!" Jordan D. wrote.

 

Ian O. writes, "I don't know why Putin cares about the NE-2 Democratic primary, but I'm sure he added those eight extra precincts for a good reason."

 


Planet Debian - bisco: Second GSoC Report

A lot has happened since the last report. The main change in nacho was probably the move to integrate django-ldapdb. This abstracts a lot of operations one would otherwise have to perform on the directory using bare ldap, and it also makes it possible to expose the LDAP objects in the Django admin interface, as they are addressed as Django models. By using django-ldapdb I was able to remove around 90% of the self-written ldap logic. The only remaining functionality where I have to use the ldap library directly is the password operations. It would be possible to implement these features with django-ldapdb, but then I would have to integrate password hashing functionality into nacho and, above all, I would have to adjust the hashing function for every ldap server with a different hashing algorithm setting. This way the ldap server does the hashing and I won't have to set the algorithm in two places.
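For illustration, a django-ldapdb model maps an LDAP object class onto the Django ORM; this sketch is hypothetical (the actual nacho models, base DN and attribute set will differ):

```python
import ldapdb.models
from ldapdb.models import fields

class LdapUser(ldapdb.models.Model):
    """Hypothetical user entry, managed like any other Django model."""
    base_dn = "ou=users,dc=example,dc=org"
    object_classes = ["inetOrgPerson"]

    uid = fields.CharField(db_column="uid", primary_key=True, max_length=200)
    cn = fields.CharField(db_column="cn")
    mail = fields.CharField(db_column="mail")
```

Registering such a model with the Django admin is what gives the "LDAP objects in the admin interface" behaviour described above.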

This led to the next feature I implemented: the password reset functionality. It works as known from most other sites: one enters a username and gets an email with a password reset link. Related to this is the modification of the mail attribute: I wasn't sure if the email address should be changeable right away or if a new address should be confirmed with a token sent by mail. We talked about this during our last mentors-student meeting, and both formorer and babelouest said it would be good to have a confirmation for email addresses. So that was another feature I implemented.

Two more attributes that weren't part of nacho up until now were SSH keys and a profile image. The SSH keys in particular led to a redesign of the profile page, because there can be multiple SSH keys. So I changed the profile container to be a Bootstrap card, and the individual areas are tabs in this card:

Screenshot of the profile page

For the image I had to create a special upload form that saves the bytestream of the file directly to ldap, which stores it as base64-encoded data. The display of the jpegPhoto field is then done via

<img src="data:image/jpeg;base64,...">

This way we don’t have to store the image files on the server at all.
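A minimal sketch of that conversion (assuming, per the attribute name, that the stored bytes are JPEG data; the function name is made up for illustration):

```python
import base64

def jpeg_photo_to_data_uri(raw_bytes):
    """Build an inline data URI from the raw bytes stored in the LDAP
    jpegPhoto attribute (assumed to be JPEG data)."""
    encoded = base64.b64encode(raw_bytes).decode("ascii")
    return "data:image/jpeg;base64," + encoded
```

The resulting string can be dropped straight into the src attribute of the img tag, so no image file ever touches the server's filesystem.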

A short note about the SSH key schema

We are using this openssh-ldap schema. To include the schema in the slapd installation, it has to be converted to an ldif file. For that I had to create a temporary file, let's call it schema_convert.conf, with the line

include /path/to/openssh-ldap.schema

Using

sudo slaptest -f schema_convert.conf -F /tmp/temporaryfolder

one gets a folder containing the ldif file in /tmp/temporaryfolder/cn=config/cn=schema/cn={0}openssh-ldap.ldif. This file has to be edited (remove the metadata) and can then be added to ldap using:

ldapadd -Y EXTERNAL -H ldapi:/// -f openssh-ldap.ldif

What else happened

Another big improvement is the admin site. Using django-ldapdb I have a model view on selected ldap tree areas and can manage them using the web interface. Using the group mapping feature of django-auth-ldap I was able to give management permissions to groups that are also stored in ldap.

I updated the nacho Debian package. Now that django-ldapdb is in testing, all the dependencies can be installed from Debian packages. I started to use the Salsa issue tracker for the issues, which makes it a lot easier to keep track of things to do. I took a whole day to start getting into unit tests and began writing some. On day two of the unit test experience I started using the GitLab continuous integration feature of Salsa, so now every commit is checked against the test suite. But there are only around 20 tests at the moment, and they only cover registration, login and password reset; I guess there are around 100 test cases for all the other stuff that I still have to write ;)

Planet Debian - Paul Wise: FLOSS Activities May 2018

Changes

Issues

Review

Administration

  • iotop: merge patch
  • Debian: buildd check, install package, redirect support, fix space in uid/gid, reboot lock workaround
  • Debian mentors: reboot for security updates
  • Debian wiki: whitelist email addresses
  • Openmoko: web server restart

Communication

Sponsors

The tesseract/purple-discord work, the bug reports for samba/git-lab/octotree/dh-make-golang and the AutomaticPackagingTools change were sponsored by my employer. All other work was done on a volunteer basis.


TED - Ebola and the future of vaccines: In conversation with Seth Berkley

At TED2015, Seth Berkley showed two Ebola vaccines under review at the time. One of these vaccines is now being deployed in the current Ebola outbreak in the DRC. Photo: Bret Hartman/TED

Dr. Seth Berkley is an epidemiologist and the CEO of Gavi, the Vaccine Alliance, a global health organization dedicated to improving access to vaccines in developing countries. When he last spoke at TED, in 2015, Seth showed the audience two experimental vaccines for Ebola — both of them in active testing at the time, as the world grappled with the deadly 2014–2016 outbreak. Just last week, one of these vaccines, the Merck rVSV-ZEBOV, was deployed in the Democratic Republic of the Congo to help slow the spread of a new Ebola outbreak in and around the city of Mbandaka. With more than 30 confirmed cases and a contact list of more than 600 people who may be at risk, the situation in the DRC is “on a knife edge,” according to the World Health Organization. Seth flew to the DRC to help launch the vaccine; now back in Geneva, he spoke to TED on the challenges of vaccine development and the stunning risks we are overlooking around global health epidemics.

This interview has been edited and condensed.

You were on the scene in Mbandaka; what were you working on there?

My role was to launch the vaccine — to make sure that this technology which wasn’t going to get made was made, and was made available in case there was another big emergency. And lo and behold, there it is. Obviously, given the emergency nature, a lot of the activity recently has been about how to accelerate the work and prepare the critical pieces that are going to be necessary to get this under control, and not have it spin out of control.

Health workers in the DRC prepare the first dose of the Ebola vaccine. Photo: Pascal Emmanuel Barollier/Gavi

This is the ninth outbreak in the DRC. They are more experienced [with Ebola] than any other country in the world, but the DRC is a massive country, and the people in Mbandaka, Bikoro and Iboko are in very isolated communities. The challenge right now is to set up the basic pillars of Ebola care — basic infection control procedures, making sure that you identify every case, that you create a line-list of cases, and that you identify the context that those cases have had. All of that is the prerequisite to vaccination.

The other thing you have to do is educate the population. They know vaccines — we vaccinate for all diseases in the DRC, as we do across most countries in Africa — but the challenge is, people know we do vaccine campaigns where everybody goes to a clinic and gets vaccinated, so the idea that somebody comes to your community, goes to a sick person’s house, and vaccinates just the people in that house and the surrounding family and friends is a concept that won’t make sense. The other important thing is, although the vaccine was 100% effective in the clinical trial … well, it’s 100% effective after 10 days, so people who were already incubating Ebola will still go on to develop the disease. If people don’t understand that, then they’re going to say the vaccine didn’t work and that the vaccine gave them Ebola.

The good news is, the logistics are set up. There is an air-bridge from Kinshasa, there are helicopters to go out to Bikoro, a cold chain for the vaccine is set up in Mbandaka and Bikoro, and there are these cool carriers that keep the vaccine cold so you can transport it out to vaccination campaigns in isolated areas. We have 16,000 doses there, with 300,000 doses total, and we can release more doses as it makes sense.

You mentioned the local communities — how do you navigate that intersection of medical necessity and the lack of education or misinformation? I read that some people are refusing medical treatment and are turning to local healers or churches, instead of getting vaccinated.

There is no treatment right now available in DRC; the hope is that some experimental treatments will come in. We don’t have the equivalent for the vaccines on the treatment side. It’s going to be very important to get those treatments because, without them, what you’re saying to people is: Leave your loved ones, go to an Ebola care facility and get isolated until you most likely die, and if you don’t die, you’ll be sick for a long time. Compare that to the normal process when you get hospitalized in the DRC, which is that your family will take care of you, feed you and provide nursing care. These are tough issues for people to understand even in the best of circumstances. In an ideal world, [health workers will] work with anthropologists and social scientists, but of course, it all has to be done in the local language by people who are trusted. It’s a matter of working to bring in workers from the DRC, religious leaders and elders to educate the community so that they understand what is happening, and can cooperate with the rather chaotic but rapid effort that needs to occur to get this under control.

We know now it’s in three different health zones; we don’t yet know whether cases are connected to other cases or if these are the correct numbers of cases. It could be twice or three or ten times as many. You don’t know until you begin to do the detective work of line-listing. In an ideal world, you know you’re getting where you need to get when 100% of new cases are from the contact list of previous cases; but if 50% or 30% or 80% of the cases are not connected to previous cases, then there are rings of transmission occurring that you haven’t yet identified. This is painstaking, careful detective work.

The EPI manager Dr. Guillaume Ngoie Mwamba is vaccinated in the DRC in response to the 2018 Ebola outbreak. Photo: Pascal Emmanuel Barollier/Gavi

What is different about this outbreak from the 2014 crisis? What will be the impact of this particular vaccine?

It’s the same strain, Ebola Zaire, just like in West Africa. The difference in West Africa is that they hadn’t seen Ebola before; they initially thought it was Lassa fever or cholera, so it took a long time for them to realize this was Ebola. As I said, the DRC has had nine outbreaks, so the government and health workers are familiar with the situation and were able to say, “Okay, we know this is Ebola, let’s call for help and bring people in.” For the vaccine campaign, they brought in a lot of the vaccinators who worked in Guinea and other countries to help do the vaccination work, because it’s an experimental vaccine under clinical trial protocols, so informed consent is required.

The impact of the vaccine is that once the line-listings are there — it was highly effective in Guinea — if this is an accelerating epidemic and you get good listing of cases, you can stop the epidemic with intervention. The other thing is that you don’t want health workers or others to say “Oh, I got the vaccine now, I don’t have to worry about it!” They still need to use full precautions, because although the vaccine was 100% effective in previous trials, the confidence interval given the size was between 78% and 100%.

In your TED Talk, you mentioned the inevitability of deadly viruses; that they will incubate, that they are an evolutionary reality. On a global level, what more can be done to anticipate epidemics, and how can we be more proactive?

I talked about the concept of prevention: How do you build vaccines for these diseases before they become real problems, and try to treat them like they’re a global health emergency before they become one? At last year’s Davos a new initiative was created, called CEPI (the Coalition for Epidemic Preparedness Innovations), that is working to develop new vaccines against agents that haven’t yet caused major epidemics but have caused small outbreaks, with an understanding that they could. The idea would be to make a risk assessment and leave the vaccines frozen, like they were with Ebola; you can’t do a human trial until you have an outbreak.

In 2015, at the TED Conference, Seth Berkley showed this outbreak map. During our conversation last week, he told us: “The last outbreak in 2014 was the first major outbreak. There had been 24 previous outbreaks, a handful of cases to a few hundred cases, but that was the first case that had gone in the tens of thousands. This vaccine was tried in the waning days of that outbreak, so we know what it looks like in an emergency situation.” Photo: Bret Hartman/TED

Now, the biggest threat of all — and I did a different TED talk on this — is global flu. We’re not prepared in case of a flu pandemic. A hundred years ago, the Spanish flu killed between 50 and 100 million people, and today in an interconnected world, it could be many, many times more than that. A billion people travel outside of their countries these days, and there are 66 million displaced people. I often have dinner in Nairobi, breakfast in London, and lunch in New York, and that’s within the incubation period of any of these infections. It’s a very different world now, and we really have to take that seriously. Flu is the worst one; the good thing about Ebola is that it’s not so easy to transmit, whereas the flu is really easy to transmit, as are many other infectious diseases.

It’s interesting to go back to the panic that existed with Ebola — there were only a few cases in the US but this was the “ISIS of diseases,” “the news story of the decade”. The challenge is, people get so worked up and there’s such fear, and then as soon as the epidemic goes away, they forget about it. I tried to raise money after that TED Talk, and people in general weren’t interested: “Oh, that’s yesterday’s disease.” We persevered and made sure in our agreement with Merck that they would produce those doses, even though these are not licensed doses — as soon as they get licensed, they’ll have to get rid of those doses and make more. This was a big commitment, but we said, “Can you imagine what would happen if we had a 100% efficacious vaccine and then an outbreak occurred and we didn’t have any doses of the vaccine?” It was a risky thing to do, but it was the right thing to do from a global risk perspective, and here we are in an outbreak. Maybe it’ll stay small, but right now in the DRC, we’re seeing new cases occurring every day. It’s a scary thing.

The idea that we can make a difference is exciting — we announced the Advance Purchase Commitment in January 2017, and it’s now about a year later and here we have it being used. And it’s amazing that Merck has put this much effort in. They’ve done great work and they deserve credit for this, because it’s not like they’re going to make any money out of this. If they break even, it’ll be lucky. They’re doing this because it’s important and because they can help. We need to bring together all of the groups who can help in these circumstances — it’s the dedication of all the people on the ground from the DRC, as well as international volunteers and agencies, that will provide the systems to get this epidemic under control. There’s a lot of heroes here.

The Wangata Hospital in Mbandaka. Photo: Pascal Emmanuel Barollier/Gavi

The financial aspect is interesting — with the scale and scope of a potential global health crisis like Ebola or the flu, once it’s too late, you wouldn’t even be thinking about the relatively small financial risk of creating a vaccine that could have kept us prepared. Even if there is an immediate financial risk, in the long term, it seems incomparable.

The costs of the last Ebola outbreak were huge. In those three countries, GDP went from positive to negative, health workers died, and it affected health work going forward, travel on the continent, the selling of commodities, etc. Even in the US, the cost of containing the few cases that were there was huge. Even if you’re a cynic and say, “I don’t care about the people, I’m only interested in a capitalistic view of the world”, these outbreaks are really expensive. The problem is there isn’t necessarily a direct link between that and getting products developed and having them stockpiled and ready to go.

The challenge is investing years ahead of time not knowing when a virus will occur or what the strain is going to be. That’s the same thing here with Ebola — we agreed to invest up to $390 million to create a stockpile, at a time when we didn’t have the money and when others weren’t interested. But if we didn’t have those doses, we’d be sitting here saying, “Well gee, shouldn’t we make some doses now?” — it takes a long time to produce the doses, to quality assure and check them, to fill and finish them, and to get them to the site. [It’s important to have] that be done by the world even when the financial incentives aren’t there.

In an interview with NPR’s TED Radio Hour, you mention the “paradox of prevention”, the idea that we seem to view health care with a treatment-centered approach, rather than prevention. With diseases that kill quickly and spread rapidly, we can’t have a solely treatment mindset; we have to think about preventing them from becoming epidemics.

That is right, but we can’t ignore the treatment too [and the context in which you give it]. Personalize it: If your mother gets sick, and you’re dedicated — you would give your life for your mother in that culture, family takes care of family — do you now ship your mother to a center that you’ve heard through the grapevine will lock her up and isolate her, where she will die alone, or do you hide her and pretend she has malaria or something else? But if a doctor can say, “There might be treatment that can save your mother’s life,” well, then you want to do that for her. It [helps create] the right mindset in the population, to know that people are trying to give the best treatment, that this isn’t hopeless.

How do you think that the current Ebola situation will affect the way that we approach vaccine development? The Advance Purchase Commitment was an instance of an industry innovation. How can we continue to create incentives for pharmaceutical companies to invest in long-term development of vaccines that don’t have an immediate or guaranteed market demand?

Every time we support industry with this type of public-private partnership, it increases confidence that vaccines will be bought and supported, and increases the likelihood of industry engagement for future projects. However, it is important to state that this will not be a highly profitable vaccine. There are opportunity costs associated with it, and risks. The commitment helps but doesn’t fully solve the problem. Using push mechanisms like the funding from BARDA, Wellcome Trust and others, or a mechanism like CEPI, also helps with the risk. In an ideal world, there would be more generous mechanisms to actively incentivize industry engagement. Also, by [offering] priority review vouchers, fast track designations and others, governments can put in really good incentives for these types of programs.

Outside of closely monitoring the DRC, what are the next steps in your work?

We just opened a window for typhoid vaccines. And this is perfect timing as we have just seen the first cluster of extreme antibiotic-resistant typhoid in Pakistan, with a case exported to the UK. Pakistan has already submitted an application for support, and the Gates Foundation has provided some doses in the interim. This is an example where prevention is way, way better than cure.

Planet Debian - Chris Lamb: Free software activities in May 2018

Here is my monthly update covering what I have been doing in the free software world during May 2018 (previous month):

Coding-wise, I:


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by ensuring identical results are generated from a given source. This allows multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

  • Fixed an issue in disorderfs (our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out issues) to ensure readdir(2) calls return consistent and unique inode numbers. (#898287)
  • Presented on our diffoscope "diff-on-steroids" tool, as well as provided an update on the Reproducible Builds effort at the MiniDebConf in Hamburg, Germany.
  • Filed reproducibility-related issues upstream for Fontconfig, tweeny, vcr.py and zstd, as well as authored two patches for GNU mtools to fix reproducibility-related toolchain issues. (#900409 & #900410)
  • Made extensive changes to our website, including overhauling and updating our growing list of talks.
  • Submitted three Debian-specific patches to fix reproducibility issues in telepathy-gabble, vitrage & weston.
  • I categorised a large number of packages and issues in the notes repository and worked on publishing our weekly reports. (#157, #158, #159 & #160)
  • Provided three improvements to our extensive testing infrastructure:
    • Correct the "notes" link URL. [...]
    • Move the package name to the beginning of the "status change" subject lines. [...]
    • Add a X-Reproducible-Builds-Source header to "status change" emails. [...]
  • I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:
    • Clarified the No file format specific differences found inside, yet data differs message. [...]
    • Don't append rather useless "(data)" suffix in the output. [...]
    • Made a number of PEP8-related fixups. (eg. [...], [...], [...], etc.)
  • Finally, I updated the diffoscope.org website, including moving it to a Jekyll-based instance [...], adding a progress bar animation [...], updating the list of supported formats [...], etc.


Debian

  • Made some team-wide changes to packages under the care of the Debian Python Modules Team (DPMT) including:
    • Use HTTPS for Source field in debian/copyright files (eg. [...], [...], [...], etc.)
    • Made a large number of PEP8-related changes to Debian-specific scripts including limiting the line-length [...], placing colon-separated compound statement on separate lines [...], adding blank lines after end of function or class [...], fixing spacing after a comment [...], fixing indentation [...], etc.
    • Use HTTPS URLs for the Homepage field in debian/control. (eg. [...], [...], [...], etc.)
  • Fixed a permissions issue in an Alioth-to-Salsa repository migration script. [...]
  • Contributed specific patches:
    • cryptsetup: Make the failsleep parameter configurable. (#898495)
    • debhelper: Clarify the order of packages returned from dh_listpackages. (#897949)
    • mssh: Correct "develop" grammar in manual page. (#899368)
    • norwegian: Duplicate dh_build/dh_auto_build in debian/rules. (#900290)
  • Suggested a handful of PEP8-related changes to the Debian Archive Kit (dak) (eg. [...], [...], [...], etc.)
  • Removed build artefacts committed to the repository in the tvb-geodesic packaging. [...]
  • Use the <!nocheck> build profile over an explicit comment in the Python packaging of yarl. [...]
  • I also filed the following bug reports:
    • apt: Inconsistency between apt install ./binary.deb and dpkg -i ./binary.deb if package already up-to-date. (#900142)
    • ftp.debian.org: Please move the website.git repository to salsa. (#899109)
    • git-buildpackage: Add setting to ~/.gbp.conf to prevent debian/gbp.conf overrides. (#898613)
    • plymouth: Repository missing latest upload. (#898511)
    • python-aniso8601: Please revert Python 2.x package drop. (#898245)
    • lastpass-cli: error: Peer certificate cannot be authenticated with given CA certificates. (#898940)
  • Lastly, I submitted 5 patches to fix typos in debian/rules files against catch, grr, imanx, pd-purest-json & tinyos.

Debian LTS


This month I have been paid to work on the Debian Long Term Support (LTS). In that time I did the following:

  • Extensive "Frontdesk" duties, including triaging CVEs and following up with other developers and upstream developers.
  • Filing and cross-referencing bugs in the Debian BTS (eg. #898856).
  • Issued DLA 1379-1 for curl to prevent a heap-based buffer overflow.
  • Preparing uploads to the jessie distribution.
  • Helping prepare the "end-of-life" of the wheezy distribution.

Uploads

  • redis (5:4.0.9-2) — Ignore test failures on problematic architectures to allow migration to testing.
  • ruby-rjb (1.5.5-3) — Replace call to the now-deprecated javah binary. (#897664)
  • python-django (1:1.11.13-1, 2:2.0.5-1 & 2:2.1~alpha1-1) — New upstream releases.
  • gunicorn (19.8.1-1) & redisearch (1.2.0-1) — New upstream releases.

I also performed the following sponsored uploads:


Cryptogram1834: The First Cyberattack

Tom Standage has a great story of the first cyberattack against a telegraph network.

The Blanc brothers traded government bonds at the exchange in the city of Bordeaux, where information about market movements took several days to arrive from Paris by mail coach. Accordingly, traders who could get the information more quickly could make money by anticipating these movements. Some tried using messengers and carrier pigeons, but the Blanc brothers found a way to use the telegraph line instead. They bribed the telegraph operator in the city of Tours to introduce deliberate errors into routine government messages being sent over the network.

The telegraph's encoding system included a "backspace" symbol that instructed the transcriber to ignore the previous character. The addition of a spurious character indicating the direction of the previous day's market movement, followed by a backspace, meant the text of the message being sent was unaffected when it was written out for delivery at the end of the line. But this extra character could be seen by another accomplice: a former telegraph operator who observed the telegraph tower outside Bordeaux with a telescope, and then passed on the news to the Blancs. The scam was only uncovered in 1836, when the crooked operator in Tours fell ill and revealed all to a friend, who he hoped would take his place. The Blanc brothers were put on trial, though they could not be convicted because there was no law against misuse of data networks. But the Blancs' pioneering misuse of the French network qualifies as the world's first cyber-attack.
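In modern terms, the Blancs' scheme is a covert channel riding on an in-band control code: a spurious symbol followed by "backspace" is invisible to the legitimate transcriber but plainly visible to anyone watching the raw symbol stream. A toy sketch of the idea (the symbols here are hypothetical; the actual Chappe telegraph used semaphore codes, not characters):

```python
BACKSPACE = "\b"  # stands in for the telegraph's "ignore previous character" symbol

def embed(message: str, covert_symbol: str) -> str:
    """Prepend a covert symbol followed by a backspace; the official
    transcription is unaffected, but a wire-watcher sees the symbol."""
    return covert_symbol + BACKSPACE + message

def transcribe(received: str) -> str:
    """The legitimate decoder: a backspace deletes the previous symbol."""
    out = []
    for ch in received:
        if ch == BACKSPACE:
            if out:
                out.pop()
        else:
            out.append(ch)
    return "".join(out)

def observe(received: str) -> str:
    """The accomplice with the telescope reads the raw symbol stream."""
    return received

wire = embed("ROUTINE GOVERNMENT DISPATCH", covert_symbol="U")  # U = "bonds up"
assert transcribe(wire) == "ROUTINE GOVERNMENT DISPATCH"  # delivered text unchanged
assert observe(wire)[0] == "U"  # but the market signal is visible in transit
```

The delivered message is identical to what was sent, which is exactly why the scam survived for two years: nothing in the official output ever looked wrong.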

Planet DebianShirish Agarwal: Authoritarianism and the slow death of Indian Railways

Definition: non-answer – an answer which is not actually an answer; it does everything except answer the question actually asked. Understand this art and you understand how Indian politics and the Indian bureaucracy work.

I was reading an article about how nations, and most of all my own country, are sinking into a well of authoritarianism and a cycle of fear and non-answers generated by the present dispensation.

While I believe myself to be partly at fault for self-censoring, I will try to share some of the issues which have been lying dormant in me for quite some time.

To start with, here are a couple of questions my economics professor asked when I was studying economics almost 20 years back.

The first question he asked was –

1. Why do people like the status quo so much?

Some of the answers the professor gave were –

a. People are happy with the way things are.

b. People do not know how a change will affect them.
The fear comes from the unknown: as with magicians,
only one part or feature is shown and perceived while
the other part stays hidden.

It took me quite a few years of life, and of reading
newspapers, to understand what he meant by that.

c. Special interests which survive and thrive on the
status quo as it is, or, as I later understood, 'follow
the money'.

I am going to use Indian Railways to explore the 'follow the money' model, as I have loved Indian Railways since my childhood and it is also pertinent to the majority of Indians, for whom it is the only means of cheap transport to go from A to B.

Indian Railways logo

A bit of history

Before we get to the present condition, a bit of historical reminiscing is important. Indian Railways has been like an unwanted son since its birth. The British invested a lot while they ruled, for their own benefit; that most of it is still standing today tells you something about the kind of materials that were used. Independence and Partition were the two parting gifts the English gave us when they left, which led to millions of souls being killed on either side. I won't go much into it beyond pointing to Khushwant Singh's 'Train to Pakistan'. It is probably one of the hardest books I have read, as there are just too many threads and one is simply unable to grasp the horror that Partition wrought on the Indian subcontinent.

The reason I share about Partition is that trains were the only means for a lot of people to cover huge distances in those times. After Partition, when Pandit Nehru became the P.M., he, along with many leaders including Dr. Babasaheb Ambedkar (known as the Architect of the Constitution of India), wanted a secular, socialist India which would be self-sufficient in nature. The experiment was carried on later by his daughter Mrs. Gandhi and later still by his grandson Mr. Rajiv Gandhi. All of these Prime Ministers invested heavily in whatever they thought was best for the country, except for Indian Railways. From the 1980s onwards especially, there was a dramatic downward shift in the creation of public infrastructure, particularly the Railways, even though the Governments knew we would be a young country in the coming years.

The 90’s

Before India's Independence, India was a collection of several princely states consisting of today's India, Pakistan, parts of Burma and Nepal, so when the British came with the rails, it was an innovation. As the railways spread during that period, three different railway systems were spawned based on gauge width: narrow gauge, standard gauge and (Indian) broad gauge. Wikipedia has a nice article about the different gauge networks, so I will leave that to them.

In the 90's, apart from the dramatic policy shift from socialism to capitalism and the limited entry of foreign capital into specific sectors, one of the good intentions was Project Unigauge for Indian Railways, which was supposed to be finished by the end of the century but has still not been completed to date.

The other thing which was supposed to happen was a push for the electrification of Indian Railways, which is still far from over. There is lobbying from the diesel lobby, at least in the locomotive space. As almost all locomotive designs have been bought from various foreign vendors and then Indianized, they do not want their interests to be diluted.

Present situation

The present situation is that Indian Railways is in dire straits: it has an operating ratio of 94.9 percent, which leaves very little surplus for investment in the network.

See the image of an Average Indian Household spend on various services –

Average Indian Household spend per month - Copyright Times of India.

– Copyright – Times of India.

As can be seen, the biggest expenses are travel and eating out. In most developed economies, the share of travel expenses is not more than 4% of a typical household budget, but as can be seen, for Indians the proportion is much higher.

The Indian Railways has been the worst performer as far as on-time performance is concerned, at least since the present dispensation has taken over.

So who gains if trains run late? Private buses and air services. Private bus operators have been known to raise prices every year, especially around holidays and festivals. The same is the case with airfares; it is such a common occurrence that it doesn't register any shock anymore. There is a proposal to give fair compensation for flight delays and cancellations, somewhat like what is available in European sectors, but most operators say it will inevitably lead to higher fares across the board.

The airport infrastructure is also under severe strain while clocking increasing growth as people look to be at places at appointed times. The growth has been amazing while the on-time performance has been going in the opposite direction due to poor planning and mis-management. We need to have more CISF personnel and much wider airports (both land-side and air-side) to accommodate the increasing number of people traveling.

Just yesterday came across an interesting article on civil aviation which brings out all what I wanted to share and more.

Indian Railways, meanwhile, seems to be running out of options: even though infrastructure is being added, it is just not being added fast enough, and there is not enough talent, which is going to cost us in both the short and the medium term 😦

I could share quite a lot of operational and policy issues, but that might be boring for people who are not rail-fanners. I'll just end with the simplest statement: if you look at the work of at least the last couple of decades of Indian Railway Ministers, most would present budgets where the emphasis was on announcing new railway services, with something like a 3-4% budget increase for the creation of infrastructure, which many a time would lie unused without proper explanation, or with a non-answer.

Somewhat Good news

The only bright spot seems to be the Dedicated Freight Corridors, which hopefully should increase the Railways' freight earnings and give more room for maintenance to happen on Indian Railways. The freight share of Indian Railways, which used to be 90%, has now shrunk to less than 18% for a number of reasons, among them opening up the freight sector from railways to roads, putting freight trains on passing loops, and making freight a second-class citizen on the railways, although roads have their own issues. A Livemint report from a couple of years back also shed some light on the situation.

On the passenger front, only the metro railways offer some sort of good news, but not enough. I am just hanging on to hope, as until we add value and move up the value chain on exports, I don't really see India doing well.

There are a lot of challenges as well as opportunities for whichever Government comes next. I have been thoroughly disappointed with the performance of the present Government in everything, including international trade, which was supposed to be unlocked by the present PM.

Worse Than FailureImprov for Programmers: When Harddrives Attack

Put on some comfy pants, we're back again with a little something different, brought to you by Raygun. This week's installment starts with exploding hard drives, and only Steve Buscemi can save us. Today's episode contains small quantities of profanity.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianHideki Yamane: Enabled power-saving feature in Linux 4.17

I've committed a change to enable the power-saving feature on laptops for Linux 4.17 in the Debian package, and it has entered the experimental repository. Please try it (and send a report if you run into any trouble with it).

Thanks to the Fedora people for noting this feature in their release notes :)

,

Planet Linux AustraliaDavid Rowe: FreeDV 700D Released

Here is a sample of Mark, VK5QI, sending a FreeDV 700D signal from Adelaide, South Australia, to a Kiwi SDR at the Bay of Islands, New Zealand. It was a rather poor channel with a path length of 3200km (2000 miles). First SSB, then FreeDV 700D, then SSB again:

Last weekend FreeDV GUI 1.3 was released, which includes the new 700D mode. I’ve been working hard for the last few months to get 700D out of the lab and onto the air. Overall, I estimate about 1000 hours were required to develop FreeDV 700D over the last 12 months.

For the last few weeks teams of beta testers dotted around the world have been running FreeDV 1.3 in the wild. FreeDV 700D is fussy about lost samples so I had to do some work on the care and feeding of the sound drivers, especially on the Windows build. Special thanks to Steve K5OKC, Richard KF5OIM, Mark VK5QI, Bill VK5DSP; Hans PA0HWB and the Dutch team; Eric GW8LJJ, Kieth GW8TRO and the UK team; Mel K0PFX, Walt K5WH and the US team, Peter VK5APR, Peter VK2TPM, Bruce K6BP, Gerhard OE3GBB, John VK5DM/VK3IC, Peter VK3RV and the Sunbury team, and my local AREG club. I apologise if I have missed anyone, all input is greatly appreciated.

Anyone who writes software should be sentenced to use it. So I've poked a few antennas up into the air and, conditions permitting, have made 700D contacts, getting annoyed with things that don't work, then tweaking and improving. Much to my surprise it really does handle some nasty fading, and it really does work better than SSB in many cases. Engineers aren't used to things working, so this is a bit of an adjustment for me personally.

Results

Here's a demo video of FreeDV 1.3 decoding a low SNR transatlantic contact between Gerhard OE3GBB and Walt, K5WH:

You can see the fast fading on the signal. The speech quality is not great, but you get used to it after a little while and it supports conversations just fine. Remember at this stage we are targeting low SNR communications, as that has been the major challenge to date.

Here’s a screen shot of the FreeDV QSO Finder (thanks John K7VE) chat log, when the team tried SSB shortly afterwards:

FreeDV 700D also has some robustness to urban HF noise. I'm not sure why; this still needs to be explored. Here is the off-air signal I received from Peter, VK2TPM. It's full of nasty buzzing switching power supply noises, and is way down in the noise, but I obtained an 80% decode:

It’s hard to hear the modem signal in there!

FreeDV 700D Tips

Lots of information on FreeDV, and the latest software, at freedv.org. Here are some tips on using 700D:

  1. The 700 bit/s codec is sensitive to your microphone and the FreeDV microphone equaliser settings (Tools-Filter). I suggest you set up a local loopback to hear your own voice and tune the quality using the Tools-Filter Mic equaliser. You can play pre-recorded wave files of your own voice using Tools-Play File to Mic In, or with the "voice keyer" feature.
  2. The current 700D modem is sensitive to tuning; you need to be within +/- 20Hz for it to acquire. This is not a practical problem with modern radios, which are accurate to +/- 1Hz. Once you have acquired sync it can track drift of 0.2Hz/s. I'll get around to improving the sync range one day.
  3. Notes on the new features in FreeDV 1.3 User Guide.
  4. Look for people to talk to on the FreeDV QSO Finder (thanks John K7VE)
  5. Adjust the transmit drive to your radio so it’s just moving the ALC. Don’t hammer your PA! Less is more with DV. Aim for about 20W average power output on a 100W PEP radio.
  6. If you get stuck reach out for help on the Digital Voice mailing list (digitalvoice at googlegroups.com)
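Tip 5 above is just the dB arithmetic of peak versus average power: 20 W average on a 100 W PEP radio leaves about 7 dB of headroom for the modem waveform's peaks. A minimal sketch of that calculation (the wattage figures are the tip's suggestion, not a measured 700D peak-to-average ratio):

```python
import math

def db(ratio: float) -> float:
    """Express a power ratio in decibels."""
    return 10 * math.log10(ratio)

pep_watts = 100.0  # radio's peak envelope power rating
avg_watts = 20.0   # suggested average transmit power for DV

headroom_db = db(pep_watts / avg_watts)
print(f"headroom for modem peaks: {headroom_db:.1f} dB")  # ~7.0 dB
```

Driving harder than this eats into that headroom, clips the OFDM peaks in the PA, and distorts the modem signal, which is why "less is more with DV".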

Significance

The last time a new HF voice mode was introduced was the 1950’s and it was called Single Side Band (SSB). It’s lasted so long because it works well.

So a new voice mode that competes with SSB is something rare and special. We don’t want the next HF Voice mode to be locked down by codec vendors. We want it to be open source.

I feel 700D is a turning point for FreeDV and open source digital voice. After 10 years of working on Codec 2 and FreeDV, we are now competitive with SSB on HF multipath channels at low SNRs. The 700 bit/s codec isn't great. It's fussy about microphones, EQ settings, and background noise. But it's a start, and we can improve from here.

It takes some getting used to, but our growing experience has shown 700D is quite usable for conversations. Bear in mind SSB isn’t pretty at low SNRs either (see sample at the top), indeed untrained listeners struggle with SSB even at high SNRs.

Quite remarkably, the 700 bit/s codec outperforms locked down, proprietary, expensive, no you can’t look at my source or modify me, codecs like MELP and TWELP at around the same bit rate.

The FreeDV 700D waveform (the combined speech codec, FEC, modem, protocol) is competitive at low SNRs (-2dB AWGN, +2dB CCIR Poor channel), with several closed source commercial HF DV systems that we have explored.

FreeDV 700D requires about 1000 Hz of RF bandwidth, half of SSB.

Most importantly FreeDV and Codec 2 are open source. It’s freely available to not just Radio Amateurs, but emergency services, the military, humanitarian organisations, and commercial companies.

Now that we have some traction with low SNR HF fading channels, the next step is to improve the speech quality. We can further improve HF performance with experience, and I’d like to look at VHF/UHF again, and push down to 300 bit/s. The Lower SNR limit of Digital Voice is around -8dB SNR.
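Negative SNRs can be unintuitive: below 0 dB the noise power actually exceeds the signal power. A quick sketch of the standard dB conversion (plain arithmetic, not FreeDV code) makes the figures quoted above concrete:

```python
import math

def snr_db(signal_power: float, noise_power: float) -> float:
    """SNR in dB from linear signal and noise powers."""
    return 10 * math.log10(signal_power / noise_power)

def power_ratio(snr_in_db: float) -> float:
    """Linear signal-to-noise power ratio implied by an SNR in dB."""
    return 10 ** (snr_in_db / 10)

print(power_ratio(-2))  # ~0.63: at -2 dB the noise is ~1.6x stronger than the signal
print(power_ratio(-8))  # ~0.16: at -8 dB the noise is ~6x stronger, the quoted DV limit
```

Decoding intelligible speech when the noise is six times stronger than the signal is what makes the -8 dB figure such a hard target.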

This is experimental radio. DV over HF is a very tough problem. Unlike almost all other voice services (mobile phones, VHF/UHF radio), HF is still dominated by analog SSB modulation. I'm doing much of the development by myself, so I'm taking one careful, 1000 man-hour, step at a time. Unlike other digital voice modes (I'm looking at you DStar/C4FM/DMR/P25) – we get to set the standard (especially the codec), rather than following it and being told "this is how it is".

Get Involved

My work excites a lot of people, and gets the brainstorms flowing. I get overwhelmed by people making well-meaning suggestions about what I should do with my volunteer time, and underwhelmed by the number who will step up and help me do it.

I actually know what to do, and the track record above demonstrates it. What I need is help to make it happen. I need people who can work with me on the items below:

  1. Support this work via Patreon or PayPal
  2. Refactor and maintain the FreeDV GUI source code. I should be working on DSP code where my skills are unique, not GUI programs and Windows sound problems. See bottom of FreeDV GUI README.
  3. Experienced or not, if you want to play DSP, I have some work for you too. You will learn a lot. Like Steve Did.
  4. Find corner cases where 700D breaks. Then help me fix it.
  5. Work with me to port 700D to the SM1000.
  6. Make freedv.org look great and maintain it.
  7. Help me use Deep Learning to make Codec 2 even better.
  8. Start a FreeDV Net.
  9. Set up a FreeDV beacon.
  10. Help me get some UHF/VHF FreeDV modes on the air. Some coding and messing with radios required.
  11. Help others get set up on FreeDV, 700D voice quality depends on the right microphone and equaliser settings, and noobs tend to over drive their PA.
  12. Create and Post Demo/instructional Videos.

Like the good people above, you have the opportunity to participate in the evolution of HF radio. This has happened once in the last 60 years. Let's get started.

If you are interested in development, please subscribe to the Codec 2 Mailing List.

Reading Further

Peter VK2TPM, blogs on 700D.
AREG Blog Post on FreeDV 700D
Steve Ports an OFDM modem from Octave to C. This is the sort of support I really need – thanks Steve for stepping up and helping!
Windows Installers for development versions of FreeDV.
Codec 2 700C
AMBE+2 and MELPe 600 Compared to Codec 2
Lower SNR limit of Digital Voice
700D OFDM modem README and specs
FreeDV User Guide, including new 700D features.
Bill, VK5DSP designed the LDPC code used in 700D and has helped with its care and feeding. He also encouraged me to carefully minimise the synchronisation (pilot symbol) overhead for the OFDM modem used in 700D.

TEDTED en Español: the first Spanish-language TED speaker event

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY (Photo: Dian Lofton/TED)

On April 26, the first TED en Español speaker event took place, hosted by TED at its New York office. The all-Spanish event featured eight speakers, a musical performance, five short films and 13 one-minute talks given by members of the audience.

The New York event is the latest addition to TED's "TED en Español" initiative, designed to spread ideas in Spanish to the global Hispanic community. The event was hosted by Gerry Garbulsky, director of TED en Español (and also director of the world's largest TEDx event, TEDxRiodelaPlata in Argentina). TED en Español also includes its own page on TED.com, a Facebook community, a Twitter feed, a weekly "Boletín" newsletter, a YouTube channel and, as of earlier this month, an original podcast created in partnership with Univision.

Should we automate democracy? "Is it just me, or are there other people who are a little disappointed with democracy?" asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor and researcher wants to make sure we elect governments that truly represent our values and wishes. His solution: what if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter can teach their own AI how to think like them, using quizzes, reading lists and other kinds of data. Once you have trained your AI and validated a few of the decisions it makes for you, you can leave it on autopilot, voting and representing you... or you can choose to approve everything it suggests. It is easy to poke holes in his idea, but Hidalgo believes it is worth trying at a smaller scale. His bottom line: "Democracy has a terrible user interface. If we could improve the interface, we might be able to use it more."

When the focus of failure shifts from what is lost to what is gained, we can all learn to "fail mindfully," says Leticia Gasca (Photo: Jasmina Tomic/TED)

How to fail mindfully. If your business had failed in Ancient Greece, you would have had to stand in the town square with a basket over your head. Fortunately, we have come a long way... or have we? Failed-business owner Leticia Gasca doesn't think so. Motivated by her own painful experience, she set out to create a way for others like her to turn the guilt and shame of a business venture gone wrong into an accelerator of growth. Thus were born "Fuckup Nights" (FUN), a series of events around the world for sharing stories of professional failure, and The Failure Institute, a research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to "fail mindfully" and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than music to the TED en Español stage. Venezuelan María Fernanda González, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms ranging from South American styles to Caribbean fusions, inviting the audience to dance with them. Playing "Night Traveler" and "Porro Maracatu," LADAMA transformed the stage into a musical space worth spreading.

Gastón Acurio shares stories about the power of food to change lives (Photo: Jasmina Tomic/TED)

Global change starts in your kitchen. Through his pioneering work bringing Peruvian cuisine to the world, Gastón Acurio discovered the power food has to change people's lives. As ceviche appeared in renowned restaurants around the world, Gastón saw his native Peru begin to appreciate the diversity of its gastronomy and take pride in its own culture. But food has not always been used to do good in the world. Because of the industrial revolution and the rise of consumerism, "more people now die of obesity than of hunger," he says, and many people's lifestyles are not sustainable. By engaging with and caring about the food we eat, says Gastón, we can change our priorities as individuals and change the industries that serve us. He does not yet have the answers for how to turn this into a systematic movement that politicians can get behind, yet renowned cooks around the world are bringing these ideas into their kitchens. He tells stories of a restaurant in Peru that helps native communities by sourcing ingredients from them, a famous chef in New York who fights the use of monocultures, and an emblematic restaurant in France that has taken meat off the menu. "Cooks around the world are convinced that we cannot wait for others to make the changes, and that we must get moving ourselves," he says. But professional cooks cannot do it all. If we want to bring about deep change, Gastón urges, home cooking has to be the key.

The interconnection of music and life. Chilean conductor Paolo Bortolameolli wraps his take on music around his memory of crying the first time he heard classical music performed live. Sharing the emotions music stirred in him, Bortolameolli presents music as a metaphor for life, full of the expected and the unexpected. He believes we listen to the same songs over and over because, as humans, we like to experience life from a standpoint of expectation and stability, and he suggests that every time we listen to a song we bring the music to life, imbuing it with the potential not only to be recognized but also to be rediscovered.

We reap what we sow, so let's sow something different. Until the mid-1980s, incomes in the major Latin American countries were on par with Korea's. Now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enríquez, lies in a national prioritization of brainpower: identifying, educating and celebrating the best minds. What would happen if, in Latin America, we started selecting for academic excellence the way we select the national football team today? If Latin American countries are to thrive in the age of technology and beyond, they should look to establish their own top universities instead of letting their brightest minds, hungry for challenge, competition and achievement, find those things somewhere else, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not to alienate them (Photo: Jasmina Tomic/TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent her life juggling multiple identities, Hwang says that having a varied background, though sometimes challenging, is actually a superpower. The venture investor shared how her fluency in many languages and cultures lets her make connections with all kinds of people around the world. As the mother of two young children, Hwang hopes to pass this perspective on to them. She wants to teach them to embrace their origins and to create a world where identities are used to bring people together, not to alienate them.

Marine ecologist Enric Sala wants to protect the ocean's last wild species (Photo: Jasmina Tomic/TED)

How we will save our oceans. If you jumped into the ocean anywhere, says Enric Sala, you would have a 98 percent chance of diving into a dead zone: a barren landscape, empty of large fish and other forms of marine life. As a marine ecologist and National Geographic explorer-in-residence, Sala has dedicated his life to surveying the world's oceans. Focusing on the high seas, he proposes a radical solution to help protect the oceans: creating a reserve that would cover two-thirds of the planet's oceans. By safeguarding our high seas, Sala believes, we will restore the ocean's ecological, economic and social benefits and ensure that when our grandchildren jump into any part of the sea, they encounter an abundance of glorious marine life instead of empty space.

And to close... In an improvised rap performance with plenty of well-timed dance moves, psychologist, rapper and dancer César Silveyra closes the event. In a spectacular display of his skills, Silveyra weaves together ideas from the event's earlier speakers, including Enric Sala's warnings about overfishing the oceans, Gastón Acurio's Peruvian cooking revolution, and even a shout-out to speaker Rebeca Hwang's grandmother... all while "feeling like Beyoncé."

Cory DoctorowPodcast: Petard, Part 03


Here’s the third part of my reading (MP3) of Petard (part one, part two), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Planet DebianBits from Debian: Debian welcomes its GSoC 2018 and Outreachy interns

GSoC logo

Outreachy logo

We're excited to announce that Debian has selected twenty-six interns to work with us during the next months: one person for Outreachy, and twenty-five for the Google Summer of Code.

Here is the list of projects and the interns who will work on them:

A calendar database of social events and conferences

Android SDK Tools in Debian

Automatic builds with clang using OBS

Automatic Packages for Everything

Click To Dial Popup Window for the Linux Desktop

Design and implementation of a Debian SSO solution

EasyGnuPG Improvements

Extracting data from PDF invoices and bills for financial accounting

Firefox and Thunderbird plugin for free software habits

GUI app for EasyGnuPG

Improving Distro Tracker to better support Debian teams

Kanban Board for Debian Bug Tracker and CalDAV servers

OwnMailbox Improvements

P2P Network Boot with BitTorrent

PGP Clean Room Live CD

Port Kali Packages to Debian

Quality assurance for biological applications inside Debian

Reverse Engineering Radiator Bluetooth Thermovalves

Virtual LTSP Server

Wizard/GUI helping students/interns apply and get started

Congratulations and welcome to all of them!

The Google Summer of Code and Outreachy programs are possible in Debian thanks to the efforts of Debian developers and contributors that dedicate part of their free time to mentor interns and outreach tasks.

Join us and help extend Debian! You can follow the interns weekly reports on the debian-outreach mailing-list, chat with us on our IRC channel or on each project's team mailing lists.

TEDTED en Español: TED’s first-ever Spanish-language speaker event in NYC

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

Thursday marked the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event featured eight speakers, a musical performance, five short films and fifteen one-minute talks given by members of the audience.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina, TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and — as of earlier this month — an original podcast created in partnership with Univision Communications.

Should we automate democracy? “Is it just me, or are there other people here that are a little bit disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor wants to make sure we have elected governments that truly represent our values and wishes. His solution: What if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. So once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business failed in Ancient Greece, you’d have to stand in the town square with a basket over your head. Thankfully, we’ve come a long way — or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to convert the guilt and shame of a business venture gone bad into a catalyst for growth. Thus was born “Fuckup Nights” (FUN), a global movement and event series for sharing stories of professional failure, and The Failure Institute, a global research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than just music to the TED en Español stage. Inviting the audience to dance with them, Venezuelan Maria Fernanda Gonzalez, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms that range from South American to Caribbean-infused styles. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a place of music worth spreading.

Gastón Acurio shares stories of the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power that food has to change peoples’ lives. As ceviche started appearing in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and become proud of its own culture. But food hasn’t always been used to bring good to the world. With the industrial revolution and the rise of consumerism, “more people in the world are dying from obesity than hunger,” he notes, and many peoples’ lifestyles aren’t sustainable. 
By interacting with and caring about the food we eat, Gastón says, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have all the answers on how to make this a systematic movement that politicians can get behind, but world-renowned cooks are already taking these ideas into their kitchens. He tells the stories of a restaurant in Peru that supports native people by sourcing ingredients from them, a famous chef in NYC who’s fighting against the use of monocultures and an emblematic restaurant in France that has barred meat from the menu. “Cooks worldwide are convinced that we cannot wait for others to make changes and that we must jump into action,” he says. But professional cooks can’t do it all. If we want real change to happen, Gastón urges, we need home cooking to be at the center of everything.

The interconnectedness of music and life. Chilean musical director Paolo Bortolameolli wraps his views on music within his memory of crying the very first time he listened to live classical music. Sharing the emotions music evoked in him, Bortolameolli presents music as a metaphor for life — full of the expected and the unexpected. He thinks that we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and he simultaneously suggests that every time we listen to a musical piece, we enliven the music, imbuing it with the potential to be not just recognized but rediscovered.

We reap what we sow — let’s sow something different. Up until the mid-’80s, the average incomes in major Latin American countries were on par with those in Korea. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enriquez, lies in a national prioritization of brainpower — and in identifying, educating and celebrating the best minds. What if in Latin America we started selecting for academic excellence the way we would for an Olympic soccer team? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds thirst for nourishment, competition and achievement — and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent a lifetime juggling various identities, Hwang can attest that having a blended background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people from around the globe. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their unique backgrounds and to create a world where identities are used to bring people together, not alienate them.

Marine ecologist Enric Sala wants to protect the last wild places in the ocean. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans. If you jumped in the ocean at any random spot, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone — a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. He proposes a radical solution to help protect the oceans by focusing on our high seas, advocating for the creation of a reserve that would include two-thirds of the world’s ocean. By safeguarding our high seas, Sala believes we will restore the ecological, economic and social benefits of the ocean — and ensure that when our grandchildren jump into any random spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up … In an improvised rap performance with plenty of well-timed dance moves, psychologist and dance therapist César Silveyra closes the session with 15 of what he calls “nano-talks.” In a spectacular showdown of his skills, Silveyra ties together ideas from previous speakers at the event, including Enric Sala’s warnings about overfished oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout for speaker Rebeca Hwang’s grandmother … all the while “feeling like Beyoncé.”

Worse Than FailurePassing Messages

About 15 years ago, I had this job where I was requested to set up and administer an MQ connection from our company to the Depository Trust & Clearing Corporation (DTCC). Since I had no prior experience with MQ, I picked up the manual, learned a few commands, and in a day or so, had a script to create queue managers, queues, disk backing stores, etc. I got the system analysts (SA's) at both ends on the phone and in ten minutes had connectivity to their test and production environments. Access was applied for and granted to relevant individuals and applications, and application coding could begin.

Pyramid of Caius Cestius exterior, showing the giant wall which blocks everything
By Torquatus - Own work

I didn't know the full and complete way to manage most of the features of MQ, but I had figured out enough to properly support what we needed. Total time was 2.5 man-days of effort.

Fast forward to the next job, where file-drops and cron job checks for file-drops were the way interprocess communication was implemented. At some point, they decided to use a third party vendor to communicate with DTCC via MQ. Since I had done it before, they asked me to set it up. OK, easy enough. All I had to do was install it, set up a queue manager and a couple of queues to test things. Easy peasy. I put MQ on my laptop and created the queue managers and queues. Then I introduced myself to the SA at the vendor (who was in the same position I had been in at my previous job) and explained to him what I had done in the past and that I needed it set up at the current job. He agreed it was a good way to go. I then got our SAs and my counterpart at the vendor on the phone and asked them to hash out the low level connectivity. That's when I hit the wall of bureaucracy-run-amok™.

It turns out that our SA's wouldn't talk to SA's outside the firm. That's what the networking team was for. OK, get them in the loop. We won't set up connectivity with outside entities without security approval. The networking team also informed me that they wouldn't support a router with connections outside the firewall, but that they would allow a router physically outside the firewall IF the vendor would support it (that's like saying I want to connect to Google, so I'll pay Google to support a router outside my firewall to connect to them).

The security people wanted to know whether hardware had been purchased yet (is the hardware "appropriate" for connecting to outside entities). The fact that it was just for a test queue to send one message fell on deaf ears; proper hardware must be approved, funded and in-place first!

The hardware people wanted to know if the hardware had been reviewed by the capacity planning team to be sure that it supported future growth (our plans were to replace a task that moved 4 (f-o-u-r) 2MB messages per day, and if successful, add 5-6 subsequent tasks comprising 10-20 similar messages each per day; a ten year old laptop would have been overpowered for this task).

This lunacy continued until we had 33 teams involved in a 342 line project plan comprising multiple man-years of effort - to set up a queue manager and 2 queues from a laptop to a vendor, to send a single test message.

At this point, everybody at the vendor was enraged at the stupidity to which our various departments were subjecting them (e.g.: you must program your firewall rules like xxx, you must provide and support a router outside our firewall, will a message sent from your hardware be able to be received on our hardware (seriously, MQ supported both platforms!), etc.), and ALL of them got on the phone to try and force me to change my mind (it wasn't coming from me: it was the other departments).

I was finally forced to say the stupidest thing I've ever had to say: Yes, I agree that the way you are proposing that we set things up is well understood, redundant, reliable, easy to set up and support, cost effective, efficient, secure, reasonably simple and generally accepted as the right way to do this, and our company will have none of that!

I then had to tell them yet again that it wasn't coming from me, and to beg them to just do it the way the bureaucracy wanted it done, or said bureaucracy would never let it happen.

At that point, I convinced my boss of the stupidity that was being inflicted on this vendor, and so he agreed to sign a five year contract, at premium rates, to get them to do it the way that our company wanted, even though we knew it was idiotic, wasteful and just plain wrong.

This went back and forth for a year. Long story short: we paid the vendor a crapton of money to supply, configure and remotely support a router at our location outside the firewall, we paid them a fortune for five years of capacity to push 10 million messages per day, and we spent more than $750,000 on super high powered redundant hot/standby hardware across dev, test, qa, pre-production, production and DR environments, all before we were allowed to send one test message across a test queue.

Our company then decided not to move to the more modern messaging technology because it was too difficult to set up and that they would continue to use cron job checks for file-drops as message-ready indicators. I pointed out that the difficulty was from the internal bureaucracy and not the vendor or the technology... <crickets/>. They never sent another message down that queue, and left all that iron - dedicated to the cancelled project - unused, fully supported and running in all the environments for five years, after which the process of decommissioning hardware was triggered (I'll leave this for your nightmares to imagine).

I later found out that I was the 5th person (out of 8 over 10 years) hired to perform this same migration. Apparently each of us ran into the same impenetrable wall-o-bureaucracy.

To this day, they are still doing interprocess communication via file-drops and cron jobs to check for the dropped files.


Planet DebianRuss Allbery: Review: Bull by the Horns

Review: Bull by the Horns, by Sheila Bair

Publisher: Simon & Schuster
Copyright: 2012
Printing: September 2013
ISBN: 1-4516-7249-7
Format: Trade paperback
Pages: 365

Sheila Bair was the Chair of the Federal Deposit Insurance Corporation from 2006 to 2011, a period that spans the heart of the US housing crisis and the start of the Great Recession. This is her account, based on personal notes, of her experience heading the FDIC, particularly focused on the financial crisis and its immediate aftermath.

Something I would like to do in theory but rarely manage to do in practice is to read more thoughtful political writing from people who disagree with me. Partly that's to broaden my intellectual horizons; partly it's a useful reminder that the current polarized political climate in the United States does not imply that the intellectual tradition of conservatism is devoid of merit. While it's not a complete solution, one way to edge up on such reading is to read books by conservatives that are focused on topics where they and I largely agree.

In this case, that topic is the appalling spectacle of consequence-free government bailouts of incompetently-run financial institutions, coordinated by their co-conspirators inside the federal government and designed to ensure that obscenely large salaries and bonuses continued to flow to exactly the people most responsible for the financial crisis. If I sound a little heated on this topic, well, consider it advance warning for the rest of the review. Suffice it to say that I consider Timothy Geithner to be one of the worst Secretaries of the Treasury in the history of the United States, a position for which the competition is fierce.

Some background on the US financial regulatory system might be helpful here. I'm reasonably well-read on this topic and still learned more about some of the subtleties.

The FDIC, which Bair headed, provides deposit insurance to all of the banks. This ensures that whatever happens to the bank, all depositors are guaranteed to get every cent of their money up to $100,000 (now $250,000 due to a law that was passed as part of the events of this book). This deposit insurance is funded by fees charged to every bank, not by general taxes, although the FDIC has an emergency line of credit with the Treasury it can call on (and had to during the savings and loan crisis in the early 1990s).

The FDIC is also the primary federal regulator for state banks. It is not the regulator for federal banks; those are regulated by the Office of the Comptroller of the Currency (OCC) and, at the time of events in this book, the Office of Thrift Supervision (OTS), which regulated Savings and Loans. Some additional regulation of federal banks is done by the Federal Reserve. The FDIC is a "backup" regulator to those other institutions and has some special powers related to its function of providing deposit insurance, but it doesn't in general have the power to demand changes of federal banks, only the smaller state banks.

This turns out to be rather important in the financial crisis: bad state banks regulated by the FDIC were sold off or closed, but the huge federal banks regulated by the OCC and OTS were bailed out via various arranged mergers, loan guarantees, or direct infusions of taxpayer money. Bair's argument is that this difference is partly due to the ethos of the FDIC and its well-developed process for closing troubled banks. The standard counter-argument is that the large national banks were far too large to put through that or some similar process without massive damage to the economy. (Bair strenuously disagrees.)

Bair's account starts in 2006, by which point the crisis was already probably inevitable, and contains a wealth of information about the banking side of the crisis itself and its immediate aftermath. Her story is one of consistent pressure by the FDIC to increase bank capital requirements and downgrade risk ratings of institutions, and consistent pressure by the OCC, OTS, and Geithner (first as the head of the New York branch of the Federal Reserve and then as Treasury Secretary) to decrease capital requirements even in the height of the crisis and allow banks to use ever-more-creative funding models backed by government guarantees. Bair fleshes this out with considerable detail about how capital requirements are measured, how the loan guarantees were structured, the internal arguments over how to get control of the crisis, and the subsequent fights in Congress over Dodd-Frank and how TARP money was spent.

(TARP, the Troubled Asset Relief Program, was the Congressional emergency measure passed during the height of the crisis to fund government purchases and restructuring of troubled mortgage debt. As Bair describes, and has been exhaustively detailed elsewhere, it was never really used for that. The government almost immediately repurposed it for direct bailouts of financial institutions and provided almost no meaningful mortgage restructuring.)

This account also passes my primary sniff test for books about this crisis. Fannie and Freddie (two oddly-named US government institutions with a mandate to support mortgage lending and home ownership) are treated as bad actors and horribly mismanaged entities that made the same irresponsible investments as the private banking industry, but they aren't put at the center of the crisis and aren't blamed for the entire mortgage mess. This disagrees with some corners of Republican politics, but agrees with all other high-quality reporting about the crisis.

Besides fascinating details about banking regulation in a crisis, the primary conclusion I drew from this book is the power of institutions, systems, and rules. One becomes good at things one does regularly. The FDIC closes failing banks without losing insured depositor money, and has been doing that since 1933, often multiple times a year. They therefore have a tested system for doing this, which they practice implementing reliably, efficiently, and quickly. Bair states as a point of deep institutional pride that no insured depositor had to wait more than one business day for access to their funds during the financial crisis. Banks are closed after business hours and, whenever possible, the branches were open for business under new supervision the next morning. This is as important as the insurance in preventing runs on the bank, which would make the closing even more costly.

Part of that system, built into the FDIC principles and ethos, was a ranking of priorities and a deep sense of the importance of consequences. Insured depositors are sacrosanct. Uninsured depositors are not, but often they can be protected by selling the bank assets to another, healthier bank, since the uninsured depositors are often the bank's best customers. Investors in the bank, in contrast, are wiped out. And other creditors may also be wiped out, or at least have to take a significant haircut on their investment. That is the price of investing in a failed institution; next time, pay more attention to the health of the business you're investing in. The FDIC is legally required to choose the resolution approach that is the least costly to the deposit insurance fund, without regard to the impact on the bank's other creditors.

And, finally, when the FDIC takes over a failing bank, one of the first things they do is fire all of the bank management. Bair presents this as obvious and straightforward common sense, as it should be. These were the people who created the problem. Why would you want to let them continue to mismanage the bank? The FDIC may retain essential personnel needed to continue bank operations, but otherwise gets rid of the people who should bear direct responsibility for the bank's failure.

The contrast with the government's approach with AIG, Citigroup, and other failed financial institutions, as spearheaded by Timothy Geithner, could not be more stark. I remember following the news at the time and seeing straight-faced and serious statements that it was important to preserve the compensation and bonuses of the CEOs of failed institutions so that they would continue to work for the institution to unwind all of its bad trades and troubled assets. Bair describes herself as furious over that decision.

The difficulty in critiques of the government's approach to the financial crisis has always been that it was a crisis, with unknown possible consequences, and the size of the shadow banking sector and the level of entangled risk was so large that any systematic bankruptcy process would have been too risky. I'm with Bair in finding this argument dubious but not clearly incorrect. The Lehman Brothers bankruptcy was rocky, but it's not clear to me that a similar process couldn't have worked for other firms. But that aside, retaining the corporate management (and their salaries and bonuses!) seems a clear indication to me of the corruption of the system. (Bair, possibly more to her credit than mine, carefully avoids using that term.)

Bair highlights this as one of the critical reasons why the FDIC process is legally akin to bankruptcy: these sorts of executives write themselves sweetheart employment contracts that guarantee huge payouts even if their company fails. In the FDIC resolution process, those contracts can be broken. If, as Geithner did, you take heroic measures to avoid going anywhere near bankruptcy law, breaking those contracts becomes more legally murky. (Dodd-Frank has a provision, strongly supported by Bair, to create a legal framework for clawing back compensation to executives after certain types of financial misreporting, although it's still far more limited than the FDIC resolution process.)

A note of caution here: this book is obviously Bair's personal account, and she's not an unbiased party. She took specific public positions during the crisis and defends them here, including against analysis in other books about the crisis. She also describes lots of private positions, some of which are disputed. (Andrew Ross Sorkin's book is the subject of some particularly pointed disagreement.) I have read enough other books about the crisis to believe that Bair's account is probably essentially correct, particularly given the nature of the contemporaneous criticism against her. But, that said, the public position against bailouts had become quite clear by the time she was writing this book, and there was doubtless some temptation to remember her previous positions as more in line with later public opinion than they were. This sort of insider account is always worth a note of caution and some effort to balance it with other accounts, particularly given Bair's love of the spotlight (which shines through in a few places in this book).

Bair is a life-long Republican and a Bush appointee. I suspect she and I would disagree on most political positions. But her position as head of the FDIC was that bank failure should come with consequences for those running the bank, that the priority of the government should be protection of insured bank depositors first and the deposit insurance fund second, and that other creditors should bear the brunt of their bad investment decisions, all of which I agree with wholeheartedly. This account is an argument for the importance of moral hazard, and an indictment and diagnosis of regulatory capture from someone who (refreshingly) is not just using that as a stalking horse to argue for eliminating regulation. Bair also directly tackles the question of whether the same moral hazard argument applies to the individual loan holders and concludes no, but this part of the argument was a bit light on detail and probably won't convince someone with the opposite opinion.

It's quite frustrating, reading this in 2018, how many of the reforms Bair argues for in this book never happened. (A ban on naked credit default swaps, for example, which Bair argues increase systemic risk by increasing the consequences of institutional bankruptcy, thus creating new "too big to fail" analyses like that applied to AIG. Timothy Geithner was central to defeating an effort to outlaw them.) It's also a tragic reminder of how blindly partisan our national debates over economic policies are. You can watch, in Bair's account, the way that Democrats who were sharply critical of the Bush administration handling of the financial crisis, including his appointed regulators, swung behind the exact same regulators and essentially the same policies when Obama appointed Geithner to head Treasury. Democrats are traditionally the party favoring stronger regulation, but that's less important than tribal affiliation. The change is sharp enough that at a few points I was caught by surprise at the political affiliation of a member of Congress who was supporting or opposing one of Bair's positions.

As infuriating as this book is in places, it is a strong reminder that there are conservatives with whom I can find common cause despite being on the hard left of US economic politics. Those tend to be the people who believe in the power of institutions, consistent principles, and repeated and efficient execution of processes developed through hard-fought political compromise. I think Bair and I would agree that it's very dangerous to start making up policies on the spot to deal with the crisis du jour. Corruption can more easily enter the system, and very bad decisions are made. This is a failure on both the left and the right. I suspect Bair would turn to a principle of smaller government far more than I would, but we both believe in better government and clear, principled regulation, and on that point we could easily find workable compromises.

You should not read this as your first in-depth look at the US financial crisis. For that, I still recommend McLean & Nocera's All the Devils are Here. But this is a good third or fourth book on the topic, and a deep look at the internal politics around TARP. If that interests you, recommended.

Rating: 8 out of 10

,

Planet DebianJonathan McDowell: Actually switching something with the SonOff

Getting a working MQTT temperature monitoring setup is neat, but not really what we think of when someone talks about home automation. For that we need some element of control. There are various intelligent light bulb systems out there that are obvious candidates, but I decided I wanted the more simple approach of switching on and off an existing lamp. I ended up buying a pair of Sonoff Basic devices; I’d rather not get into trying to safely switch mains voltages myself. As well as being cheap the Sonoff is based upon an ESP8266, which I already had some experience in hacking around with (I have a long running project to build a clock I’ll eventually finish and post about). Even better, the Sonoff-Tasmota project exists, providing an alternative firmware that has some support for MQTT/TLS. Perfect for my needs!

There’s an experimental OTA upgrade approach to getting a new firmware on the Sonoff, but I went the traditional route of soldering a serial header onto the board and flashing using esptool. Additionally none of the precompiled images have MQTT/TLS enabled, so I needed to build the image myself. Both of these turned out to be the right move, because using the latest release (v5.13.1 at the time) I hit problems with the device rebooting as soon as it got connected to the MQTT broker. The serial console allowed me to see the reboot messages, and as I’d built the image myself it was easy to tweak things in the hope of improving matters. It seems the problem is related to the memory consumption that enabling TLS requires. I went back a few releases until I hit on one that works, with everything else disabled. I also had to nail the Espressif Arduino library version to an earlier one to get a reliable wifi connection - using the latest worked fine when the device was powered via USB from my laptop, but not once I hooked it up to the mains.

Once the image is installed on the device (just the normal ESP8266 esptool write_flash 0 sonoff-image.bin approach), start mosquitto_sub up somewhere. Plug the Sonoff in (you CANNOT have the Sonoff plugged into the mains while connected to the serial console, because it’s not fully isolated), and you should see something like the following:

$ mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo
tele/sonoff/LWT Online
cmnd/sonoff/POWER (null)
tele/sonoff/INFO1 {"Module":"Sonoff Basic","Version":"5.10.0","FallbackTopic":"DVES_123456","GroupTopic":"sonoffs"}
tele/sonoff/INFO3 {"RestartReason":"Power on"}
stat/sonoff/RESULT {"POWER":"OFF"}
stat/sonoff/POWER OFF
tele/sonoff/STATE {"Time":"2018-05-25T10:09:06","Uptime":0,"Vcc":3.176,"POWER":"OFF","Wifi":{"AP":1,"SSId":"My SSID Is Here","RSSI":100,"APMac":"AA:BB:CC:12:34:56"}}
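The STATE telemetry is plain JSON, so it's easy to pick out fields like the power state and wifi signal programmatically. A minimal sketch in Python (the payload below is copied from the log above; `parse_state` is a hypothetical helper of mine, not part of Tasmota):

```python
import json

def parse_state(payload: str) -> dict:
    """Pull the interesting fields out of a Tasmota tele/.../STATE payload."""
    state = json.loads(payload)
    return {
        "power": state["POWER"],        # "ON" or "OFF"
        "vcc": state["Vcc"],            # supply voltage reading
        "rssi": state["Wifi"]["RSSI"],  # wifi signal quality (0-100)
    }

payload = ('{"Time":"2018-05-25T10:09:06","Uptime":0,"Vcc":3.176,'
           '"POWER":"OFF","Wifi":{"AP":1,"SSId":"My SSID Is Here",'
           '"RSSI":100,"APMac":"AA:BB:CC:12:34:56"}}')
print(parse_state(payload))  # {'power': 'OFF', 'vcc': 3.176, 'rssi': 100}
```

The same approach works for the INFO and RESULT messages, which are also JSON.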

Each of the Sonoff devices will want a different topic rather than the generic ‘sonoff’, and this can be set via MQTT:

$ mosquitto_pub -h mqtt.o362.us -p 8883 --capath /etc/ssl/certs/ -t 'cmnd/sonoff/topic' -m 'sonoff-snug' -u user1 -P foo

The device will provide details of the switchover via MQTT:

cmnd/sonoff/topic sonoff-snug
tele/sonoff/LWT (null)
stat/sonoff-snug/RESULT {"Topic":"sonoff-snug"}
tele/sonoff-snug/LWT Online
cmnd/sonoff-snug/POWER (null)
tele/sonoff-snug/INFO1 {"Module":"Sonoff Basic","Version":"5.10.0","FallbackTopic":"DVES_123456","GroupTopic":"sonoffs"}
tele/sonoff-snug/INFO3 {"RestartReason":"Software/System restart"}
stat/sonoff-snug/RESULT {"POWER":"OFF"}
stat/sonoff-snug/POWER OFF
tele/sonoff-snug/STATE {"Time":"2018-05-25T10:16:29","Uptime":0,"Vcc":3.103,"POWER":"OFF","Wifi":{"AP":1,"SSId":"My SSID Is Here","RSSI":76,"APMac":"AA:BB:CC:12:34:56"}}

Controlling the device is a matter of sending commands to the cmnd/sonoff-snug/POWER topic - 0 for off, 1 for on. All of the available commands are listed on the Sonoff-Tasmota wiki.
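The command topics follow a simple pattern, which makes scripting the switch straightforward. A minimal sketch (the actual publish call is commented out since it needs a reachable broker; the host and credentials are the illustrative ones from the examples above):

```python
def command_topic(device: str, command: str) -> str:
    """Tasmota listens for commands on cmnd/<device-topic>/<COMMAND>."""
    return f"cmnd/{device}/{command}"

def power_payload(on: bool) -> str:
    """The POWER command accepts 1/ON to switch on and 0/OFF to switch off."""
    return "1" if on else "0"

# With the paho-mqtt client installed, publishing would look something like:
# import paho.mqtt.publish as publish
# publish.single(command_topic("sonoff-snug", "POWER"), power_payload(True),
#                hostname="mqtt-host", port=8883,
#                auth={"username": "user1", "password": "foo"},
#                tls={"ca_certs": "/etc/ssl/certs/ca-certificates.crt"})

print(command_topic("sonoff-snug", "POWER"), power_payload(True))
```

Equivalently, `mosquitto_pub -t 'cmnd/sonoff-snug/POWER' -m 1` (with the usual host/TLS/auth options) does the same thing from the shell.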

At this point I have a wifi connected mains switch, controllable over MQTT via my internal MQTT broker.

(If you want to build your own Sonoff Tasmota image it’s actually not too bad; the build system is Arduino style on top of PlatformIO. That means downloading a bunch of bits before you can actually build, but the core is Python based so it can be done as a normal user within a virtualenv. Here’s what I did:

# Make a directory to work in and change to it
mkdir sonoff-ws
cd sonoff-ws
# Build a virtual Python environment and activate it
virtualenv platformio
source platformio/bin/activate
# Install PlatformIO core
pip install -U platformio
# Clone Sonoff Tasmota tree
git clone https://github.com/arendst/Sonoff-Tasmota.git
cd Sonoff-Tasmota
# Checkout known to work release
git checkout v5.10.0
# Only build the sonoff firmware, not all the language variants
sed -i 's/;env_default = sonoff$/env_default = sonoff/' platformio.ini
# Force older version of espressif to get more reliable wifi
sed -i 's/platform = espressif8266$/&@1.5.0/' platformio.ini
# Edit the configuration to taste; essentially comment out all the USE_*
# defines and enable USE_MQTT_TLS
vim sonoff/user_config.h
# Actually build. Downloads a bunch of deps the first time.
platformio run
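For reference, the user_config.h edits mentioned in the script above look roughly like this (a sketch only; the exact set of USE_* defines varies between Tasmota versions, so treat the names here as examples):

```c
/* sonoff/user_config.h (excerpt, illustrative only) */

/* Enable TLS for the MQTT connection: */
#define USE_MQTT_TLS            /* talk to the broker over TLS */

/* Comment out optional feature defines to shrink the image, e.g.: */
//#define USE_DOMOTICZ          /* Domoticz integration, not needed */
//#define USE_WEBSERVER         /* embedded web server, not needed */
```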

I’ve put my Sonoff-Tasmota user_config.h up in case it’s of help when trying to get up and going. At some point I need to try the latest version and see if I can disable enough to make it happy with MQTT/TLS, but for now I have an image that does what I need.)

LongNowOverview: Earth and Civilization in Macroscope

“Once a photograph of the Earth, taken from outside, is available…a new idea as powerful as any in history will be let loose.” — Astronomer Fred Hoyle, 01948

I. “Why Do You Look In A Mirror?”

In February 01966, Stewart Brand, a month removed from launching a multimedia psychedelic festival that inaugurated the hippie counterculture, sat on the roof of his apartment in San Francisco’s North Beach, doing what he usually did when he was bored and uncertain. He took some LSD and got to scheming.

Stewart Brand and Ken Kesey, 01966. California Historical Society.

“There I sat,” Brand later recalled, “wrapped in a blanket in the chill afternoon sun, trembling with cold and inchoate emotion, gazing at the San Francisco skyline, waiting for my vision. The buildings were not parallel — because the Earth curved under them, and me, and all of us; it closed on itself. I remembered that Buckminster Fuller had been harping on this at a recent lecture — that people perceived the Earth as flat and infinite, and that that was the root of all their misbehavior. Now from my altitude of three stories and one hundred mikes, I could see that it was curved, think it, and finally feel it. But how to broadcast it?”

Scribbled in his journal entry for that day was the answer, in the form of a question: “Why haven’t we seen a photograph of the whole earth yet?”

Stewart Brand’s journal entry when he conceived of his “Why Haven’t We Seen A Photograph of the Whole Earth?” campaign. Stanford University Special Collections.

In the aftermath of World War II, the United States and the Soviet Union competed for nuclear dominion on Earth. With the 01957 launch of Sputnik, the contest expanded to space. But in the race to the moon, neither side had given much thought to the value of training their satellites’ apertures on the world left behind. Brand glimpsed the power such an image could hold.

Brand in the midst of his Whole Earth campaign, 01966.

“A photograph would do it — a color photograph from space of the earth,” Brand said. “There it would be for all to see, the earth complete, tiny, adrift, and no one would ever perceive things the same way.”

Brand mounted a spirited campaign selling buttons that posed the question “Why Haven’t We Seen A Photograph of the Whole Earth?” on college campuses across the country. He often showed up in costume, and was often chucked out by security. He sent buttons to Marshall McLuhan, Buckminster Fuller, NASA officials, and members of Congress.

According to a 01966 Village Voice article, a student at Columbia asked Brand: “What would happen if we did have a picture? Would it eliminate slums, or meanness, or anything?”

“Maybe not,” said Brand, “but it might tell us something about ourselves.”

“What?” asked the girl.

“It might tell us where we’re at,” said Brand.

“What for?” asked the girl.

“Why do you look in the mirror?” asked Brand.

“Oh,” said the girl, and bought a button.

The first color photograph of the whole earth, from ATS-3 (01967).

Brand would soon get his photo. On November 10, 01967, the NASA geostationary weather and communications satellite ATS-3 captured the first color photograph of the whole earth. Brand used a reproduction of the photo for the cover of the first Whole Earth Catalog, a countercultural bible and forerunner to the World Wide Web that Steve Jobs once called “Google in paperback form.”

But the image didn’t enter the mainstream, as the first copies of The Whole Earth Catalog seldom strayed far from the communes. (That would change by 01972, when The Last Whole Earth Catalog won a National Book Award).

The moment of revelation for a global audience came in 01968, at the conclusion of a year of violence and unrest that saw the assassinations of Martin Luther King and Robert Kennedy, the escalation of war in Vietnam, and the brutal suppression of student protests across the globe.

During the Apollo 8 lunar mission on Christmas Eve, 01968, Astronauts Frank Borman, James Lovell, and Bill Anders left Earth’s orbit for the moon, traveling further than any humans before. And then they looked back.

The first time humans saw the whole earth (01968).

Anders later said the view of a fragile earth hanging suspended in the void “caught us hardened test pilots.”

“Here we came all this way to the Moon, and yet the most significant thing we’re seeing is our own home planet, the Earth.” — Astronaut Bill Anders

The descriptions of awe, connection, and transcendence Lovell, Borman, and Anders said they felt that day when they looked back at Earth would be echoed by future astronauts.

“You develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch.’” — Astronaut Edgar Mitchell

Psychologists call this cognitive shift of awareness during spaceflight the “overview effect.”

The view of Earth for TV audiences during the Apollo 8 Christmas Eve broadcast.

The Apollo 8 astronauts reached for their cameras and started snapping photos. Later that day, in what was, at that time, the most watched television broadcast in history, the astronauts read from the Book of Genesis as the cameras showed a grainy, black and white image of the Earth.

When the astronauts returned to Earth three days later, they brought with them the boon of their new whole earth perspective in the form of a photograph. Earthrise captured what the grainy television cameras could not.

“Earthrise, Seen For The First Time By Human Eyes” (01968). NASA.

“Up there, it’s a black-and-white world,” James Lovell later recalled. “There’s no color. In the whole universe, wherever we looked, the only bit of color was back on Earth…It was the most beautiful thing there was to see in all the heavens.”

Earthrise and its companion Blue Marble (01972) are among the most widely disseminated images in human history. By approximating the overview effect for the earthbound, the photos helped launch the modern environmental movement and reframed how we think about our relationship to the planet.

Blue Marble (01972).

“The sight of the whole Earth, small, alive, and alone, caused scientific and philosophical thought to shift away from the assumption that the Earth was a fixed environment, unalterably given to humankind, and towards a model of the Earth as an evolving environment, conditioned by life and alterable by human activity,” writes historian Robert Poole.² “It was the defining moment of the twentieth century.”

Be that as it may, historian Benjamin Lazier argues that by the twenty-first century, Earthrise and Blue Marble became victims of their own success.

“Views of Earth are now so ubiquitous as to go unremarked,” he writes. “These two images and their progeny now grace T-shirts and tote bags, cartoons and coffee cups, stamps commemorating Earth Day and posters feting the exploits of suicide bombers.” The whole earth’s very omnipresence means that “we ceased, in a fashion, to see it.”

Perhaps. Benjamin Grant, founder of the Daily Overview, believes we just need to look closer.


The Mount Whaleback Iron Ore Mine in the Pilbara region of Western Australia. 98% of the world’s mined iron ore is used to make steel and is thus a major component in the construction of buildings, automobiles, and appliances such as refrigerators. Daily Overview.

II. An Amazing Mistake

In 02013, Benjamin Grant, then a brand strategist at a buttoned-up consulting firm in New York City, found himself thinking less about marketing and more about outer space. Earlier that year, a meteor whose light shone brighter than the sun exploded into fragments across Russian skies. In September, NASA confirmed that the Voyager space probe had entered interstellar space, becoming the first human-made object to leave the solar system. And SpaceX was making strides with the rockets it hoped would one day carry humans to Mars. Grant was fascinated, and decided to start a space club at work.

“It was not a normal thing for anyone at my job to start a club of any kind,” Grant says. “But I figured I would do it and if I got fired for doing it then it probably was not the right place for me to work anyway.”

Grant started giving talks at the firm, and soon became known to his colleagues as the space guy. One introduced him to a short film by Planetary Collective called Overview.

The film explored the overview effect in meditative detail and shared astronauts’ reactions to seeing the earth from space.

“It was so powerful to me, so profound,” Grant says of watching Overview. “Maybe I was searching for something like that.”

Grant began sharing the video with everyone he knew. The overview effect was very much on his mind when he started preparing for a space club talk on GPS satellites. As he was pulling some satellite imagery for the talk, he entered “Earth” into the Apple Maps search bar, hoping it would take him to a zoomed-out view of the whole earth. What he saw instead stunned him: Earth, Texas, a small town in the northern part of the state with a population of 1,048.

The screenshot Benjamin Grant took of Earth, TX, seen from above. Benjamin Grant

Viewed from above, Earth, Texas is dappled by perfect circle after circle of fields, looking not unlike a pattern of verdant vinyl records.

“I had no idea what I was seeing at the time, but I’d studied art history and was dabbling in photography,” Grant says. “This was so stunningly beautiful and I had absolutely no idea what it was. It was this amazing mistake that set me off on this adventure.”

Grant went back to his apartment, plugged his computer into his big-screen, and showed the image to his roommates. They discovered that they were looking at pivot irrigation fields. The image inspired an evening of searching for similarly arresting satellite imagery of man-made systems. A friend from Europe showed him the shipping containers of the Port of Rotterdam, Europe’s largest sea port. Another friend who worked in energy asked if Grant had ever looked at solar concentrators before. A friend’s girlfriend who worked for an NGO at the time showed them the Dadaab Refugee Camp in Kenya.

Top left: The Port of Rotterdam. Top right: A solar farm in Seville, Spain. Bottom: The Dadaab Refugee camp, Kenya. Daily Overview

In an epiphanous moment not unlike Stewart Brand’s whole earth vision—sans LSD—Grant realized that these seldom-considered perspectives might inspire something akin to what seeing the Earth from space did for astronauts.

He launched the Daily Overview on Instagram soon after. Each day, the account shares an image of the Earth from above, called an Overview, that is optimized to capture fleeting attention on social media. Underneath each arresting image is a bite-size caption of two to three sentences describing what you’re seeing, along with geocoordinates. Daily Overview is one of the most popular blogs on social media. On Instagram, no account with an environmental focus has more followers.

“I think we’re inundated and saturated with so much information all the time now,” Grant tells me, “that if you can focus it to a few simple things it can actually stick with someone.”

The Eixample District in Barcelona, Spain. The neighborhood is characterized by its strict grid pattern, octagonal intersections, and apartments with communal courtyards. Daily Overview.

There’s a key difference between these Overviews and the whole earth photographs of yore: Blue Marble and Earthrise showed a planet seemingly unaffected by human activity. (“Raging nationalistic interests, famines, wars, pestilences don’t show from that distance,” Apollo 8 astronaut Frank Borman said). Zooming in changes that.

What one witnesses from this vantage — intricate and vibrant patterns of human activity, construction, and destruction— is still aesthetically-pleasing. But in asking how those systems came to be, and learning about their impact, Grant hopes that one gains a planetary awareness, and, ideally, a motivation to act in a way that ensures planetary flourishing.

Ipanema Beach, Rio de Janeiro, Brazil. Daily Overview.

“If people have a better understanding of what is going on they’re more likely to behave in a way that serves the planet rather than serving themselves,” Grant tells me. “These images are a way to introduce things that people would never look at. If you were like, ‘I want you to look at waste ponds from this iron ore mine,’ people would say, ‘Why would I spend my time doing that?’ But if you can do that in a beautiful way that gets people engaged and gets people to ask questions about why it looks a certain way or is a certain color that’s an opportunity to educate and potentially change behaviors.”

Left: Iron Ore Mine, Tailings Pond, Negaunee, Michigan, USA. Right: Tulip fields in Lisse, Netherlands. Daily Overview

For Grant, inspiring awe with his overviews is as important as inspiring awareness.

“The things that stimulate awe, such as exposure to perceptually vast things, that you can experience if you go to the Grand Canyon or look out your airplane window, results in fascinating behaviors,” Grant says.

A 02014 study found that exposure to perceptually vast stimuli that transcend current frames of reference (i.e., awe) resulted in increased ethical decision making, generosity, and prosocial values while leading to decreased feelings of entitlement. “Awe,” the study’s authors concluded, “may help situate individuals within broader social contexts and enhance collective concern.”

Evaporation ponds at a Potash mine, Moab, Utah. The mine produces muriate of potash, a potassium-containing salt that is a major component in fertilizers. Daily Overview.

For Grant, stimulating awe with an overview comes down to not just what the satellite image portrays, but its artfulness. Each overview is stitched together out of as many as 25 images, purposefully cropped with balance and composition in mind. Many of Grant’s overviews evoke the works of Piet Mondrian, Mark Rothko, and Ellsworth Kelly.

“My favorite art is abstract expressionist painting—very simple, almost flat two dimensional painting,” Grant says. “When you look at the world from outer space it also appears flat and two dimensional.”

A juxtaposition of an Overview with a Piet Mondrian tableau. Via Benjamin Grant.

“If I can get people to experience awe,” Grant says, “not only because they’re seeing something that’s visually vast, like seeing an entire city in one frame or an entire mine in one frame, but if also I can compose it in such a way that the artistry of the image itself gets someone to feel awe, perhaps I’m being doubly as effective at getting them to think more prosocially or think beyond themselves or think of the collective.”


The first fully illuminated snapshot of the Earth captured by the DSCOVR satellite, a joint NASA, NOAA, and U.S. Air Force mission (02015).

III. A New Icon?

Grant’s notions about his overviews as art remind me of something Stewart Brand once said when asked to elaborate on his intentions in getting NASA to release an image of the earth from space.

“I saw the whole earth as an icon, mainly,” he said, “one that did indeed replace the mushroom cloud as the main image for understanding our world.”

These days, Brand’s focus has shifted to creating a new icon for a different age, The Long Now Foundation’s Clock of the Long Now. “Ideally, it would do for thinking about time what the photographs of Earth from space have done for thinking about the environment,” Brand writes. “Such icons reframe how people think.”

Brand’s co-founder at Long Now, Brian Eno, sees both the whole earth photographs and The Clock of The Long Now as works of art that are imbued with a “mythic, metaphorical presence.”

“The 20th Century yielded its share of icons,” Eno writes. “In this, the 21st century, we may need icons more than ever before.”

Grant’s overviews present the Earth piecemeal — fragments of a larger whole delivered to a global audience on platforms engineered for ephemerality.

When asked if he thinks it’s possible for a single image of the Earth to serve as an icon for our current age like the whole Earth photos did half a century ago, Grant says he doesn’t think so.

“I don’t know if you could unify people in that way now,” he says. “It’s certainly necessary.”

Elon Musk recently sent a Tesla roadster into space.

Nonetheless, Grant believes advances in technology and the current space revolution will make the overview effect more and more a part of our lives. Geostationary satellites with better cameras are creating new Blue Marbles. Space tourism is on the rise, with trips to Mars on the horizon. The perspective the whole earth icon points to could—for those fortunate enough to “slip the surly bonds of earth”—become a direct experience.

“The overview effect is going to become more of a thing,” Grant says. “Whether or not it’s called that, or whether or not people are experiencing it first hand…if awe is generated, regardless of how it happens, it will lead to more prosocial values and more collaboration, and that will create a better planet.”


Notes

[1] The Long Now Foundation uses five digit dates to serve as a reminder of the time scale that we endeavor to work in. Since the Clock of the Long Now is meant to run well past the Gregorian year 10,000, the extra zero is to solve the deca-millennium bug which will come into effect in about 8,000 years.

[2] Poole, Robert. Earthrise: How Man First Saw the Earth (02008), Yale University Press, 198–9.

Learn More

  • Watch Benjamin Grant’s Long Now talk and conversation with Stewart Brand.
  • Read Benjamin Grant’s book about the Daily Overview project, Overview(02016).
  • Read “The Overview Effect: Awe and Self-Transcendent Experience in Space Flight” in Psychology of Consciousness: Theory, Research, and Practice (02016), Vol. 3, №1, 1–11.
  • Read “Awe, the Small Self, and Prosocial Behavior” in Journal of Personality and Social Psychology (02015), Vol. 108, №6, 883–899.
  • Read “The Man Who Changed The World, Twice” by David Brooks.
  • Watch Benjamin Grant’s 02017 TED talk.

TEDCalling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution. That can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above that screened the next day at Skoll. And who knows? Perhaps you’ll be a part of the program in 2019.

From left in the photo at the top of this post: The Bail Project‘s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institution; Caroline Harper of Sightsavers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers and pianist Scott Patterson, who gave an astonishing performance of “New Second Line”; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage).

TEDA behind-the-scenes view of TED2018, to inspire you to apply for The Audacious Project

What’s it like to stand in the wings, preparing to give your TED Talk and share a big idea to create ripples of change? This video, captured at TED2018, gives a taste of that. It follows the first speakers of The Audacious Project, TED’s new initiative to fund big ideas for global change. These speakers had a lot on the line as they gave their talks — in addition to a packed house at the conference, their talks were viewed around the world via Facebook Watch. And they all crushed it, sharing their ideas with unique power. (Want goosebumps? Watch Robin Steinberg’s talk about ending the injustice of the US bail system.)

Have an idea for the social good that feels in the same spirit? Apply to be a part of The Audacious Project next year. Applications are open now through June 10, 2018 — and the questionnaire is intentionally short to encourage you to apply. So go for it. Share your biggest, wildest vision for how to tackle one of the world’s most pressing problems.

Apply for The Audacious Project »

Sociological ImagesSummer Reading with BBQ Becky

Over the past few months, we have seen several high profile news stories about white Americans threatening to call, or calling, police on people of color for a range of everyday activities like looking out of place on a college tour, speaking Spanish with cashiers at a local restaurant, meeting at Starbucks, and removing luggage from your AirBnB. Perhaps most notably, one viral YouTube video showing a white woman calling the police on a group of Black people supposedly violating park rules by using charcoal on their grill spawned the meme “BBQ Becky.”

While the meme pokes fun at white fears of people of color, these incidents reflect bigger trends about who we think belongs in social settings and public spaces. Often, these perceptions — about who should and shouldn’t be at particular places — are rooted in race and racial difference.

There’s research on that! Beliefs about belonging particularly affect how Black people are treated in America. Sociologist Elijah Anderson has written extensively about how certain social settings are cast as a “white space” or a “black space.” Often, these labels extend to public settings, including businesses, shopping malls, and parks. Labels like these are important because they can lead to differences in how some people are treated, like the exclusion of the two Black men from Starbucks.

When addressing race and social space, social scientists often focus on residential segregation, where certain neighborhoods are predominantly composed of members of one racial group. While these dynamics have been studied since the mid 20th century, research shows that race is still an important factor in determining where people live and who their neighbors are — an effect compounded by the 2008 financial crisis and its impacts on housing.

The memes are funny, but they can also launch important conversations about core sociological trends in who gets to be in certain social spaces.

Amber Joy is a PhD student in sociology at the University of Minnesota. Her current research interests include punishment, sexual violence and the intersections of race, gender, age, and sexuality. Her work examines how state institutions construct youth victimization.

Neeraj Rajasekar is a Ph.D. student in sociology at the University of Minnesota studying race and media.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityWill the Real Joker’s Stash Come Forward?

For as long as scam artists have been around so too have opportunistic thieves who specialize in ripping off other scam artists. This is the story about a group of Pakistani Web site designers who apparently have made an impressive living impersonating some of the most popular and well known “carding” markets, or online stores that sell stolen credit cards.

An ad for new stolen cards on Joker’s Stash.

One wildly popular carding site that has been featured in-depth at KrebsOnSecurity — Joker’s Stash — brags that the millions of credit and debit card accounts for sale via their service were stolen from merchants firsthand.

That is, the people running Joker’s Stash say they are hacking merchants and directly selling card data stolen from those merchants. Joker’s Stash has been tied to several recent retail breaches, including those at Saks Fifth Avenue, Lord and Taylor, Bebe Stores, Hilton Hotels, Jason’s Deli, Whole Foods, Chipotle and Sonic. Indeed, with most of these breaches, the first signs that any of the companies were hacked came when their customers’ credit cards started showing up for sale on Joker’s Stash.

Joker’s Stash maintains a presence on several cybercrime forums, and its owners use those forum accounts to remind prospective customers that its Web site — jokerstash[dot]bazar — is the only way in to the marketplace.

The administrators constantly warn buyers to be aware there are many look-alike shops set up to steal logins to the real Joker’s Stash or to make off with any funds deposited with the impostor carding shop as a prerequisite to shopping there.

But that didn’t stop a prominent security researcher (not this author) from recently plunking down $100 in bitcoin at a site he thought was run by Joker’s Stash (jokersstash[dot]su). Instead, the proprietors of the impostor site said the minimum deposit for viewing stolen card data on the marketplace had increased to $200 in bitcoin.

The researcher, who asked not to be named, said he obliged with an additional $100 bitcoin deposit, only to find that his username and password to the card shop no longer worked. He’d been conned by scammers scamming scammers.

As it happens, prior to hearing from this researcher I’d received a mountain of research from Jett Chapman, another security researcher who swore he’d unmasked the real-world identity of the people behind the Joker’s Stash carding empire.

Chapman’s research, detailed in a 57-page report shared with KrebsOnSecurity, pivoted off of public information leading from the same jokersstash[dot]su that ripped off my researcher friend.

“I’ve gone to a few cybercrime forums where people who have used jokersstash[dot]su that were confused about who they really were,” Chapman said. “Many of them left feedback saying they’re scammers who will just ask for money to deposit on the site, and then you’ll never hear from them again.”

But the conclusion of Chapman’s report — that somehow jokersstash[dot]su was related to the real criminals running Joker’s Stash — didn’t ring completely accurate, although it was expertly documented and thoroughly researched. So with Chapman’s blessing, I shared his report with both the researcher who’d been scammed and a law enforcement source who’d been tracking Joker’s Stash.

Both confirmed my suspicions: Chapman had unearthed a vast network of sites registered and set up over several years to impersonate some of the biggest and longest-running criminal credit card theft syndicates on the Internet.

THE REAL JOKER’S STASH

The real Joker’s Stash can only be reached after installing a browser extension known as “blockchain DNS.” This component is needed to access any sites ending in the top-level domain names of .bazar, .bit (Namecoin), .coin, .lib and .emc (Emercoin).

Most Web sites use the global Domain Name System (DNS), which serves as a kind of phone book for the Internet by translating human-friendly Web site names (example.com) into numeric Internet addresses that are easier for computers to manage.

Regular DNS maps Internet addresses to domains by relying on a series of distributed, hierarchical lookups. If one server does not know how to find a domain, that server simply asks another server for the information.

Blockchain-based DNS systems also disseminate that mapping information in a distributed fashion, although via a peer-to-peer method. The entities that operate blockchain-based top level domains (e.g., .bazar) don’t answer to any one central authority — such as the Internet Corporation for Assigned Names and Numbers (ICANN), which oversees the global DNS and domain name space. This potentially makes these domains much more difficult for law enforcement agencies to take down.

This batch of some five million cards put up for sale Sept. 26, 2017 on the (real) carding site Joker’s Stash has been tied to a breach at Sonic Drive-In.

Dark Reading explains further: “When an individual registers a .bit — or another blockchain-based domain — they are able to do so in just a few steps online, and the process costs mere pennies. Domain registration is not associated with an individual’s name or address but with a unique encrypted hash of each user. This essentially creates the same anonymous system as Bitcoin for Internet infrastructure, in which users are only known through their cryptographic identity.”

And cybercriminals have taken notice. According to security firm FireEye, over the last year there’s been a surge in the number of threat actors that have started incorporating support for blockchain domains in their malware tools.

THE FAKE JOKER’S STASH

In contrast, the fake version of Joker’s Stash — jokersstash[dot]su — exists on the clear Web and displays a list of “trusted” Joker’s Stash domains that can be used to get on the impostor marketplace.  These lists are common on the login pages of carding and other cybercrime sites that tend to lose their domains frequently when Internet do-gooders report them to authorities. The daily reminder helps credit card thieves easily find the new domain should the primary domain get seized by law enforcement or the site’s domain registrar.

Jokersstash[dot]su lists mirror sites in case the generic domain becomes inaccessible.

Most of the domains in the image above are hosted on the same Internet address: 190.14.38.6 (Offshore Racks S.A. in Panama). But Chapman found that many of these domains map back to just a handful of email addresses, including domain@paysafehost.com, fkaboot@gmail.com, and zanebilly30@gmail.com.

Chapman found that adding credit cards to his shopping cart in the fake Joker’s Stash site caused those same cards to show up in his cart when he accessed his account at one of the alternative domains listed in the screenshot above, suggesting that the sites were all connected to the same back-end database.

The email address fkaboot@gmail.com is tied to the name or alias “John Kelly,” as well as 35 domains, according to DomainTools (the full list is here). Most of the sites at those domains borrow names and logos from established credit card fraud sites, including VaultMarket, T12Shop, BriansClub (which uses the head of yours truly on a moving crab to advertise its stolen cards); and the now defunct cybercrime forum Infraud.

Domaintools says the address domain@paysafehost.com also maps to 35 domains, including look-alike domains for major carding sites Bulba, GoldenDumps, ValidShop, McDucks, Mr. Bin, Popeye, and the cybercrime forum Omerta.

The address zanebilly30@gmail.com is connected to 36 domains that feature many of the same impersonated criminal brands as the first two lists.

The domain “paysafehost.com” is not responding at the moment, but until very recently it redirected to a site that tried to scam or phish customers seeking to buy stolen credit card data from VaultMarket. It looks more or less the same as the real VaultMarket’s login page, but Chapman noticed that in the bottom right corner of the screen was a Zendesk chat service soliciting customer questions.

Signing up for an account at paysafehost.com (the fake VaultMarket site) revealed a site that looked like VaultMarket but otherwise prominently displayed ads for another carding service — isellz[dot]cc (one of the domains registered to domain@paysafehost.com).

This same Zendesk chat service also was embedded in the homepage of jokersstash[dot]su.

And on isellz[dot]cc:

Notice the same Zendesk chat client in the bottom right corner of the Isellz home page.

According to Farsight Security, a company that maps historical connections between Internet addresses and domain names, several other interesting domains used paysafehost[dot]com as their DNS servers, including cvv[dot]kz (CVV stands for card verification value, and here it refers to stolen credit card numbers, names and cardholder addresses that can be used to conduct e-commerce fraud).

All three domains — cvv[dot]kz, isellz[dot]cc and paysafehost[dot]com — list in their Web site registration records the email address xperiasolution@gmail.com, the site xperiasol.com, and the name “Bashir Ahmad.”

XPERIA SOLUTIONS

Searching online for the address xperiasolution@gmail.com turns up a help wanted ad on the Qatar Living Jobs site from October 2017 for a freelance system administrator. The ad was placed by the user “junaidky“, and gives the xperiasolution@gmail.com email address for interested applicants to contact.

Chapman says at this point in his research he noticed that xperiasolution@gmail.com was also used to register the domain xperiasol.info, which for several years was hosted on the same server as a handful of other sites, such as xperiasol.com — the official Web site of Xperia Sol (this site also features a Zendesk chat client in the lower right portion of the homepage).

Xperiasol.com’s Web site says the company is a Web site development firm and domain registrar in Islamabad, Pakistan. The site’s “Meet our Team” page states the founder and CEO of the company is a guy named Muhammad Junaid. Another man, pictured as Yasir Ali, is the company’s project manager.

The top dogs at Xperia Sol.

We’ll come back to both of these individuals in a moment. Xperiasol.info also is no longer responding, but not long ago the home page showed several open file directories:

Clicking in the projects directory and drilling down into a project dated Feb. 8, 2018 turns up some kind of chatroom application in development. Recall that dozens of the fake carding domains mentioned above were registered to a “John Kelly” at fkaboot@gmail.com. Have a look at the name next to the chatroom application Web site that was archived at xperiasol.info:

Could Yasir Ali, the project manager of Xperia Sol, be the same person who registered so many fake carding domains? What else do we know about Mr. Ali? It appears he runs another business called Agile: Institute of Information Technology. Agile’s domain — aiit.com.pk — was registered to Xperia Sol Technologies in 2016 and hosted on the same server.

Who else that we know besides Mr. Ali is listed on Agile’s “Meet the Team” page? Why Mr. Muhammad Junaid, of course, the CEO and founder of Xperia Sol.

Notice the placeholder “lorem ipsum” content. This can be seen throughout the Web sites for Xperia Sol’s “customers.”

Chapman shared pages of documentation showing that most of the “customers testimonials” supposedly from Xperia Sol’s Web design clients appear to be half-finished sites with plenty of broken links and “lorem ipsum” placeholder content (as is the case with the aiit.com.pk Web site pictured above).

Another “valuable client” listed on Xperia Sol’s home page is Softlottery[dot]com (previously softlogin[dot]com). This site appears to be a business that sells Web site design templates, but it lists its address as Sailor suite room V124, DB 91, Someplace 71745 Earth.

Softlottery/Softlogin features a “corporate business” Web site template that includes a slogan from a major carding forum.

Among the “awesome” corporate design templates that Softlottery has for sale is one loosely based on a motto that has shown up on several carding sites: “We are those, who we are: Verified forum, verified people, serious deals.” Probably the most well-known cybercrime forum using that motto is Omerta (recall from above that the Omerta forum is another brand impersonated by this group).

Flower Land, with the Web address flowerlandllc.com is also listed as a happy Xperia Sol customer and is hosted by Xperia Sol. But most of the links on that site are dead. More importantly, the site’s content appears to have been lifted from the Web site of an actual flower care business in Michigan called myflowerland.com.

Zalmi-TV (zalmi.tv) is supposedly a news media partner of Xperia Sol, but again the Xperia-hosted site is half-finished and full of “lorem ipsum” placeholder content.

THE MASTER MIND?

But what about Xperia Sol’s founder, Muhammad Junaid, you ask? Mr. Junaid is known by several aliases, including his stage name “Masoom Parinda,” a.k.a. “Master Mind.” As Chapman unearthed in his research, Junaid has starred in some B-movie action films in Pakistan, and Masoom Parinda is his character’s name.

The fan page for Masoom Parinda, the character played by Muhammad Junaid Ahmed.

Mr. Junaid also goes by the names Junaid Ahmad Khan, and Muhammad Junaid Ahmed. The latter is the one included in a flight itinerary that Junaid posted to his Facebook page in 2014.

There are also some interesting photos of his various cars — all of which have the Masoom Parinda nickname “Master Mind” written on the back window. There is also something else on each car’s rear window: A picture of a black and red scorpion.

Recall the logo that was used at the top of isellz[dot]cc, the main credit card fraud site tied to xperiasolution@gmail.com. It features a giant black and red scorpion:

The isellz Web site features a scorpion as a logo.

I reached out to Mr. Junaid/Khan via his Facebook page. Soon after that, his Facebook profile disappeared. But not before KrebsOnSecurity managed to get a copy of the page going back several years. Mr. Junaid/Khan is apparently friends with a local man named Bashir Ahmad. Recall that a “Bashir Ahmad” was the name tied to the domain registrations — cvv[dot]kz, isellz[dot]cc and paysafehost[dot]com — and to the email address xperiasolution@gmail.com.

Mr. Ahmad also has a Facebook page going back more than seven years. In one of his posts, he publishes a picture of a scorpion very similar to the one on isellz[dot]cc and on Mr. Khan’s automobiles.

A screen shot from Bashir Ahmad’s Facebook postings.

At the conclusion of his research, Chapman said he discovered one final and jarring connection between Xperia Sol and the carding site isellz[dot]cc: When isellz customers have trouble using the site, they can submit a support ticket. Where does that support ticket go? Would you believe to xperiasol@gmail.com? Click the image below to enlarge.

The support page of the carding site isellz[dot]cc points to Xperia Sol. Click to enlarge.

It could be that all of this evidence pointing back to Xperia Sol is just a coincidence, or an elaborate character assassination scheme cooked up by one of the company’s competitors. Or perhaps Mr. Junaid/Khan is simply researching a new role as a hacker in an upcoming Pakistani cinematic thriller:

Mr. Junaid/Khan, in an online promotion for a movie he stars in about crime.

In many ways, creating a network of fake carding sites is the perfect cybercrime. After all, nobody is going to call the cops on people who make a living ripping off cybercriminals. Nor will anyone help the poor sucker who gets snookered by one of these fake carding sites. Caveat Emptor!

CryptogramKidnapping Fraud

Fake kidnapping fraud:

"Most commonly we have unsolicited calls to potential victims in Australia, purporting to represent the people in authority in China and suggesting to intending victims here they have been involved in some sort of offence in China or elsewhere, for which they're being held responsible," Commander McLean said.

The scammers threaten the students with deportation from Australia or some kind of criminal punishment.

The victims are then coerced into providing their identification details or money to get out of the supposed trouble they're in.

Commander McLean said there are also cases where the student is told they have to hide in a hotel room, provide compromising photos of themselves and cut off all contact.

This simulates a kidnapping.

"So having tricked the victims in Australia into providing the photographs, and money and documents and other things, they then present the information back to the unknowing families in China to suggest that their children who are abroad are in trouble," Commander McLean said.

"So quite circular in a sense...very skilled, very cunning."

Worse Than FailureCodeSOD: Modern Art: The Funnel

They say a picture is worth a thousand words, and when it's a picture of code, you could say that it contains a thousand words, too. Especially when it's bad code.

A 35-line enum definition that channels down to a funnel shape. I apologize for not providing the code in textual form, but honestly, this needs to be seen to be believed.

Here we have a work of true art. The symmetry hearkens back to the composition of a frame of a Wes Anderson film, and the fact that this snippet starts on line 418 tells us that there's more to this story, something exotic happening just outside of frame. The artist is actively asking questions about what we know is true, with the method calls? — I think they're method calls — which take too many parameters, most of which are false. There are hints of an Inner Platform, but they're left for the viewer to discover. And holding it all together are the funnel-like lines which pull the viewer's eyes, straight through the midline, all the way down to the final DataType.STRING, which really says it all, doesn't it? DataType.STRING indeed.

If I ran an art gallery, I would hang this on a wall.

If I ran a programming team, I'd hang the developer instead.


Planet DebianTim Retout: Tokenizing IT jobs

One size does not fit all when it comes to building search applications - it is important to think about the business domain and user expectations. Here's a classic example from recruitment search (a domain which has absorbed six years of my life already...) - imagine you are a candidate searching for IT jobs on your favourite job board.

Recall how a full-text index works as implemented in Solr or Elasticsearch - the job posting documents are treated as a bag of words (i.e. the order of the words doesn't matter in the first instance). When indexing each job, the search engine tokenizes the document to get a list of which words are included. Then, for each individual word we create a list of which documents include each word.

Normally you tell the indexer to exclude so-called "stopwords" which do not provide any useful information to the searcher - e.g. "a", "is", "it", "to", "and". These terms are present in most if not all documents, so would take up a huge amount of space in your index for little benefit. The same stopwords are excluded from queries to reduce the complexity of the search problem.
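As a toy sketch of this indexing process (the documents, names, and exact stopword list here are my own examples, not drawn from Solr's internals):

```python
from collections import defaultdict

# Stopwords as listed above; note that "it" is included here, which
# sets up the problem discussed next.
STOPWORDS = {"a", "is", "it", "to", "and"}

def tokenize(text):
    """Lowercase a document and drop stopwords (bag of words)."""
    return [w for w in text.lower().split() if w not in STOPWORDS]

def build_index(docs):
    """Invert the documents: map each term to the ids of documents containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in tokenize(text):
            index[term].add(doc_id)
    return index

docs = {
    1: "Senior IT support engineer",
    2: "Marketing manager and team lead",
}
index = build_index(docs)
print(sorted(index))    # stopwords never make it into the index
print(index.get("it"))  # None -- "IT" was lowercased and then discarded
```

With "it" treated as a stopword, the lowercased token from "IT" never reaches the index at all.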

However, look at the word "it". It matches the term "IT" case-insensitively - and it's quite common for candidates to use lowercase when entering queries. So we want the query [it] to return jobs containing "IT" - this means "it" cannot be a stopword for queries!

To solve this in Solr, we end up doing something much more complicated:

  1. First, "it" is not included in our stopwords list.
  2. At index time, the term "IT" is mapped to "informationtechnology", case-sensitively. (I believe this is so that phrase matches might work? You can ensure that the phrase "Information Technology" maps to the same token.)
  3. At query time, the term "it" and similar is mapped to the same token.

To implement this in Solr, use a separate analyzer for index/query time on the field, pointing at different synonym files.
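A sketch of what this might look like in the schema (the field name, file names, and exact filter chain are my guesses, not taken from any particular deployment):

```xml
<!-- Hypothetical fieldType: separate index/query analyzers with different synonym files.
     index_synonyms.txt would contain:  IT => informationtechnology
     query_synonyms.txt would contain:  it => informationtechnology -->
<fieldType name="text_jobs" class="solr.TextField" positionIncrementGap="100">
  <analyzer type="index">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <!-- the case-sensitive mapping must run before lowercasing -->
    <filter class="solr.SynonymGraphFilterFactory" synonyms="index_synonyms.txt" ignoreCase="false"/>
    <filter class="solr.FlattenGraphFilterFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="solr.StandardTokenizerFactory"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.SynonymGraphFilterFactory" synonyms="query_synonyms.txt" ignoreCase="true"/>
    <filter class="solr.StopFilterFactory" words="stopwords.txt" ignoreCase="true"/>
  </analyzer>
</fieldType>
```

As in step 1, "it" is left out of stopwords.txt, so both "IT" at index time and "it" at query time end up as the same informationtechnology token.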

While the implementation is quite ugly, the principle is simple: the recruiter and the candidate intended different things when writing the job posting versus the query, and we need to handle each according to the intention of the author. For a different application that had nothing to do with IT, you could safely ignore the word "it".

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #161

Here’s what happened in the Reproducible Builds effort between Sunday May 20 and Saturday May 26 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

Version 95 was uploaded to unstable by Mattia Rizzolo. It includes contributions already covered by posts in previous weeks as well as new ones from:

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianMichal Čihař: Improved Docker container for Weblate

The Docker container for Weblate got several improvements in the past few days, and if you're using it, it might be worth reviewing your setup.

It has been upgraded to Python 3 and Django 2. This should cause no problems as Weblate itself has supported both for quite some time, but if you were extending Weblate somehow, you might have to update these extensions to make them compatible.

The default cache backend is now Redis. It will be required in the future for some features, so you will have to switch at some point anyway. The memcached support is still there in case you want to stick with your current setup.
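For example, a compose file pairing Weblate with Redis might look roughly like this (the service names, image tags, and environment variable are illustrative guesses; check the weblate-docker documentation for the exact settings):

```yaml
version: '3'
services:
  cache:
    image: redis:4-alpine
    restart: always
  weblate:
    image: weblate/weblate
    depends_on:
      - cache
    environment:
      # Assumed variable name; see the weblate-docker docs for the
      # exact way to point Weblate at the Redis service.
      REDIS_HOST: cache
```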

Cron jobs have been integrated into the main container, so you no longer need to trigger them externally. This saves quite some pain with offloaded indexing and other features which rely on regular execution.

Another important change is in logging - all logs now go to the standard output, so you will get them with docker-compose logs and other Docker management commands. This will make debugging easier.

Filed under: Debian English SUSE Weblate

Planet DebianClint Adams: Guidance counselor

“We will have to leave this planet,” he said, according to Geek Wire. “We’re going to leave it, and it’s going to make this planet better.”

“I wonder who ‘we’ is,” she said, “but I have no doubt it will make this planet better.”

Posted on 2018-05-29
Tags: umismu

,

Krebs on SecurityFBI: Kindly Reboot Your Router Now, Please

The Federal Bureau of Investigation (FBI) is warning that a new malware threat has rapidly infected more than a half-million consumer devices. To help arrest the spread of the malware, the FBI and security firms are urging home Internet users to reboot routers and network-attached storage devices made by a range of technology manufacturers.

The growing menace — dubbed VPNFilter — targets Linksys, MikroTik, NETGEAR and TP-Link networking equipment in the small and home office space, as well as QNAP network-attached storage (NAS) devices, according to researchers at Cisco.

Experts are still trying to learn all that VPNFilter is built to do, but for now they know it can do two things well: Steal Web site credentials; and issue a self-destruct command, effectively rendering infected devices inoperable for most consumers.

Cisco researchers said they’re not yet sure how these 500,000 devices were infected with VPNFilter, but that most of the targeted devices have known public exploits or default credentials that make compromising them relatively straightforward.

“All of this has contributed to the quiet growth of this threat since at least 2016,” the company wrote on its Talos Intelligence blog.

The Justice Department said last week that VPNFilter is the handiwork of “APT28,” the security industry code name for a group of Russian state-sponsored hackers also known as “Fancy Bear” and the “Sofacy Group.” This is the same group accused of conducting election meddling attacks during the 2016 U.S. presidential race.

“Foreign cyber actors have compromised hundreds of thousands of home and office routers and other networked devices worldwide,” the FBI said in a warning posted to the Web site of the Internet Crime Complaint Center (IC3). “The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic.”

According to Cisco, here’s a list of the known affected devices:

LINKSYS DEVICES:

E1200
E2500
WRVS4400N

MIKROTIK ROUTEROS VERSIONS FOR CLOUD CORE ROUTERS:

1016
1036
1072

NETGEAR DEVICES:

DGN2200
R6400
R7000
R8000
WNR1000
WNR2000

QNAP DEVICES:

TS251
TS439 Pro

Other QNAP NAS devices running QTS software

TP-LINK DEVICES:

R600VPN

Image: Cisco

Unfortunately, there is no easy way to tell if your device is infected. If you own one of these devices and it is connected to the Internet, you should reboot (or unplug, wait a few seconds, replug) the device now. This should wipe part of the infection, if there is one. But you’re not out of the woods yet.

Cisco said part of the code used by VPNFilter can still persist until the affected device is reset to its factory-default settings. Most of these routers and NAS devices will have a tiny, recessed reset button that can only be pressed with something small and pointy, such as a paper clip. Hold this button down for at least 10 seconds (some devices require longer) with the device powered on, and that should be enough to reset the device back to its factory-default settings. In some cases, you may need to hold the tiny button down and keep it down while you plug in the power cord, and then hold it for 30 seconds.

After resetting the device, you’ll need to log in to its administrative page using a Web browser. The administrative page of most commercial routers can be accessed by typing 192.168.1.1, or 192.168.0.1 into a Web browser address bar. If neither of those work, try looking up the documentation at the router maker’s site, or checking to see if the address is listed here. If you still can’t find it, open the command prompt (Start > Run/or Search for “cmd”) and then enter ipconfig. The address you need should be next to Default Gateway under your Local Area Connection.

Once you’re there, make sure you’ve changed the factory-default password that allows you to log in to the device (pick something strong that you can remember).

You’ll also want to make sure your device has the latest firmware updates. Most router Web interfaces have a link or button you click to check for newer device firmware. If there are any updates available, install those before doing anything else.

If you’ve reset the router’s settings, you’ll also want to encrypt your connection if you’re using a wireless router (one that broadcasts your modem’s Internet connection so that it can be accessed via wireless devices, like tablets and smart phones). WPA2 is the strongest encryption technology available in most modern routers, followed by WPA and WEP (the latter is fairly trivial to crack with open source tools, so don’t use it unless it’s your only option).

But even users who have a strong router password and have protected their wireless Internet connection with a strong WPA2 passphrase may have the security of their routers undermined by security flaws built into these routers. At issue is a technology called “Wi-Fi Protected Setup” (WPS) that ships with many routers marketed to consumers and small businesses. According to the Wi-Fi Alliance, an industry group, WPS is “designed to ease the task of setting up and configuring security on wireless local area networks. WPS enables typical users who possess little understanding of traditional Wi-Fi configuration and security settings to automatically configure new wireless networks, add new devices and enable security.”

However, WPS also may expose routers to easy compromise. Read more about this vulnerability here. If your router is among those listed as using WPS, see if you can disable WPS from the router’s administration page. If you’re not sure whether it can be, or if you’d like to see whether your router maker has shipped an update to fix the WPS problem on their hardware, check this spreadsheet.

Turning off any remote administration features that may be turned on by default is always a good idea, as is disabling Universal Plug and Play (UPnP), which can easily poke holes in your firewall without your knowing it. However, Cisco researchers say there is no indication that VPNFilter uses UPnP.

For more tips on how to live with your various Internet of Things (IoT) devices without becoming a nuisance to yourself or the Internet at large, please see Some Basic Rules for Securing Your IoT Stuff.

Sky CroeserICA18 Day 4: labour in the gig economy; resistant media; feminist peer review; love, sex, and friendship; illiberal democracy in Eastern and Central Europe

Voices for Social Justice in the Gig Economy: Where Labor, Policy, Technology, and Activism Converge
Voices for Social Justice in the Gig Economy, Michelle Rodino-Colocino.
This research discusses the App-Based Driver Association, looking specifically at Seattle. There’s no “there” for gig economy work: previous spaces of organising, such as the shop floor, aren’t available. One space is a parking lot, where drivers sit waiting for ride requests. There’s one shady tree, where people tend to converge. Another space is an Ethiopian grocery store, as many drivers are East African. The ABDA is largely funded and supported by the Teamsters. Drivers interviewed definitely understand that they’re producing for Uber, and that they’re being exploited. They spoke about the challenges of planning – they can’t go watch a movie. Above all, Uber sells drivers’ availability. One driver was told: “we can always get another Mohammed”. Drivers feel dehumanized. They’re not provided with toilets, there’s nowhere to pray. They’re also cautious about organising, as Uber is clearly anti-union.

Work in the European Gig Economy. Kaire Holts, University of Hertfordshire. This research aims to survey and measure the extent and characteristics of crowd work in Europe. Working conditions are characterised by precariousness (including frequent changes to pay levels), unpredictability, work intensity, the impact of customer ratings, abuse from customers, and poor communication with platform staff (including a lack of face-to-face contact, and no social etiquette). One driver was asked to deliver drugs to a criminal gang late at night. When she told the platform about it they said it was her responsibility to check what was in the bags. Workers face both physical risks and stresses, and issues with mental health. There are some attempts at collective representation of platform workers in Europe. In the UK, for example, there’s the Independent Workers Union of Great Britain representing Deliveroo drivers, and the United Private Hire Drivers (UPHD) representing Uber drivers.

Reimagining Work [didn’t quite catch the current title], Laura Forlano. This draws on a project with Megan Halpern, using workshops and games that helped people collaborate to imagine what work might look like in the future. One participant spoke about the importance of the shift from talking around each other to needing to actually physically move as part of the workshop process. Shifts in work are linked to reimagining the city as a (new, urban) factory, so we need to reimagine relationships between work, technology, and the city to embed social justice values into our future.

Information and the Gig Economy. Brian Dolber.
Talks about shifting from a tenure-track position to adjunct work, and then taking up work with Uber and Unite Here (campaigning against Airbnb). From 2008 to 2012, Silicon Valley received little of the broader critique addressed at capitalism more generally. Silicon Valley can be seen within Nancy Fraser’s concept of ‘progressive neoliberalism’, but we’re also seeing a shift towards an emergent neofascism. Airbnb’s valuation is greater than all the hotel chains, which is odd when we think about ‘hosts’ as small business owners. Airbnb has created online communities called ‘Airbnb citizen’ which aim to mobilise hosts to affect city policy. The narrative is very much about facilitating people staying in their homes, paying medical bills, supporting the creative industries, which Dolber argues is cultivating a petit bourgeois attitude that shifts us towards an emergent neofascism.

Power Politics of Resistant Media: Critical Voices From Margins to Center

The opening speaker (whose name I unfortunately didn’t get) discusses the ways in which pop feminism works, and the complexity of vulnerability. There’s a distorted mirroring of vulnerability between popular feminism and white misogyny.

Polemology: counterinsurgency and culture jamming, Jack Bratich.
We need a genealogy to elaborate and understand the persistence and connection of struggles across time.

Rosemary Clark-Parsons (University of Pennsylvania) will discuss de Certeau’s concept of “tactics” within the context of her ethnographic work among grassroots feminist collectives in the city of Philadelphia. She focuses on ‘girl army’, a secret Facebook group developed as a space for women and nonbinary people to share experiences. Tilly and Tarrow’s definition of contentious politics would exclude this group, which isn’t in line with women and nonbinary people’s solidarity and organising work within the group. De Certeau’s concept of tactics allows us to take the everyday seriously; can teach us about strategies; and allows explicit recognition of agency within systems of power. There are limitations, too, including issues with addressing differential access to agency, and theorizing structural change over time. The strategies/tactics binary can be reductive and reify power relations.

#HashtagActivism: Race and Gender in America’s Networked Counterpublics. Sarah J. Jackson (Northeastern University). Networked counterpublics theory is one way to understand how marginalised communities create their own public spheres. Mainstream media coverage of the public response to #myNYPD mostly treated it as ‘trolling’, or a PR disaster that could happen to anyone. In the coverage of #Ferguson, there was a flow of the narrative from ordinary people’s framing through to social movement organisations, and finally the media. #GirlsLikeUs is a useful case, because even within counterpublics, there are people at the margins, who produce their own counter-counterpublics.

Jessa Lingel (University of Pennsylvania) focused on “mainstream creep,” referring to the uneasy relationships between countercultural communities and dominant media platforms, where the former uses the latter reluctantly or in highly-limited ways. How do we construct particular bodies as vulnerable: the language of ‘marginalised people’ is important for understanding structures of power, but does it also construct people as essentially weaker?

Gendered Voices and Practices of Open Peer Review
I opened this panel by reflecting on some of the ways in which I am currently trying to understand, and reconfigure, my approaches to both mothering and academia. I’ll put up a blog post about this later.

The Fembot Collective’s Global South Initiatives. Radhika Gajjala, Bowling Green State University. Problems for women in academia in the Global South start with the much-more-oppressive system of neocolonialism. To participate in autoethnography or other feminist methodologies would be a problem because it’s devalued within universities that see it as navel-gazing. Women need to publish in top-tier journals in order to be successful (or even survive) within their academic spaces. How do we as feminist publishers work with women in the Global South to help them access the resources that their institutions value? How do we support them without asking them to do a lot of extra activist work within their institutions? We need to think about power differences within the networks of solidarity and resistance we build across borders. It’s a messy terrain. We need to work to allow women in academia in the Global South to get access to a space where they can speak (and be heard).

Voicing New Forms of Scholarly Publishing. Sarah Kember, Goldsmiths, University of London. There’s a seismic shift happening at the moment in academic publishing. Revolution and disruption are not the same thing. We need to understand this within the context of efforts to police and politicise scholarly practices: there’s no distinction between these two at the moment. We need to both uphold something (the trust in academic work) and change it (the opacity of peer review processes). We’re currently seeing a “pay to say” model of academic publishing in open access, at least in the UK. “Openness” works in different ways, with an asymmetrical structure. Goldsmiths has to be open, Google doesn’t. “Open access” publishing is often incredibly expensive, especially where academics are pushed to continue publishing with traditional academic publishers. Kember cites ADA as a big intervention in these models. The disruption of scholarly publishing models is a by-product of neoliberalism. The disruption of academia isn’t. We need to restate the university press mission, revise it, and rethink it. The policies around scholarly publishing need careful examination. The issue is not about adding ever-more OA panels, which are entrepreneurial and technicist.

Peer Review is Dead, Long Live Peer Review: Conflicts in the Field of Academic Production. Bryce Peake, University of Maryland, Baltimore County. Academics often undertake review because it gives access to particular networks. Women tend to receive much more negative feedback from review, and to engage in (be asked to do?) more peer review. There are different ways of understanding peer review: as enforcer (for example, of particular norms), networker, gatekeeper (of one particular journal), and/or mentor.

Ada and Affective Labor. Roopika Risam, Salem State University. ADA and the peer review process intervenes in scholarly systems, but is at risk particularly because of that. Risam talks about an experience drawing on theory from the margins: journal editors for a journal with a more experimental peer review process decided to shift from post-publication review to the traditional peer review process. Generosity in peer review is not the same as being ‘nice’: it’s about the level of engagement in the process. It means that the community takes seriously the project that the author is engaged in, rather than what they think the author should be doing. This means that the community has developed and perpetuated a set of norms. Even when editors are advising authors that their text is not ready for publishing, they are kind. Too often, ‘rigor’ has been set up as opposing kindness. This kind of peer review presents a challenge to the masculinist mode of academic production: it’s collectivist rather than individualist, seeing knowledge as an open system rather than a closed hierarchy. How can we look at the intersection of rigor and kindness? Scholarship is more rigorous when it makes its multiple genealogies visible, writing voices which have been made invisible back into academia.

Carol Stabile, in beginning discussion, prompted us to read Toward a Zombie Epistemology by Deanna Day, asking whether we should be considering a nonreproductive (or even antireproductive) approach to academia: one not concerned with leaving behind a specific legacy, either institutional or theoretical. Radhika’s answer was very much in line with my thinking on this: that in trying to rethink our approach not only to academia but also to mothering, she (and I) want to think of mothering not as a process of reproducing ourselves, but as a way of making space for children (and students, and colleagues) to be their own people. Thinking about the important challenges and prompts that (re)reading Revolutionary Mothering, The Argonauts, and more informal conversations with the many amazing people I know reflecting on their parenting experiences, have given me, I’d add that it’s also important to consider the ways in which feminist practices of peer review (and academia more generally), should not only not be about reproducing ourselves, but should be about allowing ourselves to be changed.

There was also some excellent discussion about the role of institutions (like the committees that evaluate promotions and tenure), and citation practices. In response to a question about how to balance attempts to create change against the requirements of tenure, Carol and Sarah spoke on the importance of joining evaluation panels, both to get a better understanding of how they work and to intervene in them. Sarah notes that when we’re forced to write and research more quickly, it can be hard to find sources to draw on beyond the standard offerings. (I’ve particularly noted this myself: after managing not to cite any men, I think, in my last publication before giving birth, my writing since returning to work has relied far more heavily on the most well-known literature.) Sarah prompts peer reviewers to actively consider the breadth of sources that research draws on.

Love, Sex, Friendship: LGBTQ Relationships and Intimacies
Lover(s), Partner(s), and Friends: Exploring Privacy Management Tactics of Consensual Non-Monogamists in Online Spaces. Jade Metzger, Wayne State University. In 1986 a researcher surveyed around 3,000 people, and found that 15-28% of that population didn’t define themselves as monogamous; more recent research has also found that many young people don’t define themselves as strictly monogamous. Consensual non-monogamy is often stigmatised. How do we understand disclosure of consensual non-monogamy? Metzger notes that one of the main researchers in this area doesn’t engage in consensual non-monogamy herself. Metzger’s research, which included open-ended interviews and self-disclosure, found that self-disclosure varied, including ‘keeping it an open secret’, using ambiguous terms (like ‘friend’ or ‘partner…s’), or using terms open to interpretation (‘cuties’, ‘comets’, ‘cat’). Reasons cited for privacy included family disapproval, repercussions at work, harm to parental custody, and general discomfort. Privacy is often negotiated at the small-group community level: self-disclosure often implicates others. For some, social media is a risk that has to be navigated carefully: blocking family, for example, or using multiple accounts. Often, it can be hard not to be connected online: it can be painful to not be able to acknowledge people important to you online. Some sites don’t allow you to list multiple partners, embedding heteronormativity into their structure. We need to see privacy as negotiated at the community level (as opposed to individually, as many neoliberal approaches to privacy understand it). The transparency of networks on social media places risks and burdens on those wanting (or needing) to remain private.

Does Gender Matter? Exploring Friendship Patterns of LGBTQ Youth in a Gender-Neutral Environment. Traci Gillig, USC Annenberg, Leila Bighash, USC – Annenberg School for Communication and Journalism. Gender is not a binary, but we constantly encounter spaces structured by the social gender binary, and gender stereotypes. Gender is a major driver of peer relationships among youth, including LGBTQ people. This research looked at the Brave Trails LGBTQ youth camp, which is gender neutral. Gillig and Bighash found that here, where students weren’t separated out by gender, friendship groupings didn’t cluster by gender.

Hissing and Hollering: Performing Radical Queerness at Dinner. Greg Niedt, Drexel University. The word ‘radical’ is often seen as a confrontational challenge to the mainstream, which is certainly a part of it. But radical queerness can also be about more quiet, everyday moments of queerness: the queer ordinary. In discussing radical queer ‘family dinners’, there is an act of radical queerness to reconstituting family as chosen family. Radical Faeries came out of activism in the 1970s, borrowing – or appropriating – from various forms of paganism and spirituality. Harry Hay was particularly central (and some of his statements about what it means to be queer are kind of what you might expect from a relatively privileged white man). Existing research is limited, and focuses on the high ritual and performativity. Niedt focuses, instead, on weekly fa(e)mily dinners in Center City Philadelphia. The research methodology drew on Dell Hymes (1974).

Music in Queer Intimate Relationships. Marion Wasserbauer, Universiteit Antwerpen. Tia DeNora discusses music as a touchstone of social relations, but there’s a dearth of biographical analysis in sociological studies of music consumption. Wasserbauer talked about one interview in which a 44-year-old woman tracked the entanglement of her relationship with music, and how after the breakup she’d never experienced music in the same way again. Another 27-year-old woman, who mostly enjoyed classical and 1920s music, found herself almost crying at a Bryan Adams concert she attended because a woman she was in a relationship with loved him so much.

I rounded out the day at an excellent panel with Maria Bakardjieva, Jakub Macek, Alena Macková, and Monika Metykova (I think – the last two were not listed in the program), discussing attacks on media and political freedoms in the Czech Republic, Hungary, and Bulgaria. Metykova outlined the incredibly worrying range of attacks on independent press and political opposition in Hungary (some of which are outlined here), noting that these have been legal and difficult to fully track, let alone resist. Because there was a small audience (the last panel on the last day sadly often suffers), it was more of a discussion and I didn’t take notes in the panel, but I strongly encourage you to follow up the speakers’ work – and the situation in Central and Eastern Europe. It was a bit strange to me that ICA as an institution did little to address the specific situation of communications in the Czech Republic – the odd floating ‘placelessness’ of Western-centric academia (with numerous panels addressing US politics).

Cory DoctorowTalking Walkaway, anarchism, social justice and revolution with The Final Straw Radio

I recorded a great interview (MP3) about my novel Walkaway and how it fits into radical politics; a free, fair and open internet; the Nym Wars, parenting, and insurgency.

Worse Than FailureCodeSOD: Classic WTF: Quantum Computering

When does anything but [0-9A-F] equal "2222"? Well, it's a holiday in the US today, so take a look at this classic WTF where that's exactly what happens… -Remy

A little while back, I posted a function that generated random hexadecimal-like strings for a GUID-like string to identify events. At first, I thought it (and the rest of the system that Taka's company purchased) was just bad code. But now that I look at it further, I'm stunned at its unbelievable complexity. I can honestly say that I've never seen code that is actually prepared to run on a quantum computer, where binary just isn't as simple as 1's and 0's ...

Function hex2bin(hex)
  Select Case hex
    Case "0"
      hex2bin = "0000"
    Case "1"
      hex2bin = "0001"
    Case "2"
      hex2bin = "0010"
    Case "3"
      hex2bin = "0011"
    Case "4"
      hex2bin = "0100"
    Case "5"
      hex2bin = "0101"
    Case "6"
      hex2bin = "0110"
    Case "7"
      hex2bin = "0111"
    Case "8"
      hex2bin = "1000"
    Case "9"
      hex2bin = "1001"
    Case "A"
      hex2bin = "1010"
    Case "B"
      hex2bin = "1011"
    Case "C"
      hex2bin = "1100"
    Case "D"
      hex2bin = "1101"
    Case "E"
      hex2bin = "1110"
    Case "F"
      hex2bin = "1111"
    Case Else
      hex2bin = "2222"
  End Select
End Function

The library codefiles for this system have plenty of other ultra-advanced functions. We'll have to explore these another day, but I will leave you with this method of handling quantum hexadecimal ...

Function hex2dec(hex)
  Select Case hex
    Case "0"
      hex2dec = 0
    Case "1"
      hex2dec = 1
    Case "2"
      hex2dec = 2
    Case "3"
      hex2dec = 3
    Case "4"
      hex2dec = 4
    Case "5"
      hex2dec = 5
    Case "6"
      hex2dec = 6
    Case "7"
      hex2dec = 7
    Case "8"
      hex2dec = 8
    Case "9"
      hex2dec = 9
    Case "A"
      hex2dec = 10
    Case "B"
      hex2dec = 11
    Case "C"
      hex2dec = 12
    Case "D"
      hex2dec = 13
    Case "E"
      hex2dec = 14
    Case "F"
      hex2dec = 15
    Case Else
      hex2dec = -1
  End Select
End Function
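For contrast, here's how both lookups collapse once you leave the quantum realm; a minimal sketch in Python (not part of the original system, obviously):

```python
def hex2bin(h):
    # A single hex digit maps to exactly four bits; int() and format()
    # replace the sixteen-branch Select Case.
    return format(int(h, 16), "04b")

def hex2dec(h):
    # Built-in base conversion does the whole table's job.
    return int(h, 16)

print(hex2bin("A"))  # 1010
print(hex2dec("F"))  # 15
```

Reproducing the quantum fallback values ("2222" and -1) is left as an exercise; Python raises ValueError on non-hex input instead.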

Planet DebianMichal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue has grown too long and waited for more than a month, so it's time to process it and include new projects. I hope this gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

  • ASS Danmaku - firefox extension for downloading danmaku
  • KeePass DX - KeePass implementation for Android
  • Liberapay Everywhere - the official Liberapay browser extension
  • Offline QR Code Generator - browser add-on allowing you to quickly generate a QR code
  • Guake - drop-down terminal for GNOME
  • Planner - Project and Task manager designed for elementary OS
  • EasySSH - The SSH connection manager to make your life easier.
  • Misskey - A Twitter-like SNS
  • Starke Verben - Android Application to learn your strong verbs
  • FMIT - Free Music Instrument Tuner
  • Crappy Firetitle - A firefox extension allowing customisation of windows' title
  • Piano Booster - a MIDI file player that displays the musical notes AND teaches you how to play the piano

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Planet DebianRuss Allbery: Slightly excessive haul

Okay, yes, I've been buying a lot of books. I seem to be ensuring the rate of acquisition outpaces the rate of consumption even though I've gotten back into reading daily on the train. Well, also, Diane Duane had a crisis and therefore a book sale, so I picked up a whole pile of stuff that I don't have an immediate intention to read.

I still have three written but not yet posted reviews to go through, which are waiting on me finding time to do some editing, and another finished book to review. I'm finishing more non-fiction lately than fiction, possibly due to a less-than-ideal choice of the fiction book for my mood.

The problem with non-fiction is that non-fiction authors keep recommending other interesting-sounding books to read!

Ilona Andrews — On the Edge (sff)
Catherine Asaro (ed.) — Irresistible Forces (sff anthology)
Isaac Asimov (ed.) — The New Hugo Winners (sff anthology)
Fredrik Backman — Beartown (mainstream)
Steven Brust — Good Guys (sff)
Steven Brust & Skyler White — The Skill of Our Hands (sff)
Jo Clayton — Skeen's Leap (sff)
Jo Clayton — Skeen's Return (sff)
Jo Clayton — Skeen's Search (sff)
Diane Duane — So You Want to Be a Wizard (sff)
Diane Duane — Deep Wizardry (sff)
Diane Duane — High Wizardry (sff)
Diane Duane — A Wizard Abroad (sff)
Diane Duane — The Wizard's Dilemma (sff)
Diane Duane — A Wizard Alone (sff)
Diane Duane — Wizard's Holiday (sff)
Diane Duane — Wizards at War (sff)
Diane Duane — A Wizard of Mars (sff)
Diane Duane — Tale of the Five (sff)
Diane Duane — The Big Meow (sff)
Charles Duhigg — The Power of Habit (nonfiction)
Max Gladstone — Four Roads Cross (sff)
Max Gladstone — The Ruin of Angels (sff)
Alison Green — Ask a Manager (nonfiction)
Nicola Griffith — So Lucky (mainstream)
Dorothy J. Heydt — The Witch of Syracuse (sff)
N.K. Jemisin — The Awakened Kingdom (sff)
Richard Kadrey — From Myst to Riven (nonfiction)
T. Kingfisher — The Wonder Engine (sff)
Ilana C. Myer — Last Song Before Night (sff)
Cal Newport — Deep Work (nonfiction)
Cal Newport — So Good They Can't Ignore You (nonfiction)
Emilie Richards — When We Were Sisters (mainstream)
Graydon Saunders — The Human Dress (sff)
Bruce Schneier — Data and Goliath (nonfiction)
Brigid Schulte — Overwhelmed (nonfiction)
Rivers Solomon — An Unkindness of Ghosts (sff)
Douglas Stone & Sheila Heen — Thanks for the Feedback (nonfiction)
Jodi Taylor — Just One Damned Thing After Another (sff)
Catherynne M. Valente — Space Opera (sff)

Phew.

You'll notice a few in here that I've already read and reviewed.

The anthologies, the Backman, and a few others are physical books my parents were getting rid of.

So much good stuff in there I really want to read! And of course I've now started various other personal projects that don't involve spending all of my evenings and weekends reading.

,

Planet DebianDominique Dumont: Shutter, a nice Perl application, may be removed from Debian

Hello

Debian is moving away from Gnome2::VFS. This obsolete module will be removed from next release of Debian.

Unfortunately, Shutter, a very nice Gtk2 screenshot application, depends on Gnome2::VFS, which means that Shutter will be removed from Debian unless this dependency is removed from Shutter. This would be a shame, as Shutter is one of the best screenshot tools available on Linux and one of the best-looking Perl applications. And its popularity is still growing.

Shutter also provides a way to edit screenshots, for instance to mask confidential data. This graphical editor is based on Goo::Canvas which is already gone from Debian.

To be kept on Debian, Shutter must be updated:

  • to use Gnome GIO instead of Gnome2::VFS
  • to use GooCanvas2 instead of Goo::Canvas
  • maybe, to be ported to Gtk3 (that’s less urgent)

I’ve done some work to port Shutter to GIO, but I need to face reality: Maintaining cme is taking most of my free time and I don’t have the time to overhaul Shutter.

To view or clone the code, you can either:

See also the bug reports about Shutter problems on Ubuntu bug tracker

I hope this blog post will help find someone to maintain Shutter…

All the best.


Planet DebianEvgeni Golov: Building Legacy 2.0

I've recently read an article by my dear friend and colleague @liquidat about using Ansible to manage RHEL5 and promised him a nice bashing reply.

Background

Ansible, while being agent-less, is not interpreter-less and requires a working Python installation on the target machine. Up until Ansible 2.3 the minimum Python version was 2.4, which is available in EL5. Starting with Ansible 2.4 this requirement has been bumped to Python 2.6 to accommodate future compatibility with Python 3. Sadly Python 2.6 is not easily available for EL5 and people who want/need to manage such old systems with Ansible have to find a new way to do so.

First, I think it's actually not possible to effectively manage a RHEL5 system (or any other legacy/EOL system). Running ad-hoc changes in a mostly controlled manner - yes, but not fully managing them. Just imagine how much cruft might have been collected on a system that was first released in 2007 (that's as old as Debian 4.0 Etch). To properly manage a system you need to be aware of its whole lifecycle, and that's simply not the case here. But this is not the main reason I wanted to write this post.

Possible solutions

liquidat's article shows three ways to apply changes to an EL5 system, which I'd like to discuss.

Use the power of RAW

Ansible contains two modules (raw and script) that don't require Python at all and thus can be used on "any" target. While this is true, you're also losing about every nice feature and safety net that Ansible provides you with its Python-based modules. The raw and script modules are useful to bootstrap Python on a target system, but that's about it. When using these modules, Ansible becomes a glorified wrapper around scp and ssh. With almost the same benefits you could use that for-loop that has been lingering in your shell history since 1998.
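For completeness, the bootstrap pattern mentioned above usually looks roughly like this minimal sketch (the host group and the exact package name are assumptions; on EL5 you'd need a python26 package from somewhere):

```yaml
# Sketch: use raw (which needs no Python on the target) to install an
# interpreter, then run setup manually so later tasks can use regular modules.
- hosts: el5-hosts
  gather_facts: false
  tasks:
    - name: Bootstrap Python via the raw module
      raw: yum install -y python26

    - name: Gather facts now that an interpreter exists
      setup:
```

After this play, ordinary Python-based modules work again - which is exactly why raw is best reserved for bootstrapping rather than day-to-day management.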

Using Ansible for the sake of being able to say "I used Ansible"? Nope, not gonna happen.

Also, this makes all the playbooks that were written for Ansible 2.3 unusable and widens the gap between the EL5 systems and properly managed ones :(

Upgrade to a newer Python version

You can't just upgrade the system Python to a newer version in EL5; too many tools expect it to be 2.4. But you can install a second version, parallel to the current one.

There are just a few gotchas with that:

1. The easiest way to get a newer Python for EL5 is to install python26 from EPEL. But EPEL for EL5 is EOL and does not get any updates anymore.
2. Python 2.6 is also EOL itself, and I am not aware of any usable 2.7 packages for EL5.
3. While you might get Python 2.6 working, what about all the libs that you might need for the various Ansible modules? The system ones will pretty surely not work with 2.6.
4. (That's my favorite) Are you sure there are no (init) scripts that check for the existence of /usr/bin/python26 and execute their code with that instead of the system Python? Now see 3, 2 and 1 again. Initially you said "but it's only for Ansible", right?
5. Oh, and where do you get approval for such a change of production systems anyways? ;)

Also, this kinda reminds me of the "Python environment" XKCD:

XKCD: Python Environment

Use Ansible 2.3

This is probably the sanest option available. It does not require changes to your managed systems. Neither does it limit you (a lot) in what you can do in your playbooks.

If only Ansible 2.3 was still supported and getting updates…

And yet, I still think that's the sanest solution available. Just make sure you don't use any modules that communicate with the world (which includes the dig lookup!) and only use 2.3 on an as-needed basis for EL5 hosts.

Conclusion

First of all, please get rid of those EL5 systems. The Extended Life-cycle Support for them ends in 2020 and nobody even talks about support for the hardware it's running on. Document the costs and risks those systems are bringing into the environment and get the workloads migrated, please. (I wrote "please" twice in a paragraph, it must be really important.)

I called this post "Building Legacy 2.0" because I fear that's a recurring pattern we'll be seeing. On the one hand legacy systems that need to be kept alive. On the other the wish (and also pressure) to introduce automation with tools that are either not compatible with those legacy systems today or won't be tomorrow, as the tools develop much faster than the systems you control using them.

And by still forcing those tools into our legacy environments, we just add more oil to the fire. Instead of maintaining that legacy system, we now also maintain a legacy automation stack to pseudo-manage that legacy system. More legacy, yay.

Sky CroeserICA18, Day 3: activism, subalterns, more activism, post/colonial imaginations, and cultural symbols

Activism and Social Media
Mamfakinch: From Protest Slogan to Mediated Activism. Annemarie Iddins, Fairfield University. [CN: rape.]
Iddins argues that the digital must be understood as part of a network of different media – the Mamfakinch collective only makes sense as a response to the limitations of the Moroccan media (which combines strong state influence with neoliberal tendencies). Morocco’s uprising, referred to as M20, used “Mamfakinch” (no concessions) as a slogan. Mamfakinch was developed as a citizen media portal, modelled on Nawaat. M20 was largely focused on reform of the existing political system. Protests were mostly planned online. The collective moves effectively between on- and offline locations, supporting some campaigns and sparking others. Amina Filali was a 16-year-old who swallowed rat poison after marrying her rapist. Protests took place in physical space and online to change the laws, and nearly two years after Filali’s death the laws that allowed rapists to escape prosecution if they married those they’d raped were changed. Mamfakinch was closed in 2014 after a government-backed spyware attack and a loss of momentum. Founders started the Association for Digital Rights (ADN), which is still attempting to register as an organisation. What began as an attempt to establish a viable opposition in Morocco has resulted in a restructuring of the norms of how Moroccans interact with power.

The Purchase of Witnessing in Human Rights Activism. Sandra Ristovska, University of Colorado Boulder. Witnessing is often associated with notions of ‘truth-telling’: this paper maps out two different modes of witnessing. Witnessing an event: bearing witness for historical and ethical reasons. Today, we see a shift towards witnessing for a purpose. This second mode means that witnessing is very much shaped by a sense of strategic framing for a particular audience. If your end-goal is to appeal to a public audience, or a court, the imperatives are different: do you focus on a particular aesthetic, or on making sure that you get key details (such as badge numbers of police, or landmark shots to show where an event takes place)? The push towards shaping witnessing towards particular audiences and institutional contexts can constrain, or even silence, the voices of activists. Activists may feel they can’t let their own passion, or own voice, speak through as they attempt to meet institutional needs to be heard.

Citizen Media and Civic Engagement. Divya C. McMillin, University of Washington – Tacoma. This research examined the conditions that support particular forms of mobilisation and engagement on the ground: how do movements endure, and how do grassroots movements reclaim local spaces? There were two local case studies of grassroots tourism efforts which aim to preserve heritage and promote eco-friendly environments: Anthony’s Kolkata Heritage Tours, and Native Place in Bangalore. McMillin draws on Massey’s understanding of place as not already-existing, but as becoming – place is transformed by use. Indian cities are changing massively, with seven major Indian cities targeted for “megacity” or “smart city” development which makes them sites of urgent struggle for those living there. Using translation as a theoretical framework allows us to understand negotiations within the global economy: a translation of meaning through the opportunities of encounter. The way in which a space is translated into a place of consumption can also work to reclaim places in ways that the government doesn’t facilitate.

Whose Voices Matter? Digital Media Spaces and the Formation of New Publics in the Global South
What Happens When the Subaltern Speaks?: Worker’s Voice in Post-Socialist China. Bingchun Meng, London School of Economics. It is important to emphasise the class dimension of how we understand the subaltern. Chinese migrant workers can be understood as the subaltern (drawing on Sun 2014). The Hukou system divides and discriminates against the rural population. There is a concentration of symbolic resources and an exercise of epistemic violence, with the marginalisation of migrant workers within China. Migrant workers are represented as the other: the looming spectre of social slippage for the children of middle-class urban people, a force for social instability that needs to be contained. Xu Lizhi’s poetry explores the experiences of migrant workers (he committed suicide while working for Foxconn). Fan Yusu’s writing is, however, more well-known within China, and some is available in English translation. She’s in her mid-40s, from rural Hubei, and works in Beijing as a domestic helper. Her writing draws extensively on Chinese literary tradition, and demonstrates a strong egalitarian view. Responses to her writing have included an outpouring of sympathy from the urban middle-class (which positions the subaltern as disadvantaged); warnings from urban elites against mixing literary criteria with moral judgement (seeing the subaltern as uneducated); and criticism of Fan’s writing about her employer (seeing the subaltern as ungrateful). Fan Yusu’s responses to journalists are not always what they expect: for example, she refuses the valuing of intellectual over physical work.

Social Media and Censorship: the Queer Art Exhibition Case in Brazil. Michel Nicolau Netto, State University of Campinas, and Olívia Bandeira, Federal University of Rio de Janeiro. [CN: homophobia.] Physical violence cannot be understood if we don’t take into account symbolic violence. As an emblematic example, we see the murder of Marielle Franco, which can be understood as a violent response to seeing the subaltern voice start to be valued. This research looks at the Queermuseum Art Exhibition. After the exhibition opened, a man visited wearing a shirt reading “I’m a sexist, indeed”, and recorded a video calling visitors names such as “perverted” and “pedophile” – he shared this on a right-wing Facebook group (“Free Brazil Movement”). After this was further shared, the Santander bank hosting the exhibition cancelled it. Posts about the exhibition were then shared even more widely: right-wing groups were empowered by their success. Most-shared posts in Brazil are disproportionately those from the right wing. The bank’s actions can be seen as a way of supporting the extension of neoliberalism in Brazil, via the strengthening of right-wing extremism.

Sound Clouds: Listening and Citizenship in Indian Public Culture. Aswin Punathambekar, University of Michigan, Ann Arbor.
This paper examines the centrality of sound in conveying voice. Sound technologies and practices serve as a vital infrastructure for political culture. The sonic dimensions of the digital turn have received comparatively little attention. This work disagrees with Tung-Hui Hu’s claims that the prehistory of the cloud is one of silences [I may have misunderstood this], focusing on Kolaveri – a song which was widely shared and remixed. Kolaveri became a sonic text that sparked discussion of inequality, violence, and caste.

Selfies as Voice?: Digital Media, Transnational Publics and the Ironic Performance of Selves. Wendy Willems, London School of Economics and Political Science. African digital users are often seen as being on the other side of the digital divide, not contributing to digital culture. This research looks at responses to boastful selfies from a Zimbabwean businessman, Philip Chiyangwa, mostly in Shona and aimed at discussion within the Zimbabwean diaspora (rather than aimed at an external public). There’s an online archive of 3000 images – often playful and ironic selfies and videos exploring the idea of zvirikufaya (“things are fine”). Discussions between diasporic and home-based Zimbabweans played with the history of colonisation, and reinforced or subverted the idea that diasporic Zimbabweans take on demeaning work overseas (for example, a woman in Australia filming herself being served in a cafe by a white man). Willems is keen to situate discussions of the transnational within a particular historical context, and to shift from ‘flowspeak’ to thinking more about mediated encounters. Diasporas can be seen as fundamentally postcolonial, understanding shifts as being responses specifically to the impacts of colonisation (“we are here because you were there” – A. Sivanandan). How do we understand the role of digital media in transnationalising publics?

Digital Constellations: The Individuation of Digital Media and the Assemblage of Female Voices in South Korea. Jaeho Kang, SOAS, University of London. We need to go beyond the limitations of ‘network’ theory, which reduces the social world to ‘actor-constellations’. One alternative is to understand protests in terms of assemblages of social individuals: non-conscious cognitive assemblages, collective individuation, the connective action of affect, and non-representative democracy.

In the response, Nick Couldry invited us to think more about the metaphors around sound, including not only the sonic resonance, but also interference. We also need to think about the ways in which the theoretical language that we use reinforces neoliberal values, rather than subverting them.

Hashtag Activism
#BlackLivesMatter and #AliveWhileBlack: A Study of Topical Orientation of Hashtags and Message Content. Chamil Rathnayake, Middlesex University, Jenifer Sunrise Winter, University of Hawaii at Manoa, and Wayne Buente, University of Hawaii at Manoa. The use of hashtags can be seen within the context of collective coping, which can increase resiliency (while not necessarily leading to political change).

The Voices of #MeToo: From Grassroots Activism to a Viral Roar. Carly Michele Gieseler. Tarana Burke’s original goals for the #metoo mission can be seen as largely silenced (or pushed aside) as the roar grew around the hashtag, echoing broader patterns in white feminism. Outrage is selectively deployed – the wall between white women and Black women within feminism isn’t new, but perhaps the digital space can do something to change it. We need to think about the ways in which white feminisms within academia have ignored or appropriated the work of women of colour. Patricia Hill Collins talks about the painstaking process of collecting ideas and experiences of thrown-away Black women, even when these women started the dialogue.

Voice, Domestic Violence, and Digital Activism: Examining Contradictions in Hashtag Feminism. Jasmine Linabary, Danielle Corple, and Cheryl Cooky, Purdue University. This research looks at #WhyIStayed or #WhyILeft within a postfeminist lens, supplementing data gathered online with interviews. This research highlighted the importance of inviting voice (opening spaces for sharing experiences – but with a focus on the individual, which often led to victim-blaming); multivocality (with openings for a multitude of identities – but this also opened up the conversation for trolling and co-opting); immediacy in action (which allows responses to current events); and the creation of visibility around domestic violence (unfortunately often neglecting broader structural context). Looking at these hashtags with reference to postfeminist contradictions allows both an understanding of how they were important for those participating, but also the limitations in the focus on the individual.

Women’s Voices in the Saudi Arabian Twittersphere. Walaa Bajnaid, Einar Thorsen, and Chindu Sreedharan, Bournemouth University. This research focuses on women’s resistance to the system of male guardianship, asking how Twitter facilitated cross-gender communication during the campaign. Women’s tweets connected online and offline mobilisation, for example by posting videos of themselves walking in public unaccompanied. Protesters actively tried to keep the hashtag trending, and to gain international attention. Tweets from male opponents defended the status quo by attempting to derail the campaign, accusing the protesters of being atheists and/or foreign agents trying to destabilise Saudi Arabia. Men frequently seemed hesitant to support the campaign to end male guardianship.

The Mediated Life of Social Movements: The Case of the Women’s March. Katarzyna Elliott-Maksymowicz, Drexel University. This research draws on the literature on new social movement theory, collective identity, and visuality in social movements. Changing dynamics of hashtags and embedded images is a useful way of understanding how the movement changed over time.

Colonial Imaginations, Techno-Oligarchs, and Digital Technology
(The discussion here was interesting and important, but I struggled a bit to take good notes given the flow of the format. Please excuse the especially fragmentary notes gathered under each presenter, as that seemed easier than taking notes following the flow of discussion.)

[Correction: I initially attributed Payal Arora’s excellent prompts to discussion to Radhika Gajjala.]

Discussant: Payal Arora, Erasmus University Rotterdam
We have to remember that colonial theory is buried in different areas, including development discourse. It’s also important that ‘the margins’ aren’t always positive – the extreme right were also once on the margins (though they are being brought to the centre in many places, including Brazil). Is identity politics toxic to our cause, or should we be leveraging aspects of it? When we talk about visibility in the Global South, we largely celebrate it (“They’ve gained visibility! They’re speaking for themselves!”), without recognising the complicated nature of different identities within nations. There’s a lot of talk about data activism and data justice – we need to also look at data resistance. How do we conceptualise resistance in a broader way without moralising it? We also need to think not just about values in design, but also about who the curators of design are (and how they are embedded within particular territorial spaces and power structures). We also need to think about who is operationalising design.

Digital Neo-Colonization: A Perspective From China, Min Jiang, University of North Carolina – Charlotte.
Min Jiang talks about the challenge of working out: is China the colonised, or the coloniser? Looking at the role of large digital companies, we could see Google as colonising China…but also see Chinese companies as having largely replaced Google now, and as colonising Africa. China has its own colonial history. In China today, there has been a heavy crackdown on resistance: colleagues in China working in journalism are forbidden from even mentioning the word resistance.

Islamic State’s Digital Warfare and the Global Media System, Marwan M. Kraidy, Annenberg, University of Pennsylvania
North American white supremacists use digital technologies to mess around with spatial perceptions. Social media platforms are working in tandem with all kinds of techniques of spatial control and surveillance. There’s something about the ways in which these platforms claim innocence from the kinds of feelings that they spark, and we shouldn’t release them from responsibility. Kraidy notes the environmental, social, and economic issues tied up in the ways that data works, using data centres that need to be air-conditioned as an example.

Non-Spectacular Politics: Global Social Media and Ideological Formation, Sahana Udupa, LMU Munich
We need to understand not just intersectional oppression, but also nested inequalities, and the ways in which the digital has led to increased expressions of nationalism. A decolonial approach requires that we recognise the resurgence of previous forms of racism. Is digital media just a tool for discourses of racism and neonationalism that exist outside it? Udupa argues that we should see digital media cultures as inducing effects on users themselves. In India, Facebook is having a huge (but largely invisible) impact on politics. For example, the BJP uses data extensively in crafting particular political narratives.

Decolonial Computing and the Global Politics of Social Media platforms; Wendy Willems, London School of Economics and Political Science.
A decolonial approach means bringing back in structures, and seeing colonisation as fundamental (rather than additive) to processes of identity formation. It resists claims to speak ‘from nowhere’, and helps us to understand the global aspects of platforms. How might we understand the colonisation of digital space by platforms, including the extraction of data? These platforms are positioned as beneficial (‘connecting the unconnected’) – Willems mentions Zuckerberg visiting Africa in shorts and a t-shirt, the image of white innocence this portrays. There’s a challenge around provoking more discussion of these platforms in Africa. There’s a discussion of Internet shut-downs – the state is being seen as the enemy as it shuts down particular services, but we’re not turning the same critical eye on the platforms themselves. She also distinguished between the use of digital media in resistance, and resistance to digital media and datafication itself – there’s been less of the latter. In South Africa, there was #datamustfall in the wake of #RhodesMustFall (focusing on the costs of accessing digital media, rather than contesting platforms themselves). Operators are crucial gatekeepers in accessing the Internet – we need to look at the relationship between operators, platforms, and the state.

Media Representation of Cultural Symbols, Nationalism and Ethnic and Racial Politics
Framing the American Turban: Media Representations of Sikhs, Islamophobia, and Racialized Violence. Srividya Ramasubramanian and Angie Galal, Texas A&M University.
Sikhism is the fifth largest religion in the world. There have been several waves of Sikh immigration to the US, met with varying degrees of control. There’s a history of hate crimes against Sikhs in the US, but disaggregated data only began to be collected (by the FBI) in 2015. Anti-Sikh views, and violence, are tied to the othering and dehumanization of Muslims. There’s a long history of negative portrayals of Sikhs (tangled in with Hindus and Muslims) before 9/11. Going on from this research, it’s also important to look at how Sikhs are resisting negative media portrayals. This research located three key moments of rupture in US media portrayals: 9/11, the Wisconsin shootings, and the Muslim Ban/Trump era.

Selfie Nationalism: Twitter, Narendra Modi, and the Making of a Symbolically Hindu/Ethnic Nation. Shakuntala Rao, SUNY, Plattsburgh.
Modi‘s use of Twitter has been seen as particularly strategic, with extensive use of selfies. He always presents himself as someone who can speak to the layperson as “I”. Rao’s methods involve reading, rather than quantifying, tweets, including replies. For example, as soon as Modi starts ‘praying’ online, people upload videos of him praying. He tweets in seven languages (using local languages when he travels), but mostly a combination of Gujarati, Hindi, and English. He portrays himself as a Hindu god – some people talk about the ‘banalisation of Hindutva’. Part of this is portraying “every Indian” as special. ‘Selfie Nationalism’ has four characteristics: Modi’s personification of a symbolic self (driven by him, not others); a rejection of plural religious/cultural narratives of India; a discourse with a short shelf life, driven by optics, as in the frequent launch of new policy initiatives (which are then discarded); and less concern with media access than with media use.

Representing the Divine Cow: Indian Media, Cattle Slaughter and Identity Politics. Sudeshna Roy, Stephen F. Austin State University. What are the discursive strategies used to generate, resist, sustain, or reify discourses of Hindu nationalism surrounding the Divine Cow? Modi has had a lifelong association with the Hindu nationalist organisation RSS. He has been providing the conditions to support the growth of violent identity politics. In 2014, as Gujarat chief minister, he started attacking the beef export industry. In 2017 he instituted a ban that hit small-time Muslim and low-caste Dalit leather-workers. Some low-caste Dalit Hindus do eat beef. Roy notes that while we commonly understand culture as private, our common associations and larger context shape how we understand culture. There have been several cases of Hindu mobs murdering Muslim people for (allegedly) eating beef. Newspaper articles on these events frequently refer to the ceremonial, ritual, and religious roles of the cow, including its sanctity and ahimsa (harmlessness), and the pastoral Krishna. There is, however, no monolithic adherence to the sanctity of the cow for Hindus. There’s a forced conflation of private and public culture in the media’s coverage of the symbolic cow. Hindutva is being presented as a way of life.

Do We Truly Belong: Ethnic and Racial Politics of Post-Disaster News Coverage of Puerto Rico. Sumana Chattopadhyay, Marquette University. In surveys, only a slim majority of people in the mainland US knew that Puerto Ricans are American citizens. However, they can’t vote in national elections, because they’re not represented in the Electoral College. US mainstream media coverage of Hurricane Maria in Puerto Rico resembled its coverage of foreign countries.


Planet DebianIan Campbell: qcontrol 0.5.6

  • Fix for kernels which have permissions 0200 (write-only) on gpio export device.
  • Updates to systemd unit files.
  • Update to README for (not so) new homepage (thanks to Martin Michlmayr).
  • Add a configuration option in the examples to handle QNAP devices which lack a fan (Debian bug #712841, thanks to Martin Michlmayr for the patch and to Axel Sommerfeldt).
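For context on the first fix: sysfs attributes such as /sys/class/gpio/export are write-only on some kernels (mode 0200), so any code that tests readability before writing will wrongly refuse to use them, even though writing the pin number works fine. A minimal sketch of the idea in Python (using a stand-in temporary file rather than real sysfs; the helper name is illustrative, not qcontrol's actual code):

```python
import os
import stat
import tempfile

def can_use_export(path):
    """An export attribute only needs to be writable; on kernels that
    ship it as 0200 (write-only), a read check would wrongly fail."""
    mode = os.stat(path).st_mode
    return bool(mode & stat.S_IWUSR)  # test the owner write bit, not readability

# Demonstrate with a stand-in file rather than real sysfs:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
os.chmod(tmp.name, 0o200)          # write-only, like the affected kernels
print(can_use_export(tmp.name))    # True: writable even though unreadable
os.unlink(tmp.name)
```

The point of the fix is exactly this: check for the permission you actually need (write), instead of assuming the attribute is readable.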

Get it from git or http://www.hellion.org.uk/qcontrol/releases/0.5.6/.

The Debian package will be uploaded shortly.

,

Planet DebianAntoine Beaupré: Diversity, education, privilege and ethics in technology

This article is part of a series on KubeCon Europe 2018.

This is a rant I wrote while attending KubeCon Europe 2018. I do not know how else to frame this deep discomfort I have with the way one of the most cutting edge projects in my community is moving. I see it as a symptom of so many things wrong in society at large, and figured it was as good a way as any to open the discussion regarding how free software communities seem to naturally evolve into corporate money-making machines with questionable ethics.

A white male looking at his phone while a hair-dresser prepares him for a video shoot, with plants and audio-video equipment in the background A white man groomed by a white woman

Diversity and education

There is often a great point made of diversity at KubeCon, and that is something I truly appreciate. It's one of the places where I have seen the largest efforts towards that goal; I was impressed by the efforts done in Austin, and mentioned it in my overview of that conference back then. Yet it is still one of the less diverse places I've ever participated in: in comparison, Pycon "feels" more diverse, for example. And then there's real life out there, where women constitute basically half the population. This says something about the actual effectiveness of diversity efforts in our communities.

a large conference room full of people that mostly look like white males, with a speaker on a large stage illuminated in white 4000 white men

The truth is that contrary to programmer communities, "operations" knowledge (sysadmin, SRE, DevOps, whatever it's called these days) comes not from institutional education, but from self-learning. Even though I have years of university training, the day-to-day knowledge I need in my work as a sysadmin comes not from the university, but from late-night experiments on my personal computer network. This was first on the Macintosh, then on FreeBSD source code passed down like a magic word from an uncle, and finally through Debian, consecrated as the leftist's true computing way. Sure, my programming skills were useful there, but I acquired those before going to university: even there, teachers expected students to learn programming languages (such as C!) in-between sessions.

A bunch of white geeks hanging out with their phones next to a sign that says 'Thanks to our Diversity Scholarship Sponsors' with a bunch of corporate logos Diversity program

The real solutions to the lack of diversity in our communities come not only from a change in culture, but also from real investments in society at large. The mega-corporations subsidizing events like KubeCon make sure they get a lot of good press from those diversity programs. However, the money they spend on those is nothing compared to their tax evasion in their home states. As an example, Amazon recently put 7000 jobs on hold because of a tax the city of Seattle wanted to impose on corporations to help the homeless population. Google, Facebook, Microsoft, and Apple all evade taxes like gangsters. This is important because society changes partly through education, and that costs money. Education is how more traditional STEM sectors like engineering and medicine have changed: women, minorities, and poorer populations were finally allowed into schools after the epic social struggles of the 1970s yielded more accessible education. Just as the culture changes are seeing a backlash, the tide is turning here as well, and the trend is reversing towards more costly, less accessible education, though not everywhere. The impacts of education changes are long-lasting. By evading taxes, those companies are depriving the state of revenues that could level the playing field through affordable education.

Hell, any education in the field would help. There is basically no sysadmin education curriculum right now. Sure, you can follow Cisco CCNA or Microsoft MCSE private training. But anyone who's been seriously involved in running any computing infrastructure knows those are a scam: they will tie you down to a proprietary universe (Cisco and Microsoft, respectively), and probably to "remote hands monkey" positions rather than executive ones.

Velocity

Besides, providing an education curriculum would require the field to slow down so that knowledge would settle down and trickle into a curriculum. Configuration management is pretty old, but because the changes in tooling are fast, any curriculum built in the last decade (or even less) quickly becomes irrelevant. Puppet publishes a new release every six months; Kubernetes is barely four years old now, and is changing rapidly with a ~3-month release schedule.

Here at KubeCon, Mark Zuckerberg's mantra of "move fast and break things" is everywhere. We call it "velocity": where you are going does not matter as much as how fast you're going there. At one of the many keynotes, Abby Kearns from the Cloud Foundry Foundation boasted about how Home Depot, in trying to sell more hammers than Amazon, is now deploying code to production multiple times a day. I am still unclear as to whether this made Home Depot actually sell more hammers, or whether it's something we should even care about in the first place. Shouldn't we converge on selling fewer hammers? Making them more solid, more reliable, so that they are passed down through generations instead of breaking and having to be replaced all the time?

Slide from Kearns's keynote that shows a woman with perfect nail polish considering a selection of paint colors, with the Home Depot logo and stats about 'speed' in their deployment Home Depot ecstasy

We're solving a problem that wasn't there, in some new absurd faith that code deployments will naturally make people happier, by making sure Home Depot sells more hammers. And that's after telling us that Cloud Foundry helped the USAF save $600M by moving their databases to the cloud. No one seems bothered by the idea that the most powerful military in existence would move state secrets into a private cloud, out of the control of any government. It's the name of the game at KubeCon.

Picture of a jet fighter flying over clouds, the logo of the USAF and stats about the cost savings due to their move to the cloud USAF saves (money)

In his keynote, Alexis Richardson, CEO of Weaveworks, presented the toaster project as an example of what not to do. "He did not use any sourced components, everything was built from scratch, by hand", obviously missing the fact that toasters are deliberately not built from reusable parts, as part of the planned obsolescence design. The goal of the toaster experiment is also to show how fragile our civilization has become precisely because we depend on layers upon layers of parts. In this totalitarian view of the world, people are also "reusable" or, in this case, "disposable components". Not just the white dudes in California, but also the workers outsourced out of the USA decades ago; it depends on precious metals and the miners of Africa, on the specialized labour and intricate knowledge of factory workers in Asia, and on the flooded forests of the first nations powering this terrifying surveillance machine.

Privilege

Photo of the Toaster Project book, which shows a molten toaster that looks like it came out of a H.P. Lovecraft novel "Left to his own devices he couldn’t build a toaster. He could just about make a sandwich and that was it." -- Mostly Harmless, Douglas Adams, 1992

Staying in a hotel room for a week, all expenses paid, certainly puts things in perspective. Rarely have I felt more privileged in my entire life: someone else makes my food, makes my bed, and cleans up the toilet magically when I'm gone. For me, this is extraordinary, but for many people at KubeCon, it's routine: traveling is part of the rock star agenda of this community. People get used to being served, both directly in their day-to-day lives, but also through the complex supply chain of the modern technology that is destroying the planet.

An empty shipping container probably made of cardboard hanging over the IBM booth Nothing is like corporate nothing.

The nice little boxes and containers we call the cloud all abstract this away from us, and those dependencies are actively encouraged in the community. We like containers here and their image is ubiquitous. We acknowledge that a single person cannot run a Kube shop because the knowledge is too broad to be possibly handled by a single person. While there are interesting collaborative and social ideas in that approach, I am deeply skeptical of its impact on civilization in the long run. We already created systems so complex that we don't truly know who hacked the Trump election, or how. Many feel it was hacked, but it's really just a hunch: there were bots, maybe they were Russian, or maybe from Cambridge? The DNC emails, was that really Wikileaks? Who knows! Never mind failing closed or open: the system has become so complex that we don't even know how we fail when we do. Even those in the highest positions of power seem unable to protect themselves; politics seem to have become a game of Russian roulette: we cock the bot, roll the secret algorithm, and see what dictator will shoot out.

Ethics

All this is to build a new Skynet; not this one or that one, those already exist. I was able to pleasantly joke about the AI takeover during breakfast with a random stranger without raising as much as an eyebrow: we know it will happen, oh well. I've skipped that track in my attendance, but multiple talks at KubeCon are about AI, TensorFlow (it's open source!), self-driving cars, and removing humans from the equation as much as possible, as a general principle. Kubernetes is often shortened to "Kube", which I always think of as a reference to the Star Trek Borg almighty ship, the "cube". This might actually make sense given that Kubernetes is an open source version of Google's internal software incidentally called... Borg. To make such fleeting, tongue-in-cheek references to a totalitarian civilization is not harmless: it makes more acceptable the notion that AI domination is inescapable and that resistance truly is futile, the ultimate neo-colonial scheme.

Captain Jean-Luc Picard, played by Patrick Stewart, assimilated by the Borg as 'Locutus' "We are the Borg. Your biological and technological distinctiveness will be added to our own. Resistance is futile."

The "hackers" of our age are building this machine with conscious knowledge of the social and ethical implications of their work. At best, people admit to not knowing what they really are. In the worst-case scenario, the AI apocalypse will bring massive unemployment and a collapse of the industrial civilization, to which Silicon Valley executives are responding by buying bunkers to survive the eventual roaming gangs of revolted (and now armed) teachers and young students coming for revenge.

Only the most privileged people in society could imagine such a scenario and actually opt out of society as a whole. Even the robber barons of the 20th century knew they couldn't survive the coming revolution: Andrew Carnegie built libraries after creating the steel empire that drove much of US industrialization near the end of the century and John D. Rockefeller subsidized education, research and science. This is not because they were humanists: you do not become an oil tycoon by tending to the poor. Rockefeller said that "the growth of a large business is merely a survival of the fittest", a social darwinist approach he gladly applied to society as a whole.

But the 70's rebel beat offspring, the children of the cult of Jobs, do not seem to have the depth of analysis to understand what's coming for them. They want to "hack the system" not for everyone, but for themselves. Early on, we have learned to be selfish and self-driven: repressed as nerds and rejected in the schools, we swore vengeance on the bullies of the world, and boy are we getting our revenge. The bullied have become the bullies, and it's not small boys in schools we're bullying, it is entire states, with which companies are now negotiating as equals.

The fraud

A t-shirt from the Cloudfoundry booth that reads 'Freedom to create' ...but what are you creating exactly?

And that is the ultimate fraud: to make the world believe we are harmless little boys, so repressed that we can't communicate properly. We're so sorry we're awkward, it's because we're all somewhat on the autism spectrum. Isn't that, after all, a convenient affliction for people that would not dare to confront the oppression they are creating? It's too easy to hide behind such a real and serious condition that does affect people in our community, but also truly autistic people that simply cannot make it in the fast-moving world the magical rain man is creating. But the real con is hacking power and political control away from traditional institutions, seen as too slow-moving to really accomplish the "change" that is "needed". We are creating an inextricable technocracy that no one will understand, not even us "experts". Instead of serving the people, the machine is at the mercy of markets and powerful oligarchs.

A recurring pattern at Kubernetes conferences is the KubeCon chant where Kelsey Hightower reluctantly engages the crowd in a pep chant:

When I say 'Kube!', you say 'Con!'

'Kube!' 'Con!' 'Kube!' 'Con!' 'Kube!' 'Con!'

Cube Con indeed...

I wish I had some wise parting thoughts of where to go from here or how to change this. The tide seems so strong that all I can do is observe and tell stories. My hope is that the people that need to hear this will take it the right way, but I somehow doubt it. With chance, it might just become irrelevant and everything will fix itself, but somehow I fear things will get worse before they get better.

Krebs on SecurityWhy Is Your Location Data No Longer Private?

The past month has seen one blockbuster revelation after another about how our mobile phone and broadband providers have been leaking highly sensitive customer information, including real-time location data and customer account details. In the wake of these consumer privacy debacles, many are left wondering who’s responsible for policing these industries? How exactly did we get to this point? What prospects are there for changes to address this national privacy crisis at the legislative and regulatory levels? These are some of the questions we’ll explore in this article.

In 2015, the Federal Communications Commission under the Obama Administration reclassified broadband Internet companies as telecommunications providers, which gave the agency authority to regulate broadband providers the same way as telephone companies.

The FCC also came up with so-called “net neutrality” rules designed to prohibit Internet providers from blocking or slowing down traffic, or from offering “fast lane” access to companies willing to pay extra for certain content or for higher quality service.

In mid-2016, the FCC adopted new privacy rules for all Internet providers that would have required providers to seek opt-in permission from customers before collecting, storing, sharing and selling anything that might be considered sensitive — including Web browsing, application usage and location information, as well as financial and health data.

But the Obama administration’s new FCC privacy rules didn’t become final until December 2016, a month after the election of then President-elect Trump, who would soon be welcomed into office by a Republican-controlled House and Senate.

Congress still had 90 legislative days (when lawmakers are physically in session) to pass a resolution killing the privacy regulations, and on March 23, 2017 the Senate voted 50-48 to repeal them. Approval of the repeal in the House passed quickly thereafter, and President Trump officially signed it on April 3, 2017.

In an op-ed published in The Washington Post, Ajit Pai — a former Verizon lawyer and President Trump’s pick to lead the FCC — said “despite hyperventilating headlines, Internet service providers have never planned to sell your individual browsing history to third parties.”

FCC Commissioner Ajit Pai.

“That’s simply not how online advertising works,” Pai wrote. “And doing so would violate ISPs’ privacy promises. Second, Congress’s decision last week didn’t remove existing privacy protections; it simply cleared the way for us to work together to reinstate a rational and effective system for protecting consumer privacy.”

Sen. Bill Nelson (D-Fla.) came to a different conclusion, predicting that the repeal of the FCC privacy rules would allow broadband providers to collect and sell a “gold mine of data” about customers.

Sky CroeserICA18 Day 2: narrating voice, digital media and the body, feminist theorisation beyond western cultures, collective memory, and voices of freedom and constraint

Narrating Voice and Building Self on Digital and Social Media
‘This is Lebanon’: Narrating Migrant Labor to Resistive Public. Rayya El Zein, University of Pennsylvania. This research looks at the calling into being of an ideal political subject through social media. ‘This is Lebanon’ is a platform run by a Nepalese immigrant, Dipendra Upetry, where migrant workers have been sharing stories of labour abuses. The Lebanese system for migrant work is particularly conducive to labour abuses, as workers often have a ‘sponsor’ who they may also live with. El Zein is looking at how the voices of labourers affect the political imagination around what it means to be Lebanese. ‘This is Lebanon’ inverts a popular tourism hashtag, #thisislebanon, and when Lebanese citizens complain that “this isn’t Lebanon”, Upetry invites them to change working conditions if they want that to be true. The Kafa campaign, run by a Lebanese NGO in coordination with the International Labour Union, shared a series of ads about a young couple trying to decide what the right thing to do is regarding the person doing domestic work for them, imagining change as coming from educated middle class people who just need guidance. These are ideologically-inflected ideas of politics that position the individual as the mechanism of change.

Instagramming Persian Identity: Ritual Identity Negotiations of Iranians and Persians in/out of Iran. Samira Rajabi, University of Pennsylvania. This research came out of trying to understand why some people refer to themselves as Persians, and others as Iranians. Rajabi looked at how identity is being negotiated on social media, particularly Instagram, which led to exploring the ways in which identity is written on women’s bodies. Many women were part of the Iranian revolution, but they were the first losers after the revolution. Trauma has had a huge impact on how identity is negotiated, and tactical media can be one way to respond to the deep symbolic trauma many people from Iran have experienced.

Hijacking Religion on Facebook. Mona Abdel-Fadil, University of Oslo. This focuses on the Norwegian Cross-Case – a newsreader tried to wear a cross while reading the news, and was told she was in breach of guidelines. There’s a Facebook group: “Yes to wearing the cross whenever I choose”. This is a good case study for understanding identity politics, the role of social media users in amplifying conflicts about religion, modes of performing conflict (and understanding who they are performing to), and the politics of affect. The Facebook group is dominated by conservative Christians who are worried about losing Norway’s Christian heritage; nationalists who see Norwegian identity as inextricably tied to Christianity; humanists (predominantly women) who try to bridge differences; fortified secularists, who argue ferociously, particularly against the nationalists; and ardent atheists (predominantly men), who tend to fan the flames by abusing religious people, then step back. The group is shaped by master narratives that require engagement: that wearing the cross is an act of defiance (often against Muslim attack); that Norwegian cultural heritage is under threat (with compliance from politicians). There’s an intensification and amplification of conflict, including distorting and adding to the original conflict. We need to understand that for some people this is entertainment – an attraction to the tension in the group, and how easy it is to inflame emotions.

Discussion session: Lilie Chouliaraki, in responding, noted the role of trauma and victimhood, inviting speakers to reflect on the role of victimhood and self-victimhood in constituting subjects and identities here. Rajabi noted that trauma requires a different level of response – the stakes are different. But trauma is medicalised, we treat it as something to be dealt with individually rather than politically. Abdel-Fadil is trying to work out how to write from a place of vulnerability about this: how to take the sense of suffering expressed by these people who feel like Christianity or Norwegian identity is under threat seriously, while not necessarily accepting that they are actually victims.

Digital Media and the Body


Drawing from Abigail Selzer King

Towards a Theory of Projectilic Media: Notes on Islamic State’s Deployment of Fire. Marwan M. Kraidy, Annenberg, University of Pennsylvania. Kraidy asks why ISIS uses the symbolism of fire so frequently. There’s a distinction between digital images; operative images (for example, drone footage) that are part of an operation; projectilic images (images as weapons); and prophylactic images (which build a sense of safety and security). In ISIS’s symbolism, fire becomes a metaphor for sudden birth and sudden death, for the war machine, and for flames of justice. Speed is essential to the war machine, and to fire. A one-hour ISIS video would have about half an hour of projectilic sequences. ISIS uses a torch as a metaphor for the war machine, and the hearth as a metaphor for the utopian homeland. Fire activates new connections between words and images. Immolation confuses the customary chronology (for example, of beheading videos).

You Have Been Tagged: Incanting Names and Incarnating Bodies on Social Media. Paul Frosh, Hebrew University of Jerusalem. Tagging has become a prevalent technique for circulating images on social media, and serves various purposes for social media platforms (for example, adding more data). Naming and figuration are linked to the life of the self. Names aren’t just linguistic designators – they’re also signifiers of power. Names perform the entanglement of the social subject. Tagging requires a systematic circulation of the name (you must join the platform). Tagging interpolates us as subjects of a particular system, and revitalises the ancient magical power of action at a distance through naming. Tagging is a magical act of germination. Being tagged carries a social weight, prompting us to respond. Tagging sends social signals through others’ images, as opposed to selfies. Tagging goes against the grain of networked selfhood in digital culture, re-centring the body. Tagging is the fleshing out of informational networks.


Selfies as Testimonies of the Flesh. Lilie Chouliaraki, London School of Economics and Political Science. Aesthetic corporeality becomes important when we think about vulnerable bodies. Digital testimonies produced in conflict zones are elements of a broader landscape of violence and suffering. How does the selfie mediate the faces of refugees? What does the remediation of these faces in Western news sites tell us? Three types of images: refugees photographed while taking selfies; refugee selfies with global leaders; celebrities taking photos as if they were refugees. Chouliaraki notes that refugees taking selfies in Lesbos are celebrating not just having arrived, but also having survived the deadliest sea crossing. Refugee selfies are remediated through a series of disembodiments; their faces are, at best, an absent presence, or, at worst, fully absent.

Feminist Theorizations Beyond Western Cultures
Orientalism, Gender, and Media Representation: A Textual Analysis of Afghan Women in US, Afghan, and Chinese Media. Azeta Hatef, Pennsylvania State University and Luwei Rose Luqiu, Hong Kong Baptist University. This study looks at media representations of women in Afghanistan, thinking about the purposes these images serve in relation to the war in Afghanistan. Media coverage in China is controlled by the government, but soft news is offered a bit more leeway than hard news outlets. Nevertheless, in China mainstream media conveys the same theme: Afghan women oppressed by brown men. Both US and Chinese media portray Afghanistan as backwards, with women’s freedoms entirely limited. While violence against women in Afghanistan is worthy of attention, these media representations operate to amplify distinctions between “us” and “them”, justifying intervention (and failing to recognise the violence done by that intervention).

Production of subject of politics through social media: a practice of Iranian women activists. Gilda Seddighi, University of Bergen. This research looked at an Iranian online network of mourning mothers, drawing on Butler’s conceptualization of politicization. There was a group, “Supporters of Mourning Mothers Harstad”, composed mainly of asylum seekers, connected by Facebook and other mechanisms. Motherhood can be seen here as a source of recognition of political subjects across national borders. The notion of motherhood was expanded to include children beyond their own. Nevertheless, many women interviewed spoke of their activism as apolitical, and belonging to a particular nation-state was taken for granted.

Subject Transformations: New Media, New Feminist Discourses. Nithila Kanagasabai, Tata Institute of Social Sciences. This research attempts to look at new strands of feminism in India, particularly in smaller towns in Tamil Nadu. Work from urban areas has tended to position Women’s Studies as urban, upper-caste, middle-class, English-speaking, online, and speaking for marginalised groups. Students who Kanagasabai interviewed drew on ‘the feminist canon’ (for example, Virginia Woolf, Shulamith Firestone), but also on little magazines – small local literary magazines in regional dialects of Tamil, which previously circulated predominantly among unemployed, educated men. These magazines have shifted to allow women, Dalits, and people from scheduled tribes to express themselves. Little magazines open space for subjectivity, offering a critique of seemingly universal social norms, including casteism and gender roles. Students interviewed mention these magazines alongside sources like Jstor and Economic and Political Weekly, which speaks to the development of new methodologies. Publishing in little magazines (as opposed to mainstream feminist journals) is seen not just as convenient, but also as a political decision. Moving online did not mean that little magazines transcended the local or temporal – readership remains limited and local, but they are still important spaces. Following feminists online has led to a deeper everyday engagement with feminist literature. Lurking needs to be viewed within the framework of collaborative learning, and engagement can happen during key moments. Most students didn’t relate to the title of feminism (which they felt required a particular kind of academic competence), but instead related to women’s studies.

Collective Identities and Memories
Collective Memory Matters: Mobilizing Activist Memory in Autonomous Media. Kamilla Petrick, Sandra Jeppesen, Ellen Craig, Cassidy Croft, & Sharmeen Khan, Lakehead University. Unpaid labour within collectives means that institutional memory isn’t actively shared, but instead embodied within long-term members (who may leave).


By Király-Seth – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=42295509

Emergent Voices in Material Memories: Conceptualizing Public Voices of Segregated Memories in Detroit. Scott Mitchell, Wayne State University. An eight-mile wall remains as a visible reminder of the history of segregation in Detroit, also serving as a space of education and hope. The wall was constructed by developers to raise property values for the White area by separating it from Black communities. Grassroots efforts to add a mural have shifted its meaning.


Repertoires, Identities, and Issues of Collective Actions of the Candlelight Movements in S. Korea. Young-Gil Chae, Hankuk University of Foreign Studies, Inho Cho, Hanyang University, and Jaehee Cho, Chung-Ang University.

The Mnemonic Black Hole at Guantánamo: Memory and Counter-Memory Digital Practices on Twitter. Muira McCammon, Annenberg School for Communication at the University of Pennsylvania. Guantánamo is often left off maps: Johan Steyn has called it a “legal black hole”. McCammon tried to go to the library at Guantánamo for detainees – being unsuccessful, she tried following the Joint Task Force for Guantánamo on Twitter. McCammon asks what some of the mnemonic strategies used on the Twitter feed are. Only images of higher-up command and celebrities are posted. Traces of Guantánamo as a ‘space of exception’ have been deleted (for example, tweets noting the lack of Internet connection). The official ‘memory maker’, when posting on Twitter, can’t escape others’ memory-making (for example, responses to an official tweet about sexual harassment training at Guantánamo which pointed out the tremendous irony). When studying these issues, there are few systematic ways to track and trace digital military memory makers.

The Voice of Silence: Practices of Participation Among East Jerusalem Palestinians. Maya de Vries, Hebrew University of Jerusalem. This research focuses on participation avoidance, for example the boycotting of Facebook over the ways in which it censors Palestinian content, as an active form of resistance. de Vries notes the complexity of power relations in working with Palestinians in East Jerusalem. Interviewees choose not to engage in anything political on Facebook, knowing that it is monitored by the Israeli state. This state monitoring affects their choices around Facebook. There is also kinship monitoring – knowing that family are reading. Self-monitoring also plays a role. One interviewee notes that when she had to put her location down, there was no option for “East Jerusalem, Palestine”. These layers of monitoring mean that Palestinians negotiate their engagement with Facebook cautiously, frequently choosing non-participation.

Voices of Freedom, Voices of Constraint: Race, Citizenship and Public Memory – Then and Now
Selected Research: “The Fire Next Time in the Civil Sphere: Literary Journalism and Justice in America 1963”. Kathy Roberts Forde, Associate Professor, Journalism Department, University of Massachusetts-Amherst. After the end of slavery, new systems were put in place to control Black people, and exploit their labour. Black resistance continued, building a vibrant Black public sphere and paving the way for the civil rights movement. James Baldwin wrote that the only thing that White people had that Black people needed was power. White people should not be a model for how to live. White people destroyed, and were destroying, thousands of lives, and did not know it, and did not want to know it. Baldwin’s writing was hugely influential.

Selected Research: Newspaper Wars: Civil Rights and White Resistance in South Carolina, 1935-1965, 2017. Sid Bedingfield, Assistant Professor, Hubbard School of Journalism and Mass Communication, University of Minnesota-Twin Cities. Talks about NAACP leader Roy Wilkins’ 1964 opinion piece complaining about Black youth crime. This had parallels with segregationists’ narratives, and Wilkins had cordial communications with some segregationists. These narratives stripped away historical context and ongoing oppression when covering Black protests and expressions of anger and frustration.

Selected Research: Framing the Black Panthers: The Spectacular Rise of a Black Power Icon, 2017, 2nd edition; Rebel Media: Adventures in the History of the Black Public Sphere, In Progress; Jane Rhodes, Professor and Department Head, African American Studies, University of Illinois at Chicago. Almost everything Rhodes finds in the discourses of the 1960s is still relevant today in discourses of nationalism and race. Stuart Hall argues that each surge of social anxiety finds a temporary respite in the projection of fears onto compellingly anxiety-laden themes – like moral panics about Black people and other racialised others. US coverage of Britain in the 1960s tended to frame Britain as having issues with race, but an unwillingness to deal with it. Meanwhile, British press seemed to have almost a lurid fascination with racial violence in the US (with an undercurrent of fear for white safety in the US, and subsequently in Britain). Deep-seated anxieties around race and social change aren’t subtle. As Enoch Powell rose to prominence, media seemed to be tangled in debates about whether US or UK racism was worse.

Planet DebianSteve Kemp: On collecting metrics

Here are some brief notes about metric-collection, for my own reference.

Collecting server and service metrics is a good thing because it lets you spot degrading performance, and see the effect of any improvements you've made.

Of course it is hard to know what metrics you might need in advance, so the common approach is to measure everything, and the most common way to do that is via collectd.
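For concreteness, a minimal collectd configuration that ships everything it gathers to a graphite-compatible listener might look like the fragment below. The plugin selection and the endpoint are illustrative assumptions, not a recommendation:

```
# /etc/collectd/collectd.conf (fragment)
LoadPlugin cpu
LoadPlugin memory
LoadPlugin load
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "local">
    Host "127.0.0.1"
    Port "2003"        # plaintext carbon receiver
    Protocol "tcp"
  </Node>
</Plugin>
```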

To collect/store metrics the most common approach is to use carbon and graphite-web. I tend to avoid that as being a little more heavyweight than I'd prefer. Instead I'm all about the modern alternatives:

  • Collect metrics via go-carbon
    • This will listen on :2003 and write metrics beneath /srv/metrics
  • Export the metrics via carbonapi
    • This will talk to the go-carbon instance and export the metrics in a compatible fashion to what carbon would have done.
  • Finally you can view your metrics via grafana
    • This lets you make pretty graphs & dashboards.

Configuring all this is pretty simple. Install go-carbon, and give it a path to write data to (/srv/metrics in my world). Enable the receiver on :2003. Enable the carbonserver and make it bind to 127.0.0.1:8888.

Now configure the carbonapi with the backend of the server above:

  # Listen address, should always include hostname or ip address and a port.
  listen: "localhost:8080"

  # "http://host:port" array of instances of carbonserver stores
  # This is the *ONLY* config element in this section that MUST be specified.
  backends:
    - "http://127.0.0.1:8888"

And finally you can add 127.0.0.1:8080 as a data-source in grafana, and graph away.
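To sanity-check the pipeline you can push a datapoint by hand over carbon's plaintext protocol (the metric name test.example is made up; this assumes go-carbon is listening on :2003 as above, and the -q flag varies between netcat implementations):

```shell
# One datapoint per line: "<metric path> <value> <unix timestamp>"
printf 'test.example 42 %s\n' "$(date +%s)" | nc -q1 127.0.0.1 2003
```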

The only part that I'm disliking at the moment is the sheer size of collectd. Getting metrics from your servers (uptime, I/O performance, etc) is very useful, but it feels like installing 10Mb of software to do that is a bit excessive.

I'm sure there must be more lightweight systems out there for collecting "everything". On the other hand I've added metrics exporting to my puppet-master and similar tools very easily, so I have lightweight support for that in the tools themselves.

I have had a good look at metricsd which is exactly the kind of tool I was looking for, but I've not searched too far afield for other alternatives and choices just yet.

I should write more about application-specific metrics in the future, because I've quizzed a few people recently:

  • What's the average response-time of your application? What's the effectiveness of your (gzip) compression?
    • You don't know?
  • What was the quietest time over the past 24 hours for your server?
    • You don't know?
  • What proportion of your incoming HTTP-requests were for HTTP?
  • Do you monitor HTTP-status-codes? Can you see how many times people were served redirects to the SSL version of your site? Will using HSTS save you bandwidth, if so how much?

Fun times. (Terrible pun is terrible, but I was talking to a guy called Tim. So I could have written "Fun Tims".)

,

Planet DebianSylvain Beucler: Testing GNU FreeDink in your browser

Ever wanted to try this weird GNU FreeDink game, but never had the patience to install it?
Today, you can play it with a single click :)

Play GNU FreeDink

This is a first version that can be polished further but it works quite well.
This is the original C/C++/SDL2 code with a few tweaks, cross-compiled to WebAssembly (and an alternate version in asm.js) with emscripten.
Nothing brand new I know, but things are getting smoother, and WebAssembly is definitely a performance boost.

I like distributed and autonomous tools, so I'm generally not inclined to web-based solutions.
In this case however, this is a local version of the game. There's no server side. Savegames are in your browser local storage. Even importing D-Mods (game add-ons) is performed purely locally in the in-memory virtual FS with a custom .tar.bz2 extractor cross-compiled to WebAssembly.
And you don't have to worry about all these Store policies (and Distros policies^W^W^W.

I'm interested in feedback on how well this works for you in your browsers and devices:

I'm also interested in tips on how to place LibreJS tags - this is all free JavaScript.

Planet DebianSteinar H. Gunderson: Debian XU4 images updated

I've updated my Debian images for the ODROID XU4; the newest build was done before stretch release, and a lot of minor adjustments have happened since then.

The XU4 is fairly expensive for a single-board computer ($59 plus PSU, storage and case), and it's getting a bit long in the tooth with 32-bit and all, but it's probably still the nicest choice among the machines Hardkernel have on offer. In particular, it's fairly fast, the eMMC option is so much better than SD, and these days, you can run mainline kernel on them instead of some 3.10 build nobody cares about anymore. (Well, in Debian's kernel, you don't get HDMI, though…) It's not nearly as widely supported as the Raspberry Pi, of course, and it doesn't have the crazy huge ecosystem, but it's definitely faster. :-)

Debian doesn't officially support the XU4, but with only a small amount of non-free bits in the bootloader, you can get an almost vanilla image; Debian U-Boot (with GRUB!), Debian kernel, and a plain image that comes out of debootstrap with only some minor awkwardness for loading the device tree. My personal one runs sid, but stretch is a good start for a server and it's easy to dist-upgrade, so I haven't bothered making sid images. I probably will make buster images at some point, though.

Enjoy!

CryptogramFriday Squid Blogging: Squid Comic

It's not very good, but it has a squid in it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity and Human Behavior (SHB 2018)

I'm at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions -- all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year's program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I'll be hosting the event at Harvard.

Planet Linux AustraliaJonathan Adamczewski: Modern C++ Randomness

This thread happened…

So I did a little digging to satisfy my own curiosity about the “modern C++” version, and have learned a few things that I didn’t know previously…

(this is a manual unrolled twitter thread that starts here, with slight modifications)

Nearly all of this I gleaned from the invaluable and . Comments about implementation refer specifically to the gcc-8.1 C++ standard library, examined using Compiler Explorer and the -E command line option.

std::random_device is a platform-specific source of entropy.

std::mt19937 is a parameterized typedef of std::mersenne_twister_engine

specifically:
std::mersenne_twister_engine<uint_fast32_t, 32, 624, 397, 31, 0x9908b0df, 11, 0xffffffff, 7, 0x9d2c5680, 15, 0xefc60000, 18, 1812433253>
(What do those numbers mean? I don’t know.)

And std::uniform_int_distribution produces uniformly distributed random numbers over a specified range, from a provided generator.

The default constructor for std::random_device takes an implementation-defined argument, with a default value.

The meaning of the argument is implementation-defined – but the type is not: std::string. (I’m not sure why a dynamically modifiable string object was the right choice to be the configuration parameter for an entropy generator.)

There are out-of-line private functions for much of this implementation of std::random_device. The constructor that calls the out-of-line init function is itself inline – so the construction and destruction of the default std::string param is also generated inline.

Also, peeking inside std::random_device, there is a union with two members:

void* _M_file, which I guess would be used to store a file handle for /dev/urandom or similar.

std::mt19937 _M_mt, which is a … parameterized std::mersenne_twister_engine object.

So it seems reasonable to me that if you can’t get entropy* from outside your program, generate your own approximation. It looks like it is possible that the entropy for the std::mersenne_twister_engine will be provided by a std::mersenne_twister_engine.

Unlike std::random_device, which has its implementation out of line, std::mersenne_twister_engine‘s implementation seems to be all inline. It is unclear what benefits this brings, but it results in a few hundred additional instructions generated.

And then there’s std::uniform_int_distribution, which seems mostly unsurprising. It is again fully inline, which (from a cursory eyeballing) may allow a sufficiently insightful compiler to avoid a couple of branches and function calls.

The code that got me started on this was presented in jest – but (std::random_device + std::mt19937 + std::uniform_int_distribution) is a commonly recommended pattern for generating random numbers using these modern C++ library features.

My takeaways:
std::random_device is potentially very expensive to use – and doesn’t provide strong cross-platform guarantees about the randomness it provides. It is configured with an std::string – the meaning of which is platform dependent. I am not compelled to use this type.

std::mt19937 adds a sizeable chunk of codegen via its inline implementation – and there are better options than Mersenne Twister.

Bottom line: I’m probably going to stick with rand(), and if I need something a little fancier,  or one of the other suggestions provided as replies to the twitter thread.

Addition: the code I was able to gather, representing some relevant parts

Sky CroeserICA Day 1: Kurdish transnational media, racism online, digital labour, and public scholarship

My rough and very incomplete notes from the first day of ICA. There were a bunch of interesting points that I haven’t noted because I was distracted or tired or too busy listening, and great papers that I sadly missed. I mostly use these notes to follow up on work later, but if they’re useful to you too, that’s great!

Understanding Kurdish Media and Communications: Space, Place and Materiality
Theaters of Inhibition and Cinemas of Strategy: Censorship, Space, and Struggle at a Film Festival in Turkey. Josh Carney, American University of Beirut, spoke about Bakur (North), a film about the everyday life of PKK guerrillas. When the Turkish government banned screenings of Bakur, people met at the theatres anyway to discuss the censorship. The directors of Bakur will go on trial in a few days for ‘terrorist propaganda’. Struggles over censorship were tied to struggles over the city space of Istanbul, perhaps in response to the Turkish government’s attempts to erase ideas and spaces that it finds disagreeable. The government wanted to erase Bakur because it was a testament to the peace process, and to the government’s withdrawal from it. This censorship can be seen as an attempt to erase the promise and possibility of peace.

Cinematic Spaces of Solitude, Exile, and Resistance: Telling Kurdish Stories from Norway, Iran, and Turkey. Suncem Koçer, Kadir Has University, spoke on Kurdish filmmaking as a transnational platform for identity politics. Bahman Ghobadi talks about Kurds as a people on the move, and says that cinema as the art of movement is therefore the most suitable medium for documenting Kurdish stories.

Infrastructures, Colonialism and Struggle. Burce Celik, Loughborough University, argues that Kurdish transnational media is still embedded in historical, political, and territorial contexts. Technical and economic concerns, as well as national borders, also shape networks. State interventions can take place at multiple levels. For example, while the Turkish government may not be able to stop television transmissions from Europe, there are reports of police smashing satellite antennas in Kurdish villages. While there are no country-wide Internet shut-downs, there have been region-wide shut-downs in Kurdish provinces of Turkey. We need to consider the materiality of media infrastructures.

Questions: I asked if there were attempts to shift film screenings and other spaces that had been shut down online. Carney noted that film-makers were very resistant to doing this, as film screenings and movie festivals were seen as important. Bakur was leaked online, and the directors asked that people didn’t share or watch it. Koçer affirmed this, and said that censorship in a way also served a generative purpose for film-makers.

Racism in Digital Media Space

Racism in the Hybrid Media System: Analyzing the Finnish ‘Immigration Debate’. Gavan Titley, University of Helsinki. Mervi Pantti, U of Helsinki and Kaarina Nikunen, U of Tampere. Pantti opens by noting that even naming racism as racism is often contentious. ‘Hybra’ project – looking at understandings of racism shaped and contested in the interactive everyday cultures of digital media. This paper looks particularly at Suomi24, ‘Finland 24’, one of the largest non-English-language commenting sites online. Anti-racist activism in the 1990s helped to fix racism in the public imagination as a result of movements of people, rather than deeper structures. ‘Racism’ is used broadly in Finnish public discourse to mean ‘discrimination’ (for example, ‘obesity racism’), which removes it from its particular context. Conservatives talk about “opinion racism”: claims that journalists and others with a ‘multicultural agenda’ are intolerant of other viewpoints. Politically, it’s very difficult to mobilise in terms of racism and anti-racism because of the ways in which this language works.

More Than Meets the Eye: Understanding Networks of Images in Controversies Around Racism on Social Media. Tim Highfield, Digital Media Research Centre, Queensland University of Technology, and Ariadna Matamoros-Fernandez, Queensland University of Technology. This research, focused on everyday visual representations of racism and counter-racism practices, comes out of the wider literature on racism online that has largely focused on text. It draws on Matamoros-Fernandez’s conceptual work around platform racism. This article looks at the online responses to Adam Goodes’ war cry, many of which used images as a way to push the boundaries for racist viewpoints (often via homophobia). Indigenous social media users frequently added their own images to push back against the racism expressed against Goodes. Mainstream media, though, frequently reinforced hegemonic discourses of racism, rather than giving space to Indigenous voices. There were salient practices on Twitter that are interesting when thinking about platform racism: visual call-outs of racism, which were often a way of performing distance from Australian racism, and which had the effect of amplifying racism. Rather than performing ‘white solidarity’ by amplifying racism, it would be useful to do more to share Indigenous voices and critiques of racism, and link this particular incident to broader structures of racism in Australian society. Visual cultures are an opportunity to understand covert and everyday racism on social media platforms. Even with changes introduced by various platforms to combat racism (after user pressure), there is a lack of consistency and transparency in responses to platformed racism.

Online Hate Speech: Genealogies, Tensions and Contentions. Eugenia Siapera, Dublin City University, Paloma Viejo, Dublin City University and Elena Moreo, Dublin City University.

Theorising Online Racism: The Stream, Affect and Power Laws. Sanjay Sharma, Brunel University. Racism isn’t an individual act, it’s embedded in material techno-social relations. Ambient racism creates an atmosphere of background hostility. Microaggressions may seem isolated and minor, but they can be all-pervasive.

Working it Out: Emergent Forms of Labor in the Global Digital Economy
Nothing left to lose: bureaucrats in Googleland: Vicki Mayer, Tulane. Stories about Google’s centrality to the economy are highly mediated, even for those working within the organisation. Bureaucrats aren’t meant to sell Google, but they have been pushed to ‘samenwerking’ (planned collaboration) to ‘solve problems’ individually with little structural support. Interviewees used the word “innovative” most often to describe how workers were trying to do more varied tasks with less time and money, while also trying to publicise their achievements. New companies come in all the time saying that they’ll create thousands of jobs, but with limited real results.

Developing a Farmworker Low-Power Radio Station in Southern California. Carlos Jimenez, University of Denver. Local Indigenous workers speak Mixteco and Zapotec (sp?) (which is very different from English and Spanish), and listen to Chilena songs – no radio stations in Oxnard catered to this language or musical tastes. The Mixteco Indigena Community Organizing Project partnered with the community. When there was an application made for Radio Indígena for a relatively low-powered antenna, another station fifty miles away, KDB 93.7 FM, registered a complaint. At first Radio Indígena organisers called to ask them to remove their complaint, but they refused until they received a letter from farmworkers in the area. After a while, the radio community wanted to try shifting towards online transmissions rather than through the radio antenna. But they found that farmworkers’ typical data plans would stop them from listening in. The cost of new media technologies places a greater burden on individual listeners, rather than on the broadcaster.

Production, moderation, representation: three ways of seeing women of color labor in digital culture, Lisa Nakamura, University of Michigan. The lower you go in the chain of production, the more people who aren’t white men you see. It is useful to ask whose labour we misattribute to white men, or even algorithms, on digital platforms. US digital work has been both outsourced and insourced, including to women on reservations. Fairchild ‘invaded’ reservations, and was one of the largest employers in the Navajo Nation until resistance to firings from the American Indian Movement, and unionisation, led to them leaving. The plant there had produced “high reliability” components, which needed very low failure rates. Employing Navajo workers allowed Fairchild to pay less than the minimum wage. Workers were told that they were building parts for televisions, radios, calculators, and so on (with military applications not mentioned). In a current analogue, moderation work on sites like Facebook is outsourced, sometimes to volunteers. We might also look at the ways in which people like Alexis Ohanian (of Reddit) took credit for the work of teenager Rayouf Alhumedhi in the creation of a hijab emoji.

Riot Practices: Immaterial Labor and the Prison-Industrial Complex. Li Cornfeld, Amherst College. There’s a ‘mock prison riot’ at the former state penitentiary in Moundsville yearly, which is a combination of a trade show and a training exercise for ‘correctional officers’. This isn’t what we think of when we consider ‘tech events’, but we should take its claims to be a tech event seriously. It’s a private event, with global attendees. This is one of the ways in which the US exports its technologies of control and norms. It’s also a space to incorporate participants in the tech development process (for example, adding cords to radios for places where batteries are scarce). Technologies of control aren’t just weapons, they include phones, wristbands, and other tracking technologies – many of these are marketed as being not just for prisons, but also for other settings, such as hospitals.

Moving Broadband From Sea to Land: Internet Infrastructure and Labor in Tanzania. Lisa Parks, Massachusetts Institute of Technology. Parks wanted to understand how the internet moves from sea to land, and what kinds of digital labor exist in Tanzania to help carry out these operations. She spoke to people who are both formal and informal IT workers, often carrying out risky forms of labour to make the internet more widely available. She draws on Vicki Mayer, and on Lobato and Thomas’ The Informal Media Economy. IT ‘development’ projects often lead to unused infrastructure – technology that’s in place, but left unpowered, disconnected, in need of assembly or repair. In Bunda, there are people working in vital jobs like repairing or charging phones. The cost of charging phones is scaled by income. Mobile phone repair workers have designed their own phone which they are going to ask Foxconn to manufacture.

Public Scholars: Engaging With Mainstream Media as Activism

This was a panel discussion with Amy Adele Hasinoff, University of Colorado Denver; Charlton McIlwain, New York University; Jean Burgess, Queensland University of Technology; Victor W. Pickard, University of Pennsylvania; and Maria Repnikova, Georgia State University.
The benefits of media engagement aren’t always direct and obvious – sometimes, for example, they connect unexpected groups and help build alliances. Framing material for a public audience with interventions from editors can be useful in thinking about how we communicate our research, including to other academics outside our own disciplines. Speakers were unsure about the benefits of engaging in hostile spaces – are there useful ways to engage with right-wing media, for example?

There was a lot of interest in the potential issues with engaging with the media. People’s experiences with engaging have differed – some speakers had been criticised for engaging too much, others felt it was seen as a fundamental part of their job. However, there can be a problem keeping a balance between public scholarship (including dealing with hostile responses) and more traditional academic outputs. It’s important to discriminate between ‘high value’ engagement opportunities and junk.

University support for academics under attack can vary – sometimes they’ll provide legal support, but this isn’t necessarily reliable (or publicised). You’ll often only find out what the university responses to these issues are when a problem comes up. Many of the attacks academics face when speaking publicly aren’t necessarily overt: they might include subtle red-baiting, or questioning about how your background (for example, noting Maria Repnikova’s Russian surname) impacts on your ideas.

There were suggestions for those starting out with media engagement and not yet inundated with media requests:

  • Make sure your colleagues know that you’re interested in media engagement: they should be passing on relevant media queries;
  • Actively contact media when you have research that’s relevant and important – this might involve proposing stories to journalists/editors, or tweeting at journalists;
  • Have useful research to share (especially quantitative data).

How not to get fired? You can’t avoid making any controversial statements – if the press decide to go after you, they will. But aim to have evidence to back up your point, and ideally to have solidarity networks as well. (I’d add: maybe join your union!)

When engaging with the media, consider the formats that work for you: text, radio, or television?

Activism, Social Justice and the Role of Contemporary Scholarship
Sasha Costanza-Chock, Massachusetts Institute of Technology. Out of the Shadows, into the Streets! was the result of hands-on, participatory media processes. There isn’t a divide between scholarship and working with social justice organisations: it makes the work more accountable to the people working on the ground, and to their needs. Work with Out for Change led Costanza-Chock to shift their theoretical framework to one of transformative media: it’s about media-making as a healing and identity-forming process.

Kevin Michael Carragee, Suffolk University, began by making a distinction between activist scholarship and scholarship on activism. The former requires establishing partnerships with organisations and movements – there are more calls for this than actual examples. Carragee talked about his work with the Media Research and Action Project. One of the lessons of MRAP is that you want to try to increase the resources available to the group you’re working with. We need to recognise activists as lay scholars. Activists and scholars don’t share the same goals, discourses, and practices – we need to remember that.

Rosemary Clark-Parsons, The Annenberg School for Communication at the University of Pennsylvania. Clark-Parsons draws on feminist standpoint theory: all knowledge is contextually situated; marginalised communities are situated in ways that give them a broader view of power relations; research on those power relations should begin with and centre marginalised communities. To do participatory research, we must position ourselves with activists, but we have to be reflexive about what solidarity means and what power relationships are involved. It’s important to ground theory in practitioners’ perspectives.

Jack Linchuan Qiu, The Chinese University of Hong Kong, talked about the problems with the ‘engagement and impact’ framework, which doesn’t consider how our work has an impact, and to what ends. We need to have hope. As academics we have the luxury of finding hope, and using our classrooms and publications to share that hope.

Chenjerai Kumanyika, Rutgers University – School of Communication and Information. This kind of research offers a corrective to some of the tendencies that exist in our field. Everything Kumanyika has done that’s had an impact has been an “irresponsible job decision”. We have to push back against the priorities of the university, which are about extending empire. We have to push back against understanding class just as an identity parameter, as opposed to a relation between struggles. We need to sneak into the university, be in but not of it.

It was a wrench leaving this final panel of the day, but I had to go meet my partner and Nonsense Baby, so sadly I left before the end.

Planet Linux AustraliaAnthony Towns: Buying in and selling out

I figured “Someday we’ll find it: the Bitcoin connection; the coders, exchanges, and me” was too long for a title. Anyhoo, since very late February I’ve been gainfully employed in the cryptocurrency space, as a developer on Bitcoin Core at Xapo (it always sounds pretentious to shorten that to “bitcoin core developer” to me).

I mentioned this to Rusty, whose immediate response (after “Congratulations”) was “Xapo is weird”. I asked if he could name a Bitcoin company that’s not weird — turns out that’s still an open research problem. A lot of Bitcoin is my kind of weird: open source, individualism, maths, intense arguments, economics, political philosophies somewhere between techno-libertarianism and anarcho-capitalism (“ancap”, which shouldn’t be confused with the safety rating), and a general “we’re going to make the world a better place with more freedom and cleverer technology” vibe of the thing. Xapo in particular is also my kind of weird. For one, it’s founded by Argentinians who have experience with the downsides of inflation (currently sitting at 20% pa, down from 40% and up from 10%), even if that pales in comparison to Venezuela, the world’s current socialist basket case suffering from hyperinflation; and Xapo’s CEO makes what I think are pretty good points about Bitcoin improving global well-being by removing a lot of discretion from monetary policy — as opposed to doing blockchains to make finance more financey, or helping criminals and terrorists out, or just generally getting rich quick. Relatedly, Xapo seems to me to be much more of a global company than many cryptocurrency places, which often seem very Silicon Valley focussed (or perhaps NYC, or wherever their respective HQ is); it might be a bit self-indulgent, but I really like being surrounded by people with oddly different cultures, and at least my general impression of a lot of Silicon Valley style tech companies these days is more along the lines of “dysfunctional monoculture” than anything positive. Xapo’s tech choices also seem to be fairly good, or at least in line with my preferences (python! using bitcoin core! microservices!).
Xapo is also one of pretty few companies that’s got a strong Bitcoin focus, rather than trying to support every crazy new cryptocurrency or subtoken out there: I tend to think Bitcoin’s the only cryptocurrency that really has good technical and economic fundamentals; so I like “Bitcoin maximalism” in principle, though I guess I’m hard pressed to argue it’s optimal at the business level.

For anyone who follows Bitcoin politics, Xapo might seem a strange choice — Xapo not long ago was on the losing side of the S2X conflict, and why team up with a loser instead of the winners? I don’t take that view for a couple of reasons: I didn’t ever really think doubling the blocksize (the 2X part) was a fundamentally bad idea (not least because segwit (the S part) already does that and more under some circumstances); rather, the problem was the implementation plan of doing it in just a few months, against the advice of all the most knowledgeable developers, and the absolutely terrible response when problems with the implementation were found. Although that was probably unavoidable considering the mandate to activate S2X within just a few months, I think the majority of the blame is rightly put on the developers doing the shoddy work, and the solution is for companies to work with developers who can say “no” convincingly, or, preferably, can say “yes, and this is how” long enough in advance that solving the problem well is actually possible. So working with any (or at least most) of the S2X companies just seems like being part of the solution to me. And in any event, I want to live in a world where different viewpoints are welcome and disagreement is okay, and finding out that you’re wrong just means you learned something new, not that you get punished and ostracised.

Likewise, you could argue that anyone who wants to really use Bitcoin should own their private keys, rather than use something like Xapo as a wallet or even a vault, and that working on Xapo is kind-of opposed to the “be your own bank” philosophy at the heart of Bitcoin. My belief is that there’s still a use for banks with Bitcoin: safely storing valuables is hard even when they’re protected by maths instead of (or as well as) locks or guns; so it still makes sense for many people to want to outsource the work of maintaining private keys, and unless you’re an IT professional, it’s probably more sensible to do that to a company that looks kind of like a bank (ie, a custodial wallet like Xapo) rather than one that looks like a software vendor (bitcoin core, electrum, etc) or a hardware vendor (ledger or trezor, eg). In that case, the key benefit that Bitcoin offers is protection from government monetary policy, and, hopefully better/cheaper access or storage of your wealth, which isn’t nothing, even if it’s not fully autonomous control over your wealth.

For the moment, there’s plenty of things to work on at Xapo: I’ve been delaying writing this until I could answer the obvious “when segwit?” question (“now!”), but there’s still more bits to do there, and obviously there are lots of neat things to do improving the app, and even more non-development things to do like dealing with other financial institutions, compliance concerns, and what not. Mostly that’s stuff I help with, but not my focus: instead, the things I’m lucky enough to get to work on are the ones that will make a difference in months/years to come, rather than the next few weeks, which gives me an excuse to keep up to date with things like lightning and Schnorr signatures and work on open source bitcoin stuff in general. It’s pretty fantastic. The biggest risk as I see it is I end up doing too much work on getting some awesome new feature or project prototyped for Xapo and end up having to maintain it, downgrading this from dream job to just a motherforking fantastic one. I mean, aside from the bigger risks like cryptocurrency turns out to be a fad, or we all die from nuclear annihilation or whatever.

I don’t really think disclosure posts are particularly necessary — it’s better to assume everyone has undisclosed interests and biases and judge what they say and do on its own merits. But in the event they are a good idea: financially, I’ve got as yet unvested stock options in Xapo which I plan on exercising and hope will be worth something someday, and some Bitcoin which I’m holding onto and hope will still be worth something some day. I expect those to be highly correlated, so anything good for one will be good for the other. Technically, I think Bitcoin is fascinating, and I’ve put a lot of work into understanding it: I’ve looked through the code, I’ve talked with a bunch of the developers, I’ve looked at a bunch of the crypto, and I’ve even done a graduate diploma in economics over the last couple of years to have some confidence in my ability to judge the economics of it (though to be fair, that wasn’t the reason I had for enrolling initially), and I think it all makes pretty good sense. I can’t say the same about other cryptocurrencies, eg Litecoin’s essentially the same software, but the economics of having a “digital silver” to Bitcoin’s “digital gold” doesn’t seem to make a lot of sense to me, and while Ethereum aims at a bunch of interesting problems and gets the attention it deserves as a result, I’m a long way from convinced it’s got the fundamentals right, and a lot of other cryptocurrency things seem to essentially be scams. Oh, perhaps I should also disclose that I don’t have access to private keys for $10 billion worth of Bitcoin; I’m happily on the open source technology side of things, not on the access to money side.

Of course, my opinions on any of that might change, and my financial interests might change to reflect my changed opinions. I don’t expect to update this blog post, and may or may not post about any new opinions I might form. Which is to say that this isn’t financial advice, I’m not a financial advisor, and if I were, I’m certainly not your financial advisor. If you still want financial advice on crypto, I think Wences’s is reasonable: take 1% of what you’re investing, stick it in Bitcoin, and ignore it for a decade. If Bitcoin goes crazy, great, you’ve doubled your money and can brag about getting in before Bitcoin went up two orders of magnitude; if it goes terrible, you’ve lost next to nothing.
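The arithmetic behind that advice is easy to check. A quick sketch, using an arbitrary example portfolio (the figures are illustrative, not from Wences):

```python
# Worked example of the "put 1% in Bitcoin and ignore it" advice.
portfolio = 100_000            # total savings (arbitrary example figure)
btc_stake = portfolio // 100   # the suggested 1% allocation
rest = portfolio - btc_stake   # the other 99%, assumed unchanged

# "Goes crazy": Bitcoin appreciates two orders of magnitude (100x).
best_case = rest + btc_stake * 100
# "Goes terrible": Bitcoin goes to zero.
worst_case = rest

print(best_case)   # 199000 -- roughly double the original portfolio
print(worst_case)  # 99000  -- a loss of just 1%
```

So a 100x move on a 1% stake roughly doubles the whole portfolio, while a total wipeout costs only that 1%.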

One interesting note: the press is generally reporting Bitcoin as doing terribly this year, maintaining a value of around $7000-$9000 USD after hitting highs of up to $19000 USD mid December. That’s not fake news, but it’s a pretty short term view: for comparison, Wences’s advice linked just above from less than 12 months ago (when the price was about $2500 USD) says “I have seen a number of friends buy at “expensive” prices (say, $300+ per bitcoin)” — but that level of “expensive” is still 20 or 30 times cheaper than today. As a result, in spite of the “bad” news, I think every cryptocurrency company that’s been around for more than a few months is feeling pretty positive at the moment, and most of them are hiring, including Xapo. So if you want to work with me on Xapo’s backend team we’re looking for Python devs. But like every Bitcoin company, expect it to be a bit weird.

CryptogramDetecting Lies through Mouse Movements

Interesting research: "The detection of faked identity using unexpected questions and mouse dynamics," by Merylin Monaro, Luciano Gamberini, and Giuseppe Sartori.

Abstract: The detection of faked identities is a major problem in security. Current memory-detection techniques cannot be used as they require prior knowledge of the respondent's true identity. Here, we report a novel technique for detecting faked identities based on the use of unexpected questions that may be used to check the respondent identity without any prior autobiographical information. While truth-tellers respond automatically to unexpected questions, liars have to "build" and verify their responses. This lack of automaticity is reflected in the mouse movements used to record the responses as well as in the number of errors. Responses to unexpected questions are compared to responses to expected and control questions (i.e., questions to which a liar also must respond truthfully). Parameters that encode mouse movement were analyzed using machine learning classifiers and the results indicate that the mouse trajectories and errors on unexpected questions efficiently distinguish liars from truth-tellers. Furthermore, we showed that liars may be identified also when they are responding truthfully. Unexpected questions combined with the analysis of mouse movement may efficiently spot participants with faked identities without the need for any prior information on the examinee.
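The classification step the abstract describes can be illustrated in miniature. This sketch is not the paper's method or data: the feature distributions are invented, and a simple nearest-centroid rule stands in for the paper's machine-learning classifiers. The idea is just that liars' responses to unexpected questions show longer times and larger trajectory deviations, which a classifier can separate.

```python
import random

random.seed(0)  # deterministic synthetic data

def simulate_response(liar):
    # Hypothetical feature distributions: liars "build" their answers, so we
    # model them with longer response times and larger deviations from the
    # ideal straight-line mouse trajectory.
    if liar:
        return (random.gauss(1.8, 0.3),   # response time (seconds)
                random.gauss(140, 25))    # trajectory deviation (pixels)
    return (random.gauss(1.1, 0.3), random.gauss(80, 25))

def centroid(rows):
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(2))

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Training" data: 100 labelled responses per group.
c_truth = centroid([simulate_response(False) for _ in range(100)])
c_liar = centroid([simulate_response(True) for _ in range(100)])

def classify(row):
    # Nearest centroid: True means "classified as liar".
    return sqdist(row, c_liar) < sqdist(row, c_truth)

# Held-out test set: 50 truth-tellers and 50 liars.
test = [(simulate_response(False), False) for _ in range(50)] + \
       [(simulate_response(True), True) for _ in range(50)]
accuracy = sum(classify(row) == label for row, label in test) / len(test)
print(f"accuracy: {accuracy:.2f}")
```

With the made-up distributions above the two groups separate cleanly; the paper's contribution is showing that real mouse trajectories on unexpected questions carry a comparable signal.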

Boing Boing post.

Worse Than FailureError'd: Go Home Google News, You're Drunk

"Well, it looks like Google News was inebriated as well!" Daniel wrote.

 

"(Translation: Given names similar to Otto) One must wonder which distance measure algorithm they used to decide that 'Faseaha' is more similar to Otto than Otto," writes Peter W.

 

Andrei V. writes, "What amazing discounts for rental cars offered by Air Baltic!"

 

"I know that Amazon was trying to tell me something about my Kindle author status, but the message appears to have been lost in translation," Bob wrote.

 

"I tried to sign up for severe weather alerts and I'm 100% sure I'm actually signed up. NOT!" writes Eric R.

 

Lorens writes, "I think the cryptocurrency bubble may have exploded. Or imploded."

 


Planet DebianThomas Lange: Mini DebConf Hamburg

Last week I attended the MiniDebConfHamburg. I worked on new releases of dracut and rinse. Dracut is an initramfs-tools replacement which now supports early microcode loading. Rinse is a tool similar to debootstrap for rpm-based distributions, which can now create Fedora 28 environments (aka chroots).

On Sunday I gave a lightning talk (video) about how to try out dracut on your computer without removing initramfs-tools. In Debian, we still have not switched the default to dracut, and I'd like to see more feedback on whether dracut works in your environment. Later I did a presentation on the FAI.me build service (video, slides). Many thanks to Juri, who implemented a switch on the FAI.me web page for changing between a basic and an advanced mode for the installation images. I've also worked on installing Ubuntu 18.04 LTS (Bionic) using FAI, which was quite simple, because changing the release name from xenial to bionic was most of the work. Yesterday I added some language support for Ubuntu to FAI, so I hope to release the next version soon.

MiniDebConfHamburg was very nice, in a nice location, so I hope there will be more MiniDebConfs in Hamburg in the future.

Don MartiHappy GDPR day. Here's some sensitive data about me.

I know I haven't posted for a while, but I can't skip GDPR Day. You don't see a lot of personal info from me here on this blog. But just for once, I'm going to share something.

I'm a blood donor.

This doesn't seem like a lot of information. People sign up for blood drives all the time. But the serious privacy problem here is that when I give blood, they also test me for a lot of diseases, many of which could have a big impact on my life and how much of certain kinds of healthcare products and services I'm likely to need. The fact that I'm a blood donor might also help people infer something about my sex life but the health data is TMI already.

And I have some bad news. I recently got the ad info from my Facebook account and there it is, in the file advertisers_who_uploaded_a_contact_list_with_your_information.html. American Red Cross Blood Donors. Yes, it looks like the people I chose to trust with some of my most sensitive personal info have given it to the least trusted company on the Internet.

In today's marketing scene, the fact that my blood donor information leaked to Facebook isn't too surprising. The Red Cross clearly has some marketing people, and targeting the existing contact list on Facebook is just one of the things that marketing people do without thinking about it too much. (Not thinking about privacy concerns is a problem for Marketing as a career field long-term. If everyone thinks of Marketing as the Department of Creepy Stuff, it's going to be harder to recruit creative people.)

So, wait a minute. Why am I concerned that Facebook has positive health info on me? Doesn't that help maintain my status in the data-driven economy? What's the downside? (Obvious joke about healthy-blood-craving Facebook board member Peter Thiel redacted—you're welcome.)

The problem is that my control over my personal data isn't just a problem for me. As Prof. Arvind Narayanan said (video), Poor privacy harms society as a whole. Can I trust Facebook to use my blood info just to target me for the Red Cross, and not to sort people by health for other purposes? Of course not. Facebook has crossed every creepy line that they have promised not to. To be fair, that's not just a Facebook thing. Tech bros do risky and mean things all the time without really thinking them through, and even when they do set appropriate defaults they half-ass the implementation and shit happens.

Will blood donor status get you better deals, or apartments, or jobs, in the future? I don't know. I do know that the Red Cross made a big point about confidentiality when they got me signed up. I'm waiting for a reply from the Red Cross privacy officer about this, and will post an update.

Anyway, happy GDPR Day, and, in case you missed it, Salesforce CEO Marc Benioff Calls for a National Privacy Law.

TEDIn Case You Missed It: The dawn of “The Age of Amazement” at TED2018

More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4.

Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED)

Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at AudaciousProject.org.

Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

“It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED)

Can we rediscover the humanity in our tech?  In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age; and “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, who they were transmitting to, what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. “Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?”

An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.”

Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta has led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, University of Hawaii’s Karen Meech saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things.  

Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED)

Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words. Architects Renzo Piano, Vishaan Chakbrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak  — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints.  

Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED)

Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits.

Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotchakorn Voraakhom realized that a park could be designed to allow her flood-prone city of Bangkok to mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

TED
In Case You Missed It: Bold visions for humanity at day 4 of TED2018

In Case You Missed It TED2018

Three sessions of memorable TED Talks covering life, death and the future of humanity made the penultimate day of TED2018 a remarkable space for tech breakthroughs and dispatches from the edges of culture.

Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

The future built on genetic code. DNA is built on four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the four letters of the genetic alphabet are not all that unique. He and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. And maybe soon, we’ll be able to use that expanded DNA alphabet to teleport. That’s right, you read it here first: teleportation is real. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit the most fundamental parts of who we are: our DNA. It’s called biological teleportation, and the idea is that biological entities including viruses and living cells can be reconstructed in a distant location if we can read and write the sequence of that DNA code. The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines.

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. (Photo: Jason Redmond / TED)

Dispatches from the fight against hate online. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. In 2016, Green collaborated with Moonshot CVE to pilot a new approach, the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups, and used what they learned to create targeted advertising aimed at people susceptible to ISIS’s recruiting — and counter those messages. In English and Arabic, the eight-week pilot program reached more than 300,000 people. “If technology has any hope of overcoming today’s challenges,” Green says, “we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” Dylan Marron is taking a different approach to the problem of hate on the internet. His video series, such as “Sitting in Bathrooms With Trans People,” have racked up millions of views, and they’ve also sent a slew of internet poison in his direction. He developed a coping mechanism: he calls up the people who leave hateful remarks, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace, he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years (he’s now just 18) he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — a machine can try every possible solution, even ones too absurd for a human to imagine, until it finds the thing that works best to solve a single discrete problem. Which really isn’t general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives. Picking up on the thread of pitfalls of current AI, artist and technology critic James Bridle describes how automated copycats on YouTube mimic trusted videos by using algorithmic tricks to create “fake news” for kids. End result: children exploring YouTube videos from their favorite cartoon characters are sent down autoplaying rabbit holes, where they can find eerie, disturbing videos filled with very real violence and very real trauma. Algorithms are touted as the fix, but as Bridle says, machine learning is really just what we call software that does things we don’t understand … and we have enough of that already, no?

Chetna Gala Sinha tells us about a bank in India that meets the needs of rural poor women who want to save and borrow. (Photo: Jason Redmond / TED)

Listen and learn. Takemia MizLadi Smith spoke up for the front-desk staffer, the checkout clerk, and everyone who’s ever been told they need to start collecting information from customers, whether it be an email, zip code or data about their race and gender. Smith makes the case to empower every front desk employee who collects data — by telling them exactly how that data will be used. Chetna Gala Sinha, meanwhile, started a bank in India that meets the needs of rural poor women who want to save and borrow — and whom traditional banks would not touch. How does the bank improve their service? As Chetna says: simply by listening. Meanwhile, sex educator Emily Nagoski talked about a syndrome called emotional nonconcordance, where what your body seems to want runs counter to what you actually want. In an intimate situation, ahem, it can be hard to figure out which one to listen to, head or body. Nagoski gives us full permission and encouragement to listen to your head, and to the words coming out of the mouth of your partner. And Harvard Business School prof Frances Frei gave a crash course in trust — building it, keeping it, and the hardest, rebuilding it. She shares lessons from her stint as an embed at Uber, where, far from listening in meetings, staffers would actually text each other during meetings — about the meeting. True listening, the kind that builds trust, starts with putting away your phone.

Bionic man Hugh Herr envisions humanity soaring out of the 21st century. (Photo: Ryan Lash / TED)

A new way to heal our bodies … and build new ones. Optical engineer Mary Lou Jepsen shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it and doesn’t let it pass through. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. MIT professor Hugh Herr is working on a different way to heal — and augment — our bodies. He’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it “NeuroEmbodied Design,” a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend who lost a foot in a climbing accident. Using the Agonist-antagonist Myoneural Interface, or AAMI, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. What might be next? Maybe, the ability to fly.

Announcements! Back in 2014, space scientist Will Marshall introduced us to his company, Planet, and their proposed fleet of tiny satellites. The goal: to image the planet every day, showing us how Earth changes in near-real time. In 2018, that vision has come good: every day, a fleet of about 200 small satellites pictures every inch of the planet, taking 1.5 million 29-megapixel images every day (about 6TB of data daily), gathering data on changes both natural and human-made. This week at TED, Marshall announced a consumer version of Planet, called Planet Stories, to let ordinary people play with these images. Start playing now here. Another announcement comes from futurist Ray Kurzweil: a new way to query the text inside books using something called semantic search — which is a search on ideas and concepts, rather than specific words. Called TalkToBooks, the beta-stage product uses an experimental AI to query a database of 120,000 books in about half a second. (As Kurzweil jokes: “It takes me hours to read a hundred thousand books.”) Jump in and play with TalkToBooks here. Also announced today: “TED Talks India: Nayi Soch” — the wildly popular Hindi-language TV series, created in partnership with StarTV and hosted by Shah Rukh Khan — will be back for three more seasons.
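The internals of TalkToBooks aren't public, but the general idea behind semantic search is to map each passage and each query to a vector capturing its meaning, then rank passages by vector similarity rather than keyword overlap. A toy sketch with hand-made stand-in vectors (a real system would learn these embeddings from text):

```python
import math

# Toy semantic search: each passage is represented by a meaning vector.
# A query matches the passage whose vector is most similar, even when
# no literal words overlap. These tiny vectors are illustrative stand-ins
# for learned embeddings, not anything from a real product.
passages = {
    "The dog chased the ball.":     [0.9, 0.1, 0.0],
    "Interest rates rose sharply.": [0.0, 0.2, 0.9],
    "A puppy played fetch.":        [0.8, 0.2, 0.1],
}

def cosine(a, b):
    # Cosine similarity: dot product of the vectors over the
    # product of their lengths.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec):
    # Return the passage whose meaning vector is closest to the query.
    return max(passages, key=lambda p: cosine(passages[p], query_vec))

# A query whose vector points at "canine things" retrieves a dog passage,
# even though the query shares no words with it:
print(semantic_search([1.0, 0.0, 0.0]))  # → The dog chased the ball.
```

The same ranking step scales to real embeddings; the only change is that the vectors come from a trained model and the passage set is much larger.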

TED
Body electric: Notes from Session 9 of TED2018

Mary Lou Jepsen demonstrates the ability of red light to scatter when it hits our bodies. Can we leverage this property to see inside ourselves? She speaks at TED2018 on April 13, 2018. Photo: Ryan Lash / TED

During the week of TED, it’s tempting to feel like a brain in a jar — to think on a highly abstracted, intellectual, hypertechnical level about every single human issue. But the speakers in this session remind us that we’re still just made of meat. And that our carbon-based life forms aren’t problems to be transcended but, if you will, platforms. Let’s build on them, explore them, and above all feel at home in them.

When red light means go. The last time Mary Lou Jepsen took the TED stage, she shared the science of knowing what’s inside another person’s mind. This time, the celebrated optical engineer shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. Her demo doubles as a crash course in optics, with red and green lasers and all kinds of cool gear (some of which juuuuust squeaked through customs in time). And it’s a wildly inspiring look at a bold effort to solve an old problem in a new way.

Floyd E. Romesberg imagines a couple new letters in DNA that might allow us to create … who knows what. Photo: Jason Redmond / TED

What if DNA had more letters to work with? DNA is built on only four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the letters of the genetic alphabet are not all that unique. For the problem of life, perhaps, “maybe we’re not the only solution, maybe not even the best solution — just a solution.” And maybe new parts can be built to work alongside the natural parts. Inspired by these insights, Romesberg and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. Worried about unintended consequences? Romesberg says that his augmented 6-letter DNA cannot be replenished within the body. As the unnatural genetic materials are depleted, the semi-synthetic cells die off, protecting us against nightmarish sci-fi scenarios of rogue microorganisms.
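The combinatorial payoff of a bigger alphabet is easy to check: codons are three-letter words, so four bases give 4³ = 64 codons (enough to encode 20 amino acids plus stop signals), while six bases give 6³ = 216. A minimal sketch — the unnatural base names "X" and "Y" here are placeholders for illustration, not Romesberg's actual chemistry:

```python
# Codon space of a genetic alphabet: a codon is a triplet of letters,
# so an alphabet of n letters yields n**3 possible codons.
def codon_count(alphabet):
    return len(alphabet) ** 3

natural = "GCAT"           # the four natural DNA bases
semi_synthetic = "GCATXY"  # plus two placeholder unnatural bases

print(codon_count(natural))         # → 64: encodes 20 amino acids + stops
print(codon_count(semi_synthetic))  # → 216: room for many new amino acids
```

The jump from 64 to 216 codons is what creates the headroom for "hitherto unseen proteins" the talk describes.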

On the slide behind Dan Gibson: a teleportation machine, more or less. It’s a “printer” that can convert digital information into biological material, and it holds the promise of sending things like vaccines and medicines over the internet. Photo: Ryan Lash / TED

Beam our DNA up, Scotty. Teleportation is real. That’s right, you read it here first. This method isn’t quite like what the minds behind Star Trek brought to life, but the massive implications attached are just as futuristic. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit not our entire selves, but the most fundamental parts of who we are: our DNA. Or, simply put, biological teleportation. “The characteristics and functions of all biological entities including viruses and living cells are written into the code of DNA,” says Gibson. “They can be reconstructed in a distant location if we can read and write the sequence of that DNA code.” The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one literally worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines. The process takes weeks now, but could someday come down to 1–2 days. (And don’t worry: Gibson, his team and the government screen every synthesis order against a database to make sure viruses and pathogens aren’t being made.) He says: “For now, I will be satisfied beaming new medicines across the globe, fully automated and on-demand to save lives from emerging deadly infectious diseases and to create personalized cancer medicines for those who don’t have time to wait.”

In a powerful talk, sex educator Emily Nagoski educates us about emotional nonconcordance — when our body and our mind “say” different things in an intimate situation. Which to listen to? Photo: Ryan Lash / TED

Busting one of our most dangerous myths about sex. When it comes to pleasure, humans have something that’s often called “the reward center” — but, explains sex educator Emily Nagoski, that “reward center” is actually three intertwined, separate systems: liking, or whether it feels good or bad; wanting, which motivates us to move toward or away from a stimulus; and learning. Learning is best explained by Pavlov’s dogs, whom he trained to salivate when he rang a bell. Were the dogs hungry for the bell (wanting)? Did they find the bell delicious (liking)? Of course not: “What Pavlov did was make the bell food-related.” The separateness of these three things, wanting, liking and learning, helps explain a phenomenon called emotional nonconcordance, when our physiological response doesn’t match our subjective experience. This happens with all sorts of emotional and motivational systems, including sex. “Research over the last thirty years has found that genital blood flow can increase in response to sex-related stimuli, even if those sex-related stimuli are not also associated with a subjective experience of wanting and liking,” she says. The problem is that we don’t recognize nonconcordance when it comes to sex: in fact, there is a dangerous myth that even if someone says they don’t want it or don’t like it, their body can say differently, and the body is the one telling the “truth.” This myth has serious consequences for victims of unwanted and nonconsensual sexual contact, who are sometimes told that their nonconcordant genital response invalidates their experience … and who can even have that response held up as evidence in sexual assault cases. Nagoski urges all of us to share this crucial information with someone — judges, lawyers, your partners, your kids. 
“The roots of this myth are deep and they are entangled with some very dark forces in our culture, but with every brave conversation we have, we make the world that little bit better,” she says to one of the biggest standing Os in a standing-O-heavy session.

The musicians and songwriters of LADAMA perform and speak at TED2018. Photo: Ryan Lash / TED

Bringing Latin alternative music to Vancouver. Singing in Spanish, Portuguese and English, LADAMA enliven the TED stage with a vibrant, energizing and utterly danceable musical set. The multinational ensemble of women — Maria Fernanda Gonzalez from Venezuela, Lara Klaus from Brazil, Daniela Serna of Colombia, and Sara Lucas from the US — and their bass player collaborator combine traditional South American and Caribbean styles like cumbia, maracatu and joropo with pop, soul and R&B to deliver a pulsing musical experience. The group took attendees on a musical journey with their modern and soulful compositions, playing original songs “Night Traveler” and “Porro Maracatu.”

Hugh Herr lost both legs below the knee, but the new legs he built allow him once again to run, climb and even dance. Photo: Ryan Lash / TED

“The robot became part of me.” MIT professor Hugh Herr takes the TED stage, his sleek bionic legs conspicuous under his sharp grey suit. “I’m basically nuts and bolts from the knee down,” Herr says, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward realizing a goal that has long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it “NeuroEmbodied Design,” a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend of Herr’s who was in a climbing accident that resulted in the amputation of his foot. Using the Agonist-antagonist Myoneural Interface, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. “Jim moves and behaves as if the synthetic limb is part of him,” Herr says. And he’s even back climbing again. Taking a few moments to dream, Herr describes a future where humans have augmented their bodies in a way that fundamentally redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. “I believe humans will become superheroes,” Herr says. “During the twilight years of this century, I believe humans will be unrecognizable in morphology and dynamics from what we are today. Humanity will take flight and soar.”

Jim Ewing, left, lost a limb in a climbing accident; he partnered with MIT professor Hugh Herr, right, to build a limb that got him back up and climbing again. Photo: Ryan Lash / TED


Harald Welte
OsmoCon 2018 CfP closes on 2018-05-30

One of the difficulties with OsmoCon2017 last year was that almost nobody submitted talks / discussions within the deadline, early enough to allow for proper planning.

This led to the situation where the sysmocom team had to come up with a schedule/agenda on their own. Later on, well after the CfP deadline, people then squeezed in talks, making the overall schedule too full.

It is up to you to avoid this situation again in 2018 at OsmoCon2018 by submitting your talk RIGHT NOW. We will be very strict regarding late submissions. So if you would like to shape the Agenda of OsmoCon 2018, this is your chance. Please use it.

We will have to create a schedule soon, as [almost] nobody will register for a conference unless the schedule is known. If there's not sufficient contribution in terms of CfP response from the wider community, don't complain later that 90% of the talks are from sysmocom team members and only about the Cellular Network Infrastructure topics.

You have been warned. Please make your CfP submission in time at https://pretalx.sysmocom.de/osmocon2018/cfp before the CfP deadline on 2018-05-30 23:59 (Europe/Berlin).

Harald Welte
openmoko.org archive down due to datacenter issues

Unfortunately, since about 11:30 am CEST on May 24, openmoko.org is down due to some power outage related issues at Hetzner, the hosting company at which openmoko.org has been hosted for more than a decade now.

The problem seems to have caused quite a lot of fall-out for many servers (Hetzner is hosting some 200k machines; not sure how many were affected, though), and Hetzner is anything but verbose when it comes to actually explaining what the issue is.

All they have published is https://www.hetzner-status.de/en.html#8842 - which is rather tight-lipped about some power grid issues. But then, what do you have UPSs for if not for "a strong voltage reduction in the local power grid"?

The openmoko.org archive machine is running in Hetzner DC10, by the way. This is where they've had the largest number of tickets.

In any case, we'll have to wait for them to resolve their tickets. They appear to be working day and night on that.

I have a number of machines hosted at Hetzner, and I'm actually rather happy that none of the more important systems were affected that long. Some machines simply lost their uplink connectivity for some minutes, while some others were rebooted (power outage). The openmoko.org archive is the only machine that didn't automatically boot after the outage; maybe the power supply needs replacement.

In any case, I hope the service will be back up again soon.

btw: Guess who's been paying for hosting costs ever since Openmoko, Inc. shut down? Yes, yours truly. It was OK for something like 9 years, but now I want to recursively pull the dynamic content through some cache, which can then be made permanent. The resulting static archive can then be moved to some VM somewhere, without requiring a dedicated root server. That should reduce the costs down to almost nothing.

Krebs on Security
3 Charged In Fatal Kansas ‘Swatting’ Attack

Federal prosecutors have charged three men with carrying out a deadly hoax known as “swatting,” in which perpetrators call or message a target’s local 911 operators claiming a fake hostage situation or a bomb threat in progress at the target’s address — with the expectation that local police may respond to the scene with deadly force. While only one of the three men is accused of making the phony call to police that got an innocent man shot and killed, investigators say the other two men’s efforts to taunt and deceive one another ultimately helped point the gun.

Tyler “SWAuTistic” Barriss. Photo: AP

According to prosecutors, the tragic hoax started with a dispute over a match in the online game “Call of Duty.” The indictment says Shane M. Gaskill, a 19-year-old Wichita, Kansas resident, and Casey S. Viner, 18, had a falling out over a $1.50 game wager.

Viner allegedly wanted to get back at Gaskill, and so enlisted the help of another man — Tyler R. Barriss — a serial swatter known by the alias “SWAuTistic” who’d bragged of “swatting” hundreds of schools and dozens of private residences.

The federal indictment references transcripts of alleged online chats among the three men. In an exchange on Dec. 28, 2017, Gaskill taunts Barriss on Twitter after noticing that Barriss’s Twitter account (@swattingaccount) had suddenly started following him.

Viner and Barriss both allegedly say if Gaskill isn’t scared of getting swatted, he should give up his home address. But the address that Gaskill gave Viner to pass on to Barriss no longer belonged to him and was occupied by a new tenant.

Barriss allegedly then called the emergency 911 operators in Wichita and said he was at the address provided by Viner, that he’d just shot his father in the head, was holding his mom and sister at gunpoint, and was thinking about burning down the home with everyone inside.

Wichita police quickly responded to the fake hostage report and surrounded the address given by Gaskill. Seconds later, 28-year-old Andrew Finch exited his mom’s home and was killed by a single shot from a Wichita police officer. Finch, a father of two, was no party to the gamers’ dispute and was simply in the wrong place at the wrong time.

Just minutes after the fatal shooting, Barriss — who is in Los Angeles — is allegedly anxious to learn if his Kansas swat attempt was successful. Someone has just sent Barriss a screenshot of a conversation between Viner and Gaskill mentioning police at Gaskill’s home and someone getting killed. So Barriss allegedly then starts needling Gaskill via instant message:

Defendant BARRISS: Yo answer me this
Defendant BARRISS: Did police show up to your house yes or no
Defendant GASKILL: No dumb fuck
Defendant BARRISS: Lmao here’s how I know you’re lying

Prosecutors say Barriss then posted a screen shot showing the following conversation between Viner and Gaskill:

Defendant VINER: Oi
Defendant GASKILL: Hi
Defendant VINER: Did anyone show @ your house?
Defendant VINER: Be honest
Defendant GASKILL: Nope
Defendant GASKILL: The cops are at my house because someone ik just killed his dad

Barriss and Gaskill then allegedly continued their conversation:

Defendant GASKILL: They showed up to my old house retard
Defendant BARRISS: That was the call script
Defendant BARRISS: Lol
Defendant GASKILL: Your literally retarded
Defendant GASKILL: Ik dumb ass
Defendant BARRISS: So you just got caught in a lie
Defendant GASKILL: No I played along with you
Defendant GASKILL: They showed up to my old house that we own and rented out
Defendant GASKILL: We don’t live there anymore bahahaha
Defendant GASKILL: ik you just wasted your time and now your pissed
Defendant BARRISS: Not really
Defendant BARRISS: Once you said “killed his dad” I knew it worked lol
Defendant BARRISS: That was the call lol
Defendant GASKILL: Yes it did buy they never showed up to my house
Defendant GASKILL: You guys got trolled
Defendant GASKILL: Look up who live there we moved out almost a year ago
Defendant GASKILL: I give you props though you’re the 1% that can actually swat babahaha
Defendant BARRISS: Dude MY point is You gave an address that you dont live at but you were acting tough lol
Defendant BARRISS: So you’re a bitch

Later on the evening of Dec. 28, after news of the fatal swatting started blanketing the local television coverage in Kansas, Gaskill allegedly told Barriss to delete their previous messages. “Bape” in this conversation refers to a nickname allegedly used by Casey Viner:

Defendant GASKILL: Dm asap
Defendant GASKILL: Please it’s very fucking impi
Defendant GASKILL: Hello
Defendant BARRISS: ?
Defendant BARRISS: What you want
Defendant GASKILL: Dude
Defendant GASKILL: Me you and bape
Defendant GASKILL: Need to delete everything
Defendant GASKILL: This is a murder case now
Defendant GASKILL: Casey deleted everything
Defendant GASKILL: You need 2 as well
Defendant GASKILL: This isn’t a joke K troll anymore
Defendant GASKILL: If you don’t you’re literally retarded I’m trying to help you both out
Defendant GASKILL: They know it was swat call

The indictment also features chat records between Viner and others in which he admits to his role in the deadly swatting attack. In the following chat excerpt, Viner was allegedly talking with someone identified only as “J.D.”

Defendant VINER: I literally said you’re gonna be swatted, and the guy who swatted him can easily say I convinced him or something when I said hey can you swat this guy and then gave him the address and he said yes and then said he’d do it for free because I said he doesn’t think anything will happen
Defendant VINER: How can I not worry when I googled what happens when you’re involved and it said a eu [sic] kid and a US person got 20 years in prison min
Defendant VINER: And he didn’t even give his address he gave a false address apparently
J.D.: You didn’t call the hoax in…
Defendant VINER: Does t [sic] even matter ?????? I was involved I asked him to do it in the first place
Defendant VINER: I gave him the address to do it, but then again so did the other guy he gave him the address to do it as well and said do it pull up etc

Barriss is charged with multiple counts of making false information and hoaxes; cyberstalking; threatening to kill another or damage property by fire; interstate threats, conspiracy; and wire fraud. Viner and Gaskill were both charged with wire fraud, conspiracy and obstruction of justice. A copy of the indictment is available here.

The Associated Press reports that the most serious charge of making a hoax call carries a potential life sentence because it resulted in a death, and that some of the other charges carry sentences of up to 20 years.

The moment that police in Kansas fired a single shot that killed Andrew Finch.

As I told the AP, swatting has been a problem for years, but it seems to have intensified around the time that top online gamers started being able to make serious money playing games online and streaming those games live to thousands or even tens of thousands of paying subscribers. Indeed, Barriss himself had earned a reputation as someone who delighted in watching police kick in doors behind celebrity gamers who were live-streaming.

This case is not the first time federal prosecutors have charged multiple people in the same swatting attacks even if only one person was involved in actually making the phony hoax calls to police. In 2013, my home was the target of a swatting attack that thankfully ended without incident. The government ultimately charged four men — several of whom were minors at the time — with conducting that swat attack as well as many others they’d perpetrated against public figures and celebrities.

But despite spending considerable resources investigating those crimes, prosecutors were able to secure only light punishments for those involved in the swatting spree. One of those men, a serial swatter and cyberstalker named Mir Islam, was sentenced to just one year in jail for his role in multiple swattings. Another individual who was part of that group, Eric “Cosmo the God” Taylor, got three years of probation.

Something tells me Barriss, Gaskill and Viner aren’t going to be so lucky. Barriss has admitted his role in many swattings, and he admitted to his last, fatal swatting in an interview he gave to KrebsOnSecurity less than 24 hours after Andrew Finch’s murder — saying he was not the person who pulled the trigger.

Rondam RamblingsBlame where it's due

I can't say I'm even a little bit surprised that the summit with North Korea has fallen through.  I wouldn't even bother blogging about this except that back in April I expressed some cautious optimism that maybe, just maybe, Trump's bull-in-the-china-shop tactics could be working.  Nothing makes me happier than having my pessimistic prophecies be proven wrong, but alas, Donald Trump seems to be

Sociological ImagesEnglish/Gibberish

One major part of introducing students to sociology is getting to the “this is water” lesson: the idea that our default experiences of social life are often strange and worthy of examining. This can be challenging, because the default is often boring or difficult to grasp, but asking the right questions is a good start (with some potentially hilarious results).

Take this one: what does English sound like to a non-native speaker? For students who grew up speaking it, this is almost like one of those Zen koans that you can’t quite wrap your head around. If you intuitively know what the language means, it is difficult to separate that meaning from the raw sounds.

That’s why I love this video from Italian pop singer Adriano Celentano. The whole thing is gibberish written to imitate how English slang sounds to people who don’t speak it.


Another example to get class going with a laugh is the 1990s video game Fighting Baseball for the SNES. Released in Japan, the game didn’t have the licensing to use real players’ names, so they used names that sounded close enough. A list of some of the names still bounces around the internet:

The popular idea of the Uncanny Valley in horror and science fiction works really well for languages, too. The funny (and sometimes unsettling) feeling we get when we watch imitations of our default assumptions fall short is a great way to get students thinking about how much work goes into our social world in the first place.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureImprov for Programmers: Just for Transformers

We're back again with a little something different, brought to you by Raygun. Once again, the cast of "Improv for Programmers" is going to create some comedy on the fly for you, and this time… you could say it's… transformative. Today's episode contains small quantities of profanity.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Harald WelteMailing List hosting for FOSS Projects

Recently I've encountered several occasions in which a FOSS project would have been interested in some reliable, independent mailing list hosting for their project communication.

I was surprised how difficult it was to find anyone running such a service.

From the user / FOSS project point of view, the criteria that I would have are:

  • operated by some respected entity that is unlikely to turn hostile, discontinue the service or go out of business altogether
  • free of any type of advertisements (we all know how annoying those are)
  • cares about privacy, i.e. doesn't sell the subscriber lists or non-public archives
  • use FOSS to run the service itself, such as GNU mailman, listserv, ezmlm, ...
  • an easy path to migrate away to another service (or self-hosting) as they grow or their requirements change. A simple mail forward to that new address for the related addresses is typically sufficient for that

If you think mailing lists serve no purpose these days anyway, and everyone is on GitHub: please have a look at the many thousands of FOSS project mailing lists out there still in use. Not everyone wants to introduce a dependency on the whim of a proprietary software-as-a-service provider.

I never had this problem as I always hosted my own mailman instance on lists.gnumonks.org anyway, and all the entities that I've been involved in (whether non-profit or businesses) had their own mailing list hosts. From franken.de in the 1990s to netfilter.org, openmoko.org and now osmocom.org, we all pride ourselves in self-hosting.

But then there are plenty of smaller projects that have neither the skills nor the funding available. So they go to Yahoo Groups or some other service that will then hold them hostage: no way to switch their list archives from private to public, no downloadable archives, and no forwarding in case they want to move away :(

Of course the larger FOSS projects also have their own list servers, starting from vger.kernel.org to Linux distributions like Debian GNU/Linux. But what if your FOSS project is not specifically Linux related?

The sort-of obvious candidates that I found all don't really fit:

Now don't get me wrong, I'm of course not expecting commercial entities to operate free-of-charge list hosting services where you pay neither with money, nor with your data, nor by becoming a spam receiver.

But still, in the wider context of the Free Software community, I'm seriously surprised that none of the various not-for-profit / non-commercial foundations or associations are offering a public mailing list hosting service for FOSS projects.

One can of course always approach any of the entities above and ask for a mailing list, even though it's strictly speaking off-topic for them. But who will do that, if they have to ask uninvited for a favor?

I think there's something missing. I don't have the time to set up a related service, but I would certainly want to contribute in terms of funding in case any existing FOSS related legal entity wanted to expand. If you already have a legal entity, abuse contacts, a team of sysadmins, then it's only half the required effort.

Planet DebianJonathan Dowland: Mastodon

I'm experimenting with Mastodon, an alternative to Twitter. My account is @jon@argh.club. I'm happy for recommendations on interesting people to follow!

Inspired by Iustin, I also started taking a look at Hakyll as a possible replacement for IkiWiki. (That's at grr.argh.club/~jon, although there's nothing to see yet.)

Planet DebianBenjamin Mako Hill: Natural experiment showing how “wide walls” can support engagement and learning

Seymour Papert is credited as saying that tools to support learning should have “high ceilings” and “low floors.” The phrase is meant to suggest that tools should allow learners to do complex and intellectually sophisticated things but should also be easy to begin using quickly. Mitchel Resnick extended the metaphor to argue that learning toolkits should also have “wide walls” in that they should appeal to diverse groups of learners and allow for a broad variety of creative outcomes. In a new paper, Sayamindu Dasgupta and I attempted to provide an empirical test of Resnick’s wide walls theory. Using a natural experiment in the Scratch online community, we found causal evidence that “widening walls” can, as Resnick suggested, increase both engagement and learning.

Over the last ten years, the “wide walls” design principle has been widely cited in the design of new systems. For example, Resnick and his collaborators relied heavily on the principle in the design of the Scratch programming language. Scratch allows young learners to produce not only games, but also interactive art, music videos, greeting cards, stories, and much more. As part of that team, Sayamindu was guided by the “wide walls” principle when he designed and implemented the Scratch cloud variables system in 2011-2012.

While designing the system, Sayamindu hoped to “widen walls” by supporting a broader range of ways to use variables and data structures in Scratch. Scratch cloud variables extend the affordances of the normal Scratch variable by adding persistence and shared-ness. A simple example of something possible with cloud variables, but not without them, is a global high-score leaderboard in a game (example code is below). After the system was launched, we saw many young Scratch users using the system to engage with data structures in new and incredibly creative ways.

Example of Scratch code that uses a cloud variable to keep track of high-scores among all players of a game.

Although these examples reflected powerful anecdotal evidence, we were also interested in using quantitative data to estimate the causal effect of the system. Understanding the causal effect of a new design in real world settings is a major challenge. To do so, we took advantage of a “natural experiment” and some clever techniques from econometrics to measure how learners’ behavior changed when they were given access to a wider design space.

Understanding the design of our study requires understanding a little bit about how access to the Scratch cloud variable system is granted. Although the system has been accessible to Scratch users since 2013, new Scratch users do not get access immediately. They are granted access only after a certain amount of time and activity on the website (the specific criteria are not public). Our “experiment” involved a sudden change in policy that altered the criteria for who gets access to the cloud variable feature. Through no act of their own, more than 14,000 users were given access to the feature, literally overnight. We looked at these Scratch users immediately before and after the policy change to estimate the effect of access to the broader design space that cloud variables afforded.

We found that use of data-related features was, as predicted, increased by both access to and use of cloud variables. We also found that this increase was not only an effect of projects that use cloud variables themselves. In other words, learners with access to cloud variables—and especially those who had used it—were more likely to use “plain-old” data-structures in their projects as well.

The graph below visualizes the results of one of the statistical models in our paper and suggests that we would expect 33% of projects by a prototypical “average” Scratch user to use data structures if the user in question had never used cloud variables, but 60% of projects by a similar user if they had used the system.

Model-predicted probability that a project made by a prototypical Scratch user will contain data structures (w/o counting projects with cloud variables)

It is important to note that the estimated effect above is a “local average effect” among people who used the system because they were granted access by the sudden change in policy (a subtle but important point that we explain in some depth in the paper). Although we urge care and skepticism in interpreting our numbers, we believe our results are encouraging evidence in support of the “wide walls” design principle.

Of course, our work is not without important limitations. Critically, we also found that the rate of adoption of cloud variables was very low. Although it is hard to pinpoint the exact reason for this from the data we observed, it has been suggested that widening walls may have a potential negative side-effect of making it harder for learners to imagine what the new creative possibilities might be in the absence of targeted support and scaffolding. Also important to remember is that our study measures “wide walls” in a specific way in a specific context, and it is hard to know how well our findings will generalize to other contexts and communities. We discuss these caveats, as well as our methods, models, and theoretical background, in detail in our paper, which is now available for download as an open-access piece from the ACM digital library.


This blog post, and the open access paper that it describes, is a collaborative project with Sayamindu Dasgupta. Financial support came from the eScience Institute and the Department of Communication at the University of Washington. Quantitative analyses for this project were completed using the Hyak high performance computing cluster at the University of Washington.

Worse Than FailureBusiness Driven Development

Every now and then, you come across a special project. You know the sort, where some business user decides that they know exactly what they need and exactly how it should be built. They get the buy-in of some C-level shmoe by making sure that their lips have intimate knowledge of said C-level butt. Once they have funding, they have people hired and begin to bark orders.

Toonces, the Driving Cat

About 8 years ago, I had the privilege, er, experience of being on such a project. When we were given the phase-I specs, all the senior tech people immediately said that there was no way to perform a sane daily backup and data-roll for the next day. The response was "We're not going to worry about backups and daily book-rolls until later". We all just cringed, made like good little HPCs and followed our orders to march onward.

Fast forward about 10 months and the project had a sufficient amount of infrastructure that the business user had no choice but to start thinking about how to close the books each day, and roll things forward for the next day. The solution he came up with was as follows:

   1. Shut down all application servers and the DB
   2. Remove PK/FK relationships and rename all the tables in the database from: xxx to: xxx.yyyymmdd
   3. Create all new empty tables in the database (named: xxx)
   4. Create all the PK/FK relationships, indices, triggers, etc.
   5. Prime the new: xxx tables with data from the: xxx.<prev-business-date> tables
   6. Run a job to mirror the whole thing to offsite DB servers
   7. Run the nightly backups (to tape)
   8. Fire up the DB and application servers

Naturally, all the tech people groaned, mentioning things like history tables, wasted time regenerating indices, nightmares if errors occurred while renaming tables, etc., but they were ignored.

Then it happened. As is usually the case when non-technical people try to do technical designs, the business user found himself designed into a corner.

The legitimate business-need came up to make adjustments to transactions for the current business day after the table-roll to the next business day had completed.

The business user pondered it for a bit and came up with the following:

    1. Shut down all application servers and the DB
    2. Remove PK/FK relationships and rename the post-roll tables of tomorrow from xxx to xxx.tomorrow
    3. Copy SOME of the xxx.yyyymmdd tables from the pre-roll current day back to: xxx
       (leaving the PK's and indices notably absent)
    4. Restart the DB and application servers (with some tables rolled and some not rolled)
    5. Let the users make changes as needed
    6. Shut down the application and DB servers
    7. Manually run ad-hoc SQL to propagate all changes to the xxx.tomorrow table(s)
    8. Rename the: xxx tables to: xxx.yyyymmdd.1 
       (or 2 or 3, depending upon how many times this happened per day)
    9. Rename the xxx.tomorrow tables back to: xxx
   10. Rebuild all the PK/FK relationships, create new indices and re-associate triggers, etc.
   11. Rerun the mirroring and backup scripts
   12. Restart the whole thing

When we pointed out the insanity of all of this, and the extremely high likelihood of any failure in the table-renaming/moving/manual-updating causing an uncorrectable mess that would result in losing the entire day of transactions, we were summarily terminated as our services were no longer required — because they needed people who knew how to get things done.

I'm the first to admit that there are countless things that I do not know, and the older I get, the more that list seems to grow.

I'm also adamant about not making mistakes I know will absolutely blow up in my face - even if it costs me a job. If you need to see inside of a gas tank, throwing a lit match into it will illuminate the inside, but you probably won't like how it works out for you.

Five of us walked out of there, unemployed and laughing hysterically. We went to our favorite watering hole and decided to keep tabs on the place for the inevitable explosion.

Sure enough, 5 weeks after they had junior offshore developers (who didn't have the spine to say "No") build what they wanted, someone goofed in the rollback, and then goofed again while trying to unroll the rollback.

It took them three days to figure out what to restore and in what sequence, then restore it, rebuild everything and manually re-enter all of the transactions since the last backup. During that time, none of their customers got the data files that they were paying for, and had to find alternate sources for the information.

When they finally got everything restored, rebuilt and updated, they went to their customers and said "We're back". In response, the customers told them that they had found other ways of getting the time-sensitive information and no longer required their data product.

Not only weren't the business users fired, but they got big bonuses for handling the disaster that they had created.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianVincent Bernat: Multi-tier load-balancing with Linux

A common solution to provide a highly-available and scalable service is to insert a load-balancing layer to spread requests from users to backend servers.1 We usually have several expectations for such a layer:

scalability
It allows a service to scale by pushing traffic to newly provisioned backend servers. It should also be able to scale itself when it becomes the bottleneck.
availability
It provides high availability to the service. If one server becomes unavailable, the traffic should be quickly steered to another server. The load-balancing layer itself should also be highly available.
flexibility
It handles both short and long connections. It is flexible enough to offer all the features backends generally expect from a load-balancer like TLS or HTTP routing.
operability
With some cooperation, any expected change should be seamless: rolling out a new software on the backends, adding or removing backends, or scaling up or down the load-balancing layer itself.

The problem and its solutions are well known. From recently published articles on the topic, “Introduction to modern network load-balancing and proxying” provides an overview of the state of the art. Google released “Maglev: A Fast and Reliable Software Network Load Balancer” describing their in-house solution in detail.2 However, the associated software is not available. Basically, building a load-balancing solution with commodity servers consists of assembling three components:

  • ECMP routing
  • stateless L4 load-balancing
  • stateful L7 load-balancing

In this article, I describe and support a multi-tier solution using Linux and only open-source components. It should offer you the basis to build a production-ready load-balancing layer.

Update (2018.05)

Facebook just released Katran, an L4 load-balancer implemented with XDP and eBPF and using consistent hashing. It could be inserted in the configuration described below.

Last tier: L7 load-balancing

Let’s start with the last tier. Its role is to provide high availability, by forwarding requests to only healthy backends, and scalability, by spreading requests fairly between them. Working in the highest layers of the OSI model, it can also offer additional services, like TLS-termination, HTTP routing, header rewriting, rate-limiting of unauthenticated users, and so on. Being stateful, it can leverage complex load-balancing algorithms. Being the first point of contact with backend servers, it should ease maintenance and minimize impact during daily changes.

L7 load-balancers
The last tier of the load-balancing solution is a set of L7 load-balancers receiving user connections and forwarding them to the backends.

It also terminates client TCP connections. This introduces some loose coupling between the load-balancing components and the backend servers with the following benefits:

  • connections to servers can be kept open for lower resource use and latency,
  • requests can be retried transparently in case of failure,
  • clients can use a different IP protocol than servers, and
  • servers do not have to care about path MTU discovery, TCP congestion control algorithms, avoidance of the TIME-WAIT state and various other low-level details.

Many pieces of software would fit in this layer and an ample literature exists on how to configure them. You could look at HAProxy, Envoy or Træfik. Here is a configuration example for HAProxy:

# L7 load-balancer endpoint
frontend l7lb
  # Listen on both IPv4 and IPv6
  bind :80 v4v6
  # Redirect everything to a default backend
  default_backend servers
  # Healthchecking
  acl dead nbsrv(servers) lt 1
  acl disabled nbsrv(enabler) lt 1
  monitor-uri /healthcheck
  monitor fail if dead || disabled

# IPv6-only servers with HTTP healthchecking and remote agent checks
backend servers
  balance roundrobin
  option httpchk
  server web1 [2001:db8:1:0:2::1]:80 send-proxy check agent-check agent-port 5555
  server web2 [2001:db8:1:0:2::2]:80 send-proxy check agent-check agent-port 5555
  server web3 [2001:db8:1:0:2::3]:80 send-proxy check agent-check agent-port 5555
  server web4 [2001:db8:1:0:2::4]:80 send-proxy check agent-check agent-port 5555

# Fake backend: if the local agent check fails, we assume we are dead
backend enabler
  server enabler [::1]:0 agent-check agent-port 5555

This configuration is the most incomplete piece of this guide. However, it illustrates two key concepts for operability:

  1. Healthchecking of the web servers is done both at HTTP-level (with check and option httpchk) and using an auxiliary agent check (with agent-check). The latter makes it easy to put a server into maintenance or to orchestrate a progressive rollout. On each backend, you need a process listening on port 5555 and reporting the status of the service (UP, DOWN, MAINT). A simple socat process can do the trick:3

    socat -ly \
      TCP6-LISTEN:5555,ipv6only=0,reuseaddr,fork \
      OPEN:/etc/lb/agent-check,rdonly
    

    Put UP in /etc/lb/agent-check when the service is in nominal mode. If the regular healthcheck is also positive, HAProxy will send requests to this node. When you need to put it in maintenance, write MAINT and wait for the existing connections to terminate. Use READY to cancel this mode.

  2. The load-balancer itself should provide a healthcheck endpoint (/healthcheck) for the upper tier. It will return a 503 error if either no backend server is available or if the enabler backend has been put down through the agent check. The same mechanism as for regular backends can be used to signal the unavailability of this load-balancer.
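The maintenance flow described above amounts to writing a status word into the agent-check file. Here is a minimal sketch of that state machine; it uses a temporary directory to stand in for /etc/lb from the article so it can run anywhere:

```shell
#!/bin/sh
# Temporary directory stands in for /etc/lb used in the article.
LB_DIR=$(mktemp -d)
AGENT_FILE="$LB_DIR/agent-check"

echo UP > "$AGENT_FILE"     # nominal mode: HAProxy keeps the server in rotation
echo MAINT > "$AGENT_FILE"  # drain: existing connections finish, no new ones scheduled
echo READY > "$AGENT_FILE"  # cancel maintenance mode

cat "$AGENT_FILE"           # prints "READY"
```

Since the socat process serves whatever word is currently in the file, a deployment script only ever needs to rewrite that file; it never has to touch HAProxy itself.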

Additionally, the send-proxy directive enables the proxy protocol to transmit the real clients’ IP addresses. This protocol also works for non-HTTP connections and is supported by a variety of servers, including nginx:

http {
  server {
    listen [::]:80 default ipv6only=off proxy_protocol;
    root /var/www;
    set_real_ip_from ::/0;
    real_ip_header proxy_protocol;
  }
}
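For reference, the header that send-proxy prepends (proxy protocol version 1) is a single text line carrying the transport protocol plus the source and destination addresses and ports. A sketch of its construction, with illustrative addresses:

```shell
#!/bin/sh
# Proxy protocol v1 header: "PROXY" protocol src-ip dst-ip src-port dst-port,
# terminated by CRLF. Addresses and ports below are illustrative.
client_ip=203.0.113.17 client_port=51034
server_ip=192.0.2.132  server_port=80
header=$(printf 'PROXY TCP4 %s %s %s %s' \
  "$client_ip" "$server_ip" "$client_port" "$server_port")
printf '%s\r\n' "$header"
```

Because the header is plain text prepended to the stream, any protocol layered on TCP can carry it, which is why it also works for non-HTTP connections.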

As is, this solution is not complete. We have just moved the availability and scalability problem somewhere else. How do we load-balance the requests between the load-balancers?

First tier: ECMP routing

On most modern routed IP networks, redundant paths exist between clients and servers. For each packet, routers have to choose a path. When the cost associated with each path is equal, incoming flows4 are load-balanced among the available destinations. This characteristic can be used to balance connections among available load-balancers:

ECMP routing
ECMP routing is used as a first tier. Flows are spread among available L7 load-balancers. Routing is stateless and asymmetric. Backend servers are not represented.

There is little control over the load-balancing but ECMP routing brings the ability to scale both tiers horizontally. A common way to implement such a solution is to use BGP, a routing protocol to exchange routes between network equipment. Each load-balancer announces to its connected routers the IP addresses it is serving.

If we assume you already have BGP-enabled routers available, ExaBGP is a flexible solution to let the load-balancers advertise their availability. Here is a configuration for one of the load-balancers:

# Healthcheck for IPv6
process service-v6 {
  run python -m exabgp healthcheck -s --interval 10 --increase 0 --cmd "test -f /etc/lb/v6-ready -a ! -f /etc/lb/disable";
  encoder text;
}

template {
  # Template for IPv6 neighbors
  neighbor v6 {
    router-id 192.0.2.132;
    local-address 2001:db8::192.0.2.132;
    local-as 65000;
    peer-as 65000;
    hold-time 6;
    family {
      ipv6 unicast;
    }
    api services-v6 {
      processes [ service-v6 ];
    }
  }
}

# First router
neighbor 2001:db8::192.0.2.254 {
  inherit v6;
}

# Second router
neighbor 2001:db8::192.0.2.253 {
  inherit v6;
}

If /etc/lb/v6-ready is present and /etc/lb/disable is absent, all the IP addresses configured on the lo interface will be announced to both routers. If the other load-balancers use a similar configuration, the routers will distribute incoming flows between them. Some external process should manage the existence of the /etc/lb/v6-ready file by checking the health of the load-balancer (using the /healthcheck endpoint, for example). An operator can remove a load-balancer from the rotation by creating the /etc/lb/disable file.
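The decision taken by the healthcheck command in the ExaBGP configuration above boils down to two file tests. It can be sketched as a small shell function, again with a temporary directory standing in for /etc/lb:

```shell
#!/bin/sh
LB_DIR=$(mktemp -d)

# Mirrors "test -f /etc/lb/v6-ready -a ! -f /etc/lb/disable" from the
# exabgp healthcheck command: announce only when ready and not disabled.
announce_state() {
  if [ -f "$LB_DIR/v6-ready" ] && [ ! -f "$LB_DIR/disable" ]; then
    echo announce
  else
    echo withdraw
  fi
}

announce_state                # prints "withdraw": not ready yet
touch "$LB_DIR/v6-ready"
announce_state                # prints "announce": healthy
touch "$LB_DIR/disable"
announce_state                # prints "withdraw": pulled from rotation by an operator
```

Keeping the health signal in files like this means any process, from a cron job to a human with touch and rm, can drive the BGP announcements without talking to ExaBGP directly.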

To get more details on this part, have a look at “High availability with ExaBGP.” If you are in the cloud, this tier is usually implemented by your cloud provider, either using an anycast IP address or a basic L4 load-balancer.

Unfortunately, this solution is not resilient when an expected or unexpected change happens. Notably, when adding or removing a load-balancer, the number of available routes for a destination changes. The hashing algorithm used by routers is not consistent and flows are reshuffled among the available load-balancers, breaking existing connections:

Stability of ECMP routing 1/2
ECMP routing is unstable when a change happens. An additional load-balancer is added to the pool and the flows are routed to different load-balancers, which do not have the appropriate entries in their connection tables.

Moreover, each router may choose its own routes. When a router becomes unavailable, the second one may route the same flows differently:

Stability of ECMP routing 2/2
A router becomes unavailable and the remaining router load-balances its flows differently. One of them is routed to a different load-balancer, which does not have the appropriate entry in its connection table.

If you think this is not an acceptable outcome, notably if you need to handle long connections like file downloads, video streaming or websocket connections, you need an additional tier. Keep reading!

Second tier: L4 load-balancing

The second tier is the glue between the stateless world of IP routers and the stateful land of L7 load-balancing. It is implemented with L4 load-balancing. The terminology can be a bit confusing here: this tier routes IP datagrams (no TCP termination) but the scheduler uses both destination IP and port to choose an available L7 load-balancer. The purpose of this tier is to ensure all members take the same scheduling decision for an incoming packet.

There are two options:

  • stateful L4 load-balancing with state synchronization across the members, or
  • stateless L4 load-balancing with consistent hashing.

The first option increases complexity and limits scalability. We won’t use it.5 The second option is less resilient during some changes but can be enhanced with a hybrid approach using a local state.

We use IPVS, a performant L4 load-balancer running inside the Linux kernel, with Keepalived, a frontend to IPVS with a set of healthcheckers to kick out an unhealthy component. IPVS is configured to use the Maglev scheduler, a consistent hashing algorithm from Google. Among its family, this is a great algorithm because it spreads connections fairly, minimizes disruptions during changes and is quite fast at building its lookup table. Finally, to improve performance, we let the last tier—the L7 load-balancers—send back answers directly to the clients without involving the second tier—the L4 load-balancers. This is referred to as direct server return (DSR) or direct routing (DR).

Second tier: L4 load-balancing
L4 load-balancing with IPVS and consistent hashing as a glue between the first tier and the third tier. Backend servers have been omitted. Dotted lines represent the path for the return packets.

With such a setup, we expect packets from a flow to be able to move freely between the components of the two first tiers while sticking to the same L7 load-balancer.
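To see why hashing lets independent members agree without shared state, here is a deliberately simplified toy sketch using a plain modulo hash. This is not Maglev: a real consistent hash keeps most flows in place when the pool changes, whereas modulo reshuffles nearly everything, which is exactly the disruption Maglev minimizes.

```shell
#!/bin/sh
# Toy flow-to-load-balancer mapping. Any member computing this function
# picks the same L7 load-balancer for the same flow tuple, without any
# state synchronization. (Plain modulo hash, only for illustration.)
pick_lb() {
  flow="$1"; pool_size="$2"
  h=$(printf '%s' "$flow" | cksum | cut -d' ' -f1)
  echo "lb$(( h % pool_size ))"
}

# Two members agree on the same flow:
pick_lb "2001:db8::1,54321,80" 4
pick_lb "2001:db8::1,54321,80" 4
# Growing the pool from 4 to 5 members may send the same flow elsewhere:
pick_lb "2001:db8::1,54321,80" 5
```

The local connection table kept by IPVS (the "hybrid approach" mentioned above) papers over exactly this kind of reshuffle: in-flight flows keep their old destination while new flows follow the new table.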

Configuration

Assuming ExaBGP has already been configured like described in the previous section, let’s start with the configuration of Keepalived:

virtual_server_group VS_GROUP_MH_IPv6 {
  2001:db8::198.51.100.1 80
}
virtual_server group VS_GROUP_MH_IPv6 {
  lvs_method TUN  # Tunnel mode for DSR
  lvs_sched mh    # Scheduler: Maglev
  sh-port         # Use port information for scheduling
  protocol TCP
  delay_loop 5
  alpha           # All servers are down on start
  omega           # Execute quorum_down on shutdown
  quorum_up   "/bin/touch /etc/lb/v6-ready"
  quorum_down "/bin/rm -f /etc/lb/v6-ready"

  # First L7 load-balancer
  real_server 2001:db8::192.0.2.132 80 {
    weight 1
    HTTP_GET {
      url {
        path /healthcheck
        status_code 200
      }
      connect_timeout 2
    }
  }

  # Many others...
}

The quorum_up and quorum_down statements define the commands to be executed when the service becomes available and unavailable respectively. The /etc/lb/v6-ready file is used as a signal to ExaBGP to advertise the service IP address to the neighbor routers.
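To illustrate how ExaBGP can consume that signal, here is a hypothetical minimal “process” script: ExaBGP runs such a script and reads announce/withdraw commands from its stdout (its text API). The file path and service address come from the article; the polling loop and every name in it are my own sketch, not the author’s configuration.

```python
import os
import sys
import time

READY_FILE = "/etc/lb/v6-ready"  # written/removed by Keepalived's quorum scripts
ROUTE = "2001:db8::198.51.100.1/128"

def command_for(ready: bool, announced: bool):
    """Return the ExaBGP command to emit on a state change, or None."""
    if ready and not announced:
        return f"announce route {ROUTE} next-hop self"
    if not ready and announced:
        return f"withdraw route {ROUTE} next-hop self"
    return None

def main():
    announced = False
    while True:
        cmd = command_for(os.path.exists(READY_FILE), announced)
        if cmd:
            # ExaBGP picks up commands written to this process's stdout.
            sys.stdout.write(cmd + "\n")
            sys.stdout.flush()
            announced = not announced
        time.sleep(1)

# main() would be registered as a "process" in exabgp.conf and left running.
```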

Additionally, IPVS needs to be configured to continue routing packets for a flow that was moved from another L4 load-balancer. It should also continue routing packets to destinations that have become unavailable, to ensure we can properly drain an L7 load-balancer.

# Schedule non-SYN packets
sysctl -qw net.ipv4.vs.sloppy_tcp=1
# Do NOT reschedule a connection when destination
# doesn't exist anymore
sysctl -qw net.ipv4.vs.expire_nodest_conn=0
sysctl -qw net.ipv4.vs.expire_quiescent_template=0

The Maglev scheduling algorithm will be available with Linux 4.18, thanks to Inju Song. For older kernels, I have prepared a backport.6 Use of source hashing as a scheduling algorithm will hurt the resilience of the setup.

DSR is implemented using the tunnel mode. This method is compatible with routed datacenters and cloud environments. Requests are tunneled to the scheduled peer using IPIP encapsulation. It adds a small overhead and may lead to MTU issues. If possible, ensure you are using a larger MTU for communication between the second and the third tier.7 Otherwise, it is better to explicitly allow fragmentation of IP packets:

sysctl -qw net.ipv4.vs.pmtu_disc=0

You also need to configure the L7 load-balancers to handle encapsulated traffic:8

# Setup IPIP tunnel to accept packets from any source
ip tunnel add tunlv6 mode ip6ip6 local 2001:db8::192.0.2.132
ip link set up dev tunlv6
ip addr add 2001:db8::198.51.100.1/128 dev tunlv6

Evaluation of the resilience🔗

As configured, the second tier increases the resilience of this setup for two reasons:

  1. The scheduling algorithm uses a consistent hash to choose its destination. Such an algorithm reduces the negative impact of expected or unexpected changes by minimizing the number of flows moving to a new destination. “Consistent Hashing: Algorithmic Tradeoffs” offers more details on this subject.

  2. IPVS keeps a local connection table for known flows. When a change impacts only the third tier, existing flows will be correctly directed according to the connection table.
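The difference consistent hashing makes is easy to quantify with a toy simulation. The sketch below uses a simple hash ring rather than Maglev (any consistent scheme exhibits the effect): removing one of five L7 load-balancers remaps only a minority of flows, while naive modulo hashing remaps most of them. All names and numbers are illustrative.

```python
import hashlib
from bisect import bisect

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def mod_assign(key, backends):
    # Naive scheme: hash modulo the number of backends.
    return backends[h(key) % len(backends)]

class Ring:
    """A minimal consistent-hash ring with virtual nodes."""
    def __init__(self, backends, vnodes=100):
        self.points = sorted((h(f"{b}#{i}"), b)
                             for b in backends for i in range(vnodes))
        self.keys = [p for p, _ in self.points]
    def assign(self, key):
        i = bisect(self.keys, h(key)) % len(self.points)
        return self.points[i][1]

backends = [f"lb{i}" for i in range(5)]
flows = [f"flow-{i}" for i in range(1000)]

before_mod = {f: mod_assign(f, backends) for f in flows}
ring = Ring(backends)
before_ring = {f: ring.assign(f) for f in flows}

smaller = backends[:-1]  # one L7 load-balancer goes away
ring_small = Ring(smaller)
moved_mod = sum(mod_assign(f, smaller) != before_mod[f] for f in flows)
moved_ring = sum(ring_small.assign(f) != before_ring[f] for f in flows)
# moved_ring is roughly a fifth of the flows; moved_mod is most of them.
```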

If we add or remove an L4 load-balancer, existing flows are not impacted because each load-balancer takes the same decision, as long as they all see the same set of L7 load-balancers:

L4 load-balancing instability 1/3
Losing an L4 load-balancer has no impact on existing flows. Each arrow is an example of a flow. The dots are flow endpoints bound to the associated load-balancer. If they had moved to another load-balancer, the connections would have been lost.

If we add an L7 load-balancer, existing flows are not impacted either because only new connections will be scheduled to it. For existing connections, IPVS will look at its local connection table and continue to forward packets to the original destination. Similarly, if we remove an L7 load-balancer, only existing flows terminating at this load-balancer are impacted. Other existing connections will be forwarded correctly:

L4 load-balancing instability 2/3
Losing an L7 load-balancer only impacts the flows bound to it.

We need simultaneous changes on both levels to get a noticeable impact. For example, when adding both an L4 load-balancer and an L7 load-balancer, only connections moved to an L4 load-balancer without state and scheduled to the new load-balancer will be broken. Thanks to the consistent hashing algorithm, other connections will stay bound to the right L7 load-balancer. During a planned change, this disruption can be minimized by adding the new L4 load-balancers first, waiting a few minutes, then adding the new L7 load-balancers.

L4 load-balancing instability 3/3
Both an L4 load-balancer and an L7 load-balancer come back to life. The consistent hash algorithm ensures that only one fifth of the existing connections would be moved to the incoming L7 load-balancer. Some of them continue to be routed through their original L4 load-balancer, which mitigates the impact.

Additionally, IPVS correctly routes ICMP messages to the same L7 load-balancers as the associated connections. Notably, this ensures path MTU discovery works without the need for smart workarounds.

Tier 0: DNS load-balancing🔗

Optionally, you can add DNS load-balancing to the mix. This is useful either if your setup spans multiple datacenters or cloud regions, or if you want to break a large load-balancing cluster into smaller ones. It is not intended to replace the first tier as it doesn’t share the same characteristics: load-balancing is unfair (it is not flow-based) and recovery from a failure is slow.

Complete load-balancing solution
A complete load-balancing solution spanning two datacenters.

gdnsd is an authoritative-only DNS server with integrated healthchecking. It can serve zones from master files using the RFC 1035 zone format:

@ SOA ns1 ns1.example.org. 1 7200 1800 259200 900
@ NS ns1.example.com.
@ NS ns1.example.net.
@ MX 10 smtp

@     60 DYNA multifo!web
www   60 DYNA multifo!web
smtp     A    198.51.100.99

The special RR type DYNA will return A and AAAA records after querying the specified plugin. Here, the multifo plugin implements an all-active failover of monitored addresses:

service_types => {
  web => {
    plugin => http_status
    url_path => /healthcheck
    down_thresh => 5
    interval => 5
  }
  ext => {
    plugin => extfile
    file => /etc/lb/ext
    def_down => false
  }
}

plugins => {
  multifo => {
    web => {
      service_types => [ ext, web ]
      addrs_v4 => [ 198.51.100.1, 198.51.100.2 ]
      addrs_v6 => [ 2001:db8::198.51.100.1, 2001:db8::198.51.100.2 ]
    }
  }
}

In the nominal state, an A request will be answered with both 198.51.100.1 and 198.51.100.2. A healthcheck failure will update the returned set accordingly. It is also possible to administratively remove an entry by modifying the /etc/lb/ext file. For example, with the following content, 198.51.100.2 will not be advertised anymore:

198.51.100.1 => UP
198.51.100.2 => DOWN
2001:db8::c633:6401 => UP
2001:db8::c633:6402 => UP
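The plugin’s decision can be modelled in a few lines: an address stays in the answer set only while its healthcheck passes, and the extfile acts as an administrative gate on top. This is a hypothetical Python model of the behaviour described above, not gdnsd code; all function names are mine.

```python
def parse_extfile(text: str) -> dict:
    # Parse extfile lines of the form "addr => STATE".
    admin = {}
    for line in text.splitlines():
        if "=>" in line:
            addr, state = (part.strip() for part in line.split("=>", 1))
            admin[addr] = state
    return admin

def answer_set(addresses, healthy, admin):
    # An address is advertised only if its healthcheck passes
    # and it is not administratively forced DOWN.
    return [a for a in addresses
            if healthy.get(a, False) and admin.get(a, "UP") == "UP"]
```

With the extfile content above, 198.51.100.2 drops out of the answer set even while its HTTP healthcheck still succeeds.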

You can find all the configuration files and the setup of each tier in the GitHub repository. If you want to replicate this setup at a smaller scale, it is possible to collapse the second and the third tiers by using either localnode or network namespaces. Even if you don’t need its fancy load-balancing services, you should keep the last tier: while backend servers come and go, the L7 load-balancers bring stability, which translates to resiliency.


  1. In this article, “backend servers” are the servers behind the load-balancing layer. To avoid confusion, we will not use the term “frontend.” ↩︎

  2. A good summary of the paper is available from Adrian Colyer. From the same author, you may also have a look at the summary for “Stateless datacenter load-balancing with Beamer.” ↩︎

  3. If you feel this solution is fragile, feel free to develop your own agent. It could coordinate with a key-value store to determine the wanted state of the server. It is possible to centralize the agent in a single location, but you may get a chicken-and-egg problem to ensure its availability. ↩︎

  4. A flow is usually determined by the source and destination IP addresses and the L4 protocol. Alternatively, the source and destination ports can also be used. The router hashes this information to choose the destination. For Linux, you may find more information on this topic in “Celebrating ECMP in Linux.” ↩︎

  5. On Linux, it can be implemented by using Netfilter for load-balancing and conntrackd to synchronize state. IPVS only provides active/backup synchronization. ↩︎

  6. The backport is not strictly equivalent to its original version. Be sure to check the README file to understand the differences. Briefly, in Keepalived configuration, you should:

    • not use inhibit_on_failure
    • use sh-port
    • not use sh-fallback

    ↩︎

  7. At least 1520 for IPv4 and 1540 for IPv6. ↩︎

  8. As is, this configuration is insecure. You need to ensure only the L4 load-balancers will be able to send IPIP traffic. ↩︎

Planet DebianJoachim Breitner: The diameter of German+English

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also be cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different things. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: Removing attributes like [ugs.] [regional], {f}, the to in front of verbs and other embellishment.

  3. I created the undirected, bipartite graph of all these words. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find a diameter. The diameter of a graph is the longest shortest path, i.e. the greatest distance between two nodes that cannot be connected by any shorter path.
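At toy scale the idea looks like this: model the dictionary as an adjacency map and run breadth-first search. The double-sweep heuristic below (BFS from any node, then BFS again from the farthest node found) cheaply gives a good lower bound on the diameter; an exact answer needs BFS from every node. The four-word mini-graph is made up for illustration.

```python
from collections import deque

def bfs_farthest(graph, start):
    """Return (node, distance) for the farthest node reachable from start."""
    dist = {start: 0}
    far = start
    q = deque([start])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                if dist[v] > dist[far]:
                    far = v
                q.append(v)
    return far, dist[far]

# Tiny bipartite translation graph (German <-> English), made-up edges:
graph = {
    "frech": ["fresh"], "fresh": ["frech", "frisch"],
    "frisch": ["fresh", "cool"], "cool": ["frisch"],
}

a, _ = bfs_farthest(graph, "frech")   # first sweep
b, d = bfs_farthest(graph, a)         # second sweep: d is a diameter lower bound
```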

Because the graph is big (and my code maybe not fully optimized), it ran for a few hours, but here it is: The English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if there exists one. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).

Or I could throw in additional languages!

What would you do?

,

Planet DebianJonathan McDowell: Home Automation: Graphing MQTT sensor data

So I’ve setup a MQTT broker and I’m feeding it temperature data. How do I actually make use of this data? Turns out collectd has an MQTT plugin, so I went about setting it up to record temperature over time.

First problem was that although the plugin supports MQTT/TLS it doesn’t support it for subscriptions until 5.8, so I had to backport the fix to the 5.7.1 packages my main collectd host is running.

The other problem is that collectd is picky about the format it accepts for incoming data. The topic name should be of the format <host>/<plugin>-<plugin_instance>/<type>-<type_instance> and the data is <unixtime>:<value>. I modified my MQTT temperature reporter to publish to collectd/mqtt-host/mqtt/temperature-study, changed the publish line to include the timestamp:

publish.single(pub_topic, str(time.time()) + ':' + str(temp),
            hostname=Broker, port=8883,
            auth=auth, tls={})

and added a new collectd user to the Mosquitto configuration:

mosquitto_passwd -b /etc/mosquitto/mosquitto.users collectd collectdpass

And granted it read-only access to the collectd/ prefix via /etc/mosquitto/mosquitto.acl:

user collectd
topic read collectd/#

(I also created an mqtt-temp user with write access to that prefix for the Python script to connect to.)
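Getting the topic and payload shapes right is the fiddly part, so it helps to keep them in one place. A small sketch of the convention described above (the helper names are mine, not collectd’s):

```python
import time

def collectd_topic(prefix, host, plugin, type_,
                   plugin_instance=None, type_instance=None):
    # collectd's MQTT plugin expects topics of the form:
    #   <prefix>/<host>/<plugin>[-<plugin_instance>]/<type>[-<type_instance>]
    def part(name, instance):
        return f"{name}-{instance}" if instance else name
    return "/".join([prefix, host,
                     part(plugin, plugin_instance),
                     part(type_, type_instance)])

def collectd_payload(value, now=None):
    # ...with a payload of <unixtime>:<value>
    return f"{time.time() if now is None else now}:{value}"
```

With these, the publish call becomes e.g. publish.single(collectd_topic("collectd", "mqtt-host", "mqtt", "temperature", type_instance="study"), collectd_payload(temp), ...).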

Then, on the collectd host, I created /etc/collectd/collectd.conf.d/mqtt.conf containing:

LoadPlugin mqtt

<Plugin "mqtt">
        <Subscribe "ha">
                Host "mqtt-host"
                Port "8883"
                User "collectd"
                Password "collectdpass"
                CACert "/etc/ssl/certs/ca-certificates.crt"
                Topic "collectd/#"
        </Subscribe>
</Plugin>

I had some initial problems when I tried setting CACert to the Let’s Encrypt certificate; it actually wants to point to the “DST Root CA X3” certificate that signs that. Or using the full set of installed root certificates as I’ve done works too. Of course the errors you get back are just of the form:

collectd[8853]: mqtt plugin: mosquitto_loop failed: A TLS error occurred.

which is far from helpful. Once that was sorted collectd started happily receiving data via MQTT and producing graphs for me:

Study temperature

This is a pretty long-winded way of ending up with some temperature graphs - I could have just graphed the temperature sensor using collectd on the Pi to send it to the monitoring host, but it has allowed a simple MQTT broker, publisher + subscriber setup with TLS and authentication to be constructed and confirmed as working.

Planet DebianEddy Petrișor: rust for cortex-m7 baremetal

This is a reminder for myself: if you want to install Rust for a bare-metal Cortex-M7 target, note that this is a tier 3 platform:

https://forge.rust-lang.org/platform-support.html

Highlighting the relevant part:

Target                  std  rustc  cargo  notes
...
msp430-none-elf          *                 16-bit MSP430 microcontrollers
sparc64-unknown-netbsd                     NetBSD/sparc64
thumbv6m-none-eabi       *                 Bare Cortex-M0, M0+, M1
thumbv7em-none-eabi      *                 Bare Cortex-M4, M7
thumbv7em-none-eabihf    *                 Bare Cortex-M4F, M7F, FPU, hardfloat
thumbv7m-none-eabi       *                 Bare Cortex-M3
...
x86_64-unknown-openbsd                     64-bit OpenBSD

In order to enable the relevant support, use the nightly build and add the relevant target:
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
If not using nightly, switch to that:

eddy@feodora:~/usr/src/rust-uc$ rustup default nightly-x86_64-unknown-linux-gnu
info: using existing install for 'nightly-x86_64-unknown-linux-gnu'
info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Add the needed target:
eddy@feodora:~/usr/src/rust-uc$ rustup target add thumbv7em-none-eabi
info: downloading component 'rust-std' for 'thumbv7em-none-eabi'
  5.4 MiB /   5.4 MiB (100 %)   5.1 MiB/s ETA:   0 s               
info: installing component 'rust-std' for 'thumbv7em-none-eabi'
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

installed targets for active toolchain
--------------------------------------

thumbv7em-none-eabi
x86_64-unknown-linux-gnu

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Then compile with --target.

Cory DoctorowWhere to find me at Phoenix Comics Fest this week

I’m heading to Phoenix Comics Fest tomorrow (going straight to the airport from my daughter’s elementary school graduation) (!), and I’ve got a busy schedule so I thought I’d produce a comprehensive list of the places you can find me in Phoenix:


Wednesday, May 23: Elevenageddon at Poisoned Pen books, 4014 N Goldwater Blvd, Scottsdale, AZ 85251, 7-8PM (“A Multi-Author Sci-Fi Event”)

Thursday, May 24:

Transhumans and Transhumanism in Fiction, North 126AB, with Emily Devenport and Sylvain Neuvel, 12PM-1PM

Prophets of Sci-Fi, North 125AB, with Emily Devenport, Sylvain Neuvel and John Scalzi, 3PM-4PM

Tor Authors Signing, Exhibitor Hall Author Signing area, 4:30PM-530PM

Building a Franken-Book, North 126C, with Bob Beard, Joey Eschrich and Ed Finn


Friday, May 25:

Two Truths and a Lie, North 122ABC, with Myke Cole, Emily Devenport, K Arsenault Rivera and John Scalzi, 1030AM-1130AM

Solo Presentation, North 122ABC, 1:30PM-2:30PM

Signing, Exhibitor Hall Author Signing Area, 3PM-4PM

Saturday, May 26:

Cory Doctorow & John Scalzi in Conversation about Politics in Sci Fi and Fantasy, North 125AB, 12PM-1PM

Signing, North 124AB, 1:15PM-2:15PM

Rondam RamblingsA quantum mechanics puzzle, part drei

[This post is the third part of a series.  You should read parts one and two before reading this or it won't make any sense.] So we have two more cases to consider: Case 3: we pulse the laser with very short pulses, emitting only one photon at a time.  This is actually not possible with a laser, but it is possible with something like this single-photon-emitting light source (which was actually

Krebs on SecurityMobile Giants: Please Don’t Share the Where

Your mobile phone is giving away your approximate location all day long. This isn’t exactly a secret: It has to share this data with your mobile provider constantly to provide better call quality and to route any emergency 911 calls straight to your location. But now, the major mobile providers in the United States — AT&T, Sprint, T-Mobile and Verizon — are selling this location information to third party companies — in real time — without your consent or a court order, and with apparently zero accountability for how this data will be used, stored, shared or protected.

Think about what’s at stake in a world where anyone can track your location at any time and in real-time. Right now, the only thing you can do to be free of constant tracking is remove the SIM card from your mobile device and never put it back in unless you want people to know where you are.

It may be tough to put a price on one’s location privacy, but here’s something of which you can be sure: The mobile carriers are selling data about where you are at any time, without your consent, to third-parties for probably far less than you might be willing to pay to secure it.

The problem is that as long as anyone but the phone companies and law enforcement agencies with a valid court order can access this data, it is always going to be at extremely high risk of being hacked, stolen and misused.

Consider just two recent examples. Earlier this month The New York Times reported that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks. Then it emerged that Securus had been hacked, its database of hundreds of law enforcement officer usernames and passwords plundered. We also found out that Securus’ data was ultimately obtained from a California-based location tracking firm LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

Xiao said it took him all of about 15 minutes to discover that LocationSmart’s lookup tool could be used to track the location of virtually any mobile phone user in the United States.

Securus seems equally clueless about protecting the priceless data to which it was entrusted by LocationSmart. Over the weekend KrebsOnSecurity discovered that someone — almost certainly a security professional employed by Securus — has been uploading dozens of emails, PDFs, password lists and other files to Virustotal.com — a service owned by Google that can be used to scan any submitted file against dozens of commercial antivirus tools.

Antivirus companies willingly participate in Virustotal because it gives them early access to new, potentially malicious files being spewed by cybercriminals online. Virustotal users can submit suspicious files of all kinds; in return they’ll see whether any of the 60+ antivirus tools think the file is bad or benign.

One basic rule that all Virustotal users need to understand is that any file submitted to Virustotal is also available to customers who purchase access to the service’s file repository. Nevertheless, for the past two years someone at Securus has been submitting a great deal of information about the company’s operations to Virustotal, including copies of internal emails and PDFs about visitation policies at a number of local and state prisons and jails that made up much of Securus’ business.

Some of the many, many files uploaded to Virustotal.com over the years by someone at Securus Technologies.

One of the files, submitted on April 27, 2018, is titled “38k user pass microsemi.com – joomla_production.mic_users_blockedData.txt”.  This file includes the names and what appear to be hashed/scrambled passwords of some 38,000 accounts — supposedly taken from Microsemi, a company that’s been called the largest U.S. commercial supplier of military and aerospace semiconductor equipment.

Many of the usernames in that file do map back to names of current and former employees at Microsemi. KrebsOnSecurity shared a copy of the database with Microsemi, but has not yet received a reply. Securus also has not responded to requests for comment.

These files that someone at Securus apparently submitted regularly to Virustotal also provide something of an internal roadmap of Securus’ business dealings, revealing the names and login pages for several police departments and jails across the country, such as the Travis County Jail site’s Web page to access Securus’ data.

Check out the screen shot below. Notice that forgot password link there? Clicking that prompts the visitor to enter their username and to select a “security question” to answer. There are but three questions: “What is your pet’s name? What is your favorite color? And what town were you born in?” There don’t appear to be any limits on the number of times one can attempt to answer a secret question.

Choose wisely and you, too, could gain the ability to look up anyone’s precise mobile location.

Given such robust, state-of-the-art security, how long do you think it would take for someone to figure out how to reset the password for any authorized user at Securus’ Travis County Jail portal?

Yes, companies like Securus and Location Smart have been careless with securing our prized location data, but why should they care if their paying customers are happy and the real-time data feeds from the mobile industry keep flowing?

No, the real blame for this sorry state of affairs comes down to AT&T, Sprint, T-Mobile and Verizon. T-Mobile was the only one of the four major providers that admitted providing Securus and LocationSmart with the ability to perform real-time location lookups on their customers. The other three carriers declined to confirm or deny that they did business with either company.

As noted in my story last Thursday, LocationSmart included the logos of the four carriers on their home page — in addition to those of several other major firms (that information is no longer available on the company’s site, but it can still be viewed by visiting this historic record of it over at the Internet Archive).

Now, don’t think for a second that these two tiny companies are the only ones with permission from the mobile giants to look up such sensitive information on demand. At a minimum, each one of these companies can in theory resell (or leak) this information and access to others. On 15 May, ZDNet reported that Securus was getting its data from the carriers by going through an intermediary: 3Cinteractive, which was getting it from LocationSmart.

However, it is interesting that the first insight we got that the mobile firms were being so promiscuous with our private location data came in the Times story about law enforcement officials seeking the ability to access any mobile device’s location data in real time.

All technologies are double-edged swords, which means that each can be used both for good and malicious ends. As much as police officers may wish to avoid the hassle and time constraints of having to get a warrant to determine the precise location of anyone they please whenever they wish, those same law enforcement officers should remember that this technology works both ways: It also can just as easily be abused by criminals to track the real-time movements of police and their families, informants, jurors, witnesses and even judges.

Consider the damage that organized crime syndicates — human traffickers, drug smugglers and money launderers — could inflict armed with an app that displays the precise location of every uniformed officer from within 300 ft to across the country. All because they just happened to know the cell phone number tied to each law enforcement official.

Maybe you have children or grandchildren who — like many of their peers these days — carry a mobile device at all times for safety and for quick communication with parents or guardians. Now imagine that anyone in the world has the instant capability to track where your kid is at any time of day. All they’d need is your kid’s digits.

Maybe you’re the current or former target of a stalker, jilted ex-spouse, or vengeful co-worker. Perhaps you perform sensitive work for the government. All of the above-mentioned parties and many more are put at heightened personal risk by having their real-time location data exposed to commercial third parties.

Some people might never sell their location data for any price: I suspect most of us would like this information always to be private unless and until we change the defaults (either in a binary “on/off” way or app-specific). On the other end of the spectrum there are probably plenty of people who don’t care one way or another provided that sharing their location information brings them some real or perceived financial or commercial benefit.

The point is, for many of us location privacy is priceless because, without it, almost everything else we’re doing to safeguard our privacy goes out the window.

And this sad reality will persist until the mobile providers state unequivocally that they will no longer sell or share customer location data without having received and validated some kind of legal obligation — such as a court-ordered subpoena.

But even that won’t be enough, because companies can and do change their policies all the time without warning or recourse (witness the current reality). It won’t be enough until lawmakers in this Congress step up and do their jobs — to prevent the mobile providers from selling our last remaining bastion of privacy in the free world to third party companies who simply can’t or won’t keep it secure.

The next post in this series will examine how we got here, and what Congress and federal regulators have done and might do to rectify the situation.

Update, May 23, 12:34 am ET: Securus responded with the following comment:

“Securus Technologies does not use the Google tool, Virustotal.com as part of our normal business practice for confidential information.  We use other antivirus tools that meet our high standards for security and reliability.  Importantly, Virustotal.com will associate a file with a URL or domain merely because the URL or domain is included in the file.  Our initial review concluded that the overwhelming majority of files that Virustotal.com associates with www.securustech.net were not uploaded by Securus.  Our review also showed that a few employees accessed the site in an abundance of caution to verify that outside emails were virus free.  As a result, many of the files indicated in your article were not directly uploaded by Securus and/or are not Securus documents. A vast majority of files merely mention our URL.  Our review also determined that the Microsemi file mentioned in your article is only associated with Securus because two Securus employee email addresses were included in the file, and not because Securus uploaded the file.”

“Because we take the security of information very seriously, we are continuing to look into this matter to ensure proper procedures are followed to protect company and client information. We will update you if we learn that procedures were not followed.”

CryptogramAnother Spectre-Like CPU Vulnerability

Google and Microsoft researchers have disclosed another Spectre-like CPU side-channel vulnerability, called "Speculative Store Bypass." Like the others, the fix will slow the CPU down.

The German tech site Heise reports that more are coming.

I'm not surprised. Writing about Spectre and Meltdown in January, I predicted that we'll be seeing a lot more of these sorts of vulnerabilities.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown.

I still predict that we'll be seeing lots more of these in the coming months and years, as we learn more about this class of vulnerabilities.

Cory DoctorowThe paperback of Walkaway is out today, along with reissues of all my adult novels in matching covers!

Today marks the release of the paperback of Walkaway, along with reissues of my five other adult novels, all in matching covers designed by the incredible Will Staehle (and if ebooks are your thing, check out my fair-trade ebook store, where you can get all my audiobooks and ebooks sold on the same terms as physical editions, with no DRM and no license agreements!).

Worse Than FailureRepresentative Line: Aggregation of Concatenation

A few years back, JSON crossed the “really good hammer” threshold. It has a good balance of being human readable, relatively compact, and simple to parse. It thus has become the go-to format for everything. “KoHHeKT” inherited a service which generates some JSON from an in-memory tree structure. This is exactly the kind of situation where JSON shines, and it would be trivial to employ one of the many JSON serialization libraries available for C# to generate JSON on demand.

Orrrrr… you could use LINQ aggregations, string formatting and trims…

private static string GetChildrenValue(int childrenCount)
{
        string result = Enumerable.Range(0, childrenCount).Aggregate("", (s, i) => s + $"\"{i}\",");
        return $"[{result.TrimEnd(',')}]";
}

Now, the concatenation and trims and all of that is bad. But I’m mostly stumped by what this method is supposed to accomplish. It’s called GetChildrenValue, but it doesn’t return a value- it returns an array of the numbers from 0 up to childrenCount. Well, not an array, obviously- a string that can be parsed into an array. And they’re not actually numbers- they’re enclosed in quotes, so it’s actually text, not that any JavaScript client would care about the difference.

Why? How is this consumed? KoHHeKT couldn’t tell us, and we certainly aren’t going to figure it out from this block. But it is representative of the entire JSON constructing library- aggregations and concatenations with minimal exception handling and no way to confirm that it outputs syntactically valid JSON, because nothing sanitizes its inputs.
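For contrast, here is what the serializer-based approach looks like. This is a sketch in Python rather than the original C#, with `json.dumps` standing in for one of the .NET JSON libraries the article mentions; `get_children_value_by_hand` mirrors what the aggregation/trim code actually builds.

```python
import json

# What the C# helper effectively builds by hand: a JSON array of the
# child indices as quoted strings, e.g. '["0","1","2"]'.
def get_children_value_by_hand(children_count):
    result = "".join('"%d",' % i for i in range(children_count))
    return "[" + result.rstrip(",") + "]"

# The same output from a real serializer: no trimming, no manual
# quoting, and arbitrary values would be escaped correctly.
def get_children_value(children_count):
    return json.dumps([str(i) for i in range(children_count)],
                      separators=(",", ":"))

assert get_children_value_by_hand(3) == get_children_value(3) == '["0","1","2"]'
assert get_children_value(0) == "[]"
```

The serializer version also does the right thing for the empty case and for any value containing quotes or backslashes, which the hand-rolled version never had to prove.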


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #160

Here’s what happened in the Reproducible Builds effort between Sunday May 13 and Saturday May 19 2018:

Packages reviewed and fixed, and bugs filed

In addition, build failure bugs were reported by Adrian Bunk (2) and Gilles Filippini (1).

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages.

reprotest development

reprotest is our tool to build software and check it for reproducibility.

  • kpcyrd:
  • Chris Lamb:
    • Update references to Alioth now that the repository has migrated to Salsa. (1, 2, 3)

jenkins.debian.net development

There were a number of changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Levente Polyak and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaOpenSTEM: Nellie Bly – investigative journalist extraordinaire!

May is the birth month of Elizabeth Cochrane Seaman, better known as “Nellie Bly“. Here at OpenSTEM, we have a great fondness for Nellie Bly – an intrepid 19th century journalist and explorer, who emulated Jules Verne’s fictional character, Phileas Fogg, in racing around the world in less than 80 days in 1889/1890. Not only […]

,

Planet DebianDima Kogan: More Vnlog demos

More demos of vnlog and feedgnuplot usage! This is pretty pointless, but should be a decent demo of the tools at least. This is a demo, not documentation, so for usage details consult the normal docs.

Each Wednesday night I join a group bike ride. This is an organized affair, and each week an email precedes the ride, very roughly describing the route. The two organizers alternate leading the ride each week, and consequently the emails alternate also. I was getting the feeling that some of the announcements show up in my mailbox more punctually than others, and after a recent 20-minutes-before-the-ride email, I decided this just had to be quantified.

The emails all go to a google-group email. The google-groups people are a wheel-reinventing bunch, so talking to the archive can't be done with normal tools (NNTP? mbox files? No?). A brief search revealed somebody's home-grown tool to programmatically grab the archive:

https://github.com/icy/google-group-crawler.git

The docs look funny, but are actually correct: you really do run the script to download stuff and generate another script, and then run that script to download the rest of the stuff.

Anyway, I used that tool to grab all the emails that are available. Then I wrote a quick/dirty script to parse out the data I care about and dump everything into a vnlog:

#!/usr/bin/perl
use strict;
use warnings;

use feature ':5.10';

my %daysofweek = ('Mon' => 0,
                  'Tue' => 1,
                  'Wed' => 2,
                  'Thu' => 3,
                  'Fri' => 4,
                  'Sat' => 5,
                  'Sun' => 6);
my %months = ('Jan' => 1,
              'Feb' => 2,
              'Mar' => 3,
              'Apr' => 4,
              'May' => 5,
              'Jun' => 6,
              'Jul' => 7,
              'Aug' => 8,
              'Sep' => 9,
              'Oct' => 10,
              'Nov' => 11,
              'Dec' => 12);


say '# path ridenum who whenwedh date wordcount subject';

for my $path (<mbox/m.*>)
{
    my ($ridenum,$who,$date,$whenwedh,$subject);

    my $wordcount = 0;
    my $inbody    = undef;

    open FD, '<', $path or die "Couldn't open '$path': $!";
    while(<FD>)
    {
        if( !$inbody && /^From: *(.*?)\s*$/ )
        {
            $who = $1;
            if(   $who =~ /sean/i)   { $who = 'sean'; }
            elsif($who =~ /nathan/i) { $who = 'nathan'; }
            else                     { $who = 'other'; }
        }
        if( !$inbody &&
            /^Subject: \s*
             (?:=\?UTF-8\?Q\?)?
             (.*?) \s* $/x )
        {
            $subject = $1;
            ($ridenum) = $subject =~ /^(?: \# | (?:=\?ISO-8859-1\?Q\?=23) )
                                      ([0-9]+)/x;
            $subject =~ s/[\s#]//g;
        }
        if( !$inbody && /^Date: *(.*?)\s*$/ )
        {
            $date = $1;

            my ($zone) = $date =~ / (\(.+\) | -0700 | -0800) /x;
            if( !defined $zone)
            {
                die "No timezone in: '$date'";
            }
            if( $zone !~ /PST|PDT|-0700|-0800/)
            {
                die "Unexpected timezone: '$zone'";
            }

            my ($Dayofweek,$D,$M,$Y,$h,$m,$s) = $date =~ /^(...),? +(\d+) +([a-zA-Z]+) +(20\d\d) +(\d\d):(\d\d):(\d\d)/;
            if( !(defined $Dayofweek && defined $h && defined $m && defined $s) )
            {
                die "Unparseable date '$date'";
            }
            my $dayofweek = $daysofweek{$Dayofweek} // die "Unparseable day-of-week '$Dayofweek'";

            my $t     = $dayofweek*24 + $h + ($m + $s/60)/60;
            my $twed0 = 2*24; # start of wed
            $M = $months{$M} // die "Unknown month '$M'. Line: '$_'";
            $date = sprintf('%04d%02d%02d', $Y,$M,$D);

            $whenwedh = $t - $twed0;
        }

        if( !$inbody && /^[\r\n]*$/ )
        {
            $inbody = 1;
        }
        if( $inbody )
        {
            if( /------=_Part/ || /Content-Type:/)
            {
                last if $wordcount > 0;
                $inbody = undef;
                next;
            }
            my @words = /(\w+)/g;
            $wordcount += @words;
        }
    }
    close FD;

    $who      //= '-';
    $subject  //= '-';
    $ridenum  //= '-';
    $date     //= '-';
    $whenwedh //= '-';

    say "$path $ridenum $who $whenwedh $date $wordcount $subject";
}

The script isn't important, and the resulting data is here. Now that I have a log on disk, I can do stuff with it. The first few lines of the log look like this:
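The one column whose derivation isn't obvious is whenwedh: the email's timestamp expressed in hours relative to Wednesday 00:00 of its week. A quick re-statement of the Perl computation, in Python purely for illustration:

```python
# whenwedh: the email's timestamp as hours relative to Wednesday 00:00
# of its week (Monday == day 0, matching the %daysofweek table in the
# Perl script above). Negative values mean before Wednesday.
def when_wed_h(dayofweek, hour, minute, second):
    t = dayofweek * 24 + hour + (minute + second / 60) / 60
    return t - 2 * 24   # Wednesday (day 2) starts at hour 2*24

# A Wednesday 19:30:00 email lands at +19.5 hours;
# a Tuesday 23:00:00 email lands at -1.0.
assert when_wed_h(2, 19, 30, 0) == 19.5
assert when_wed_h(1, 23, 0, 0) == -1.0
```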

dima@scrawny:~/projects/passagemining/google-group-crawler/the-passage-announcements$ < rides.vnl head

# path ridenum who whenwedh date wordcount subject
mbox/m.-EF1u5bbw5A.SywitKQ3y1sJ 265 sean 1.40722222222222 20140903 190 265-Coasting
mbox/m.-JdiiTIvyYs.Jgy_rCiwAGAJ 151 sean 18.6441666666667 20120606 199 151-FinalsWeek
mbox/m.-l6z9-1WC78.SgP3ytLsDAAJ 312 nathan 19.5394444444444 20150812 189 312-SpaceFilling
mbox/m.-vfVuoUxJ0w.FwpRRWC7EgAJ 367 nathan 18.1766666666667 20160831 164 367-Dislocation
mbox/m.-YHTEvmbIyU.HHWjbs_xpesJ 110 sean 10.9108333333333 20110810 407 110-SouslesParcs,laPoubelle
mbox/m.0__GMaUD_O8.Pjupq0AwBAAJ 404 sean 13.5255555555556 20170524 560 404-Bumped
mbox/m.0CT9ybx3uIU.sdZGwo8rSQUJ 53 sean -23.1402777777778 20100629 223 53WeInventedtheRemix
mbox/m.0FtQxCkxVHA.AjhGJ7mgAwAJ 413 nathan 20.4155555555556 20170726 178 413-GradientAssent
mbox/m.0haCNC_N2fY.bJ-93LQSFQAJ 337 nathan 57.3708333333333 20160205 479 337-TheCronutRide

I can align the columns to make it more human-readable:

dima@scrawny:~/projects/passagemining/google-group-crawler/the-passage-announcements$ < rides.vnl head | vnl-align

#             path              ridenum   who       whenwedh        date   wordcount           subject          
mbox/m.-EF1u5bbw5A.SywitKQ3y1sJ 265     sean     1.40722222222222 20140903 190       265-Coasting               
mbox/m.-JdiiTIvyYs.Jgy_rCiwAGAJ 151     sean    18.6441666666667  20120606 199       151-FinalsWeek             
mbox/m.-l6z9-1WC78.SgP3ytLsDAAJ 312     nathan  19.5394444444444  20150812 189       312-SpaceFilling           
mbox/m.-vfVuoUxJ0w.FwpRRWC7EgAJ 367     nathan  18.1766666666667  20160831 164       367-Dislocation            
mbox/m.-YHTEvmbIyU.HHWjbs_xpesJ 110     sean    10.9108333333333  20110810 407       110-SouslesParcs,laPoubelle
mbox/m.0__GMaUD_O8.Pjupq0AwBAAJ 404     sean    13.5255555555556  20170524 560       404-Bumped                 
mbox/m.0CT9ybx3uIU.sdZGwo8rSQUJ  53     sean   -23.1402777777778  20100629 223       53WeInventedtheRemix       
mbox/m.0FtQxCkxVHA.AjhGJ7mgAwAJ 413     nathan  20.4155555555556  20170726 178       413-GradientAssent         
mbox/m.0haCNC_N2fY.bJ-93LQSFQAJ 337     nathan  57.3708333333333  20160205 479       337-TheCronutRide          
dima@scrawny:~/projects/passagemining/google-group-crawler/the-passage-announcements$

If memory serves, we're at around ride 450 right now. Is that right?

$ < rides.vnl vnl-sort -nr -k ridenum | head -n2 | vnl-filter -p ridenum

# ridenum
452

Cool. This command was longer than it needed to be in order to produce nicer output. If I was exploring the dataset, I'd save keystrokes and do this instead:

$ < rides.vnl vnl-sort -nrk ridenum | head

# path ridenum who whenwedh date wordcount subject
mbox/m.7TnUbcShAz8.67KgwBGhAAAJ 452 nathan 20.7694444444444 20180502 175 452-CastingtoType
mbox/m.ej7Oz6sDzgc.bEnN04VEAQAJ 451 sean 0.780833333333334 20180425 258 451-Recovery
mbox/m.LWfydBtpd_s.35SgEJEqAgAJ 450 nathan 67.9608333333333 20180420 659 450-AnotherGreenWorld
mbox/m.3mv-Cm0EzkM.oAm3MkNYCAAJ 449 sean 17.5875 20180411 290 449-DoYouHaveRockNRoll?
mbox/m.AEV4ukSjO5U.IPlUabfEBgAJ 448 nathan 20.6138888888889 20180404 175 448-TheThirdString
mbox/m.bYTM6kgxtJs.5iHcVQKPBAAJ 447 sean 15.8355555555556 20180328 196 447-PassParticiple
mbox/m.tHMqRWp9o_Y.FQ8hFvnqCQAJ 446 nathan 20.5213888888889 20180321 139 446-Chiaroscuro
mbox/m.jr0SBsDBzgk.UHrbCv4VBQAJ 445 sean 15.3280555555556 20180314 111 445-85%
mbox/m.K2Yg_FRXuAo.SyViTwXXAQAJ 444 nathan 19.6180555555556 20180307 171 444-BackintheLoop

OK, how far back does the archive go? I do the same thing as before, but sort in the opposite order to find the earliest rides.

$ < rides.vnl vnl-sort -n -k ridenum | head -n2 | vnl-filter -p ridenum

# ridenum

Nothing. That's odd. Let me look at whole records, and at more than just the first two lines.

$ < rides.vnl vnl-sort -n -k ridenum | head | vnl-align

#             path              ridenum   who       whenwedh       date   wordcount                       subject                      
mbox/m.2gywN9pxMI4.40UBrDjnAwAJ -       nathan  17.6572222222222 20171206  95       Noridetonight;daytimeridethisSaturday!             
mbox/m.49fZsvZac_U.a0CazPinCAAJ -       sean   -34.495           20170320 463       Extraridethisweekend+Passage400save-the-date       
mbox/m.5gJd21W24vo.ICDEHrnQJvcJ -       nathan  12.1063888888889 20130619 172       NoPassageRideTonight;GalleryOpeningTomorrowNight   
mbox/m.7qEbhBWSN1U.Cx6cxYTECgAJ -       nathan  17.7891666666667 20180418 134       Noridetonight;Passage450onSaturday!                
mbox/m.DVssP4Th__4.jXzzu9clZLQJ -       sean    20.9138888888889 20101222 209       TheWrathofTlaloc                                   
mbox/m.E6etBSqEQIc.C35-SkBllHoJ -       sean    50.7575          20131220 292       Noridenextweek;seeyounextyear                      
mbox/m.GyJ16HiK8Ds.z6yNC4W5SeUJ -       sean   -11.5666666666667 20120529 228       NoRideThisWeek!...AIDS/Lifecycle...ThirdAnniversary
mbox/m.H3QGBvjeTfM.CS-xRn1WDQAJ -       sean    17.0180555555555 20171227 257       Noridetonight;nextride1/6                          
mbox/m.K2P6D_BGfYU.ve6a_8l6AAAJ -       sean    37.8166666666667 20170223 150       RemainingPassageRouteMapShirtsAvailableforPurchase

Aha. A bunch of emails aren't announcing a ride, but are announcing that there's no ride that week. Let's ignore those.

$ < rides.vnl vnl-filter -p +ridenum | vnl-sort -n -k ridenum | head -n2

# ridenum
52

Bam. So we have emails going back to ride 52. Good enough. All right. I'm aiming to create a time histogram for Sean's emails and another for Nathan's emails. What about emails that came from neither one? In theory there shouldn't be any of those, but there could be a parsing error, or who knows what.

$ < rides.vnl vnl-filter 'who == "other"'

# path ridenum who whenwedh date wordcount subject
mbox/m.A-I0_i9-YOs.QRX1P99_uiUJ 65 other 65.1413888888889 20100917 330 65-LosAngelesRidesItself+specialscreening
mbox/m.pHpzsjH7H68.O7CP_v6bcEoJ 67 other 16.5663888888889 20101006 50 67Sortition,NotSaturation

OK. Exactly 2 emails out of hundreds. That's not bad, and I'll just ignore those. Out of curiosity, what happened? Is this a parsing error?

$ grep From: $(< rides.vnl vnl-filter 'who == "other"' --eval '{print path}')

mbox/m.A-I0_i9-YOs.QRX1P99_uiUJ:From: The Passage Announcements <the-passage-...@googlegroups.com>
mbox/m.pHpzsjH7H68.O7CP_v6bcEoJ:From: The Passage Announcements <the-passage-...@googlegroups.com>

So on rides 65 and 67 "The Passage Announcements" emailed themselves. Oops. Since the ride leaders alternate, I can infer who actually sent these by looking at the few rides around them:

$ < rides.vnl vnl-filter 'ridenum > 60 && ridenum < 70' -p ridenum,who | vnl-sort -n -k ridenum

# ridenum who
61 sean
62 nathan
63 sean
64 nathan
65 other
66 nathan
67 other
68 nathan
69 sean

That's pretty conclusive: clearly these emails came from Sean. I'm still going to ignore them, though.

The ride is on Wed evening, and the emails generally come in the day or two before then. Does my data set contain any data outside this reasonable range? Hopefully very little, just like the "other" author emails.

$ < rides.vnl vnl-filter --has ridenum -p whenwedh | feedgnuplot --histo 0 --binwidth 1 --xlabel 'Hour (on Wed)' --ylabel 'Email frequency'

frequency-all.svg

The ride starts at 21:00 on Wed, and we see a nice spike immediately before. The smaller cluster prior to that is the emails that go out the night before. There's a tiny number of stragglers going out the previous day (that I'm simply going to ignore). And there're a number of emails going out after Wed. These likely announce an occasional weekend ride that I will also ignore. But let's do check. How many are there?

$ < rides.vnl vnl-filter --has ridenum 'whenwedh > 22' | wc -l

16

Looking at these manually, most are indeed weekend rides, with a small number of actual extra-early announcements for Wed. I can parse the email text more fancily to pull those out, but that's really not worth my time.

OK. I'm now ready for the main thing.

$ < rides.vnl \
    vnl-filter --has ridenum 'who != "other"' -p who,whenwedh |
    feedgnuplot --dataid --autolegend \
                --histo sean,nathan --binwidth 0.5 \
                --style sean   'with boxes fill transparent solid 0.3 border lt -1' \
                --style nathan 'with boxes fill transparent pattern 1 border lt -1' \
                --xmin -12 --xmax 24 \
                --xlabel "Time (hour)" --ylabel 'Email frequency' \
                --set 'xtics ("12\n(Tue)" -12,"16\n(Tue)" -8,"20\n(Tue)" -4,"0\n(Wed)" 0,"4\n(Wed)" 4,"8\n(Wed)" 8,"12\n(Wed)" 12,"16\n(Wed)" 16,"21\n(Wed)" 21,"0\n(Thu)" 24)' \
                --set 'arrow from 21, graph 0 to 21, graph 1 nohead lw 3 lc "red"' \
                --title "Passage email timing distribution"

frequency-zoomed.svg

This looks verbose, but most of the plotting command is there to make things look nice. When analyzing stuff, I'd omit most of that. Anyway, I can now see what I suspected: Nathan is a procrastinator! His emails almost always come in on Wed, usually an hour or two before the deadline. Sean's emails are bimodal: one set comes in on Wed afternoon, and another in the extreme early morning on Wed. Presumably he sleeps in-between.

We have more data, so we can make more pointless plots. For instance, what does the verbosity of the emails look like? Is one sender more verbose than another?

$ < rides.vnl vnl-sort -n -k ridenum |
  vnl-filter 'who != "other"' -p +ridenum,who,wordcount |
  feedgnuplot --lines --domain --dataid --autolegend \
              --xlabel 'Ride number' --ylabel 'Words per email'

verbosity_unfiltered.svg

$ < rides.vnl vnl-filter 'who != "other"' --has ridenum -p who,wordcount |
  feedgnuplot --dataid --autolegend \
              --histo sean,nathan --binwidth 20 \
              --style sean   'with boxes fill transparent solid 0.3 border lt -1' \
              --style nathan 'with boxes fill transparent pattern 1 border lt -1' \
              --xlabel "Words per email" --ylabel 'frequency' \
              --title "Passage verbosity distribution"

verbosity_histogram.svg

The time series doesn't obviously say anything, but from the histogram, it looks like Sean is a bit more verbose, maybe? What's the average?

$ < rides.vnl vnl-filter --eval 'ridenum != "-" { if(who == "sean")   { Ns++; Ws+=wordcount; }
                                                  if(who == "nathan") { Nn++; Wn+=wordcount; } }
                                 END { print "Mean verbosity sean,nathan: "Ws/Ns, Wn/Nn }'

Mean verbosity sean,nathan: 304.955 250.425

Indeed. Is the verbosity time-dependent? Is anybody getting more or less verbose over the years? The time-series plot above is pretty noisy, so it's not clear. Let's filter it to reduce the noise. We're getting into an area that's too complicated for these tools, and moving to something more substantial at this point would be warranted. But I'll do one more thing with these tools, and then stop. I can implement a half-assed filter by time-shifting the verbosity series, re-joining the shifted series, and computing the mean. I do this separately for the two email authors, and then re-combine the series. I could join these two, but simply catting the two data sets together is sufficient here.

$ < rides.vnl vnl-sort -n -k ridenum |
    vnl-filter 'who == "nathan"' --has ridenum |
    vnl-filter -p ridenum,idx=NR,wordcount > nathanrp0

$ < rides.vnl vnl-sort -n -k ridenum |
    vnl-filter 'who == "nathan"' --has ridenum |
    vnl-filter -p ridenum,idx=NR-1,wordcount > nathanrp-1

$ < rides.vnl vnl-sort -n -k ridenum |
    vnl-filter 'who == "nathan"' --has ridenum |
    vnl-filter -p ridenum,idx=NR+1,wordcount > nathanrp+1

$ ... same for Sean ...

$ cat <(vnl-join --vnl-suffix2 after --vnl-sort n -j idx \
                 <(vnl-join --vnl-suffix2 before --vnl-sort n -j idx \
                            nathanrp{0,-1}) \
                 nathanrp+1 |
        vnl-filter -p ridenum,who='"nathan"','wordcountfiltered=(wordcount+wordcountbefore+wordcountafter)/3') \
      <(vnl-join --vnl-suffix2 after --vnl-sort n -j idx \
                 <(vnl-join --vnl-suffix2 before --vnl-sort n -j idx \
                            seanrp{0,-1}) \
                 seanrp+1 |
        vnl-filter -p ridenum,who='"sean"','wordcountfiltered=(wordcount+wordcountbefore+wordcountafter)/3') |
  feedgnuplot --lines --domain --dataid --autolegend \
              --xlabel 'Ride number' --ylabel 'Words per email'

verbosity_filtered.svg

Whew. Clearly this was doable, but that's a one-liner that has clearly gotten out of hand, and pushing it further would be unwise. Looking at the data there isn't any obvious time dependence. But what you can clearly see is the extra verbiage around the round-number rides 100, 200, 300, 350, 400, etc. These were often a special weekend ride, with the email containing lots of extra instructions and such.
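The smoothing that the join pipeline implements is just a 3-point moving average; here is the same computation sketched in Python, with a made-up word-count series since the real numbers live in rides.vnl:

```python
# A made-up word-count series, one entry per ride, already sorted by
# ride number (the real data is in rides.vnl).
wordcounts = [190, 199, 189, 164, 407, 560, 223]

# For each interior ride, take the mean of the previous, current and
# next word counts. The endpoints drop out of the result, just as the
# unmatched rows drop out of the inner vnl-joins above.
def smooth3(values):
    return [(values[i - 1] + values[i] + values[i + 1]) / 3
            for i in range(1, len(values) - 1)]

smoothed = smooth3(wordcounts)
assert len(smoothed) == len(wordcounts) - 2
```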

This was all clearly a waste of time, but as a demo of vnlog workflows, this was ok.

Planet DebianDaniel Pocock: OSCAL'18 Debian, Ham, SDR and GSoC activities

Over the weekend I've been in Tirana, Albania for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00, there were Paypal fees of GBP 6.48 and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

CryptogramJapan's Directorate for Signals Intelligence

The Intercept has a long article on Japan's equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work -- even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister's inner circle, are kept in the dark about the directorate's activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK -- produced in collaboration with The Intercept -- reveals for the first time details about the inner workings of Japan's opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency's intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

Planet DebianDaniel Silverstone: Runtime typing

I have been wrestling with a problem for a little while now and thought I might send this out into the ether for others to comment upon. (Or, in other words, Dear Lazyweb…)

I am writing system which collects data from embedded computers in my car (ECUs) over the CAN bus, using the on-board diagnostics port in the vehicle. This requires me to generate packets on the CAN bus, listen to responses, including managing flow control, and then interpret the resulting byte arrays.

I have sorted everything but the last little bit of that particular data pipeline. I have a prototype which can convert the byte arrays into "raw" values by interpreting them either as bitfields and producing booleans, or as anything from an unsigned 8 bit integer to a signed 32 bit integer in either endianness. Fortunately none of the fields I'd need to interpret are floats.

This is, however, pretty clunky and nasty. Since I asked around and a majority of people would prefer that I keep the software configurable at runtime rather than doing meta-programming to describe these fields, I need to develop a way to have the data produced by reading these byte arrays (or by processing results already interpreted out of the arrays) type-checked.

As an example, one field might be the voltage of the main breaker in the car. It's represented as a 16 bit big-endian unsigned field, in tenths of a volt. So the field must be divided by ten and then given the type "volts". Another field is the current passing through that main breaker. This is a 16 bit big-endian signed value measured in tenths of an amp, so must be interpreted as such, divided by ten, and then given the type "amps". I intend for all values handled beyond the raw byte arrays themselves to simply be floats, so there'll be signedness available regardless.

What I'd like, is to later have a "computed" value, let's call it "power flow", which is the voltage multiplied by the current. Naturally this would need to be given the type 'watts'. What I'd dearly love is to build into my program the understanding that volts times amps equals watts, and then have the reader of the runtime configuration type-check the function for "power flow".

I'm working on this in Rust, though for now the language is less important than the algorithms involved in doing this (unless you know of a Rust library which will help me along). I'd dearly love it if someone out there could help me to understand the right way to handle such expression type checking without having to build up a massively complex type system.

Currently I am considering things (expressed for now in yaml) along the lines of:

- name: main_voltage
  type: volts
  expr: u16_be(raw_bmc, 14) / 10
- name: main_current
  type: amps
  expr: i16_be(raw_bmc, 12) / 10
- name: power_flow
  type: watts
  expr: main_voltage * main_current

What I'd like is for each expression to be type-checked. I'm happy for untyped scalars to end up auto-labelled (so the u16_be() function would return an untyped number which then ends up marked as volts since 10 is also untyped). However when power_flow is typechecked, it should be able to work out that the type of the expression is volts * amps which should then typecheck against watts and be accepted. Since there's also consideration needed for times, distances, booleans, etc. this is not a completely trivial thing to manage. I will know the set of valid types up-front though, so there's that at least.
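One way to get this kind of checking without a massively complex type system is classic dimensional analysis: represent each type as a vector of base-unit exponents, so that volts * amps and watts reduce to the same vector. A minimal sketch of the idea follows, in Python rather than Rust, with an invented two-base unit set just for the example:

```python
from dataclasses import dataclass

# Each unit is a vector of base-unit exponents; two bases (volt, amp)
# are enough here. Multiplying quantities adds the exponent vectors,
# dividing subtracts them, and untyped scalars are the zero vector.
@dataclass(frozen=True)
class Unit:
    volt: int = 0
    amp: int = 0

    def __mul__(self, other):
        return Unit(self.volt + other.volt, self.amp + other.amp)

    def __truediv__(self, other):
        return Unit(self.volt - other.volt, self.amp - other.amp)

SCALAR = Unit()
VOLTS = Unit(volt=1)
AMPS = Unit(amp=1)
WATTS = VOLTS * AMPS            # watts is derived: volt^1 * amp^1

# Type-checking the "power_flow" entry: infer the unit of each
# expression and compare against the declared unit.
main_voltage = VOLTS / SCALAR   # u16_be(raw_bmc, 14) / 10: untyped divisor
main_current = AMPS / SCALAR    # i16_be(raw_bmc, 12) / 10
power_flow = main_voltage * main_current
assert power_flow == WATTS      # volts * amps typechecks as watts
```

With a fixed, known-up-front set of base units, the checker never needs unification or inference beyond evaluating expressions over these vectors.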

If you have any ideas, ping me on IRC or perhaps blog a response and then drop me an email to let me know about it.

Thanks in advance.

Planet DebianSune Vuorela: Managing cooking recipes

I like to cook. And sometimes store my recipes. Over the years I have tried KRecipes, kept my recipes in BasKet notes, in KJots notes, in more or less random word processor documents.

I liked the free-form entering of recipes in various notes applications and word processor documents, but I lacked some kind of indexing for them. What I wanted was free-ish text for writing recipes, and something that could help me find them by tags I give them. By Title. By how I organize them. And maybe by Ingredient if I don’t know how to get rid of the soon-to-be-bad in my refrigerator.

Given I’m a software developer, maybe I should try to scratch my own itch. And I did in the last month and a half during some evenings. This is also where my latest Qt and modern C++ blog posts come from.

The central bit is basically a markdown viewer, and the file format is some semi structured markdown in one file per recipe. Structured in the file system however you like it.

There is a recipes index which simply is a file system view with pretty titles on top.

There is a way to insert tags into recipes.

I can find them by title.

And I can find recipes by ingredients.

Given it is plain text, it can easily be synced using Git or NextCloud or whatever solution you want for that.

You can give it a spin if you want. It lives here https://cgit.kde.org/scratch/sune/kookbook.git/. There is a blueprint for a windows installer here: https://phabricator.kde.org/D12828

There is a markdown file describing the specifics of the file format. It is not declared 100% stable yet, but I need good reasons to break stuff.

My recipe collection is in my native language Danish, so I’m not sure sharing it for demo purposes makes too much sense.

Worse Than FailureThe New Guy (Part I)

After working mind-numbing warehouse jobs for several years, Jesse was ready for a fresh start in Information Technology. The year 2015 brought him a newly-minted Computer and Networking Systems degree from Totally Legit Technical Institute. It would surely help him find gainful employment; all he had to do was find the right opportunity.

Seeking the right opportunity soon turned into seeking any opportunity. Jesse came across a posting for an IT Systems Administrator that piqued his interest, but the requirements and responsibilities left a lot to be desired. They sought someone with C++ and Microsoft Office experience who would perform "General IT Admin Work" and "Other Duties as assigned". None of those things seemed to fit together, but he applied anyway.

During the interview, it became clear that Jesse and this small company were essentially in the same boat. While he was seeking any IT employment, they were seeking any IT Systems admin. Their lone admin recently departed unexpectedly and barely left any documentation of what he actually did. Despite several red flags about the position, he decided to accept anyway. Jesse was assured of little oversight and freedom to do things his way - an extreme rarity for a young IT professional.

Jesse got to work on his first day determined to map out the minefield he was walking into. The notepad with all the admin passwords his predecessor left behind was useful for logging in to things. Over the next few days, he prodded through the network topology to uncover all the horrors that lay within. Among them:

  • The front-end of their most-used internal application was using Access 97 that interfaced with a SQL Server 2008 machine
  • The desktop computers were all using Windows XP (Half of them upgraded from NT 4.0)
  • The main file server and domain controller were still running on NT 4.0
  • There were two other mystery servers that didn't seem to perform any discernible function. Jesse confirmed this by unplugging them and leaving them off

While sorting through the tangled mess he inherited, Jesse got a high priority email from Ralph, the ancient contracted Networking Admin whom he hadn't yet had the pleasure of meeting. "U need to fix the website. FTP not working." While Ralph wasn't one for details, Jesse did learn something from him - they had a website, it used FTP for something, and it was on him to fix it.

Jesse scanned the magic password notepad and came across something called "Website admin console". He decided to give that a shot, only to be told the password was expired and needed to be reset. Unfortunately the reset email was sent to his predecessor's deactivated account. He replied to Ralph telling him he wasn't able to get to the admin console to fix anything.

All that he got in return was a ticket submitted by a customer explaining the problem and the IP address of the FTP server. It seemed they were expecting to be able to fetch PDF reports from an FTP location and were no longer able to. He went to the FTP server and didn't find anything out of the ordinary, other than the fact that it should really be using SFTP. Despite the lack of security, something was still blocking the client from accessing it.

Jesse suddenly had an idea born of inexperience for how to fix the problem. When he was having connectivity issues on his home WiFi network, all he had to do was reboot the router and it would work! That same logic could surely apply here. After tracking down the router, he found the outlet wasn't easily accessible. So he decided to hit the (factory) Reset button on the back.

Upon returning to his desk, he was greeted by nearly every user in their small office. Nobody's computer worked any more. After turning a deep shade of red, Jesse assured everyone he would fix it. He remembered something from TL Tech Institute called DNS that was supposed to let computers talk to each other. He went around and set everyone's DNS server to 192.168.1.0, the address they always used in school. It didn't help.

Jesse put in a call to Ralph and explained the situation. All he got was a lecture from the gravelly-voiced elder on the other end, "You darn kids! Why don't ye just leave things alone! I've been working networks since before there were networks! Give me a bit, I'll clean up yer dang mess!" Within minutes, Ralph managed to restore connectivity to the office. Jesse checked his DNS settings out of curiosity to find that the proper setting was 2.2.2.0.

The whole router mishap made him completely forget about the original issue - the client's FTP. Before he could start looking at it again, Ralph forwarded him an email from the customer thanking them for getting their reports back. Jesse had no idea how or why that was working now, but he was willing to accept the praise. He solved his first problem, but the fun was just beginning...

To be continued...


Planet Debian - Steve Kemp: This month has been mostly golang-based

This month has mostly been about golang. I've continued work on the protocol-tester that I recently introduced.

This has turned into a fun project, and now all my monitoring is done with it. I've simplified the operation, such that everything uses Redis for storage, and there are now new protocol-testers for finger, nntp, and more.

Sample tests are as basic as this:

  mail.steve.org.uk must run smtp
  mail.steve.org.uk must run smtp with port 587
  mail.steve.org.uk must run imaps
  https://webmail.steve.org.uk/ must run http with content 'Prayer Webmail service'
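
The test syntax above is regular enough to parse mechanically. Here is a rough sketch of such a parser, in Python for illustration only: the actual tool is written in Go, and its real grammar may well differ from this guess.

```python
import re

# Hypothetical parser for the "target must run protocol [with key value]..."
# shape shown above.  Values may be bare words or single-quoted strings.
LINE = re.compile(
    r"^(?P<target>\S+)\s+must\s+run\s+(?P<protocol>\S+)"
    r"(?P<args>(?:\s+with\s+\S+\s+(?:'[^']*'|\S+))*)\s*$"
)
ARG = re.compile(r"with\s+(\S+)\s+(?:'([^']*)'|(\S+))")

def parse_test(line):
    """Split one test line into (target, protocol, {argument: value})."""
    m = LINE.match(line)
    if not m:
        raise ValueError("unparseable test: %r" % line)
    args = {key: quoted or bare
            for key, quoted, bare in ARG.findall(m.group("args"))}
    return m.group("target"), m.group("protocol"), args

print(parse_test("mail.steve.org.uk must run smtp with port 587"))
```

A dispatcher could then look up a prober by the protocol name and pass it the argument map.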

Results are stored in a Redis queue, where they can be picked off and announced to humans via a small daemon. In my case alerts are routed to a central host via HTTP POSTs, and eventually reach me via the Pushover service.

Beyond the basic network testing though I've also reworked a bunch of code - so the markdown sharing site is now golang powered, rather than running on the previous perl-based code.

As a result of this rewrite, and a little more care, I now score 99/100 + 100/100 on Google's pagespeed testing service. A few more of my sites do the same now, thanks to inline-CSS, inline-JS, etc. Nothing I couldn't have done before, but this was a good moment to attack it.

Finally my "silly" Linux security module, for letting user-space decide if binaries should be executed, can-exec has been forward-ported to v4.16.17. No significant changes.

Over the coming weeks I'll be trying to move more stuff into the cloud, rather than self-hosting. I'm doing a lot of trial-and-error at the moment with Lambdas, containers, and dynamic routing to that end.

Interesting times.

Planet Debian - bisco: First GSoC Report

To whom it may concern, this is my report over the first few weeks of GSoC under the umbrella of the Debian project. I’m writing this on my way back from the MiniDebConf in Hamburg, which was a nice experience; maybe there will be another post about that ;)

So, the goal of my GSoC project is to design and implement a new SSO solution for Debian. But that only touches one part of the project's deliverables. As you can read in the description Alexander Wirth originally posted in the Debian Wiki, the project consists of two parts, where the first one is the design and coding of a new backend and self-service interface for Debian guest users (this includes the accounts of Debian Maintainers).

It should also allow creating and selfservice for guest users and DMs. Those users belong into their own backend and should be suffixed with -guest

So after getting in touch with my two mentors, Alexander (formorer) and Nicolas (babelouest), we talked a bit about how to communicate and organize meetings; then I started looking into possible solutions for the guest backend. This is actually the more time-critical part, as the current -guest accounts are stored on Alioth, and Alioth will be shut down at the end of May. But Alexander assured me that he will maintain the guest user database by hand for the time being, until the new -guest account solution can go into production.

Even before the official acceptance for GSoC I thought about how to implement this, and I also talked a bit about that with Alexander. The first decision to make was to choose a data store for the backend. LDAP was a candidate, but it would also have been possible to use a relational database. But LDAP is already being used in Debian in the userdir-ldap project and there is also more support for LDAP from potential existing SSO solutions, so it was an obvious choice. The second decision to make was to choose a web framework for the self-service web frontend. I already had some experience with Ruby and Rails, but there are some Django applications in the Debian ecosystem (e.g. tracker.d.o.) and I wanted to learn something new. Also I had to do a Python course at the university, so I wanted to bring the mostly theoretical knowledge to practical use.

Alexander asked me to write a design document for the guest backend, which I published a few weeks ago. Nicolas gave some feedback on the document right away, and Alexander and I reviewed the design document again this weekend during the MiniDebConf, which resulted in some additional requirements for the backend, like the support of groups.

In the few weeks after writing the design document, I looked more into the possibilities of the different LDAP Django extensions. There are two LDAP extensions in the Debian archive that allow authentication against an LDAP server (django-auth-ldap and django-python3-ldap), where the former has a slightly better popcon score. And there is also django-ldapdb, which maps the objects from LDAP to Django models; django-ldapdb was not packaged yet, but the day I wanted to create an ITP, #898750 was created and the package was uploaded a few days ago. I also started getting into Django coding itself. I went through most of the "Writing your first Django app" tutorial and started by writing simple webapps. Also, simpleisbetterthancomplex has a lot of helpful Django resources.

I then also started coding the self-service web application and had a basic prototype ready after a week. The prototype allows registering an account, which will only become active after the email address has been confirmed using a token. Activated accounts can log in and modify their profile, which at the moment only means changing the password; the next steps will be to implement a password reset feature, implement an admin interface, add some more fields to the user profile, etc…
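
The activation-token flow described above can be sketched with nothing but the standard library. This is a hypothetical illustration, not the actual nacho code: a real Django app would more likely lean on django.core.signing or the built-in token generators, and SECRET here stands in for Django's SECRET_KEY setting.

```python
import hashlib
import hmac
import time

# Hypothetical placeholder for the Django SECRET_KEY setting.
SECRET = b"replace-with-the-real-secret-key"

def make_activation_token(username, now=None):
    """Return a timestamped, HMAC-signed token to embed in a confirmation link."""
    ts = int(time.time() if now is None else now)
    sig = hmac.new(SECRET, f"{username}:{ts}".encode(), hashlib.sha256).hexdigest()
    return f"{ts}.{sig}"

def check_activation_token(username, token, max_age=86400, now=None):
    """Accept the token only if the signature matches and it has not expired."""
    try:
        ts, sig = token.split(".", 1)
        age = (time.time() if now is None else now) - int(ts)
    except ValueError:
        return False
    expected = hmac.new(SECRET, f"{username}:{ts}".encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and 0 <= age <= max_age
```

Binding the username into the signature means a token captured for one account cannot activate another, and the embedded timestamp lets the link expire without any server-side state.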

You can see some screenshots of the prototype below:

Screenshot of the login form Screenshot of the signup form Screenshot of the activation link message Screenshot of the activation email Screenshot of the 'account activated' message Screenshot of the profile page

I’ve named the webapp ‘nacho’; you can see the code in my salsa repo.

Planet Debian - Martin Pitt: De-Googling my phone, reloaded

Three weeks ago I blogged about how to get rid of non-free Google services and moving to free software on my Android phone. I’ve got a lot of feedback via email, lwn, and Google+, many thanks to all of you for helpful hints! As this is obviously important to many people, I want to tie up some loose ends and publish the results of these discussions.

Alternative apps and stores

  • Yalp is a free app that is able to search, install, and update installed apps from the Google Play Store. It doesn’t even need you to have a Google account, although you can use one to install apps you have already paid for (however, you can’t buy apps within Yalp). I actually prefer that over uptodown now.

  • I moved from FreeOTP to AndOTP. The latter offers backing up your accounts with password or GPG encryption, which is certainly much more convenient than what I’ve previously been doing: noting down the accounts and TOTP secrets in an encrypted file on my laptop.

  • We often listen to internet radio at home. I replaced the non-free ad-ware TuneIn with Transistor, a simple and free app that even has convenient launcher links for a chosen station, so it’s exactly what we want. It does not have a builtin radio station list/search, but if you care about that, take a look at RadioDroid (but that doesn’t have the convenient quick starters).
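
For context on those TOTP secrets: a one-time code is just an HMAC of the current 30-second counter (RFC 6238), so any app holding the same secret produces the same codes. A minimal sketch using only the Python standard library (illustrative only, not AndOTP's or FreeOTP's actual code):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, digits: int = 6, period: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (SHA-1 variant) for a given time."""
    counter = struct.pack(">Q", unix_time // period)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", 59))  # "287082"
```

This is also why backing those secrets up matters: lose them and every enrolled account needs its 2FA re-provisioned.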

Transport

In this area the situation is now much happier than my first post indicated. As promised I used trainline.eu for booking some tickets (both for Deutsche Bahn and also on Thalys), and indeed this does a fine job. Same price, European rebate cards like BahnCard 50 are supported, and being able to book with a lot of European train services with just one provider is really neat. However, I’m missing a lot of DB navigator’s great features: realtime information and alternatives, seat selection, car position indicator, regional tariffs, or things like “Länderticket”.

Fortunately it turns out that DB Navigator works just great with a trick: Disable the “Karte anzeigen” option in the menu, and it will immediately stop complaining about missing Play Services after each action. Also, logging in with your DB account never finishes, but after terminating and restarting the app you are logged in and everything works fine. That might be a “regular” bug or just a side effect without Play Services.

Wrt. rental bikes: citybik.es is an awesome project with a freely available API that shows available bikes on a map all over Europe. The OpenBikeSharing app uses that API on Android. That plus the ordinary Nextbike app works well enough.

microG

A lot of people pointed out microG as a free implementation of Google Play Service APIs. Indeed I did try this even before my first blog post; but I didn’t mention it as I wanted to find out which apps actually need this API.

Also, this really appears to be something for the daring: On my rooted Nexus 4 with LineageOS I didn’t get it to work, even after installing the handful of hacks that you need for signature spoofing; and I daresay that on a standard vendorized installation without root/replaced bootloader it’s outright impossible.

Fortunately there are LineageOS builds with microG included, which gets you much further. But even with that e. g. location still does not work out of the box, but one needs to hunt down and install various providers. I’ve heard from several people that they use this successfully, but as this wasn’t the point of my exercise I just gave up after that.

A really useful piece of functionality of Play Services is tracking and remote-controlling (lock, warn tone, erase) lost or stolen phones. With having backup, encryption and proper locking, a stolen phone is not the end of the world, but it’s still relatively important for me (even though I never had to actually use it yet). The only alternative that I found is Cerberus which looks quite comprehensive. It’s not free though (neither as in beer nor in speech), so unless you particularly distrust Google and are not a big company, it might just be better to keep using Play Services for this functionality.

Calendar and Contacts

I’m really happy with DAVDroid and radicale after using them for over a month. But most people don’t have a personal server to run these. etesync looks like an interesting alternative which provides the hosting for you for five coffees a year, and also offers (free) self-hosting for those who can and want to.


Planet Debian - Andrej Shadura: Porting inputplug to XCB

5 years ago I wrote inputplug, a tiny daemon which connects to your X server and monitors its input devices, running an external command each time a device is connected or disconnected.

I have used a custom keyboard layout and fairly non-standard settings for my pointing devices since 2012. It always annoyed me that those settings would be reset every time the device was disconnected and reconnected again, for example, when the laptop was brought back up from suspend. I usually solved that by putting commands to reconfigure my input settings into the resume hook scripts, but that obviously didn’t solve the case of connecting external keyboards and mice. At some point those hook scripts stopped working because they would run too early, when the keyboard and mice were not there yet, so I decided to write inputplug.

Inputplug was the first program I ever wrote which used X at a low level, and I had to use Xlib to access the low-level features I needed. More specifically, inputplug uses the XInput X extension and listens to XIHierarchyChanged events. In June 2014, Vincent Bernat contributed a patch to rely on XInput2 only.

During the MiniDebCamp, I had a typical case of yak shaving despite not having any yaks around: I wanted to migrate inputplug’s packaging from Alioth to Salsa, and I had an idea to update the package itself as well. I had an idea of adding optional systemd user session integration, and the easiest way to do that would be to have inputplug register a D-Bus service. However, if I just registered the service, introspecting it would cause annoying delays since it wouldn’t respond to any of the messages the clients would send to it. Handling messages would require me to integrate polling into the event loop, and it turned out it’s not easy to do while sticking to Xlib, so I decided to try and port inputplug to XCB.

For those unfamiliar with XCB, here’s a bit of background: XCB is a library which implements the X11 protocol and operates on a slightly lower level than Xlib. Unlike Xlib, it only works with structures which map directly to the wire protocol. The functions XCB provides are really atomic: in Xlib, it is not unusual for a function to perform multiple X transactions or to juggle the elements of the structures a bit. In XCB, most of the functions are relatively thin wrappers to enable packing and unpacking of the data. Let me give you an example.

In Xlib, if you wanted to check whether the X server supports a specific extension, you would write something like this:

XQueryExtension(display, "XInputExtension", &xi_opcode, &event, &error)

Internally, XQueryExtension would send a QueryExtension request to the X server, wait for a reply, parse the reply and return the major opcode, the first event code and the first error code.

With XCB, you need to separately send the request, receive the reply and fetch the data you need from the structure you get:

const char ext[] = "XInputExtension";

xcb_query_extension_cookie_t qe_cookie;
qe_cookie = xcb_query_extension(conn, strlen(ext), ext);

xcb_query_extension_reply_t *rep;
rep = xcb_query_extension_reply(conn, qe_cookie, NULL);

At this point, rep has its present field set to true if the extension is present. The rest of the data is in the structure as well, which you have to free yourself after use.

Things get a bit more tricky with requests returning arrays, like XIQueryDevice. Since the xcb_input_xi_query_device_reply_t structure is difficult to parse manually, XCB provides an iterator, xcb_input_xi_device_info_iterator_t, which you can use to iterate over the structure: xcb_input_xi_device_info_next does the necessary parsing and moves the pointer so that each time it is run the iterator points to the next element.

Since replies in the X protocol can have variable-length elements, e.g. device names, XCB also provides wrappers to make accessing them easier, like xcb_input_xi_device_info_name.

Most of the code of XCB is generated: there is an XML description of the X protocol which is used in the build process, and the C code to parse and generate the X protocol packets is generated each time the library is built. This means, unfortunately, that the documentation is quite useless, and there aren’t many examples online, especially if you’re going to use rarely used functions like XInput hierarchy change events.

I decided to do the porting the hard way, changing Xlib calls to XCB calls one by one, but there’s an easier way: since Xlib is now actually based on XCB, you can #include <X11/Xlib-xcb.h> and use XGetXCBConnection to get an XCB connection object corresponding to the Xlib’s Display object. Doing that means there will still be a single X connection, and you will be able to mix Xlib and XCB calls.

When porting, it often is useful to have a look at the sources of Xlib: it becomes obvious what XCB functions to use when you know what Xlib does internally (thanks to Mike Gabriel for pointing this out!).

Another thing to remember is that the constants and enums Xlib and XCB define usually have the same values (mandated by the X protocol) despite having slightly different names, so you can mix them too. For example, since inputplug passes the XInput event names to the command it runs, I decided to keep the names as Xlib defines them, and since I’m creating the corresponding strings by using a C preprocessor macro, it was easier for me to keep using XInput2.h instead of defining those strings by hand.

If you’re interested in the result of this porting effort, have a look at the code in the Mercurial repo. Unfortunately, it cannot be packaged for Debian yet since the Debian package for XCB doesn’t ship the module for XInput (see bug #733227).

P.S. Thanks again to Mike Gabriel for providing me important help — and explaining where to look for more of it ;)

Planet Debian - Sune Vuorela: Where KDEInstallDirs points to

The other day, a user of Extra CMake Modules (a collection of utilities and find modules created by KDE) asked if there was an easy way to query CMake for where the KDEInstallDirs paths point. (KDEInstallDirs is a set of default paths that mostly are good for your system, IIRC based upon GNUInstallDirs but with some extensions for various Qt, KDE and XDG common paths, as well as some cross-platform additions.) I couldn’t find an easy way of doing it without writing a couple of lines of CMake code.

Getting the KDE_INSTALL_(full_)APPDIR with default options is:

$ cmake -DTYPE=APPDIR ..
KDE_INSTALL_FULL_APPDIR:/usr/local/share/applications

and various other options can be set as well.

$ cmake -DCMAKE_INSTALL_PREFIX=/opt/mystuff -DTYPE=BINDIR ..
KDE_INSTALL_FULL_BINDIR: /opt/mystuff/bin

This is kind of simple, but let’s just share it with the world:

cmake_minimum_required(VERSION 3.0)
find_package(ECM REQUIRED)
set (CMAKE_MODULE_PATH ${ECM_MODULE_PATH})

include(KDEInstallDirs)

message("KDE_INSTALL_FULL_${TYPE}: " ${KDE_INSTALL_FULL_${TYPE}})

I don’t think it is complex enough to claim any sort of copyright, but if you insist, you can use it under one of the following licenses: CC0, Public Domain (if that’s in your jurisdiction), MIT/X11, WTFPL (any version), 3-clause BSD, GPL (any version), LGPL (any version) and .. erm. whatever.

I was trying to get it to work as a cmake -P script, but some of the find_package calls require a working CMakeCache. Comments welcome.

Planet Debian - Holger Levsen: 20180520-Debian-is-wrong

So, the MiniDebConf Hamburg 2018 is about to end, it's sunny, no clouds are visible and people seem to be happy.

And, I have time to write this blog post! So, just as a teaser for now, I'll present to you the content of some slides of our "Reproducible Buster" talk today. Watch the video!

Debian is wrong

93% is a lie. We need infrastructure, processes and policies. (And testing. Currently we only have testing and a vague goal.)

With the upcoming list of bugs (skipped here) we don't want to fingerpoint at individual teams, instead I think we can only solve this if we as Debian decide we want to solve it for buster.

I think this is not happening because people believe things have been sorted out and we take care of them. But we are not, we can't do this alone.

Debian stretch

the 'reproducibly in theory but not in practice' release

Debian buster

the 'we should be reproducible but we are not' release?

Debian bullseye

the 'we are almost there but still haven't sorted out...' release???


I rather hope for:

Debian buster

the release is still far away and we haven't frozen yet! ;-)

Planet Debian - Dirk Eddelbuettel: Rcpp 0.12.17: More small updates

Another bi-monthly update and the seventeenth release in the 0.12.* series of Rcpp landed on CRAN late on Friday, following nine (!!) days in gestation in the incoming/ directory of CRAN. And no complaints: we just wish CRAN were a little more forthcoming with what is happening when, and/or would let us help by supplying additional test information. I do run a fairly insane amount of backtests prior to releases; only to then have to wait another week or more is ... not ideal. But again, we all owe CRAN an immense amount of gratitude for all they do, and do so well.

So once more, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, the 0.12.15 release in January 2018 and the 0.12.16 release in March 2018, making it the twenty-first release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1362 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 138 in the current BioConductor release 3.7.

Compared to other releases, this release again contains a relatively small change set, but Kevin and Romain cleaned a few things up between them. Full details are below.

Changes in Rcpp version 0.12.17 (2018-05-09)

  • Changes in Rcpp API:

    • The random number Generator class no longer inherits from RNGScope (Kevin in #837 fixing #836).

    • A spurious parenthesis was removed to please gcc8 (Dirk fixing #841)

    • The optional Timer class header now undefines FALSE which was seen to have side-effects on some platforms (Romain in #847 fixing #846).

    • Optional StoragePolicy attributes now also work for string vectors (Romain in #850 fixing #849).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian - Joerg Jaspert: Mini DebConf Hamburg

Since Friday around noon, my 6-year-old son and I have been at the Mini DebConf in Hamburg. Attending together with my son is quite a different experience than attending alone or with my wife around too. Though he is doing pretty well, it mostly means the day ends for me around 21:00 when he needs to go to sleep.

Friday

Friday we had a nice train trip up here, with a change to the schedule: we needed to switch to local trains to actually get where we wanted. Still, we arrived in time for lunch, which is always good. Afterwards we first went to buy drinks for the days ahead and discovered a nice playground just around the corner.

The evening, besides dinner, consisted of chatting, hacking and getting Nils busy with something - for the times he came to me. He easily found others around and is fast at socialising with people, so: free hacking time for me.

Saturday

The day started with a little bit of a hurry, as Nils suddenly got the offer to attend a concert in the Elbphilharmonie and I had to get him over there fast. He says he liked it, even though it didn’t make much sense to him. I met him later for lunch again, followed by a visit to the playground, and then finally hacking time again.

While Nils was off looking after other conference attendees (and apparently getting ice cream too), after attending the Salsa talk I could hack on stuff, and that meant dozens of merge requests for dak got processed (waldi and lamby are on a campaign against flake8 errors, it appears).

Apropos Salsa: the GitLab instance is the best thing that has happened to Debian in terms of collaboration for a long time. It allows so much better handling of any git-related stuff; it's worlds apart from what we had before.

Holger showed Nils and me the venue, including climbing up one of the towers, quite an adventure for Nils, but a real nice view from up there.

In the evening the dak master branch was ready to get merged into our deploy branch - and as such automagically deployed on all machines where we run. It consisted of 64 commits and apparently a bug; thankfully I found a merge request from waldi to fix it in the morning.

Oh, and the most important thing: THERE HAVE BEEN PANCAKES!

Sunday

Started the morning, after breakfast, with merging the fixup for the bug, and getting it into the deploy branch. Also asked DSA to adjust group rights for the ftpteam, today we got one promotion from ftptrainee to ftpteam, everybody tell your condolences to waldi. Also added more ftptrainees as we got more volunteers, and removed inactive ones.

Soon we have to start our way back home, but I am sure to come back for another Mini Conf, if it happens again here.

Planet Linux Australia - Linux Users of Victoria (LUV) Announce: LUV June 2018 Workshop: Being an Acrobat: Linux and PDFs

Jun 16 2018 12:30
Jun 16 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Portable Document Format (PDF) is a file format first specified by Adobe Systems in 1993. It was a proprietary format until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization.

This workshop presentation will demonstrate various ways that PDF files can be efficiently manipulated in Linux and other free software, which may not be easy in proprietary operating systems or applications.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

June 16, 2018 - 12:30


Planet Linux Australia - Linux Users of Victoria (LUV) Announce: LUV June 2018 Main Meeting: VoxxedDays conference report

Jun 5 2018 18:30
Jun 5 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, June 5, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Andrew Pam, Voxxed Days conference report

Andrew will report on a conference he recently attended, covering Language-Level Virtualization with GraalVM, Aggressive Web Apps and more.

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

June 5, 2018 - 18:30

Planet Debian - Ben Hutchings: Help the Debian kernel team to help you

I gave the first talk this morning at Mini-DebConf Hamburg, titled "Help the kernel team to help you". I briefly described several ways that Debian users and developers can make it easier (or harder) for us to deal with their requests. The slides are up on my talks page, and video should be available soon.

Planet DebianAndrej Shadura: Porting inputplug to XCB

5 years ago I wrote inputplug, a tiny daemon which connects to your X server and monitors its input devices, running an external command each time a device is connected or disconnected.

I have used a custom keyboard layout and a fairly non-standard settings for my pointing devices since 2012. I always annoyed me those settings would be re-set every time the device was disconnected and reconnected again, for example, when the laptop was brought back up from the suspend mode. I usually solved that by putting commands to reconfigure my input settings into the resume hook scripts, but that obviously didn’t solve the case of connecting external keyboards and mice. At some point those hook scripts stopped to work because they would run too early when the keyboard and mice were not they yet, so I decided to write inputplug.

Inputplug was the first program I ever wrote which used X at a low level, and I had to use Xlib to access the low-level features I needed. More specifically, inputplug uses XInput X extension and listens to XIHierarchyChanged events. In June 2014, Vincent Bernat contributed a patch to rely on XInput2 only.

During the MiniDebCamp, I had a typical case of yak shaving despite not having any yaks around: I wanted to migrate inputplug’s packaging from Alioth to Salsa, and I had an idea to update the package itself as well. I had an idea of adding optional systemd user session integration, and the easiest way to do that would be to have inputplug register a D-Bus service. However, if I just registered the service, introspecting it would cause annoying delays since it wouldn’t respond to any of the messages the clients would send to it. Handling messages would require me to integrate polling into the event loop, and it turned out it’s not easy to do while sticking to Xlib, so I decided to try and port inputplug to XCB.

For those unfamiliar with XCB, here’s a bit of background: XCB is a library which implements the X11 protocol and operates on a slightly lower level than Xlib. Unlike Xlib, it only works with structures which map directly to the wire protocol. The functions XCB provides are really atomic: in Xlib, it is not unusual for a function to perform multiple X transactions or to juggle the elements of the structures a bit. In XCB, most of the functions are relatively thin wrappers that pack and unpack the data. Let me give you an example.

In Xlib, if you wanted to check whether the X server supports a specific extension, you would write something like this:

XQueryExtension(display, "XInputExtension", &xi_opcode, &event, &error)

Internally, XQueryExtension would send a QueryExtension request to the X server, wait for a reply, parse the reply and return the major opcode, the first event code and the first error code.

With XCB, you need to separately send the request, receive the reply and fetch the data you need from the structure you get:

const char ext[] = "XInputExtension";

xcb_query_extension_cookie_t qe_cookie;
qe_cookie = xcb_query_extension(conn, strlen(ext), ext);

xcb_query_extension_reply_t *rep;
rep = xcb_query_extension_reply(conn, qe_cookie, NULL);

At this point, rep has its present field set to true if the extension is present. The rest of the data is in the structure as well, and you have to free the reply yourself after use.

Things get a bit more tricky with requests returning arrays, like XIQueryDevice. Since the xcb_input_xi_query_device_reply_t structure is difficult to parse manually, XCB provides an iterator, xcb_input_xi_device_info_iterator_t, which you can use to iterate over the structure: xcb_input_xi_device_info_next does the necessary parsing and advances the pointer, so that after each call the iterator points to the next element.

Since replies in the X protocol can have variable-length elements, e.g. device names, XCB also provides wrappers to make accessing them easier, like xcb_input_xi_device_info_name.

Most of the code of XCB is generated: there is an XML description of the X protocol which is used in the build process, and the C code to parse and generate the X protocol packets is generated each time the library is built. This means, unfortunately, that the documentation is quite useless, and there aren’t many examples online, especially if you’re going to use rarely used functions like XInput hierarchy change events.

I decided to do the porting the hard way, changing Xlib calls to XCB calls one by one, but there’s an easier way: since Xlib is now actually based on XCB, you can #include <X11/Xlib-xcb.h> and use XGetXCBConnection to get an XCB connection object corresponding to the Xlib’s Display object. Doing that means there will still be a single X connection, and you will be able to mix Xlib and XCB calls.

When porting, it often is useful to have a look at the sources of Xlib: it becomes obvious what XCB functions to use when you know what Xlib does internally (thanks to Mike Gabriel for pointing this out!).

Another thing to remember is that the constants and enums Xlib and XCB define usually have the same values (mandated by the X protocol) despite having slightly different names, so you can mix them too. For example, since inputplug passes the XInput event names to the command it runs, I decided to keep the names as Xlib defines them, and since I’m creating the corresponding strings by using a C preprocessor macro, it was easier for me to keep using XInput2.h instead of defining those strings by hand.

If you’re interested in the result of this porting effort, have a look at the code in the Mercurial repo. Unfortunately, it cannot be packaged for Debian yet since the Debian package for XCB doesn’t ship the module for XInput (see bug #733227).

P.S. Thanks again to Mike Gabriel for providing me important help — and explaining where to look for more of it ;)

Planet DebianRuss Allbery: California state election

Hm, I haven't done one of these in a while. Well, time to alienate future employers and make awkward mistakes in public that I have to explain if I ever run for office! (Spoiler: I'm highly unlikely to ever run for office.)

This is only of direct interest to California residents. To everyone else, RIP your feed reader, and I'm sorry for the length. (My hand-rolled blog software doesn't do cut tags.) I'll spare you all the drill-down into the Bay Area regional offices. (Apparently we elect our coroner, which makes no sense to me.)

Propositions

I'm not explaining these because this is already much too long; those who aren't in California and want to follow along can see the voter guide.

Proposition 68: YES. Still a good time to borrow money, and what we're borrowing money for here seems pretty reasonable. State finances are in reasonable shape; we have the largest debt of any state for the obvious reason that we have the most people and the most money.

Proposition 69: YES. My instinct is to vote no because I have a general objection to putting restrictions on how the state manages its budget. I don't like dividing tax money into locked pools for the same reason that I stopped partitioning hard drives. That said, this includes public transit in the spending pool from gasoline taxes (good), the opposition is incoherent, and there are wide-ranging endorsements. That pushed me to yes on the grounds that maybe all these people understand something about budget allocations that I don't.

Proposition 70: NO. This is some sort of compromise with Republicans because they don't like what cap-and-trade money is being spent on (like high-speed rail) and want a say. If I wanted them to have a say, I'd vote for them. There's a reason why they have to resort to backroom tricks to try to get leverage over laws in this state, and it's not because they have good ideas.

Proposition 71: YES. Entirely reasonable change to say that propositions only go into effect after the election results are final. (There was a real proposition where this almost caused a ton of confusion, and prompted this amendment.)

Proposition 72: YES. I'm grumbling about this because I think we should get rid of all this special-case bullshit in property taxes and just readjust them regularly. Unfortunately, in our current property tax regime, you have to add more exemptions like this because otherwise the property tax hit (that would otherwise not be incurred) is so large that it kills the market for these improvements. Rainwater capture is to the public benefit in multiple ways, so I'll hold my nose and vote for another special exception.

Federal Offices

US Senator: Kevin de León. I'll vote for Feinstein in the general against any Republican, and she's way up on de León in the polls, but there's no risk in voting for the more progressive candidate here since there's no chance Feinstein won't get the most votes in the primary. De León is a more solidly progressive candidate than Feinstein. I'd love to see a general election between the two of them.

State Offices

I'm omitting all the unopposed ones, and all the ones where there's only one Democrat running in the primary. (I'm not going to vote for any Republican except for one exception noted below, and third parties in the US are unbelievably dysfunctional and not ready to govern.) For those outside the state, California has a jungle primary where the top two vote-getters regardless of party go to the general election, so this is more partisan and more important than other state primaries.

Governor: Delaine Eastin. One always has to ask, in our bullshit voting system, whether one has to vote tactically instead of for the best candidate. But, looking at polling, I think there's no chance Gavin Newsom (the second-best candidate and the front-runner) won't advance to the general election, so I get to vote for the candidate I actually want to win, even though she's probably not going to. Eastin is by far the most progressive candidate running who actually has the experience required to be governor. (Spoiler: Newsom is going to win, and I'll definitely vote for him in the general against Villaraigosa.)

Lieutenant Governor: Eleni Kounalakis. She and Bleich are the strongest candidates. I don't see a ton of separation between them, but Kounalakis's endorsements are a bit stronger for me. She's also the one candidate who has a specific statement about what she plans to do with the lieutenant governor role of oversight over the university system, which is almost its only actual power. (This political office is stupid and we should abolish it.)

Secretary of State: Alex Padilla. I agree more with Ruben Major's platform (100% paper ballots is the correct security position), but he's an oddball outsider and I don't think he can accomplish as much. Padilla has an excellent track record as the incumbent and is doing basically the right things, just less dramatically.

Treasurer: Fiona Ma. I like Vivek Viswanathan and support his platform, but Fiona Ma has a lot more political expertise and I think will be more effective. I look forward to voting for Viswanathan for something else someday.

Attorney General: Dave Jones. Xavier Becerra hasn't been doing a bad job fighting off bad federal policy, but that seems to be all that he's interested in, and he's playing partisan games with the office. He has an air of amateurishness and political hackery. Dave Jones holds the same positions in a more effective way, is more professional, and has done a good job as Insurance Commissioner.

Insurance Commissioner: Steve Poizner. I'm going to vote for the (former) Republican here. Poizner expressed some really bullshit views on immigration when he ran for governor (which he's apologized for). I wouldn't support him for a more political office. But he was an excellent insurance commissioner (see, for instance, the response to Blue Cross's rate increase request). I'm closer to Ricardo Lara politically, but in his statements to the press he comes across as a clown: self-driving car insurance problems, cannabis insurance, climate change insurance, and a bunch of other nonsense that makes me think he doesn't understand the job. The other Democrat, Mahmood, seems like less of a partisan hack, but he's a virtual unknown. If this were an important partisan office, I'd hold my nose and vote for Lara, but the job of insurance commissioner is more to be an auditor and negotiator, and Poizner was really good at it.

Superintendent of Public Instruction: Tony Thurmond. The other front-runner is Marshall Tuck, who is a charter school advocate. I hate charter schools with the passion of a burning sun.

Local Measures

Regional Measure 3: YES. Even more hyper-local than the rest of this post, but mentioning it because it was a narrow call. Bridge tolls are regressive, and I'm not a big fan of raising them as opposed to, say, increasing property taxes (yes please) or income taxes. That said, taxing cars to pay for (largely) public transit is the direction the money should flow. It was thinly balanced for me, but the thrust of the projects won out over the distaste at the regressive tax.

,

Planet DebianRuss Allbery: Free software log (April 2018)

This is rather late since I got distracted by various other things including, ironically, releasing a bunch of software. This is for April, so doesn't include the releases from this month.

The main release I worked on was remctl 3.14, which fixed a security bug introduced in 3.12 with the sudo configuration option. This has since been replaced by 3.15, which has more thorough maintainer testing infrastructure to hopefully prevent this from happening again.

I also did the final steps of the release process for INN 2.6.2, although as usual Julien ÉLIE did all of the hard work.

On the Debian side, I uploaded a new rssh package for the migration to GitLab (salsa.debian.org). I have more work to do on that front, but haven't yet had the time. I've been prioritizing some of my own packages over doing more general Debian work.

Finally, I looked at my Perl modules on CPANTS (the CPAN testing service) and made note of a few things I need to fix, plus filed a couple of bugs for display issues (one of which turned out to be my fault and fixed in Git). I also did a bit of research on the badges that people in the Rust community use in their documentation and started adding support to DocKnot, some of which made it into the subsequent release I did this month.

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.5

A maintenance update of RcppGSL just brought version 0.3.5 to CRAN, a mere twelve days after the RcppGSL 0.3.4 release. Just like yesterday's upload of inline 0.3.15 it was prompted by a CRAN request to update the per-package manual page; see the inline post for details.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.5 (2018-05-19)

  • Update package manual page using references to DESCRIPTION file [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMartín Ferrari: MiniDebConf Hamburg - Friday/Saturday

MiniDebCamp Hamburg - Friday 18/5, Saturday 19/5

Friday and Saturday have been very productive days, I love events where there is time to hack!

I had more chats about contributors.d.o with Ganneff and Formorer, and if all goes according to plan, soon salsa will start streaming commit information to contributors and populate information about different teams: not only about normal packaging repos, but also about websites, tools, native packages, etc.

Note that the latter require special configuration, and the same goes if you want to have separate stats for your team (like for the Go team or the Perl team). So if you want to offer proper attribution to members of your team, please get in touch!


I spent loads of time working on Prometheus packages, and finally today (after almost a year) I uploaded a new version of prometheus-alertmanager to experimental. I decided to just drop the web interface entirely, as packaging the whole Elm framework would take me months of work. If anybody feels like writing a basic HTML/JS interface, I would be happy to include it in the package!

While doing that, I found bugs in the CI pipeline for Go packages in Salsa. Solving these will hopefully make the automatic testing more reliable, as API breakage is sadly a big problem in the Go ecosystem.


I am loving the venue here. Apart from hosting some companies and associations, there is an art gallery which currently has a photo exhibition called Echo park; there were parties happening last night, and tonight apparently there will be more. This place is amazing!


Planet DebianThorsten Glaser: Progress report from the Movim packaging sprint at MiniDebconf

Nik wishes you to know that the Movim packaging sprint (sponsored by the DPL, thank you!) is handled under the umbrella of the Debian Edu sprint (similarly sponsored) since this package is handled by the Teckids Debian Task Force, personnel from Teckids e.V.

After arriving, I’ve started collecting knowledge first. I reviewed upstream’s composer.json file and Wiki page about dependencies and, after it quickly became apparent that we need much more information (e.g. which versions are in sid, what the package names are, and, most importantly, recursive dependencies), a Wiki page of our own grew. Then I made a hunt for information about how to package stuff that uses PHP Composer upstream, and found the, ahem, wonderfully abundant, structured, plentiful and clear documentation from the Debian PHP/PEAR Packaging team. (Some time and reverse-engineering later I figured out that we just ignore composer and read its control file in pkg-php-tools converting dependency information to Debian package relationships. Much time later I also figured out it mangles package names in a specific way and had to rename one of the packages I created in the meantime… thankfully before having uploaded it.) Quickly, the Wiki page grew listing the package names we’re supposed to use. I created a package which I could use as template for all others later.

The upstream Movim developer arrived as well — we have quite an amount of upstream developers of various projects attending MiniDebConf, to the joy of the attendees actually directly involved in Debian, and this makes things much easier, as he immediately started removing dependencies (to make our job easier) and fixing bugs and helping us understand how some of those dependencies work. (I also contributed code upstream that replaces some Unicode codepoints or sequences thereof, such as 3⃣ or ‼ or 👱🏻‍♀️, with <img…/> tags pointing to the SVG images shipped with Movim, with a description (generated from their Unicode names) in the alt attribute.)

Now, Saturday, all dependencies are packaged so far, although we’re still waiting for maintainer feedback for those two we’d need to NMU (or have them upload or us take the packages over); most are in NEW of course, but that’s no problem. Now we can tackle packaging Movim itself — I guess we’ll see whether those other packages actually work then ☺

We also had a chance to fix bugs in other packages, like guacamole-client and musescore.

In the meantime we’ve also had the chance to socialise, discuss, meet, etc. other Debian Developers and associates and enjoy the wonderful food and superb coffee of the “Cantina” at the venue; let me hereby express heartfelt thanks to the MiniDebConf organisation for this good location pick!

Update, later this night: we took over the remaining two packages with permission from their previous team and uploader, and have already started with actually packaging Movim, discovering untold gruesome things in the upstream of the two webfonts it bundles.

,

CryptogramFriday Squid Blogging: Flying Squid

Flying squid are real.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityT-Mobile Employee Made Unauthorized ‘SIM Swap’ to Steal Instagram Account

T-Mobile is investigating a retail store employee who allegedly made unauthorized changes to a subscriber’s account in an elaborate scheme to steal the customer’s three-letter Instagram username. The modifications, which could have let the rogue employee empty bank accounts associated with the targeted T-Mobile subscriber, were made even though the victim customer already had taken steps recommended by the mobile carrier to help minimize the risks of account takeover. Here’s what happened, and some tips on how you can protect yourself from a similar fate.

Earlier this month, KrebsOnSecurity heard from Paul Rosenzweig, a 27-year-old T-Mobile customer from Boston who had his wireless account briefly hijacked. Rosenzweig had previously adopted T-Mobile’s advice to customers about blocking mobile number port-out scams, an increasingly common scheme in which identity thieves armed with a fake ID in the name of a targeted customer show up at a retail store run by a different wireless provider and ask that the number be transferred to the competing mobile company’s network.

So-called “port out” scams allow crooks to intercept your calls and messages while your phone goes dark. Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves who have already stolen a target’s password(s) can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In this case, however, the perpetrator didn’t try to port Rosenzweig’s phone number: Instead, the attacker called multiple T-Mobile retail stores within an hour’s drive of Rosenzweig’s home address until he succeeded in convincing a store employee to conduct what’s known as a “SIM swap.”

A SIM swap is a legitimate process by which a customer can request that a new SIM card (the tiny, removable chip in a mobile device that allows it to connect to the provider’s network) be added to the account. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

However, thieves and other ne’er-do-wells can abuse this process by posing as a targeted mobile customer or technician and tricking employees at the mobile provider into swapping in a new SIM card for that customer on a device that they control. If successful, the SIM swap accomplishes more or less the same result as a number port out (at least in the short term) — effectively giving the attackers access to any text messages or phone calls that are sent to the target’s mobile account.

Rosenzweig said the first inkling he had that something wasn’t right with his phone was on the evening of May 2, 2018, when he spotted an automated email from Instagram. The message said the email address tied to the three-letter account he’d had on the social media platform for seven years — instagram.com/par — had been changed. He quickly logged in to his Instagram account, changed his password and then reverted the email on the account back to his original address.

By this time, the SIM swap conducted by the attacker had already been carried out, although Rosenzweig said he didn’t notice his phone displaying zero bars and no connection to T-Mobile at the time because he was at home and happily surfing the Web on his device using his own wireless network.

The following morning, Rosenzweig received another notice — this one from Snapchat — stating that the password for his account there (“p9r”) had been changed. He subsequently reset the Instagram password and then enabled two factor authentication on his Snapchat account.

“That was when I realized my phone had no bars,” he recalled. “My phone was dead. I couldn’t even call 611” [the mobile short number that all major wireless providers make available to reach their customer service departments].

It appears that the perpetrator of the SIM swap abused not only internal knowledge of T-Mobile’s systems, but also a lax password reset process at Instagram. The social network allows users to enable notifications on their mobile phone when password resets or other changes are requested on the account.

But this isn’t exactly two-factor authentication because it also lets users reset their passwords via their mobile account by requesting a password reset link to be sent to their mobile device. Thus, if someone is in control of your mobile phone account, they can reset your Instagram password (and probably a bunch of other types of accounts).

Rosenzweig said even though he was able to reset his Instagram password and restore his old email address tied to the account, the damage was already done: All of his images and other content he’d shared on Instagram over the years were still tied to his account, but the attacker had succeeded in stealing his “par” username, leaving him with a slightly less sexy “par54384321” (apparently chosen for him at random by either Instagram or the attacker).

As I wrote in November 2015, short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them. Known as “OG” (short for “original” and also “original gangster”) in certain circles online, these can be usernames for virtually any service, from email accounts at Webmail providers to social media services like Instagram, Snapchat, Twitter and YouTube.

People who traffic in OG accounts prize them because they can make the account holder appear to have been a savvy, early adopter of the service before it became popular and before all of the short usernames were taken.

Rosenzweig said a friend helped him work with T-Mobile to regain control over his account and deactivate the rogue SIM card. He said he’s grateful the attackers who hijacked his phone for a few hours didn’t try to drain bank accounts that also rely on his mobile device for authentication.

“It definitely could have been a lot worse given the access they had,” he said.

But throughout all of this ordeal, it struck Rosenzweig as odd that he never once received an email from T-Mobile stating that his SIM card had been swapped.

“I’m a software engineer and I thought I had pretty good security habits to begin with,” he said. “I never re-use passwords, and it’s hard to see what I could have done differently here. The flaw here was with T-Mobile mostly, but also with Instagram. It seems like by having the ability to change one’s [Instagram] password by email or by mobile alone negates the second factor and it becomes either/or from the attackers point of view.”

Sources close to the investigation say T-Mobile is investigating a current or former employee as the likely culprit. The mobile company also acknowledged that it does not currently send customers an email to the email address on file when SIM swaps take place. A T-Mobile spokesperson said the company was considering changing the current policy, which sends the customer a text message to alert them about the SIM swap.

“We take our customers privacy and security very seriously and we regret that this happened,” the company said in a written statement. “We notify our customers immediately when SIM changes occur, but currently we do not send those notifications via email. We are actively looking at ways to improve our processes in this area.”

In summary, when a SIM swap happens on a T-Mobile account, T-Mobile will send a text message to the phone equipped with the new SIM card. But obviously that does not help someone who is the target of a SIM swap scam.

As we can see, just taking T-Mobile’s advice to place a personal identification number (PIN) on your account to block number port out scams does nothing to flag one’s account to make it harder to conduct SIM swap scams.

Rather, T-Mobile says customers need to call in to the company’s customer support line and place a separate “SIM lock” on their account, which can only be removed if the customer shows up at a retail store with ID (or, presumably, anyone with a fake ID who also knows the target’s Social Security Number and date of birth).

I checked with the other carriers to see if they support locking the customer’s current SIM to the account on file. I suspect they do, and will update this piece when/if I hear back from them. In the meantime, it might be best just to phone up your carrier and ask.

Please note that a SIM lock on your mobile account is separate from a SIM PIN that you can set via your mobile phone’s operating system. A SIM PIN is essentially an additional layer of physical security that locks the current SIM to your device, requiring you to input a special PIN when the device is powered on in order to call, text or access your data plan on your phone. This feature can help block thieves from using your phone or accessing your data if you lose your phone, but it won’t stop thieves from physically swapping in their own SIM card.

iPhone users can follow these instructions to set or change a device’s SIM PIN. Android users can see this page. You may need to enter a carrier-specific default PIN before being able to change it. By default, the SIM PIN for all Verizon and AT&T phones is “1111;” for T-Mobile and Sprint it should default to “1234.”

Be advised, however, that if you forget your SIM PIN and enter the wrong PIN too many times, you may end up having to contact your wireless carrier to obtain a special “personal unlocking key” (PUK).

At the very least, if you haven’t already done so please take a moment to place a port block PIN on your account. This story explains exactly how to do that.

Also, consider reviewing twofactorauth.org to see whether you are taking full advantage of any multi-factor authentication offerings so that your various accounts can’t be trivially hijacked if an attacker happens to guess, steal, phish or otherwise know your password.

One-time login codes produced by mobile apps such as Authy, Duo or Google Authenticator are more secure than one-time codes sent via automated phone call or text — mainly because crooks can’t steal these codes if they succeed in porting your mobile number to another service or by executing a SIM swap on your mobile account [full disclosure: Duo is an advertiser on this blog].

Update, May 19, 3:16 pm ET: Rosenzweig reports that he has now regained control over his original Instagram account name, “par.” Good on Instagram for fixing this, but it’s not clear the company has a real strong reporting process for people who find their usernames are hijacked.

CryptogramMaliciously Changing Someone's Address

Someone changed the address of UPS corporate headquarters to his own apartment in Chicago. The company discovered it three months later.

The problem, of course, is that in the US there isn't any authentication of change-of-address submissions:

According to the Postal Service, nearly 37 million change-of-address requests, known as PS Form 3575, were submitted in 2017. The form, which can be filled out in person or online, includes a warning below the signature line that "anyone submitting false or inaccurate information" could be subject to fines and imprisonment.

To cut down on possible fraud, post offices send a validation letter to both an old and new address when a change is filed. The letter includes a toll-free number to call to report anything suspicious.

Each year, only a tiny fraction of the requests are ever referred to postal inspectors for investigation. A spokeswoman for the U.S. Postal Inspection Service could not provide a specific number to the Tribune, but officials have previously said that the number of change-of-address investigations in a given year typically totals 1,000 or fewer.

While fraud involving change-of-address forms has long been linked to identity thieves, the targets are usually unsuspecting individuals, not massive corporations.

Worse Than FailureError'd: Perfectly Technical Difficulties

David G. wrote, "For once, I'm glad to see technical issues being presented in a technical way."

 

"Springer has a very interesting pricing algorithm for downloading their books: buy the whole book at some 10% of the sum of all its individual chapters," writes Bernie T.

 

"While browsing PlataGO! forums, I noticed the developers are erasing technical debt...and then some," Dariusz J. writes.

 

Bill K. wrote, "Hooray! It's an 'opposite sale' on Adidas' website!"

 

"A trail camera disguised at a salad bowl? Leave that at an all you can eat buffet and it'll blend right in," wrote Paul T.

 

Brian writes, "Amazon! That's not how you do math!"

 

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet Linux AustraliaMichael Still: How to maintain a local mirror of github repositories


Similarly to yesterday’s post about mirroring ONAP’s git, I also want to mirror all of the git repositories for certain github projects. In this specific case, all of the Kubernetes repositories.

So once again, here is a script based on something Tony Breeds and I cooked up a long time ago for OpenStack…

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

from github import Github as github


GITHUB_ACCESS_TOKEN = '...use yours!...'


def get_github_projects():
    g = github(GITHUB_ACCESS_TOKEN)
    for user in ['kubernetes']:
        for repo in g.get_user(login=user).get_repos():
            yield('https://github.com', repo.full_name)


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = []
for res in list(get_github_projects()):
    if len(res) == 3:
        projects.append(res)
    else:
        projects.append((res[0], res[1], res[1]))
    
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(starting_dir)

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

This script is basically the same as the ONAP one, but it understands how to get a project list from github and doesn’t need to handle ONAP’s slightly strange repository naming scheme.
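As an aside, the `_ensure_path` helper in these scripts builds each directory level by hand, which was necessary on older Pythons. On Python 3.2 or newer the standard library does this in one call; a minimal equivalent sketch (the `ensure_path` name is mine):

```python
import os

def ensure_path(path):
    """Create every missing directory level of a slash-separated path,
    like the scripts' _ensure_path helper, in a single call."""
    if path:
        os.makedirs(path, exist_ok=True)

ensure_path('onap/oom')  # creates onap/ and onap/oom/ if they don't exist
ensure_path('onap/oom')  # idempotent: a second call is a no-op
```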

I hope it is useful to someone other than me.


The post How to maintain a local mirror of github repositories appeared first on Made by Mikal.

,

Krebs on SecurityTracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.

On May 15, ZDNet ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.

Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.

Several hours before the Motherboard story went live, KrebsOnSecurity heard from Robert Xiao, a security researcher at Carnegie Mellon University who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.

LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.

Once that consent is obtained, LocationSmart texts the subscriber their approximate longitude and latitude, plotting the coordinates on a Google Street View map. [It also potentially collects and stores a great deal of technical data about your mobile device. For example, according to their privacy policy that information “may include, but is not limited to, device latitude/longitude, accuracy, heading, speed, and altitude, cell tower, Wi-Fi access point, or IP address information”].

But according to Xiao, a PhD candidate at CMU’s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving. By pinging the friend’s mobile network multiple times over several minutes, he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.

“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.

Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Xiao was able to determine within a few seconds of querying the public LocationSmart service the near-exact location of the mobile phone belonging to all five of my sources.

LocationSmart’s demo page.

One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his current location. The remaining three sources said the location returned for their phones was off by between approximately 1/5 and 1/3 of a mile at the time.

Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.

“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”

LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”

LocationSmart’s home page lists many partners.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017. This link from The Internet Archive suggests the service may have existed under a different company name — loc-aid.com — since mid-2011, but it’s unclear if that service used the same code. Loc-aid.com is one of four other sites hosted on the same server as locationsmart.com, according to Domaintools.com.

LocationSmart’s privacy policy says the company has security measures in place…”to protect our site from the loss or misuse of information that we have collected. Our servers are protected by firewalls and are physically located in secure data facilities to further increase security. While no computer is 100% safe from outside attacks, we believe that the steps we have taken to protect your personal information drastically reduce the likelihood of security problems to a level appropriate to the type of information involved.”

But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors.

Although LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.

API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.

In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”

“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”

Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”

Securus did not respond to requests for comment.

THE CARRIERS RESPOND

It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).

None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.

AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.

“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.

T-Mobile referred me to their privacy policy, which says T-Mobile follows the “best practices” document (PDF) for subscriber location data as laid out by the CTIA, the international association for the wireless telecommunications industry.

A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus and LocationSmart.

“We take the privacy and security of our customers’ data very seriously,” the company said in a written statement. “We have addressed issues that were identified with Securus and LocationSmart to ensure that such issues were resolved and our customers’ information is protected. We continue to investigate this.”

Verizon also referred me to their privacy policy.

Sprint officials shared the following statement:

“Protecting our customers’ privacy and security is a top priority, and we are transparent about our Privacy Policy. To be clear, we do not share or sell consumers’ sensitive information to third parties. We share personally identifiable geo-location information only with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?

Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.

But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third-parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.

“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”

Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.

“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”

“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”

For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.

“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.

Sen. Wyden issued a statement on Friday in response to this story:

“This leak, coming only days after the lax security at Securus was exposed, demonstrates how little companies throughout the wireless ecosystem value Americans’ security. It represents a clear and present danger, not just to privacy but to the financial and personal security of every American family. Because they value profits above the privacy and safety of the Americans whose locations they traffic in, the wireless carriers and LocationSmart appear to have allowed nearly any hacker with a basic knowledge of websites to track the location of any American with a cell phone.”

“The threats to Americans’ security are grave – a hacker could have used this site to know when you were in your house so they would know when to rob it. A predator could have tracked your child’s cell phone to know when they were alone. The dangers from LocationSmart and other companies are limitless. If the FCC refuses to act after this revelation then future crimes against Americans will be on the commissioners’ heads.”

 

Sen. Mark Warner (D-Va.) also issued a statement:

“This is one of many developments over the last year indicating that consumers are really in the dark on how their data is being collected and used,” Sen. Warner said. “It’s more evidence that we need 21st century rules that put users in the driver’s seat when it comes to the ways their data is used.”

In a statement provided to KrebsOnSecurity on Friday, LocationSmart said:

“LocationSmart provides an enterprise mobility platform that strives to bring secure operational efficiencies to enterprise customers. All disclosure of location data through LocationSmart’s platform relies on consent first being received from the individual subscriber. The vulnerability of the consent mechanism recently identified by Mr. Robert Xiao, a cybersecurity researcher, on our online demo has been resolved and the demo has been disabled. We have further confirmed that the vulnerability was not exploited prior to May 16th and did not result in any customer information being obtained without their permission.”

“On that day as many as two dozen subscribers were located by Mr. Xiao through his exploitation of the vulnerability. Based on Mr. Xiao’s public statements, we understand that those subscribers were located only after Mr. Xiao personally obtained their consent. LocationSmart is continuing its efforts to verify that not a single subscriber’s location was accessed without their consent and that no other vulnerabilities exist. LocationSmart is committed to continuous improvement of its information privacy and security measures and is incorporating what it has learned from this incident into that process.”

It’s not clear who LocationSmart considers “customers” in the phrase, “did not result in any customer information being obtained without their permission,” since anyone whose location was looked up through abuse of the service’s buggy API could not fairly be considered a “customer.”

Update, May 18, 11:31 AM ET: Added comments from Sens. Wyden and Warner, as well as updated statements from LocationSmart and T-Mobile.

Sociological Images“I Felt Like Destroying Something Beautiful”

When I was eight, my brother and I built a card house. He was obsessed with collecting baseball cards and had amassed thousands, taking up nearly every available corner of his childhood bedroom. After watching a particularly gripping episode of The Brady Bunch, in which Marsha and Greg settled a dispute by building a card house, we decided to stack the cards in our favor and build. Forty-eight hours later a seven-foot monstrosity emerged…and it was glorious.

I told this story to a group of friends as I ran a stack of paper coasters through my fingers. We were attending Oktoberfest 2017 in a rural university town in the Midwest. They collectively decided I should flex my childhood skills and construct a coaster card house. Supplies were in abundance and time was no constraint. 

I began to construct. Four levels in, people around us began to take notice; a few snapped pictures. Six levels in, people began to stop, actively take pictures, and inquire as to my progress and motivation. Eight stories in, a small crowd emerged. Everyone remained cordial and polite. At this point it became clear that I was too short to continue building. In solidarity, one of my friends stood on a chair to encourage the build. We built the last three levels together, atop chairs, in the middle of the convention center. 

Where inquiries had been friendly in the early stages of building, the mood soon turned. The moment chairs were used to facilitate the building process was the moment nearly everyone in attendance began to take notice. As the final tier went up, objects began flying at my head. Although women remained cordial throughout, a fraction of the men in the crowd grew more and more aggressive. Whispers of “I bet you $50 that you can’t knock it down” or “I’ll give you $20 if you go knock it down” were heard throughout. A man chatted with my husband, criticizing the structural integrity of the house and offering insight as to how his house would be better…if he were the one building. Finally, a group of very aggressive men began circling like vultures. One man chucked empty plastic cups from a few tables away. The card house stood complete for a total of two minutes before it fell. The life of the tower ended as such:

Man: “Would you be mad if someone knocked it down?”

Me: “I’m the one who built it so I’m the one who gets to knock it down.”

Man: “What? You’re going to knock it down?”

The man proceeded to punch the right side of the structure; a quarter of the house fell. Before he could strike again, I stretched out my arms knocking down the remainder. A small curtsey followed, as if to say thank you for watching my performance. There was a mixture of cheers and boos. Cheers, I imagine, from those who sat at nearby tables watching my progress throughout the night. Boos, I imagine, from those who were denied the pleasure of knocking down the structure themselves.

As an academic it is difficult to remove my everyday experiences from research analysis.  Likewise, as a gender scholar the aggression displayed by these men was particularly alarming. In an era of #metoo, we often speak of toxic masculinity as enacting masculine expectations through dominance, and even violence. We see men in power, typically white men, abuse this very power to justify sexual advances and sexual assault. We even see men justify mass shootings and attacks based on their perceived subordination and the denial of their patriarchal rights.

Yet toxic masculinity also exists on a smaller scale, in men’s everyday social worlds. Hegemonic masculinity is a more apt description for this destructive behavior, rather than outright violent behavior, as hegemonic masculinity describes a system of cultural meanings that gives men power — it is embedded in everything from religious doctrines, to wage structures, to mass media. As men learn hegemonic expectations by way of popular culture—from Humphrey Bogart to John Wayne—one cannot help but think of the famous line from the hyper-masculine Fight Club (1999), “I just wanted to destroy something beautiful.”

Power over women through hegemonic masculinity may best explain the actions of the men at Oktoberfest. Alcohol consumption at the event allowed men greater freedom to justify their destructive behavior. Daring one another to physically remove a product of female labor, and their surprise at a woman’s choice to knock the tower down herself, are both in line with this type of power over women through the destruction of something “beautiful”.

Physical violence is not always a key feature of hegemonic masculinity (Connell 1987: 184). When we view toxic masculinity on a smaller scale, away from mass shootings and other high-profile tragedies, we find a form of masculinity that embraces aggression and destruction in our everyday social worlds, but is often excused as being innocent or unworthy of discussion.

Sandra Loughrin is an Assistant Professor at the University of Nebraska at Kearney. Her research areas include gender, sexuality, race, and age.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowTalking education and technology with the Future Trends Forum

“Science fiction writer and cyberactivist Cory Doctorow joined the Future Trends Forum to explore possibilities for technology and education.”

CryptogramWhite House Eliminates Cybersecurity Position

The White House has eliminated the cybersecurity coordinator position.

This seems like a spectacularly bad idea.

Worse Than FailureImprov for Programmers: Inventing the Toaster

We always like to change things up a little bit here at TDWTF, and thanks to our sponsor Raygun, we've got a chance to bring you a little treat, or at least something a little different.

We're back with a new podcast, but this one isn't a talk show or storytelling format, or even a radio play. Remy rounded up some of the best comedians in Pittsburgh who were also in IT, and bundled them up to do some improv, using articles from our site and real-world IT news as inspiration. It's… it's gonna get weird.

Thanks to Erin Ross, Ciarán Ó Conaire, and Josh Cox for lending their time and voices to this project.

Music: "Happy Happy Game Show" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/

Raygun gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool. Not only does it make your applications better, with Raygun APM, it proactively identifies performance issues and builds a workflow for solving them. Raygun APM sorts through the mountains of data for you, surfacing the most important issues so they can be prioritized, triaged and acted on, cutting your Mean Time to Resolution (MTTR) and keeping your users happy.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaMichael Still: How to maintain a local mirror of ONAP’s git repositories


For various reasons, I like to maintain a local mirror of git repositories I use a lot, in this case ONAP. This is mostly because of the generally poor network connectivity in Australia, but it’s also because it makes cloning a new repository super fast.

Tony Breeds and I baked up a script to do this for OpenStack repositories a while ago. I therefore present a version of that mirror script which does the right thing for ONAP projects.

One important difference from OpenStack: ONAP projects aren’t named in a way where they will consistently sit in a directory structure together. For example, there is an “oom” repository, as well as an “oom/registrator” repository. We therefore need to normalise repository names on clone to ensure they don’t clobber each other — I do that by replacing path separators with underscores.
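That normalisation is the one-liner the script uses below; a quick sketch of the mapping, using the real ONAP project names mentioned above (the `normalise` wrapper name is mine):

```python
def normalise(project):
    # Flatten nested gerrit project names ("oom/registrator") into
    # unique directory names under onap/ so clones can't clobber
    # each other on disk.
    return 'onap/%s' % project.replace('/', '_')

assert normalise('oom') == 'onap/oom'
assert normalise('oom/registrator') == 'onap/oom_registrator'
```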

So here’s the script:

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

ONAP_GIT_BASE = 'ssh://mikal@gerrit.onap.org:29418'


def get_onap_projects():
    data = subprocess.check_output(
               ['ssh', 'gerrit.onap.org', 'gerrit',
                'ls-projects']).split('\n')
    for project in data:
        yield (ONAP_GIT_BASE, project,
               'onap/%s' % project.replace('/', '_'))


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = list(get_onap_projects())
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(os.path.abspath(starting_dir))

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

Note that your ONAP gerrit username probably isn’t “mikal”, so you might want to change that.

This script will checkout all ONAP git repositories into a directory named “onap” in your current working directory. A second run will add any new repositories, as well as updating the existing ones. Note that these are clones intended to be served with a local git server, instead of being clones you’d edit directly. To clone one of the mirrored repositories for development, you would then do something like:

$ git clone onap/aai_babel development/aai_babel

Or similar.


The post How to maintain a local mirror of ONAP’s git repositories appeared first on Made by Mikal.

,

CryptogramAccessing Cell Phone Location Information

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

Worse Than FailureCodeSOD: Return of the Mask

Sometimes, you learn something new, and you suddenly start seeing it show up anywhere. The Baader-Meinhof Phenomenon is the name for that. Sometimes, you see one kind of bad code, and the same kind of bad code starts showing up everywhere. Yesterday we saw a nasty attempt to use bitmasks in a loop.

Today, we have Michele’s contribution, of a strange way of interacting with bitmasks. The culprit behind this code was a previous PLC programmer, even if this code wasn’t running straight on the PLC.

public static bool DecodeBitmask(int data, int bitIndex)
{
        var value = data.ToString();
        var padding = value.PadLeft(8, '0');
        return padding[bitIndex] == '1';
}

Take a close look at the parameters there- data is an int. That’s about what you’d expect here… but then we call data.ToString() which is where things start to break down. We pad that string out to 8 characters, and then check and see if a '1' happens to be in the spot we’re checking.

This, of course, defeats the entire purpose and elegance of bit masks, and worse, doesn’t end up being any more readable. Passing a number like 2 isn’t going to return true for any index.

Why does this work this way?

Well, let’s say you wanted a bitmask in the form 0b00000111. You might say, “well, that’s a 7”. What Michele’s predecessor said was, “that’s text… "00000111"”. But the point of bitmasks is to use an int to pass data around, so this developer went ahead and turned "00000111" into an integer by simply parsing it, creating the integer 111. But there’s no possible way to check whether a certain digit is 1 or not from the integer alone, so we have to convert it back into a string to check the bitmask.
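For contrast, the conventional way to test a bit is a shift and a mask. Here’s a sketch in Python (hypothetical, not from Michele’s codebase; note that it numbers bits from the least-significant end, unlike the string version’s left-to-right character indexing):

```python
def decode_bitmask(data: int, bit_index: int) -> bool:
    """Return True if bit `bit_index` (0 = least significant) is set in `data`."""
    return (data >> bit_index) & 1 == 1

# 0b00000111 is the integer 7: bits 0, 1, and 2 are set.
assert decode_bitmask(7, 0) and decode_bitmask(7, 2)
assert not decode_bitmask(7, 3)
# And, unlike the string version, 2 behaves sensibly: bit 1 is set.
assert decode_bitmask(2, 1)
```

No string round-trips, no padding, and the integer 7 means exactly what the mask 0b00000111 says it means.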

Unfortunately, the software is so fragile and unreliable that no one is willing to let the developers make any changes beyond “it’s on fire, put it out”.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

LongNowThe Role of Art in Addressing Climate Change: An interview with José Luis de Vicente

“Sounds super depressing,” she texted. “That’s why I haven’t gone. Sort of went full ostrich.”

That was my friend’s response when I asked her if she had attended Després de la fi del món (After the End of the World), the exhibition on the present and future of climate change at the Center of Contemporary Culture in Barcelona (CCCB).

Burying one’s head in the sand when it comes to climate change is a widespread impulse. It is, to put it brusquely, a bummer story — one whose drama is slow-moving, complex, and operating at planetary scale. The media, by and large, underreports it. Politicians who do not deny its existence struggle to coalesce around long-term solutions. And while a majority of people are concerned about climate change, few talk about it with friends and family.

Given all of this, it would seem unlikely that art, of all things, can make much of a difference in how we think about that story.

José Luis de Vicente, the curator of Després de la fi del món, believes that it can.

“The arts can play a role of fleshing out social scenarios showing that other worlds are possible, and that we are going to be living in them,” de Vicente wrote recently. “Imagining other forms of living is key to producing them.”

Scenes from “After the End of the World.” Via CCCB.

The forms of living on display at Després de la fi del món are an immersive, multi-sensory confrontation. The show consists of nine scenes, each a chapter in a spatial essay on the present and future of the climate crisis by some of the foremost artists and thinkers contemplating the implications of the anthropocene.

“Mitigation of Shock” by Superflux. Via CCCB.

In one, I find myself in a London apartment in the year 02050.¹ The familiar confines of cookie-cutter IKEA furniture give way to an unsettling feeling as the radio on the kitchen counter speaks of broken food supply chains, price hikes, and devastating hurricanes. A newspaper on the living room table asks “HOW WILL WE EAT?” The answer is littered throughout the apartment, in the form of domestic agriculture experiments glowing under purple lights, improvised food computers, and recipes for burgers made out of flies.

“Overview” by Benjamin Grant. Via Daily Overview.

In another, I am surrounded by satellite imagery of the Earth that reveals the beauty of human-made systems and their impact on the planet.

“Win><Win” by Rimini Protokoll. Via CCCB.

The most radical scene, Rimini Protokoll’s “Win><Win,” is one de Vicente has asked me not to disclose in detail, so as to not ruin the surprise when Després de la fi del món goes on tour in the United Kingdom and Singapore. All I can say is that it has something to do with jellyfish, and that it is one of the most remarkable pieces of interactive theater I have ever seen.

A “decompression chamber” featuring philosopher Timothy Morton. Via CCCB.

Visitors transition between scenes via waiting rooms that de Vicente describes as “decompression chambers.” In each chamber, the Minister Of The Future, played by philosopher Timothy Morton, frames his program. The Minister claims to represent the interests of those who cannot exert influence on the political process, either because they have not yet been born, or because they are non-human, like the Great Barrier Reef.

“Aerocene” by Tomás Seraceno. Via Aerocene Foundation.

A key thesis of Després de la fi del món is that knowing the scientific facts of climate change is not enough to adequately address its challenges. One must be able to feel its emotional impact, and find the language to speak about it.

My fear—and the reason I go “full ostrich”—has long been that such a feeling would come about only once we experience climate change’s deleterious effects as an irrevocable part of daily life. My hope, after attending the exhibition and speaking with José Luis de Vicente, is that it might come, at least in part, through art.


“This Civilization is Over. And Everybody Knows It.”

The following interview has been edited for length and clarity.

AHMED KABIL: I suspect that for a lot of us, when we think about climate change, it seems very distant — both in terms of time and space. If it’s happening, it’s happening to people over there, or to people in the future; it’s not happening over here, or right now. The New York Times, for example, published a story finding that while most in the United States think that climate change will harm Americans, few believe that it will harm them personally. One of the things that I found most compelling about Després de la fi del món was how the different scenes of the exhibition made climate change feel much more immediate. Could you say a little bit about how the show was conceived and what you hoped to achieve?

José Luis de Vicente. Photo by Ahmed Kabil.

JOSÉ LUIS DE VICENTE: We wanted the show to be a personal journey, but not necessarily a cohesive one. We wanted it to be like a hallucination, like the recollection of a dream where you’re picking up the pieces here and there.

We didn’t want to do a didactic, encyclopedic show on the science and challenge of climate change. Because that show has been done many, many times. And also, we thought the problem with the climate crisis is not a problem of information. We don’t need to be told more times things that we’ve been told thousands of times.

“Unravelled” by Unknown Fields Division. Via CCCB.

We wanted something that would address the elephant in the room. And the elephant in the room for us was: if this is the most important crisis that we face as a species today, if it transcends generations, if this is going to be the background crisis of our lives, why don’t we speak about it? Why don’t we know how to relate to it directly? Why does it not lead newspapers in five columns when we open them in the morning? That emotional distance was something that we wanted to investigate.

One of the reasons that distance happens is because we’re living in a kind of collective trauma. We are still in the denial phase of that trauma. The metaphor I always like to use is, our position right now is like the one you’re in when you go to the doctor, and the doctor gives you a diagnosis saying that actually, there’s a big, big problem, and yet you still feel the same. You don’t feel any different after being given that piece of news, but at the same time intellectually you know at that point that things are never going to be the same. That’s where we are collectively when it comes to climate change. So how do we transition out of this position of trauma to one of empathy?

“Win><Win” by Rimini Protokoll. Via CCCB.

We also wanted to look at why this was politically an unmanageable crisis. And there’s two reasons for that. One is because it’s a political message no politician will be able to channel into a marketable idea, which is: “We cannot go on living the way we live.” There is no political future for any way you market that idea.

The other is—and Timothy Morton’s work was really influential in this idea—the notion that: “What if simply our senses and communicative capacities are not tuned to understanding the problem because it moves in a different resolution, because it proceeds on a scale that is not the scale of our senses?”

Morton’s notion of the hyper-object—this idea that there are things that are too big and move too slow for us to see—was very important. The title of the show comes from the title of his book Hyperobjects: Philosophy and Ecology after the End of the World (02013).

AHMED KABIL: One of the recent instances of note where climate change did make front-page news was the 02015 Paris Agreement. In Després de la fi del món, the Paris Agreement plays a central role in framing the future of climate change. Why?

JOSÉ LUIS DE VICENTE: If we follow the Paris Agreement to its final consequences, what it’s saying is that, in order to prevent global temperature from rising from 3.6 to 4.8 median degrees Celsius by the end of the 21st century, we have to undertake the biggest transformation that we’ve ever done. And even doing that will mean that we’re only halfway to our goal of having global temperatures not rise more than 2 degrees, ideally 1.5, and we’re already at 1 degree. So that gives a sense of the challenge. And we need to do it for the benefit of the humans and non-humans of 02100, who don’t have a say in this conversation.

“Overview” by Benjamin Grant. Via CCCB.

There are two possibilities here: either we make the goals of the Paris Agreement—the bad news here being that this problem is much, much bigger than just replacing fossil fuels with renewable energies. The Tesla way of going at it, of replacing every car in the world with a Tesla—the numbers just don’t add up. We’re going to have to rethink most systems in society to make this a possibility. That’s possibility number one.

Possibility number two: if we don’t make the goals of the Paris Agreement, we know that there’s no chance that life in the end of the 21st century is going to look remotely similar to today. We know that the kind of systemic crises we have are way more serious than the ones that would allow essential normalcy as we understand it today. So whether we make the goals of the Paris Agreement or not, there is no way that life in the second part of the 21st century looks as it does today.

That’s why we open the exhibition with McKenzie Wark’s quote.

“This civilization is over. And everybody knows it.” — McKenzie Wark

This civilization is over, not in the apocalyptic sense that the end of the world is coming, but that the civilization we built from the mid-nineteenth century onward on this capacity of taking fossil fuels out of the Earth and turning that into a labor force and turning that into an equation of “growth equals development equals progress” is just not sustainable.

“Environmental Health Clinic” by Natalie Jeremijenko. Via CCCB.

So with all these reference points, the show asks: What does it mean to understand this story? What does it mean to be citizens acknowledging this reality? What are possible scenes that look at either aspects of the anthropocene planet today or possible post-Paris futures?

This show should mean different things for you whether you’re fifty-five or you’re twelve. Because if you’re fifty-five, these are all hypothetical scenarios for a world that you’re not going to see. But if you’re twelve this is the world that you’re going to grow up into.

02100 may seem very far away, but the people who will see the world of 02100 are already born.

AHMED KABIL: What role will technology play in our climate change future?

JOSÉ LUIS DE VICENTE: Technology will, of course, play a role, but I think we have to be non-utopian about what that role will be.

The climate crisis is not a technological or socio-cultural or political problem; it’s all three. So the problem can only be solved at the three axes. The one that I am less hopeful about is the political axis, because how do we do it? How do we break that cycle of incredibly short-term incentives built into the political power structure? How do we incorporate the idea of: “Okay, what you want as my constituent is not the most important thing in the world, so I cannot just give you what you want if you vote for me and my position of power.” Especially when we’re seeing the collapse of systems and mechanisms of political representation.

“Sea State 9: Proclamation” by Charles Lim. Via CCCB.

I want to believe—and I’m not a political scientist—that huge social transformations translate to political redesigns, in spite of everything. I’m not overly optimistic or utopian about where we are right now. But our capacity to coalesce and gather around powerful ideas that transmit very easily to the masses allows for shifts of paradigm better than previously. Not only good ones, but bad ones as well.

AHMED KABIL: Is there a case for optimism on climate change?

JOSÉ LUIS DE VICENTE: I cannot be optimistic looking at the data on the table and the political agendas, but I am in the sense of saying that incredible things are happening in the world. We’re witnessing a kind of political awakening. These huge social shifts can happen at any moment.

And I think, for instance, that the fossil fuel industry knows that it’s the end of the party. What we’re seeing now is their awareness that their business model is not going to be viable for much longer. And obviously neither Putin nor Trump are good news for the climate, but nevertheless these huge shifts are coming.

“Mitigation of Shock” by Superflux. Via CCCB.

Kim Stanley Robinson always mentions this “pessimism of the intellect, optimism of the will.” I think that’s where you need to be, knowing that big changes are possible. Of course, I have no utopian expectations about it—this is going to be the backstory for the rest of our lives and we’re going to have traumatic, sad things happening because they’re already happening. But I’m quite positive that the world will definitely not look like this one in many aspects, and many things that big social revolutions in the past tried to make possible will be made possible.

If this show has done anything I hope it’s made a small contribution in answering the question of how we think about the future of climate change, how we talk about it, and how we understand what it means. We have to exist on timescales more expansive than the tiny units of time of our lives. We have to think of the world in ways that are non-anthropocentric. We have to think that the needs and desires of the humans of now are not the only thing that matters. That’s a huge philosophical revolution. But I think it’s possible.


Notes

[1] The Long Now Foundation uses five digit dates to serve as a reminder of the time scale that we endeavor to work in. Since the Clock of the Long Now is meant to run well past the Gregorian year 10,000, the extra zero is to solve the deca-millennium bug which will come into effect in about 8,000 years.

Learn More

  • Stay updated on the After The End of The World exhibition.
  • Read The Guardian’s 02015 profile of Timothy Morton.
  • Watch Benjamin Grant’s upcoming Seminar About Long-Term Thinking, “Overview: Earth and Civilization in the Macroscope.”
  • Watch Kim Stanley Robinson’s 02016 talk at The Interval At Long Now on how climate will evolve government and society.
  • Read José Luis de Vicente’s interview with Kim Stanley Robinson.

CryptogramSending Inaudible Commands to Voice Assistants

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple's Siri, Amazon's Alexa and Google's Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online ­-- simply with music playing over the radio.

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon's Echo speaker might hear an instruction to add something to your shopping list.

Worse Than FailureCodeSOD: A Bit Masked

The “for-case” or “loop-switch” anti-pattern creates some hard to maintain code. You know the drill: the first time through the loop, do one step, the next time through the loop, do a different step. It’s known as the “Anti-Duff’s Device”, which is a good contrast: Duff’s Device is a clever way to unroll a loop and turn it into a sequential process, while the “loop-switch” takes a sequential process and turns it into a loop.
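The anti-pattern is easiest to see in miniature. A minimal sketch (hypothetical names, Python for brevity): the loop adds nothing but indirection over what is really a fixed sequence of steps.

```python
# The "loop-switch" anti-pattern: a plain sequence of steps
# disguised as a loop over a step counter.
def connect_loop_switch():
    log = []
    for step in range(3):
        if step == 0:
            log.append("open socket")
        elif step == 1:
            log.append("send handshake")
        elif step == 2:
            log.append("await reply")
    return log

# The straightforward, sequential equivalent:
def connect_sequential():
    return ["open socket", "send handshake", "await reply"]

assert connect_loop_switch() == connect_sequential()
```

Both produce the same result; the loop version just makes the control flow harder to follow and easier to break when a new step is inserted.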

Ashlea inherited an MFC application. It was worked on by a number of developers in Germany, some of whom used English to name identifiers and some of whom used German, creating a new language called “Deunglish”. Or “Engleutch”? Whatever you call it, Ashlea has helpfully translated all the identifiers into English for us.

Buried deep in a thousand-line “do everything” method, there’s this block:

if(IS_SOMEFORMATNAME()) //Mantis 24426
{
  if(IS_CONDITION("RELEASE_4"))
  {
    m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL="";
    CString strKey;
    for (unsigned int i=1; i<16; i++) // Test all combinations
    {
      strKey="#W#X#Y#Z";
      if(i & 1)
        strKey.Replace("#W", m_strActualPromo);  // MANTIS 45587: Search with and without promotion code
      if(i & 2)
        strKey.Replace("#X",m_BAR.m_TLC_FIELDS.OBJECTCODE);
      if(i & 4)
        strKey.Replace("#Y",TOKEN(strFoo,H_BAZCODE));
      if(i & 8)
        strKey.Replace("#Z",TOKEN(strFoo,H_CHAIN));

      strKey.Replace("#W","");
      strKey.Replace("#X","");
      strKey.Replace("#Y","");
      strKey.Replace("#Z","");

      if(m_lDistributionchannel.GetFirst(strKey))
      {
        m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL="R";
        break;
      }
    }
  }
  else
    m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL=m_lDistributionchannel.GetFirstLine(m_BAR.m_TLC_FIELDS.OBJECTCODE+m_strActualPromo);
}

Here, we see a rather unique approach to using a for-case- by using bitmasks to combine steps on each iteration of the loop. From what I can tell, they have four things which can combine to make an identifier, but might get combined in many different ways. So they try every possible combination, and if it exists, they can set the DISTRIBUTIONCHANNEL field.
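The subset-enumeration trick the loop is performing looks something like this in Python (a hypothetical reconstruction with stand-in fragment names, not the original code): the bits of the loop counter select which of the four fragments appear in each candidate key.

```python
# Each bit of the counter i selects one key fragment; counting from
# 1 to 2**n - 1 enumerates every non-empty subset of the fragments.
parts = ["PROMO", "OBJ", "BAZ", "CHAIN"]  # illustrative stand-ins

def candidate_keys(parts):
    keys = []
    for i in range(1, 1 << len(parts)):  # 1..15 for four fragments
        key = "".join(p for bit, p in enumerate(parts) if i & (1 << bit))
        keys.append(key)
    return keys

keys = candidate_keys(parts)
assert len(keys) == 15
assert keys[0] == "PROMO"                  # i == 1: only the first fragment
assert keys[-1] == "PROMOOBJBAZCHAIN"      # i == 15: all four fragments
```

Seen this way, the original code is a brute-force probe of every possible key shape against the lookup table, stopping at the first hit.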

That’s ugly and awful, and certainly a WTF, but honestly, that’s not what leapt out to me. It was this line:

if(IS_CONDITION("RELEASE_4"))

It’s quite clear that, as new versions of the software were released, they needed to control which features were enabled and which weren’t. This is probably related to a database, and thus the database may or may not be upgraded to the same release version as the code. So scattered throughout the code are checks like this, which enable blocks of code at runtime based on whether the code and database release versions match these flags.

Debugging that must be a joy.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

CryptogramDetails on a New PGP Vulnerability

A new PGP vulnerability was announced today. Basically, the vulnerability makes use of the fact that modern e-mail programs allow for embedded HTML objects. Essentially, if an attacker can intercept and modify a message in transit, he can insert code that sends the plaintext in a URL to a remote website. Very clever.

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim's email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.

A few initial comments:

1. Being able to intercept and modify e-mails in transit is the sort of thing the NSA can do, but is hard for the average hacker. That being said, there are circumstances where someone can modify e-mails. I don't mean to minimize the seriousness of this attack, but that is a consideration.

2. The vulnerability isn't with PGP or S/MIME itself, but in the way they interact with modern e-mail programs. You can see this in the two suggested short-term mitigations: "No decryption in the e-mail client," and "disable HTML rendering."

3. I've been getting some weird press calls from reporters wanting to know if this demonstrates that e-mail encryption is impossible. No, this just demonstrates that programmers are human and vulnerabilities are inevitable. PGP almost certainly has fewer bugs than your average piece of software, but it's not bug free.

4. Why is anyone using encrypted e-mail anymore, anyway? Reliably and easily encrypting e-mail is an insurmountably hard problem for reasons having nothing to do with today's announcement. If you need to communicate securely, use Signal. If having Signal on your phone will arouse suspicion, use WhatsApp.

I'll post other commentaries and analyses as I find them.

EDITED TO ADD (5/14): News articles.

Slashdot thread.

Cory DoctorowPodcast: Petard, Part 02

Here’s the second part of my reading (MP3) of Petard (part one), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Krebs on SecurityDetecting Cloned Cards at the ATM, Register

Much of the fraud involving counterfeit credit, ATM debit and retail gift cards relies on the ability of thieves to use cheap, widely available hardware to encode stolen data onto any card’s magnetic stripe. But new research suggests retailers and ATM operators could reliably detect counterfeit cards using a simple technology that flags cards which appear to have been altered by such tools.

A gift card purchased at retail with an unmasked PIN hidden behind a paper sleeve. Such PINs can be easily copied by an adversary, who waits until the card is purchased to steal the card’s funds. Image: University of Florida.

Researchers at the University of Florida found that account data encoded on legitimate cards is invariably written using quality-controlled, automated facilities that tend to imprint the information in uniform, consistent patterns.

Cloned cards, however, usually are created by hand with inexpensive encoding machines, and as a result feature far more variance or “jitter” in the placement of digital bits on the card’s stripe.
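As a toy illustration of the statistical idea (not the researchers’ actual algorithm; the threshold and the sample numbers below are invented), one could score the spacing of flux transitions read from a stripe and flag cards whose spacing varies too much:

```python
import statistics

def jitter_score(transition_positions):
    """Coefficient of variation of the gaps between flux transitions:
    near zero for uniform, factory-written stripes; larger for hand-encoded ones."""
    gaps = [b - a for a, b in zip(transition_positions, transition_positions[1:])]
    return statistics.pstdev(gaps) / statistics.mean(gaps)

def looks_cloned(transition_positions, threshold=0.05):
    # Threshold is made up for illustration; a real system would calibrate it.
    return jitter_score(transition_positions) > threshold

factory = [i * 10 for i in range(20)]            # perfectly uniform spacing
assert not looks_cloned(factory)

clone = [0, 9, 21, 29, 42, 50, 63, 69, 82, 91]   # irregular spacing
assert looks_cloned(clone)
```

The real system presumably works on raw read-head signals rather than clean position lists, but the principle is the same: legitimate encoders leave a much tighter spacing distribution than cheap handheld ones.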

Gift cards can be extremely profitable and brand-building for retailers, but gift card fraud creates a very negative shopping experience for consumers and a costly conundrum for retailers. The FBI estimates that while gift card fraud makes up a small percentage of overall gift card sales and use, approximately $130 billion worth of gift cards are sold each year.

One of the most common forms of gift card fraud involves thieves tampering with cards inside the retailer’s store — before the cards are purchased by legitimate customers. Using a handheld card reader, crooks will swipe the stripe to record the card’s serial number and other data needed to duplicate the card.

If there is a PIN on the gift card packaging, the thieves record that as well. In many cases, the PIN is obscured by a scratch-off decal, but gift card thieves can easily scratch those off and then replace the material with identical or similar decals that are sold very cheaply by the roll online.

“They can buy big rolls of that online for almost nothing,” said Patrick Traynor, an associate professor of computer science at the University of Florida. “Retailers we’ve worked with have told us they’ve gone to their gift card racks and found tons of this scratch-off stuff on the ground near the racks.”

At this point the cards are still worthless because they haven’t yet been activated. But armed with the card’s serial number and PIN, thieves can simply monitor the gift card account at the retailer’s online portal and wait until the cards are paid for and activated at the checkout register by an unwitting shopper.

Once a card is activated, thieves can encode that card’s data onto any card with a magnetic stripe and use that counterfeit to purchase merchandise at the retailer. The stolen goods typically are then sold online or on the street. Meanwhile, the person who bought the card (or the person who received it as a gift) finds the card is drained of funds when they eventually get around to using it at a retail store.

The top two gift cards show signs that someone previously peeled back the protective sticker covering the redemption code. Image: Flint Gatrell.

Traynor and a team of five other University of Florida researchers partnered with retail giant WalMart to test their technology, which Traynor said can be easily and quite cheaply incorporated into point-of-sale systems at retail store cash registers. They said the WalMart trial demonstrated that researchers’ technology distinguished legitimate gift cards from clones with up to 99.3 percent accuracy.

While impressive, that rate still means the technology could generate a “false positive” — erroneously flagging a legitimate customer as using a fraudulently obtained gift card in a non-trivial number of cases. But Traynor said the retailers they spoke with in testing their equipment all indicated they would welcome any additional tools to curb the incidence of gift card fraud.

“We’ve talked with quite a few retail loss prevention folks,” he said. “Most said even if they can simply flag the transaction and make a note of the person [presenting the cloned card] that this would be a win for them. Often, putting someone on notice that loss prevention is watching is enough to make them stop — at least at that store. From our discussions with a few big-box retailers, this kind of fraud is probably their newest big concern, although they don’t talk much about it publicly. If the attacker does any better than simply cloning the card to a blank white card, they’re pretty much powerless to stop the attack, and that’s a pretty consistent story behind closed doors.”

BEYOND GIFT CARDS

Traynor said the University of Florida team’s method works even more accurately in detecting counterfeit ATM and credit cards, thanks to the dramatic difference in jitter between bank-issued cards and those cloned by thieves.

The magnetic material on most gift cards bears a quality that’s known in the industry as “low coercivity.” The stripe on so-called “LoCo” cards is usually brown in color, and new data can be imprinted on them quite cheaply using a machine that emits a relatively low or weak magnetic field. Hotel room keys also rely on LoCo stripes, which is why they tend to so easily lose their charge (particularly when placed next to something else with a magnetic charge).

In contrast, “high coercivity” (HiCo) stripes like those found on bank-issued debit and credit cards are usually black in color, hold their charge much longer, and are far more durable than LoCo cards. The downside of HiCo cards is that they are more expensive to produce, often relying on complex machinery and sophisticated manufacturing processes that encode the account data in highly uniform patterns.

These graphics illustrate the difference between original and cloned cards. Source: University of Florida.

Traynor said tests indicate their technology can detect cloned bank cards with virtually zero false-positives. In fact, when the University of Florida team first began seeing positive results from their method, they originally pitched the technique as a way for banks to cut losses from ATM skimming and other forms of credit and debit card fraud.

Yet, Traynor said fellow academicians who reviewed their draft paper told them that banks probably wouldn’t invest in the technology because most financial institutions are counting on newer, more sophisticated chip-based (EMV) cards to eventually reduce counterfeit fraud losses.

“The original pitch on the paper was actually focused on credit cards, but academic reviewers were having trouble getting past EMV — as in, ‘EMV solves this and it’s universally deployed, so why is this necessary?’” Traynor said. “We just kept getting reviews back from other academics saying that credit and bank card fraud is a solved problem.”

The trouble is that virtually all chip cards still store account data in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM and retail locations that are not yet equipped to read chip-based cards. As a result, even European countries whose ATMs all require chip-based cards remain heavily targeted by skimming gangs because the data on the chip card’s magnetic stripe can still be copied by a skimmer and used by thieves in the United States.

The University of Florida researchers recently were featured in an Associated Press story about an anti-skimming technology they developed and dubbed the “Skim Reaper.” The device, which can be made cheaply using a 3D printer, fits into the mouth of an ATM’s card acceptance slot and can detect the presence of extra card reading devices that skimmer thieves may have fitted on top of or inside the cash machine.

The AP story quoted a New York Police Department financial crimes detective saying the Skim Reapers worked remarkably well in detecting the presence of ATM skimmers. But Traynor said many ATM operators and owners are simply uninterested in paying to upgrade their machines with their technology — in large part because the losses from ATM card counterfeiting are mostly assumed by consumers and financial institutions.

“We found this when we were talking around with the cops in New York City, that the incentive of an ATM bodega owner to upgrade an ATM is very low,” Traynor said. “Why should they go to that expense? Upgrades required to make these machines [chip-card compliant] are significant in cost, and the motivation is not necessarily there.”

Retailers also could choose to produce gift cards with embedded EMV chips that make the cards more expensive and difficult to counterfeit. But doing so likely would increase the cost of manufacturing by $2 to $3 per card, Traynor said.

“Putting a chip on the card dramatically increases the cost, so a $10 gift card might then have a $3 price added,” he said. “And you can imagine the reaction a customer might have when asked to pay $13 for a gift card that has a $10 face value.”

A copy of the University of Florida’s research paper is available here (PDF).

The FBI has compiled a list of recommendations for reducing the likelihood of being victimized by gift card fraud. For starters, when buying in-store don’t just pick cards right off the rack. Look for ones that are sealed in packaging or stored securely behind the checkout counter. Also check the scratch-off area on the back to look for any evidence of tampering.

Here are some other tips from the FBI:

-If possible, only buy cards online directly from the store or restaurant.
-If buying from a secondary gift card market website, check reviews and only buy from or sell to reputable dealers.
-Check the gift card balance before and after purchasing the card to verify the correct balance on the card.
-The re-seller of a gift card is responsible for ensuring the correct balance is on the gift card, not the merchant whose name is listed. If you are scammed, some merchants in some situations will replace the funds. Ask for, but don’t expect, help.
-When selling a gift card through an online marketplace, do not provide the buyer with the card’s PIN until the transaction is complete.
-When purchasing gift cards online, be leery of auction sites selling gift cards at a steep discount or in bulk.

CryptogramCritical PGP Vulnerability

EFF is reporting that a critical vulnerability has been discovered in PGP and S/MIME. No details have been published yet, but one of the researchers wrote:

We'll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now.

This sounds like a protocol vulnerability, but we'll learn more tomorrow.

News articles.

CryptogramRay Ozzie's Encryption Backdoor

Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades.

Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.

Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them. Here's Rob Graham:

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

A bunch of us cryptographers have already explained why we don't think this sort of thing will work in the foreseeable future. We write:

Exceptional access would force Internet system developers to reverse "forward secrecy" design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

Finally, Matthew Green:

The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line an environment where we believe with high probability that the system will fail.

EDITED TO ADD (5/14): An analysis of the proposal.

Worse Than FailureCodeSOD: CONDITION_FAILURE

Oliver Smith sends this representative line:

bool long_name_that_maybe_distracted_someone()
{
  return (execute() ? CONDITION_SUCCESS : CONDITION_FAILURE);
}

Now, we’ve established my feelings on the if (condition) { return true; } else { return false; } pattern. This is just an iteration on that theme, using a ternary, right?

That’s certainly what it looks like. But Oliver was tracking down an unusual corner-case bug and things just weren’t working correctly. As it turns out, CONDITION_SUCCESS and CONDITION_FAILURE were both defined in the StatusCodes enum.

Screenshot of the intellisense which shows CONDITION_FAILURE defined as 2

Yep- CONDITION_FAILURE is defined as 2. The method returns a bool. Guess what happens when you coerce a non-zero integer into a boolean in C++? It turns into true. This method only ever returns true. Ironically, the calling method would then do its own check against the return value, looking to see if it were CONDITION_SUCCESS or CONDITION_FAILURE.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet Linux AustraliaClinton Roy: Actively looking for work

I am now actively looking for work, ideally something with Unix/C/Python in the research/open source/not-for-profit space. My long out of date resume has been updated.

,

Planet Linux AustraliaFrancois Marier: Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside of mythfrontend (over a vnc connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
...
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme engine:

mythtv-setup -O ThemePainter=qt

Sky CroeserMothering

Today I am thinking about mothering as a way in which we can make the world (in all its messiness and difficulty) better.

“Children are the ways that the world begins again and again. If you fasten upon that concept of their promise, you will have trouble finding anything more awesome, and also anything more extraordinarily exhilarating, than the opportunity or/and the obligation to nurture a child into his or her own freedom.” – June Jordan

Mothering is often treated by our society as an inherently conservative activity, something that’s about preserving the past (past traditions, past family structures, past values). But I’m learning from so many people (including people who aren’t biological mothers) who are knitting together strands from the past and hopes for the future.

Care for nature, for the world around us, for our mothers’ and grandmothers’ knowledge and experience. And dreams of more space for children to be who they want to be, to welcome and nurture others, to grow freely.

My mother and grandmother taught me so much, and still do. They are kind and fierce and have managed change and dislocation while always providing me with a steady point in the world.

My beautiful friends who are mothers teach me every day through their examples and their honesty about the difficult moments as well as the wonderful ones.

And I learn from mothers beyond my little circles, too.

From Noongar mothers, and other Aboriginal mothers who fought for recognition of the kidnapping of their children, and who are working today to build a society where their children will be safe and valued as they should be.

From Black mothers like June Jordan, Alexis Pauline Gumbs, and others in the ‘Revolutionary Mothering’ collection, which I return to again and again. They have done so much to help me understand other mothers’ experiences, and to see the possibilities and work that I should be taking up. And others, like Sylvia Federici, who have helped me see what I might not have, otherwise.

From mothers who must be brave enough to leave war or economic insecurity, hoping for safety, even though it also means leaving behind family and friends and home and the language and culture that has been held dear.

From mothers who work quietly and consistently and without recognition, from mothers who are sometimes difficult because of the work they do, from mothers who struggle with their own pasts, and who nevertheless keep trying to create the world anew, more full of love and possibility than before.

,

Planet Linux AustraliaMichael Still: Head On


A sequel to Lock In, this book is a quick and fun murder mystery. It has Scalzi’s distinctive style, which has generally meshed quite well for me, so it’s no surprise that I enjoyed this book.

 

Head On Book Cover Head On
John Scalzi
Fiction
Tor Books
April 19, 2018
336

To some left with nothing, winning becomes everything. In a post-virus world, a daring sport is taking the US by storm. It's frenetic, violent and involves teams attacking one another with swords and hammers. The aim: to obtain your opponent's head and carry it through the goalposts. Impossible? Not if the players have Hayden's Syndrome. Unable to move, Hayden's sufferers use robot bodies, which they operate mentally. So in this sport anything goes, no one gets hurt - and crowds and competitors love it. Until a star athlete drops dead on the playing field. But is it an accident? FBI agents Chris Shane and Leslie Vann are determined to find out. In this game, fortunes can be made - or lost. And both players and owners will do whatever it takes to win, on and off the field.

John Scalzi returns with Head On, a chilling near-future SF with the thrills of a gritty cop procedural. Head On brings Scalzi's trademark snappy dialogue and technological speculation to the future world of sports.


The post Head On appeared first on Made by Mikal.

Don MartiCan markets for intent data even be a thing?

Doc Searls is optimistic that surveillance marketing is going away, but what's going to replace it? One idea that keeps coming up is the suggestion that prospective buyers should be able to sell purchase intent data to vendors directly. This seems to be appealing because it means that the Marketing department will still get to have Big Data and stuff, but I'm still trying to figure out how voluntary transactions in intent data could even be a thing.

Here's an example. It's the week before Thanksgiving, and I'm shopping for a kitchen stove. Here are two possible pieces of intent information that I could sell.

  • "I'm cutting through the store on the way to buy something else. If a stove is on sale, I might buy it, but only if it's a bargain, because who needs the hassle of handling a stove delivery the week before Thanksgiving?"

  • "My old stove is shot, and I need one right away because I have already invited people over. Shut up and take my money."

On a future intent trading platform, what's my incentive to reveal which intent is the true one?

If I'm a bargain hunter, I'm willing to sell my intent information, because it would tend to get me a lower price. But in that case, why would any store want to buy the information?

If I need the product now, I would only sell the information for a price higher than the expected difference between the price I would pay and the price a bargain hunter would pay. But if the information isn't worth more than the price difference, why would the store want to buy it?

So how can a market for purchase intent data happen?

Or is the idea of selling access to purchase intent only feasible if the intent data is taken from the "data subject" without permission?

Anyway, I can see how search advertising and signal-based advertising can assume a more important role as surveillance marketing becomes less important, but I'm not sure about markets for purchase intent. Maybe user data sharing will be not so much a stand-alone thing but a role for trustworthy news and cultural sites, as people choose to share data as part of commenting and survey completion, and that data, in aggregated form, becomes part of a site's audience profile.

,

CryptogramFriday Squid Blogging: How the Squid Lost Its Shell

Squids used to have shells.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesWho Gets a Ticket?

The recent controversial arrests at a Philadelphia Starbucks, where a manager called the police on two Black men who had only been in the store a few minutes, are an important reminder that bias in the American criminal justice system creates both large scale, dramatic disparities and little, everyday inequalities. Research shows that common misdemeanors are a big part of this, because fines and fees can pile up on people who are more likely to be policed for small infractions.

A great example is the common traffic ticket. Some drivers who get pulled over get a ticket, while others get let off with a warning. Does that discretion shake out differently depending on the driver’s race? The Stanford Open Policing Project has collected data on over 60 million traffic stops, and a working paper from the project finds that Black and Hispanic drivers are more likely to be ticketed or searched at a stop than white drivers.

To see some of these patterns in a quick exercise, we pulled the project’s data on over four million stop records from Illinois and over eight million records from South Carolina. These charts are only a first look—we split the recorded outcomes of stops across the different codes for driver race available in the data and didn’t control for additional factors. However, they give a troubling basic picture about who gets a ticket and who drives away with a warning.


These charts show more dramatic disparities in South Carolina, but a larger proportion of white drivers who were stopped got off with warnings (and fewer got tickets) in Illinois as well. In fact, with millions of observations in each data set, differences of even a few percentage points can represent hundreds, even thousands of drivers. Think about how much revenue those tickets bring in, and who has to pay them. In the criminal justice system, the little things can add up quickly.

(View original at https://thesocietypages.org/socimages)

CryptogramAirline Ticket Fraud

New research: "Leaving on a jet plane: the trade in fraudulently obtained airline tickets:"

Abstract: Every day, hundreds of people fly on airline tickets that have been obtained fraudulently. This crime script analysis provides an overview of the trade in these tickets, drawing on interviews with industry and law enforcement, and an analysis of an online blackmarket. Tickets are purchased by complicit travellers or resellers from the online blackmarket. Victim travellers obtain tickets from fake travel agencies or malicious insiders. Compromised credit cards used to be the main method to purchase tickets illegitimately. However, as fraud detection systems improved, offenders displaced to other methods, including compromised loyalty point accounts, phishing, and compromised business accounts. In addition to complicit and victim travellers, fraudulently obtained tickets are used for transporting mules, and for trafficking and smuggling. This research details current prevention approaches, and identifies additional interventions, aimed at the act, the actor, and the marketplace.

Blog post.

Worse Than FailureError'd: Kind of...but not really

"On occasion, SQL Server Management Studio's estimates can be just a little bit off," writes Warrent B.

 

Jay D. wrote, "On the surface, yeah, it looks like a good deal, but you know, pesky laws of physics spoil all the fun."

 

"When opening a new tab in Google Chrome I saw a link near the bottom of the screen that suggested I 'Explore the world's iconic locations in 3D'," writes Josh M., "Unfortunately, Google's API felt differently."

 

Stuart H. wrote, "I think I might have missed out on this deal, the clock was counting up, no I mean down, I mean negative AHHHH!"

 

"Something tells me this site's programmer is learning how to spell the hard(est) way," Carl W. writes.

 

"Why limit yourself with one particular resource of the day when you can substitute any resource you want," wrote Ari S.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaBlueHackers: Vale Janet Hawtin Reid

Janet Hawtin Reid (@lucychili) sadly passed away last week.

A mutual friend called me earlier in the week to tell me, for which I’m very grateful.  We both appreciate that BlueHackers doesn’t ever want to be a news channel, so I waited writing about it here until other friends, just like me, would have also had a chance to hear via more direct and personal channels. I think that’s the way these things should flow.

Knitted Moomin troll, by Janet Hawtin Reid

I knew Janet as a thoughtful person, with strong opinions particularly on openness and inclusion.  And as an artist and generally creative individual, a lover of nature.  In recent years I’ve also seen her produce the most awesome knitted Moomins.

Short diversion as I have an extra connection with the Moomin stories by Tove Jansson: they have a character called My, after whom Monty Widenius’ eldest daughter is named, which in turn is how MySQL got named.  I used to work for MySQL AB, and I’ve known that My since she was a little smurf (she’s an adult now).

I’m not sure exactly when I met Janet, but it must have been around 2004 when I first visited Adelaide for Linux.conf.au.  It was then also that Open Source Industry Australia (OSIA) was founded, for which Janet designed the logo.  She may well have been present at the founding meeting in Adelaide’s CBD, too.

OSIA logo, by Janet Hawtin Reid

Anyhow, Janet offered to do the logo in a conversation with David Lloyd, and things progressed from there. On the OSIA logo design, Janet wrote:

I’ve used a star as the current one does [an earlier doodle incorporated the Southern Cross]. The 7 points for 7 states [counting NT as a state]. The feet are half facing in for collaboration and half facing out for being expansive and progressive.

You may not have realised this as the feet are quite stylised, but you’ll definitely have noticed the pattern-of-7, and the logo as a whole works really well. It’s a good looking and distinctive logo that has lasted almost a decade and a half now.

Linux Australia logo, by Janet Hawtin Reid

As Linux Australia’s president Kathy Reid wrote, Janet also helped design the ‘penguin feet’ logo that you see on Linux.org.au.  Just reading the above (which I just retrieved from a 2004 email thread) there does seem to be a bit of a feet-pattern there… of course the explicit penguin feet belong with the Linux penguin.

So, Linux Australia and OSIA actually share aspects of their identity (feet with a purpose), through their respective logo designs by Janet!  Mind you, I only realised all this when looking through old stuff while writing this post, as the logos were done at different times and only a handful of people have ever read the rationale behind the OSIA logo until now.  I think it’s cool, and a fabulous visual legacy.

Fir tree in clay, by Janet Hawtin Reid. Done in “EcoClay”, brought back to Adelaide from OSDC 2010 (Melbourne) by Kim Hawtin, Janet’s partner.

Which brings me to a related issue that’s close to my heart, and I’ve written and spoken about this before.  We’re losing too many people in our community – where, in case you were wondering, too many is defined as >0.  Just like in a conversation on the road toll, any number greater than zero has to be regarded as unacceptable. Zero must be the target, as every individual life is important.

There are many possible analogies with trees as depicted in the above artwork, including the fact that we’re all best enabled to grow further.

Please connect with the people around you.  Remember that connecting does not necessarily mean talking per-se, as sometimes people just need to not talk, too.  Connecting, just like the phrase “I see you” from Avatar, is about being thoughtful and aware of other people.  It can just be a simple hello passing by (I say hi to “strangers” on my walks), a short email or phone call, a hug, or even just quietly being present in the same room.

We all know that you can just be in the same room as someone, without explicitly interacting, and yet feel either connected or disconnected.  That’s what I’m talking about.  Aim to be connected, in that real, non-electronic, meaning of the word.

If you or someone you know needs help or talk right now, please call 1300 659 467 (in Australia – they can call you back, and you can also use the service online).  There are many more resources and links on the BlueHackers.org website.  Take care.

Planet Linux AustraliaDavid Rowe: FreeDV 700D Part 4 – Acquisition

Since 2012 I have built a series of modems (FDMDV, COHPSK, OFDM) for HF Digital voice. I always get stuck on “acquisition” – demodulator algorithms that acquire and lock onto the received signal. The demod needs to rapidly estimate the frequency offset and “coarse” timing – the position where the modem frame starts in the sequence of received samples.

For my application (Digital Voice over HF), it’s complicated by the low SNR and fading HF channels, and the requirement for fast sync (a few hundred ms). For Digital Voice (DV) we need something fast enough to emulate Push To Talk (PTT) operation. In comparison HF data modems have it easy – they can take many lazy seconds to synchronise.

The latest OFDM modem has been no exception. I’ve spent several weeks messing about with acquisition algorithms to get half decent performance. Still some tuning to do but for my own sanity I think I’ll stop development here for now, write up the results, and push FreeDV 700D out for general consumption.

Acquisition and Sync Requirements

  1. Sync up quickly (a few 100ms) with high SNR signals.
  2. Sync up eventually (a few seconds is OK) for low SNR signals over poor channels. Sync eventually is better than none on channels where even SSB is struggling.
  3. Detect false sync and get out of it quickly. Don’t stay stuck in a false sync state forever.
  4. Hang onto sync through fades of a few seconds.
  5. Assume the operator can tune to within +/- 20Hz of a given frequency.
  6. Assume the radio drifts no more than +/- 0.2Hz/s (12 Hz a minute).
  7. Assume the sample clock offset (difference in ADC/DAC sample rates) is no more than 500ppm.

Actually the last three aren’t really requirements, it’s just what fell out of the OFDM modem design when I optimised it for low SNR performance on HF channels! The frequency stability of modern radios is really good; sound card sample clock offset less so but perhaps we can measure that and tell the operator if there is a problem.

Testing Acquisition

The OFDM modem sends pilot (known) symbols every frame. The demodulator correlates (compares) the incoming signal with the pilot symbol sequence. When it finds a close match it has a coarse timing candidate. It can then try to estimate the frequency offset. So we get a coarse timing estimate, a metric (called mx1) that says how close the match is, and a frequency offset estimate.

Estimating frequency offsets is particularly tricky, I’ve experienced “much wailing and gnashing of teeth” with these nasty little algorithms in the past (stop laughing Matt). The coarse timing estimator is more reliable. The problem is that if you get an incorrect coarse timing or frequency estimate the modem can lock up incorrectly and may take several seconds, or operator intervention, before it realises its mistake and tries again.

I ended up writing a lot of GNU Octave functions to help develop and test the acquisition algorithms in ofdm_dev.

For example the function below runs 100 tests, measures the timing and frequency error, and plots some histograms. The core demodulator can cope with about +/- 1.5Hz of residual frequency offset and a few samples of timing error. So we can generate probability estimates from the test results. For example if we do 100 tests of the frequency offset estimator and 50 are within 1.5Hz of being correct, then we can say we have a 50% (0.5) probability of getting the correct frequency estimate.

octave:1> ofdm_dev
octave:2> acquisition_histograms(fin_en=0, foff_hz=-15, EbNoAWGN=-1, EbNoHF=3)
AWGN P(time offset acq) = 0.96
AWGN P(freq offset acq) = 0.60
HF P(time offset acq) = 0.87
HF P(freq offset acq) = 0.59

Here are the histograms of the timing and frequency estimation errors. These were generated using simulations of noisy HF channels (about 2dB SNR):


The x axis of timing is in samples, x axis of freq in Hz. They are both a bit biased towards positive errors. Not sure why. This particular test was with a frequency offset of -15Hz.

Turns out that as the SNR improves, the estimators do a better job. The next function runs a bunch of tests at different SNRs and frequency offsets, and plots the acquisition probabilities:

octave:3> acquisition_curves




The timing estimator also gives us a metric (called mx1) that indicates how strong the match was between the incoming signal and the expected pilot sequence. Here is a busy little plot of mx1 against frequency offset for various Eb/No (effectively SNR):

So as Eb/No increases, the mx1 metric tends to get bigger. It also falls off as the frequency offset increases. This means sync is tougher at low Eb/No and larger frequency offsets. The -10dB value was thrown in to see what happens with pure noise and no signal at the input. We’d prefer not to sync up to that. Using this plot I set the threshold for a valid signal at 0.25.

Once we have a candidate time and freq estimate, we can test sync by measuring the number of bit errors in a set of 10 Unique Word (UW) bits spread over the modem frame. Unlike the payload data in the modem frame, these bits are fixed, and known to the transmitter and receiver. In my initial approach I placed the UW bits right at the start of the modem frame. However I discovered a problem – with certain frequency offsets (e.g. multiples of the modem frame rate like +/- 6Hz) – it was possible to get a false sync with no UW errors. So I messed about with the placement of the UW bits until I had a UW that would not give any false syncs at any incorrect frequency offset. To test the UW I wrote another script:

octave:4> debug_false_sync

Which outputs a plot of UW errors against the residual frequency offset:

Note how at any residual frequency offset other than -1.5 to +1.5 Hz there are at least two bit errors. This allows us to reliably detect a false sync due to an incorrect frequency offset estimate.

State Machine

The estimators are wrapped up in a state machine to control the entire sync process:

  1. SEARCHING: look at a buffer of incoming samples and estimate timing, freq, and the mx1 metric.
  2. If mx1 is big enough, we jump to TRIAL.
  3. TRIAL: measure the number of Unique Word bit errors for a few frames. If they are bad this is probably a false sync so jump back to SEARCHING.
  4. If we get a low number of Unique Word errors for a few frames it’s high fives all round and we jump to SYNCED.
  5. SYNCED: We put up with up to two seconds of high Unique Word errors, as this is life on a HF channel. More than two seconds, and we figure the signal is gone for good so we jump back to SEARCHING.

Reading Further

HF Modem Frequency Offset Estimation, an earlier look at freq offset estimation for HF modems
COHPSK and OFDM waveform design spreadsheet
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
README_ofdm.txt, including specifications of the OFDM modem.

,

CryptogramSupply-Chain Security

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It's a legitimate fear, and perhaps a prudent action. But it's just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add "backdoors" to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It's relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn't the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: "Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems."

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other Israeli tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claimed to be free of Western influence and backdoors. If a country doesn't trust another country, then it can't trust that country's computer products.

But this trust isn't limited to the country where the company is based. We have to trust the country where the software is written -- and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips, without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today's large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries' citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn't an option; the tech world is far too internationally interdependent for that. We can't trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so they have that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don't know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don't know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It's doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can't buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we're largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn't get too far out of control.

This essay previously appeared in the Washington Post.

Worse Than FailureCodeSOD: A Quick Replacement

Lucio Crusca was doing a bit of security auditing when he found this pile of code, and it is indeed a pile. It is PHP, which doesn’t automatically make it bad, but it makes use of a feature of PHP so bad that they’ve deprecated it in recent versions: the create_function method.

Before we even dig into this code, the create_function method takes a string, runs eval on it, and returns the name of the newly created anonymous function. Prior to PHP 5.3.0 this was their method of doing lambdas. And while the function is officially deprecated as of PHP 7.2.0… it’s not removed. You can still use it. And I’m sure a lot of code probably still does. Like this block…

        public static function markupToPHP($content) {
                if ($content instanceof phpQueryObject)
                        $content = $content->markupOuter();
                /* <php>...</php> to <?php...? > */
                $content = preg_replace_callback(
                        '@<php>\s*<!--(.*?)-->\s*</php>@s',
                        array('phpQuery', '_markupToPHPCallback'),
                        $content
                );
                /* <node attr='< ?php ? >'> extra space added to save highlighters */
                $regexes = array(
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(\')([^\']*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^\']*)\'@s',
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(")([^"]*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^"]*)"@s',
                );
                foreach($regexes as $regex)
                        while (preg_match($regex, $content))
                                $content = preg_replace_callback(
                                        $regex,
                                        create_function('$m',
                                                'return $m[1].$m[2].$m[3]."<?php "
                                                        .str_replace(
                                                                array("%20", "%3E", "%09", "&#10;", "&#9;", "%7B", "%24", "%7D", "%22", "%5B", "%5D"),
                                                                array(" ", ">", "       ", "\n", "      ", "{", "$", "}", \'"\', "[", "]"),
                                                                htmlspecialchars_decode($m[4])
                                                        )
                                                        ." ?>".$m[5].$m[2];'
                                        ),
                                        $content
                                );
                return $content;
        }

From what I can determine from the comments and the code, this is taking some arbitrary content in the form <php>PHP CODE HERE</php> and converting it to <?php PHP CODE HERE ?>. I don’t know what happens after this function is done with it, but I’m already terrified.

The inner loop fascinates me. while (preg_match($regex, $content)) implies that we need to call the replace function multiple times, but preg_replace_callback by default replaces all instances of the matching regex, so there’s absolutely no reason for the while loop. Then, of course, there's the use of create_function, which is itself a WTF, but it’s also worth noting that there’s no need to do this dynamically- you could just as easily have declared a callback function like they did above with _markupToPHPCallback.
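The same point holds in most regex engines: a callback-style replace already handles every non-overlapping match in a single pass. A quick illustration (in Python rather than PHP, purely so the sketch is self-contained; the pattern and callback are made up, not taken from the code above):

```python
import re

# Like PHP's preg_replace_callback, re.sub with a function callback
# replaces every non-overlapping match in one call, so wrapping it in a
# "while the pattern still matches" loop is redundant.

def upcase(match):
    # A plain named function works fine as the callback -- no need to
    # build one dynamically from a string at runtime.
    return match.group(0).upper()

text = "<php>a</php> and <php>b</php>"
once = re.sub(r"<php>.*?</php>", upcase, text)
# Both occurrences were handled in the single call:
assert once == "<PHP>A</PHP> and <PHP>B</PHP>"
assert re.search(r"<php>.*?</php>", once) is None  # nothing left to loop over
```

The only time a loop like theirs would matter is if a replacement could itself produce new matches, which is not the case here.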

Lucio adds:

I was looking for potential security flaws: well, I’m not sure this is actually exploitable, because even black hats have limited patience!


,

Geek FeminismInformal Geek Feminism get-togethers, May and June

Some Geek Feminism folks will be at the following conferences and conventions in the United States over the next several weeks, in case contributors and readers would like to have some informal get-togethers to reminisce and chat about inheritors of the GF legacy:

If you’re interested, feel free to comment below, and to take on the step of initiating open space/programming/session organizing!

CryptogramVirginia Beach Police Want Encrypted Radios

This article says that the Virginia Beach police are looking to buy encrypted radios.

Virginia Beach police believe encryption will prevent criminals from listening to police communications. They said officer safety would increase and citizens would be better protected.

Someone should ask them if they want those radios to have a backdoor.

Krebs on SecurityThink You’ve Got Your Credit Freezes Covered? Think Again.

I spent a few days last week speaking at and attending a conference on responding to identity theft. The forum was held in Florida, one of the major epicenters for identity fraud complaints in the United States. One gripe I heard from several presenters was that identity thieves increasingly are finding ways to open new mobile phone accounts in the names of people who have already frozen their credit files with the big-three credit bureaus. Here’s a look at what may be going on, and how you can protect yourself.

Carrie Kerskie is director of the Identity Fraud Institute at Hodges University in Naples. A big part of her job is helping local residents respond to identity theft and fraud complaints. Kerskie said she’s had multiple victims in her area recently complain of having cell phone accounts opened in their names even though they had already frozen their credit files at the big three credit bureaus: Equifax, Experian and Trans Union (as well as the distant fourth bureau Innovis).

The freeze process is designed so that